

Oliver Schuermann, Juniper Networks | RSAC USA 2020


 

>> Announcer: Live from San Francisco, it's theCUBE, covering RSA Conference 2020 San Francisco, brought to you by SiliconANGLE Media. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. It's Thursday, day four of the RSA Show here in Moscone in San Francisco. It's a beautiful day outside, but the show is still going, 40,000-plus people. A couple of challenges with the coronavirus and some other things going on, but everybody's here, everybody's staying the course, and I think that's really a good message going forward as to what's going to happen in the show season. We go to a lot of shows. Is 2020 the year we're going to know everything, with the benefit of hindsight? It's not quite working out that way so far, but we're bringing in the experts to share the knowledge, and we're excited for our next guest, who's going to help us get to the answers. He's Oliver Schuermann, senior director, Enterprise Product Marketing for Juniper Networks. Oliver, great to see you. >> Thanks for having me. >> Absolutely, so first off, just general impressions of the show. I'm sure you've been coming here for a little while. >> We have, and I think the show's going very well. As you pointed out, there are a couple of challenges around, but I think everybody's staying strong, pushing through, and really driving the agenda of security. >> So I've got some interesting quotes from you from doing a little research for this segment. You said 2019 was the year of enforcement, but 2020 is the year of intelligence. What did you mean by that? >> Specifically, it's around Juniper. We have a Juniper Connected Security message and strategy that we proved out last year by increasing the ability to enforce on all of your infrastructure without having to rip and replace technologies.
For instance, on our widely rolled out MX routing platform, we offer SecIntel to block things like command-and-control traffic, and on our switching line for campus and data centers, we prevent lateral threat propagation with SecIntel, allowing you to block hosts as they're infected. And as we rounded that out a little bit in 2020, we were able to deliver that on Mist, the wireless acquisition that we did around this time last year, showing the integration of that product portfolio. >> Yeah, we met Bob Friday from Mist. >> Oliver: Excellent. >> He, doing the AI, some of the ethics around AI. >> Oliver: Sure. >> At your guys' conference last year. It was a pretty interesting conversation. Let's break down what you said a little bit deeper. So you're talking about, inside your own product suite, managing threats once they get to that level, to keep things clean across that first layer of defense. >> Right, well, I mean, whether you're a good packet or a bad packet, you have to traverse the network to be interesting. We've all put our phones in airplane mode at Black Hat or events like that because we don't want anybody on them, but they're really boring when they're offline, and they're also really boring to attackers when they're offline. As soon as you turn them on, you have a problem, or could have a problem, but as things traverse the network, what better place to see who and what's on your network than on the gear, and at the end of the day, we're able to provide that visibility, we're able to provide that enforcement, so as you mentioned, 2020 is now the year of awareness for us, the Threat Aware Network.
We're able to do things like look at encrypted traffic, and do heuristics and analysis to figure out, should that even be on my network? Because if you bring it into a network and you have to decrypt it, a, there are privacy concerns with that in these times, but also, it's computationally expensive to do, so it becomes a challenge from both a financial perspective as well as a compliance perspective. We're helping solve that so you can offload that traffic and be able to ensure your network's secure. >> So is that relatively new? And I apologize, I'm not deep into the weeds of feature functionality, but that sounds pretty interesting, that you can actually start to do the analysis without decrypting the data, and get some meaningful, insightful information. >> Absolutely, we actually announced it on Monday at 4:45 a.m. Pacific, so it is new. >> Brand new. >> Yes. >> And what's the secret sauce to be able to do that? Because one would think that, as a rule, encryption would eliminate the ability to really do the analysis, so what analysis can you still do while keeping the data encrypted? >> You're absolutely right. We're seeing 70 to 80% of internet traffic is now encrypted. Furthermore, bad actors are using that to obfuscate themselves, obviously, and the magic to that, though, to look at it without having to crack open the packet, is using things like heuristics that look at connections per second, or connection patterns, or looking at significant exchanges, or even IP addresses, to know this is not something you want to let in. We're seeing a very high rate of success blocking things like IoT botnets, for instance, so you'll be seeing more and more of that from us throughout the year, but this is the initial step that we're taking.
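The metadata-only approach described above can be sketched in a few lines. This is a minimal illustration, not Juniper's actual implementation: the feature names, thresholds, and the threat-intel list are all hypothetical stand-ins for the kinds of signals mentioned (connections per second, connection patterns, known-bad IP addresses).

```python
# Sketch: flagging suspicious encrypted flows from connection metadata
# alone, without decrypting any payloads. Feature names and thresholds
# are illustrative only.

BAD_IPS = {"203.0.113.7"}  # hypothetical threat-intel feed entry (TEST-NET address)

def flag_flow(flow: dict) -> bool:
    """Return True if flow metadata looks like C2/botnet behavior."""
    # Beaconing pattern: many short, small connections per second
    if flow["connections_per_second"] > 10 and flow["avg_bytes"] < 500:
        return True
    # Destination matches a known-bad IP list
    if flow["dst_ip"] in BAD_IPS:
        return True
    return False

flows = [
    {"dst_ip": "198.51.100.2", "connections_per_second": 40, "avg_bytes": 120},
    {"dst_ip": "198.51.100.9", "connections_per_second": 0.2, "avg_bytes": 90000},
]
print([flag_flow(f) for f in flows])  # → [True, False]
```

The point of the sketch is that nothing here touches the payload: the privacy and compute cost of decryption are avoided because only observable connection behavior is scored.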
>> Right, that's great, because so much of it, it sounds like, a, a lot of it's being generated by machines, but two, it sounds like the profile of the attacks keeps changing quite a bit, from concentrated attacks to, it sounds like now, everyone doing the slow creeper to try to get in under the covers. >> Right, and really, you're using your network to its full extent. I mean, a lot of the things that we're doing, including encrypted traffic analysis, are additional features on our platform, so that comes with what you already have. So rather than walking in and saying, "Buy my suite of products, this will solve all your problems," as other vendors have done for the past 10, 20 years, and it's never worked, why not add to things that you already have, so you're able to amortize your assets, build your best-of-breed security, and do it within a multi-vendor environment, but also do it with your infrastructure. >> Right. So I want to shift gears a little bit. Doing some research before you got on, you've always been a technical lead. You've been doing technical lead roles, you've had a whole bunch of them, and we don't have internet, unfortunately, here, so I can't read them off. >> Oliver: That's fine. >> But now you've switched over, you've put the marketing hat on. I'm just curious about the different, softer, squishier challenge of trying to take the talent that you have, the technical definitions that you have, the detailed compute and stuff you're doing around things like you just described, and now putting the marketing hat on, trying to get that message out to the market, help people understand what you're trying to do, and break through, quite frankly, some crazy noise, as we're sitting here surrounded by hundreds, if not thousands, of vendors. >> I think that's really the key, and yes, I've been a technical lead. I've run architecture teams.
I've run development teams, and really, from a marketing perspective, it's to ensure that we're delivering a message that the market will consume, and that is actually based in reality. I think a lot of times you see a lot of products that are put together with duct tape, baling twine, et cetera, but that also have a great PowerPoint that makes them look good. But from a go-to-market perspective, whether it's your sellers, meaning the sellers that work for Juniper, whether it's our partners, whether it's our customers, they have to believe in what's out there, and if it's tried and true, and we understand it from an engineering perspective, we can say it's not a marketecture, it's a strategy. >> Right. >> That really makes a difference, and we're really seeing that if you look at our year-over-year growth in security, if you look at what analysts are saying, if you look at what testing houses are saying about our product, that Juniper's back, and that's why I'm in this spot. >> And it really begs for a deeper relationship with the customer, where you're not selling them a one-off marketecture slide, you're not offering a quick point solution that's suddenly put together, but really having this trusted, ongoing relationship that's going to evolve over time. The products are going to evolve over time because the threats are evolving over time, right? >> Absolutely, and to help them get more out of what they already have. And from a go-to-market perspective, our partners have an addressable market that comes naturally through the install base that we have, and we're able to provide additional value and services to those customers that may want to lean on a partner to actually build some of these solutions for them. >> All right, well, Oliver, thanks for stopping by. I'm glad I'm not too late on the encrypted analysis game, so just a couple of days. >> Absolutely. >> Thanks for stopping by.
Best to you, and good luck with 2020, the year we'll know everything. >> Absolutely, thanks for having me. >> All right, he's Oliver, I'm Jeff, you're watching theCUBE. We're at RSA 2020 here in Moscone. Thanks for watching. We'll see you next time. (gentle electronic music)

Published Date : Feb 28 2020



Around theCUBE, Unpacking AI | Juniper NXTWORK 2019


 

>> Announcer: From Las Vegas, it's theCUBE, covering NXTWORK 2019 Americas, brought to you by Juniper Networks. >> Welcome back, everybody, Jeff here with theCUBE. We're in Las Vegas at Caesars at the Juniper NXTWORK event, about 1,000 people, kind of going over a lot of new, cool things. 400 gig, who knew that was coming? A lot of new information for me, but that's not why we're here today. We're here for the fourth installment of Around theCUBE, Unpacking AI. We're happy to have all the winners of the three previous rounds here in the same place, so we don't have to do it over the phone, so we're happy to have them. Let's jump into it. So winner of round one was Bob Friday. He is the VP and CTO at Mist, a Juniper company. Bob, great to see you. >> Good to be back. >> Absolutely, all the way from Seattle, Sharna Parky. She's a VP applied scientist at Textio. Good to see you, Sharna. And, from Google, we know a lot of AI happens at Google, Rajen Sheth. He is the VP of AI product management at Google. Welcome. >> Thank you. >> All right, so let's jump into it. Just to warm everybody up, we'll start with you, Bob. When you're talking to someone at a cocktail party on a Friday night, or talking to your mom, and they say, "What is AI?", what do you give them as examples of where AI is impacting our lives today? >> Well, I think we all know the examples of the self-driving car, you know, AI starting to help our healthcare industry diagnose cancer. For me personally, I had kind of a weird experience last week at a retail technology event, where they basically had these new digital mirrors doing facial recognition, and basically, you start to have these little mirrors, which are going to be a little skeevy, start guessing, hey, you have a beard, you have some glasses, and they start calling me old. So this is kind of very personal. I have something for you, Camille, but, eh? I go walking down a mall with a bunch of mirrors calling me old. >> That's a little annoying.
Did it bring up, like, a cane or a walker? You know, you start getting some advertisements that are like, okay, you guys, this is a little bit over the top. >> All right, Sharna, what about you? What's your favorite example to share with people? >> Yeah, I think one of my favorite examples of AI is kind of accessible on your phone, where the photos you take on an iPhone, the photos you put in Google Photos, they're automatically detecting the faces and labeling them for you. They're like, here's selfies, here's your family, here's your children. And, you know, that's the most successful one of the ones that I think people don't really think about a lot. Or things like getting loan applications, right? We actually have AI deciding whether or not we get loans, and that one is probably the most interesting one to me right now. >> Rajen? >> So I think the photos example is probably my favorite as well, and what's interesting to me is that really, AI is actually not about the AI, it's about the user experience that you can create as a result of AI. What's cool about Google Photos is that my entire family uses Google Photos, and they don't even know that underlying it is some of the most powerful AI in the world. But what they know is they can find every picture of our kids on the beach whenever they want to. Or, you know, we had a great example where, with our kids, every time they like something in the store, we take a picture of it, and we can look up "toy" and actually find everything that they've taken a picture of. >> It's interesting, because I think most people don't even know the power that they have. Because if you search for beach in your Google Photos, or you search for, uh, I was looking for an old bug picture from my high school, there it came, right up. You don't know until you kind of explore.
You know, it's pretty tricky. Rajen, you know, I think a lot of conversation about AI always focuses on general purpose, general purpose machines and robots and computers, but people don't really talk about the applied AI that's happening all around. Why do you think that is? >> So it's a good question. There's a lot more talk about kind of general purpose, but the reality of where this has an impact right now is those specific use cases. And so, for example, things like personalizing customer interaction, or spotting trends that you wouldn't have spotted, or turning unstructured data like documents into structured data, that's where AI is actually having an impact right now, and I think it really boils down to getting to the right use cases where AI is right. >> Sharna, I want to ask you, you know, there's a lot of conversation: will AI replace people, or is it an augmentation for people? We had Garry Kasparov on a couple years ago, and he talked about, you know, it was the combination of he plus the computer that made the best chess player, but that quickly went away. Now the computer is actually better than Garry Kasparov plus the computer. How should people think about AI as an augmentation tool versus a replacement tool? Is it just going to be specific to the application? How do you kind of think about those? >> Yeah, I would say that any application where you're making life-and-death decisions, where you're making financial decisions that disadvantage people, anything where, you know, you've got UAVs and you're deciding whether or not to actually drop the bomb, you need a human in the loop.
If you're trying to change the words that you are using to get a different group of people to apply for jobs, you need a human in the loop, because it turns out that, for the example of beach, you type sheep into your phone and you might get just a field, a green field, and AI doesn't know that, if it's always seen sheep in a field, that when the sheep aren't there, that isn't a sheep. It doesn't have that kind of recognition to it. So anything where we're making decisions about parole, or financial decisions, anything like that needs to have a human in the loop, because those types of decisions are changing fundamentally the way we live. >> Great. So, shifting gears. Okay, team, I may have been a little light on my bell, so I'll be more active on the bell. Sorry about that. Everyone's even, we're starting at zero again. So I want to shift gears and talk about data sets. Bob, you were up on stage demoing some of your technology, the Mist technology, and really, you know, it's an interesting combination of data sets. AI in its current form needs a lot of data. Again, kind of the classic chihuahua-or-blueberry-muffin photos, you've got to run a lot of them through. How do you think about data sets in terms of having the right data, and a complete data set, to drive an algorithm? >> Yeah, I think we all know data sets were one of the tipping points for AI to become more real, right, along with cloud computing and storage. Data is really one of the key points of making AI real. My example on stage was wine, right? Great wine starts with a great grape. For us, AI starts with great data. LSTM is an example in our networking space, where we have data for the last three months from our customers, and we're using the last 30 days to really train these LSTM algorithms to get that anomaly detection to the point where we don't have false positives. >> How much of the training is done.
Once you've gone through the data a couple of times and adjusted, versus when you first started, you're not really sure how it's going to shake out in the algorithm. >> Yeah. So in our case right now, training happens every night. Every night, we're basically retraining those models to be able to predict if there's going to be an anomaly on the network, you know? And this is really an example, like all these other cat-image things; this is where these neural networks really were one of the transformational things that moved AI into the reality column. And it's starting to impact all our different industries. Whether it's text or imaging, the networking world is an example where even AI and deep learning are really starting to impact our networking customers. >> Sharna, I want to go to you. What do you do if you don't have a big data set? You don't have a lot of pictures of chihuahuas and blueberry muffins, and I want to apply some machine intelligence to the problem. >> I mean, so you need to have the right data set. You know, big is a relative term, and it depends on what you're using it for, right? So you can have a massive amount of data that represents solar flares, and then you're trying to detect some anomaly, right? If you train an AI on what normal is based upon a massive amount of data, and you don't have enough examples of that anomaly you're trying to detect, then it's never going to say there's an anomaly there, so you actually need to oversample. You have to create a population of data that allows you to detect images. You can't say, um, oh, I'm going to reflect in my data set the percentage of black women in Seattle, which is something below 6%, and say it's fair. It's not, right? You have to be able to oversample things that you need, and in some ways you can get this through surveys, you can get it through actually going to different sources.
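The oversampling idea described here can be shown in a tiny, self-contained sketch. The class counts are made up for illustration; real pipelines would use a library routine, but the mechanics are the same: duplicate the rare class until the model sees it often enough.

```python
# Sketch: naive random oversampling of a rare class so a classifier
# actually sees enough anomaly examples. Counts are illustrative.
import random

random.seed(0)
normal = [{"label": "normal"}] * 990
anomaly = [{"label": "anomaly"}] * 10  # badly under-represented

# Duplicate the rare examples until the classes are balanced
oversampled = anomaly * (len(normal) // len(anomaly))
train_set = normal + oversampled
random.shuffle(train_set)

counts = {"normal": 0, "anomaly": 0}
for row in train_set:
    counts[row["label"]] += 1
print(counts)  # → {'normal': 990, 'anomaly': 990}
```

As the conversation notes, simply mirroring the real-world base rate ("below 6%") can leave the model unable to detect the group or event you care about; rebalancing the training population is one blunt but common fix.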
But you have to bootstrap it in some way, and then you have to refresh it, because if you leave that data set static, like Bob mentioned, people are changing the way they do attacks on networks all the time, and so you may have been able to find the one yesterday, but today it's a completely different ball game. >> Rajen, to you, which comes first, the chicken or the egg? Do you start with the data, and I say this is a ripe opportunity to apply some AI? Or do you have some AI objectives that you want to achieve, and I've got to go out and find the data? >> So I actually think where it starts is the business problem you're trying to solve, and then from there, you need to have the right data. What's interesting about this is that you can actually have starting points. And so, for example, there are techniques around transfer learning, where you're able to take an algorithm that's already been trained on a bunch of data and train it a little bit further with your data. And so we've seen that such that people that may have, for example, only 100 images of something can use a model that's trained on millions of images and only use those 100 to create something that's actually quite accurate. >> So that's a great segue. Give me a ring on that. It's a great segue into talking about taking an algorithm that was built around one data set and then applying it to a different data set. Is that appropriate? Is that correct? Are you risking all kinds of interesting problems by taking that and applying it here, especially in light of people going out to the marketplaces, because I don't have a data scientist, so I'll go get one in the marketplace and apply it to my data? How should people be careful not to make a bad decision based on that? >> So I think it really depends, and it depends on the type of machine learning that you're doing and what type of data you're talking about.
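The transfer-learning idea can be illustrated in miniature. A minimal sketch, with obvious simplifications: the "pretrained backbone" is a frozen toy feature function standing in for a network trained on millions of images, and the "fine-tuned head" is just a nearest-centroid classifier fit on 100 labeled points.

```python
# Sketch: transfer learning in miniature. A frozen "pretrained" feature
# extractor plus a tiny nearest-centroid head fit on only 100 examples.
def pretrained_features(x):
    # Stand-in for a big backbone trained on lots of data (kept frozen).
    return (x, x * x)

# 100 labeled points; label is 1 when x > 0.5
labeled = [(i / 100, 1 if i / 100 > 0.5 else 0) for i in range(100)]

def centroid(points):
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(2))

# "Fine-tune": compute one centroid per class in the frozen feature space
c0 = centroid([pretrained_features(x) for x, y in labeled if y == 0])
c1 = centroid([pretrained_features(x) for x, y in labeled if y == 1])

def predict(x):
    f = pretrained_features(x)
    d0 = sum((a - b) ** 2 for a, b in zip(f, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(f, c1))
    return 1 if d1 < d0 else 0

print(predict(0.9), predict(0.1))  # → 1 0
```

The design point matches what's said above: the expensive representation is learned once on a large corpus, and only a small, cheap component is fit to the 100 local examples.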
So, for example, with images, there are well-known techniques to be able to do this, but with other things, there aren't really, and so it really depends. But the other really important thing is that, no matter what, at the end, you need to test and validate based on your data sets and on sample data to see if it's accurate or not, and then that's going to guide everything, ultimately. >> Sharna, I've got to go to you. You brought up something in the preliminary rounds about open AI, and kind of this, we can't have this black box where stuff goes into the algorithm, stuff comes out, and we're not sure what the result was. Sounds really important. Is that even plausible? Is it feasible? This is crazy statistics, crazy math. You talked about the business objective that someone's trying to achieve. I go to the data scientist: here's my data, you're telling me this is the output. Where's the line between the layman and the business person and the hardcore data science, to bring together the knowledge of, here's what's making the algorithm say this? >> Yeah, there are a lot of names for this, whether it's explainable AI, or interpretable AI, or opening the black box, things like that. The algorithms that you use determine whether or not they're inspectable, and the deeper your neural network gets, the harder it is to inspect, actually. So, to your point, every time you take an AI and you use it in a different scenario than what it was built for, for example, there is a police precinct in New York that had facial recognition software, and a victim said, oh, it looked like this actor, this person looked like Bill Cosby or something like that. And you were never supposed to take an image of an actor and put it in there to find people that look like them, but that's how people were using it. So to Rajen's point, yes:
You can transfer learning to other AIs, but it's actually the humans that are using it in ways that are unintended that we have to be more careful about, right? Even if your AI is explainable, somebody may try to use it in a way that it was never intended to be used, and the risk is much higher. >> Now, I think, you know, if you look at Marvis, kind of what we're building for the networking community, a good example is when Marvis tries to estimate your throughput, right, your internet throughput. That's what we usually call a decision tree algorithm, and that's a very interpretable algorithm. When we predict low throughput, we know how we got to that answer, right? We know what features got us there. But when we're doing something like anomaly detection, that's a neural network, that black box. It tells us, yes, there's a problem, there's some anomaly, but it doesn't know what caused the anomaly. But that's a case where we actually use a neural network to find the anomaly, and then we're using something else to find the root cause. So it really depends on the use case, and whether you're going to use an interpretable model or a neural network, which is more of a black box model, to tell you you've got a cat or you've got a problem somewhere. >> So, Bob, that's really interesting. So can you not unpack a neural network? Is it just the nature of the way that the communication and the data flows and the inferences are made that you can't go in and unpack it, that you have to have a separate kind of process to get to the root cause? >> Yeah, a scientist always hates to say never, but inherently, neural networks are very complicated. It's a set of weights, right? It's basically usually a supervised training model, and we're feeding it a bunch of data and trying to train it to detect certain features, or an output. But that is where they're powerful, right?
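The interpretability contrast Bob draws can be made concrete with a toy decision tree. A minimal sketch, not the Marvis implementation: the feature names ("retry_rate", "clients_on_ap") and thresholds are invented for illustration, but the property shown is real, since with a tree you can read off the exact decision path that produced an answer.

```python
# Sketch: why a decision tree is inspectable. A hand-built two-level
# tree for "low throughput?" that returns both the prediction and the
# decision path behind it. Feature names and thresholds are illustrative.

def predict_low_throughput(sample: dict):
    """Return (prediction, list of rules that fired along the way)."""
    path = []
    if sample["retry_rate"] > 0.3:
        path.append("retry_rate > 0.3")
        if sample["clients_on_ap"] > 40:
            path.append("clients_on_ap > 40")
            return True, path  # low throughput predicted, and we know why
        path.append("clients_on_ap <= 40")
        return False, path
    path.append("retry_rate <= 0.3")
    return False, path

pred, why = predict_low_throughput({"retry_rate": 0.5, "clients_on_ap": 60})
print(pred, why)  # → True ['retry_rate > 0.3', 'clients_on_ap > 40']
```

A deep neural network offers no equivalent of that `why` list, which is exactly the black-box trade-off discussed here: use the tree where you must explain the answer, the network where raw detection power matters more.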
And that's why they're basically doing so well, because they are mimicking the brain, right? That neural network is a very complex thing, kind of like your brain, right? We really don't understand how your brain works, so when you have a problem, it's really trial and error. We try to figure it out. >> Right, right. So I want to stay with you, Bob, for a minute. What about when you change what you're optimizing for? So you just said you're optimizing for throughput of the network, you're looking for problems. Now let's just say it's the end of the quarter, or some other reason, and you're changing what you're optimizing for. Do you have to write a separate algorithm? Can you have dynamic movement inside that algorithm? How do you approach the problem when you're not always optimizing for the same things, depending on the market conditions? >> Yeah, I mean, I think a good example, again with Marvis, is really what we call reinforcement learning. Reinforcement learning is a model we use for things like radio resource management, and there we're really trying to optimize for the user experience, trying to balance the reward; the model is trying to reward whether or not we have a good balance between the network and the user. That reward can be changed, so that algorithm is basically reinforcement: you can fundamentally change how that algorithm works by changing the reward you give the algorithm. >> Great. Rajen, back to you. A couple of huge things have come into play in the marketplace, and I want to get your take. One is open source, you know, kind of, what's the impact of open source generally on the availability, the desire, and more applications; and then two, cloud, and soon to be edge, you know, the next stop. How do you guys incorporate that opportunity? How does it change what you can do?
How does it open up the lens of AI? >> Yeah, I think open source is really important, because one thing that's interesting about AI is that it's a very nascent field, and the more that there's open source, the more that people can build on top of each other and be able to utilize what others have done. And it's similar to how we've seen open source impact operating systems, the internet, things like that. With cloud, I think one of the big things is that now you have the processing power and the ability to access lots of data to be able to create these networks, and so the capacity for data and the capacity for compute is much higher. Edge is going to be a very important thing, especially going into the next few years. You're seeing more things incorporated on the edge, and one exciting development is around federated learning, where you can train on the edge and then combine some of those aspects into a cloud-side model, and so that, I think, will actually make edge even more powerful. >> But it's got to be so dynamic, right? Because the fundamental problem used to always be, do you move the compute to the data or the data to the compute? Well, now you've got these edge devices, you've got tons of data, right, sensor data, all kinds of machine data, you've got potentially nasty, hostile conditions. You're not in a nice, pristine data center where the environmental conditions are controlled, and you've got connectivity issues. So when you think about that problem, there's still great information there, you've got latency issues, some of it might have to be processed close to home. How do you incorporate that age-old thing of the speed of light to still break up the problem to give you a step up? >> Well, what we see a lot of customers do is a lot of training on the cloud, but then inference on the edge, and so that way they're able to create the model that they want, but then they get fast response time by moving the model to the edge.
The other thing is that, like you said, lots of data is coming into the edge, so one way to do it is to efficiently move that to the cloud, but the other way to do it is to filter, and to try to figure out what data you want to send to the cloud so that you can create the next AIs. >> Sharna, back to you. Let's shift gears into ethics, this pesky, pesky issue that's not a technological issue at all, but we see it often, especially in tech: just because you can doesn't mean that you should. And this is not a STEM issue, right? There are a lot of different things that happen. So how should people be thinking about ethics? How should they incorporate ethics? How should they make sure that they've got kind of a standard overlooking what they're doing and the decisions being made? >> Yeah, one of the more approachable ways that I have found to explain this is with behavioral science methodologies. So ethics is a massive field of study, and not everyone shares the same ethics. However, if you try and bring it closer to behavior change, because every product that we're building is seeking to change a behavior, we need to ask questions like, what is the gap between the person's intention and the goal we have for them? Would they choose that goal for themselves or not? If they wouldn't, then you have an ethical problem, right? And this can be true of the intention-goal gap or the intention-action gap. We can see it in how we regulated cigarettes: we can't just make them look cool without telling people what the cigarettes are doing to them, right? So we can apply the same principles moving forward, and they're pretty accessible without having to know, oh, this philosopher and that philosopher and this ethicist said these things. It can be pretty human. The challenge with this is that most people building these algorithms are not.
They're not trained in this way of thinking, and especially when you're working at a startup, right, you don't have access to massive teams of people to guide you down this journey, so you need to build it in from the beginning, and you need to be open and based upon principles. And it's going to touch every component: it should touch your data, your algorithm, the people that you're using to build the product. If you only have white men building the product, you have a problem; you need to pull in other people. Otherwise there are just blind spots that you are not going to think of in order to sell that product to a wider audience. >> But it seems like they're on such a razor-sharp edge, right? Coca-Cola wants you to buy Coca-Cola, and they show ads for Coca-Cola, and they appeal to your let's-all-sing-together-on-the-hillside-and-be-one, right? But it feels like with AI you can now cheat; now you can use behavioral biases that are hardwired into my brain as a biological creature against me. So where is the fine line between just trying to get you to buy Coke, which some would argue is probably just as bad because you get diabetes and all these other issues, but that's acceptable, while cigarettes are not? And now we're seeing this stuff come out on Facebook, too. So... >> Well, we know that Coke isn't just selling Coke anymore; they're also selling vitamin water. So their play isn't to have a single product that you can purchase, it's to have a suite of products: if you want that Coke, you can buy it, but if you want that vitamin water, you can have that. >> You can't get vitamin water and a smile, though, that only comes with the Coke. You want to jump in? >> I think we're going to see ethics really break into two different discussions, right?
I mean, ethics is already about human behavior that you're already doing, right? Bad behavior, like discriminatory hiring: train on that behavior, and the AI is going to be wrong too. What's wrong in the human world is going to be wrong in the AI world. I think the other component to this ethics discussion is really around privacy and data. It's like that mirror example, right? Who gave that mirror the right to basically tell me I'm old, and to actually do something with that data? Is that my data, or is that the mirror's data, that it basically recognized me and did something with it? You know, that's the Facebook example: when I get the email telling me, look at that picture, someone's tagged me in a picture, it's like, where was that? Where did that come from, right? >> What I'm curious about, to follow up on that: social norms change. We talked about it a little bit before we turned the cameras on, right? It used to be okay to have black people drinking out of a separate fountain or coming in the side door of a restaurant, not that long ago, right, in the '60s. So if someone had built an algorithm then, it would probably have incorporated that social norm. But social norms change. So how should we, you know, try to stay ahead of that, or at least go back reflectively after the fact and say, back to the black box, that's no longer acceptable, we need to tweak this? >> I would have said, in that example, that it was wrong 50 years ago. >> Okay, it was wrong. But if you'd asked somebody in Alabama, you know, in the University of Alabama math department, who had been born and bred in that culture, they probably would not necessarily have agreed. So generally, though, assuming things change, how should we make sure to go back and check that we're not carrying forward things that are no longer the right thing to do? >> Well, I think, I mean, as I said, you know:
What we know is wrong in the human world is going to be wrong in the AI world. I think the more subtle thing is when we start relying on these AIs to make decisions like, should my car hit the pedestrian or save my life? Those are tough decisions to let a machine make for you. Or is it okay for Marvis to give VIPs preference over other people, right? Those types of decisions are the ethical decisions, and whether right or wrong in the human world, I think the same thing will apply in the AI world. I do think we'll start to see more regulation. Just like we see regulation happen in our hiring, that regulation is going to be applied to our AI solutions. >> Right, and we're going to come back to regulation in a minute. But Rajan, I want to follow up with you. In your earlier session you made an interesting comment: you said, you know, 10% is clearly good, 10% is clearly bad, but there's a soft, squishy middle of 80% that isn't necessarily clearly good or bad. So how should people make judgments in this big gray area in the middle? >> Yeah, and I think that is the toughest part. And so the approach that we've taken is to set out a set of AI principles, and what we did is actually write down seven things that we think AI should do, and four things that we will not do. And we now have to look at everything that we're doing against those AI principles. Part of that is coming up with a governance process, because ultimately it boils down to doing this over and over, seeing lots of cases, and figuring out what you should do. So that governance process is something we're doing, but I think it's something that every company is going to need to do. >> Sharna, I want to come back to you, so we'll shift gears to talk a little bit about law.
We've all seen Zuckerberg, unfortunately for him, stuck in these congressional hearings over and over and over again, a little bit of a deer in the headlights. You made an interesting comment on your prior show that it's almost like he's asking for regulation. You know, he stumbled into some really big, hairy, nasty areas that were never necessarily intended when he launched Facebook out of his dorm room many, many moons ago. So what is the role of the law? Because the other thing that we've seen, unfortunately, in a lot of those hearings, is that a lot of our elected officials are way, way, way behind; they're still printing their e-mails, right? So what is the role of the law? How should we think about it? What should we invite from the law to help sort some of this stuff out? >> I think, as an individual, I would like for each company not to make up their own set of principles; I would like to have a shared set of principles that we're all following. The challenge, right, is that between governments that's impossible. China is never going to come up with the same regulations that we will; they have different privacy standards than we do. But we are seeing it locally: the state of Washington has created a future-of-work task force, and they're coming into the private sector and asking companies like Textio and Google and Microsoft to actually advise them on what to regulate. They say: we don't know, we're not the technologists. But they know how to regulate, and they know how to move policies through the government. What we'll find is, if we don't advise regulators on what we should be regulating, they're going to regulate it in some way, just like they regulated the tobacco industry, just like they regulated monopolies. Tech is big enough now, there is enough money in it now, that it will be regulated. So we need to start advising them on what we should regulate, because, just like Mark said:
While everyone else was doing it, my competitors were doing it. So if you don't want me to do it, make us all stop; what can I do? >> A negative bell on that one, and not for you, but for Mark's responsibility. That's crazy. So, Bob, old man in the mirror: it's actually a little bit more codified now, right? There's GDPR, which came in in May of last year, and now the newest one, the California Consumer Privacy Act, which goes into effect January 1. And you know what's interesting: the hardest part of the implementation, I think, for anyone who's implemented it, is the right to be forgotten, because, as we all know, computers are really good at recording information, and in the cloud it's recorded everywhere; there's no 'there' there. So when these types of regulations arrive, how does that impact AI? Because if I've got an algorithm built on a data set, and, you know, item number 472 decides they want to be forgotten, how does the AI deal with that? >> Well, I mean, I think with Facebook, I suspect Mark knows what's right and wrong; he's just kicking the can down the road, like: I want you guys to do it, it's your problem, you know, please tell me what to do. I see AI kind of like any other new technology: you know, it can be abused and used in the wrong ways. I think legally we have a constitution that protects our rights, and I think we're going to see the lawyers treat AI just like any other constitutional matter. People who are building products using AI, just like people who build medical products or other products that can actually harm people, are going to have to make sure that their AI product does not harm people, and that their product does not promote discriminatory results. So I think we're going to see our constitutional framework applied to AI just like we've seen it work with other technologies. >> And it's going to create jobs because of that, right?
Because it will be a whole new set of lawyers. >> A whole new set of lawyers, and testers even, because otherwise an individual company is saying: but we tested it, it works, trust us. Like, how are you going to get independent third-party verification of that? So we're going to start to see a whole proliferation of those types of fields that never had to exist before. >> Yeah, one of my favorites is Dr. Rumman Chowdhury from Accenture; if you don't follow her on Twitter, follow her, she's fantastic, and a great lady. So I want to stick with you for a minute, Bob, because the next topic is autonomy. Rami, up on the keynote this morning, talked about Mist, and really this kind of shifting of the workload of fixing things into an autonomous setup, where the system is now finding problems, diagnosing problems, fixing problems, up to, I think he said, even generating return authorizations for broken gear, which is amazing. But autonomy opens up all kinds of crazy, scary things. Robert Gates, when we interviewed him, said, you know, the only guns in the entire U.S. military that are autonomous are the ones on the border of North Korea; every single other one has to run through a person. When you think about autonomy, and when you can actually grant this AI the autonomy, the agency, to act, what are some of the things to think about to keep it from just doing something bad really, really fast and efficiently? >> Yeah, I mean, I think it's what we discussed, right? For all practical purposes we're still far away, you know; there is a tipping point, and I think eventually we will get to the C-3PO, Terminator day when we actually build something that's on par with a human. But right now we're really looking at tools that are going to help businesses, doctors, self-driving cars, and those tools are going to be used by our customers to basically allow them to do more productive things with their time.
You know, whether it's a doctor that's using an AI tool to help make better predictions, there's still going to be a human involved. And what Rami talked about this morning in networking is really about letting our IT customers focus more on their business problems, so they don't have to spend their time finding bad hardware and bad software, and can make better experiences for the people they're actually trying to serve. >> Right. I want to get your take on autonomy, because it's a different level of trust that we're giving to the machine when we actually let it do things on its own. >> There's a lot that goes into this decision of whether or not to allow autonomy. There's an example I read in a book that just came out, 'You Look Like a Thing and I Love You'; it was a book named by an AI, and if you want to learn a lot about AI and you don't know much about it, get it, it's really funny. So in there, there is a factory in China where the AI is optimizing the output of cockroaches. They just want more cockroaches. Why do they want that? They want to grind them up and put them in a lotion; it's one of their secret ingredients. Now, it depends on what parameters you allow that AI to change, right? If the AI decides to flood the container, and then the cockroaches get out through the vents, and then they get to the kitchen, get food, and reproduce... the parameters over which you let it be autonomous, that's the challenge. So when we're working with very narrow AI, when you tell the AI, you can change these three things and you can't just change anything, then it's a lot easier to make that autonomy decision. And then the last part of it is that you want to know what the result of a negative outcome is, and what the result of a positive outcome is, and whether those results are something that we can actually accept. >> Right, right.
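Sharna's narrow-autonomy rule, "you can change these three things and you can't just change anything," is typically enforced as a guardrail layer between the optimizer and the real system. A minimal sketch; the knob names and bounds here are hypothetical, invented for the example:

```python
# Whitelist of parameters the AI may touch, with hard safe ranges.
ALLOWED = {
    "temperature": (18.0, 30.0),   # degrees C it may explore
    "humidity":    (40.0, 60.0),   # percent relative humidity
}

def apply_guardrails(proposal):
    """Drop any knob that isn't whitelisted and clamp the rest into range,
    no matter what the optimizer proposed."""
    safe = {}
    for knob, value in proposal.items():
        if knob not in ALLOWED:
            continue                       # e.g. 'vent_open' is simply ignored
        lo, hi = ALLOWED[knob]
        safe[knob] = min(max(value, lo), hi)
    return safe

proposal = {"temperature": 45.0, "humidity": 50.0, "vent_open": True}
print(apply_guardrails(proposal))  # -> {'temperature': 30.0, 'humidity': 50.0}
```

The cockroach farm fails exactly where this layer is missing: "flood the container" was never meant to be in the action space.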
Rajan, I'll give you the last word on this one, because kind of the next-order step is when machines actually write their own algorithms, right? They start to write their own code, so they take on this next order of thought and agency, if you will. How do you guys think about that? You guys are way out ahead in the space: you have huge data sets, you've got great technology, you've got TensorFlow. When will the machines start writing their own algorithms? >> Well, actually, it's already starting. For example, we have a product called Google Cloud AutoML, which basically takes in a data set, and then we find the best model to match that data set. So things like that are there already, but it's still very nascent; there's a lot more that can happen. And ultimately, with how it's used, I think part of it is that you always have to look at the downside of automation: what is the downside of a bad decision, whether it's the wrong algorithm that you create or a bad decision from that model? If the downside is really big, that's where you need to start to apply a human in the loop. So, for example, in medicine, AI can do amazing things to detect diseases, but you would want a doctor in the loop to actually diagnose. You need to have that in place in many situations to make sure that AI is being applied well. >> But is that just today, or is that tomorrow too? Because, you know, with exponential growth, and as fast as these things are growing, will there be a day when you don't necessarily need the doctor to communicate the news? Maybe there are some second-order impacts, in terms of how you deal with the family, and, you know, the pros and cons of treatment options that are more emotional than mechanical, because it seems like eventually the doctor has a role.
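The human-in-the-loop pattern Rajan describes is commonly implemented as confidence routing: accept the model's answer automatically only above a threshold, and escalate everything else to a person. A minimal sketch; the threshold and the medical labels are invented for illustration:

```python
def triage(prediction, confidence, threshold=0.9):
    """Route a model output: auto-accept only when the model is confident
    enough, otherwise send the case to a human reviewer."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

cases = [("benign", 0.97), ("malignant", 0.62), ("benign", 0.91)]
routed = [triage(p, c) for p, c in cases]
print(routed)
# -> [('auto', 'benign'), ('human_review', 'malignant'), ('auto', 'benign')]
```

The threshold encodes exactly the point about downside: the bigger the cost of a wrong answer, the higher you set it, and the more cases a human sees.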
But it isn't necessarily in accurately diagnosing a problem. >> I think for some things, absolutely, over time the algorithms will get better and better, and you can rely on them and trust them more and more. But again, I think you have to look at the downside consequence: if there's a bad decision, what happens, and how does that compare to what happens today? So, for example, self-driving cars: we will get to the point where cars are driving by themselves. There will be accidents, but the accident rate is going to be much lower than it is with humans today, and so it will get there, but it will take time. >> And there will be a day when it will be illegal for you to drive; you'd have manslaughter, right? >> I believe absolutely there will be, and I don't think it's that far off, actually. >> I can't wait for the day when I have my car take me up to Northern California while I'm sleeping. If I only live that long. >> That's right, and it can work while you're sleeping, right? Well, I want to thank everybody a ton for being on this panel. This has been super fun, and these are really big issues. So I want to give you the final word; we'll give everyone a final say, and I just want to throw out Amara's law. People talk about Moore's law all the time, but Amara's law, which Gartner has sort of made into the hype cycle, says that we tend to overestimate in the short term, which is why you get the hype cycle, and we tend to underestimate the impacts of technology in the long term. So as you look forward into the future, and I won't put a year number on it, how do you see this rolling out? What are you excited about? What are you scared about? What should we be thinking about? We'll start with you, Bob. >> Yeah, you know, for me, the day of the Terminator, I don't know if it's 100 years or 1,000 years out, but that day is coming.
We will eventually build something that's on par with the human. I think the thing about that book, you know, 'You Look Like a Thing and I Love You': it was written by someone who tried to train an AI to basically generate pick-up lines, right? Cheesy pick-up lines. Yeah, I'm not sure I'm going to trust an AI to help me with my pick-up lines yet. 'You look like a thing, I love you': I don't know if they work. >> Yeah, but who would have guessed online dating would be what it is if you had asked, you know, 15 years ago? >> But I think, yes, overall we will see the Terminator, the C-3PO; it's probably not in our lifetime, but it is in the future somewhere. AI is definitely going to be on par with the Internet, the cell phone, the radio; it's going to be an accelerating technology. If you look at where technology has been, it's amazing to watch how fast things have changed in our lifetime alone, right? We're just on this curve of technology acceleration. >> The exponential part of the curve. Sharna? >> Yeah, I think the thing I'm most excited about for AI right now is the addition of creativity to a lot of our jobs. We build an augmented writing product, and what we do is look at the words that have happened in the world and their outcomes, and we tell you what words have impacted people in the past. Now, with that information, when you augment humans in that way, they get to be more creative; they get to use language that has never been used before to communicate an idea. You can do this with any field; you can do it with composition of music. If you can have access, as an individual, to the data of a bunch of cultures, the way that we evolve can change. So I'm most excited about that. I think I'm most concerned, currently, about the products that we're building that give AI to people who don't understand how to use it, or how to make sure they're making an ethical decision.
So it is extremely easy right now to go on the Internet and build a model on a data set, and I'm not a specialist in data, right? So I have no idea if I'm adding bias in or not. And so it's an interesting time, because we're in that middle area. >> It's getting loud in here, all right; Rajan, we'll close with you before we have to cut out, or we're not going to be able to hear anything. >> So I actually start every presentation with a picture of the Mosaic browser, because what's interesting is, I think that's where AI is today compared to kind of where the Internet was around 1994. We're just starting to see how AI can actually impact the average person. As a result there's a lot of hype, but what I'm actually finding is that with 70% of the companies I talk to, the first question is: why should I be using this, and what benefit does it give me? >> 70% ask you why? >> Yeah, and what's interesting with that is that I think people are still trying to figure out what this stuff is good for. But to your point about the long run, and how we underestimate the long term, I think that every company out there and every product will be fundamentally transformed by AI over the course of the next decade, and it's actually going to have a bigger impact than the Internet itself. And so that's really what we have to look forward to. >> All right. Again, thank you everybody for participating; that was a ton of fun, I hope you had fun. And I'm looking at the score sheet here: we've got Bob coming in with the bronze at 15 points, Rajan with the silver at 17, and our gold medal winner is Sharna at 20 points. Again, thank you so much, and I look forward to our next conversation. This is Jeff Frick, signing out from Caesars at Juniper NXTWORK. Thanks for watching.

Published Date : Nov 14 2019



Around theCUBE, Unpacking AI Panel | CUBEConversation, October 2019


 

(upbeat music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello everyone, welcome to theCUBE studio here in Palo Alto. I'm John Furrier, your host of theCUBE. We're here introducing a new format for CUBE panel discussions; it's called Around theCUBE, and we have a special segment here called Get Smart: Unpacking AI, with some great guests from the industry: Gene Santos, Professor of Engineering in the College of Engineering at Dartmouth College; Bob Friday, Vice President and CTO at Mist, a Juniper company; and Ed Henry, Senior Scientist and Distinguished Member of the Technical Staff for Machine Learning at Dell EMC. Guys, this is a format where we're going to keep score, and we're going to throw out some interesting conversations around unpacking AI. Thanks for joining us here, appreciate your time. >> Yeah, glad to be here. >> Okay, first question. As we all know, AI is on the rise; we're seeing AI everywhere. You can't go to a show or see marketing literature from any company, whether it's a consumer or a tech company, without AI something in it. So AI is on the rise. The question is: is it real AI? Is AI relevant from a reality standpoint? What really is going on with AI? Gene, is AI real? >> I think a good chunk of AI is real. It depends on what you apply it to: if it's making some sort of decision for you, that is AI coming into play. But there's also a lot of what's called AI out there that is potentially just simply a script. So, you know, one of the challenges you'll always have is: if it's scripted, is it scripted because somebody already developed the AI, pulled out all the answers, and is just using those answers straight? Or is it actively learning and changing on its own? I would tend to say that anything that's learning and changing on its own, that's where you're getting the evolving AI, and that's where you get the most power from. >> Bob, what's your take on this, AI real? >> Yeah, if you look at Google, what you see is that AI really became real in 2014. That's when AI and ML really became a thing in the industry, and when you look at why it became a thing in 2014, it's really back when we actually saw TensorFlow and open-source technology become available, along with the whole Amazon compute story. You know, you look at what we're doing here at Mist: I really don't have to worry about compute or storage, except for the Amazon bill I get every month now. So I think you're really seeing AI become real because of some key turning points in the industry. >> Ed, your take, AI real? >> Yeah, so it depends on what lens you want to look at it through. The notion of intelligence is something that's ill defined, and how you want to interpret that will guide whether or not you think it's real. I tend to call things AI if they have a notion of agency, so if they can navigate their problem space without human intervention. It's a set of moving goalposts, right? If you could take your smartphone back to Turing, when he was coming up with the Turing test, and ask him whether this device was intelligent, would that be AI? To him, probably, back then. So really it depends on how you want to look at it. >> Is AI the same as it was in 1988, or has it changed? What's the change point with AI? Because some are saying AI has been around for a while, but there's more AI now than ever before. Ed, we'll start with you: what's different with AI now versus, say, the late 80s, early 90s? >> See, what's funny is that some of the methods we're using aren't different. I think the big push that happened in the last decade or so has been the ability to store as much data as we can, along with the ability to have as much compute readily disposable as we have today.
Some of the methodologies, I mean, there was a great Wired article that was published that somebody referenced, about a method called eigenvector decomposition, which they said came from quantum mechanics and dates back to 1888, right? So really, a lot of the methodologies we're using aren't much different; it's the amount of data we have available to us that represents reality, and the amount of compute that we have. >> Bob. >> Yeah, so for me, back in the 80s when I did my masters, I actually did a masters on neural networks, so yeah, it's been around for a while. But when I started Mist, what really changed was a couple of things. One is this modern cloud stack, right? If you're going to build an AI solution, you really have to have all the pieces to ingest tons of data and process it in real time, so that is one big thing that's changed that we didn't have 20 years ago. The other big thing is we have access to all this open source TensorFlow stuff right now. People like Google and Facebook have made it so easy for the average person to do an AI project, right? Anyone here, anyone in the audience, could train a machine learning model over the weekend right now. You just have to go to Google and find the data sets; you know, they have the data sets you want to build a model to recognize letters and numbers. Those data sets are on the internet right now, and you personally could go become a data scientist over the weekend. >> Gene, your take.
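Bob's weekend project is less far-fetched than it sounds. As a toy stand-in for the TensorFlow tutorials he's describing (which typically train on the MNIST-style digit sets he mentions), here is a minimal perceptron in plain Python that learns to tell two 3x3 bit-pattern "letters" apart. The patterns, learning rate, and epoch count are all invented for illustration, not anything from the panel:

```python
# Toy perceptron: learn to distinguish two 3x3 bit-pattern "letters".
# A hedged stand-in for the weekend MNIST project described above;
# the patterns and hyperparameters are illustrative inventions.

# Flattened 3x3 pixel grids: a crude "T" and a crude "L".
T = [1, 1, 1,
     0, 1, 0,
     0, 1, 0]
L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]
samples = [(T, 1), (L, 0)]  # label 1 = "T", label 0 = "L"

weights = [0.0] * 9
bias = 0.0
rate = 0.1

def predict(pixels):
    """Fire (return 1) if the weighted pixel sum clears the bias."""
    s = bias + sum(w * x for w, x in zip(weights, pixels))
    return 1 if s > 0 else 0

# Classic perceptron update rule: nudge weights toward each miss.
for _ in range(20):  # a few epochs are plenty for two patterns
    for pixels, label in samples:
        error = label - predict(pixels)
        if error:
            bias += rate * error
            weights = [w + rate * error * x for w, x in zip(weights, pixels)]

print(predict(T), predict(L))  # prints: 1 0
```

This is the 1950s-era algorithm underneath the "neural networks" Bob says he studied in the 80s; what TensorFlow adds is many layers of such units, automatic differentiation, and the cloud-scale data and compute the panel keeps returning to.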
>> Yeah, I think on top of that, because of all that availability of open software, anybody can come in and start playing with AI, and that's also building a really large experience base of what works and what doesn't. Because we have that now, you can better define the problem you're shooting for, and when you do, you get a better sense of what's going to work and what's not, and people can also tell you, on the part that's not going to work, how it's going to expand. But overall, this comes back to the question people ask: what is AI? A lot of that is just focused on machine learning, and if it's just machine learning, that's of somewhat limited use in terms of what you're classifying. Back in the early 80s, AI really meant what people nowadays are trying to call artificial general intelligence, that all-encompassing piece: all the things that us humans can do, all the things we can reason about, all the decision sequences that we make. That's the part we haven't quite gotten to, but the applications of AI with machine learning classification have gotten us this far. >> Okay, machine learning is certainly relevant. It's been one of the hottest topics in computer science, and with AI becoming much more democratized, you guys mentioned TensorFlow and a variety of other open source initiatives, there's been a great wave of innovation, and for younger generations it's easier to code now than ever before. But machine learning seems to be at the heart of AI, and there are really two schools of thought in the machine learning world: is it just math, or is there more of a cognition, learning-machine kind of thing going on?
This has been a big debate in the industry, and I want to get your take on it. Gene, is machine learning just math and running algorithms, or is there more to it, like cognition? Where do you fall on this, what's real? >> If I look at the applications and what people are using it for, it's mostly just algorithms. It's mostly that you've managed to do the pattern recognition, you've managed to compute things out and find something interesting from it. But then on the other side of it, there are the folks working in, say, the neurosciences, the people working in the cognitive sciences. The interesting question when we look at machine learning is: does it correspond to what we're doing as human beings? The reason I fall more on the algorithm side is that a lot of those algorithms don't match what we're actually thinking, so if they're not matching that, then okay, something else is coming up, but what do we do with it? You can get an answer and work from it, but if we want to build true human intelligence, how does that all stack together to get to human intelligence? I think that's the challenge at this point. >> Bob, machine learning: math, cognition, is there more to do there, what's your take? >> Yeah, I think right now when you look at machine learning, machine learning is the algorithms we use. I mean, I think the big thing that happened to machine learning is the neural network and deep learning. That was a major stepping stone that got us through to actually building these AI behavior things.
You know, when you look at what's really happening out there, look at the self-driving car. What we don't realize is that it's kind of scary right now: you go to Vegas, you can actually get on a self-driving bus. So this AI machine learning stuff is starting to happen right before our eyes. When you go to healthcare now and get your diagnosis for cancer, we're starting to see AI and image recognition really change how we get our diagnoses, and that's really starting to affect people's lives. So those are cases where we're starting to see this AI machine learning stuff make a difference. Then there's the AI singularity discussion: when are we finally going to build something that really has human behavior? Right now we're building AI that can play Jeopardy, and that was one of the inspirations for my company Mist: hey, if they can build something to play Jeopardy, we should be able to build something to answer questions on par with network domain experts. So I think we're seeing people build solutions now that exhibit a lot of behaviors that mimic humans. I do think we're probably on the path to building something that is truly going to be on par with human thinking. Whether it's 50 years or a thousand years out, I think it's inevitable given how man is progressing, if you look at the exponential technological growth we're seeing in human evolution. >> Well, we're going to get to that in the next question, so you're jumping ahead, hold that thought. Ed, is machine learning just math and pattern recognition, or is there more cognition there to be had? Where do you fall on this?
>> Right now it's, I mean, it's all math. We collect some data set about the world and then use algorithms and some representation of mathematics to find some pattern, which is new and interesting, don't get me wrong. When you say cognition, though, we have to understand that we have a fundamentally flawed perspective, because maybe the one guiding light we have on what intelligence could be is ourselves, right? Computers don't work like brains, and brains are what we've determined embody our intelligence. Our brains don't have a clock; there's no state between different clock cycles lighting up in the brain. So when you start using words like cognition, we end up trying to measure ourselves, use ourselves as a ruler, and most of the methodologies we have today don't necessarily head down that path. So yeah, that's kind of how I view it. >> Yeah, I mean, stateless, those are API kinds of mindsets; you can't run Kubernetes in the brain. Maybe we will in the future. Stateful applications are always harder than stateless, as we all know, but then again, when I'm sleeping, I'm still dreaming. So, cognition and the question of human replacement. This has been a huge conversation: the singularity conversation, the fear of most average people, and some technical people as well, on the job front. Will AI replace my job? Will it take over the world? Is there going to be a Skynet Terminator moment? This is a big conversation point because it teases out what could be, tech for good and tech for bad. Some say tech is neutral, but it can be shaped. So the question is, will AI replace humans, and where does that line come from? We'll start with Ed on this one. Where do you see this singularity discussion, where humans are going to be replaced with AI?
>> So replace is an interesting term. I mean, we look at the last Industrial Revolution, and people, I think, are most worried about the potential of job loss. When you look at what happened during the Industrial Revolution, this concept of creative destruction came about, and the idea is that yes, technology took some jobs out of the market in some way, shape, or form, but more jobs were created because of that technology. That's kind of our one lighthouse with respect to measuring the singularity itself. Again, there's the ill-defined notion of intelligence that we have today. When you go back and read some of the early papers from psychologists of the early 1900s, the one who came up with this idea of intelligence uses the term general intelligence, and that's kind of the first time that all of civilization tried to assign a definition to what is intelligent, right? It's only been roughly 100 years, maybe a little longer, that we've had this understanding, normalized at least within western culture, of what this notion of intelligence is. So this idea of the singularity is interesting, because we just don't understand enough about the one ruler or yardstick we have, what we consider intelligence, ourselves, to be able to go and embed that inside of a thing. >> Gene, what are your thoughts on this? Reasoning is a big part of your research, you're doing a lot of research around intent and contextual, behavioral things, and this is where machines are there to augment or replace. This is the conversation, your view on this?
>> I think one of the things with this is that that's where the doubts still lie: if we can capture intentions, if we can actually start communicating them, then we can start getting to general intelligence, sort of like what Ed was referring to with how people have been trying to define this. But one of the problems that comes up is that computers don't really capture that at this time; the intentions they have are still at a low level. If we tie it to the question of the Terminator moment, the singularity, one of the things is autonomy: how much autonomy do we give to the algorithm, how much does the algorithm have access to? Now, to take an extreme, there could be a disaster situation where we weren't very careful and we provided an API that gives full autonomy to whatever AI we have running, and you can start seeing elements of Skynet coming from that. But I also tend to come to the analysis that even with APIs, which aren't AI themselves, a lot of it comes down to the intentions behind what we give them to control. Then you have the AI itself, where if you've defined the intentions of what it's supposed to do, you can avoid that Terminator moment. That's where I'm seeing it at this point. Overall, on the singularity, I still think we're a ways off, and when people worry about job loss, probably the closest thing I think can match that in recent history is automation. I grew up in Ohio at the time the steel industry was collapsing, and that was a trade-off between automation and what the current jobs were. If you have something like that, that's one thing we go forward dealing with, and I think this is something that state governments and our national government should be considering.
If you're going to have that job loss, what better thing to study, what better planning can you do for it? I've heard different proposals from different people, like, well, if we need to retrain people, where do you get the resources? It could even be something like an AI job pack. So there's a lot to discuss. We're not there yet, but I do believe the lower, repetitive jobs out there, the things we can easily define, those can be replaceable, but that's still close to the automation side. >> Yeah, and there's a lot of opportunities there. Bob, you mentioned in the last segment the singularity, cognition, learning machines, deep learning. As the machines learn, they need more data, and data informs. Whether it's biased data or real data, how do you become cognitive, how do you become human, if you don't have the data or the algorithms? The data's the-- >> I mean, I think that's one of the big ethical debates going on right now, right? Are we basically going to take our human biases and train them into our next generation of AI devices? But from my point of view, I think it's inevitable that we will build something as complex as the brain eventually. I don't know if it's 50 years or 500 years from now, but if you look at the evolution of man, where we've been over the last hundred thousand years or so, you see this exponential rise in technology. For thousands of years our technology was relatively flat.
Then in the last 200 years we've seen this exponential growth in technology that's taking off. What's amazing, when you look at quantum computing, what's scary is, I always thought of quantum computing as a research lab thing, but when you start to see VCs investing in quantum computing startups, we're going from university research discussions to, I guess, starting to commercialize quantum computing. When you look at the complexity of what a brain does, it's inevitable that we will build something that has the basic complexity of a neuron. If you look at how neuroscience looks at the brain, we really don't understand how it encodes, but it's clear that it does encode memories, which is very similar to what we're doing right now with our AI machines, right? We're building things that take data and memories and encode them in a certain way. So yeah, I'm convinced we will start to see more AI cognizance, and it will really start to happen over the next hundred years. >> Guys, this has been a great conversation. AI is real, based on this Around theCUBE conversation. I mean, you've seen the evidence, you guys pointed it out, and I think cloud computing has been a real accelerant in combination with machine learning and open source, as you've illustrated. That brings up the final question, and I'd love to get each of your thoughts on it, because Bob just brought up quantum computing. As the race to quantum supremacy goes on around the world, quantum becomes maybe that next step function, kind of what cloud computing did for revitalizing, or creating a renaissance in, AI. What does quantum do? So that begs the question: five, ten years out, if machine learning is the beginning of it and starts to solve some of these problems, as quantum comes in, more compute, unlimited resource applied with software, where does that go in five to ten years?
We'll start with Gene, then Bob, then Ed. Let's wrap this up. >> Yeah, I think if quantum becomes a reality, when you already have exponential growth, this is going to be exponential on top of exponential. Quantum is going to address a lot of the harder AI problems that come from complexity. When you talk about regular search, regular approaches of looking things up, quantum is the one that potentially allows you to take something that was exponential and make it tractable. So that's going to be a big driver. It'll be a big enabler, because in a lot of the problems I look at in trying to do intentions, I have an exponential number of intentions that might be possible if I'm going to choose one as an explanation. Quantum would allow me to narrow it down to one, if that technology works out, and of course the real challenge is whether I can rephrase the problem as, say, a quantum program in the process. But I think that advance is beyond just a step function. >> Beyond a step function, you see. Okay, Bob, your take on this, 'cause you brought it up: quantum step function, revolution, what's your view? >> I mean, quantum computing changes the whole paradigm, right? Because it moves away from the paradigm we know, this binary, if-this-then-that type of computing. So I think quantum computing is more than just a step function, I think it's going to take a whole paradigm shift, and it's going to be another decade or two before we actually get all the tools we need to start leveraging quantum computing. But I think it is going to be one of those step functions that takes our AI efforts into a whole different realm, letting us solve another whole set of classic problems. That's why they're doing it right now, because it starts to let you crack all the encryption codes, right?
Where you have millions of billions of choices and you have to find that one needle in the haystack, quantum computing is going to open that piece of the puzzle up. And when you look at these AI solutions, it's really a collection of different things going on underneath the hood. It's not one algorithm trying to mimic human behavior, so quantum computing is going to be yet one more tool in the AI toolbox that moves the whole industry forward. >> Ed, you're up: quantum. >> Cool, yeah, I think it'll, as Gene and Bob alluded to, fundamentally change the way we approach these problems, and the reason is the combinatorial problems everybody's talking about. If I want to evaluate the state space of anything using modern binary-based computers, we have to iteratively search over that space, whereas quantum computing allows you to, in effect, evaluate the entire search space at once. When you talk about games like Go, the game AlphaGo plays, you're talking about more positions on a blank 19-by-19 board than you'd have if you put 1,000 universes on every proton of our universe. So the state space is absolutely massive, and searching it exhaustively is impossible using today's binary-based computers, but quantum computing allows you to evaluate search spaces like that in one big chunk, to really simplify it. So I think it will change how we approach these problems, to Bob and Gene's point; the technology, once we crack that quantum nut, I don't think will look anything like what we have today. >> Okay, thank you guys, looks like we have a winner. Bob, you're up by one point. We had a tie for second between Ed and Gene; of course, I'm the arbiter, but I've decided Bob nailed this one. So, Gene, Ed, you did a great job coming in second place, and since you're the winner, Bob, you get the last word.
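Ed's back-of-the-envelope claim is easy to sanity-check in plain Python. Using the standard order-of-magnitude assumption of roughly 10^80 protons in the observable universe (a figure assumed here, not stated by the panel) and 3^361 raw arrangements of a 19-by-19 Go board (each intersection empty, black, or white, ignoring legality), the comparison is just big-integer arithmetic:

```python
# Sanity-check the Go state-space comparison made above.
# Assumptions: ~10^80 protons in the observable universe (standard
# order-of-magnitude estimate) and 3^361 raw arrangements of a 19x19
# board (empty/black/white per intersection, legality ignored).
import math

board_states = 3 ** (19 * 19)   # 3^361 raw arrangements, ~10^172
protons = 10 ** 80              # observable-universe estimate
universes = 1_000               # "1,000 universes on every proton"

print(f"board states ~ 10^{math.floor(math.log10(board_states))}")
print(f"protons x 1,000 universes ~ 10^{math.floor(math.log10(protons * universes))}")
print(board_states > universes * protons)  # prints: True
```

Even granting a thousand universes' worth of protons (~10^83 objects), the raw board count (~10^172) dwarfs it by almost ninety orders of magnitude, which is exactly why exhaustive search is off the table and why AlphaGo relied on learned evaluation rather than enumeration.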
Unpacking AI: what's the summary from your perspective, as the winner of Around theCUBE? >> Yeah, I think from a societal point of view, AI is going to be on par with the internet. It's going to be one of these next big technology things. It'll start to impact our lives, and when you look around, it's kind of sneaking up on us, whether it's the self-driving car, healthcare cancer diagnosis, or the self-driving bus. So I think it's here, and I think we're just at the beginning of it. It's going to be one of these technologies that impacts our whole lives over the next one or two decades. The next 10, 20 years, it's just going to be growing exponentially everywhere, in all our segments. >> Thanks so much for playing, guys, really appreciate it. We have an inventor and entrepreneur, Gene, doing great research at Dartmouth, check him out, Gene Santos at Dartmouth Computer Science. And Ed, technical genius at Dell, figuring out how to make those machines smarter, and with the software abstractions growing, you guys are doing some good work over there as well. Gentlemen, thank you for joining us on this inaugural Around theCUBE Unpacking AI Get Smart series, thanks for joining us. >> Thank you. >> Thank you. >> Okay, that's a wrap everyone, this is theCUBE in Palo Alto, I'm John Furrier, thanks for watching. (upbeat funk music)

Published Date : Oct 23 2019

