Rami Sass, WhiteSource | CUBE Conversation
>>Welcome to this CUBE Conversation, which is part of our third AWS Startup Showcase of this year. I'm your host, Lisa Martin, and I'm pleased to welcome to theCUBE the CEO and co-founder of WhiteSource, Rami Sass. Rami, welcome to the program.

>>Thank you. Thank you so much for having me.

>>I'm excited for our audience to hear about WhiteSource. Give us that high-level overview of what the company is and how you're helping organizations.

>>Sure. We help software engineering teams keep track of their use of open source components, sometimes referred to as dependencies, with a primary focus on the security aspect of those dependencies. We are able to very natively and very quickly identify all of the dependencies being used in a piece of software under development, alert on any known vulnerabilities that exist in those dependencies, and then take our users through the journey of finding them, prioritizing them, and fixing the vulnerabilities, such that their software is not at risk when it gets released.

>>Not at risk. One of the things we've talked so much about in the last 18 months is the threat landscape. It's changed dramatically: we've seen a huge increase in ransomware and a huge increase in DDoS attacks. We're also in the fifth consecutive year of a cybersecurity skills gap; it's been there for a while, and we know that there have been barriers between developers and security. How does WhiteSource help address that cybersecurity skills gap?

>>We focus on automating as much of the security practice as possible. Our main premise is that we want to be the security expert for the engineering team so that they don't have to be. We provide tools that automate the entire process of remediating vulnerabilities, so we save developers the effort and time of becoming security experts. They don't need to become security experts; they can keep doing what they do best, which is developing software and providing more business value to their employer, and we take care of anything that has to do with security in their software for them. Basically, we are trying to alleviate the need for developers to develop any kind of security-related skill set.

>>I've got to ask you how that addresses not only the skills gap but also the cultural shift required for developers to exhale and put their trust in you. It's a big challenge to change cultures within organizations. How do you help influence that?

>>Sure. Look, when you're talking about cultural shift, it always takes time; these things do not happen overnight, and it's gradual. We are very well aware of that, and we do not expect people to have 100% confidence in us on day one. Our tools and practices account for it, and we help our users trust us more over time by proving ourselves to them: first by providing advice, and by allowing them to control the pace at which they automate more of the process. Initially we will just tell them what they need to do and let them do it themselves, until they have gained enough experience with our tools to allow us to take on the full cycle for them. That's one part. The second, which may be even more important, is that we rely very heavily on crowdsourcing.
We have a very extensive customer base, made up of some of the world's leading enterprise organizations with very complex and large environments. Across those environments, combined with our ongoing monitoring of everything that's going on in the larger world of open source projects, we have compiled a very extensive crowdsourced database, a knowledge base if you will, that gives you intel on what others are doing with those vulnerable open source dependencies. We can give you a lot of confidence when we see that the broader community, of both commercial and free open source users, has upgraded a vulnerable dependency to a safe version and is sticking with the new version: they are not pulling it back, not undoing that change. We also give you visibility into that information when things go bad: if we see that many people rolled back some change and are avoiding some dependency version, then we will warn you away from upgrading to that version. So the fact that we base our recommendations on a lot of crowdsourced data is another way for us to provide more confidence in automating actions for our users.

>>The C word, confidence, is absolutely critical. I've got to ask you, though, Rami, about something you mentioned. I always like to ask startups what the impetus was to start the company. You're the CEO and co-founder: what were some of the gaps that were missing? Was it crowdsourcing, and the lack of that community to provide visibility to developers, that you saw as an opportunity to fix in the market?

>>All right, so at the risk of exposing my real age, I'll tell you that the company started over 10 years ago, and it was actually based on an experience we founders had at another company when it was time to sell it. When we sold our previous company, we had to go through a due diligence process where we were required to provide a very detailed report of all the open source dependencies we were using, and we didn't have such a report. It caught us off guard, and we had to spend a lot of time, during the most stressful part of the due diligence, finding out which open source we were using, documenting it, and producing the report. So that was a very personal experience, but it was very obvious that we weren't special: everyone developing software relies very heavily on open source and usually doesn't track it. So it initially started from the very basic need for transparency, visibility, and the ability to produce a simple bill of materials, which has now become a big thing with SBOMs. But 10 years ago it was very difficult, a very manual, laborious task, to come up with your bill of materials, and that experience became the foundation of WhiteSource.

>>Got it. Then talk to me about your relationship with AWS. I mentioned at the beginning of this segment that this is part of our third AWS Startup Showcase of the year. Give us an overview of your relationship with AWS from a technology partnership perspective: sales, marketing, product.

>>Sure. We've been working with AWS for a very long time, and they are a wonderful partner to work with. It started right at the beginning, where we are a cloud-native company.
We're a SaaS solution provider, and from the beginning we chose AWS as the infrastructure on which to run our solution. We grew together with them over the last 10 years: we've been scaling our environment and the services we provide again and again, and have been consuming more and more AWS services, both for infrastructure and, very importantly, for securing our runtime environment, which they do a great job at. But it went even further, and we are now integrated with a lot of AWS services, products, and technologies, so our offering is very much integrated with several AWS offerings. Beyond that, we are working closely with AWS as a go-to-market partner: we have several co-marketing initiatives with them, and we are part of the startup co-sell program, such that AWS salespeople can co-sell WhiteSource to their customers.

>>I imagine the partnership and the deep relationship you have with AWS is an advantage in getting those customer meetings and helping customers achieve confidence in the technologies and the power of the two companies. In 10 years you're looking at 1,000 customers, including some big names; I saw Microsoft, Comcast, and Splunk on your website, and 23% of the Fortune 100. Tell me how the AWS partnership helps you give those developers the confidence they need to trust in your technologies.

>>Sure. First, I think the synergy is very apparent, very obvious, because both AWS and we sell to the engineering departments, to the DevOps people. We are catering to the same users, the same customers, even the same decision makers, so it's very easy to understand, and it's very easy to tell the better-together story. It's easy for the AWS salespeople to explain to their customers why our product is easily integrable, which makes the sales motion easier, transparent, and fluid, and it makes the customer's consumption of the joint services easier. For the customers, it's easier to work with AWS as a vendor knowing that they can get all these added security features, gain the confidence of having the solution vetted by Amazon, and have AWS as a reference for us as a vendor. That also makes it easier for them to trust us and to use our services with peace of mind.

>>Sounds like synergistic cultures as well. I want to dig into something I saw in the notes you provided: that WhiteSource is enabling organizations to eliminate up to 85% of security alerts. That's a big number. How do you do that?

>>Okay, first, to clarify, we're talking about open source vulnerability alerts, not security alerts in general. For open source security alerts, we've developed a deeper analysis that goes beyond just looking at your bill of materials and identifying which dependencies are vulnerable: it analyzes the way the developers are using those dependencies. What we've found over the last three years of running that technology with real customers, across many tens of thousands of development projects, is that on average, 85% of the vulnerabilities in open source dependencies are not reachable from your code. They are still there, and you're still using the dependency, but you're using some other function of it, which is not vulnerable, and the vulnerable function is never actively called in your code base. So this is very specific; it's not some generic analysis. We have to analyze your code and figure that out.
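WhiteSource's reachability analysis is proprietary, but the idea can be pictured with a minimal, hypothetical sketch: parse the code, build a call graph, and check whether a known-vulnerable function is ever reachable from an entry point. The source snippet and the vulnerable function names below are invented for illustration.

```python
import ast
from collections import defaultdict, deque

SOURCE = """
def helper():
    return parse_header("x")   # calls a vulnerable function

def main():
    helper()

def unused():
    dangerous_eval("y")        # vulnerable, but never reached from main()
"""

# Hypothetical advisory data: stand-ins for vulnerable functions
# exposed by some third-party dependency.
VULNERABLE_FUNCS = {"parse_header", "dangerous_eval"}

def build_call_graph(tree: ast.Module) -> dict:
    """Map each top-level function name to the names it calls."""
    graph = defaultdict(set)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return graph

def reachable_from(graph: dict, entry: str) -> set:
    """Breadth-first search over the call graph."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn not in seen:
            seen.add(fn)
            queue.extend(graph.get(fn, ()))
    return seen

reached = reachable_from(build_call_graph(ast.parse(SOURCE)), "main")
for vuln in sorted(VULNERABLE_FUNCS):
    verdict = "reachable: prioritize" if vuln in reached else "not reachable: deprioritize"
    print(f"{vuln} is {verdict}")
```

On this toy input, only `parse_header` is flagged; `dangerous_eval` sits in code that `main` never calls, which is exactly the class of alert the 85% figure refers to.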
And so, again on average, the statistic is that just 15% of vulnerabilities are, quote unquote, reachable from your code and make your software vulnerable. All the others are simply not exploitable, and so the need to remediate them can easily be eliminated: you don't have to.

>>Got it. How are you helping customers beyond that? There's been a lot of data showing that companies are spending millions annually on multiple web app and API security tools, on average, but are still having problems with those tools being effective. How does WhiteSource help customers avoid wasting time and resources, and get right to identifying and remediating those vulnerabilities?

>>Sure. Look, again, our philosophy is that just detecting the problem, the security issue, doesn't fix anything; it doesn't solve your problem. It's like visiting your dentist and having them find the cavity, maybe do an X-ray and tell you exactly which tooth it's on and how deep it is, and then just send you home to deal with it yourself. That doesn't really solve the problem; your mouth is still painful. You have to fix the problem. In order to get any kind of value from a security service or tool, you have to close the loop, finish the process, and fix the vulnerability. So by investing a lot in automating the remediation, in enabling our tools to close that cycle, finish the job, and fix the vulnerability, we enable you to actually gain the value from the various tools you're using and make sure your software is not exposed and not vulnerable, rather than just giving you a report with the vulnerabilities, rather than just finding them for you.

>>Got it. Last question: when you're talking to customers, especially since, as I mentioned earlier, the threat landscape has changed dramatically in the last 18 months, how do you advise them to start? Do you start with the developers? Do you start with security? Or do you start by saying you've got to bring everybody together?

>>We would normally start with security, and not necessarily the developers themselves but the engineering managers, the heads of engineering, because our main effort is to leave the developers alone. We want as little developer involvement as possible so that they can be free to do what they need to do. Security is something they have to do; it doesn't add business value, it just protects the business from being exposed to greater risk. So our approach and our practice is to be a sort of exception-based tool for developers, and to get them involved only when you absolutely have to have them chime in and do something. Otherwise, we can fully take ownership and automate the entire process of identification, prioritization, and remediation for the organization, and just provide reports on, say, how many vulnerabilities we fixed this month, giving them better visibility into their security posture. We invest most of our innovation, attention, and resources as a company in automating as much of that process as possible, so that the developers don't have to spend their time on security issues. We do it for them.

>>And I imagine developer productivity goes way up for your customers.
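The close-the-loop step Rami describes, picking the fixed version and proposing the change rather than just reporting it, might look something like this sketch. The advisory feed, package names, and versions are made up; a real tool would pull from a vulnerability database and open a pull request with the bump.

```python
# Hypothetical advisory feed: package -> (vulnerable-below version, fixed version).
ADVISORIES = {
    "examplelib": ("2.4.0", "2.4.1"),   # made-up package: <2.4.0 is vulnerable
}

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def remediate(requirements: str) -> str:
    """Rewrite pinned requirements, bumping any vulnerable pin to the fixed version."""
    out = []
    for line in requirements.splitlines():
        name, _, version = line.partition("==")
        advisory = ADVISORIES.get(name.strip())
        if advisory and version and parse_version(version) < parse_version(advisory[0]):
            line = f"{name}=={advisory[1]}"   # a real tool would open a PR with this bump
        out.append(line)
    return "\n".join(out)

print(remediate("examplelib==2.3.0\nothermod==1.0.0"))
# examplelib==2.4.1
# othermod==1.0.0
```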
I do have one more question for you. Given that here we are in the fall of 2021, what are some of the things you're looking forward to as we go into the new year?

>>The new Jewish year, or the new calendar year?

>>Uh, maybe both. I was thinking, as we go into 2022, some of the things that you're excited about.

>>Sure. Look, it's a little difficult to be happy about something that's a problem for other people, because there is a growing threat to application security, and there are more and more attacks going on in the world. But I'm really looking forward to helping more people be more protected while not wasting their time. What drives me is the ability for us as a company to provide real value for customers, and not be shelfware, not be a tool that just produces reports that no one knows what to do with. The fact that we are able to steer our users and our customers away from risk, and save them the hassle of being attacked, being hacked, having their data stolen, or having their systems broken into, is what I most look forward to.

>>And there are plenty of opportunities for you to do just that and really add that value for those developers, and for companies like the big brands I mentioned: Microsoft, Comcast, Splunk. Rami, thank you for joining me on the program today, talking to us about WhiteSource, how you're filling the gaps in the cybersecurity skills landscape, and helping transform developer productivity where security is concerned. We appreciate your time.

>>Thank you. Thank you so much for having me on the show.

>>My pleasure. For Rami Sass, I'm Lisa Martin. You're watching this CUBE Conversation.
Around theCUBE, Unpacking AI | Juniper NXTWORK 2019
>>From Las Vegas, it's theCUBE, covering NXTWORK 2019 Americas. Brought to you by Juniper Networks.

>>Hey, welcome back everybody. Jeff here with theCUBE. We're in Las Vegas at Caesars at the Juniper NXTWORK event, about 1,000 people going over a lot of new, cool things. 400 gig: who knew that was coming? New information for me. But that's not why we're here today. We're here for the fourth installment of Around theCUBE, Unpacking AI, and we're happy to have all the winners of the three previous rounds here in the same place, so we don't have to do it over the phone. Let's jump into it. The winner of round one was Bob Friday. He is the VP and CTO at Mist, a Juniper company. Bob, great to see you.

>>Good to be back.

>>Absolutely. All the way from Seattle, Sharna Parkey. She's a VP applied scientist at Textio. Good to see you, Sharna. And from Google, where we know a lot of AI happens, Rajen Sheth. He is the VP of AI
and product management at Google. Welcome.

>>Thank you. Great to be here.

>>All right, so let's jump into it. Just to warm everybody up, we'll start with you, Bob. When you're talking to someone at a cocktail party on a Friday night, or talking to your mom, and they say, "What is AI?", what do you give them as examples of where AI is impacting our lives today?

>>Well, I think we all know the examples of the self-driving car, and AI starting to help our healthcare industry diagnose cancer. For me personally, I had kind of a weird experience last week at a retail technology event, where they had these new digital mirrors doing facial recognition. You walk up, and these little mirrors start guessing: hey, you have a beard, you have some glasses. And they started calling me old. So this got very personal: I'm walking down a mall with a bunch of mirrors calling me old.

>>That's a little alarming. Did it bring up, like, a cane or a walker? You start getting those advertisements.

>>It was like, okay, you guys, this is a little bit over the top.

>>All right, Sharna, what about you? What's your favorite example to share with people?

>>Yeah, I think one of my favorite examples of AI is kind of accessible on your phone, where the photos you take on an iPhone, or the photos you put in Google Photos, are automatically detecting the faces and labeling them for you: here are selfies, here's your family, here are your children. That's the most successful of the ones that people don't really think about a lot. Another is things like getting loan applications. We actually have AI deciding whether or not we get loans, and that one is probably the most interesting one to me right now.

>>Rajen?

>>So I think the photos example is probably my favorite as well. What's interesting to me is that AI is really not about the AI; it's about the user experience you can create as a result of AI. What's cool about Google Photos is that my entire family uses it, and they don't even know that underlying it is some of the most powerful AI in the world. What they know is that they can find every picture of our kids on the beach whenever they want to. We had a great example when we were out with our kids: every time they liked something in a store, we took a picture of it, and later we could look up "toy" and find everything they had taken a picture of.

>>It's interesting, because I think most people don't even know the power that they have. If you search for "beach" in your Google Photos, or, as I did, for an old bug picture from high school, it comes right up. Until you explore it, you don't know; it's pretty tricky. Rajen, a lot of the conversation about AI always focuses on general-purpose machines and robots and computers, but people don't really talk about the applied AI that's happening all around us. Why do you think that is?

>>It's a good question. There is a lot more talk about general-purpose AI, but the reality of where this has an impact right now is in specific use cases. Things like personalizing customer interaction, spotting trends that you wouldn't have spotted, or turning unstructured data like documents into structured data: that's where AI is actually having an impact right now. And I think it really boils down to getting to the right use cases where AI is right.

>>Sharna, I want to ask you: there's a lot of conversation about whether AI replaces people or augments people. We had Garry Kasparov on a couple of years ago, and he talked about how the combination of him plus the computer made the best chess player, but that quickly went away, and now the computer alone is better than Garry Kasparov plus the computer. How should people think about AI as an augmentation tool versus a replacement tool? Is it just going to be specific to the application?

>>Yeah, I would say that any application where you're making life-and-death decisions, or financial decisions that disadvantage people, anything where you've got UAVs and you're deciding whether or not to actually drop a bomb, you need a human in the loop. If you're trying to change the words you're using to get a different group of people to apply for jobs, you need a human in the loop, because it turns out that, like the beach example, if you type "sheep" into your phone you might get just a green field. If the AI has always seen sheep in a field, it doesn't know that when the sheep aren't there, it isn't a sheep picture; it doesn't have that kind of recognition. So anything where we're making decisions about parole, or about finances, needs to have a human in the loop, because those types of decisions are fundamentally changing the way we live.

>>Great. So let's shift gears. Okay, team, I may have been a little loose on my bell, so I'll be more active on it; sorry about that. Everyone's even, and we're starting at zero again. I want to shift gears and talk about data sets. Bob, you were up on stage demoing some of your technology, the Mist technology, and it's really an interesting combination of data sets and AI. AI in its current form needs a lot of data; think of the classic Chihuahua-or-blueberry-muffin photos, where you've got to run a lot of them through. How do you think about data sets, in terms of having the right data, and a complete data set, to drive an algorithm?

>>Yeah, I think we all know data sets were one of the tipping points for AI becoming more real, right along with cloud computing and storage. Data is really one of the keys to making AI real. My example on stage was wine: great wine starts with great grapes, and great AI starts with great data. For us personally, LSTM is an example in our networking space, where we have data from the last three months from our customers, and we use the last 30 days to train these LSTM algorithms to get anomaly detection to the point where we don't have false positives.

>>How much of the training is done once you've gone through the data a couple of times and adjusted, versus when you first started, when you're not really sure how it's going to shake out in the algorithm?

>>So in our case right now, training happens every night. Every night we are retraining those models to be able to predict whether there's going to be an anomaly on the network. And this is really an example, like all those cat-image examples, of how neural networks were one of the transformational things that moved AI into reality. It's starting to impact all the different industries; whether it's text or imaging, networking is an example where AI and deep learning are starting to impact our customers.
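A toy version of that nightly retrain-then-score loop might look like the sketch below, with synthetic telemetry and a simple Gaussian baseline standing in for the LSTM models Mist actually trains; only the shape of the loop, fit on a trailing window each night and score new samples against it, is the point here.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_nightly(history: np.ndarray) -> tuple:
    """'Training': fit a baseline on the trailing 30 days of telemetry.
    Production systems would fit an LSTM here; a mean/std baseline keeps
    the sketch self-contained."""
    window = history[-30 * 288:]              # 30 days of 5-minute samples
    return float(window.mean()), float(window.std())

def is_anomaly(sample: float, baseline: tuple) -> bool:
    mean, std = baseline
    return abs(sample - mean) > 4 * std       # wide threshold to avoid false positives

history = rng.normal(100.0, 5.0, size=90 * 288)   # ~3 months of synthetic throughput
baseline = train_nightly(history)                  # rerun this job every night

print(is_anomaly(101.3, baseline))   # ordinary reading -> False
print(is_anomaly(160.0, baseline))   # large deviation  -> True
```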
I think we all know data sets with one The tipping points for a I to become more real right along with cloud computing storage. But data is really one of the key points of making a I really write my example on stage was wine, right? Great wine starts a great grape street. Aye, aye. Starts a great data for us personally. L s t M is an example in our networking space where we have data for the last three months from our customers and rule using the last 30 days really trained these l s t m algorithms to really get that tsunami detection the point where we don't have false positives. >>How much of the training is done. Once you once you've gone through the data a couple times in a just versus when you first started, you're not really sure how it's gonna shake out in the algorithm. >>Yeah. So in our case right now, right, training happens every night. So every night, we're basically retraining those models, basically, to be able to predict if there's gonna be an anomaly or network, you know? And this is really an example. Where you looking all these other cat image thinks this is where these neural networks there really were one of the transformational things that really moved a I into the reality calling. And it's starting to impact all our different energy. Whether it's text imaging in the networking world is an example where even a I and deep learnings ruling starting to impact our networking customers. >>Sure, I want to go to you. What do you do if you don't have a big data set? You don't have a lot of pictures of chihuahuas and blackberries, and I want to apply some machine intelligence to the problem. >>I mean, so you need to have the right data set. You know, Big is a relative term on, and it depends on what you're using it for, right? So you can have a massive amount of data that represents solar flares, and then you're trying to detect some anomaly, right? If you train and I what normal is based upon a massive amount of data and you don't have enough examples of that anomaly you're trying to detect, then it's never going to say there's an anomaly there, so you actually need to over sample. You have to create a population of data that allows you to detect images you can't say, Um oh, >>I'm going to reflect in my data set the percentage of black women >>in Seattle, which is something below 6% and say it's fair. It's not right. You have to be able thio over sample things that you need, and in some ways you can get this through surveys. You can get it through, um, actually going to different sources. But you have to boot, strap it in some way, and then you have to refresh it, because if you leave that data set static like Bob mentioned like you, people are changing the way they do attacks and networks all the time, and so you may have been able to find the one yesterday. But today it's a completely different ball game >>project to you, which comes first, the chicken or the egg. You start with the data, and I say this is a ripe opportunity to apply some. Aye, aye. Or do you have some May I objectives that you want to achieve? And I got to go out and find the >>data. So I actually think what starts where it starts is the business problem you're trying to solve. And then from there, you need to have the right data. What's interesting about this is that you can actually have starting points. 
>>So that's a great segue into talking about taking an algorithm that was built around one data set and applying it to a different data set. Is that appropriate? Is that correct? Are you risking all kinds of interesting problems by taking it from there and applying it here, especially as people go to the algorithm marketplaces because they don't have a data scientist and want to grab one off the shelf to apply to their data? How should people be careful not to make a bad decision based on that?

>>So I think it really depends, and it depends on the type of machine learning you're doing and what type of data you're talking about. For example, with images there are well-known techniques to be able to do this, but with other things there really aren't, and so it depends. The other really important thing is that no matter what, at the end you need to test and validate against your own data sets, your own sample data, to see whether it's accurate or not, and that's going to guide everything ultimately.

>>Sharna, I've got to go to you. You brought up something in the preliminary rounds about openness in AI: that we can't have this black box where stuff goes into the algorithm, stuff comes out, and we're not sure how it got to the result. It sounds really important, but is it even plausible? Is it feasible? This is crazy statistics, crazy math. You talked about the business objective someone is trying to achieve: I go to the data scientist, here's my data, and they tell me this is the output. Where is the line between the layman, the business person, and the hard-core data scientist in bringing together the knowledge of what's making the algorithm say this?

>>Yeah, there are a lot of names for this, whether it's explainable AI, or interpretable AI, or opening the black box, things like that. The algorithms that you use determine whether or not they're inspectable, and the deeper your neural network gets, the harder it is to inspect. So, to your point, there's risk every time you take an AI and use it in a different scenario than what it was built for. For example, there is a police precinct in New York that had facial recognition software, and a victim said the suspect looked like some actor, someone like Bill Cosby. You were never supposed to take an image of an actor and put it in there to find people who look like him, but that's how people were using it. So, to Rajen's point: yes, you can transfer learning to other AIs, but it's actually the humans using it in unintended ways that we have to be more careful about. Even if your AI is explainable, if somebody tries to use it in a way that it was never intended to be used, the risk is much higher.
>>Now, you know, if you look at Marvis, what we're building for the networking community, there's a good example. When Marvis tries to estimate your throughput, your internet throughput, that's what we call a decision tree algorithm, and that's a very interpretable algorithm: when we predict low throughput, we know how we got to that answer; we know what features got us there. But when we're doing something like anomaly detection, that's a neural network, a black box. It tells us yes, there's a problem, there's some anomaly, but it doesn't know what caused the anomaly. So that's a case where we use a neural network to find the anomaly, and then we use something else to find the root cause. It really depends on the use case whether you're going to use an interpretable model or a neural network, which is more of a black-box model that tells you you've got a cat, or you've got a problem somewhere.

>>So, Bob, that's really interesting: you cannot unpack it? Is it just the nature of the way the communication and the data flow and the inferences are made that you can't go in and unpack a neural network, and you have to have a separate kind of process to get to the root cause?

>>Yeah, as a scientist it's always hard to say never, but inherently neural networks are very complicated sets of weights. It's usually a supervised training model: we feed it a bunch of data and train it to detect certain features for an output. But that's exactly where they're powerful, and that's why they do so well: they are mimicking the brain. A neural network is a very complex thing, kind of like your brain. We really don't understand how your brain works either, so when there's a problem, it's really trial and error to figure it out.

>>Right. So I want to stay with you, Bob, for a minute. What about when you change what you're optimizing for? You just said you're optimizing for throughput of the network and looking for problems. Now let's say it's the end of the quarter, or some other reason, and you're changing what you're optimizing for. Do you have to write a separate algorithm? Can you have dynamic movement inside the algorithm? How do you approach a problem when you're not always optimizing for the same things, depending on conditions?

>>Yeah, I think a good example, again with Marvis, is what we call reinforcement learning. Reinforcement learning is a model we use for things like radio resource management, where we're really trying to optimize for the user experience, trying to balance the reward the model gets between the network and the user. That reward can be changed, so you can change how the algorithm behaves by changing the reward you give it.
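Bob's last point, that swapping the reward function changes what the system learns to do, can be shown with a very small sketch. This is a generic epsilon-greedy bandit over two invented radio settings, not Mist's actual radio resource management.

```python
import random

random.seed(1)

# Two invented radio settings and their (hidden) effects:
# (average throughput in Mbps, client-experience score in [0, 1]).
EFFECTS = {"wide_channel": (90.0, 0.5), "narrow_channel": (60.0, 0.9)}

def reward_throughput(tput: float, experience: float) -> float:
    return tput                         # optimize raw throughput only

def reward_balanced(tput: float, experience: float) -> float:
    return tput * experience            # balance the network against the user

def run(reward_fn, steps: int = 2000, eps: float = 0.1) -> str:
    """Epsilon-greedy bandit: keep a running average reward per action."""
    est = {a: 0.0 for a in EFFECTS}
    count = {a: 0 for a in EFFECTS}
    for _ in range(steps):
        if random.random() < eps:
            action = random.choice(list(EFFECTS))     # explore
        else:
            action = max(est, key=est.get)            # exploit
        tput, experience = EFFECTS[action]
        r = reward_fn(tput + random.gauss(0, 5), experience)  # noisy observation
        count[action] += 1
        est[action] += (r - est[action]) / count[action]      # running mean
    return max(est, key=est.get)

print(run(reward_throughput))   # settles on wide_channel
print(run(reward_balanced))     # settles on narrow_channel
```

With the raw-throughput reward the bandit settles on the wide channel; weighting throughput by client experience flips the choice, with no change to the learning code itself.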
>>Great. Rajen, back to you. A couple of huge things have come into play in the marketplace, and I want your take. One is open source: what's the impact of open source, generally, on the availability of and desire for more applications? And then cloud, and soon edge, the next stop. How do you incorporate that opportunity? How does it change what you can do? How does it open up the lens of AI?

>>Yeah, I think open source is really important, because AI is a very nascent field, and the more open source there is, the more people can build on top of each other's work and utilize what others have done. It's similar to how we've seen open source impact operating systems and the internet. With cloud, one of the big things is that you now have the processing power and the ability to access lots of data to create these networks, so the capacity for data and the capacity for compute are much higher. Edge is going to be very important, especially over the next few years; you're seeing more things incorporated on the edge. One exciting development is around federated learning, where you can train on the edge and then combine some of those aspects into a cloud-side model, and I think that will actually make edge even more powerful.

>>But it's got to be so dynamic, right? The fundamental problem always used to be whether you move the compute to the data or the data to the compute. Now you've got these edge devices with tons of data, sensor data, all kinds of machine data, and potentially nasty, hostile conditions. You're not in a nice, pristine data center where the environmental conditions and the connectivity are taken care of. There's still great information there, but you've got latency issues, and some of it might have to be processed close to home. How do you incorporate that age-old constraint of the speed of light and still break up the problem?

>>What we see a lot of customers do is train in the cloud but run inference on the edge. That way they're able to create the model they want, but they get fast response times by moving the model to the edge. The other thing is that, like you said, lots of data is arriving at the edge, so one approach is to efficiently move it to the cloud, and the other is to filter: to figure out which data you want to send to the cloud so that you can create the next models.
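The federated idea Rajen mentions can be sketched in a few lines: each edge site fits a model on data that never leaves it, and only the fitted weights travel to the cloud, where they are averaged. A tiny closed-form linear model and synthetic data stand in for the real thing here.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Edge-side step: fit a linear model on local data only.
    Closed-form least squares keeps the sketch short."""
    X1 = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

# Three edge sites, each holding private samples of the same underlying relation.
true_w, true_b = 3.0, -1.0
site_weights = []
for _ in range(3):
    X = rng.normal(size=(200, 1))
    y = true_w * X[:, 0] + true_b + rng.normal(0, 0.1, size=200)
    site_weights.append(local_fit(X, y))          # raw data never leaves the site

# Cloud-side step: federated averaging of the weights alone.
global_model = np.mean(site_weights, axis=0)
print(global_model)                               # approximately [3.0, -1.0]
```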
>>Sharna, back to you. Let's shift gears into ethics, this pesky issue that's not a technological issue at all. We see it often, especially in tech: just because you can doesn't mean that you should. How should people be thinking about ethics? How should they incorporate it, and make sure they've got a standard overlooking what they're doing and the decisions being made?

>>Yeah, one of the more approachable ways I have found to explain this is with behavioral science methodologies. Ethics is a massive field of study, and not everyone shares the same ethics. However, if you bring it closer to behavior change, because every product we're building is seeking to change a behavior, we need to ask questions like: what is the gap between the person's intention and the goal we have for them? Would they choose that goal for themselves or not? If they wouldn't, then you have an ethical problem. This can be true of the intention-goal gap or the intention-action gap. We saw this when we regulated cigarettes: we can't just make them look cool without telling people what the cigarettes are doing to them. So we can apply the same principles moving forward, and they're pretty accessible without having to know what this philosopher or that ethicist said; it can be pretty human. The challenge is that most people building these algorithms are not trained in this way of thinking, and especially when you're working at a startup, you don't have access to massive teams of people to guide you down this journey. So you need to build it in from the beginning, based on principles, and it's going to touch every component. It should touch your data, your algorithm, and the people you're using to build the product. If you only have white men building the product, you have a problem; you need to pull in other people. Otherwise, there are just blind spots that you're not going to think of in order to build that product for a wider audience.

>>But it seems like we're on such a razor-sharp edge. Coca-Cola wants you to buy Coca-Cola, and they show ads for Coca-Cola that appeal to your "let's all sing together on the hillside and be one." But it feels like with AI you can now cheat: you can use behavioral biases that are hardwired into my brain as a biological creature against me. So where is the fine line between just trying to get you to buy Coke, which is arguably also bad because you get diabetes and all these other issues, but is acceptable, while cigarettes are not? And now we're seeing this stuff play out on Facebook.

>>We know that this is hard, and Coke isn't just selling Coke anymore; they're also selling vitaminwater. Their play isn't to have a single product you can purchase, but a suite of products: if you don't want that Coke, you can buy the vitaminwater instead.

>>You shouldn't get vitaminwater and a smile; that only comes with the Coke. Bob, you want to jump in?

>>I think we're going to see ethics really break into two different discussions. Ethics is already about human behavior: if you're doing bad behavior, like discriminatory hiring, and you train an AI on that behavior, the AI is going to be wrong. What's wrong in the human world is going to be wrong in the AI world. I think the other component of this ethics discussion is really around privacy and data. It's like that mirror example: who gave that mirror the right to tell me I'm old, and to actually do something with that data? Is that my data, or is that the mirror's data, that it recognized me and did something with it? It's the Facebook example: when I get the email saying someone tagged me in a picture, it's like, where was that? Where did that come from?

>>What I'm curious about, to follow up on that, is that social norms change; we talked about it a little bit before we turned the cameras on. It used to be okay to have Black people drinking out of a separate fountain or coming in the side door of a restaurant, not that long ago, right into the '60s. So if someone had built an algorithm then, it would probably have incorporated that social norm. But social norms change.
So how should we try to stay ahead of that, or at least go back reflectively after the fact and say, kind of back to the black box, that's no longer acceptable, we need to tweak this?

>>I would have said, in that example, that it was wrong 50 years ago too.

>>Okay, it was wrong. But if you had asked somebody at the University of Alabama who had been born and bred in that culture, they probably would not necessarily have agreed. So generally, assuming things change, how should we make sure to go back and check that we're not carrying forward things that are no longer the right thing to do?

>>Well, as I said, I think what we know is wrong in the human world is going to be wrong in the AI world. I think the more subtle thing is when we start relying on these AIs to make decisions. Does my car hit the pedestrian or save my life? Those are tough decisions to let a machine take off your shoulders. Or is it okay for Marvis to give these VIPs preference over other people? Those types of decisions are the ethical decisions, and whether right or wrong in the human world, I think the same thing will apply in the AI world. I do think we'll start to see more regulation, just like we see regulation in hiring; that regulation is going to be applied to AI.

>>Right. We're going to come back to regulation in a minute. But Rajen, I want to follow up with you: in your earlier session, you made an interesting comment. You said 10% is clearly good, 10% is clearly bad, but there's a soft, squishy middle of 80% that isn't necessarily super clear, good or bad. So how should people make judgments in this big gray area in the middle?

>>Yeah, and I think that is the toughest part. The approach we've taken is to set out a set of AI principles. What we did is actually write down seven things that we think AI should do, and four things that we will not do, and we now have to look at everything we're doing against those AI principles. Part of that is coming up with a governance process, because ultimately it boils down to doing this over and over, seeing lots of cases, and figuring out what you should do. That governance process is something we're doing, but I think it's something every company is going to need to do.

>>Sharna, I want to come back to you, and we'll shift gears to talk a little bit about law. We've all seen Zuckerberg, unfortunately for him, stuck in these congressional hearings over and over again, a bit of a deer in the headlights. You made an interesting comment on your prior show that it's almost like he's asking for regulation. He stumbled into some really big, hairy, nasty areas that were never intended when he launched Facebook out of his dorm room many, many moons ago. So what is the role of the law? Because the other thing we've seen, unfortunately, in a lot of those hearings, is that a lot of our elected officials are way, way behind; they're still printing out their emails. What should we invite from the law to help sort some of this stuff out?
>>I think, as an individual, I would like for each company not to make up its own set of principles; I would like us to have a shared set of principles that we're all following. The challenge is that between governments that's impossible: China is never going to come up with the same regulations that we will; they have different privacy standards than we do. But we are seeing it locally. The state of Washington has created a future-of-work task force, and they're coming to the private sector and asking companies like Textio and Google and Microsoft to advise them on what we should be regulating. They're not the technologists, but they know how to regulate and how to move policies through the government. What we'll find is that if we don't advise regulators on what to regulate, they're going to regulate it in some way anyway, just like they regulated the tobacco industry, and just like they regulated monopolies. Tech is big enough now, and there is enough money in it now, that it will be regulated. So we need to start advising them, because, just like Mark said: everyone else was doing it, my competitors were doing it, so if you don't want me to do it, make us all stop.

>>What can I do but ring a negative bell, not for you, but for Mark's sense of responsibility? That's crazy. So, Bob, old man at the mall: it's actually a little bit more codified now. There's GDPR, which came through in May of last year, and now the new California privacy regulation, the California Consumer Privacy Act, which goes into effect January 1. What's interesting is that I think the hardest part of implementing it is the right to be forgotten, because, as we all know, computers are really good at recording information, and in the cloud it's recorded everywhere; there's no "there" there. So with these types of regulations, how does that impact AI? If I've got an algorithm built on a data set, and person number 472 decides they want to be forgotten, how does the AI deal with that?

>>Well, I mean, with Facebook, I suspect Mark knows what's right and wrong; he's just kicking the ball down the road: "It's your problem, you know. Please tell me what to do." I see AI as kind of like any other new technology: it can be abused and used in the wrong ways. Legally, we have a constitution that protects our rights, and I think we're going to see the lawyers treat AI just like any other constitutional matter. People who are building products using AI, just like people building medical products or other products that can actually harm people, are going to have to make sure their AI product does not harm people, and that it does not promote discriminatory results. So I think we're going to see our constitutional framework applied to AI just as we've seen it work with other technologies.

>>And it's going to create jobs because of that, right? There will be a whole new set of lawyers.

>>A whole new set of lawyers, and testers even, because otherwise an individual company is just saying, "We tested it. It works. Trust us." How are you going to get independent third-party verification of that? So we're going to start to see a proliferation of fields that never had to exist before.

>>Yeah, one of my favorites is Dr. Rumman Chowdhury from Accenture.
If you don't follow her on Twitter, follow her; she's fantastic, and a great lady. So I want to stick with you for a minute, Bob, because the next topic is autonomy. Rami Rahim, up in the keynote this morning, talked about Mist, and really this shifting of the workload of fixing things into an autonomous setup, where the system now is finding problems, diagnosing problems, fixing problems, up to, I think he said, even generating return authorizations for broken gear, which is amazing. But autonomy opens up all kinds of crazy, scary things. Robert Gates, when we interviewed him, said that the only autonomous guns in the entire U.S. military are the ones on the border of North Korea; every single other one has to run through a person. When you think about autonomy, and when you can actually grant this AI the agency to act, what are some of the things to think about, to keep it from just doing something bad really, really fast and efficiently?

>>Yeah, I mean, I think it's what we discussed. For all practical purposes, we're far away. There is a tipping point, and I think eventually we will get to the C-3PO, Terminator day where we actually build something that's on par with a human. But for right now, we're really looking at tools that are going to help businesses, doctors, self-driving cars, and those tools are going to be used by our customers to let them do more productive things with their time. Whether it's a doctor using an AI tool to make better predictions, there's still going to be a human involved. What Rami talked about this morning in networking is really about allowing our IT customers to focus more on their business problems, so they don't have to spend their time finding bad hardware and bad software, and can make better experiences for the people they're actually trying to serve.

>>Right. Sharna, I want to get your take on autonomy, because it's a different level of trust that we're giving to the machine when we actually let it do things on its own.

>>There's a lot that goes into this decision of whether or not to allow autonomy. There's an example in a book that just came out, "You Look Like a Thing and I Love You." It's a book named by an AI, and if you want to learn a lot about AI and you don't know much about it, get it; it's really funny. In there, there's a factory in China where the AI is optimizing the output of cockroaches. They want more cockroaches, because they grind them up and put them in a lotion; it's one of their secret ingredients. Now, it depends on what parameters you allow that AI to change. If you let the AI decide to flood the container, the cockroaches get out through the vents, get to the kitchen, get food, and reproduce. The parameters over which you let the AI be autonomous are the challenge. When we're working with very narrow AI, when you tell the AI "you can change these three things, and nothing else," it's a lot easier to make that autonomous decision. And the last part of it is that you want to know what the result of a negative outcome is, versus the result of a positive outcome, and whether those are results we can actually accept.

>>Right, right.
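Sharna's framing, autonomy only over an explicit whitelist of parameters, within bounds, and with the cost of a bad outcome checked before acting, is easy to sketch; every name and number below is illustrative.

```python
# Illustrative guardrails for an autonomous optimizer: it may adjust only
# whitelisted parameters, only within bounds, and any proposal with a
# worst-case downside that is too costly gets escalated to a human.
ALLOWED = {
    "lid_height_mm": (0.0, 5.0),     # the system may vary these...
    "feed_rate_gph": (1.0, 10.0),    # ...and nothing else
}
MAX_AUTO_DOWNSIDE = 100.0            # cost threshold for auto-approval

def review(proposal: dict, worst_case_cost: float) -> str:
    for name, value in proposal.items():
        if name not in ALLOWED:
            return f"rejected: '{name}' is not an adjustable parameter"
        lo, hi = ALLOWED[name]
        if not lo <= value <= hi:
            return f"rejected: '{name}'={value} is outside [{lo}, {hi}]"
    if worst_case_cost > MAX_AUTO_DOWNSIDE:
        return "escalate: needs human sign-off"
    return "auto-approved"

print(review({"feed_rate_gph": 4.0}, worst_case_cost=10.0))     # auto-approved
print(review({"vent_open": 1.0}, worst_case_cost=10.0))         # rejected
print(review({"feed_rate_gph": 9.5}, worst_case_cost=5000.0))   # escalate
```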
>>Rajen, I'll give you the last word on this topic, because the next order of step is machines actually writing their own algorithms, right? They start to write their own code, taking on this next order of thought and agency, if you will. How do you think about that? You're way out ahead in the space: you have huge data sets, you've got great technology, you've got TensorFlow. When will the machines start writing their own algorithms?

>>Well, actually, it's already starting. For example, we have a product called Google Cloud AutoML that takes in a data set and then finds the best model to match that data set. So things like that are there already, but it's still very nascent; there's a lot more that can happen. And ultimately, with how it's used, part of it is that you always have to look at the downside of automation: what is the downside of a bad decision, whether it's the wrong algorithm being created or a bad decision from that model? If the downside is really big, that's where you need to apply human-in-the-loop. For example, in medicine, AI can do amazing things to detect diseases, but you would want a doctor in the loop to actually diagnose. You need to have that in place in many situations to make sure it's being applied well.

>>But is that just today, or is that tomorrow? Because with exponential growth, and as fast as these things are advancing, will there be a day when you don't necessarily need the doctor, except maybe to communicate the news? Maybe there are second-order impacts in terms of how you deal with the family, and the pros and cons of treatment options that are more emotional than mechanical. It seems like eventually the doctor has a role, but it isn't necessarily in accurately diagnosing the problem.

>>I think for some things, absolutely, over time the algorithms will get better and better, and you can rely on them and trust them more and more. But again, you have to look at the downside consequence: if there's a bad decision, what happens, and how does that compare to what happens today? So, for example, self-driving cars: we will get to the point where cars are driving by themselves. There will be accidents, but the accident rate is going to be much lower than it is with humans, and so it will get there. But it will take time.

>>And there will be a day when it will be illegal for you to drive, because it's manslaughter, right?

>>I believe absolutely there will be, and I don't think it's that far off, actually.

>>I can't wait for the day when my car takes me up to Northern California while I sleep, if I only live that long.

>>That's right, and you can work while you're sleeping, right?

>>Well, I want to thank everybody a ton for being on this panel. This has been super fun, and these are really big issues. So I want to give you the final word; we'll give everyone a final say, and I'll just throw out Amara's law. People talk about Moore's law all the time, but Amara's law, which Gartner turned into the hype cycle, says that we tend to overestimate in the short term, which is why you get the hype cycle, and tend to underestimate, in the long term, the impacts of technology.
>>And there will be a day when it will be illegal for you to drive. It'll be manslaughter, right?

>>I believe absolutely there will be, and I don't think it's that far off, actually.

>>I can't wait for the day when I have my car take me up to Northern California while I'm sleeping. If I only live that long.

>>That's right. And work while you're sleeping, right? Well, I want to thank everybody a ton for being on this panel. This has been super fun, and these are really big issues. So I want to give you the final word; we'll just give everyone kind of a final say. And I just want to throw out there Amara's law. People talk about Moore's law all the time, but Amara's law, which Gartner sort of made into the hype cycle, is that we tend to overestimate in the short term, which is why you get the hype cycle, and we tend to underestimate, in the long term, the impacts of technology. So as you look forward into the future, and we won't put a year number on it, how do you see this rolling out? What are you excited about? What are you scared about? What should we be thinking about? We'll start with you, Bob.

>>Yeah, you know, for me, the day of the Terminator scenario, I don't know if it's 100 years or 1,000 years, but that day is coming. We will eventually build something that's on par with the human. I think you mentioned the book, You Look Like a Thing and I Love You. That title was written by someone who tried to train an AI to basically generate pickup lines, right? Cheesy pickup lines. Yeah, I'm not so sure I'm going to trust AI to help me with my pickup lines yet. "You know I love you. You look like a thing. I love you." I don't know if they work.

>>Yeah, but who would have guessed online dating is what it is, if you had asked, you know, 15 years ago?

>>I think, yes, I think overall, yes, we will see the Terminator, the C-3PO; it's probably not in our lifetime, but it is in the future somewhere. AI is definitely going to be on par with the Internet, the cell phone, radio. It's going to be a technology that keeps accelerating. If you look where technology's been, it's amazing to watch how fast things have changed in our lifetime alone, right? We're just on this curve of technology acceleration.

>>The exponential curve. Sharna?

>>Yeah, I think the thing I'm most excited about for AI right now is the addition of creativity to a lot of our jobs. So, we build an augmented writing product, and what we do is we look at the words that have happened in the world and their outcomes, and we tell you what words have impacted people in the past. Now, with that information, when you augment humans in that way, they get to be more creative. They get to use language that has never been used before to communicate an idea. You can do this with any field; you can do it with composition of music. If you can have access, as an individual, to the data of a bunch of cultures, the way that we evolve can change. So I'm most excited about that. I think I'm most concerned, currently, about the products that we're building to give AI to people who don't understand how to use it, or how to make sure they're making an ethical decision. It is extremely easy right now to go on the Internet and build a model on a data set, and I'm not a specialist in data, right? So I have no idea if I'm adding bias in or not. And so it's an interesting time, because we're in that middle area.

>>It's getting loud in here. All right, Rajan, we'll throw it to you before we have to cut out, or we're not going to be able to hear anything.

>>So I actually start every presentation out with a picture of the Mosaic browser, because what's interesting is I think that's where AI is today, compared to kind of where the Internet was around 1994. We're just starting to see how AI can actually impact the average person. As a result, there's a lot of hype, but what I'm actually finding is that for 70% of the companies I talk to, the first question is, why should I be using this, and what benefit does it give me?

>>70% ask you why?

>>Yeah. And what's interesting with that is that I think people are still trying to figure out what this stuff is good for.
But to your point about the long run, and how we underestimate the long term, I think that every company out there, and every product, will be fundamentally transformed by AI over the course of the next decade, and it's actually going to have a bigger impact than the Internet itself. And so that's really what we have to look forward to.

>>All right. Again, thank you, everybody, for participating. This was a ton of fun; hope you had fun. And I look at the score sheet here: we've got Bob coming in with the bronze at 15 points, Rajan with the silver at 17, and our gold medal winner is Sharna at 20 points. Again, thank you, thank you so much, and we look forward to our next conversation. This is Jeffrey signing out from Caesars, at Juniper NXTWORK, Unpacking AI. Thanks for watching.
Mike Evans, Red Hat | Google Cloud Next 2019
>>Live from San Francisco, it's theCUBE, covering Google Cloud Next 2019. Brought to you by Google Cloud and its ecosystem partners.
And so we worked together on probably ten or fifteen different projects, and it's a constant interaction between our upstream developers where we share ideas. And do you agree with this kind of >> S O Obviously, Cooper Netease is a big one. You know, when you see the list, it's it's Google and Red Hat right there. Give us a couple of examples of some of the other ones. I >> mean again, it's K B M is also a foundation on one that people kind of forget about that these days. But it still is a very pervasive technology and continuing to gain ground. You know, there's all there's the native stuff. There's the studio stuff in the AML, which is a whole fascinating category in my mind as well. >> I like history of kind of a real student of industry history, and so I like that you talk to folks who have been there and try to get it right. But there was a sort of this gestation period from two thousand six to two thousand nine and cloud Yeah, well, like you said, it's a book seller. And then even in the down turn, a lot of CFO said, Hey, cap backstop ex boom! And then come out of the downturn. And it was shadow I t around that two thousand nine time frame. But it was like, you say, a hyper visor discussion, you know, we're going to put VM where in in In our cloud and homogeneity had a lot of a lot of traditional companies fumbling with their cloud strategies. And and And he had the big data craze. And obviously open source was a huge part of that. And then containers, which, of course, have been around since Lennox. Yeah, yeah, and I guess Doctor Boom started go crazy. And now it's like this curve is reshaping with a I and sort of a new era of data thoughts on sort of the accuracy of that little historical narrative and and why that big uptick with containers? >> Well, a couple of things there won the data, the whole data evolution and this is a fascinating one. For many, many years. I'm gonna be there right after nineteen years. So I've seen a lot of the elements of that history and one of the constant questions we would always get sometimes from investor. Why don't you guys buy a database company? You know, years ago and we would, you know, we didn't always look at it. Or why aren't you guys doing a dupe distribution When that became more spark, etcetera. And we always looked at it and said, You know, we're a platform company and if we were to pick anyone database, it would only cover some percentage and there's so many, and then it just kind of upsets the other. So we've we've decided we're going to focus, not on the data layer. We're going to focus on the infrastructure and the application layer and work down from it and support the things underneath. So it's consistent now with the AML explosion, which, you know, we're who was a pioneer of AML. They've got some of the best services and then we've been doing a lot of work within video in the last two years to make sure that all the GP use wherever they're run. Hybrid private cloud on multiple clouds that those air enabled and Raylan enabled in open shift. Because what we see happening and in video does also is right now all the applications being developed by free mlr are written by extremely technical people. When you write to tense airflow and things like that, you kind of got to be able to write a C compiler level, but so were working with them to bring open shift to become the sort of more mass mainstream tool to develop. 
AI- and ML-enabled apps. Because the value of having RHEL underneath OpenShift is that every piece of hardware in the world is supported, right, and then every cloud. And then, when we add that GPU enablement to OpenShift and middleware and our storage, everything inherits it. So that's the most valuable piece of real estate that we own in the industry: it's actually RHEL, and then everything builds upon that.

>>It's interesting what you said about the database. Of course, we had a long discussion about that this morning. You're right, though, Mike: you either have to be really good at one thing, like a DataStax with Cassandra, or a Mongo, and there are a zillion others that I'm not mentioning, or you've got to do everything, like the cloud guys are doing out there. You know, every one of them has an operational database, analytics, RDBMS, NoSQL, I mean, one of each, and then you have to partner with them. So I would imagine you looked at that as well and said, how are we going to do all that?

>>Right. And there are so many competitive dynamics coming at us. You know, we've always been in the mode where we've been the little guy battling against the big guys, whoever it may be, whether it was Sun, IBM, and the HP Unixes in the early days. Oracle was our friend for a while; then they became, not an enemy, but a competitor, on the Linux side. And Amazon was an early friend, and then they did their own Linux. So that's a normal operating model for us, to have this big competitive dynamic alongside a partnering dynamic.

>>You've got to win it in the marketplace, as the customers say: come on, guys, figure it out.

>>Right. We'll figure it out together.

>>We talked earlier about hybrid cloud; we talked about multi-cloud. Some people say those are the same thing, but I think they're actually different. Hybrid, you think of on-prem and public, and hopefully some level of integration and a common data plane and control plane. And multi-cloud has sort of evolved from multi-vendor. How do you guys look at it? Is multi-cloud a strategy? How do you look at hybrid?

>>Yeah, I mean, it's simple in my mind, but I know the terms get used by a lot of different people in different ways. Hybrid cloud, to me, is just that straightforward: being able to run something on-premise and being able to run something in a public cloud, and have it be somewhat consistent, or shareable, or movable. And then multi-cloud is being able to do that same thing with multiple public clouds. And then there's a third variation on that, which is wanting to do an application that runs in both and shares information. And you saw that in the Google Anthos announcement, where they're talking about their service running on the other two major public clouds. That's a first for any sizable company, and I think that's going to become the norm, because wherever the infrastructure is that a customer's using, if Google has a great service, they want to be able to tell the user to run it on the infrastructure of their choice.

>>So, yeah, you brought up Anthos, and at the core it's GKE. So it's the Kubernetes we've been talking about, and, as you said, it works with AWS, works with Azure.
But it's GKE on top of those public clouds. Maybe give us a little bit of compare and contrast of that with OpenShift. OpenShift lives in all of these environments too, but they're not fully compatible, and how does that work?

>>So, on Anthos, which was announced yesterday, two high-level comments, I guess. One is, as we talked about at the beginning, it's a validation of what our message has been: that hybrid cloud is of value, multi-cloud is of value. So that's a productive element of it, helping promote that vision and that concept. Also, at the macro level, it puts us in a more competitive environment with Google than we were in yesterday or two days ago. But again, that's our normal world: we partnered with IBM and HP and competed against them on Unix; we've partnered with Microsoft and compete with them. So that's normal. That said, we believe, with OpenShift having five-plus years in market, over a thousand customers, very wide deployments, and already running in Google's, Amazon's, and Microsoft's clouds, already there and solid, with people doing real things on it, plus coming from the position of an independent software vendor, that's a more valuable position for multi-cloud than being a single cloud vendor. So, you know, welcome to the party, in a sense. And going on-prem, I say welcome to the jungle, for all these public cloud companies. Going on-prem, you know, it's a lot of complexity when you have to deal with American Express's infrastructure, Bank of Hong Kong's infrastructure, Ford Motor's infrastructure.

>>Right, right. You know, Google before only had to run on Google servers in Google data centers. Everything's a very clean environment, one temperature.

>>And enterprise customers have a little different demands, in terms of versions, when they upgrade, and how long they let things live; there's a lot of differences.

>>Actually, that was one of the things Corey Quinn was doing some analysis with us on. And Google, for the most part, is, if we decide to pull something, you've got kind of a one-year window to deal with it, you know? How does Red Hat look at that?

>>I mean, my guess is they'll evolve over time as they get deeper into it. Or maybe they won't; maybe they have a model where they think they will gain enough share as is. But, I mean, we were built on enterprise DNA, and we've evolved to cloud and hybrid multi-cloud DNA. We love when people say, I'm going to the cloud, because when they say they're going to the cloud, it means they're doing new apps or they're modifying old apps, and we have a great shot at landing that business when they're doing something new.

>>Well, right, right. And whether it's on-prem or in the public cloud, when they say they'll go to the cloud, they talk about the cloud experience, right? And that's really what your strategy is: to bring that cloud experience to wherever your data lives. Exactly. So, talking about that multi-cloud, or omni-cloud: when we sort of look at the horses on the track, you say, okay, you've got VMware going after that, you've got IBM and Red Hat going after that, now Google, a huge cloud provider, doing that. Wherever you look, there's Red Hat now.
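[Editor's note: the portability argument in this exchange, one workload definition running on GKE, AWS, or Azure, can be sketched with the standard Kubernetes Python client. This is the generic Kubernetes pattern, not Anthos or OpenShift tooling, and the cluster context names below are hypothetical entries assumed to exist in your kubeconfig.]

```python
# Rough sketch of multi-cloud portability: push one identical Deployment
# to Kubernetes clusters in different clouds, selected by kubeconfig context.
from kubernetes import client, config

def make_deployment(name: str, image: str) -> client.V1Deployment:
    """Build a small, cloud-agnostic Deployment object."""
    labels = {"app": name}
    container = client.V1Container(name=name, image=image)
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

# One spec, three clouds: the application definition never changes
# per provider; only the target cluster context does.
for ctx in ["gke-prod", "eks-prod", "aks-prod"]:   # hypothetical contexts
    config.load_kube_config(context=ctx)
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(
        namespace="default", body=make_deployment("hello", "nginx:1.25")
    )
```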
Of course, I know you can't talk much about the IBM integration, but an IBM executive once said to me, Stu, we're like a recovering alcoholic: we learned our lesson from mainframe; we are open, we're committed to open. So we'll see. But Red Hat is everywhere, and your strategy presumably has to keep that sort of open neutrality going.

>>I'll give you a couple of examples from long ago, probably five, six years ago, when the cloud stuff was still early. I had two CIO conference calls in one day, and one was with a big graphics, you know, Hollywood graphics company. After we explained all of our cloud stuff, we had nine people on the call explaining all our cloud, and the guy said, okay, let me just tell you something: the biggest value you bring to me is having RHEL as my single point of sanity, so that I can move this stuff wherever I want. I just attach all my applications, I attach third-party apps and everything, and then I can move it wherever I want. So RHEL is the big thing, and I still think that's true. And then there was another large gaming company that was trying to decide how to move forty thousand servers from their own cloud to a public cloud, and how they were going to do it. And they had the head of servers, the head of security, the head of databases, the head of networking, the heads of nine different functions there, and they were all in disagreement at the end. And the CIO said at the end of the day, Mike, I've got, like, a headache. I need some vodka and Tylenol now. So give me one simple piece of advice: how do I navigate this? I said, you just write every app to RHEL and JBoss, and this was before OpenShift, and no matter where you want to run them, RHEL and JBoss will be there. And he said, excellent advice, that's what we're doing. So there's something really beautiful about the simplicity of that, which a lot of people overlook, with all the hand-waving of Kubernetes and containers and fifty versions of certified Kubernetes, etcetera. There's something really beautiful about that. We see a lot of value in that single point of sanity, and in allowing people flexibility at, you know, a pretty low cost, using RHEL as your foundation.

>>Open source, hybrid cloud, multi-cloud, omni-cloud: all tailwinds for Red Hat. Mike, we'll give you the final word, a bumper sticker on Google Cloud Next, or any other final thoughts.

>>To me, it's great to see thirty thousand people at this event. It's great to see Google getting more and more invested in the cloud, and more and more invested in the enterprise. I think they've had great success in a lot of non-enterprise accounts, probably more so than the other clouds, and now they're coming this way. They've got great technology, our engineers love working with their engineers, and now we've got a more competitive dynamic. And like I said, welcome to the jungle.

>>We've got Red Hat Summit coming up, Stu, in early May.

>>Absolutely, back in Beantown, Dave.

>>Nice. Okay, I'll be in London; you'll be right at Summit in Boston in May. Good deal. Mike, thanks very much for coming on.

>>Thank you. It's great to see you.

>>Good to see you.

>>All right, everybody, keep it right there. Stu and I will be back. John Furrier is also in the house. You're watching theCUBE, Google Cloud Next 2019. We'll be right back.
Nate Silver, FiveThirtyEight - Tableau Customer Conference 2013 - #TCC #theCUBE
>>Hi, everybody, we're back. This is Dave Vellante. theCUBE goes out to the shows; we extract the signal from the noise. Nate Silver's here. Nate, we've been saying that since 2010, ripping you off. Oh, you have that trademarked? Okay. So, anyway, welcome to theCUBE. You're a man who needs no introduction, but in case you don't know Nate: he's a very famous author, of FiveThirtyEight.com, a statistician, an influential individual, a predictor of a lot of things, including presidential elections. Great to have you here.

>>Great to be here.

>>So we listened to your keynote this morning. We asked some of our audience earlier, can you tweet it, what would you ask Nate Silver? So of course we got the predictable: how are the Red Sox going to do this year? Who's going to be in the World Series? Are we going to attack Syria? Will the Fed ease or tighten? And of course, we're down here: who'd you vote for? They all want to know. And a lot of these questions you can't answer, because it's too far out. But anyway, again, welcome to theCUBE. So I want to start by picking up on some of the themes in your keynote. You're here at the Tableau conference; obviously it's all about data. And one of your basic premises was that people will misinterpret data; they'll just use data for their own biases. You have been a controversial figure, right? A lot of people have accused you of bias. How do you feel about that, as a person who's a statistician, somebody who loves data?

>>I think everyone has bias, in the sense that we all have one relatively narrow perspective as compared to the big set of problems that we all are trying to analyze or solve or understand together. But I do think some of this actually comes down to not just bias but kind of personal morality and ethics, really. It seems weird to talk about it that way, but there are a lot of people involved in the political world who are operating to manipulate public opinion and who don't really place a lot of value on the truth. And I consider that kind of immoral. But people like that, I think, don't really understand that someone else might act morally by actually just trying to discover the way the objective world is, and trying to use science and research to uncover things. And so I think it's hard for those people, because if they were in your shoes, they would try to manipulate the forecast, and they would cheat and put their finger on the scale. They assume that anyone else would do the same thing, because they don't own any of that morality themselves.

>>Yeah. So, you've made some incredibly accurate predictions in the face of others that clearly had bias and that mispredicted. How did you feel when you got those attacks? Were you flabbergasted? Were you pissed? Were you hurt? All of the above?

>>I mean, you get used to it; there's a lot of bullshit, right? So you're not too surprised. I guess it surprised me how much people who, you know, are pretty intelligent are willing to fool themselves, and how specious the arguments were. And, by the way, people are always constructing arguments for outcomes they happen to be rooting for, right? It'd be one thing if you said, well, I'm a Republican, but, boy, I think Obama's going to crush Romney in the electoral college, or vice versa.
But you should have an extra layer of scrutiny when you have a view that diverges from the consensus, or from what the markets are saying. And, by the way, there are betting markets: you could go and bet on the outcome of the election with bookies in the UK and other countries, right? And they kind of had forecasts similar to ours. People were actually putting their money where their mouth was, agreeing that Obama was, not a lock, but a pretty heavy favorite throughout most of the last two months of the election.

>>I wanted to ask you about prediction markets, because, as you probably know, the betting public are actually very efficient handicappers, right?
>>We saw Billy being there and, and uh, when, when, when, when, when that book came out, I said Billy Bean's out of his mind for releasing all these secrets. And you alluded to in your talk today that other teams like the rays and like the red Sox have sort of started to adopt those techniques. At the same time, I feel like culturally when another one of your V and your Venn diagram, I don't want you vectors, uh, that, that Oakland's done a better job of that, that others may S they still culturally so pushing back, even the red Sox themselves, it can be argued, you know, went out and sort of violated the, the principles were of course Oakland A's can't cause they don't have a, have a, have a budget to do. So what's your take on Moneyball? Is the, is the strategy that he put forth sustainable or is it all going to be sort of level playing field eventually? >>I mean, you know, the strategy in terms of Oh fine guys that take a lot of walks, right? Um, I mean everyone realizes that now it's a fairly basic conclusion and it was kind of the sign of, of how far behind how many biases there were in the market for that, you know, use LBP instead of day. And I actually like, but that, that was arbitrage, you know, five or 10 years ago now, um, put butts in the seat, right? Man, if they win, I guess it does, but even the red Sox are winning and nobody goes to the games anymore. The red Sox, tons of empty seats, even for Yankees games. Well, it's, I mean they're also charging 200 bucks a ticket or something. you can get a ticket for 20, 30 bucks. But, but you know, but I, you know, I, I, I mean, first of all, the most emotional connection to baseball is that if your team is in pennant races, wins world series, right then that produces multimillion dollar increases in ticket sales and, and TV contracts down the road. >>So, um, in fact, you know, I think one thing is, is looking at the financial side, like modeling the martial impact of a win, but also kind of modeling. If you do kind of sign a free agent, then, uh, that signaling effect, how much does that matter for season ticket sales? So you could do some more kind of high finance stuff in baseball. But, but some of the low hanging fruit, I mean, you know, almost every team now has a Cisco analyst on their payroll or increasingly the distinctions aren't even as relevant anymore. Right? Where someone who's first in analytics is also listening to what the Scouts say. And you have organizations that you know, aren't making these kind of distinctions between stat heads and Scouts at all. They all kind of get along and it's all, you know, finding better ways, more responsible ways to, to analyze data. >>And basically you have the advantage of a very clear way of measure, measure success where, you know, do you win? That's the bottom line. Or do you make money or, or both. You can isolate guys Marshall contribution. I mean, you know, I am in the process now of hiring a bunch of uh, writers and editors and developers for five 38 right? So someone has a column and they do really well. How much of that is on the, the writer versus the ed or versus the brand of the site versus the guy at ESPN who promoted it or whatever else. Right. That's hard to say. But in baseball, everyone kind of takes their turn. It's very easy to measure each player's kind of marginal contribution to sort of balance and equilibrium and, and, and it's potentially achieved. But, and again, from your talk this morning modeling or volume of data doesn't Trump modeling, right? 
>>You need both. And you need culture. You need, you need, you know, you need volume of data, you need high quality data. You need, uh, a culture that actually has the right incentives align where you really do want to find a way to build a better product to make more money. Right? And again, they'll seem like, Oh, you know, how difficult should it be for a company to want to make more money and build better products. But, um, when you have large organizations, you have a lot of people who are, uh, who are thinking very short term or only about only about their P and L and not how the whole company as a whole is doing or have, you know, hangups or personality conflicts or, or whatever else. So, you know, a lot of success I think in business. Um, and certainly when it comes to use of analytics, it's just stripping away the things that, that get in the way from understanding and distract you. >>It's not some wave a magic wand and have some formula where you uncover all the secrets in the world. It's more like if you can strip away the noise there and you're going to have a much clearer understanding of, of what's really there. Uh, Nate, again, thanks so much for joining us. So kind of wanna expand on that a little bit. So when people think of Nate silver, sometimes they, you know, they think Nate silver analytics big data, but you're actually a S some of your positions are kind of, you take issue with some of the core notions of big data really around the, the, the importance of causality versus correlation. So, um, so we had Kenneth kookier on from, uh, the economist who wrote a book about big data a while back, the strata conference. And you know, he, in that book, they talk a lot about it really doesn't matter how valid anymore, if you know that your customers are gonna buy more products based on this dataset or this correlation that it doesn't really matter why. >>You just try to try to try to exploit that. Uh, but in your book you talk about, well and in the keynote today you talked about, well actually hypothesis testing coming in with some questions and actually looking for that causality is also important. Um, so, so what is your, what is your opinion of kind of, you know, all this hype around big data? Um, you know, you mentioned volume is important, but it's not the only thing. I mean, like, I mean, I'll tell you I'm, I'm kind of an empiricist about anything, right? So, you know, if it's true that merely finding a lot of correlations and kind of very high volume data sets will improve productivity. And how come we've had, you know, kind of such slow economic growth over the past 10 years, where is the tangible increase in patent growth or, or different measures of progress. >>And obviously there's a lot of noise in that data set as well. But you know, partly why both in the presentation today and in the book I kind of opened up with the, with the history is saying, you know, let's really look at the history of technology. It's a kind of fascinating, an understudied feel, the link between technology and progress and growth. But, um, it doesn't always go as planned. And I certainly don't think we've seen any kind of paradigm shift as far as, you know, technological, economic productivity in the world today. I mean, the thing to remember too is that, uh, uh, technology is always growing in and developing and that if you have roughly 3% economic growth per year exponential, that's a lot of growth, right? It's not even a straight line growth. It's like exponential growth. 
And to have 3% exponential growth compounding over how many years is a lot. >>So you're always going to have new technologies developing. Um, but what I, I'm suspicious that as people will say this one technology is, is a game changer relative to the whole history of civilization up until now. Um, and also, you know, again, a lot of technologies you look at kind of economic models where you have different factors or productivity. It's not usually an additive relationship. It's more a multiplicative relationships. So if you have a lot of data, but people who aren't very good at analyzing it, you have a lot of data but it's unstructured and unscrutinised you know, you're not going to get particularly good results by and large. Um, so I just want to talk a little bit about the, the kind of the, the cultural issue of adopting kind of analytics and, and becoming a data driven organization. And you talk a lot about, um, you know, really what you do is, is setting, um, you know, try to predict the probabilities of something happening, not really predicting what's going to happen necessarily. >>And you talked to New York, you know, today about, you know, knowledging where, you know, you're not, you're not 100% sure acknowledging that this is, you know, this is our best estimate based on the data. Um, but of course in business, you know, a lot of people, a lot of, um, importance is put on kind of, you know, putting on that front that you're, you know, what you're talking about. It's, you know, you be confident, you go in, this is gonna happen. And, and sometimes that can actually move markets and move decision-making. Um, how do you balance that in a, in a business environment where, you know, you want to keep, be realistic, but you want to, you know, put forth a confident, uh, persona. Well, you know, I mean, first of all, everyone, I think the answer is that you have to, uh, uh, kind of take a long time to build the narrative correctly and kind of get back to the first principles. >>And so at five 38, it's kind of a case where you have a dialogue with the readers of the site every day, right? But it's not that you can solve in one conversation. If you come in to a boss who you never talked to you before, you have to present some PowerPoint and you're like, actually this initiative has a, you know, 57% chance of succeeding and the baseline is 50% and it's really good cause the upside's high, right? Like you know, that's going to be tricky if you don't have a good and open dialogue. And it's another barrier by the way to success is that uh, you know, none of this big data stuff is going to be a solution for companies that have poor corporate cultures where you have trouble communicating ideas where you don't everyone on the same page. Um, you know, you need buy in from, from all throughout the organization, which means both you need senior level people who, uh, who understand the value of analytics. >>You also need analysts or junior level people who understand what business problems the company is trying to solve, what organizational goals are. Um, so I mean, how do you communicate? It's tricky, you know, maybe if you can't communicate it, then you find another firm or go, uh, go trade stocks and, and uh, and short that company if you're not violating like insider trading rules of, of various kinds. Um, you know, I mean, the one thing that seems to work better is if you can, uh, depict things visually. People intuitively grasp uncertainty. 
If you kind of portray it to them in a graphic environment, especially with interactive graphics, uh, more than they might've just kind of put numbers on a page. You know, one thing we're thinking about doing with the new 580 ESPN, we're hiring a lot of designers and developers is in case where there is uncertainty, then you can press a button, kind of like a slot, Michigan and simulate and outcome many times, then it'll make sense to people. Right? And they do that already for, you know, NCAA tournament stuff or NFL playoffs. Um, but that can help. >>So Nate, I asked you my, my partner John furry, who's often or normally the cohost of this show, uh, just just tweeted me asking about crowd spotting. So he's got this notion that there's all this exhaust out there, the social exhaustive social data. How do you, or do you, or do you see the potential to use that exhaust that's thrown off from the connected consumer to actually make predictions? Um, so I'm >>a, I guess probably mildly pessimistic about this for the reason being that, uh, a lot of this data is very new and so we don't really have a way to kind of calibrate a model based on it. So you can look and say, well, you know, let's say Twitter during the Republican primaries in 2016 that, Oh, Paul Ryan is getting five times as much favorable Twitter sentiment as Rick Santorum or whatever among Republicans. But, but what's that mean? You know, to put something into a model, you have to have enough history generally, um, where you can translate X into Y by means of some function or some formula. And a lot of data is so new where you don't have enough history to do that. And the other thing too is that, um, um, the demographics of who is using social media is changing a lot. Where we are right now you come to conference like this and everyone has you know, all their different accounts but, but we're not quite there yet in terms of the broader population. >>Um, you have a lot of kind of thought leaders now a lot of, you know, kind of young, smart urban tech geeks and they're not necessarily as representative of the population as a whole. That will over time the data will become more valuable. But if you're kind of calibrating expectations based on the way that at Twitter or Facebook were used in 2013 to expect that to be reliable when you want a high degree of precision three years from now, even six months from now is, is I think a little optimistic. Some sentiment though, we would agree with that. I mean sentiment is this concept of how many people are talking about a thumbs up, thumbs down. But to the extent that you can get metadata and make it more stable, longer term, you would see potential there is, I mean, there are environments where the terrain is shifting so fast that by the time you know, the forecast that you'd be interested in, right? >>Like things have already changed enough where like it's hard to do, to make good forecast. Right? And I think one of the kind of fundamental themes here, one of my critiques is some of the, uh, of, uh, the more optimistic interpretations of big data is that fundamentally people are, are, most people want a shortcut, right? Most people are, are fairly lazy like labor. What's the hot stock? Yeah. Right. Um, and so I'm worried whenever people talk about, you know, biased interpretations of, of the data or information, right? Whenever people say, Oh, this is going to solve my problems, I don't have to work very hard. You know, not usually true. 
Even if you look at sports, even steroids, performance enhancing drugs, the guys who really get the benefits of the steroids, they have to work their butts off, right? And then you have a synergy which hell. >>So they are very free free meal tickets in life when they are going to be gobbled up in competitive environments. So you know, uh, bigger datasets, faster data sets are going to be very powerful for people who have the right expertise and the right partners. But, but it's not going to make, uh, you know anyone to be able to kind of quit their job and go on the beach and sip my ties. So ne what are you working on these days as it relates to data? What's exciting you? Um, so with the, with the move to ESPN, I'm thinking more about, uh, you know, working with them on sports type projects, which is something having mostly cover politics. The past four or five years I've, I've kind of a lot of pent up ideas. So you know, looking at things in basketball for example, you have a team of five players and solving the problem of, of who takes the shot, when is the guy taking a good shot? >>Cause the shot clock's running out. When does a guy stealing a better opportunity from, from one of his teammates. Question. We want to look at, um, you know, we have the world cup the summer, so soccer is an interest of mine and we worked in 2010 with ESPN on something called the soccer power index. So continuing to improve that and roll that out. Um, you know, obviously baseball is very analytics rich as well, but you know, my near term focus might be on some of these sports projects. Yeah. So that the, I have to ask you a followup on the, on the soccer question. Is that an individual level? Is that a team level of both? So what we do is kind of uh, uh, one problem you have with the national teams, the Italian national team or Brazilian or the U S team is that they shift their personnel a lot. >>So they'll use certain guys for unimportant friendly matches for training matches that weren't actually playing in Brazil next year. So the system soccer power next we developed for ESPN actually it looks at the rosters and tries to make inferences about who is the a team so to speak and how much quality improvement do you have with them versus versus, uh, guys that are playing only in the marginal and important games. Okay. So you're able to mix and match teams and sort of predict on your flow state also from club league play to make inferences about how the national teams will come together. Um, but soccer is a case where, where we're going into here where we had a lot more data than we used to. Basically you had goals and bookings, I mean, and yellow cards and red cards and now you've collected a lot more data on how guys are moving throughout the field and how many passes there are, how much territory they're covering, uh, tackles and everything else. So that's becoming a lot smarter. Excellent. All right, Nate, I know you've got to go. I really appreciate the time. Thanks for coming on. The cube was a pleasure to meet you. Great. Thank you guys. All right. Keep it right there, everybody. We'll be back with our next guest. Dave Volante and Jeff Kelly. We're live at the Tableau user conference. This is the cube.