Seth Myers, Demandbase | George Gilbert at HQ
>> This is George Gilbert. We're on the ground at Demandbase, the AI-based B2B CRM company, a very special company with some really unique technology. We have the privilege to be with Seth Myers today, Senior Data Scientist and resident wizard, who's going to take us on a journey through some of the technology Demandbase is built on, and some of the technology coming down the road. So Seth, welcome.
>> Thank you very much for having me.
>> So, we talked earlier with Aman Naimat, Senior VP of Technology, about some of the functionality in Demandbase, and how it's very flexible, reactive, and adaptive in helping guide, or react to, a customer's journey through the buying process. Tell us what that journey might look like, how it's different, the touchpoints and the participants, and then how your technology rationalizes it, because we know old CRM packages were really just lists of contact points. So this is something very different. How does it work?
>> Yeah, absolutely. At the highest level, each customer is going to be different: each customer is going to make decisions, look at different marketing collateral, and respond to different marketing collateral in different ways. As companies get bigger and the products they offer become more sophisticated, that's certainly the case. Also, sales cycles take a long time; you're engaged with an opportunity over many months, so there are a lot of touchpoints and a lot of planning that has to be done. That actually offers a huge opportunity to be solved with AI, especially in light of recent developments in reinforcement learning. Reinforcement learning is basically machine learning that can think strategically: it can plan ahead through a series of decisions. It's the technology behind AlphaGo, the Google system that beat the best Go players in the world.
And what we basically do is say, "Okay, if we understand you're a customer, we understand the company you work at, and we understand the things they've been researching elsewhere on third-party sites, then we can actually start to predict the content they'll be likely to engage with." But more importantly, we can start to predict the content they're likely to engage with next, and after that, and after that. So what our technology does is look at all possible paths your potential customer can take, all the different content you could ever suggest to them, all the different routes they might follow, and it finds the ones they're likely to follow, but also the ones likely to turn them into an opportunity. In the same way Google Maps considers all possible routes to get you from your office to home, we do the same, and we choose the path most likely to convert the opportunity, the way Google chooses the quickest road home.
>> Okay, that's a great example, because people can picture that. But how do you know what's the best path? Is it based on learning from previous journeys from customers?
>> Yes.
>> And then, if you make a wrong guess, you sort of penalize the engine and say, "Pick what you thought was the next best path."
>> Absolutely. The nuts and bolts of how it works: we start working with our clients, and they have all this data about different customers and how they've engaged with different pieces of content throughout their journey. So what the machine learning model is really doing, given any customer at any stage of the opportunity they find themselves in, is asking: what piece of content are they likely to engage with next? And that's based on historical training data, if you will.
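The Google Maps analogy above maps cleanly onto a shortest-path search. Purely as an illustration (the content pieces, transition probabilities, and the Dijkstra formulation are all invented here, not Demandbase's actual system), the highest-probability content journey can be found by searching in negative log space:

```python
# Treat each piece of content as a node in a graph. Edge weights are learned
# probabilities that a prospect moves from one piece of content to the next.
# The most promising journey is then a highest-probability path search.
import math
import heapq

def best_journey(transitions, start, goal):
    """Dijkstra over -log(probability): the cheapest path in log space is
    the highest-probability content journey."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, math.exp(-cost)
        if node in seen:
            continue
        seen.add(node)
        for nxt, p in transitions.get(node, {}).items():
            if nxt not in seen and p > 0:
                heapq.heappush(frontier, (cost - math.log(p), nxt, path + [nxt]))
    return None, 0.0

# Toy transition probabilities between content pieces, ending at "opportunity".
transitions = {
    "blog_post":    {"case_study": 0.4, "webinar": 0.2},
    "case_study":   {"demo_request": 0.5},
    "webinar":      {"demo_request": 0.1},
    "demo_request": {"opportunity": 0.6},
}
path, prob = best_journey(transitions, "blog_post", "opportunity")
print(path)             # ['blog_post', 'case_study', 'demo_request', 'opportunity']
print(round(prob, 3))   # 0.12
```

A production system would learn these transition probabilities from historical customer journeys rather than hard-coding them, and would optimize for conversion rather than a fixed goal node.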
And then once we make that decision on a step-by-step basis, we extrapolate. We basically say, "Okay, if we showed them this page, or if they engaged with this material, what would that do? What situation would we find them in at the next step, and then what would we recommend from there, and then from there, and then from there?" So it's really learning the right move to make at each step, and then extrapolating that all the way to the opportunity being closed.
>> The picture in my mind is like Deep Blue, I think it was chess, where it would map out all the potential moves.
>> Very similar, yeah.
>> To the end game.
>> Very similar idea.
>> So, what about if you're trying to engage with a customer across different channels, and it's not just web content? How is that done?
>> Well, that's something we're very excited about, and something we're currently starting to devote real resources to. Right now we already have a product live that's focused on web content specifically, but yeah, we're working on a multi-channel type of solution, and we're all pretty excited about it.
>> Okay, so obviously you can't talk too much about it. Can you tell us what channels that might touch?
>> I might have to play my cards a little close to my chest on this one, but I'll just say we're excited.
>> Alright. Well, I guess that means I'll have to come back.
>> Please, please.
>> So, tell us about the personalized conversations. Is the conversation just another way of saying, this is how we're personalizing the journey? Or is there more to it than that?
>> Yeah, it really is about personalizing the journey, right?
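The step-by-step extrapolation Seth describes earlier in this exchange can be pictured as a greedy rollout of a next-content model. This is a toy sketch; the lookup table standing in for the learned model is invented:

```python
# Given a model that predicts the next piece of content from the current
# state, roll it forward greedily to sketch out an entire journey.
def rollout(next_content_model, state, steps=4):
    journey = [state]
    for _ in range(steps):
        state = next_content_model(state)
        if state is None:  # journey ends, e.g. the opportunity closes
            break
        journey.append(state)
    return journey

# Toy stand-in for the learned model: a lookup of most-likely next steps.
MOST_LIKELY_NEXT = {
    "landing_page": "whitepaper",
    "whitepaper": "pricing_page",
    "pricing_page": "demo_request",
    "demo_request": None,
}
print(rollout(MOST_LIKELY_NEXT.get, "landing_page"))
# ['landing_page', 'whitepaper', 'pricing_page', 'demo_request']
```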
Like I said, a lot of our clients now have a lot of sophisticated marketing collateral. A lot of time and energy has gone into developing content that different people find engaging, that positions products against pain points, and all that. So there's a lot of low-hanging fruit in just organizing and leveraging all of this material, and actually forming the conversation through a series of journeys through that material.
>> Okay. So, Aman was telling us earlier that so many of these algorithms are all open source, or all published, and they're only as good as the data you can apply them to. So tell us, where do companies and startups, not the Googles, Microsofts, and Amazons, get their proprietary information? Is it that you have algorithms so advanced that you can refine raw information into proprietary information that others don't have?
>> Really, I think our competitive advantage is largely in the source of our data. Yes, you can build more and more sophisticated algorithms, but if you're starting with a public dataset, you'll be able to derive some insights, and there will always be a path to those datasets for, say, a competitor. For example, we're currently tracking about 700 billion web interactions a year, and we're also able to attribute those web interactions to companies, meaning the employees at those companies involved in those interactions. That gives us an insight that no amount of public data or processing would ever really be able to achieve.
>> How do you... Aman started to talk to us about how there were reverse DNS registries.
>> Reverse IP lookups, yes.
>> Yeah. So for the individuals within companies, and then the companies themselves, how do you identify them reliably?
>> Right. So, reverse IP lookup: we've been doing this for years now, and we've developed a multi-source solution. Reverse IP lookups are a big one. There's also machine learning: you can look at the traffic coming from an IP address and start to make some very informed decisions about what that IP address is actually doing and who they are. If you're looking at the account level, which is what we track at, there's a lot to be gleaned from that kind of information.
>> Sort of the way, and this may be a weird-sounding analogy, but the way a virus or some piece of malware has a signature in terms of its behavior, you find signatures in terms of users associated with an IP address.
>> And we certainly don't de-anonymize individual users, but if we're looking at things at the account level, then the bigger the data, the more signal you can infer. So if we're looking at company-wide usage of an IP address, you can start to make some very educated guesses as to who that company is, the things they're researching, what they're in market for, that type of thing.
>> And how do you find out, if they're not coming to your site, and they're not coming to one of your customers' sites, how do you find out what they're touching?
>> Right. I can't really go into too much detail, but a lot of it comes from working with publishers, and a lot of this data is just raw. It's only because we can identify the companies behind these IP addresses that we're able to turn these web interactions into insights about specific companies.
>> George: Sort of like how advertisers or publishers would track visitors across many, many sites, by having agreements.
>> Yes, along those lines, yeah.
>> Okay.
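As a rough sketch of the account-level aggregation idea (an assumption about the general technique, not Demandbase's implementation; all IP ranges, company names, and events below are invented):

```python
# Map IP ranges to companies via a hypothetical reverse-IP table, then
# aggregate page topics per company to infer what each account is researching.
import ipaddress
from collections import Counter

IP_TO_COMPANY = {  # invented reverse-IP data
    ipaddress.ip_network("198.51.100.0/24"): "Acme Corp",
    ipaddress.ip_network("203.0.113.0/24"): "Globex",
}

def company_for(ip):
    addr = ipaddress.ip_address(ip)
    for net, company in IP_TO_COMPANY.items():
        if addr in net:
            return company
    return None

def research_profile(events):
    """events: (ip, topic) pairs from web interactions."""
    profile = {}
    for ip, topic in events:
        company = company_for(ip)
        if company:
            profile.setdefault(company, Counter())[topic] += 1
    return profile

events = [("198.51.100.7", "crm"), ("198.51.100.9", "crm"),
          ("198.51.100.7", "analytics"), ("203.0.113.5", "security")]
print(research_profile(events)["Acme Corp"].most_common(1))  # [('crm', 2)]
```

Note the aggregation happens per company, never per individual, which matches the no-de-anonymization point above.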
So, tell us a little more about natural language processing. I think where most people have become familiar with it is the B2C capabilities of the big internet giants, where they're trying to understand all language. You have a more well-scoped problem; tell us how that changes your approach.
>> A lot of really exciting things are happening in natural language processing research in general, and right now it's being measured against this yardstick of: can it understand language as well as a human can? Obviously we're not there yet, but that doesn't necessarily mean you can't derive a lot of meaningful insights from it. The way we're able to do that is, instead of trying to understand all of human language, let's understand the very specific language associated with the things we're trying to learn. We're a B2B marketing company, so it's very important to us to understand which companies are investing in other companies, which companies are buying from other companies, which companies are suing other companies. So if we say, okay, we only want to be able to infer a competitive relationship between two businesses from an actual document, that becomes a much more solvable and manageable problem, as opposed to understanding all of human language. We actually started off with some open source solutions, and some proprietary solutions we paid for, and they didn't work because their scope was that broad. So we said, okay, we can do better by focusing on the types of insights we're trying to learn, and then working backwards from them.
>> So tell us, how much of the algorithms that we would call building blocks for what you're doing, and others, how much of those are published or open source, and then how much is your secret sauce? Because we talk about data being a key part of the secret sauce; what about the algorithms?
>> I mean, yeah, you can treat the algorithms as tools, but you know, a bag of tools a product does not make, right? So our secret sauce becomes how we use these tools, how we deploy them, and the datasets we apply them to. As mentioned before, we're not trying to understand all of human language; actually, the exact opposite. We have a single machine learning algorithm, and all it does is learn to recognize when Amazon, the company, is being mentioned in a document. If you see the word Amazon, is it talking about the river, or is it talking about the company? So we have a classifier that fires whenever Amazon the company is being mentioned in a document. And that's a much easier problem to solve than understanding everything, than Siri, basically.
>> Okay. I still get rather irritated with Siri. So let's talk about, broadly, this topic that sort of everyone lays claim to as their great higher calling, which is democratizing machine learning and AI, and opening it up to a much greater audience. Help set some context, just the way you did by saying, "Hey, if we narrow the scope of a problem, it's easier to solve." What are some of the different approaches people are taking to that problem, and what are their sweet spots?
>> Right. So the talk of the data science community, the machine learning community right now, is some of the work coming out of DeepMind, a subsidiary of Google. They built AlphaGo, which solved a strategy game we thought we were decades away from actually solving, and their approach of restricting the problem to a game with well-defined rules and a limited scope, I think that's how they were able to propel the field forward so significantly.
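The narrow Amazon classifier Seth describes earlier in this exchange can be caricatured with a context-word heuristic. A real system would learn its features; the cue lists here are invented purely for illustration:

```python
# Toy stand-in for a narrow entity-disambiguation classifier: decide whether
# "Amazon" in a sentence refers to the company, using surrounding context words.
COMPANY_CUES = {"aws", "retail", "shares", "acquired", "cloud", "prime"}
RIVER_CUES = {"river", "rainforest", "brazil", "basin", "jungle"}

def mentions_amazon_company(sentence):
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    if "amazon" not in words:
        return False
    company = len(words & COMPANY_CUES)
    river = len(words & RIVER_CUES)
    return company > river  # fire only when company context dominates

print(mentions_amazon_company("Amazon acquired the cloud startup"))    # True
print(mentions_amazon_company("The Amazon river flows through Brazil"))  # False
```

The point is the scoping: one binary decision about one surface form, rather than full language understanding.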
They started off by playing Atari games, then moved to long-term strategy games, and now they're doing video strategy games, and I think the idea of narrowing the scope to well-defined rules and well-defined limited settings is how they're actually able to advance the field.
>> Let me ask just about playing the video games. I can't remember... Star...
>> Starcraft.
>> Starcraft. Would you call that, like, where the video game is a model, and you're training a model against that other model, so it's almost like they're interacting with each other?
>> Right. It really comes down to, you can think of it as pulling levers. You have a very complex machine, and there are certain levers you can pull, and the machine will respond in different ways. If you're trying to, for example, build a robot that can walk around a factory and pick out boxes, then how you move each joint, where you look, all the different things you can see and sense, those are all levers to pull, and that gets very complicated very quickly. But if you narrow it down to, okay, there are certain places on the screen I can click, certain things I can do, certain inputs I can provide in the video game, you basically limit the number of levers, and then optimizing and learning how to work those levers is a much more scoped and reasonable problem, as opposed to learning everything all at once.
>> Okay, that's interesting. Now let me switch gears a little bit. We've done a lot of work at Wikibon about IoT and increasingly edge-based intelligence, because you can't go back to the cloud for your analytics for everything. But one of the things that's becoming apparent is that it's not just the training that might go on in a cloud; there might be simulations, and then the sort of low-latency response is based on a model that's at the edge. Help elaborate on where that applies and how that works.
>> Well, in general, when you're working with machine learning, in almost every situation training the model is the data-intensive process that requires a lot of extensive computation, and that's something that makes sense to have localized in a single location where you can leverage resources and optimize it. Then you can say, alright, now that I have this trained model that understands the problem, it becomes a much simpler endeavor to put it as close to the device as possible. So that's how they're able to say, okay, let's take this really complicated billion-parameter neural network that took days and weeks to train, and actually derive insights right at the device level. With recent technology, though, like the deep learning I mentioned, just deploying the model creates new challenges as well, to the point that Google actually invented a new type of chip just to run...
>> The tensor processing...
>> Yeah, the TPU, the tensor processing unit, just to handle machine learning algorithms now so sophisticated that even deploying them after they've been trained is still a challenge.
>> Is there a difference in the hardware that you need for training versus inferencing?
>> They initially deployed the TPU just for the sake of inference. In general, when you're building a neural network, there's one type of mathematical operation you do a whole bunch, and it's based on working with matrices. That's the case with training as well as with inference, where you're actually querying the model. So if you can solve that one mathematical operation well, you can deploy it everywhere.
>> Okay.
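The train-centrally, infer-at-the-edge split Seth describes comes down to the forward pass being cheap: essentially a couple of matrix-vector products, which is exactly the operation TPU-like accelerators are built around. A minimal sketch, with toy pre-trained weights invented for illustration:

```python
# The expensive part is fitting the weights in the data center; once trained,
# inference is just matrix-vector products that can run on a tiny device.
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

class EdgeModel:
    """Weights arrive pre-trained from the data center; the device only
    runs the forward pass."""
    def __init__(self, W1, W2):
        self.W1, self.W2 = W1, W2

    def predict(self, x):
        return matvec(self.W2, relu(matvec(self.W1, x)))

# Toy pre-trained weights (invented for illustration).
model = EdgeModel(W1=[[1.0, -1.0], [0.5, 0.5]], W2=[[1.0, 2.0]])
print(model.predict([3.0, 1.0]))  # [6.0]
```

Training, by contrast, needs gradients through those same matrix operations over huge datasets, which is why it stays centralized.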
So, one of our CTOs was talking about how, in his view, what's going to happen in the cloud is richer and richer simulations, and, as you say, querying the model, getting an answer in real time or near real time, is out on the edge. What exactly is the role of the simulation? Is that just a model that understands time, and not just time, but the many multiple parameters it's playing with?
>> Right. Simulations are particularly important in, taking us back to reinforcement learning, settings where you have many decisions to make before you actually see some desirable or undesirable outcome. For example, the way AlphaGo trained itself was basically by running simulations of the game being played against itself, and what those simulations are really doing is allowing the artificial intelligence to explore the entire space of possible games.
>> Sort of like WarGames, if you remember that movie.
>> Yes, with...
>> Matthew Broderick. And it actually showed all the war game scenarios on the screen, and then figured out you couldn't really win.
>> Right, yes, it's a similar idea. For example, in Go there are more board configurations than there are atoms in the observable universe. The way Deep Blue won chess was basically to explore, more or less, the vast majority of chess moves. That's really not an option here; you can't play that same strategy with AlphaGo. So this constant simulation is how it explored the meaningful game configurations it needed to win.
>> So in other words, they scoped it down, so the problem space was smaller.
>> Right. And in fact, AlphaGo was really two different artificial intelligences working together: one that decided which solutions to explore, which possibilities to pursue more and which ones to ignore, and a second piece that said, okay, given a certain board configuration, what's the likely outcome?
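That two-model division of labor can be cartooned as a policy that prunes the move list and a value function that scores what survives. Both "networks" below are invented stand-in functions, not anything from AlphaGo itself:

```python
# A cartoon of the policy/value split: the policy narrows which moves to
# explore, and the value function evaluates the resulting positions.
def prior(state, move):
    return (state + move) % 7 / 7.0   # stand-in for a learned policy net

def value(state):
    return (state % 5) / 5.0          # stand-in for a learned value net

def policy(state, moves, top_k=2):
    """Narrow the search: keep only the k most promising moves."""
    return sorted(moves, key=lambda m: prior(state, m), reverse=True)[:top_k]

def choose_move(state, moves):
    # Explore only the policy's shortlist; pick the move whose resulting
    # position the value function likes best.
    shortlist = policy(state, moves)
    return max(shortlist, key=lambda m: value(state + m))

print(choose_move(10, [1, 2, 3, 4]))  # 3
```

The pruning is what makes the search tractable: the value function is only ever asked about positions the policy considered worth reaching.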
And so those two working in concert, one that narrows and focuses, and one that comes up with the answer given that focus, is how it was actually able to work so well.
>> Okay. Seth, on that note, that was a very, very enlightening 20 minutes.
>> Okay, I'm glad to hear that.
>> We'll have to come back and get an update from you soon.
>> Alright, absolutely.
>> This is George Gilbert. I'm with Seth Myers, Senior Data Scientist at Demandbase, a company I expect we'll be hearing a lot more about. We're on the ground, and we'll be back shortly.
Wesley Kerr, Riot Games - #SparkSummit - #theCUBE
>> Announcer: Live from San Francisco, it's theCUBE, covering Spark Summit 2017. Brought to you by Databricks.
>> We're getting close to the end of the day here at Spark Summit, but we saved the best for last, I think. I'm pretty sure about that. I'm David Goad, your host here on theCUBE, and we now have a data scientist from Riot Games, yes, Riot Games. His name is Wesley Kerr. Wesley, thanks for joining us.
>> Thanks for having me.
>> What's the best money-making game at Riot Games?
>> Well, we only have one game. We're known for League of Legends. It came out in 2009, and it has been growing and well received by our fans since then.
>> And what's your role there? It says data scientist, but what do you really do?
>> We build models to look at things like in-game behavior. We build models to help players engage with our store and buy our content. We look at different ways we can improve our player experience.
>> Alright, well, let's talk a little more under the hood here. How are you deploying Spark in the game?
>> We rely on Databricks for all of our deployment. We have many different clusters, and about 14 data scientists who work with us; each one is able to manage their own clusters, spin them up, tear them down, find their data that way, and work with it through Databricks.
>> So what else will you cover? You had a keynote session this morning, right?
>> Yep.
>> Give a recap for theCUBE audience of what you talked about.
>> We talked about our efforts in player behavior, where we build and deploy models that watch chat between players. We evaluate whether or not players are being unsportsmanlike, and come up with ways to help them curb that behavior and be more sportsmanlike in our game.
>> Oh wow, unsportsmanlike. How do you define that? Is it if people are being abusive?
>> What we saw was that about one to two percent of our games contain some form of serious abuse, and that comes in the form of hate speech, racism, sexism, things that have no place in the game, and so we want players to realize that that language is bad and they shouldn't be using it.
>> Is it all keyword driven, or are there other behaviors that can indicate it?
>> Right now it's purely based on things said in chat, but we're currently investigating other ways of measuring that behavior, how it occurs in game, and how it could influence what people are saying.
>> Maybe like tweets coming from The White House? (laughing)
>> Okay, so, George.
>> We should be able to measure that as well.
>> So how about those Warriors? (laughing)
>> No, George, did you want to talk a little bit more...
>> Sure.
>> David: ...about the technical achievements here? When you look at trying to measure engagement, and maybe converting high engagement into store purchases, tell us a little more about how that works.
>> Our game is completely free to play. Players can download it, play it all the way through, and we really try to create a very engaging game that they want to come back to and play, and everything they can buy in the store is actually just cosmetic. So we really hope to build content that our players love and are happy to spend money on. We just really want engagement to come from players coming back, playing, and having a good time; it's less about converting high engagement into monetization, as we've seen that players who are happy and loving the game are happy to spend their money.
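Returning to the chat screening discussed above: a deliberately oversimplified sketch would flag messages by blocklist overlap. The real system described is model-based and far more nuanced; the placeholder word list below is invented:

```python
# Flag a chat message when its tokens overlap a blocklist. A real system
# would use a learned classifier; this only illustrates the flagging shape.
BLOCKLIST = {"slurword1", "slurword2", "tauntword"}  # placeholder tokens

def flag_message(message, threshold=1):
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    return len(tokens & BLOCKLIST) >= threshold

print(flag_message("gg wp everyone"))             # False
print(flag_message("you are such a tauntword!"))  # True
```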
>> So tell us more about how you build some of these models. Not turning it into Spark code, but how do you analyze it, and what's the database mechanism, since the storage layer in Spark is just the file system?
>> Sure, yeah, absolutely. So we are a worldwide game, played by over 100 million players around the world.
>> David: Wow.
>> That data comes flowing in from all around the world into our centralized data warehouse. The data warehouse has gameplay data, so we know how you did in game. It also has time-series events, so things that occurred in each game. Our game is really session based: players can come play for an hour, that's one game, and then they leave and come back and play again. So we're able to look at those models and how they did, and I'll give you an example around our content recommendations. We look at the champions you've been playing recently to predict which champions you're likely to play next. For that we can just query the database, start building our collaborative filtering models on top of it, and then recommend champions you may not play now but may be interested in playing, or we may decide to give you a special discount on a champion if we think it'll resonate well with you.
>> And in this case, just to be clear, the champions you're talking about are other players, not models?
>> It's actually the in-game avatar, so it's the champion that they play. We have 130 unique champions, and in each game you choose which champion you want to play. It's much more like a sport than a game: five v five, online, competitive. There are different objectives on the map, and you work with your team to complete those objectives and beat the other team. We like to think of it like basketball, but with magic and in a virtual world.
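The collaborative filtering recommendation described above can be sketched with player-to-player similarity over play histories. The player data, champion names, and similarity weighting below are invented for illustration; a production system would use something like ALS over a far larger matrix:

```python
# User-based collaborative filtering: score unplayed champions by how much
# similar players (by cosine similarity of play histories) play them.
from collections import defaultdict
from math import sqrt

play_history = {  # player -> {champion: games played} (invented data)
    "p1": {"Leona": 20, "Braum": 15},
    "p2": {"Leona": 30, "Braum": 10, "Thresh": 5},
    "p3": {"Ashe": 25, "Jinx": 12},
}

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(player, k=1):
    me = play_history[player]
    scores = defaultdict(float)
    for other, theirs in play_history.items():
        if other == player:
            continue
        sim = cosine(me, theirs)
        for champ, games in theirs.items():
            if champ not in me:
                scores[champ] += sim * games
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("p1"))  # ['Thresh']
```

Here p1's look-alike p2 plays Thresh, so Thresh is recommended; the unrelated ADC player p3 contributes nothing.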
>> And do the teams stay together? Or are they constantly recombining?
>> They can disband, yeah. Your next game may find nine other people, but if you're playing with your friends, you can just keep queuing up with them as well. So the champions they control happen to be who you're playing in that game.
>> And when you're trying to anticipate champions that someone might play in the future, what are the variables you're trying to guess, and how long did it take you to build those models?
>> Yeah, it's a good question. Right now we're able to leverage the power of our players, and we have 100 million of them. In our game there are roles; for instance, like there's a center in basketball, we have a bot lane, so a bottom-lane support and a bottom-lane ADC. A support character is there to make sure your ADC can defeat the other team. If you play a lot of support, odds are there are other players in the world who play a lot of support too, so we find similar players, players who engaged with the same sorts of champions that you play. For instance, I'm a Leona main, so I play her a lot. If we look at what other people played in addition to Leona, it could be a champion like Braum, so we would recommend Braum as a champion to try out if you've not played him yet.
>> David: Okay.
>> So then what's the data warehouse you guys use as the ultimate repository for all this?
>> All the data flows into a Hive data warehouse stored in S3. We have two different ways of interacting with it: we can run queries against Hive, which tends to be a bit slower for our use cases, and our data scientists tend to access all that data through Databricks and Spark, which runs much quicker for our use cases.
>> Do you take what's in S3 and put it into a Parquet format to accelerate?
>> Sometimes, so we do some of those rewrites.
We do a lot of our secondary ETLs, where we're joining across multiple tables and writing back out. We'll optimize those for our Spark use cases: read from S3, do some transformations, write back to S3.
>> And how latency-sensitive is this? Are you trying to make decisions as the player moves along in his level?
>> Historically we've been batch; our recommendations are updated weekly, so we haven't needed a much higher cadence. But we're moving to a point where I want to see us make recommendations on the client, and do it immediately: after you've finished a game with, say, Leona, here's an offer for Braum. Go check it out, give it a try in your next game.
>> So Wesley, what would you like to see developed that hasn't been developed yet, that would really help in your business specifically?
>> Something that's really exciting for gaming right now is procedural generation and artificial intelligence. There are a lot of opportunities here. You've seen some collaborations between DeepMind and Blizzard, where they're learning to play Starcraft. For me, I think there's a similar world where we have a game that has different sorts of mechanics: we have a large social piece to our game, and teamwork is required. Understanding how we can leverage that, and help influence the future of artificial intelligence, is something I want to see us be able to do.
>> Did you talk with anybody here at the Spark Summit about that?
>> Anyone who would listen. (laughing) We chatted some with the teams at Blizzard and Twitch about some of the things they're doing for natural language as well.
>> Alright, so what was the most useful conversation you had here at the summit?
>> The most useful one that I had, I think, was with the Databricks team. At the end of my keynote, kind of serendipitously, I was talking about some work we had done with deep learning, doing hyperparameter searches over our worker nodes, so actually being able to quickly try out many different models. And in the announcements that morning before my keynote, Tim talked about how they actually have deep learning pipelines now. It was based on a conversation we had had, so I was very excited to see it come to fruition, and now it's open source and we can leverage it.
>> Awesome. Well, we're up against a hard break here.
>> Wesley: Okay.
>> We're almost at the end of the day. Wesley, it's been a riot talking to you. We really appreciate it, and thank you for coming on the show and sharing your knowledge.
>> Wesley: You bet, thanks for having me.
>> Alright, and that's it. We're going to wrap it up today; we have a wrap-up coming up, as a matter of fact, in just a few minutes. My name is David Goad. You're watching theCUBE at Spark Summit. (upbeat music)