
Seth Myers, Demandbase | George Gilbert at HQ


 

>> This is George Gilbert, we're on the ground at Demandbase, the B2B CRM company based on AI, a very special company that's got some really unique technology. We have the privilege to be with Seth Myers today, Senior Data Scientist and resident wizard, who's going to take us on a journey through some of the technology Demandbase is built on, and some of the technology coming down the road. So Seth, welcome. >> Thank you very much for having me. >> So, we talked earlier with Aman Naimat, Senior VP of Technology, and we talked about some of the functionality in Demandbase, and how it's very flexible, and reactive, and adaptive in helping guide, or react to, a customer's journey through the buying process. Tell us about what that journey might look like, how it's different, and the touchpoints, and the participants, and then how your technology rationalizes that, because we know old CRM packages were really just lists of contact points. So this is something very different. How's it work? >> Yeah, absolutely, so at the highest level, each customer's going to be different, each customer's going to make decisions and look at different marketing collateral, and respond to different marketing collateral in different ways. You know, as companies get bigger, and the products they're offering become more sophisticated, that's certainly the case, and also, sales cycles take a long time. You're engaged with an opportunity over many months, and so there's a lot of touchpoints, there's a lot of planning that has to be done, so that actually offers a huge opportunity to be solved with AI, especially in light of recent developments in this thing called reinforcement learning. Reinforcement learning is basically machine learning that can think strategically; it can actually plan ahead in a series of decisions, and it's actually the technology behind AlphaGo, which is the Google technology that beat the best Go players in the world.
And what we basically do is we say, "Okay, if we understand you're a customer, we understand the company you work at, we understand the things they've been researching elsewhere on third party sites, then we can actually start to predict content they will be likely to engage with." But more importantly, we can start to predict content they're more likely to engage with next, and after that, and after that, and after that, and so what our technology does is it looks at all possible paths that your potential customer can take, all the different content you could ever suggest to them, all the different routes they could take, and it looks at the ones that they're likely to follow, but also the ones likely to turn them into an opportunity. And so, in the same way Google Maps considers all possible routes to get you from your office to home, we do the same, and we choose the one that's most likely to convert the opportunity, the same way Google chooses the quickest road home. >> Okay, that's a great example, because people can picture that, but how do you know what's the best path, is it based on learning from previous journeys from customers? >> Yes. >> And then, if you make a wrong guess, you sort of penalize the engine and say, "Pick the next best, what you thought was the next best path." >> Absolutely, so the nuts and bolts of how it works is we start working with our clients, and they have all this data of different customers, and how they've engaged with different pieces of content throughout their journey, and so the machine learning model, what it's really doing at any moment in time, given any customer in any stage of the opportunity that they find themselves in, it says, what piece of content are they likely to engage with next, and that's based on historical training data, if you will.
And then once we make that decision on a step-by-step basis, then we kind of extrapolate, and we basically say, "Okay, if we showed them this page, or if they engage with this material, what would that do, what situation would we find them in at the next step, and then what would we recommend from there, and then from there, and then from there," and so it's really kind of learning the right move to make at each time, and then extrapolating that all the way to the opportunity being closed. >> The picture that's in my mind is like Deep Blue, I think it was chess, where it would map out all the potential moves. >> Very similar, yeah. >> To the end game. >> Very similar idea. >> So, what about if you're trying to engage with a customer across different channels, and it's not just web content? How is that done? >> Well, that's something that we're very excited about, and that's something that we're currently really starting to devote resources to. Right now, we already have a product live that's focused on web content specifically, but yeah, we're working on kind of a multi-channel type solution, and we're all pretty excited about it. >> Okay so, obviously you can't talk too much about it. Can you tell us what channels that might touch? >> I might have to play my cards a little close to my chest on this one, but I'll just say we're excited. >> Alright. Well I guess that means I'll have to come back. >> Please, please. >> So, tell us about the personalized conversations. Is the conversation just another way of saying, this is how we're personalizing the journey? Or is there more to it than that? >> Yeah, it really is about personalizing the journey, right?
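The "Google Maps for content" idea in this exchange can be sketched concretely: given transition probabilities between pieces of content, as might be estimated from historical journeys, search for the path most likely to end in a closed opportunity. This is a minimal toy model; the content names and probabilities are invented for illustration and are not Demandbase's actual model.

```python
# Toy next-best-content planner: exhaustively search the content graph for
# the journey with the highest probability of converting the opportunity.
CONVERT = "opportunity_closed"

# P(next step | current content), as might be estimated from past journeys.
transitions = {
    "whitepaper": {"webinar": 0.5, "case_study": 0.3, CONVERT: 0.05},
    "webinar":    {"case_study": 0.4, "demo": 0.4},
    "case_study": {"demo": 0.6, CONVERT: 0.1},
    "demo":       {CONVERT: 0.5},
}

def best_path(content, seen=()):
    """Return (conversion probability, path) for the best journey from here."""
    if content == CONVERT:
        return 1.0, [CONVERT]
    best = (0.0, [content])
    for nxt, p in transitions.get(content, {}).items():
        if nxt in seen:                    # don't suggest the same content twice
            continue
        sub_p, sub_path = best_path(nxt, seen + (content,))
        if p * sub_p > best[0]:
            best = (p * sub_p, [content] + sub_path)
    return best

prob, path = best_path("whitepaper")
```

With these made-up numbers, the planner prefers the whitepaper, webinar, demo route, the same way a mapping service prefers the fastest road even when a shorter-looking one exists.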
Like you know, a lot of our clients now have a lot of sophisticated marketing collateral, and a lot of time and energy has gone into developing content that different people find engaging, that kind of positions products towards pain points, and all that stuff, and so really there's so much low-hanging fruit by just organizing and leveraging all of this material, and actually forming the conversation through a series of journeys through that material. >> Okay, so, Aman was telling us earlier that we have so many sort of algorithms, they're all open source, or they're all published, and they're only as good as the data you can apply them to. So, tell us, where do companies, startups, you know, not the Googles, Microsofts, Amazons, where do they get their proprietary information? Is it that you have algorithms that now are so advanced that you can refine raw information into proprietary information that others don't have? >> Really I think it comes down to, our competitive advantage I think is largely in the source of our data, and so, yes, you can build more and more sophisticated algorithms, but again, you're starting with a public data set, you'll be able to derive some insights, but there will always be a path to those datasets for, say, a competitor. For example, we're currently tracking about 700 billion web interactions a year, and then we're also able to attribute those web interactions to companies, meaning the employees at those companies involved in those web interactions, and so that's able to give us an insight that no amount of public data or processing would ever really be able to achieve. >> How do you, Aman started to talk to us about how, like there were DNS, reverse DNS registries. >> Reverse IP lookups, yes. >> Yeah, so how are those, if they're individuals within companies, and then the companies themselves, how do you identify them reliably? 
>> Right, so reverse IP lookup is, we've been doing this for years now, and so we've kind of developed a multi-source solution, so reverse IP lookups is a big one. Also machine learning, you can look at traffic coming from an IP address, and you can start to make some very informed decisions about what the IP address is actually doing, who they are, and so if you're looking at, at the account level, which is what we're tracking at, there's a lot of information to be gleaned from that kind of information. >> Sort of the way, and this may be a weird-sounding analogy, but the way a virus or some piece of malware has a signature in terms of its behavior, you find signatures in terms of users associated with an IP address. >> And we certainly don't de-anonymize individual users, but if we're looking at things at the account level, then you know, the bigger the data, the more signal you can infer, and so if we're looking at a company-wide usage of an IP address, then you can start to make some very educated guesses as to who that company is, the things that they're researching, what they're in market for, that type of thing. >> And how do you find out, if they're not coming to your site, and they're not coming to one of your customer's sites, how do you find out what they're touching? >> Right, I mean, I can't really go into too much detail, but a lot of it comes from working with publishers, and a lot of this data is just raw, and it's only because we can identify the companies behind these IP addresses, that we're able to actually turn these web interactions into insights about specific companies. >> George: Sort of like how advertisers or publishers would track visitors across many, many sites, by having agreements. >> Yes. Along those lines, yeah. >> Okay. 
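A toy version of the account-level attribution idea described here: map raw web interactions to companies through an IP-to-company table (standing in for reverse IP lookup and the other signals mentioned), then aggregate per company to see what each account appears to be researching. The IPs, companies, and topics below are all invented for the example.

```python
# Attribute web interactions to companies and aggregate research interests.
from collections import Counter, defaultdict

# Stand-in for reverse IP lookup results (hypothetical data).
ip_to_company = {
    "203.0.113.7": "Acme Corp",
    "203.0.113.9": "Acme Corp",
    "198.51.100.2": "Globex",
}

interactions = [
    ("203.0.113.7", "crm-software"),
    ("203.0.113.9", "crm-software"),
    ("203.0.113.7", "marketing-automation"),
    ("198.51.100.2", "erp-systems"),
    ("192.0.2.55", "cat-videos"),      # unknown IP: not attributable, dropped
]

def research_by_account(rows):
    """Count topics per company, keeping only attributable traffic."""
    per_company = defaultdict(Counter)
    for ip, topic in rows:
        company = ip_to_company.get(ip)
        if company:
            per_company[company][topic] += 1
    return per_company

profile = research_by_account(interactions)
```

Note the aggregation works at the account level only: individual users are never identified, which matches the point made above about inferring signal from company-wide usage.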
So, tell us a little more about natural language processing. I think where most people have become familiar with it is with the B2C capabilities, with the big internet giants, where they're trying to understand all language. You have a more well-scoped problem, tell us how that changes your approach. >> So a lot of really exciting things are happening in natural language processing in general, and in the research, and right now in general, it's being measured against this yardstick of, can it understand language as well as a human can. Obviously we're not there yet, but that doesn't necessarily mean you can't derive a lot of meaningful insights from it, and the way we're able to do that is, instead of trying to understand all of human language, let's understand very specific language associated with the things that we're trying to learn. So obviously we're a B2B marketing company, so it's very important to us to understand what companies are investing in other companies, what companies are buying from other companies, what companies are suing other companies, and so if we say, okay, we only want to be able to infer a competitive relationship between two businesses in an actual document, that becomes a much more solvable and manageable problem, as opposed to, let's understand all of human language. And so we actually started off with these kind of open source solutions, and with some proprietary solutions that we paid for, and they didn't work because their scope was too broad, and so we said, okay, we can do better by just focusing in on the types of insights we're trying to learn, and then working backwards from them. >> So tell us, how much of the algorithms that we would call building blocks for what you're doing, and others, how much of those are published or open source, and then how much is your secret sauce? Because we talk about data being a key part of the secret sauce, what about the algorithms?
>> I mean yeah, you can treat the algorithms as tools, but you know, a bag of tools a product does not make, right? So our secret sauce becomes how we use these tools, how we deploy them, and the datasets we apply them against. So as mentioned before, we're not trying to understand all of human language, actually the exact opposite. So we actually have a single machine learning algorithm that all it does is learn to recognize when Amazon, the company, is being mentioned in a document. So if you see the word Amazon, is it talking about the river, is it talking about the company? We have a classifier that all it does is fire whenever Amazon the company is being mentioned in a document. And that's a much easier problem to solve than understanding everything, than Siri, basically. >> Okay. I still get rather irritated with Siri. So let's talk about, broadly, this topic that sort of everyone lays claim to as their great higher calling, which is democratizing machine learning and AI, and opening it up to a much greater audience. Help set some context, just the way you did by saying, "Hey, if we narrow the scope of a problem, it's easier to solve." What are some of the different approaches people are taking to that problem, and what are their sweet spots? >> Right, so the talk of the data science community right now is some of the work that's coming out of DeepMind, which is a subsidiary of Google; they just built AlphaGo, which solved a strategy game that we thought we were decades away from actually solving, and their approach of restricting the problem to a game, with well-defined rules, with a limited scope, I think that's how they're able to propel the field forward so significantly.
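A deliberately narrow classifier in the spirit of what's described above: decide whether a mention of "Amazon" refers to the company rather than the river, using nearby context words. A real system would learn these weights from data; the hand-picked cue lists here are purely illustrative.

```python
# Minimal word-sense disambiguator for "Amazon" mentions.
COMPANY_CUES = {"aws", "retail", "acquisition", "stock", "ceo", "cloud"}
RIVER_CUES = {"river", "rainforest", "brazil", "basin", "jungle"}

def mentions_amazon_company(text):
    """True if the text mentions Amazon and context leans toward the company."""
    words = {w.strip(".,").lower() for w in text.split()}
    if "amazon" not in words:
        return False
    company = len(words & COMPANY_CUES)
    river = len(words & RIVER_CUES)
    return company > river

a = mentions_amazon_company("Amazon stock rose after the AWS cloud results.")
b = mentions_amazon_company("The Amazon river flows through the rainforest in Brazil.")
```

The point of the sketch is the scoping: one entity, one binary decision, which is a far more tractable problem than open-ended language understanding.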
They started off by playing Atari games, then they moved to long-term strategy games, and now they're doing video strategy games, and I think the idea of, again, narrowing the scope to well-defined rules and well-defined limited settings is how they're actually able to advance the field. >> Let me ask just about playing the video games. I can't remember Star... >> Starcraft. >> Starcraft. Would you call that, like, where the video game is a model, and you're training a model against that other model, so it's almost like they're interacting with each other. >> Right, so it really comes down, you can think of it as pulling levers, so you have a very complex machine, and there's certain levers you can pull, and the machine will respond in different ways. If you're trying to, for example, build a robot that can walk around a factory and pick out boxes, how you move each joint, where you look, all the different things you can see and sense, those are all levers to pull, and that gets very complicated very quickly, but if you narrow it down to, okay, there's certain places on the screen I can click, there's certain things I can do, there's certain inputs I can provide in the video game, you basically limit the number of levers, and then optimizing and learning how to work those levers is a much more scoped and reasonable problem, as opposed to learning everything all at once. >> Okay, that's interesting, now, let me switch gears a little bit. We've done a lot of work at Wikibon about IoT and increasingly edge-based intelligence, because you can't go back to the cloud for your analytics for everything, but one of the things that's becoming apparent is, it's not just the training that might go on in a cloud, but there might be simulations, and then the sort of low-latency response is based on a model that's at the edge. Help elaborate where that applies and how that works.
>> Well in general, when you're working with machine learning, in almost every situation, training the model is the data-intensive process that requires a lot of extensive computation, and that's something that makes sense to have localized in a single location where you can leverage resources and optimize it. Then you can say, alright, now that I have this trained model that understands the problem, it becomes a much simpler endeavor to basically put that as close to the device as possible. And so that really is how they're able to say, okay, let's take this really complicated billion-parameter neural network that took days and weeks to train, and let's actually derive insights right at the device level. Recent technology though, like the deep learning I mentioned, just deploying the technology creates new challenges as well, to the point that Google actually invented a new type of chip just to run... >> The tensor processing. >> Yeah, the TPU. The tensor processing unit, just to handle what is now a machine learning algorithm so sophisticated that even deploying it after it's been trained is still a challenge. >> Is there a difference in the hardware that you need for training vs. inferencing? >> So they initially deployed the TPU just for the sake of inference. In general, the way it actually works is that, when you're building a neural network, there's one type of mathematical operation you do a whole bunch, and it's based on the idea of working with matrices, and that's still absolutely the case with training as well as inference, where you're actually querying the model, so if you can solve that one mathematical operation, then you can deploy it everywhere. >> Okay.
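The point about one dominant mathematical operation can be illustrated with a single dense layer in plain Python: the forward (inference) pass is a matrix multiplication, and the core of the training pass, the gradient computation, is a matrix multiplication too. Shapes and numbers below are arbitrary.

```python
# One operation underlies both inference and training: matrix multiplication.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

W = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]        # weights: 2 inputs -> 3 units
x = [[3.0, 4.0]]             # one input row

# Inference is a matmul...
y = matmul(x, W)

# ...and so is the core of training: for a loss L, dL/dW = x^T @ dL/dy.
dy = [[1.0, 1.0, 1.0]]       # pretend upstream gradient
dW = matmul(transpose(x), dy)
```

This is why hardware that accelerates that one operation, like the TPU mentioned above, serves both sides: speed up matrix multiplication and you have sped up training and inference alike.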
So, one of our CTOs was talking about how, in his view, what's going to happen in the cloud is richer and richer simulations, and as you say, querying the model, getting an answer in realtime or near realtime, is out on the edge. What exactly is the role of the simulation? Is that just a model that understands time, and not just time, but many multiple parameters that it's playing with? >> Right, so simulations are particularly important, taking us back to reinforcement learning, where you basically have many decisions to make before you actually see some sort of desirable or undesirable outcome, and so, for example, the way AlphaGo trained itself is basically by running simulations of the game being played against itself, and really what those simulations are doing is allowing the artificial intelligence to explore the entire space of possible games. >> Sort of like WarGames, if you remember that movie. >> Yes, with uh... >> Matthew Broderick, and it actually showed all the war game scenarios on the screen, and then figured out, you couldn't really win. >> Right, yes, it's a similar idea where, for example in Go, there's more board configurations than there are atoms in the observable universe, and so the way Deep Blue won chess was basically to more or less explore the vast majority of chess moves; that's really not an option here, you can't really play that same strategy with AlphaGo, and so this constant simulation is how it explored the meaningful game configurations that it needed to win. >> So in other words, they were scoped down, so the problem space was smaller. >> Right, and in fact, AlphaGo was really kind of two different artificial intelligences working together, one that decided which solutions to explore, like which possibilities it should pursue more, and which ones to ignore, and then the second piece was, okay, given a certain board configuration, what's the likely outcome?
And so those two working in concert, one that narrows and focuses, and one that comes up with the answer, given that focus, is how it was actually able to work so well. >> Okay. Seth, on that note, that was a very, very enlightening 20 minutes. >> Okay. I'm glad to hear that. >> We'll have to come back and get an update from you soon. >> Alright, absolutely. >> This is George Gilbert, I'm with Seth Myers, Senior Data Scientist at Demandbase, a company I expect we'll be hearing a lot more about, and we're on the ground, and we'll be back shortly.
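The two-part design described in this closing exchange, one model that narrows which moves to explore (a policy) and one that judges a position (a value function), can be sketched as a pruned look-ahead search. The "game" here (moving a number toward a target) and both scoring functions are invented stand-ins, not AlphaGo's actual networks.

```python
# Pruned look-ahead: a policy narrows the search, a value function judges.

def policy(state):
    """Score each legal move (add 1, 2, or 3); higher = more worth exploring."""
    return {m: 1.0 / (1 + abs(10 - (state + m))) for m in (1, 2, 3)}

def value(state):
    """Estimated outcome of a state: closeness to a target of 10."""
    return -abs(10 - state)

def best_move(state, top_k=2, depth=2):
    """Look ahead `depth` plies, but only expand the top_k moves by policy."""
    if depth == 0:
        return None, value(state)
    scores = policy(state)
    moves = sorted(scores, key=scores.get, reverse=True)[:top_k]
    scored = []
    for m in moves:
        _, v = best_move(state + m, top_k, depth - 1)
        scored.append((v, m))
    v, m = max(scored)
    return m, v

move, outcome = best_move(5)   # from state 5, plan two moves toward 10
```

The policy keeps the tree small (only `top_k` branches per level), which is the "narrow and focus" half; the value function supplies the "given this configuration, what's the likely outcome" half.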

Published Date : Nov 2 2017



Sean Convery, ServiceNow - ServiceNow Knowledge 17 - #know17 - #theCUBE


 

>> Announcer: Live from Orlando, Florida, it's the Cube. Covering ServiceNow Knowledge 17. Brought to you by ServiceNow. >> Welcome back to Orlando everybody, this is the Cube, the leader in live tech coverage. We go out to the events, we extract the signal from the noise, and we are here for our fifth year at Knowledge, this is Knowledge 17. Sean Convery's here, he's the general manager of the security business unit at ServiceNow, an area that I'm very excited about, Sean. Welcome back to the Cube, it's good to see you again. >> It's great to be here, thanks for having me. >> So let's see, you guys launched last year at RSA, and we talked in depth at ServiceNow Knowledge about what you guys were doing. You quoted a stat the other day at the financial analyst meeting which I thought was pretty substantial: a 1.1 million job shortfall in cyber. That is huge. That's the problem that you're trying to address. >> Well it's unbelievable, I was- you know, we were just doing the keynote earlier this morning and I was recounting, most people in security get in it because they have some, you know, desire to save the world right? They watched a movie, they read a book, they're really excited and motivated to come in- >> What was yours, was it a comic book, was it- >> It was, uh, War Games with Matthew Broderick. I was 10 years old, which totally dates me, the movie came out in '83 so nobody has to look it up. (laughing) And you know, I was just blown away by this idea of using technology and being able to change things, and the trouble is analysts show up to work and they don't have that experience, and they're not even close right? They wind up being told okay, here's all this potential phishing email, we'd like you to spend 20 minutes on each one trying to figure out if it actually is phishing. And there's 600 messages. So tell me when you're done and I'll give you the next 600 messages. And so it's not motivating >> Not as sexy as War Games.
>> It's not as sexy as War Games, exactly. And then the CISOs say, well, I can't even afford the people who are well trained. So I hire people right out of school, it takes me six months to train them, they're productive for six months, and then they leave for double their salary. So you wind up with sort of a 50 percent productivity rate out of your new hires, and it's just a recipe for the past, right? You know, we need to think more about how we change things. >> So let's sort of remind our audience: in terms of security, you're not building firewalls, you're not, you know, competing with a lot of the brand name security vendors like McAfee or FireEye, or Palo Alto Networks, you're complementing them. Talk about where you fit in the security ecosystem. >> Sure. So if you boil down the entire security market, you can really think about protection and detection as the main two areas. For protection, think of a firewall, an antivirus, something that stops something bad, and think of detection as, uh, I'm going to flag potentially bad things that I think are bad, but I'm not certain enough that I want to absolutely stop them. And so what that does is it creates a queue of behavior that needs to be analyzed today by humans, right? So this is where the entire SIEM market and everything else was created, to aggregate all those alerts. So once you've got the alerts, you know, awesome, but you've got to sort of walk through them and process them. So what ServiceNow has focused on is the response category. Visualization and aggregation are nice, but what would be much better is to provide folks the mechanism to actually respond to what's happening. Both from a vulnerability standpoint, and from an incident standpoint. And this is really where ServiceNow's expertise shines, because we know workflow, we know automation, we know about a system of action, right?
So that's our pedigree, and IT frankly is several years ahead of where the security industry is right now, so we can leverage that body of expertise, not just with ServiceNow, but now with all of our partners, to help accelerate the transformation for security teams. >> So I've got to cut right to the chase. So last year we talked about- and of course every time we get a briefing, for instance from a security vendor, we're given a stat that on average it takes 200, sometimes you've seen as high as 300, but let's say 200 days to detect an incident, and then the answer is, so buy our prevention, or our detection solution. >> Yeah. >> I asked you last year, and I tweeted out, you know, a couple days ago: has ServiceNow affected that? I asked you last year, can you affect that, can you compress that timeframe, and you said "we think so." Um, what kind of progress have you made? >> Sure, so you have to remember about that 200-day stat that it is an industry average across all incidents, right? So the Ponemon Institute pulls this data together once a year, they survey over 300 companies, and they found that I think it's 206 days is the average right now to identify a breach, and then another 75 days to contain it. So together it's nine months, which is a frighteningly long period of time. And so what we wanted to do is measure, across all of our production security operations customers, what is their average time to identify and time to contain. So it turns out, it's so small we have to convert it to hours. It's 29 hours to identify, 33 hours to contain, which actually is a 160x improvement in identification, and a 50x improvement in containment. And so we're really excited about that. But you know, frankly, I'm not satisfied. You know, I'm still measuring in hours.
Granted, we've moved from months to hours, but I want it to go from hours, to minutes, to seconds, and really, you know, we can show how we can do that in minutes today with certain types of attacks. But there's still the long breaches. >> That's a dramatic reduction, and you know, I know that 206, whatever it is, is an average of averages. >> For sure. >> But the delta between that and what you're seeing in your customer base is not explainable by, oh well, the ServiceNow customers just happen to be better at it, or a lucky year; it's clearly an impact that you're having. >> Well sure, let's be, you know, as honest as we can be here, right? The, you know, the people who are adopting security operations are forward-thinking security customers, so you would expect that they're better, right? And so their program should already be more mature than the average program. And if you look across those statistics, like 200 and some days, you know, that includes year-long breaches, and it also includes companies that frankly don't pay as much attention to security as they should. But even if you factor all of that out, it's still a massive, massive difference. >> So if I looked at the bell curve of your customers versus some of the average in that survey, you'd see the shift, the lump would shift way to the left, right? >> Correct. Correct. And you know, we actually have a customer, Ron Wakely from ANP Financial Services out of Australia, who was just up on stage talking about a 60 percent improvement in his vulnerability response time. So from identifying the vulnerabilities via Qualys, Rapid7, Tenable, whoever their scanning vendor is, all the way through IT patching, 60 percent faster, and given that I think it's something like 80 percent of attacks come from existing vulnerabilities, that's big change.
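Figures like the 29-hour identify and 33-hour contain numbers quoted above are just averages over incident records. A minimal sketch of the computation, with fabricated timestamps chosen to reproduce those two numbers:

```python
# Mean time to identify / contain, computed from (start, identified,
# contained) timestamps. All records here are fabricated examples.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

incidents = [
    ("2017-01-01 00:00", "2017-01-02 05:00", "2017-01-03 14:00"),
    ("2017-02-10 08:00", "2017-02-11 13:00", "2017-02-12 22:00"),
]

def avg_hours(rows, start_idx, end_idx):
    """Average elapsed hours between two timestamp columns."""
    total = sum(
        (datetime.strptime(r[end_idx], FMT)
         - datetime.strptime(r[start_idx], FMT)).total_seconds() / 3600
        for r in rows)
    return total / len(rows)

mean_time_to_identify = avg_hours(incidents, 0, 1)  # breach start -> detected
mean_time_to_contain = avg_hours(incidents, 1, 2)   # detected -> contained
```

The same computation over a survey population with multi-month breaches is what produces the 206-day industry average mentioned earlier, which is why the two numbers can differ so sharply.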
>> You've got to love it when you're measuring things and the unit that you're measuring in changes, as opposed to just the number, right? That means you're doing a good thing. So to go from hours to minutes, is it continuous improvement, or are there some big, you know, potential challenges that you can see, where if you overcome those challenges, they're going to give you some monumental shifts in the performance? >> I think we're ready. I think when we come back next year, the numbers will be even better, and this is why: so many of our customers started by saying, "I have no process at all, I have manual, you know, I'm using spreadsheets, and emails, and notebooks, you know, and trying to manage the security incident when it happens." So let me just get to a system of action, let me get to a common place where I can do all of this investigation. And that's where most of our production customers are, so if you look across the ones who gave us the 29-hour and the 33-hour stat, they're really just getting the benefit of having a place for everybody to work together. Where we're going, but this is already shipping in our product, is the ability to automate the investigation. So back to, you know, the poor 10-year-old who didn't get to save the world, you know, now he gets to say this entire investigation stage is entirely automated. So if I hand an analyst, for example, an infected server, there's 10 steps they need to do before they even make a decision on anything, right? They have to get the network connections, get the running processes, compare them to the processes that should be on the system, look up on a reputation site all the ones that are wrong, all these manual steps. We can automate that entire process so that the analyst gets to make the decision; he's sort of presented the data: here's the report, now decide.
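The hand-off described here, where the routine data-gathering steps run automatically and the analyst only makes the final call, can be sketched like this. The checks, process lists, and reputation data are invented placeholders, not ServiceNow's actual playbook.

```python
# Toy automated investigation: gather the routine facts for an infected
# server, then hand the analyst a report to decide on.

EXPECTED_PROCESSES = {"sshd", "nginx", "systemd"}
REPUTATION = {"203.0.113.66": "malicious", "198.51.100.9": "clean"}

def investigate(server):
    """Collect what an analyst would otherwise gather by hand."""
    report = {"server": server["name"]}
    report["unexpected_processes"] = sorted(
        set(server["processes"]) - EXPECTED_PROCESSES)
    report["connections"] = [
        {"ip": ip, "reputation": REPUTATION.get(ip, "unknown")}
        for ip in server["connections"]]
    report["needs_review"] = bool(report["unexpected_processes"]) or any(
        c["reputation"] == "malicious" for c in report["connections"])
    return report

report = investigate({
    "name": "web-01",
    "processes": ["sshd", "nginx", "cryptominer"],
    "connections": ["203.0.113.66"],
})
```

The decision itself stays with the human; the sketch only automates the enumeration and enrichment steps that precede it.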
The analogy I always use is the doctor who's sort of rushing down the hall in an ER show, and somebody hands him an MRI or an X-ray, and he's looking at it, you know, through the fluorescent lights as he's walking, and he's like "oh," you know, "five milliliters of" whatever, and "do this," right? >> Right. >> That's the way an analyst wants to work, right? They want the data so they can decide. >> I tell you, this is the classic way that machines help people do better work, right? Which we hear about over and over and over. Let the machines do the machine part, collecting all the shitty boring data, um, and then present, you know, the data to the person to make the decision. >> Absolutely. >> Probably with recommendations as well, right? With some weighted average recommendations. >> Yeah, and this is where it gets really exciting, because the more we start automating these tasks, you know, the human still wants to make the decision, but as we grow and grow this industry, one of the benefits of us being in a cloud is we can start to measure what's happening across all of our customers, so when attack X occurs, this is the behavior that most of our customers follow, so now if you're a new customer, we can just say, "in your industry, customers like you tend to do this." >> Right. >> Right? And I'm really excited by what our engineering team is starting to put together. >> Do you have a formal, or at some point maybe down the road a formal, process where customers can opt in to an aggregation of, you know, we're all in this together, we're probably going to share our breach data with one another, so that we can start to apply a lot more data across properties to come to better resolutions quicker? >> Well, we actually announced today something called trusted security circles.
So this is a capability to allow all of our customers to share indicators, so when you're investigating an issue, the indicators are something that are called an indicator of compromise, or an IOC, so we can share those indicators between customers, but we can do that in an anonymous way right? And so you know, the analogy I give you is, what do you do when you lose power in your house? Right? You grab the flashlight, you check the breakers, and then you look out the window, because what are you trying to find out? >> Is anybody else out? >> Is anybody else out exactly. So, you can't do that in security, you're all alone, because if you disclose anything, you risk putting your company further in a bad spot right? Cause now it's reputation damage, somebody discloses the information. So now we've been able to allow people to do this anonymously, right, so it's automatic. I share something with both of you, you only see that I shared if it's relevant, meaning the ServiceNow instance found it in your own environment, and then if all three of us are in a trusted circle, when any one of us shares, we know it was one of the three, but we don't know which one. So the company's protected. >> So just anecdotally when I speak to customers, everybody still is spending more on prevention than on detection. And there's a recognition that that has to shift, and it's starting to. Now you're coming in saying, invest in response. Which, remember from our conversation last year, is right on. I'm super excited about that because I think the recognition must occur at the board room that you are going to get infiltrated, it's the response that is going to determine the quality of your security. And you still have to spend on prevention and detection.
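The trusted-circle semantics described above, where a shared indicator only surfaces for members who also observe it locally, and attribution is reduced to "one of the circle," can be sketched minimally. The class and member names are hypothetical; this is only an illustration of the visibility rules, not of how ServiceNow implements them.

```python
# Hypothetical sketch of trusted-circle IOC sharing: a share is only
# visible to a member whose own environment contains the indicator,
# and the sharer is never identified, only the circle size.

class Circle:
    def __init__(self, members):
        self.members = set(members)
        self.shared = []                 # indicators, sharer kept private

    def share(self, member, indicator):
        assert member in self.members
        self.shared.append({"indicator": indicator, "by": member})

    def visible_to(self, member, local_sightings):
        """Return shares relevant to this member, with anonymized attribution."""
        hits = []
        for item in self.shared:
            # Relevance gate: the member must have seen it themselves.
            if item["by"] != member and item["indicator"] in local_sightings:
                hits.append({"indicator": item["indicator"],
                             "source": f"one of {len(self.members)} circle members"})
        return hits

circle = Circle(["acme", "globex", "initech"])
circle.share("acme", "evil.example.com")
print(circle.visible_to("globex", {"evil.example.com"}))  # relevant, so it surfaces
print(circle.visible_to("initech", set()))                # not seen locally, so hidden
```

The anonymity property is exactly the one the guest describes: with three members, a recipient knows the share came from one of the three, but not which one.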
But as you go to the market, first of all can you affirm or deny that you're seeing that shift from prevention to detection in spending, is it happening sort of fast enough, and then as you go in and advise people to think about spending on responding, what's their reaction? What are you finding is the, are the headwinds and what's the reception like? >> Sure. So you know to answer your first question about protection to detection, I would say that if you look at the mature protection technologies, right they are continuing to innovate, but certainly what you would expect a firewall to do this year is somewhat what you expected it to do last year. But the detection category really feels like where there's a lot of innovation, right? So you're seeing you know new capabilities on the endpoint side, network side, anomal- you're just seeing all sorts of diff- >> Analytics. >> Analytics, absolutely. And so uh, I do see more spent simply because more of these attacks are too, too nasty to stop, right? You sort of have to detect them and do some more analysis before you can make the decision. To your second question about, you know, what's the reception been when we started talking about response. You know, I haven't had a single meeting with a customer where they haven't said, "wow" like "we need that", right? It was very- I've never had anybody go "Well yeah our program is mature, we're fine, we don't need this." Um, the question is always just where do we start? And so we see, you know, vulnerability management as one great place to start, incident response is another great place to start. We introduced a third way to start just today as well. We started shipping this new capability called vendor risk management, which actually acknowledges the, you know, we talked about the perimeter-less network what, five years ago? Something like that, we're saying oh the perimeter's gone, you know, mobile devices, whatever.
But there's another perimeter that's been eroding as well, which is the distinction between a corporate network and your vendors and suppliers. And so your vendors and suppliers become massive sources of potential threat if they're not protected. And so the assessment process, you know, there's telcos who have 50,000 vendors. So you think about the exposure of that many companies and the process to figure out, do they have a strong password policy, right? Do they follow the best practices around network security, those kinds of things, we're allowing you to manage that entire process now. >> So you're obviously hunting within the ServiceNow customer base presumably, right? You want to have somebody to have the platform in order to take advantage of your product. >> Sure. >> Um, could you talk about that dynamic, but also other products that you integrate with. What are you getting from the customers, do I, do I have this capability- this is who I use for firewall, who I use for detection, do you integrate them, I'm sure you're getting that a lot. Maybe talk to that. >> Sure sure. So first off, it's important to share that the ServiceNow platform as a whole is very easy to integrate with. There's APIs throughout the entire system, you know we can very easily parse even emails, we have a lot of customers that you know have an email generated from an alert system, and we can parse out everything in the email and map it right into a structured workflow, so you can kind of move from unstructured email immediately into now it's in ServiceNow. But we have 40 vendors that we directly integrate with today, and when I was here about a year ago, I think that number was maybe three or two. And so we're up at 40 now, and that really encompasses a lot of the popular products, so we can for example, you know, a common use case, we talked about phishing a little bit right?
You know, let me process a potential phishing email, pull out the URL, the subject line, all the things that might indicate bad behavior, let me look them up automatically on these public threat sources like VirusTotal or MetaDefender, and then if the answer is they don't think it's bad, I can just close the incident right? If they think it's bad, now I can ask the Palo Alto firewall, are you already blocking this particular URL, and if the Palo Alto firewall says "yeah I was already blocking it", again you can close the incident. Only for the emails that were known to be bad, and that your existing perimeter capabilities didn't stop, do you need to involve people. >> I have to ask you, it goes back to the conversation we had with Robert Gates last year, but I felt like Stuxnet was this milestone, where the, the game just got escalated big time. And it went from sort of harmless, sometimes not harmless, really up the level of risk. Because now others, you know the bad guys really dug into what they could do, and it became pretty substantial. I was asking Gates generally about some future warfare in cyber, and he, this is obviously before the whole Russian hacking, but certainly Snowden and WikiLeaks and so forth was around. And he said, "The United States has to be very careful about how it responds. We have maybe many more capabilities but if we show our hand, others are going to see those weapons, and have access to those weapons, cause it's digital." I wonder as a security expert if you could sort of comment on the state of security, the future of that threat generically, or generally. Where do you see that going? >> Well there's a couple of things that come to mind as you're talking. Uh, one is you're right, Stuxnet was an eye opener I think for a lot of people in the industry that that, that these kinds of vulnerabilities are being used for, you know nation state purposes rather than, you know just sort of, uh random bad behavior.
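Stepping back to the phishing-triage flow described at the top of this exchange, the escalation logic reduces to two lookups: threat intelligence first, then the perimeter. This is a minimal sketch; the URL sets below stand in for services like VirusTotal and a firewall's block-list API, and the function name is hypothetical.

```python
# Hypothetical sketch of the phishing-triage decision described earlier:
# only mail that is known bad AND not already blocked reaches a human.

THREAT_INTEL = {        # stand-in for VirusTotal/MetaDefender verdicts
    "http://evil.example.com/a",
    "http://evil.example.com/b",
}
FIREWALL_BLOCKLIST = {  # stand-in for a firewall's existing blocks
    "http://evil.example.com/a",
}

def triage_phish(url):
    # Not flagged by threat intelligence: close the incident.
    if url not in THREAT_INTEL:
        return "close: threat intel does not flag it"
    # Known bad, but the perimeter already blocks it: also close.
    if url in FIREWALL_BLOCKLIST:
        return "close: perimeter already blocks it"
    # Known bad and unblocked: this is the only case needing an analyst.
    return "escalate: known bad and not blocked"

for url in ("http://fine.example.org",
            "http://evil.example.com/a",
            "http://evil.example.com/b"):
    print(url, "->", triage_phish(url))
```

Of the three sample URLs, only the last one survives both gates, which is exactly the funnel the guest describes: people get involved only when the automated checks can't close the incident.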
So yeah I would go back to what I said earlier and say that, um, we have to take the noise, the mundane off the table. We have to automate that, you're absolutely right. These sort of nation state attackers, if you're at a Global 2000 organization, right your intellectual property is valuable, the data you have about your employees is valuable, right all this information is going to be sought by competitors, by nation states, you have to be able to focus on those kinds of attacks, which back to my kind of War Games analogy, like that's what these people wanted to do, they wanted to find the needle in the haystack, and instead they're focusing on something more basic. And so I think if we can up the game, that changes things. The second, and really interesting thing for me is this challenge around vulnerability, so you talked about Gates saying that he has to be careful sort of how much he tips his hand. I think it was recently disclosed that the NSA had a stockpile of vulnerabilities that they were not disclosing, in order to weaponize them themselves. And that's a really paradoxical question right? You know, do you share it so that everybody can be protected including your own people, right? Imagine Acrobat, you find some problem in Acrobat, like well do you use it to exploit the enemy, or do you use it to protect your own environment? >> It's quite a dilemma. >> You- it's a huge dilemma cause you're assuming either they have it or they don't have the same vulnerability, and so I'm fascinated by how that whole thing plays out. Yeah, it's a little frightening. >> And you know, in the land of defense, you think okay United States, you know biggest defense, spends the most money, has the, you know the most, you know, amazing machines whatever.
Um, but in cyber, you know you presume that's the case, but you don't really know, I think of high frequency trading, you know, it was a lot of Russian mathematicians that actually developed that, so clearly other states have, you know smart people that can you know create, you know, dangerous threats. And it's, it's- >> You only have to lose once, that's kind of the defense game. You got to defend them all, you have to bat 1000 on the defense side, or you know, get it and react. From the other guy's side, he can just pow pow pow pow pow, you just got to get through once. >> So this is why your strategy of response is such a winner. >> Well this is where it comes back to risk as well right? At the end of the day you're right, you know a determined adversary, you know, sorry to break it to everybody, at some point is going to be able to find some way to do some damage. The question is how do you quantify the various risks within your organization? How do you focus your energy from a technology perspective, from a people standpoint, on the things that have the most potential to do your organization harm, and then, you know there's just no way people can stop everything unless you, you know, unplug. >> And then there's the business. Then there's the business part of it too right? Cause this is like insurance, when do you stop buying more insurance, you know? You could always invest more, at what point does the investment no longer justify the cost, because there's no simple answer. >> Well this is where, uh you know, we talk to chief information security officers all the time who are struggling with the board of directors conversation. How do I actually have an emotional conversation that's not mired in data on how things are going? And today they often have to fall back on stats like you know we process 5 million alerts per day, or we have, you know, x number of vulnerabilities.
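The alternative to those raw volume stats is the outcome metric the conversation turns to next: mean time to identify, tracked quarter over quarter. A minimal sketch of that computation, with illustrative incident timings rather than real data:

```python
# Hypothetical sketch of a posture metric for the board conversation:
# mean time to identify (MTTI) per quarter, reported as a trend rather
# than a raw alert count. The hour values below are illustrative only.

from statistics import mean

# Hours from detection to identification, per incident, per quarter.
incidents = {
    "Q1": [40, 44, 42],
    "Q2": [15, 13, 14],
}

def mtti(hours):
    """Mean time to identify across a quarter's incidents."""
    return mean(hours)

q1, q2 = mtti(incidents["Q1"]), mtti(incidents["Q2"])
improvement = 100 * (q1 - q2) / q1
print(f"MTTI Q1: {q1:.0f}h, Q2: {q2:.0f}h, improvement: {improvement:.0f}%")
```

A trend like "42 hours down to 14 hours" ties the security budget to an outcome, which is the shape of the argument the guest says CISOs need in the boardroom.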
But with security operations what they can do is say things like, well my mean time to identify, you know, was 42 hours, and this quarter it's 14 hours, and so the dollars you gave me, here's the impact. You know I had 50 critical vulnerabilities last quarter, this quarter I have 70, but only on my mission critical system, so that indicates future need to fund or reprioritize, right? So suddenly now you've got data where you can actually have a meaningful conversation about where things are from a posture perspective. >> These are the assets that we've, you know, quantified the value of, these are the ones that we're prioritizing the protection on, and here's why we came up with that priority, let's look at that and, you know, agree. >> Exactly. You know large organizations, I was talking to the CISO of a Fortune 10, 50 I guess, and he was sharing that 40 percent of their time in incident response is spent tracking down who owns the IP address. 40 percent. So imagine, you spent 40 percent of a, you know, 25 hour response time investigating who owns the asset, and then you find out it's a lab system, or it's a spare. You just wasted 40 percent of your time. But if you can instead know, oh this is your finance reporting infrastructure, okay you're super high priority, let's focus in on that. So this is where the business service mapping, the CMDB, becomes such a differentiator, when it's in the hands of our customers. >> Super important topic. Sean Convery, thanks very much for coming back on the Cube and, uh, great work. Love it. >> It's great to be here, thanks for having me. >> Alright keep it right there everybody, we'll be right back with our next guest. This is the Cube, we're live from ServiceNow Knowledge 17 in Orlando. We'll be right back.

Published Date : May 10 2017
