Chad Sweet & Reggie Brothers, The Chertoff Group | Security in the Boardroom


 

>> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Palo Alto, California, at one of the Chertoff Group events. It's called Security in the Boardroom. They have these events all over the country, and this is really about elevating the security conversation beyond the edge, and beyond CISOs, to the boardroom, which is really where the conversation needs to happen. And our next guest I'm really excited to have: Chad Sweet, co-founder and CEO of the Chertoff Group. Welcome, Chad.

>> Great to be here.

>> And with him is Reggie Brothers, a principal at the Chertoff Group who has spent a lot of time in Washington. You can check his LinkedIn and find his whole history; I won't go through it here. First off, welcome, gentlemen.

>> Thank you.

>> Thank you.

>> So, before we jump in, a little bit about these events: what are they about, and why should people come?

>> Well, basically they're a forum in which we bring together both practitioners and consumers of security. Often it's around a pragmatic issue that the industry or government is facing, and this one, as you just said, is the priority of security, cybersecurity in particular, in the boardroom, which is obviously what we're reading about every day in the papers, with the Petya and NotPetya and WannaCry attacks. These are, I think, teachable moments that are affecting the whole nation. So this is a great opportunity for folks to come together in an intimate forum, and we welcome everybody who wants to come. Check out our website at chertoffgroup.com.

>> Okay, great. And the other theme here, one we're hearing over and over, is AI, right?

>> Yeah.

>> We hear about AI and machine learning all over the place. We're near Mountain View, and there are self-driving cars all over the place, and Google tells me, like, "you're home now," and I'm like, "Ah, that's great." But there are much bigger fish to fry with AI, at a much higher level. And Reggie, you just came off a panel talking about some much higher-level-- I don't know if issues is the right word, maybe issues is the right word, around AI for security. So I wonder if you can share some of those insights.

>> I think issues, challenges, are the right words.

>> Challenges, that's probably a better word.

>> Those are good words, particularly when you're talking about security applications. Whether it's corporate or government, the issue becomes trust. How do you trust that this machine has made the right kind of decision? How do you make it traceable? One of the challenges with current AI technology is that it's mostly based on machine learning, and machine learning tends to be kind of a black box: you know what goes in, and you train what comes out. That doesn't necessarily mean you understand what's going on inside the box.

>> Right.

>> So if you have a situation where you really need to be able to trust the decision this machine is making, how do you trust it? What's the traceability? In the panel we started discussing that. Why is it so important to have this level of trust? You brought up autonomous vehicles: of course you want to be able to trust your vehicle to make the right decision if it has to make one at an intersection. Who's it going to save? How you trust that machine becomes a really big issue, and it's something that the machine-learning community, as we learned on the panel, is really starting to grapple with and face.
So I think there's good news, but it's a question we have to make sure we ask ourselves when we're adopting these kinds of machine-learning AI solutions.

>> So the trust issue is really interesting, because there are so many layers to it, right? We all get on airplanes and fly across the country all the time, and those planes are being flown by machines, for the most part. At the same time, if you start to unpack some of these crazy algorithms, even if you could open up the black box, unless you're a data scientist with a PhD in some of this statistical analysis, could you really understand it anyway? So how do you balance it? We're talking about the boardroom. What's the level of discovery, the level of knowledge, that's appropriate without being a full-fledged data scientist, one of the people actually writing those algorithms?

>> So I think that's a challenge, right, because when you look at the ways people are addressing this trust challenge, they're highly technical. People are building hybrid systems where you can do some type of traceability, but that's highly technical for the boardroom. One thing we did talk about on the panel, and even prior to the panel on cybersecurity and governance, was the importance of being able to speak in a language the layperson can understand. You can't just speak in computer-science jargon. You have to be able to speak to the person that's actually making the decision, which means you have to really understand the problem, because in my experience the people who can speak in the plainest language understand the problem the best. So these problems are things that can be explained; they just tend not to be explained, because they sit in this super-technical domain.

>> But you know, Reggie is being very humble. He's got a PhD from MIT and worked at the Defense Advanced Research--

>> Well, he can open the box.

>> He can open the box. I'm a simple guy from Beaumont, Texas, so I can kind of dumb it down for the average person. I think on the trust issue, over time, whether you use the analogy of a car, the boardroom, or a war scenario, it's the result. You get comfortable. I have a Tesla, and the first time I let go of the wheel and let it drive itself was a scary experience, but when you actually see the result and get to experience the performance of the vehicle, that's when the trust can begin. And in a similar vein, in the military context, we're seeing automation start to take hold. The big issue will be that moment of ultimate trust, i.e., do you allow a weapon to have lethal decision-making authority? We just talked about that on the panel; that ultimate trust is not really something the military is prepared to extend today. There are only a couple of places, like the DMZ in Korea, where we actually have a few systems where, because there's such a short response time if they detect an attack, lethal authority is at least being considered. Those are the rare exceptions.
I think Elon Musk has talked about the threat of AI, and how, if we don't have some norms put around it, that trust could never develop, because there wouldn't be these checks and balances. So in the boardroom, in that last scenario, boards are going to be facing these cyber attacks, and the more they experience, once an attack happens, how the AI is providing immediate response, mitigation, and hopefully even prevention, that's where the trust will begin.

>> The interesting thing, though, is that the sophistication of the attacks is going up dramatically, right?

>> Chad: Yep.

>> Why do we have machine learning and AI? Because they're fast. They can react to a ton of data and move at speeds that we as people can't, such as in your self-driving car. And now we're seeing an increase in state-sponsored threats. It's not just the crazy kid in the basement hacking away to show his friends; they're trying to get much more significant information and going after much more significant systems. So it almost follows, then, that you end up with the North Korean example: when your time windows are shorter, when the assets are more valuable, and when the sophistication of the attacking party goes up, can people manage it? I would assume that the people's role will keep moving further and further up the stack, with automation taking an increasing piece of it.

>> So let's pull on that, right. If you talk to the Air Force, because the Air Force does a lot of work on autonomy, the DoD in general does, the Air Force has this chart showing that over time the resources dedicated to an autonomous machine will increase, and the resources dedicated to a human will decrease, but only down to a certain level. And that level is really governed by policy and compliance issues. So there's some level beyond which, because of policy and compliance, the human will always be in the loop. You just don't let the machine run totally open-loop. But the point is, it has to run at machine speed. So let's go back to your example, with high-speed cyber attacks. You need some type of defensive mechanism that can react at machine speed, which means at some level the humans are out of that part of the loop. But you still have to have the corporate board person, as Chad said, trust that machine to operate at machine speed, out of the loop.

>> On human oversight, one of the things discussed on the panel was that, interestingly, AI can actually be used in training humans to upgrade their own skills. Right now in the Department of Defense, they do these exercises on cyber ranges, and there's about a four-month waiting period just to get on the ranges; that's how congested they are. And even if you get on, there's a limited pool of human instructors who can simulate the adversary and oversee the exercise. So using AI to create a simulated adversary, and being able to do it in a gamified environment, is something that's increasingly going to be necessary to keep everyone's skills up, and to do it real-time, 24/7, against active threats that are morphing over time. That's really where we have to get our game up to.
So watch for companies like Circadence, which are doing this right now with the Air Force, Army, and DISA, and also see them applying this, as Reggie said, in the corporate sphere, where a lot of folks will tell you today that they're facing this asymmetric threat. They have a lot of tools, but they don't necessarily have the confidence that when the balloon goes up, when the attack is happening, their team is ready. So being able to use AI to simulate these attacks against their own teams means they can show the board that their people are at a tested level of readiness.

>> It's interesting, Hal's talking to me in the background as you're talking about the cyber threat, but there's another twist on that, right, which is that machines aren't tired; they didn't have a bad day; they didn't have a fight with the kids in the morning. So you've got that kind of human frailty, which machines don't have; that's not part of the algorithm, generally. But it's interesting to me that it usually comes down to, as with most things of any importance, not really a technical decision. The technical piece is actually pretty easy. The hard part is the moral considerations, the legal considerations, the governance considerations, and those are what ultimately drive the decision to go or no-go.

>> I absolutely agree. One of the challenges we face is what the level of interaction between the machine and the human should be, and how that evolves over time. People talk about the centaur model: the centaur, the mythical half-horse, half-human, where you have that same kind of pairing between machine and human. You want this seamless type of interaction, but what does that really mean, and who does what? What they've found is that machines have beaten, obviously, our human chess masters, and they've beaten our Go masters, but the thing that seems to work best is some level of teaming between the human and the machine. What does that mean? I think the challenge going forward is understanding that frontier where the human and machine have to have this really seamless interaction. How do we train for that? How do we build for that?

>> So, give your last thoughts before I let you go; the chime is going off, and they want you back. As you look down the road, just a couple of years, I would never say more than a couple of years, and, you know, Moore's Law is not slowing down, whatever people argue: chips are getting faster, networks are getting faster, data systems are getting faster, computers are getting faster, and we're all carrying around mobile phones, throwing off tons of digital exhaust, as are our systems. What do you tell people? How do boards react in this rapidly evolving, exponential-curve environment we're living in? How do they not just freeze?

>> Well, to use a financial analogy, almost every board knows the basic foundational formula of accounting, which is assets equals liabilities plus equity. I think in the future, because no business today is immune from the digital economy, every business is being disrupted by it, and there are businesses that are underpinned by the trust of the digital economy.
So every board, I think, going forward has to become literate in cybersecurity, and artificial intelligence will be part of that board conversation. They'll need to learn the fundamental formula of risk, which is risk equals threat, times vulnerability, times consequence. So in the months ahead, part of what the Chertoff Group will be doing is playing a key role in helping to educate those boards and facilitate these important strategic discussions.

>> Alright, we'll leave it there. Chad Sweet, Reggie Brothers, thanks for stopping by.

>> Thank you.

>> Thank you, appreciate it.

>> Alright, I'm Jeff Frick, you're watching theCUBE. We're at the Chertoff event, Security in the Boardroom. Think about it, and we'll catch you next time.
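Sweet's closing formula lends itself to a quick worked example. The Python sketch below is a hypothetical illustration of risk = threat × vulnerability × consequence, assuming each factor is scored on a normalized 0-to-1 scale; the asset names and scores are invented for the example and are not the Chertoff Group's actual scoring methodology.

```python
def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Multiplicative risk model: if any one factor is zero, the risk is zero."""
    for factor in (threat, vulnerability, consequence):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("each factor must be scored in [0, 1]")
    return threat * vulnerability * consequence

# Hypothetical board-level comparison of two assets (illustrative scores only).
assets = {
    "customer database": risk_score(threat=0.9, vulnerability=0.4, consequence=0.8),
    "public website": risk_score(threat=0.9, vulnerability=0.7, consequence=0.2),
}
for name, score in sorted(assets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
# customer database: 0.29
# public website: 0.13  -> the database is the higher-priority investment
```

The multiplicative form makes the board-level trade-off visible: driving any single factor toward zero, most often the vulnerability term, drives the whole risk score toward zero, which is one way to frame how much security investment is enough.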

Published Date: Aug 25, 2017


Jim Pflaging & Michael Chertoff, The Chertoff Group | Security in the Boardroom


 

>> Welcome back everybody. Jeff Frick here with theCUBE. We're at Security in the Boardroom. It's a Chertoff Group event; they go all around the country and hold these small, intimate events talking about security, and today it's really about the boardroom, about escalating the conversation into the boardroom. So it's not a tech conversation, it's not a mobile phone management conversation, but really: how do we get security up into the boardroom? And I'm really excited for our next guests. Michael Chertoff is the Co-Founder and Executive Chairman of the Chertoff Group, with a long, established career, and I'll let you go check out his LinkedIn. He was Secretary of Homeland Security, and it's a long, long list, so I won't even go there. And Jim Pflaging is the Principal, Technology Sector and Strategy Performance Lead, also at the Chertoff Group. Jim kicked it off this morning. Welcome, both of you. So first off, Jim, a little bit about this event. What is it, and what is Chertoff trying to accomplish with this little road tour?

>> So I think it's important to know that we're passionate about the importance of security. With Secretary Chertoff's and Chad Sweet's backgrounds, they were on the ground floor of seeing its importance to our country. So we created the firm to focus wholly on security, and to help firms with the whole lifecycle of issues: as a risk, as a business opportunity, as a catalyst for growth. And it was back in 2013 when some stakeholders said, "Hey, you guys have a bunch of ex-DHS folks, there are a bunch of interesting identity technology issues coming to the surface, and other technology issues, why don't you bring a group together and do it?"

>> Jeff: Right.

>> We said, well, we're not an event company. But we went ahead and had a conversation back in D.C. It was a big success, and then it was a little bit like that line from The Godfather, you know, "They keep pulling me back, they keep pulling me back." (laughs) So here we are at our tenth event. We've been to Silicon Valley three times, New York, Houston, and then D.C. And each time the idea is: make it topical to the local community, and make it topical to the issues at hand at the moment.

>> Yeah, it's interesting, the relationship in security, specifically between government and technology companies. We do a lot of big technology shows, at IBM and HP. With the customers these companies have distributed around the world, and the regulations and compliance issues, in some ways they know more, from that broad base of global customers, than the government. On the other hand, the government is driving the compliance, has the privacy issues, and is hopefully looking out for people. So how do the two work more closely together to deliver better solutions?

>> Well, in fairness to the government, the government also has access to information and intelligence that the private sector doesn't have.

>> That's true.

>> So each brings to the table a certain set of capabilities, and part of the challenge is to have people speak the same language. The government has tended over the years to develop a very rigid system of procuring, of interacting with the private sector. Out here in Silicon Valley and in other tech centers there's a lot of focus on being innovative and nimble, and sometimes those two cultures need to be bridged. And actually, one of the things we started out doing was trying to bridge those cultures.
Helping the technology companies understand some of the objectives the government had in terms of security and the economy, and helping the government understand what's out there, what capabilities and techniques you might use. Because without an awareness of the art of the possible, it's very hard to lay out a strategy for securing cyberspace.

>> Right. And the whole security space, to me, we talked a little bit before we put the cameras on, feels like insurance. You've got to do something, right, you can't go unprotected, but by the same token you can't be 100% protected, so do you invest forever? Because at the end of the day, for a private company, you have limited resources, and government does too. So when these conversations are happening, and with what we're talking about here, the boardroom, the worst way a board member wants to get involved is reading the Wall Street Journal on Monday morning and seeing that his company has been breached, and he's in big, big trouble. So how is the relative importance of security investment changing in the boardrooms? What are you seeing? How is that evolving?

>> So, from my standpoint, it's about, first of all, understanding that it's risk, not security. You're managing the risk; you're not guaranteeing people that nothing bad will ever happen. The analogy I use, I say to people, is physical health. You don't go to your doctor and say, "Doctor, I want you to guarantee I'll never get sick." The doctor would throw you out of the office, or have you committed. What you do is say, "Look, Doctor, I'd like to be healthy. I'd like to have a healthy immune system, I'd like to keep most of the bacteria and viruses out of my body, but I'd like to know that if I do get invaded by bacteria or viruses, which will inevitably happen, I've got a system that can detect it, and white blood cells will eliminate it. That's why I get vaccinated; that's why I do other things to keep my immune system up." And that sense of managing expectations, I think, is critical for the board. If the board wants a guarantee that we will never get hacked, that's not realistic. If the board wants to understand the most important parts of our body politic, our corporate body, that we have to protect, and how we build layers of defense to keep us healthy, then I think you can have an intelligent discussion about how much investment is enough.

>> Right. But then, as you said, you want to be healthy, but we still go to bars and have a drink, and we eat ice cream when we probably shouldn't. And in security, a large share of the problems are caused by people who didn't update their patches, or who responded to that great opportunity to get a bunch of money from an African prince. So how are we changing the culture on the people and process side? You made an interesting comment about culture. We always talk about people, process, and technology, but you threw the culture piece in, which I thought was a pretty interesting twist on just people.

>> I think that's a key piece, and it's an area where the board can actually lead. This is one place where it has to start from the top. If management and the board say, "Hey, this is a technical issue, we're just going to leave it for that security team down the hall," I think you've failed right out of the gate. You need a CEO-led, cyber-conscious, security-conscious culture that shows that we value it.
And that ultimately you're going to spend time and money to reward the behavior you're looking for, to then retain and grow that organization. But it's also looking at it both as a risk, as the Secretary said, and increasingly as part of an opportunity: an opportunity to engage your customers in new ways, to show that you're really a trusted partner, that you value, and will hold private, the information you're collecting about them. As we hurtle into IoT and driverless cars that are generating massive amounts of information, more and more people are going to want to do business with companies that are good stewards of that information.

>> Right. And I think the interesting thing that came up as well is that it's not even the technology, it's not even the breaches; we talked a little bit about the whole iPhone encryption thing. Now we all have Alexa sitting in our houses. Is Alexa listening all the time? I heard of a case where they actually went back to the Alexa in a domestic dispute to see if it had collected evidence, had listened in on a domestic violence attack. The privacy issues are tremendous. So as all these things get weighed, again, you made an interesting comment: how do we define success? What does success look like? Because it's not "never." In the financial services industry, your worst nightmare is too many false positives, turning down people's bank accounts and credit cards. So what does success look like? How should people be thinking about success?

>> I think there are a couple of different dimensions to this. As Jim mentioned earlier, to the extent that you are a steward of other people's data, your ability to promise them that it'll be secure and private, and to execute on that promise, is an important part of your business proposition. To the extent that you have your own business secrets and business confidences you want to protect, that's important. But you raise a somewhat different issue, which is that we do make deliberate decisions sometimes to bring into our homes, into our lives, the kind of collection of information that is a feature, not a bug. That's got to be a deliberate decision, because once you collect the information, as in the example of Alexa recording some domestic disturbance, it's going to be there for somebody else to get, using a lawful process or otherwise. So part of, again, the process of culture and education is always asking: why do we want to collect? Why do we want to hold? What are we connecting to? You can make an intelligent decision, but you've got to ask the question first.

>> Right. Although I heard an interesting twist on that one time. Even if you go through that analysis, and you say, okay, based on this, yes, yes, and this is why we're going to collect this data, what you don't know is what someone else might do with that data in a different scenario down the road. So even if you're a responsible steward, there's always a chance that something else could happen. So there's even kind of a double whammy.

>> I mean, this is one of the byproducts people talk about with big data. It's a techie term, but people talk about a data lake, where we're collecting this, and this, and that. In and of itself, it's not sensitive information. But if you connect different breadcrumbs about a person's activity and identity, wow, all of a sudden that could be incredibly sensitive.

>> Right.
>> So that's one of the issues we've been dealing with in the tech community: how to enable us to collect that information and make good decisions from it, while understanding the resulting security issues.

>> Yeah, that's a fascinating issue, because I think what a lot of people don't understand is that although the individual items collected may seem fairly benign, the ability to aggregate and store that amount of data is huge. A perfect example: people are always walking around taking selfies, or pictures, or putting things on their social media, and third parties and everybody get into that. Normally you'd say, "That's fine, somebody took a picture of me, it's going to be in their house or whatever, who cares." But if it's all up in the cloud, and someone has the ability to aggregate all of that, and all of a sudden gather every photograph ever taken of me, every mention of me, every interaction I've had, then unbeknownst to me, someone could really get a 24/7 picture of my whole life. So how do you deal with those issues? Some of these are legal questions, some are technical questions, but I do think we're on the cusp of having some serious conversations about this.

>> So they're going to come yank you guys back into the conference, so thank you for taking a few minutes to sit down with us. I just want to wrap up, again, with the board. We've talked about things happening now and in the recent past; as you talk to boards and look forward, what's your takeaway for them? You've sat around, you've talked about all this crazy, scary stuff; as you tell them to look forward, what's your advice?

>> Well, if I could start with that: today we released some results from a study we did around this topic. What do boards really think about security? Is it discussed? Is it a boardroom competency? We interviewed over a hundred senior execs, a large percentage, forty percent, responding as board members. And what we found was a tale of two cities, two cyber cities. If you're in a large, public US company in what would be called critical infrastructure, finance, healthcare, telecom, then yes, the directors and the board are very well versed in cyber. It's been discussed, it's part of a risk-management program, and they have very good CSOs with good interaction with the board. Then there's everybody else. And I would say this actually reflects the boards that I sit on: cyber is not discussed, or maybe only in reaction to a breach, and then it's a technical discussion. Most directors self-report that they're not where they need to be on education. So then, just quickly, as a finish: what we launched today is a seven-point plan, a blueprint for directors, to guide areas where they can ask questions, document, and review, and kind of move them up their cyber-literacy curve.

>> The other thing I would say is this: I really sympathize with the small and medium enterprises, which simply don't have the money to invest in building up a whole standalone security operation. I think that pushes them more and more toward outsourcing some of these functions. Some of it is the cloud, because you put your data up there. Some of it is outsourcing the intelligence and information to know what's coming. It's managed services.
Because most of these smaller companies, even if their heart is in the right place, just don't have the scale to do what a major bank, for example, can do in terms of an operations center.

>> Yeah, I think that's such a big piece of the cloud story. Sitting through some of the James Hamilton Tuesday night sessions, if you ever get a chance to go to one, he talks about the investment, in infrastructure, security, networking, you name it, that Amazon can make at scale; nobody else, except a very small group of companies, can make that type of investment.

>> Exactly.

>> There's just not enough money. Alright, we'll leave it there for now. Really appreciate you stopping by. Great event, and thanks for having theCUBE.

>> Michael: Great, thanks for having us.

>> Okay, it's Michael, Jim, I'm Jeff, you're watching theCUBE. We'll be right back.

Published Date: Aug 25, 2017
