Chad Sweet & Reggie Brothers, The Chertoff Group | Security in the Boardroom


 

>> Hey, welcome back everybody. Jeff Frick here with theCube. We're in Palo Alto, California, at one of the Chertoff Group events. It's called Security in the Boardroom. They have these events all over the country, and this one is really about elevating the security conversation beyond the edge, and beyond CISOs, to the boardroom, which is really where the conversation needs to happen. And our next guest, I'm really excited to have. We've got Chad Sweet, he's the co-founder and CEO of the Chertoff Group. Welcome, Chad.

>> Great to be here.

>> And with him also Reggie Brothers, he's a principal at the Chertoff Group, and he spent a lot of time in Washington. Again, you can check his LinkedIn and find out his whole history; I won't go through it here. First off, welcome, gentlemen.

>> Thank you.

>> Thank you.

>> So, before we jump in a little bit: what are these events about? Why should people come?

>> Well, basically they're a forum in which we bring together both practitioners and consumers of security. Often it's around a pragmatic issue that the industry or government is facing, and this one, as you just said, is the priority of security, cybersecurity in particular, in the boardroom, which is obviously what we're reading about every day in the papers with the Petya and NotPetya and WannaCry attacks. These are, I think, teachable moments that are affecting the whole nation. And so this is a great opportunity for folks to come together in an intimate forum, and we welcome everybody who wants to come. Check out our website at chertoffgroup.com.

>> Okay, great. And the other theme here that we're hearing over and over is the AI theme, right?

>> Yeah.

>> We hear about AI and machine learning all over the place, and we're in Mountain View with self-driving cars driving all over the place, and Google tells me, like, "you're home now," and I'm like, "Ah, that's great." But there are much bigger fish to fry with AI, at a much higher level. And Reggie, you just came off a panel talking about some much higher level-- I don't know if issues is the right word, maybe issues is the right word, around AI for security. So I wonder if you can share some of those insights.

>> I think issues, challenges, are the right words.

>> Challenges, that's probably a better word.

>> Those are good words, because particularly when you're talking about security applications, whether corporate or government, the issue becomes trust. How do you trust that this machine has made the right kind of decision? How do you make it traceable? One of the challenges with current AI technology is that it's mostly based on machine learning, and machine learning tends to be kind of a black box: you know what goes in and you train what comes out, but that doesn't necessarily mean you understand what's going on inside the box.

>> Right.

>> So then if you have a situation where you really need to be able to trust the decision this machine is making, how do you trust it? What's the traceability? So, on the panel we started discussing that. Why is it so important to have this level of trust? You brought up autonomous vehicles; well, of course, you want to be able to trust your vehicle to make the right decision if it has to make one at an intersection. Who's it going to save? How you trust that machine becomes a really big issue. I think it's something the machine-learning community, as we learned on the panel, is really starting to grapple with.
So I think there's good news, but I think it's a question we have to make sure we ask ourselves when we're adopting these kinds of machine-learning AI solutions.

>> So, it's really interesting, the trust issue, because there are so many layers to it, right? We all get on airplanes and fly across the country all the time, right? And those planes are being flown by machines, for the most part. And at the same time, if you start to unpack some of these crazy algorithms, even if you could open up the black box, unless you're a data scientist with a PhD in some of these statistical methods, could you really understand it anyway? So how do you balance it? We're talking about the boardroom. What's the level of discovery? What's the level of knowledge that's appropriate, without necessarily being a full-fledged data scientist, the ones who are actually writing those algorithms?

>> So I think that's a challenge, right? Because the ways people are addressing this trust challenge are highly technical. People are building hybrid systems where you can get some type of traceability, but that's highly technical for the boardroom. One thing we did talk about, on the panel and even before it, on cybersecurity and governance, was the importance of being able to speak in a language that everyone-- that the layperson can understand. You can't just speak in computer-science jargon. You have to be able to speak to the person who's actually making the decision, which means you have to really understand the problem, because in my experience the people who can speak in the plainest language understand the problem the best. So these problems are things that can be explained; they just tend not to be, because they sit in this super-technical domain.

>> But you know, Reggie is being very humble. He's got a PhD from MIT and worked at the Defense Advanced Research--

>> Well, he can open the box.

>> He can open the box. I'm a simple guy from Beaumont, Texas, so I can kind of dumb it down for the average person. On the trust issue over time, and you just mentioned some of it, whether you use the analogy of a car, the boardroom, or a war scenario, it's the result. So you get comfortable. I have a Tesla, and the first time I let go of the wheel and let it drive itself was a scary experience, but when you actually see the result and get to experience the performance of the vehicle, that's when the trust can begin. And in a similar vein, in the military context, we're seeing automation start to take hold. The big issue will be that moment of ultimate trust, i.e. do you allow a weapon actually to have lethal decision-making authority? We just talked about that on the panel; that ultimate trust is not really something the military is prepared to extend today. There are only a couple of places, like the DMZ in North Korea, where we actually have a few systems for which, because there's such a short response time if they detect an attack, lethal authority is at least being considered. Those are the rare exceptions.
I think Elon Musk has talked about the threat of AI, and how, if we don't have some norms put around it, that trust can't develop, because there wouldn't be these checks and balances. So in the boardroom, in that last scenario, I think the boards are going to be facing these cyber attacks, and the more they experience, once an attack happens, how the AI is providing some immediate response, mitigation, and hopefully even prevention, that's where the trust will begin.

>> The interesting thing, though, is that the sophistication of the attacks is going up dramatically, right?

>> Chad: Yep.

>> Why do we have machine learning in AI? Because it's fast. It can react to a ton of data and move at speeds that we as people can't, such as in your self-driving car. And now we're seeing an increase in state-sponsored threats coming in. It's not just the crazy kid in the basement hacking away to show his friend; now they're trying to get much more significant information, trying to go after much more significant systems. So it almost follows, as in the North Korean example: when your time windows are shorter, when the assets are more valuable, and when the sophistication of the attacking party goes up, can people manage it? I would assume the people's role will continue to move further and further up the stack, with the automation taking an increasing piece of it.

>> So let's pull on that, right. If you talk to the Air Force, because the Air Force does a lot of work on autonomy, the DoD in general does, the Air Force has this chart showing that over time the resources dedicated to an autonomous machine will increase and the resources dedicated to a human will decrease, but only to a certain level. And that level is really governed by policy and compliance issues. So there's some level beyond which, because of policy and compliance, the human will always be in the loop. You just don't let the machine run totally open-loop. But the point is, it has to run at machine speed. So let's go back to your example, the high-speed cyber attacks. You need some type of defensive mechanism that can react at machine speed, which means at some level the humans are out of that part of the loop, but you still have to have the corporate board person, as Chad said, trust that machine to operate at machine speed, out of the loop.

>> On that human oversight, one of the things discussed on the panel was that, interestingly, AI can actually be used in training humans to upgrade their own skills. Right now in the Department of Defense, they do these exercises on cyber ranges, and there's about a four-month waiting period just to get on the ranges; that's how congested they are. And even if you get on, there's a limited pool of human talent, of human instructors who can simulate the adversary and oversee the exercise. So using AI to create a simulated adversary, and being able to do it in a gamified environment, is something that's increasingly going to be necessary to keep everyone's skills up, and to do it in real time, 24/7, against active threats that morph over time. That's really where we have to get our game up to.
So, watch for companies like Circadence, which are doing this right now with the Air Force, Army, and DISA, and also see them applying this, as Reggie said, in the corporate sphere, where a lot of folks will tell you today they're facing this asymmetric threat. They have a lot of tools, but they don't necessarily trust, or have the confidence, that when the balloon goes up, when the attack is happening, their team is ready. So being able to use AI to help simulate these attacks against their own teams means they can show the board: our people are at this level of tested readiness.

>> It's interesting, Hal's talking to me in the background as you're talking about the cyber threat, but there's another twist on that, right, which is that machines aren't tired, they didn't have a bad day, they didn't have a fight with the kids in the morning. So you've got that kind of human frailty, which machines don't have; that's not part of the algorithm, generally. But it's interesting to me that it usually comes down to, as with most things of any importance, not really a technical decision. The technical piece is actually pretty easy. The hard part is the moral considerations, the legal considerations, the governance considerations, and those are what ultimately drive the decision to go or no-go.

>> I absolutely agree. One of the challenges we face is what the level of interaction between the machine and the human should be, and how that evolves over time. People talk about the centaur model: the centaur, the mythical horse-and-human, where you have the same kind of pairing between machine and human, right? You want this seamless type of interaction, but what does that really mean, and who does what? What they've found is that machines have beaten, obviously, our human chess masters; they've beaten our Go masters. But the thing that seems to work best is when there's some level of teaming between the human and the machine. What does that mean? I think the challenge going forward is understanding where that frontier is, where the human and the machine have to have this really seamless interaction. How do we train for that? How do we build for that?

>> So, give me your last thoughts before I let you go. The chime is ringing; they want you back. As you look down the road, just a couple of years, I would never say more than a couple of years, and, you know, Moore's Law is not slowing down, whatever people will argue: chips are getting faster, networks are getting faster, data systems are getting faster, computers are getting faster, and we're all carrying around mobile phones, throwing off tons of digital exhaust, as are our systems. What do you tell people? How do boards react in this rapidly evolving, exponential-curve environment in which we're living? How do they not just freeze?

>> Well, if you look at it, to use a financial analogy: almost every board knows the basic foundational formula of accounting, which is assets equals liabilities plus equity. I think in the future, because no business today is immune from the digital economy, every business is being disrupted by it, and there are businesses that are underpinned by the trust of the digital economy.
So, every board going forward has to become literate in cybersecurity, and Artificial Intelligence will be part of that board conversation, and they'll need to learn the fundamental formula of risk, which is risk equals threat, times vulnerability, times consequence. So in the months ahead, part of what the Chertoff Group will be doing is playing a key role in helping to educate those boards and to facilitate these important strategic discussions.

>> Alright, we'll leave it there. Chad Sweet, Reggie Brothers, thanks for stopping by.

>> Thank you.

>> Thank you, appreciate it.

>> Alright, I'm Jeff Frick, you're watching theCube. We're at the Chertoff event, Security in the Boardroom. Think about it, and we'll catch ya next time.
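
A quick footnote on the formula Chad cites: one common way to make "risk equals threat, times vulnerability, times consequence" concrete is to score each factor per asset on a normalized scale and multiply. Below is a minimal Python sketch under that assumption; the 0-to-1 scoring scale, the asset names, and the numbers are all illustrative, not anything from the interview.

    # Minimal sketch of risk = threat x vulnerability x consequence.
    # The assets and scores below are hypothetical illustrations.

    def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
        """Each factor is scored from 0.0 (absent) to 1.0 (maximal)."""
        return threat * vulnerability * consequence

    # Hypothetical asset register: (threat, vulnerability, consequence)
    assets = {
        "customer-database": (0.8, 0.6, 0.9),  # prized target, high business impact
        "marketing-website": (0.7, 0.5, 0.2),  # frequently probed, low consequence
    }

    # Rank assets from highest to lowest risk.
    for name, (t, v, c) in sorted(assets.items(), key=lambda kv: -risk_score(*kv[1])):
        print(f"{name}: risk = {risk_score(t, v, c):.2f}")

Multiplying rather than adding captures the point of the formula: if any one factor drops to zero, say there is no vulnerability left to exploit, the risk drops to zero no matter how capable the threat actor or how severe the potential consequence.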

Published Date: Aug 25, 2017

