
CI/CD: Getting Started, No Matter Where You Are


 

>>Hello, everyone. My name is John Jainschigg; I work for Mirantis. I'm here this afternoon, very gratefully, with Anders Wallgren, VP of technology strategy for CloudBees, a Mirantis partner and a well-known company in the space we're going to be discussing. Anders is also a well-known figure in this space, which is continuous integration and continuous delivery. You've already seen some sessions today that focus on specific implementations of continuous integration and delivery, particularly around security, and we think this is a critically important topic for anyone in the cloud space, particularly in this increasingly complicated Kubernetes space, to understand. Mirantis thinks, if I can recapitulate our own strategy and language, that with complexity and uncertainty consistently increasing, and with the technology stacks we have to deal with consistently elaborating themselves, navigating all this requires, first, implementation of automation to increase speed, which is what CI and CD do, and second, that this speed be leveraged to let us ship and iterate code faster, since that's ultimately the business all of us are in, one way or another. I'd like to open this conversation by asking Anders what he thinks of that core strategy. >>You know, hitting the security thing right off the bat: security doesn't happen by accident. Security is not something that, like a server in a restaurant sprinkling a little Parmesan cheese right before they bring you the food, you sprinkle on at the end. It's something that has to be baked in from the beginning, not just in the kitchen but in the supply chain, from the very beginning.
So security is a feature, and if you don't build it in, you're going to get an outcome you're not happy with. It's obviously increasingly important and increasingly visible. The kinds of security problems we see these days can be life-altering for the people subject to them, and can be life-or-death for a company that's exposed to them. So it's very, very important to pay attention to it and to work to achieve it as an explicit outcome of the software delivery process. And I think CI and CD — as process, as tooling, as culture — play a big part in that, because a lot of it has to do with setting things up right and running them the same way over and over: get the machine going, turn the crank. Now, you want to make improvements over time; it's not just "set it and forget it — we got that set up, we don't have to worry about it anymore." But it really is a question of getting the human out of the loop a lot of the time, because if you're dealing with configuring complex systems, you want to make sure you get them set up, configured, and documented — ideally as code, whether in a domain-specific language or something like that. Then that becomes something you can test against, that you can verify against, that you can diff against, and that becomes the basis for your pipelines, for your automation around the software factory floor. So automation is a key aspect of that, because it takes a lot of the drudgery out, for one thing — now the humans have more time to spend on the creative things, on the things we're good at as humans — and it also takes advantage of the fact that one of the things computers are really good at is doing the same thing over and over and over. That puts the responsibility in the hands of the entity that knows how to do it well, which is the machine. It's a deep, deep topic, obviously, but automation plays into it; small batch sizes play into it; being able to test very frequently plays into it — whether that's testing in your CI pipeline, where you're mostly doing builds and unit testing, maybe some integration testing, but also layering in the more serious kinds of testing: security scanning, penetration testing, vulnerability scanning. You could do those on every single CI build, but most people don't, because those things tend to take a little longer, and you want your CI cycle to be as fast as possible, because that's really in service of the developer who has committed code and wants to see the thumbs-up from the system. So most organizations are focusing on making sure there's a follow-on pipeline, a follow-on set of tests, that happens after CI passes successfully, and that's where a lot of the security scanning and those sorts of things happen. >>It's an interesting problem.
I mean, you mentioned what almost sounds like a Lawrence Lessig-ian kind of idea — that code is law. In enterprises today, code, particularly CI code, ends up being policy. But at the same time, it seems to me there's an alternative peril, which is that as you increase speed — particularly as you become more and more dependent on things like containers and layering technology to provide components and capabilities you don't have to build yourself — there are new vulnerabilities that can potentially creep in, and can creep in despite automation, or at least despite first-order automation's attempts to prevent them from creeping in. You don't want to freeze people on a six-month-old version of a key container image; but on the other hand, if the latest version has vulnerabilities, that could be a problem. >>Yeah, it's a double-edged sword; it's two sides of the same coin. When I talk to a lot of security people — people who do it for a living, as opposed to me, who just talks about it, which isn't completely true — a lot of the time the problem is old vulnerabilities. The thing that keeps a lot of people up at night isn't necessarily the thing at the tip of the releases of some well-known open-source library. The vast majority of the time — I want to say 80 or 85 percent of the time — the vulnerabilities you get hosed by are ones that have been known about for years. So on that two-sided coin, if I had to pick, I would say: be aggressive in making sure your third-party dependencies are updated frequently and continuously, because stale dependencies are the biggest cause of security vulnerabilities when it comes to third-party code.
Now, there's the famous saying, "move fast and break things" — well, there are certain things you don't want to break. You don't want to break a radiation machine that's going to deliver radiotherapy to someone, because that will endanger their health. Those sorts of systems are naturally subject to a bit more caution, scrutiny, rigor, and process. The microservice I run that shows my little avatar when I log in — that one probably gets a little less rigor, and rightfully so. Somebody once said on a panel I was on at an RSA conference — and it was a wise thing to say — don't spend a million dollars protecting a five-dollar asset. You want to be smart; you want to figure out where your vulnerabilities are going to come from. In my experience, and from what I hear from a lot of security professionals: pay attention to your supply chain. You want to make sure you're up to date with the latest patches of all your third-party code — open source or closed source, it doesn't really matter. If anything, open source is more open, so you can inspect things a little better than with closed source, but with both kinds of code streams that you consume and use, you want to be more up to date rather than less; that will generally be better. Now, can a new version of a library cause problems — introduce bugs, those sorts of things? Yes. That's why we have tests; that's why we have automated tests, regression suites, those sorts of things.
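The discipline Anders describes — bump a dependency, run the regression suite, and only keep the bump if the tests pass — can be sketched in a few lines. This is a hypothetical illustration: the pin dictionary and `run_tests` hook are stand-ins, not any specific tool's API.

```python
# Sketch: try a dependency update, keep it only if the regression suite passes.
# The pin dict and run_tests() callback are hypothetical stand-ins.

def try_update(pins: dict, library: str, new_version: str, run_tests) -> bool:
    """Bump `library` to `new_version`; revert the pin if the tests fail."""
    old_version = pins[library]
    pins[library] = new_version
    if run_tests(pins):
        print(f"kept {library} {old_version} -> {new_version}")
        return True
    pins[library] = old_version          # roll back: tests caught a regression
    print(f"reverted {library} to {old_version}; investigate before shipping")
    return False

# Example: a fake test suite that rejects one known-bad release.
pins = {"somelib": "1.0.3"}
suite = lambda p: p["somelib"] != "1.0.7"
try_update(pins, "somelib", "1.0.10", suite)   # tests pass, pin advances
try_update(pins, "somelib", "1.0.7", suite)    # tests fail, pin rolls back
```

The point of the sketch is the rollback branch: a failed update costs you an investigation, not a production incident — which is exactly the trade Anders argues for.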
So you want to live in a world where you have the confidence, as a developer, that if I update this library from, say, 1.0.3 to 1.0.10 to pick up a bunch of bug fixes and patches, and it breaks something, the test suites that run against it ought to cover that functionality. I'd rather be in the world of "oh, we tried to update to that, but it broke the tests," and then have to go spend time on that, than say "it broke the tests, so let's not update," and then six months later find out: oh, geez, there was a problem in 1.0.3, and it was fixed in 1.0.4 — if only we had updated. If you look at some of the highest-profile security breaches out there that you can trace to third-party libraries, it's almost always that the library was out of date and hadn't been patched. So that's my opinionated take on that. >>What are the parts of modern CI/CD, as opposed to what one would encounter five or six years ago — before, if we can imagine it, the microservices and containers revolution really took off? >>You know, I think you're absolutely right that the whole world is not doing CI yet, and certainly the whole world is not doing CD yet. As you say, we live in a bit of an ivory tower — an echo chamber, a bit of a bubble — as vendors in this space. The truth is, I would say less than 50 percent of the software organizations out there do real CI, and for real CD the number's probably less than that.
I don't have anything to back that up other than that I talk to a lot of folks and work with a lot of organizations, and it's "yeah, that team does CI; that team does weekly builds" — it's really all over the place. And a lot of the time there's, in my experience, a high correlation between how long a team or a code base has been around and the amount of modern technology, process, and so on that's brought to it. That makes sense: if you're starting greenfield, with a blank sheet of paper, you're going to adopt the technologies, processes, and cultures of today, not of five, ten, fifteen years ago. But most organizations are moving in that direction. What's really changed in the last few years is the level of integration between the various tools and pieces, and the amount of automation you can bring to bear. I remember, five or ten years ago, having all kinds of conversations with customers and prospects and people at conferences, and they'd say: "Oh, we'd like to automate our software development life cycle, but we can't. We have a manual thing here, a manual thing there; we do this kind of testing that we can't automate; and then we have this system, but it doesn't have an API, so somebody has to sit and click on the screen." And I used to say: I don't accept no for an answer to "can you automate this?" Anything can be automated — even if you just get the little drinking bird that pokes the mouse every so often.
You can automate it. Actually, I had one customer who said okay, and we had a discussion, and they said: "Well, we have this old Windows tool. It's an obscure tool, no longer updated, but it's used in a critical part of the life cycle, and it can't be automated." And I said, well, just install one of those Windows tools that lets you peek and poke at the mouse and the screen — so I don't accept your answer. And they said: "Unfortunately, security won't allow us to install those tools." So I had to accept no at that point. But I think one of the biggest changes in the last few years is that systems now all have APIs, and they all talk to each other. If you've got a scanning tool, a deployment tool, a deployment infrastructure — Kubernetes-based, or sitting in front of or around Kubernetes — these things all talk to each other and are all automated. So one of the things that's happened is we've taken out a lot of the wait states, a lot of the pauses. If you do something like a value-stream mapping — and I'll date myself here and probably lose some of the audience with this analogy, but if you remember the Schoolhouse Rock cartoons of the late seventies and early eighties, there was one, one of my favorites (the guy who did the music for it passed away last year, sadly), called "How a Bill Becomes a Law," and they personified the bill.
The bill becomes a little person: first it gets passed by the House, then the Senate, and then the president either signs it or vetoes it. What I always talk about with respect to value-stream mapping and examining your processes is: put a GoPro camera on your source code's head, and follow that source code all the way through to your customer. Understand all the stuff that happens to it — including nothing, because a lot of that elapsed time, nothing is happening. If we built cars the way we build software, we would install the radio in a car, park it in a corner of the factory for three weeks, and then maybe remember to test the radio before we ship the car out to the customer — because that's how a lot of us still develop software. One thing that's changed in the last few years is that we no longer have these pauses of "well, we did the build, so now we're waiting for somebody to create an environment, rack up some hardware, install an operating system, install this, that, and the other." That went from manual, to "we use Chef or Puppet to do it," to "we use containers to do it," to "we use containers and Kubernetes to do it." Whole swaths of elapsed time in our software development life cycles basically went to nothing, to the point where we can configure things way to the left and follow them all the way through. And the artifact we're delivering isn't necessarily an executable — it could be a container. Now that starts to get interesting in terms of being able to test against that container, scan against that container, diff against that container. It does bring complexity, too — now you've got a layered file system in there, and what all is in there? There are tools for scanning those kinds of things. But I think one of the biggest things that's happened is that a lot of the natural pause points are no longer natural pause points; they're unnatural pause points, and they're now just delays in your software delivery. So what a lot of organizations are working on is getting those sorts of things automated and connected — and that's now possible, and it wasn't five or ten years ago. >>So it sounds like a great deal of the speed benefit — which has been quantified many different ways, but which, as we've all experienced once one of these systems is working, is enormous — is actually achieved by collapsing out what would have been unused time in a prior process, or because stuff that wasn't parallelizable has been made parallel. >>I remember spending some time with a customer; they did a value-stream mapping, and they found out at the end that of the 30 days of elapsed time, they were spending three days on task. Everything else was waiting: waiting for a build, waiting for an install, waiting for an environment, waiting for an approval, having meetings, those sorts of things. And I thought to myself: oh my goodness, 90 percent of the elapsed time is doing nothing. I was talking to someone — Gene Kim, actually — and I said, "Oh my God, isn't it terrible? These people are screwed." And he said: "90 percent? That's actually pretty good."
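The arithmetic behind that anecdote — flow efficiency as touch time divided by elapsed time — is worth making explicit:

```python
# Flow efficiency: the fraction of elapsed time spent actually working on a change.
def flow_efficiency(task_days: float, elapsed_days: float) -> float:
    return task_days / elapsed_days

# The customer in the anecdote: 3 days of work inside 30 days of elapsed time.
efficiency = flow_efficiency(task_days=3, elapsed_days=30)
wait_fraction = 1 - efficiency
print(f"touch time: {efficiency:.0%}, waiting: {wait_fraction:.0%}")  # 10%, 90%
```

Ninety percent wait time is why the biggest CI/CD wins come from removing pauses rather than from doing the work itself faster.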
So I think, if you look today at the teams that are doing really pure continuous delivery — write some code, commit it; it gets picked up by the CI system and passes through CI; it goes through whatever post-CI processing you need, security scanning and so on; it gets staged; and it gets pushed into production — that can happen in minutes. That's new; that's different. Now, if you do that without having the right automated gates in place around security and those sorts of things, then you're living a little dangerously — although I would argue not necessarily any more dangerously than just letting that insecure code sit around for a week before you ship it. It's not as if that problem is going to fix itself while it sits there. But you definitely operate at a higher velocity, and that's a lot of the benefit you're trying to get: you can get stuff out to the market faster, or, if you take a little more time, you get more out to the market in the same amount of time. You can turn around and fix problems faster. If you have a vulnerability, you can get it fixed and pushed out much more quickly; if you have a competitive threat you need to address, you can move that much faster; if you have a critical bug — all security issues are bugs, sort of by definition, but say a functionality bug — you can get that pushed out faster. So I think all facets of the business benefit from this increase in speed. And developers do, too, because any human who context-switches and steps away from something for longer than a few minutes is going to have to load it all back up again.
And so that's productivity loss. It's a soft cost, but man, is it expensive, and is it painful. So you see a lot of benefit there, I think. >>If you have an organization that is just starting this journey, what would you ask that organization to consider, in order to move them down this path? >>By far the most frequent — and almost always the first — question I get at the end of a talk or a presentation is: where do we start? How do I know where to start? And there are a couple of answers to that. One is: don't boil the ocean. Don't try to fix everything all at once, because that's not agile, right? Be agile about your transformation. Pick a set of problems that you have, basically make a burn-down list, and do them in order. Find a pain point you have and just go address it, and try to make it small and actionable — especially early on, when you're trying to effect change and convince teams that this is the way to go. You may have naysayers, or people who are skeptical, or who have been through these processes before when they were failures rather than the successes they were supposed to be, so it's important to have some wins. What I always say is: if you have a pebble in your shoe, you've got a pain point, and you know how to address it. You're not going to address it by changing out your wardrobe or buying a new pair of shoes; you're going to address it by taking your shoe off, shaking it until the pebble falls out, and putting the shoe back on. So look for those kinds of use cases. If your engineers are complaining that "whenever I check in, the build is broken," and you're not doing CI — well then, let's look at doing CI. Let's do CI, if you're not doing that.
For most organizations, setting up CI is a very manageable, very doable thing. There's lots of open-source tooling out there, and lots of commercial tooling, for small teams, large teams, and everything in between. If the problem is "gosh, every time we push a change, we break something," or "every time something works in staging, it doesn't work in production," then you've got to look at how those systems are being configured. If you're configuring them manually: stop — automate their configuration. If you're fixing systems manually — as a friend of mine says, don't repair, repave. There's a story about how Google operates its data centers: they don't go looking for a broken disk drive and swap it out when it breaks. They just have a team of people who, once a month or something — I don't know what the interval is — walk through the data center, pull out all the dead hardware, and throw it out. What they did was assume that, at the scale they operate, things are always going to break — physical things are always going to break — so you have to build the software to assume that breakage. Any system that assumes we're going to step in when a disk drive breaks, and fix it so we can get back to running, just isn't going to work at scale. There's a parallel to that in software: any time you have these kinds of complex systems, you have to assume they're going to break, and you have to put the things in place to catch those breakages.
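That first safety net doesn't need to be elaborate: a single automated check that the system's most basic behavior still holds is enough to start. A minimal sketch — `checkout_total` here is a hypothetical stand-in for whatever your system actually does:

```python
# A first automated regression test: tiny, but it runs the same way every time,
# which is exactly what machines are good at. checkout_total is a made-up
# stand-in for some real piece of your system.

def checkout_total(prices, discount=0.0):
    """Sum a cart and apply a fractional discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

def test_checkout_total():
    assert checkout_total([]) == 0
    assert checkout_total([10.00, 5.50]) == 15.50
    assert checkout_total([100.0], discount=0.25) == 75.0

test_checkout_total()  # wire this into CI so a failure blocks the pipeline
```

Once even one test like this runs on every commit, the "assume it will break, catch it automatically" posture has begun.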
That means automated testing — whether you have 10,000 tests you've already written, or you have no tests and you just need to go write your first one. That journey has to start somewhere. But my answer to the question is generally: just start small. Pick a very specific problem, build a plan around it, build a burn-down list of things you want to address, and start working your way down it, the same way you would for any agile project. For the transformation of your own processes and your own internal systems, you should use agile processes as well — because if you go off for six months and build something, by the time you come back, it's going to be relevant, probably, to the problems you were facing six months ago. >>Then let's consider the situation of a company that's using CI, or maybe CI and CD together, and wants to reach what you might call the next level. They've seen obvious benefits; they're interested in increasing their investment in, and the cycles devoted to, this technology. You don't have to sell them anymore, but they're looking for a next direction. What would you say that direction should be? >>I think oftentimes what organizations start to do is look at feedback loops. And that starts to go into the area of metrics and analytics. We're always concerned with things like mean time to recovery and mean time to detection; what are our cycle times from ideation to code commit, from code commit to production, those sorts of things. You can't change what you don't measure, so a lot of the time the next step, after getting the rudiments of CI or CD or some combination of both in place, is to start to measure.
Start to measure stuff. But there, I think, you've got to be smart about it, because what you don't want to do is just pull out all the metrics that exist, barf them up on a dashboard and giant television screens, and say, "Boom — metrics." Mic drop, go home. That's the wrong way to do it. You want to use metrics very specifically, to achieve outcomes. If you have an outcome you want to achieve and you can tie it to a metric, start looking at that metric and start working that problem. Once you've solved that problem, you can take that metric — if it's the one you're showing on the big-screen TV — pop it off, pick the next one, and put that up there. It's a little different when you're in a NOC or something like that, looking at network stuff, but I'm always leery when I walk into a software development organization and there are a bazillion different metrics all over the place, because they're not all relevant — not all relevant at the same time, anyway. Some of them you want to look at often; some of them you just want to set an alarm on. I mean, you don't go down to your basement every day to check that the sump pump is working. What you do is put a little water detector down there and have an alarm go off if the water level ever rises above a certain amount. You want to do the same thing with metrics: once you've got the water out of your basement, you don't have to go down and look at it all the time — you put the little detector in, and then you move on and worry about something else. So as organizations get a bit more sophisticated and start to look at analytics and metrics, they start to say: hey, our cycle time from commit to deploy is this much, and we want it to be this much.
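The sump-pump idea — stop watching the graph, set an alarm on a threshold — translates directly into a pipeline gate. A sketch, with a made-up metric name and made-up figures for illustration:

```python
# Sketch of a "detector, not dashboard" gate: once a metric target is achieved,
# stop displaying it and just fail the pipeline if it regresses.
# The 80% coverage floor and the measured values are invented for illustration.

class MetricGate:
    def __init__(self, name: str, threshold: float):
        self.name = name
        self.threshold = threshold

    def check(self, measured: float) -> bool:
        """Return True if the metric is still above its floor; alarm otherwise."""
        if measured >= self.threshold:
            return True
        print(f"ALARM: {self.name} fell to {measured:.1%} "
              f"(floor {self.threshold:.1%}) - failing the build")
        return False

coverage_gate = MetricGate("code coverage", threshold=0.80)
coverage_gate.check(0.84)              # above the floor: nothing to look at
build_ok = coverage_gate.check(0.76)   # regression: the alarm fires
```

In a real pipeline, a False result would fail the build or the pipeline stage, exactly as Anders describes, freeing the dashboard for whatever metric you're improving next.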
What happens during that time, and where can we take slices out of it without affecting the outcomes in terms of quality and so on? Or, from ideation to code commit — what can we do there? You start to do that, and as you get those virtuous cycles of feedback loops happening, you get better and better. But you want to be careful with metrics. Like I said, you don't want to barf up a bunch of metrics just to say, "Look, we've got metrics." Metrics are there to serve a particular outcome, and once you've achieved that outcome, and you know you can continue to achieve it, you turn the metric into an alarm or a trigger and put it out of sight. You don't need a code-coverage metric prominently displayed: you pick a coverage number you're happy with, you work to achieve it, and once you achieve it, you just worry about not going below that threshold again. So you can take that graph off the wall and put a trigger on it — if we ever get below this, raise an alarm, or fail a build, or fail a pipeline — and then start to focus on improving another metric, or another outcome, using another metric. >>That makes enormous sense. So I'm afraid we're out of time. I want to thank you very much, Anders, for joining us today. This has certainly been informative for me, and I hope for the audience. Thank you very, very much for sharing your insight.

Published Date : Sep 15 2020


Securing Your Cloud, Everywhere


 

>>Welcome to our session on security, titled Securing Your Cloud, Everywhere. With me is Brian Langston, senior solutions engineer from Mirantis, who leads security initiatives for Mirantis's most security-conscious customers. Our topic today is security, and we're setting the bar high by talking in some depth about the requirements of the most highly regulated industries. So, Brian, for regulated industries, what do you perceive as the benefits of the evolution from classic infrastructure-as-a-service to container orchestration? >>Yeah, the adoption of container orchestration has given rise to five key benefits. The first is accountability. Think about the evolution of DevOps and the security-focused version of that team, DevSecOps. These two competencies have emerged to provide, among other things, accountability for the processes they oversee and the outputs that they enable. The second benefit is auditability. Logging has always been around, but the pervasiveness of logging data within container environments allows for the definition of audit trails in new and interesting ways. The third area is transparency. Organizations that have well-developed container orchestration pipelines are much more likely to have a higher degree of transparency in their processes. This helps development teams move faster, it helps operations teams identify and resolve issues more easily, and it helps simplify the observation and certification of security operations by security organizations. Next is quality. Several decades ago, Toyota revolutionized the manufacturing industry when they implemented the philosophy of continuous improvement. Included within that philosophy was a dependency and trust in the process: as the process was improved, so was the quality of the output. Similarly, the refinement of the process of container orchestration yields a higher-quality output.
The four things I've mentioned ultimately point to a natural outcome, which is speed. When you don't have to spend so much time wondering who does what or who did what, when you have clear visibility into your processes, and because you can continuously improve the quality of your work, you aren't wasting time in a process that produces defects or spending time in wasteful rework phases. You can move much faster, and we've seen this to be the case with our customers. >>So what is it specifically about container orchestration that gives these benefits? I guess I'm really asking, why are these benefits emerging now, around these technologies? What's enabling them? >>Right. So I think it boils down to four things related to the orchestration pipelines that are also critical components of successful security programs for our customers in regulated industries. The first one is policy. One of the core concepts in container orchestration is this idea of declaring what you want to happen, or declaring the way you want things done. One place where declarations are made is policies. So as long as we can define what we want to happen, it's much easier to do complementary activities like enforcement, which is our second enabler. Tools that allow you to define a policy typically have a way to enforce that policy; where this isn't the case, you need to have a way of enforcing and validating the policy's objectives. Mirantis tools allow custom policies to be written and also enforce those policies. The third enabler is the idea of a baseline. Having a well-documented set of policies and processes allows you to establish a baseline; it allows you to know what's normal. Having a baseline allows you to measure against it as a way of evaluating whether or not you're achieving your objectives with container orchestration. The fourth enabler is continuous assessment, which is about measuring constantly, back to what I said a few minutes ago.
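The declare-enforce-baseline-assess loop Brian describes maps naturally to a few lines of code. The sketch below is purely illustrative — the control names and dict shapes are invented for the example, not a Mirantis or Kubernetes API: declare a baseline policy, then continuously compare observed state against it and report deviations.

```python
# Illustrative sketch of the four enablers: policy, enforcement,
# baseline, and continuous assessment. All names are hypothetical.

# 1. Policy: declare what you want to happen.
baseline_policy = {
    "image_scanning": True,       # images must be scanned before promotion
    "privileged_containers": False,
    "allowed_registries": ["dtr.example.com"],
}

def assess(observed: dict, policy: dict) -> list:
    """Continuous assessment: report every deviation from the baseline."""
    deviations = []
    for control, expected in policy.items():
        actual = observed.get(control)
        if actual != expected:
            deviations.append((control, expected, actual))
    return deviations

# 2. Enforcement: an empty deviation list means the environment is compliant.
observed_state = {
    "image_scanning": True,
    "privileged_containers": True,            # drift from the baseline
    "allowed_registries": ["dtr.example.com"],
}

for control, expected, actual in assess(observed_state, baseline_policy):
    print(f"DEVIATION: {control}: expected {expected!r}, observed {actual!r}")
```

Run continuously (for example, on every pipeline execution), this is the "measure constantly against a baseline" idea in miniature: the moment observed state drifts, the delta is visible and actionable.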
As with the Toyota Way, measuring constantly helps you see whether your processes and your target end state are being delivered. As your output deviates from that baseline, your adjustments can be made more quickly. So these four concepts, I think, can really make or break your compliance status. >>It's a really interesting way of thinking about compliance. I had previously thought about compliance mostly as a matter of legally declaring and then trying to do something. But at this point, we have methods beyond legal boilerplate for asserting what we want to happen, as you say, and this is actually opening up new ways to detect deviation and enforce against failure to comply. That's really exciting. So, you've touched on the benefits of container orchestration here, and you've provided some thoughts on what the drivers and enablers are. So where does Mirantis fit in all this? How are we helping enable these benefits? >>Right. Well, our goal at Mirantis is ultimately to make the world's most compliant distribution. We understand what our customers need, and we have developed our product around those needs, and I can describe a few key security aspects of our product. Mirantis promotes this idea of building and enabling a secure software supply chain. The simplified version of that, as it pertains directly to our product, follows a build-ship-run model. So at the build stage is Docker Trusted Registry. This is where images are stored following numerous security best practices. Image scanning is an optional but highly recommended feature to enable within DTR, and image tags can be regularly pruned so that you have the most current validated images available to your developers. The second, or middle, stage is the ship stage, where Mirantis enforces policies that follow industry best practices, as well as custom image promotion policies that our customers can write and align to their own internal security requirements.
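A custom image promotion policy of the kind just described can be pictured as a simple gate: an image moves from the development repository toward production only if its scan results are clean. This is a hedged sketch with invented data shapes, not DTR's actual promotion-policy syntax:

```python
# Sketch of a ship-stage promotion gate. The scan-result dicts below are
# hypothetical stand-ins for real image-scan output.

def may_promote(scan_result: dict, max_critical: int = 0, max_high: int = 0) -> bool:
    """Block promotion when the image carries serious vulnerabilities."""
    return (scan_result.get("critical", 0) <= max_critical
            and scan_result.get("high", 0) <= max_high)

images = {
    "app:1.4.2": {"critical": 0, "high": 0, "medium": 3},
    "app:1.4.1": {"critical": 1, "high": 2, "medium": 5},
}

# Only images that pass the gate are promoted toward production.
promoted = [tag for tag, scan in images.items() if may_promote(scan)]
```

In DTR itself, promotion policies are configured declaratively per repository; the point of the sketch is just the shape of the rule — thresholds on scan findings decide whether the ship stage proceeds.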
The third and final stage is the run stage, and at this stage we're talking about the engine itself. Docker Engine Enterprise is the only container runtime with FIPS 140-2 validated cryptography, and it has many other security features built in. Communications across the cluster, across the container platform, are all secure by default. So this build-ship-run model is one way our products help support the idea of a secure supply chain. There are other aspects of the secure supply chain that are more customer-specific that I won't go into, but that's how our product can help. The second big area, so I just touched on the secure supply chain, is STIG certification. A STIG is basically an implementation or configuration guide, published by the U.S. government for products used by the U.S. government. It's not exclusive to them, but customers that value security highly, especially in a regulated industry, will understand the significance and value that the STIG certification brings. So in achieving the certification, we've demonstrated compliance or alignment with a very rigid set of guidelines. Our FIPS validation of the cryptography and the STIG certification are third-party attestations that our product is secure, whether you're using our product as a government customer, whether you're a customer in a regulated industry, or something else. >>I did not understand what the STIG really was, so that's helpful, because this is not something that people in the industry by and large talk about. I suspect because these things are hard and time-consuming to get, they don't tend to bubble up to the top of marketing speak the way glitzy new features do, features that may or may not >>be secure. >>So then, moving on: how has container orchestration changed how your customers approach compliance assessment and reporting?
>>Yeah, this has been an interesting experience and observation as we've worked with some of our customers in these areas. I'll call out three areas. One is the integration of assessment tooling into the overall development process. The second is assessment frequency. And then the third is how results are being reported, which includes what data is needed to go into the reporting. There are very likely others that could be addressed, but those are three things that I have noticed personally in working with customers. >>What do you mean exactly by integration of assessment tooling? >>Yeah. So our customers all generally have some form of a development pipeline and process, with various third-party and open-source tools that can be inserted at various phases of the pipeline to do things like static source code analysis, host scanning, image scanning, and other activities. What's not very well established in some cases is how everything fits within the overall pipeline framework. So for too many customers, it ends up being a conversation with us about: what commands should be run, and with what permissions? Where in the environment should things run? How does the code that does this scanning get there? Where does the data go once the scan is done, and how will I consume it? These are all things where we can help our customers understand what integration of assessment tooling really means. >>It is fascinating to hear this, and maybe we can come back to it at the end. But what I'm picking out of this, in the way you speak about this conversation, is a kind of re-emergence of the Japanese innovations in productivity, in factory-floor productivity, like just-in-time delivery and the Toyota miracle, that kind of stuff. Fundamentally, just yesterday, Anders Wallgren from CloudBees, of course, the CI/CD expert, told me that one of the things he likes to tell his consultees and customers is to put a GoPro on the head of your code and figure out where it's going and how it's spending its time, which is very reminiscent of those 1950s time-and-motion studies, isn't it, that pioneered accelerating the factory floor in the industrial America of the mid-century? The idea that we should be coming back around to this and doing it at light speed with code now is quite fascinating. >>Yeah, it's funny how many of those same principles are really transferable from 50, 60, 70 years ago to today. Quite fascinating. >>So getting back to what you were just talking about, integrating assessment tooling, it sounds like that's very challenging. And you mentioned assessment frequency and reporting. What is it about those areas that's required adaptation? >>So, assessment frequency. In legacy environments, if we think about what those looked like not too long ago, compliance assessment used to be a relatively infrequent activity in the form of some kind of an audit, whether a friendly peer review, an intercompany audit, or a formal third-party assessment. In many cases, these were big, lengthy reviews full of interview questions, requests for information, periods of data collection, and then the actual review itself. One of the big drawbacks to this lengthy, infrequent engagement is that vulnerabilities would sometimes go unnoticed or unmitigated until these reviews caught them. But in this era of container orchestration, with the decomposition of everything in the software supply chain and with clearer visibility of the various inputs to the build life cycle, our customers can now focus on what tooling and processes can be assembled together in the form of a pipeline that allows constant inspection of a continuous flow of code from start to finish.
And they're asking how our product can integrate into their pipeline, into their QA frameworks, to help simplify this continuous assessment framework. So that addresses the frequency challenge. Now, regarding reporting: our customers have had to reevaluate how results are being reported and the data that's needed in the reporting. The root of this change is the fact that security has multiple stakeholder groups, and I'll just focus on two of them. One is development, and their primary focus, if you think about it, is really about finding and fixing defects. That's all they're focused on, really; they're pushing code. The other group, though, is the Security Project Management Office, or PMO. This group is interested in what security controls are at risk due to those defects. So the data that you need for these two stakeholder groups is very different. But because it's also related, it requires a different approach to how the data is expressed, formatted, and ultimately integrated, sometimes with different data sources, to be able to serve both use cases. >>Mhm. So how does Mirantis help improve the rate of compliance assessment, as well as address this need for differential data presentation? >>Right. So we've developed and exposed APIs that help report the compliance status of our product as it's implemented in our customers' environments. Through these APIs, we express the data in industry-standard formats, such as OSCAL. OSCAL is a relatively new project out of the NIST organization; it's really all about standardizing a set of formats that express control information. So in this way our customers can get machine- and human-readable information related to compliance, and that data can then be massaged into other tools or downstream processes that our customers might have.
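The "one data set, two stakeholder views" idea can be sketched as follows. The findings format here is a simplified, invented stand-in for machine-readable scan output; real OSCAL documents are considerably richer.

```python
# Toy findings list: each finding is a code-level defect that puts one or
# more security controls at risk. The control IDs are illustrative.
findings = [
    {"id": "F-1", "component": "api-server", "defect": "TLS 1.0 enabled",
     "controls": ["AC-17", "SC-8"]},
    {"id": "F-2", "component": "worker", "defect": "world-writable config",
     "controls": ["AC-6"]},
]

# Development view: defects to fix, grouped by component.
dev_view = {}
for f in findings:
    dev_view.setdefault(f["component"], []).append(f["defect"])

# Security PMO view: which controls are at risk, and which findings cause it.
pmo_view = {}
for f in findings:
    for control in f["controls"]:
        pmo_view.setdefault(control, []).append(f["id"])
```

Same source data, two projections: developers see a fix list per component, while the PMO sees control coverage at risk — which is exactly why a standard, machine-readable findings format matters.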
And what I mean by downstream processes is: if you're a development team and you have the inspection tools and the processes to gather findings, defects related to your code, a downstream process might be a ticketing system like Jira that logs a formal defect for that finding. But it all starts with having a common, standard way of expressing the scan output and the findings, such that both the development teams and the security PMO groups can benefit from the data. So essentially we've been following this philosophy of transparency in security. What we mean by that is that security isn't, or should not be, a black box of information only accessible and consumable by security professionals. Assessment is happening proactively in our product, and it's happening automatically. We're bringing security out of obscurity by exposing the aspects of our product that ultimately have a bearing on your compliance status, and then making that information available to you in very user-friendly ways. >>It's fascinating. I have been excited about OSCAL since first hearing about it. It seems extraordinarily important to have what is, in effect, a query capability — something that lets different people, for different reasons, formalize and ask questions of a system that is constantly in flux. Very, very powerful. So regarding security, what do you see as the basic requirements for container infrastructure and tools for use in production by the industries that you are working with? >>Right. So obviously the tools and infrastructure are going to vary widely across customers, but to generalize, I would refer back to the concept I mentioned earlier of a secure software supply chain. There are several guiding principles behind this that are worth mentioning. The first is to have a strategy for ensuring code quality.
What this means is being able to do static source code analysis. Static source code analysis tools are largely language-specific, so there may be a few different tools that you'll need to manage. The second point is to have a framework for doing regular testing, or even slightly more formal security assessments. There are plenty of tools that can help get a company started doing this. Some of these tools are scanning engines like OpenSCAP, which implements NIST's SCAP standard. OpenSCAP can use CIS benchmarks as inputs, and these tools do a very good job of summarizing and visualizing output. Along the same lines as CIS benchmarks, there are many, many benchmarks that are published, and if you look at your own container environment, there are very likely to be benchmarks that cover the core platform, the building blocks of your container environment. There are benchmarks for Ubuntu, for Kubernetes, for Docker, and the list is always growing. In fact, Mirantis is authoring the benchmark for containerd, so there will be a formal CIS benchmark for that coming very shortly. The next item would be defining security policies that align with your organization's requirements. There are a lot of things that come out of the box, that come standard or default in various products, including ours, but through our product we also give you the ability to write your own policies that align with your own organization's requirements. Minimizing your attack surface is another key area. What that means is only deploying what's necessary — pretty common sense, but sometimes it's overlooked. It means enabling only required ports and services and nothing more. And it's related to the concept of least privilege, which is the next thing I would suggest focusing on. Least privilege is related to minimizing your attack surface; it's about only allowing permissions to those people or groups that are absolutely necessary.
Within the container environment, you'll likely have heard of the deny-all approach. That deny-all approach is recommended here, which means deny everything first and then explicitly allow only what you need. That's a very common thing that's sometimes overlooked in some of our customer environments. And finally, the idea of defense in depth, which is about minimizing your blast radius by implementing multiple layers of defense that are also in line with your own risk management strategy. Following these basic principles, and adapting them to your own use cases and requirements, can, in our experience with our customers, go a long way toward having a secure software supply chain. >>Thank you very much, Brian. That was pretty eye-opening. I had the privilege of listening to it from the perspective of someone who has been working behind the scenes on the Launchpad 2020 event, so I'd like to use that privilege to recommend a few things to our listeners, if you're interested in this stuff, and certainly if you work within one of these regulated industries in a development role — all of which will be easy for you to do today, since everything is available once it's been presented. Check out Matt Bentley's live presentation on the secure supply chain, where he demonstrates one possible example of a secure supply chain that permits image signing, scanning, and content trust. You may want to check out the session that I conducted with Anders Wallgren at CloudBees, who talks about these industrial-efficiency, factory-floor, time-and-motion models for assessing where software is, in order to understand what policies can and should be applied to it. And you will probably want to frequent the tutorial sessions in that track, to see how Docker Enterprise Container Cloud implements many of these concentric security policies, in order to provide, as you say, defense in depth.
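Brian's "deny everything first, then explicitly allow" guidance usually lands, in Kubernetes environments, as a default-deny NetworkPolicy plus narrow allow rules. The sketch below expresses the two objects as Python dicts for illustration; in practice you would write them as YAML manifests. The namespace, app label, and port are invented for the example.

```python
# Standard Kubernetes NetworkPolicy shapes for a deny-all baseline.
# An empty podSelector selects every pod in the namespace; listing both
# policy types with no rules denies all ingress and egress traffic.
deny_all = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "prod"},
    "spec": {
        "podSelector": {},                     # applies to every pod
        "policyTypes": ["Ingress", "Egress"],  # no rules listed -> deny both
    },
}

def allow_ingress(name: str, namespace: str, app: str, port: int) -> dict:
    """Explicitly allow only what is needed: one app, one port."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "podSelector": {"matchLabels": {"app": app}},
            "policyTypes": ["Ingress"],
            "ingress": [{"ports": [{"protocol": "TCP", "port": port}]}],
        },
    }

# Deny by default, then carve out the single exception the workload needs.
policies = [deny_all, allow_ingress("allow-web", "prod", "web", 8443)]
```

The order of thinking matters more than the objects themselves: the baseline denies everything, and every allow rule is a deliberate, reviewable exception — which is also what keeps the attack surface minimal.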
There's a lot going on in there, and it's fascinating to see it all expressed. Brian, thanks again. This has been really, really educational. >>My pleasure. Thank you. >>Have a good afternoon. >>Thank you too. Bye.

Published Date : Sep 15 2020



Susan Wilson, Informatica & Blake Andrews, New York Life | MIT CDOIQ 2019


 

(techno music) >> From Cambridge, Massachusetts, it's theCUBE. Covering MIT Chief Data Officer and Information Quality Symposium 2019. Brought to you by SiliconANGLE Media. >> Welcome back to Cambridge, Massachusetts everybody, we're here with theCUBE at the MIT Chief Data Officer Information Quality Conference. I'm Dave Vellante with my co-host Paul Gillin. Susan Wilson is here, she's the vice president and data governance leader at Informatica. Blake Andrews is the corporate vice president of data governance at New York Life. Folks, welcome to theCUBE, thanks for coming on. >> Thank you. >> Thank you. >> So, Susan, interesting title; VP, data governance leader, Informatica. So, what are you leading at Informatica? >> We're helping our customers realize their business outcomes and objectives. Prior to joining Informatica about seven years ago, I was actually a customer myself, and so oftentimes I'm working with our customers to understand where they are, where they're going, and how to best help them; because we recognize data governance is more than just a tool, it's a capability that represents people, the processes, the culture, as well as the technology. >> Yeah, so you've walked the walk, and you can empathize with what your customers are going through. And Blake, your role, as the corporate VP, but more specifically the data governance lead. >> Right, so I lead the data governance capabilities and execution group at New York Life. We're focused on providing skills and tools that enable governance activities across the enterprise at the company. >> How long has that function been in place? >> We've been in place for about two and a half years now. >> So, I don't know if you guys heard Mark Ramsey this morning, the keynote, but basically he said, okay, we started with enterprise data warehouse, we went to master data management, then we kind of did this top-down enterprise data model; that all failed. So we said, all right, let's punt it to governance.
Here you go guys, you fix our corporate data problem. Now, right tool for the right job, but, and so, we were sort of joking, did data governance fail? No, you always have to have data governance. It's like brushing your teeth. But so, like I said, I don't know if you heard that, but what are your thoughts on that sort of evolution that he described? The failure of things like EDW to live up to expectations and then, okay guys, over to you. Is that a common theme? >> It is a common theme, and what we're finding with many of our customers is that they had tried many of, if you will, the methodologies around data governance, right? Around policies and structures. And we describe this as the journey from Data 1.0, which was more application-centric reporting, to Data 2.0, data warehousing and a lot of the failed attempts, if you will, at centralizing all of your data, to now Data 3.0, where we look at the explosion of data, the volumes of data, the number of data consumers, the expectations of the chief data officer to solve business outcomes; crushing under the scale of, I can't fit all of this into a centralized data repository, I need something that will help me scale and become more agile. And so, that message does resonate with us, but we're not saying data warehouses don't exist. They absolutely do, for trusted data sources, but the ability to be agile, to address many of your organization's needs, and to be able to service multiple consumers is top-of-mind for many of our customers. >> And the mindset from 1.0 to 2.0 to 3.0 has changed. From, you know, data as a liability, to now data as this massive asset. It's sort of-- >> Value, yeah. >> Yeah, and the pendulum has swung. It's almost like a see-saw. Where, and I'm not sure it's ever going to flip back, but it is to a certain extent; people are starting to realize, wow, we have to be careful about what we do with our data. But still, it's go, go, go.
But, what's the experience at New York Life? I mean, you know, a company that's been around for a long time, conservative, wants to make sure, risk-averse, obviously. >> Right. >> But at the same time, you want to keep moving as the market moves. >> Right, and we look at data governance as really an enabler and a value-add activity. We're not a governance practice for the sake of governance. We're not there to create a lot of policies and restrictions. We're there to add value and to enable innovation in our business and really drive that execution, that efficiency. >> So how do you do that? Square that circle for me, because a lot of people think, when people think security and governance and compliance they think, oh, that stifles innovation. How do you make governance an engine of innovation? >> You provide transparency around your data. So, it's transparency around: what does the data mean? What data assets do we have? Where can I find that? Where are my most trusted sources of data? What does the quality of that data look like? So all those things together really enable your data consumers to take that information and create new value for the company. So it's really about enabling your value creators throughout the organization. >> So data is an ingredient. I can tell you where it is, I can give you some kind of rating as to the quality of that data and its usefulness. And then you can take it and do what you need to do with it in your specific line of business. >> That's right. >> Now you said you've been at this two and a half years, so what stages have you gone through since you first began the data governance initiative? >> Sure, so our first year, year and a half, was really focused on building the foundations, establishing the playbook for data governance and building our processes and understanding how data governance needed to be implemented to fit New York Life and the culture of the company.
The last twelve months or so have really been focused on operationalizing governance. So we've got the foundations in place; now it's about implementing tools to further augment those capabilities and help assist our data stewards and give them a better skill set and a better tool set to do their jobs. >> Are you, sort of, crowdsourcing the process? I mean, do you have a defined set of people who are responsible for governance, or is everyone taking a role? >> So, it is a two-pronged approach. We do have dedicated data stewards; there are approximately 15 across various lines of business throughout the company. But we are building towards a data democratization aspect. So, we want people to be self-sufficient in finding the data that they need and understanding the data, and then, when they have questions, relying on our stewards as a network of subject matter experts who also have some authorization to make changes and adapt the data as needed. >> Susan, one of the challenges that we see is that the chief data officers oftentimes are not involved in some of these skunkworks AI projects. They're sort of either hidden, maybe not even hidden, but they're in the line of business, they're moving. You know, there's a mentality of move fast and break things. The challenge with AI is, if you start operationalizing AI and you're breaking things without data quality, without data governance, you can really affect lives. We've seen it, in one of these unintended consequences. I mean, Facebook is the obvious example and there are many, many others. But are you seeing that? How are you seeing organizations deal with that problem? >> As Blake was mentioning, oftentimes it's about, you've got to start with transparency, and you've got to start with collaborating across your lines of business, including the data scientists, and including in terms of what they are doing, and actually provide that level of transparency, provide a level of collaboration.
And a lot of that is through the use of our technology enablers, to basically go out and find where the data is and what people are using, and to be able to provide a mechanism for them to collaborate in terms of: hey, how do I get access to that? I didn't realize you were the SME for that particular component. And then also: did you realize that there is a policy associated with the data that you're managing, and it can't be shared externally or with certain consumer data sets? So, the objective really is around how to create a platform to ensure that anyone in your organization, whether I'm in the line of business without a technical background, or someone who does have a technical background, can come and access and understand that information and connect with their peers. >> So you're helping them to discover the data. What do you do at that stage? >> What we do at that stage is create insights for anyone in the organization to understand the data from an impact analysis perspective — for example, if I'm going to make changes — as well as discovery: where exactly is my information? And so we have-- >> Right. How do you help your customers discover that data? >> Through the machine learning and artificial intelligence capabilities of our data catalog, specifically, that allow us to do that. So we use things like similarity-based matching, which helps us to identify data. The column doesn't have to be named "social security number"; it could be named "miscellaneous text one." But through our ability to scan and discover, we can identify that what's in that column is potentially a social security number. It might have resided there over years of accumulating this data, and you may not realize that it's still stored there. Our ability to identify that and report it out to the data stewards, the data analysts, as well as the privacy individuals, is critical.
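The column-content discovery Susan describes can be illustrated with a toy pattern matcher: classify a column by what its values look like, regardless of what the column is named. This is only a sketch of the idea; Informatica's similarity-based matching is far more sophisticated than a single regular expression, and the sample data below is invented.

```python
import re

# Toy classifier: does a column's content look like US Social Security
# numbers, even if the column is named "miscellaneous text one"?
SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def looks_like_ssn(values, threshold=0.7):
    """Flag a column when most sampled values match the SSN pattern."""
    if not values:
        return False
    hits = sum(1 for v in values if SSN.match(str(v)))
    return hits / len(values) >= threshold

# A column whose name says nothing, but whose content says a lot:
misc_text_1 = ["123-45-6789", "987-65-4321", "000-12-3456", "n/a"]
print(looks_like_ssn(misc_text_1))  # 3 of 4 sampled values match -> flagged
```

Once a column is flagged this way, the downstream steps Susan mentions follow naturally: report it to stewards, analysts, and privacy teams, and attach the appropriate handling policy.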
So, with that being said, then they can actually identify the appropriate policies that need to be adhered to, alongside with it in terms of quality, in terms of, is there something that we need to archive. So that's where we're helping our customers in that aspect. >> So you can infer from the data, the meta data, and then, with a fair degree of accuracy, categorize it and automate that. >> Exactly. We've got a customer that actually ran this and they said that, you know, we took three people, three months to actually physically tag where all this information existed across something like 7,000 critical data elements. And, basically, after the set up and the scanning procedures, within seconds we were able to get within 90% precision. Because, again, we've dealt a lot with meta data. It's core to our artificial intelligence and machine learning. And it's core to how we built out our platforms to share that meta data, to do something with that meta data. It's not just about sharing the glossary and the definition information. We also want to automate and reduce the manual burden. Because we recognize with that scale, manual documentation, manual cataloging and tagging just, >> It doesn't work. >> It doesn't work. It doesn't scale. >> Humans are bad at it. >> They're horrible at it. >> So I presume you have a chief data officer at New York Life, is that correct? >> We have a chief data and analytics officer, yes. >> Okay, and you work within that group? >> Yes, that is correct. >> Do you report it to that? >> Yes, so-- >> And that individual, yeah, describe the organization. >> So that sits in our lines of business. Originally, our data governance office sat in technology. And then, our early 2018 we actually re-orged into the business under the chief data and analytics officer when that role was formed. 
So we sit under that group along with a data solutions and governance team that includes several of our data stewards and also some others, some data engineer-type roles. And then, our center for data science and analytics as well that contains a lot of our data science teams in that type of work. >> So in thinking about some of these, I was describing to Susan, as these skunkworks projects, is the data team, the chief data officer's team involved in those projects or is it sort of a, go run water through the pipes, get an MVP and then you guys come in. How does that all work? >> We're working to try to centralize that function as much as we can, because we do believe there's value in the left hand knowing what the right hand is doing in those types of things. So we're trying to build those communications channels and build that network of data consumers across the organization. >> It's hard right? >> It is. >> Because the line of business wants to move fast, and you're saying, hey, we can help. And they think you're going to slow them down, but in fact, you got to make the case and show the success because you're actually not going to slow them down to terms of the ultimate outcome. I think that's the case that you're trying to make, right? >> And that's one of the things that we try to really focus on and I think that's one of the advantages to us being embedded in the business under the CDAO role, is that we can then say our objectives are your objectives. We are here to add value and to align with what you're working on. We're not trying to slow you down or hinder you, we're really trying to bring more to the table and augment what you're already trying to achieve. >> Sometimes getting that organization right means everything, as we've seen. >> Absolutely. >> That's right. >> How are you applying governance discipline to unstructured data? 
>> That's actually something that's a little bit further down our road map, but one of the things that we have started doing is looking at our taxonomies for structured data and aligning those with the taxonomies that we're using to classify unstructured data. So, we're in the early stages with that, so that when we get to the process of looking at more of our unstructured content, we already have a good feel that there's alignment between the way that we think about and organize those concepts. >> Have you identified automation tools that can help to bring structure to that unstructured data? >> Yes, we have. And there are several tools out there that we're continuing to investigate and look at. But that's one of the key things that we're trying to achieve through this process: bringing structure to unstructured content. >> So, the conference. First year at the conference. >> Yes. >> Kind of key takeaways, things that were interesting to you, learnings? >> Oh, yes, well, the number of CDOs that are here and what's top of mind for them. I mean, it ranges from, how do I stand up my operating model? We just had a session about 30 minutes ago. A lot of questions around, how do I set up my organization structure? How do I stand up my operating model so that I can be flexible? To, right, the data scientists, to the folks that are more traditional in structured and trusted data. So, these things are still top of mind, because they're recognizing the market is also changing. And there's a growing amount of expectations, not only solving business outcomes, but also regulatory compliance; privacy is also top of mind for a lot of customers. In terms of, how do I get started? And what's the appropriate structure and mechanism for doing so? So we're getting a lot of those types of questions as well.
So, the good thing is many of us have had years of experience in this phase and the convergence of us being able to support our customers, not only in our principles around how we implement the framework, but also the technology is really coming together very nicely. >> Anything you'd add, Blake? >> I think it's really impressive to see the level of engagement with thought leaders and decision makers in the data space. You know, as Susan mentioned, we just got out of our session and really, by the end of it, it turned into more of an open discussion. There was just this kind of back and forth between the participants. And so it's really engaging to see that level of passion from such a distinguished group of individuals who are all kind of here to share thoughts and ideas. >> Well anytime you come to a conference, it's sort of any open forum like this, you learn a lot. When you're at MIT, it's like super-charged. With the big brains. >> Exactly, you feel it when you come on the campus. >> You feel smarter when you walk out of here. >> Exactly, I know. >> Well, guys, thanks so much for coming to theCUBE. It was great to have you. >> Thank you for having us. We appreciate it, thank you. >> You're welcome. All right, keep it right there everybody. Paul and I will be back with our next guest. You're watching theCUBE from MIT in Cambridge. We'll be right back. (techno music)

Published Date : Aug 2 2019
