LIVE Panel: "Easy CI With Docker"
>>Hey, welcome to the live panel. My name is Brett, I am your host, and indeed we are live. If you don't believe us, let's just show a little bit of the browser real quick. Yup, there you go, we're live. So how this is going to work is I'm going to bring in some guests in one second, and we're going to basically take your questions on the topic of the day: continuous integration and testing. Thank you so much to my guests for joining the panel. I've got Carlos, Nico and Mandy. Hello everyone. >>Hello! All right. >>Let's go around the room and all pretend we don't know each other and that the internet didn't read below the video who we are. Hi, my name is Brett. I am a Docker Captain, which means I'm supposed to know something about Docker. I'm streaming here from Virginia Beach, Virginia, and I make videos on the internet and courses on Udemy. Carlos? >>Hey, what's up? I'm Carlos Nunez. I am a solutions architect at VMware. I do solution things with computers. It's fun. I live in Dallas, but I'm moving to Houston in a month, which is where I'm currently streaming. I've been all over the Northeast this whole week, so it's been fun, and I'm excited to meet with all of you and talk about CI and Docker. >>Yeah, hey everyone, Nico Kabar here. I'm a solutions engineer at HashiCorp. I am streaming to you from the beautiful Austin, Texas. Ignore the Golden Gate Bridge here — this is from my old apartment in San Francisco, just keeping that to remember all the good days I lived there. But anyway, I work at HashiCorp on all things automation and cloud and DevOps, and I'm excited to be here. And Mandy? >>Hi, yeah, Mandy Hubbard. I am streaming from Austin, Texas. I am currently a DX engineer at ShipEngine.
Um, I've worked in QA, and that's kind of where I got my Docker experience, and I'm moving into DX to try and help developers better understand and use our products and be an advocate for them. >>Nice. Well, thank you all for joining me. I really appreciate you taking the time out of your busy schedules to be here. And for those of you in chat, the reason we're doing this live — because it's always harder to do things live — the reason we're here is to answer your questions. We didn't come with a bunch of slides and demos or anything like that. We're here to talk amongst ourselves about ideas, and we're really here for you. This is about easy CI, so we're going to try to keep the conversation around testing and continuous integration and all the things that that entails with containers. But we may go down rabbit holes, we may veer off and start talking about other things, and that's totally fine if it's in the realm of DevOps and containers and developer and ops workflows — hey, it's fair game. >>And these people have a wide variety of expertise. They haven't done just testing, right? We live in a world where you all kind of have to wear many hats. So feel free to ask what's at the top of your mind, and we'll do our best to answer. It might not be the best answer or the correct answer, but we're going to do our best. Well, let's get started. Let's get a couple of topics to start off with. "Easy CI" was one of my ideas, because one of the things I'm most excited about is the innovation we're seeing around easier testing, faster testing, automated testing. Because as much as we've all been doing this stuff for, you know, 15 years, 20 years, since the early Jenkins days, it seems like it's still really hard and it's still a lot of work.
>>So let's go around the room real quick, and everybody can just talk for a minute about your experience with testing and maybe some of your pain points — what you don't like about our testing world. We can talk about some pains, because I think that will lead us to talk about what we're seeing now that might be better ideas about how to do this. I know for me, with testing, obviously there's the code part and getting it automated, but mostly it's getting it into the hands of developers so that they can control their own testing, and don't have to go talk to a person to run that test again, or to the mysterious Jenkins platform somewhere. I keep mentioning Jenkins because it is still the dominant player out there. For me, I don't like it when I walk into a room and there are only one or two people who know how the testing works, or know how to make new tests go into the testing platform and stuff like that. So I'm always trying to free those things up so that all of the developers are enabled and empowered to do that stuff. So, someone else? Carlos? Anybody? >>Oh, I have a lot of opinions on that, having been a QA engineer for most of my career. The shift that we're seeing is everyone is DevOps and everyone is QA. The issue I see is no one asked developers if they wanted to be QA. And so, being the former QA on the team, when there's a problem — even though I'm a developer and we're all doing QA — they always tend to come to one of the former QA engineers. They're not really owning that responsibility and digging in. So that's what I'm seeing: we're all expected to test now, and some people don't know how. For me it was kind of an intuitive skill.
It just kind of fit with my personality. But not knowing what to look for, not knowing what to automate, not even understanding how your API endpoints are used by your front end so you know what to test when a change is made — it's really overwhelming for developers. And we're going to need to streamline that, and hold their hands a little bit until they get their feet wet with also being QA. >>Right, right. So, Carlos? >>Yeah, testing is one of my favorite subjects to talk about when I'm pairing with developers. And a lot of it is because of what Mandy said, right? A lot of developers used to write a test and say, "Hey QA, I wrote my unit tests, now write the rest of the tests." Now developers are expected to understand how testing methodologies work in their local environments. They're supposed to understand how to write an integration test, an end-to-end test, a component test, and of course how to write unit tests that aren't just, you know, assert true is true — more comprehensive, more high-touch unit tests, which include things like mocking and stubbing and spying and all that stuff. And it's not so much getting those tests — well, I've had a lot of challenges with developers getting those tests to run in Docker, usually because of dependency hell — but getting developers to understand how to write tests that matter and mean something. That can be difficult, but it's also where I find a lot of the enjoyment of my work comes into play. So that's the difficulty I've seen around testing. Big subject though, lots to talk about there. >>Yeah, we've already got so many questions coming in. You've already got an hour's worth of stuff. So, Nico, your first thoughts on that?
>>Yeah, I definitely agree with the other folks here on the panel about the shift in the skill set that's needed to adopt the new technologies. But aside from the organizational piece — the new responsibilities that developers have to adapt to and inherit now — there's also a technical perspective: more developers are owning the full stack, including the infrastructure piece. So that adds a lot more to the plate, in terms of also testing that component that they were not even responsible for before. And the second challenge that I'm seeing is the long list of added tooling — there's a new tool every other day — and that requires more customization of the testing that each individual team, and each individual developer by extension, has to learn. So the customization, as well as the scope that now encompasses the infrastructure piece, both add to the challenges that we're seeing right now for CI and overall testing for developers in the market today. >>Yeah. We've got a lot of questions about all the different parts of this, so let me just go straight to them, because that's why we're here — for the people. A lot of people are asking about your favorite tools, and this is one of the challenges with integration, right? There are dominant players, but there is such a variety. Every one of my customers seems to be using a different workflow and a different set of tools. And hey, we're all here to just talk about what we're using — your favorite tools. So a lot of the repeated questions are: what are your favorite tools?
Like, if you could create it from scratch, what would you use? Pierre's asking — it sounds like they're a fan of GitHub Actions — mentioning pushing to ECR and Docker Hub, and using VS Code pipelines; I guess they may be talking about Azure Pipelines. What's your preferred way? So, does anyone have any thoughts on that? Anyone want to throw out their preferred pipeline of tooling? >>Well, I have to throw out mine. Mine is Jenkins. I'm kind of an honorary CloudBee at this point, having spoken there a couple of times. All of the plugins just multiply the functionality. I don't love the UI, but I love that it's been around so long. It has so much community support, and there are so many plugins, so if you want to do something, you don't have to write the code — it's already been tested. Unfortunately, I haven't been able to use Jenkins since I joined ShipEngine. Most of our monolithic core application is on TeamCity — it's a .NET application, and TeamCity plays really well with .NET. I didn't love it; I miss Jenkins. And we're just starting some new initiatives that are using GitHub Actions, and I'm really excited to learn those. I think they have a lot of the same functionality that you're looking for, but much more simplified, and it's right there in GitHub, so the integration is a lot more seamless. But I do have to go on record that my favorite CI/CD tool is Jenkins. >>All right, you heard it here first, people. All right, anyone else? You're muted. I'm muted? Chat says muted. Oh, chat says a guest has muted themselves. Carlos, you've got to unmute. >>Yes, I did mute myself because I was typing a lot, trying to answer stuff in the chat, and there's a lot of really good stuff in there. No problem. And favorite tools — totally, that's the best way to start a flame war.
So I'm just going to go ahead and light it up. For enterprise environments, I actually am a huge fan of Jenkins. It's a tool that people really understand, and it has stood the test of time, right? I mean, people were using Hudson, what, 15 years ago, maybe longer, and the way it works hasn't really changed very much. Jenkins X is a little different, but the UI and the way it works internally is pretty familiar to a lot of enterprise environments, which is great. >>And also, to me, the plugin ecosystem is amazing. There are so many plugins for everything, and you can make your own if you know Java or Groovy — I'm sure there's probably a Kotlin one in there, but I haven't tried it myself. It's really great. It's also really easy to write CI as code, which is something I'm a big fan of, so Jenkinsfiles have worked really well for me. I know it can get a little more complex as you start to build your own modules and such, but for enterprise CI/CD, especially if you want to roll your own or own it yourself, Jenkins is the bellwether, and for very good reason. Now, for my personal projects — and I see a lot in the chat here, I think y'all have been agreeing with me — GitHub Actions, 100%, my favorite tool right now. >>I love GitHub Actions. It's customizable, it's modular, and there are a lot of plugins already. I started using GitHub Actions maybe a week after it went GA, when there was no documentation or anything, and it was still my favorite CI tool even then. And the API is really great. There's a lot to love about GitHub Actions, and I use it as much as I can for my personal projects. I still have a soft spot for Travis CI, though. You know, they got acquired and they're a little different now, but I can't let it go. I just love it. But yeah, when it comes to CI, those are my tools.
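For reference, a minimal GitHub Actions workflow along the lines the panel is describing might look like this — the image name and test command are placeholders, not anything the panelists mentioned:

```yaml
# Hypothetical minimal workflow: build an image on every push and run
# the test suite inside it. "myapp" and "npm test" are placeholders.
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run the test suite inside the image
        run: docker run --rm myapp:${{ github.sha }} npm test
```

Being YAML checked into the repository, it lives next to the code in GitHub, which is the seamless integration Mandy refers to.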
So light me up in the comments; I will respond. >>Yeah, I feel with you on the Travis thing, because I think that was my first time experiencing, you know, early-days GitHub open source and a free CI tool that I could just use. I think it was YAML back then — I don't actually remember — but it was kind of an exciting time. From my experience it was like, oh, this is just there as a service, and I could just use it. It's like GitHub: it's free for my open source stuff. So it does have a soft spot in my heart too. >>All right. We've got questions coming in, so I'm going to ask some of them. We don't have to have the answers, because sometimes they're going to be specific, but I want to call them out because people in chat may have missed the question — and we have smart people in chat too, so there's probably someone who knows the answer to these things if it's not us. They're asking about building Docker images in Kubernetes, which to me is always a sore spot, because Kubernetes does not build images by default; it's not meant for that out of the gate. And what is the best way to do this without having to use privileged containers? Privileged, implying that the container probably has more privileges than a container in Kubernetes has by default. And that is a hard thing, because I don't think Docker lets you do that out of the gate. So I don't know if anyone has an immediate answer to that. That's a pretty technical one, but if you know the answer to that in chat, call it out. >>Um... >>I have done this, but I'm pretty sure I had to use a privileged container and install the Docker daemon on the Kubernetes cluster, and I can't give you a better solution. >>Um, I've done the same.
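One answer chat might surface here is Kaniko, a tool that builds images in userspace without needing a Docker daemon or a privileged container. A rough sketch of such a build pod — the repo URL, registry, and secret name are all placeholders:

```yaml
# Hypothetical sketch: Kaniko builds and pushes an image from inside an
# unprivileged pod. Repo URL, registry, and secret name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--context=git://github.com/example/app.git"
        - "--dockerfile=Dockerfile"
        - "--destination=registry.example.com/app:latest"
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker   # registry credentials for the push
  volumes:
    - name: docker-config
      secret:
        secretName: registry-credentials
```

This is one commonly cited option; BuildKit's rootless mode and buildah are others worth searching for.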
So... >>Yeah. Chavonne asks — back to the Jenkins thing — what's the easiest way to integrate Docker into a Jenkins CI/CD pipeline? And that's one of the challenges I find with Jenkins, because I don't claim to be the expert on Jenkins: there are so many plugins because of this huge ecosystem. When you go searching for Docker, there's a lot that comes back, right? So I don't actually have a preferred way, because every team I find uses it differently. I don't know — do you know if there's a preferred or default Docker plugin for Jenkins? Sorry — Docker plugins for Jenkins; someone's asking for the preferred or easy way to do that, and I don't know the back end of Jenkins that well. >>Well, the new way they're doing Docker builds with the declarative pipeline, versus the scripted Groovy, is really simple, and their documentation is really good. They make it really easy to say "run this in this image," so you can pull down public images and add your own layers. I don't know the name of that plugin, but I can certainly take a minute after this session and go find it. But if you really are overwhelmed by the plugins, you can just write your shell commands in Jenkins. You can do everything in bash, calling the Docker daemon directly, and get it working just to see it end to end, and then start browsing for plugins to see if you even want to use any.
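As a sketch of the declarative-pipeline approach being described — the image name and commands are placeholders, assuming a Node.js project for illustration:

```groovy
// Hypothetical declarative Jenkinsfile: run the pipeline stages inside
// a container pulled from a public image, then add your own steps.
pipeline {
    agent {
        docker { image 'node:14-alpine' }  // "run this in this image"
    }
    stages {
        stage('Test') {
            steps {
                sh 'npm ci && npm test'   // plain shell, no plugin needed
            }
        }
    }
}
```

Building and pushing an image from the pipeline itself would additionally need an agent with access to a Docker daemon, which is where the plugin choices come back in.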
Um, I think it's always under important to understand what is going on under the hood before you, before you adopt the magic of a plugin, because, um, once you have a problem, if you're, if it's all a lockbox to you, it's going to be more difficult to troubleshoot. It's kind of like learning, get command line versus like get cracking or something. Once, once you get in a bind, if you don't understand the underlying steps, it's really hard to get yourself out of a bind, versus if you understand what the plugin or the app is doing, then, um, you can get out of situations a lot easier. That's a good place. That's, that's where I'd start. >>Yeah. Thank you. Um, Camden asks better to build test environment images, every commit in CII. So this is like one of those opinions of we're all gonna have some different, uh, or build on build images on every commit, leveraging the cash, or build them once outside the test pile pipeline. Um, what say you people? >>Uh, well, I I've seen both and generally speaking, my preference is, um, I guess the ant, the it's a consultant answer, right? I think it depends on what you're trying to do, right. So if you have a lot of small changes that are being made and you're creating images for each of those commits, you're going to have a lot of images in your, in your registry, right? And on top of that, if you're building those images, uh, through CAI frequently, if you're using Docker hub or something like that, you might run into rate limiting issues because of Docker's new rate, limiting, uh, rate limits that they put in place. Um, but that might be beneficial if the, if being able to roll back between those small changes while you're testing is important to you. Uh, however, if all you care about is being able to use Docker images, um, or being able to correlate versions to your Docker images, or if you're the type of team that doesn't even use him, uh, does he even use, uh, virgins in your image tags? 
then I would think that might be a little much. You might want to just have a stage in your CI that builds your Docker image and pushes it into your registry, done only on particular branches instead of on every commit regardless of branch. But again, it really depends on the team, it really depends on what you're building, it really depends on your workflow. It can depend on a number of things. >>I want to add two points here. You know, I've seen the pattern of building with every commit, assuming that you have the right set of tests — then you would benefit from actually seeing the testing workflow go through and detecting any issue within the build or whatever you're testing against. But if you're just building without the appropriate set of tests, then you're basically just consuming, um, adding time, as well as all the image storage associated with it, without truly reaping the benefit of this pattern. And the second point is, again, if you're going to end up building per commit, I definitely recommend having some type of image purging and garbage collection process, to ensure that you're not just wasting all that storage, and also optimizing your build process, because that will end up being the most time-consuming part of your pipeline. So that's my two cents on this.
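As a sketch of the purging idea, assuming date-stamped tags: the selection logic might look like this. In a real pipeline the tag list would come from your registry's API, and the candidates would then be deleted through that same API — both of those parts are omitted here.

```shell
# Hypothetical sketch: keep the N most recent date-stamped tags and
# print the rest as deletion candidates. The tag list is hard-coded
# here; normally it would come from your registry's API.
tags="2021-05-01
2021-05-10
2021-05-18
2021-05-24
2021-05-27"

keep=3
# Sort newest-first (lexicographic works for YYYY-MM-DD), skip the
# first $keep lines, and everything left is a purge candidate.
candidates=$(printf '%s\n' "$tags" | sort -r | tail -n +$((keep + 1)))
printf '%s\n' "$candidates"
```

A retention policy on the registry side (most hosted registries offer one) achieves the same thing with less scripting.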
We have so many tools; we could have the CI system burning CPU cycles all day, every day, if we really wanted to. And so very quickly, I think — especially if you're building on every commit on every branch — that gets you into a world of cost mitigation, and you're probably going to have to settle somewhere in the middle between the budget people, who are saying you're spending way too much money on the CI platform because of all these CPU cycles, and the developers, who would love to have everything now, as fast as possible, with the biggest CPUs and the biggest servers, because the builds can never go fast enough, right? >>There's no end to optimizing your build workflow. We have another question on that. This is another topic that we'll all probably have different takes on: version tags, right? On images — we have a very established workflow in Git for how we make commits; we have commit SHAs, we have Git tags and all these things. And then we go into images, and it's just this whole new world that's opened up, with no real consensus. So what are your thoughts on the strategy for teams and their image tags? Again, another culture thing. Mandy? >>I mean, I'm a fan of CalVer when we have no other constraints. It's just clean, and I like the timestamp — you know exactly when it was built. I don't really see any reason to use other, just normal incremental numbering, and I love the fact that you can pull any tag and know exactly when it was created. So I'm a big fan of CalVer, if you can make that work for your organization. >>Yep, people are mentioning that in chat. >>So I like SemVer as well. I'm a big fan of it.
I think it makes it easy to signify what a major change is versus a minor change versus just a hotfix or some kind of bug fix. The problem I've found with having teams adopt SemVer is answering these questions and being able to really define what is a major change, what is a minor change, what is a patch, right? And this becomes a bit of an overhead — or not so much an overhead, but a large concern — for teams who have never done versioning before, or have never been responsible for their own versioning. In fact, I'm running into that right now with a client I'm working with, where I'm working with a lot of teams, helping them move their applications from a legacy production environment into a new one. >>And in doing so, versioning comes up, because Docker images have tags, and usually the tags correlate to versions. But some of the teams I'm working with are only maintaining a script, and others are maintaining a fully fledged Java three-tier application with lots of dependencies. So telling the team that maintains a script, "Hey, you should use SemVer, and you should start thinking about what's major, what's minor, what's patch" — that might be a lot for them. For a team like that, I might just suggest using commit SHAs as your versions until you figure that out, or maybe using dates as your versions. But the team with the larger application probably already knows the answers to those questions, in which case they're either already using SemVer, or they may be using some other versioning strategy that might suit them better. So, you're going to hear me say "it depends" a lot, and I'm just going to say it here: it depends, because it really does.
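A quick illustration of the three tagging strategies being discussed — the registry, SHA, date, and version values are all made up for the example:

```shell
# Hypothetical sketch of three image-tagging strategies:
# commit SHAs, CalVer-style dates, and SemVer. All values are
# placeholders; in CI they would come from git and the pipeline env.
IMAGE="registry.example.com/myapp"
GIT_SHA="a1b2c3d"        # e.g. from: git rev-parse --short HEAD
BUILD_DATE="2021-05-27"  # e.g. from: date -u +%Y-%m-%d
VERSION="1.4.2"          # decided by the team / product owner

SHA_TAG="${IMAGE}:${GIT_SHA}"       # pinpoints the exact commit
CALVER_TAG="${IMAGE}:${BUILD_DATE}" # tells you when it was built
SEMVER_TAG="${IMAGE}:${VERSION}"    # communicates the scope of change

printf '%s\n%s\n%s\n' "$SHA_TAG" "$CALVER_TAG" "$SEMVER_TAG"
```

Nothing stops you from pushing the same image under more than one of these tags, which is a common way to get both traceability and a human-readable version.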
>>I think you hit on something interesting, beyond just how to version: when to consider it a major release, and who makes those decisions. If you leave it to engineers to version, you're kind of pushing business decisions down the pipe. I think whether it's a minor or a major should be a business decision, and someone closer to the business should be making that call as to when we want to call it major. >>That's a really good point, and I absolutely agree with that. And again, it depends on the team and the scope of what they're maintaining, right? If it's a business application, of course you're going to have a product manager who's going to want to make that call, because that version is going to be out in marketing. People are going to use it, they're going to refer to it in support calls, so they're going to need to make those decisions. SemVer works really, really well for that. But for a team that's maintaining scripts, you know, having them say, "okay, you must tell me what a major version is" — it's... >>A lot. >>But if they want to use SemVer, great too. Which is why, going back to what you originally said, I think CalVer in the absence of other options is a good strategy. >>Yeah. Catching up on chat — I'm not sure if I'm ever going to catch up — there are a lot of people commenting on their favorite CI systems, and it just goes to show, for the testing and deployment community, how many tools there are out there, and how many tools there are to support the tools you're using. It can be a crazy wilderness, and I think that's part of the art of it: these things allow us to build our workflows to the team's culture.
But I do think that — getting into maybe what we hope comes next — I do hope that we get to try to figure out some of these harder problems of consistency. One of the things that led me to Docker in the first place was the fact that it created a consistent packaging solution for getting my code off of my local system, really, and onto the server. >>And in that whole workflow, the thing I was making at each step was going to be the same thing used at the next, right? And that was huge. It also took us a long time to get there. Docker was one of those once-in-a-decade kinds of ideas: let's solidify it and get the consensus of the community around this idea. And it's not perfect — the Dockerfile is not the most perfect way to describe how to build your app — but it is there, and we're all using it. And now I'm looking for that next piece, right? Hopefully the next step, where we can all arrive at a consensus, so that once you hop teams — okay, we all knew Docker, and now we're all starting to get to know the manifests — but then there's this big gap in the middle where it might be one of a dozen things. >>Yeah, to that, Brett — maybe more of a shameless plug here, wanting to talk about one of the things I'm excited about. I work at HashiCorp — I don't know if many people have heard of us — and we tend to focus a lot on workflows versus technologies, right? Because, as you can see even just looking at the chat, there are a ton of opinions on the different tooling. And imagine — I'm working with clients that have 10,000 developers.
So imagine taking the folks in the chat, all inside one organization or one company, and having to make decisions on how to build software. There's no way you can converge on one way or one tool, and that's what we're facing in the industry. >>So one of the things that I'm pretty excited about — and I don't know if it's getting as much traction as we've been focusing on it — is Waypoint, an open source project we released, I believe, late last year. It aims to address what Brett said: to be a common tool that makes it extremely easy and simple to describe how you want to build, deploy, or release your application, in a consistent way, regardless of the tools. Similar to how you can think of Terraform, with the pluggability to say terraform plan or apply against any cloud infrastructure without really having to know the details of how it's done — this is what Waypoint is doing, and it can be applied to the CI framework. So it has pluggability into, you know, CircleCI, Docker, Helm, Kubernetes. It's a hard problem to solve, but I'm hopeful that that's the path we'll eventually get to. So I hope you can check out some of the information on it on the HashiCorp site — I'm personally excited about it. >>Yeah, I'm going to have to check that out. And man, we'll have to talk about it on my live show — talk about it for a whole hour.
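For reference, a Waypoint configuration along these lines might look roughly like this — the project, image, and platform details are placeholders, and the exact syntax should be checked against the official HashiCorp docs:

```hcl
# Hypothetical waypoint.hcl sketch: describe build, registry, and
# deploy steps once, with pluggable backends for each.
project = "my-app"

app "web" {
  build {
    use "docker" {}               # build with Docker...
    registry {
      use "docker" {
        image = "registry.example.com/web"
        tag   = "latest"
      }
    }
  }

  deploy {
    use "kubernetes" {}           # ...deploy to Kubernetes
  }
}
```

The idea Nico describes is that swapping the `use` blocks changes the backend (say, a different builder or platform) without changing the overall `waypoint up` workflow.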
>>So there's another question here that's a little bit more detailed, but it's one that I think a lot of people deal with, and I deal with a lot too. The question, from Cameron, is essentially: do you use Docker Compose in your CI or not? Because yes, I do. It solves so many problems. And not every CI can — I don't know, there are some problems when a CI is trying to do it for me. So there are pros and cons, and I feel like I'm still on the fence about it, because I use it all the time, but it's also not perfect. It's not always meant for CI, and CI sometimes tries to do things for you, like starting things up before you start other parts, and having that whole ordering problem. Anyway — thoughts? Anyone have thoughts? >>Yes, I love Compose. It's one of my favorite tools of all time. And the reason why — because what I often find working with teams... actually, let me walk that back, because Jack in the chat asked a really interesting question about what the hardest thing about CI is for a lot of teams. And in my experience, the hardest thing is getting teams to build an app in CI that is the same app as what's built in production. A lot of CI does things that are totally different from what you would do in your local dev, and as a result you get this application that either doesn't work locally, or it does work but it's a completely different animal than what you would get in production, right? So what I've found, in trying to get teams to bridge that gap, is basically taking their CI and shifting it left — I hate the shift-left term, but I'll use it.
How do we build, how do we get test data? And what I found is that trying to get teams to do all this in Docker, which is normally a first for a lot of teams I'm working with, means you're running docker build a lot, running docker run a lot, running docker rm a lot. You run a lot of disparate Docker commands. And then, on top of that, trying to bridge all of those containers together into a single network can be challenging without Compose. >>So I like using Compose to be able to really easily categorize and compartmentalize a lot of the things that are going to be done in CI, like building a Docker image and running tests, which you're going to do in CI anyway. So running tests, building the image, pushing it to the registry. Well, I wouldn't say pushing it to the registry, but doing all the things that you would do in local dev, in the same network where you might have a mock database or a mock S3 instance or something else. So it's just easy to take all those Docker Compose commands and move them into your YAML file if you're using GitHub Actions, or your Jenkinsfile if you're using Jenkins, or what have you. Right? It's really portable that way, but it doesn't work for every team. For example, going back to my script example, if you're a team with a really simple script that does one thing on a somewhat routine basis, then that might be a lot of overhead. In that case, you can get away with just Docker commands; it's not a big deal. But the way I look at it is, if I'm building something that's similar to a Makefile or Rakefile or what have you, then I'm probably going to want to use Docker Compose. If I'm working with Docker, that's a philosophy I value, right? >>So I'm also a fan of Docker Compose.
And, you know, to your point, Carlos, I'm also a fan of shifting CI left and testing left, but if you put all that logic in your CI, it makes the local development experience different from the CI experience. Versus, if you put everything in a Compose file so that what you build locally is the same as what you build in CI, you're going to have a better experience, because you're going to be testing something that's closer to what you're going to be releasing. And it's also very easy to look at a Compose file and understand what the dependencies are and what's happening; it's very readable. Once you move that stuff to CI, I think a lot of developers are going to be intimidated by the CI, whatever the scripting language is; it's going to be something they're going to have to wrap their heads around. >>But they're not going to be able to use it locally. You're going to have to have another local solution. So I love the idea of a Compose file used locally, especially if you can mount the local workspace so that they can do real-time development and see their changes in the exact same way as it's going to be built and tested in CI. It gives developers a high level of confidence, and then you're less likely to have issues because of discrepancies between how it was built in your local test environment versus how it's built in CI. So Docker Compose really lets you do all of that in a way that makes your solution portable between local dev and CI, and reduces the number of CI cycles to get, you know, the test data that you need. So that's why I like it for local dev. >>It'll be interesting. I don't know if you all were able to see the keynote, but there was a little bit, not a whole lot, but a little bit of talk of Docker Compose v2, which is now built into the Docker command line.
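As a sketch of what the panelists are describing, a single docker-compose.yml like this (service names, image tags, and ports are invented for illustration) lets local dev and CI run the same stack, with a bind mount for live editing:

```yaml
# docker-compose.yml -- the same file used on a laptop and in the CI job
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/app
    volumes:
      - .:/usr/src/app   # mount the local workspace for real-time development
    depends_on:
      - db

  db:                    # mock database on the same network as the app
    image: postgres:13-alpine
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
```

In CI the same file might be driven with something like `docker compose up -d db` followed by `docker compose run --rm app npm test`, so the startup-ordering concern lives in `depends_on` rather than in CI-specific scripting.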
And so now we're shifting from the Python-built Compose, which was a separate package. One of the challenges was getting it into your CI solution, because if you don't have pip, you had to download the binary, and the binary wasn't available for every platform, and it was a PyInstaller build; it gets a little nerdy into how that works. But the team is now able to get unified with it. Now that it's in Golang, and it's plugged right into the Docker command line, it hopefully will be easier to distribute, easier to use. >>And you won't have to necessarily have dependencies inside of wherever you're running it, because it'll be a statically compiled binary. So I've been playing with that this year, and training myself to go from docker-compose to docker, space, compose. It is a thing; I'm almost to the point of having to write a shell alias for that thing. But I'm excited to see where that's going, cause there's already new features in it, and it does BuildKit by default; like, there's all these things. And I love BuildKit; we could make a whole session on BuildKit. In fact, maybe going on right now, or right around this time, there is a session from Solomon Hykes, the co-founder of Docker, former CTO, on BuildKit, using some other tool on top of BuildKit. >>So that would be interesting for those of you that are not watching that one, cause you're here; go check that one out later. All right. So another good question was caching. Another area where there are probably no wrong answers, and everyone has a different story. So the question is: what are your thoughts on CI build caching? There's often a debate between security, reproducibility, and build speeds. This is from Quentin; thank you for this great question.
>>I haven't found a good answer so far. I will just throw my hat in the ring and say that the more times you build, like if you're trying to build every commit, or if you're building many times a day, the more caching you need. So the more times you're building, the more caching you're likely going to want. And in most cases caching doesn't bite you in the butt, but it can. Yeah, can we get into that a bit? >>I'm going to quote Carlos again and say it depends, you know, on what you're trying to build, and I'm quoting you, Carlos. It's going to depend on the frequency that you're building and how you're building. There are instances where you would want to take advantage of caching functionality for the build itself. But as you mentioned, there could be some instances where you would want to disable any caching, because you actually want to pull new packages, or there could be some disadvantages related to security aspects, where using a cached version of an image layer, for example, could be a problem. And if you have a fleet of build engines and you don't have a good grasp of where things are being cached, you would have to disable caching in those instances. So it would depend. >>Yeah, it's funny, you have that problem on both sides of caching. There are things, especially in the Docker world, that will cache automatically, and then you maybe don't realize that some of that caching could be bad; it's actually using old assets, old artifacts. And then there are times where you would expect it to cache, and it doesn't cache.
And then you have to do something extra to enable that caching, especially when you're dealing with a cluster of CI servers, right? And in the cloud, the whole clustering problem with caching is even more complex. But yeah, >>But that's when, >>You know, ever since I started using BuildKit and enabled BuildKit, its smartness in detecting where in the build process it needs to cache, as well as the process itself, I don't think I've seen any other approach that comes close to how efficient that process can become, how much time it can actually save. So that's been my default approach, unless I actually need something where I would intentionally disable caching for that purpose. But the benefits, at least for me, of how BuildKit processes my builds, using the cache and how it detects the differences in the assets within the Dockerfile, have pretty much outweighed the disadvantages it brings. So take it case by case, and based on that, determine if you want to use it, but I definitely recommend enabling it. >>In the absence of a reason not to, I definitely think that it's a good approach in terms of speed. Yeah, I say you cache until you have a good reason not to, personally. >>Cache by default. There you go. I think you cache by default. Yeah. And the trick is, well, one, it's not always enabled by default, especially when you're talking about cross-server. So that's a complexity for your sysadmins, or if you're on the cloud, you know, it's usually just an option.
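To make the BuildKit point concrete, here is a small Dockerfile sketch using BuildKit's cache mounts (the base image and package manager are just examples); the `--mount=type=cache` line keeps the download cache between builds without baking it into a layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .

# BuildKit persists /root/.cache/pip across builds, so unchanged
# dependencies are not re-downloaded, but the cache never ends up
# in the final image.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Built with `DOCKER_BUILDKIT=1 docker build .` (or `docker buildx build`); and for the "disable caching" case Nico mentions, `docker build --no-cache --pull .` forces fresh layers and a fresh base image.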
Um, I think this also veers a little bit into: the more you cache, in a lot of cases with Docker, like if your FROM images aren't checked every single time, if you're not pinning every single thing, if you're not pinning your app version, your npm versions, to the exact lock file definition, there's a lot of these things where I get very grouchy with teams that sort of just let it all be. Like, yeah, we'll just build two images, and they're totally going to have different dependencies, because someone happened to update that thing in apt or npm or whatever. And so I get grouchy about that, cause I want to lock it all down, but I also know that that's going to create administrative burden. >>Like, the team is now going to have to manage versions in a much more granular way. Like, do we need to pin this, too? Do we need to care about curl? You know, all that stuff. So that's kind of tricky. But when you get to certain version problems, uh, sorry, caching problems, you don't want those caches to happen, because if your FROM image changes and you're not constantly checking for a new image, and if you're not pinning that version, then you don't know whether you're getting the latest version of Debian or whatever. So I think there's an art form to it: the more you pin, the less you have to be worried about things changing, but the more you pin all your versions of everything all the way down the stack, the more administrative work, because you're going to have to manually change every one of those. >>So I think it's a balancing act for teams. And as you mature, I find teams tend to pin more, until they get to a point of being more comfortable with their testing.
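As an illustration of that pinning trade-off, a locked-down Dockerfile might look like this sketch (the tag and filenames are hypothetical, just to show the shape):

```dockerfile
# A loose version of this file would say "FROM node:alpine" and
# "RUN npm install" -- convenient, but two builds on different days
# can pull different Node versions and different npm packages.

# Pinned: an exact base tag plus a lockfile-driven install makes the
# rebuild repeatable, at the cost of bumping these versions yourself.
FROM node:16.13.0-alpine3.14
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci          # installs exactly what package-lock.json records
COPY . .
CMD ["node", "server.js"]
```

Pinning the base image by digest (`FROM node@sha256:...`) goes one step further and freezes even the tag's contents; that's the far end of the spectrum Brett describes, with the most administrative upkeep.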
So the other side of this argument is, if you trust your testing, and you have better testing, the subtle little differences in versions are less likely to have to be pinned, because you can get away with those minor or patch-level version changes if you're thoroughly testing your app, because you're trusting your testing. And this gets us into a whole nother rant, but, uh, yeah. >>But talking about pinning versions, if you've got a lot of dependencies, isn't that when you would want to use the cache the most, and not have to rebuild all those layers? Yeah. >>But if you're not pinning to the exact patch version and you are caching, then you're not technically getting the latest versions, because it's not checking all the time. It's weird; there's a lot of this subtle nuance that people don't realize until it's a problem. And that's part of the tricky part of all this stuff: sometimes Docker can be almost so much magic out of the box that you get all this and it all works. And then day two happens, and you build it a second time, and you've got a new version of OpenSSL in there, and suddenly it doesn't work. So anyway, that was a great question. I've got another question on this, uh, from Heavy: where do you put testing in your pipeline? Like, testing the code, cause there's lots of types of testing, and this pipeline gets longer and longer, with Docker building images as part of it. And so he says, before staging, or after staging but before production, where do you put it? >>Oh man. Okay. So my main thought on this is, and of course this is kind of religious flame bait, so sure, you know, people are going to come into the conversation saying Carlos is wrong, but here's how I like to think about it. Pretty much in every stage, or every environment that you're going to be deploying your app into, or that your application is going to touch.
My idea is that there should be a build of a Docker image that has your application code in it, along with its dependencies; there's testing that tests your application; and then there's a deployment that happens into whatever infrastructure there is. Right? The testing can get tricky, though, and the type of testing you do, I think, depends on the environment that you're in. So let's say, for example, you're a team and you have a main branch, and then you have feature branches that merge into the main branch. >>You don't have like a pre-production branch or anything like that. So in those feature branches, whenever I'm doing CI that way, so I know when I cut my pull request that I'm going to merge into main and everything's going to work, in my feature branches I'm going to want to probably just run unit tests and maybe some component tests, which are just, you know, testing that your app can talk to another component or another dependency, like maybe a database. Tests like that, that don't take a lot of time, that are fast. A lot of that would be done at the feature branch level, in my opinion. But when you're going to merge that feature branch into main as part of a release, in that activity you're going to want to be able to do integration tests, to make sure that your app can actually talk to all the other dependencies that it talks to. >>You're going to want to do an end-to-end test or a smoke test, just to make sure that, you know, someone that actually touches the application, if it's like a website, can actually use the website as intended and it meets the business cases and all that. And you might even have testing like performance testing, load testing, or security testing, compliance testing, that would want to happen, in my opinion, when you're about to go into production with a release, because those are going to take a long time. Those are very expensive.
You're going to have to cut new infrastructure, run those tests, and it can become quite arduous. And you're not going to want to run those all the time. You won't have the resources, builds will be slower, releases will be slower; it will just become a mess. So I would want to save those for when I'm about to go into production, instead of doing them every time I make a commit, or every time I'm merging a feature branch into a non-main branch. That's the way I look at it, but everybody does it different; there are other philosophies around it. Yeah. >>Well, I don't disagree with your build, test, deploy. I think if you're going to deploy the code, it needs to be tested at some level. I mean, less so the same. I hate the term smoke test, cause it gives a false sense of security, but you have some minimal amount of tests. And I would expect the developer on the feature branch to add new tests that test that feature, and that would be part of the PR, and those tests would need to pass before you can merge it to master. So I agree that there are tests that you want to run at different stages, but the earlier you can run the tests before going to production, the fewer issues you have, and the easier it is to troubleshoot. And I kind of agree with what you said, Carlos, about the longer-running tests, like performance tests and things like that, waiting until the end. >>The only problem is, when you wait until the end to run those performance tests, you kind of end up deploying with whatever performance you have. It's almost just information gathering. So if you don't run your performance tests early on, and I don't want to go down a rabbit hole, but performance tests can be really useless if you don't have a goal, where it's just information gathering: this is the performance. Well, what did you expect it to be? Is it good? Is it bad? They can get really nebulous.
So if performance is really important, you're going to need to come up with some expectations, preferably, you know, set at the business level: what are our SLAs, what are our response times, and have something to shoot for. And then, before you're getting to production, if you have targets, you can test before staging, you can tweak the code before staging, and move that performance initiative, sorry, Carlos, a little to the left. But if you don't have performance targets, then it's just a checkbox. So those are my thoughts. I like to test before every deployment. Right? >>Yeah. And you know what, I'm glad that you brought up SLAs and performance, and, you know, the definition of performance, because one of the things that I've seen when I work with teams is that oftentimes another team runs the performance and load tests, and the development team doesn't really have too much insight into what's going on there. And usually, when I go to the performance team and say, hey, how do you run your performance tests, it's usually just a generic solution for every single application that they support, which may or may not be applicable to the application team that I'm working with specifically. So I think, and I'm not going to dig into the rabbit hole of SRE, but it is a good bridge into SRE, when you start trying to define what reliability means, right? >>Because the reason why you test performance is to test reliability: to make sure that when you cut that release, customers who go to your site or use your application aren't going to see regressions in performance, and are not going to either go to another website or, you know, lodge an SLA violation or something like that. It does bridge really well with defining reliability and what SRE means.
And when you start talking about that, that's when you start talking about how often do I test the reliability of my application, right? Like, do I have nightly tests in CI that ensure that my main branch, or, you know, some important branch, is meeting SLAs, is meeting SLOs, service level objectives? Or, you know, do I run tests that ensure that my SLAs are being met in production? >>Like, do I do things like game days, where I test, hey, if I turn something off, or if I deploy this small broken code to production, what happens to my performance? What happens to my security and compliance? You can go really deep into creating really robust tests that cover a lot of different domains. But I like just using build, test, deploy as the overall answer to that, because I find that you're going to have to build your application first, you're going to have to test it after you build it, and then you're going to want to deploy it after you test it. And that order generally ensures that you're releasing software that works. >>Right. Right. Um, I was going to ask one last question. It's going to have to be like a one-sentence answer for each one of you. This is: do you lint? And if you lint, do you lint all the things? If you do, do you fail the linters during your testing? Yes or no? >>I think it's going to depend on the culture. I really do. Sorry about it. >>If we have a, you know, a hook on the git commit, then theoretically the developer can't get code there without running the linter anyway. >>So, right, right. True. Anyone else? Any thoughts on that? Linting? >>Nice. I saw an additional question on linting.
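A sketch of the build, test, deploy ordering the panel keeps coming back to, written as a GitHub Actions workflow (the job names, image name, and deploy step are placeholders, and each job here runs on its own runner, so in a real pipeline you'd push the built image somewhere the later jobs can pull it):

```yaml
# .github/workflows/ci.yml -- illustrative only
name: ci
on:
  pull_request:          # feature branches: fast tests only
  push:
    branches: [main]     # merges to main: full pipeline

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: docker build -t myapp:${{ github.sha }} .

  unit-test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: docker compose run --rm app npm test

  integration-test:
    # only on the way to a release, since these are slower and costlier
    if: github.ref == 'refs/heads/main'
    needs: unit-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: |
          docker compose up -d db
          docker compose run --rm app npm run test:integration

  deploy:
    if: github.ref == 'refs/heads/main'
    needs: integration-test
    runs-on: ubuntu-latest
    steps:
      - run: echo "push the image and deploy here"
```

The `needs:` chain encodes Carlos's point that the order itself (build, then test, then deploy) is what keeps you from releasing something untested.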
And in the chat: would you do the linting in a multi-stage build? You know, I was wondering also what others think about that. Like, typically with multi-stage builds, the most common use case I've seen is just to minimize the image size and produce a final thin image. So if it's not for that, I haven't seen a lot of teams or individuals who are actually linting within a multi-stage build. There's nothing really against it, but I think the number one purpose of doing multi-stage has been just producing the minimal image. So I just wanted to kind of combine those two answers in one. >>Yeah, sure. And with that, thank you all for the great questions. We are going to have to wrap this up. We could go for another hour if we all had the time, and if DockerCon was a 24-hour-long event, but sadly it's not. So we've got to make room for the next live panel, which will be Peter coming on and talking about security with some security experts. And I wanted to thank all three of you for being here. Real quick, go around the room: where can people reach out to you? I am @BretFisher on Twitter; you can find me there. Carlos? >>I'm at devmandy, with a Y, D-E-V-M-A-N-D-Y, that's me. >>Easiest name ever on Twitter, and Carlos and DFW on LinkedIn. And I also have a LinkedIn Learning course, so check me out on LinkedIn Learning. >>Yeah, I'm at Nico Kabar, one word; I'll put it in the chat as well, on LinkedIn as well as Twitter. Thanks for having us, Brett. >>Yeah, thanks for being here. Um, and you all stay around.
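For what the chat's multi-stage linting idea can look like, here is a Dockerfile sketch (stage names, tool versions, and the Go toolchain are arbitrary choices for illustration; golangci-lint is just an example linter):

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Lint in its own stage; a lint failure fails the build
FROM golangci/golangci-lint:v1.39 AS lint
WORKDIR /src
COPY . .
RUN golangci-lint run ./...

# Final thin image -- the usual reason people reach for multi-stage
FROM gcr.io/distroless/base
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```

One caveat: with BuildKit, a stage nothing depends on may be skipped entirely, so CI would typically run the lint stage explicitly with something like `docker build --target lint .` before building the final image.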
So if you're in the room with us chatting and you want to go see the next live panel, you've got to go back to the beginning and do that whole thing to find the next one, because this one will end, but we'll still be in chat for a few minutes. I think the chat keeps going. I don't actually know; I haven't tried it yet, so we'll find out here in a minute. But thanks, you all, for being here. I will be back a little bit later, but coming up next on the live stream is Peter with security. Ciao. Bye.
A Day in the Life of an IT Admin | HPE Ezmeral Day 2021
>>Hi, everyone. Welcome to Ezmeral Day. My name is Yasmin Joffey. I'm the director of systems engineering for Ezmeral at HPE. Today we're here joined by my colleague, Don Wake, who is a technical marketing engineer, and who will talk to us about the day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any questions, please feel free to put them in the chat, and we should have some time at the end for some live Q&A. Don, want to go ahead and kick us off? >>All right. Thanks a lot, Yaz. Yeah, my name is Don Wake. I'm the tech marketing guy, and welcome to Ezmeral Day, day in the life of an IT admin, and happy St. Patrick's Day at the same time. I hope you're wearing green; virtual pinch if you're not wearing green. You'll have to look that up if you don't know what I'm talking about. So we're just going to go through some quick things, a discussion of modern business IT needs to kind of set the stage, and go right into a demo. So what is the need here that we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data. They are also application-centric, and the separation of applications and data, or rather the relationship between the two, has never been more important; applications are very data-hungry. These days they consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision and every application has become a huge problem to tackle for every enterprise.
Um, so we look at, for example, data science as one big use case here. It's really a team sport, and today I'm wearing the hat of, perhaps, the operations team, maybe a software engineer working on continuous integration, continuous development, integration with source control, and I'm supporting these data scientists and data analysts. And I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so that they can do their work. So this is the solution that I've been given as an IT admin, and that is the Ezmeral Container Platform. >>Just walking through this real quick: at the top, I'm trying, wherever possible, not to get involved in these guys' lives. So the data engineers, scientists, app developers, DevOps guys, they all have particular needs, and they can access their resources and spin up clusters, or just do work with a Jupyter notebook, or run Spark or Kafka or any of the, you know, popular analytics platforms, just through endpoints, web URLs that we can provide to them, and it's self-service. But on the back end, I, as the IT guy, can make sure the Kubernetes clusters are up and running, I can assign particular access to particular roles, I can make sure the data is well protected, and I can connect them. I can import clusters from public clouds; I can, you know, put my own clusters on premises if I want to. >>And I can do all this through this centralized control plane. So today I'm just going to show you how I'm supporting some data scientists. One of our very own guys is actually doing a demo right now as well, called A Day in the Life of the Data Scientist. And he's on the opposite side, not caring about all the stuff I'm doing in the back end; he's training models and registering the models and working with data inside his, you know, Jupyter notebook, running inferences, running Postman scripts.
And I'm in the background here, making sure that he's got access to his cluster, his storage is protected, his training models are up, and he's got service endpoints connecting him to his source control, making sure he's got access to all that stuff. So he's got a taxi-ride prediction model that he's working on, and he has a Jupyter notebook and models. So why don't we get hands-on, and I'll just jump right over to it. >>Here's the Ezmeral Container Platform. So this is a web UI, the interface into the container platform, our centralized control plane. I'm using my Active Directory credentials to log in here. >>When I log in, I've also been assigned a particular role with regard to how much of the resources I can access. In my case, I'm a site admin; you can see right up here in the upper right hand corner that I'm a site admin, and I have access to lots and lots of resources. The one I'm going to be focusing on today is a Kubernetes cluster. So I have a cluster I can go into, and let's say we have a new data scientist come on board: I can give him his own resources so he can do whatever he wants, use some GPUs, and not affect other clusters. We have all these other clusters already created here. You can see that this is a very busy production system; they've got some dev clusters over here. >>I see here we have a production cluster. So he needs to produce something for data scientists to use; it has to be well protected and not be treated like a development resource. So under this production cluster, I decided to create a new Kubernetes cluster, and literally I just push a button: create Kubernetes cluster. Once I've done that... and I'll just show you some of the screens, and this is a live environment.
So I could actually do it, but all my hosts are used up right now; otherwise I would be able to go in here and give it a name, select some hosts to use as the primary master controller and some workers, answer a few more questions, and then once that's done, I have now created a whole other Kubernetes cluster that I can also create tenants from. >>So tenants are really Kubernetes namespaces. So in addition to taking hosts and making Kubernetes clusters, I can also go to existing clusters and carve out a namespace from one. So I look at some of the clusters that were already created, and here is an example of a tenant that I could have created from that production cluster. And to do that, here in the namespace view I just hit create, and similar to how you create a cluster, you can now carve down from a given cluster, say the production cluster, and give it a name and a description. I can even tell it I want this specific one to be an AI/ML project, which really is our ML Ops license. So at the end of the day, I can say, okay, I'm going to create an ML Ops tenant from that cluster that I created. >>And so I've already created it here for this demo. And I'm going to just go into that Kubernetes namespace now, which we also call a tenant. It's like multitenancy: the name essentially means we're carving out resources so that somebody can be isolated from another environment. At this point I could also give access to this tenant, and only this tenant, to my data scientist. So the first thing I typically do is go in here, where you can actually assign users. Right now it's just me, but if I wanted to, for example, give this to Terry, I could go in here, find another user, and assign him from this list, as long as he's got the proper credentials.
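(A tenant, as Don says, is essentially a Kubernetes namespace. For readers who want to map the UI flow onto plain Kubernetes objects, here is a rough sketch; the resource names are illustrative, not the platform's actual internals.)

```shell
# Sketch only: what the platform's "create tenant" step roughly corresponds
# to in plain Kubernetes terms -- a namespace carved out of an existing cluster.
cat <<'EOF' > mlops-tenant.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mlops-tenant        # illustrative tenant name
  labels:
    project: aiml           # marks it as the AI/ML (ML Ops) tenant
EOF
echo "wrote mlops-tenant.yaml"
# Apply it once kubectl is pointed at the production cluster:
# kubectl apply -f mlops-tenant.yaml
```

The platform's UI adds role assignments and quotas on top of this, but the underlying isolation unit is the namespace.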
You can see here that all these other users have Active Directory credentials, and when we created the cluster itself, we also made sure it integrated with our Active Directory, so that only authorized users can get in there. >>Let's say the first thing I want to do is make sure that when I do Jupyter notebook work, or when Terry does, I connect him straight up to the GitHub repository. So he gives me a link to GitHub and says, hey man, this is all of my cluster work that I've been doing; I've got my source control there, my scripts, my Python notebooks, my Jupyter notebooks. So I simply create a configuration: I say, okay, here's a Git repo, here's the link to it, I can use a token, here's his username, and I can now put in that token. So this is actually a private repo, and we're using a token, a standard Git interface. And the cool thing is that after that, you can go in here and actually copy the authorization secret. >>And this gets into the Kubernetes world. If you want to make sure you have secure integration with things like your source control, or perhaps your Active Directory, that's all maintained in secrets. So you can take that secret, and when I then create his notebook, I can put that secret right in here in this launch YAML. I say, hey, connect this Jupyter notebook up with this secret so he can log in. And once I've launched this Jupyter notebook cluster, it is now, within my Kubernetes tenant, really a pod. If I want to, I can go right into a terminal for that Kubernetes tenant and run kubectl; these are standard, CNCF-certified Kubernetes commands: kubectl get pods. When I do this, it'll tell me all of the active pods, and within those pods, the containers that I'm running. >>So I'm running quite a few pods and containers here in this artificial intelligence / machine learning tenant.
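(The authorization secret Don copies is a standard Kubernetes Secret. A hypothetical sketch of the flow, storing a GitHub token as a Secret and referencing it from a notebook pod spec; the names, image, and fields are illustrative, not the platform's actual launch YAML.)

```shell
# Hypothetical sketch: a Secret holding the GitHub token, referenced from
# the notebook pod spec so the notebook can reach the private repo.
cat <<'EOF' > github-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-auth
  namespace: mlops-tenant
type: Opaque
stringData:
  username: terry                # illustrative user
  token: ghp_exampletoken        # placeholder personal-access token
EOF
cat <<'EOF' > notebook-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: jupyter-notebook
  namespace: mlops-tenant
spec:
  containers:
  - name: notebook
    image: jupyter/base-notebook   # placeholder image
    envFrom:
    - secretRef:
        name: github-auth          # the copied authorization secret
EOF
echo "wrote github-secret.yaml and notebook-pod.yaml"
# kubectl apply -f github-secret.yaml -f notebook-pod.yaml
```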
So that's kind of cool. Also, if I wanted to, I could go straight in and download the config for kubectl, the kubeconfig, and then I can do something like this, where on my own system, where I'm more comfortable, perhaps, I run kubectl get pods. So this is running on my laptop, and I just had to refresh my kubeconfig and give the IP address and authorization information in order to connect from my laptop to that endpoint. So from a CI/CD perspective, from an IT admin's side, he usually wants to use tools right on his desktop. So here I am back in my web browser. I'm also here on the dashboard of this Kubernetes tenant, and I can see how it's doing. >>It looks like it's kind of busy here. I can focus specifically on a pod if I want to; I happen to know this pod is my Jupyter notebook pod. So I'll show how I can enable my data scientist by just giving him the URL, what we call a notebook service endpoint, or notebook endpoint. Just by clicking on this URL, or copying it (it's a link) and emailing it to him, I can say, okay, here's your Jupyter notebook, just log in with your credentials. I've already logged in. And so then he's got his Jupyter notebook here, and you can see that he's connected to his GitHub repo directly. He's got all of the files that he needs to run his data science project within here, and this is really in the data scientist's realm. >>He can see that he has access to centralized storage, and he can copy the files from his GitHub repo to that centralized storage. And these commands are kind of cool: they're little Jupyter magic commands, and we've got some of our own that show that attachment to the cluster. You can see that if you run these commands, they're actually looking at the shared project repository managed by the container platform.
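(Working from the laptop this way is the standard kubeconfig workflow; a sketch, with an illustrative path.)

```shell
# Sketch: point kubectl at the kubeconfig downloaded from the platform UI.
export KUBECONFIG="$PWD/hcp-tenant.kubeconfig"   # illustrative path
echo "kubectl will read: $KUBECONFIG"
# With kubectl installed, standard commands then work against the tenant's
# API endpoint, e.g.:
# kubectl get pods --namespace mlops-tenant
```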
So, just to show you that again, I'll go back to the container platform. And in fact, the data scientist could do the same thing from his notebook back to the platform. So here's this project repository, and this is the other big point. So now, putting on my storage admin hat: I've got this shared storage volume that is managed for me by the Ezmeral Data Fabric. >>In here, you can see that the data scientist, from his Git repo, was able, through the Jupyter notebook directly, to copy his code. He was able to run his Jupyter notebook and create this XGBoost model. That file can then be registered in this AI/ML tenant, so he can go in here and register his model. This is really where the data scientist can self-service: kick off his notebooks, even get a deployment endpoint so that he can then run inferences against his model. So here again is another URL that you could take and put into, say, a Postman REST call and get answers. But let's say he's been doing all this work and I want to make sure that his data is protected. How about creating a mirror? >>So if I want to create a mirror of that data, I now go back to this other piece, and this is the data fabric, embedded in a very special cluster called the Picasso cluster. It's a version of the Ezmeral Data Fabric that allows you to launch what was formerly called MapR as a Kubernetes cluster. And when you create this special cluster, every other cluster that you create automatically gets things like that tenant storage I showed you to create a shared workspace, and it's automatically managed by this data fabric. You're even given an endpoint to go into the data fabric and use all of the awesome features of the Ezmeral Data Fabric. So I can just log in here, and now I'm at the data fabric web UI to do some data protection and mirroring. >>Let's go over here.
Let's say I want to create a mirror of that tenant. Now, I forgot to note the name of the volume that I'm playing with here, so I'm going to go back to my tenant. In my AI/ML tenant, I'm going to go to my source control, my project repository that I want to protect, and I see that the Ezmeral Data Fabric has created tenant-30 as a volume. So I'll go back to my data fabric here, and I'm going to look for tenant-30. And if I want to, I can go into tenant 30. >>Down here, I can look at the usage. I've used very little of the allocated storage, but you know what, let's go ahead and create a volume to mirror that one. So, very simple web UI: create volume. I go in here and I say I want to do a tenant-30 mirror, and I say it's a mirror volume. I want to use my Picasso cluster, and I want to use tenant 30. So now it's actually looking up tenant 30 in the data fabric's database, so it knows exactly which one I want to use. I can go in here and name it, say, HCP tenant-30 mirror; I can give it whatever name I want, and this path here. >>And that's a whole other demo: this could be in Tokyo, this could be mirrored to all kinds of places all over the world, because this is truly a global namespace, which is a huge differentiator for us. In this case, I'm creating a local mirror. And I can go down here and add auditing and encryption, I can do access control, I can change permissions; full-service interactivity here. And of course this is using the web UI, but there are also REST API interfaces as well. So that is pretty much the brunt of what I wanted to show you in the demo. We got hands-on, and I'm just going to throw this up real quick and then come back to Yasmin.
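(For reference: the same mirror can be driven outside the web UI. In MapR-derived data fabrics the CLI form looks roughly like the following; the volume and cluster names are illustrative, and the command is shown rather than executed.)

```shell
# Sketch of mirror-volume creation via the data fabric CLI (maprcli).
# Run on a node with the CLI installed; names below are illustrative.
SRC_VOLUME="tenant-30"
MIRROR_NAME="tenant-30-mirror"
CLUSTER="picasso"                # the special data-fabric cluster
echo "would run: maprcli volume create -name ${MIRROR_NAME}" \
     "-path /${MIRROR_NAME} -type mirror -source ${SRC_VOLUME}@${CLUSTER}"
```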
Let's see if there are any questions received from anybody watching. >>Yeah, we've got a few questions. We can take some time to hopefully answer a few. So, it does look like you can integrate or incorporate your existing GitHub to be able to extract shared code or repositories, correct? >>Yeah. So we have that built in, and it can be either GitHub or Bitbucket; it's a pretty standard interface. Just like you can go into any given GitHub and do a clone of a repo and pull it into your local environment, we integrated that directly into the GUI, so that you can say to your AI/ML tenant, to your Jupyter notebook: here's my GitHub repo, when you open up my notebook, just connect me straight up. So it saves you some steps there, because Jupyter notebooks are designed to be integrated with GitHub. So we have GitHub integrated in as well, or Bitbucket. Right. >>Another question, around the file system: has the MapR file system that was carried over been modified in any way to run on top of Kubernetes? >>So I would say that the MapR file system, the data fabric, what I showed here is the Kubernetes version of it. It gives you a lot of the same features, but if you need to run it on bare metal, maybe you have performance concerns, you can also deploy it as a separate bare metal instance of the data fabric. This is just one way you can use it, integrated directly into Kubernetes; it really depends on the needs of the user. The data fabric has a lot of different capabilities, but this version has a lot of the core file system capabilities, where you can do snapshots and mirrors, and it's of course striped across multiple disks and nodes. And the MapR data fabric has been around for years.
It's designed for integration with these analytic-type workloads. >>Great. You showed us how you can manage Kubernetes clusters through the Ezmeral Container Platform UI. But the question is, can you control who accesses which tenant, I guess, namespace, that you created? And also, can you restrict, or inject, resource limitations for each individual namespace through the UI? >>Oh yeah. So that's a great question. Yes to both of those. As a site admin, I had lots of authority to create clusters and go into any cluster I wanted, but typically, for the data scientist example I used, I would create a user for him. There are a couple of ways you can create users, and it's all role-based access control. I could create a local user and have the container platform authenticate him, or I can integrate directly with Active Directory or LDAP, even including which groups he has access to. And then in the user interface, as the site admin, I could say he gets access to this tenant and only this tenant. The other thing you asked about is limitations. So when you create the tenant, to prevent that noisy-neighbor problem, you can go in and create quotas. >>I didn't show the process of actually creating a tenant, but integral to that flow is: okay, I've defined which cluster I want to use, I've defined how much memory I want to use; so there's a quota right there. You could say, hey, how many CPUs am I taking from this pool? And that's one of the cool things about the platform: it abstracts all that away. You don't have to know exactly which host; you can create the cluster and select specific hosts, but once you've created the cluster, it's now just a big pool of resources.
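(The noisy-neighbor quotas Don describes map conceptually onto Kubernetes ResourceQuota objects scoped to the tenant's namespace; a hypothetical sketch with illustrative numbers.)

```shell
# Sketch: a per-tenant quota, expressed as a plain Kubernetes ResourceQuota.
cat <<'EOF' > tenant-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mlops-tenant-quota
  namespace: mlops-tenant          # illustrative tenant namespace
spec:
  hard:
    requests.cpu: "50"             # e.g. 50 of the cluster's 100 CPUs
    requests.memory: 256Gi
    requests.nvidia.com/gpu: "2"   # keep one user from hogging the GPUs
EOF
echo "wrote tenant-quota.yaml"
# kubectl apply -f tenant-quota.yaml
```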
So you can say: Bob over here is only going to get 50 of the hundred CPUs available, he's only going to get X gigabytes of memory, and he's only going to get this much storage that he can consume. You can then safely hand something off and know they're not going to take all the resources, especially the GPUs, where those can be expensive; you want to make sure that one person doesn't hog all the resources. So yes, absolutely, quotas are built in there. >>Fantastic. Well, I think we are out of time. We have a list of other questions, and we will absolutely reach out and get all your questions answered, for those of you who asked questions in the chat. Don, thank you very much. Thanks, everyone else, for joining. Don, will this recording be made available for those who couldn't make it today? >>I believe so. Honestly, I'm not sure what the process is, but yeah, it's being recorded, so they must have done that for a reason. >>Fantastic. Well, Don, thank you very much for your time, and thank everyone else for joining. Thank you.
Day One Wrap | PentahoWorld 2017
>> Announcer: Live from Orlando, Florida. It's TheCUBE, covering PentahoWorld 2017. Brought to you by Hitachi Vantara. >> Welcome back to TheCUBE's live coverage of PentahoWorld, brought to you by Hitachi Vantara. We are wrapping up day one. I'm your host, Rebecca Knight, along with my cohosts today, James Kobielus and Dave Vellante. Guys, day one is done. What have we learned? What's been the most exciting thing that you've seen at this conference? >> The most exciting thing is that clearly Hitachi Vantara, of which Pentaho is a centerpiece, is very much building on its strong background and legacy in open analytics, and pushing towards open analytics in the Internet of Things: their portfolio, the whole edge-to-outcome theme, with Brian Householder doing a sensational keynote this morning laying out their strategic directions. Now, Dave had a great conversation with him on TheCUBE earlier, but I was very impressed with the fact that they've got a dynamic leader and a dynamic strategy, and, just as important, Hitachi, the parent company, has clearly put together three product units that make sense. You've got strong data integration, you've got a strong industrial IoT focus, and you've got a really strong predictive and machine learning capability with Pentaho for driving the entire pipeline towards the edge. That to me shows they've got all the basic strategic components necessary to seize the future, further possibilities. Now, they brought a lot of really good customers on, including our latest one, Ella Hillel from IMS, to discuss exactly what they're doing in that area. So I was impressed with the amount of solid substance, of them seizing the opportunity. >> Well, so I go back two years, to when TheCUBE first did PentahoWorld 2015, and the story then was pretty strong.
You had a company in big data that seemingly was successful; they had a lot of good customer references, they'd achieved escape velocity, and they had a nice exit under Quentin Gallivan, who was the CEO at the time, and the team. And they had a really good story, I thought. But I was like, okay, now what? We heard about, conceptually, bringing the industrial internet and analytics together, and then it kind of got quiet for two years. And now you're starting to see the strategy take shape, in typical Hitachi form. They tend not to just rush into big changes and transformations like this; they've been around for a long time, a very thoughtful company. I kind of look at Hitachi Limited, in a way, as an IBM-like company of Japan, even though they do industrial equipment and IBM's obviously in a somewhat different business, but they're very thoughtful. And so I like the story. The problem I see is that not enough people know about the story. Brian was very transparent this morning: how many people do business with Hitachi? Very few. And so I want to see the ecosystem grow. The ecosystem here is Hitachi and a couple of big data players. I don't see any reason why they can't explode this event and the ecosystem around Hitachi Vantara to fulfill its vision. I think that's a key aspect of what they have to do. >> I want to see-- >> What will be the tipping point? Just to get, as you said, the brand awareness; every customer we had on the show really said, when they heard that, their eyes lit up: oh wow, we could actually be doing more stuff with Hitachi, there's more here. >> I want to see a strong developer focus, >> Yeah. >> Going forward, that focuses on AI and deep learning at the edge. I'm not hearing a lot of that here at PentahoWorld right now. So that to me is a strategic gap in what they're offering.
When everybody across IT and data is going real deep on frameworks like TensorFlow, for building ever more sophisticated data-driven algorithms, with the full training pipeline and deployment and all that, I'm not hearing a lot of that from the Pentaho product group or from the Hitachi Vantara group here at this event. So next year at this event I would like to hear more about what they're doing in that area. For them to really succeed, they're going to have to have a solid strategy to migrate up their open stack to include, like I said, a bit of TensorFlow, MXNet, or some of the other deep learning toolkits that are becoming essentially de facto standards with developers. >> Yeah, so I think the vision's right. Many of the pieces are in place, and the pieces that aren't there I'm actually not that worried about, because Hitachi has the resources to go get them: either build them organically, which it has proven it can do over time, or bring them in by acquisition. Hitachi is a decent acquirer of companies. Its content platform came in on an acquisition; I've seen them do some hardware acquisitions, some have worked, some haven't. But there are a lot of interesting software players out there, and I think there are some values, frankly. Tons of money poured into this big data, open source world, and it's hard to make money in open source, which means I think companies like Hitachi could pick off some M&A and find some value. Personally, I think if the number's right, at a half a billion dollars, that was pretty good value for Hitachi. You see all these multi-billion-dollar acquisitions going left and right. And the other thing is the fact that Hitachi, under the leadership of Brian Householder and others, was able to shift its model from 80% hardware to now 50/50 software and services. I'd like to dig into that a little bit.
They're a public company, but you can't really peel the onion on the Hitachi Vantara side, so it kind of is what they say it is; I would imagine that's a lot of infrastructure software, kind of like EMC's a software company. >> James: Right. >> But nonetheless, they're moving towards a subscription model; they're committed to that. And I think the other thing is the customers. We come to a lot of shows where vendors struggle to get customers on with substantive stories, yet virtually every customer we talked to today was like: here's how I'm using Pentaho, here's how it's affecting us. Not super sexy stories yet, I mean, that's where the IoT and the edge piece come in, but for fundamental plumbing around big data, Pentaho seems like a pretty important piece of it. >> Their fundamental-- >> Their fundamental plumbing that's really saving them a lot of money too, and delivering a big ROI. >> They're fairly blue-chip as a solution provider of a full core data portfolio with Pentaho. I think of them in many ways sort of like SAP: not a flashy vendor, but very much a solid blue-chip in their core markets. >> Right. >> I'm just naming another vendor that I don't see with a strong AI focus yet. >> Yeah. >> Pentaho: nothing to sneeze at when you have one customer after another, like we've had here, rolling out some significant work they've been doing with Pentaho for quite a while. Not to sneeze at the value they're delivering, but they have to rise to the next level of value before long, to avoid being left in the dust. >> You've got this data; obviously they're going to be capturing more and more data with the devices. >> James: Yeah.
>> And the relationship with Hitachi proper, the elevator makers, is still a little fuzzy to me; I'm trying to understand how that all shakes out. But my question for you, Jim, is: okay, so let's assume for a second they're going to have this infrastructure in place, because they are the industrial internet, and they've got the analytics platform. Maybe there are some holes they can fill in, one being AI and some of the deep learning stuff. Can't they get that somewhere? I mean, there's so much action going on-- >> Yes. >> In the AI world, can't they bring that in and learn how to apply it over time? >> Of course they can. First of all, they can acquire, and they can tap their own internal expertise. They've got, like Mark Hall for example on the panel, obviously a deep bench of data scientists like him who can take it to that next level; that's important. I think another thing Hitachi Vantara needs to do to take it to the next level is build a strong robotics portfolio. If we're really talking about the industrial Internet of Things, it's robotics with AI inside. I think they're definitely a company that could go there fairly quickly, with a wide range of partners they can bring in, or acquire, to get fairly significant in terms of not just robotics in general, but robotics for a broad range of use cases where the AI is not so much the supervised learning and stuff that involves training, but things like reinforcement learning. There's a fair amount of smarts in academe on reinforcement learning for embodied cognition, for robots; that's the untapped space within the broader AI portfolio: reinforcement learning.
If somebody's going to innovate and differentiate themselves in the enterprise, in terms of leveraging robotics in a variety of applications, it's going to be somebody with a really strong grounding in reinforcement learning, productizing that, and baking it into an actual solution portfolio. I don't yet see the Googles and the IBMs and the Microsofts going there, so if these guys want to stand out, that's one area they might explore. >> Yeah, and to pick up on that, I think this notion of robotic process automation, that market's going to explode. We were at a conference this week in Boston, the data rowdy of Boston, the chief data officer conference at the Park Plaza; 20 to 25% of the audience, the CDOs in the audience, had some kind of RPA, robotic process automation, initiative going on, which I thought was astoundingly high. And so it would seem to me that Hitachi's going to be in a good position to capture all that data. The other thing that Brian stressed, which a lot of companies without a cloud will stress, is that it's your data: you own the data; we're not trying to resell that data, monetize that data, repackage that data. I pushed him a little bit on, well, what about that data training models, and where do those models go? And he said, look, we are not in the business of taking models, like a big consultancy might, and bringing them over to other competitors. Now, Hitachi does have a consultancy, but it's focused; and as Brian said in his keynote, you have to listen to what people say and then watch them to see how they act. >> Rebecca: Do they walk the walk? >> How they respond. >> Right. >> And so from that you have to make your decision, but I do think that's going to be a very interesting field to watch, because Hitachi's going to have so much data in their devices.
Of course they're going to mine that data for things like predictive analytics. Those devices are going to be in factories, they're going to be in ecosystems, and there's going to be a battle for who owns the data; it's going to be really interesting to see how that shakes out. >> So I want to ask you both: as you've both said, we've had a lot of great customer stories here on TheCUBE today. We had a woman who does autonomous vehicles, we had a gamer from Finland, we had a benefits scientist out of Massachusetts. Who were your favorite customer stories, and what excited you most about them? >> Well, I know you like the car woman. >> Well, yeah, the car woman, >> The car woman. >> Ella Hillel. >> Ella Hillel, yes. >> The PhD. I found many things fascinating there; I was on a panel with Ella, and she was on TheCUBE as well. What I found interesting: I was expecting her to go to town on all things autonomous driving, self-driving vehicles, and so forth, but she actually talked about the augmentation of the driver and passenger experience through analytics, dashboards that help not only drivers but insurance companies and fleet managers do behavioral modification, to help them modify behavior to get the most out of their vehicular experience, like reducing wear and tear on tires by taking better roads, or revising routes. I thought that was kind of interesting: build more of the recommendation-engine capability into the overall driving experience. That depends on an infrastructure of predictive analytics and big data, but also metered data coming from the vehicle and so forth. I found that really interesting, because they're clearly doing work in that area, and it's an area where you don't need levels one through five of self-driving vehicles to get that.
You can get that at any level of that whole model, just by bringing those analytics, hopefully safely, into your current driving experience, maybe through a heads-up display that's integrated with your GPS or whatever it might be. I found that interesting because that's something you could roll out universally, and it can actually make a huge difference in, A, safety, B, people's pleasure with the driving experience, Fahrvergnügen, that's a Volkswagen word, and then also, C, how people make the best use of their own vehicular assets in an era where people still mostly own their own car. >> Well for me, if there's gambling involved-- >> Rebecca: You're there. >> It was the gaming, not only because of the gambling, and we didn't find out how to beat the house, Leonard, maybe next time, but it was confirmation of the three-tier data model from edge-- >> James: Yes. >> To gateway to cloud, and that the cloud is two vectors, the on-premise and the off-premise cloud, and the fact that as a gaming company who designs their own slot machines, it's an edge device, and they're basically instrumenting that edge device for real-time interactions. He said that most of the data will go back; I'm not sure. Maybe in that situation it might, maybe all the data will go back, like weather data, it all comes back. But generally speaking, I think there's going to be a lot of analog data at the edge that's going to be digitized that maybe you don't have to save and persist. But anyway, confirmation of that three-tiered data model I think is important, because that is how Brian talked about it. We all know the pendulum has swung away from the mainframe to decentralized, back to the centralized data center, and now it's swinging again to a much more distributed data architecture. So it was good to hear confirmation of that, and again, it's really early innings in terms of how that all shakes out.
>> Great, and we'll know more tomorrow at Pentaho day two, and I look forward to being up here again with both of you tomorrow. >> Likewise. >> Great, this has been theCUBE's live coverage of PentahoWorld, brought to you by Hitachi Vantara. I'm Rebecca Knight, for Jim Kobielus and Dave Vellante; we'll see you back here tomorrow.
Linda Tadic, Digital Bedrock - NAB Show 2017 - #NABShow - #theCUBE
>> Narrator: Live from Las Vegas, it's theCube, covering NAB 2017, brought to you by HGST (lively music) >> Hey welcome back everybody, Jeff Frick here at theCube. We're here at NAB 2017 again with 100,000 of our friends. It's a crazy, busy conference. I think it's got three halls, two levels on each hall, more stuff than you could ever take in in four days, but we're going to do our best to give you a little bit of the inside, and we're going to go down a completely different path here with our next guest. We're really excited to have Linda Tadic on, she's the founder and CEO of Digital Bedrock. Linda, welcome. >> Thank you Jeff, happy to be here. >> Jeff: Absolutely. So for those that aren't familiar with your company, give us a little bit of an overview. >> Well what we do at Digital Bedrock is we provide the managed digital preservation services that are required to keep digital content alive. [Jeff] Okay, so managed digital preservation. [Linda] Yes. >> Okay, so what does that mean? >> Managed, meaning we do the work for you. You just have to give us the files and we take care of it, so you don't have to license software, you don't have to train people, you don't have purchase all the infrastructure, no big CAPEX, we just do the work for you with our staff and infrastructure. >> Jeff: Okay. >> Digital, meaning its all digital content. Any format, any kind of content, we don't care. And then preservation. And so what that means is keeping the content alive so it can be used in a hundred years. And that's not just storing it, because that means you have to know everything about how that file was created so that you can monitor obsolescence, because digital files will become obsolete over time. >> So it's a really different kind of spin because we're here in the HGST booth, and a lot of talk about storage or storage people all around us. 
But when you talk about archiving and preservation, how do you delineate that from just, it's a backup copy, I know I have a backup copy on a server someplace? >> Yeah, so the preservation part of it is it has to live somewhere. I mean the bits have to live on something, and so it can be spinning disk, it can be solid state, it can be tape, and so storing it is the easy part actually, but then the hard part is the managing it. So you want to make sure those bits are okay, that the bits are healthy, so you will be doing fixity checks over time, according to a schedule, and then you want to also make sure that the file formats themselves, so everybody's concerned about migrating the data onto other storage media in the future 'cause you just have to do that, end of life, you have to move things along, but it's those formats that can become obsolete over time, which means let's say you have a format, a specific format, which requires a software to render it, which requires an operating system for it to run, which requires a chip or a piece of hardware or a file system to run. So what you have to do is you have to monitor all those vulnerabilities in order to keep that format alive. So you have to either migrate it or you can emulate it, or use another software, or you can do nothing and just keep the bits alive until you can do something with it. >> So you'll do those things, so you'll, if there's a new file format that comes out next year to NAB that's the new preferred, the format, you'll take some of those assets you have in your protection, and go ahead and recreate them in whatever feels like a viable format going forward? >> Actually we don't do that. We don't do the transcoding work. What we do is we monitor it. We have a separate database that's tapped into our support database. It's called the Digital Object Obsolescence Database, or the DUDE is what we call it >> That's a good thing. 
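The fixity checks described here boil down to recording a cryptographic digest for each file at ingest, then recomputing the digests on a schedule to catch silent bit rot. A minimal sketch of that idea, with made-up file names and an in-memory manifest standing in for a real preservation system:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def fixity_check(files: dict, manifest: dict) -> list:
    """Return the paths whose bits no longer match the ingest manifest.

    files    -- mapping of path -> current file bytes
    manifest -- mapping of path -> digest recorded at ingest
    Missing files count as damaged too.
    """
    damaged = []
    for path, expected in manifest.items():
        data = files.get(path)
        if data is None or sha256_of(data) != expected:
            damaged.append(path)
    return damaged

# At ingest: record a digest for every file.
originals = {"reel1.mov": b"\x00\x01\x02", "reel2.mov": b"\x03\x04"}
manifest = {path: sha256_of(data) for path, data in originals.items()}

# On the next scheduled check: one file has silently flipped a bit.
current = {"reel1.mov": b"\x00\x01\x02", "reel2.mov": b"\x03\xff"}
print(fixity_check(current, manifest))  # -> ['reel2.mov']
```

Any change to the stored bits, however small, produces a different digest and flags the file so it can be repaired from a healthy copy.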
>> So in the DUDE it's monitoring all those, what version of a software can be used to be able to render a file. So if something in our database suddenly is flagged as being, uh-oh, it's endangered now, because one of those vulnerability factors has now been deprecated, we'll notify the client and we'll say, you have all these files you've given us to preserve that are now endangered. But we can't just do the media transcoding, because those digital objects also then have perhaps these underlying files that feed up into that object. If you change one of those subsidiary files, you can't then render that final object. And so you have to be very careful not to just suddenly flip something and change it. So we tell the client, here's the files and here's all the relations between all the files, and here's what you can do to migrate it or to keep it alive. But we won't do that work for them, because they probably can either do it themselves, they have to choose first of all what they want to do, or they might have a preferred vendor themselves who will do that work for them. >> Jeff: And the other piece you talk about a lot, in doing some research before we sat down, is the metadata, and how important the metadata is. There's a lot of conversation about metadata, especially in media and entertainment, because there's the asset itself, and then you need all this other information. So I wonder if you can give us kind of the 101 on metadata and why it's so important, and maybe not necessarily just the 101, but something a little bit more advanced that people don't think about when they think about metadata. >> Right. I would say that most of the folks here at the event, at NAB, they're thinking about metadata in two ways. One is the description, which is describing the content, so what is the nature of this content, what is it about, what's in it, do you want to search for a particular scene or a particular clip, and that's based on the content.
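The obsolescence monitoring described above can be pictured as walking a chain of dependencies: a format needs a software, the software needs an operating system, the OS needs hardware. A toy sketch of that flagging logic, with invented names and a flat dictionary standing in for the real database:

```python
# Each entry says what an item depends on to keep rendering.
DEPENDS_ON = {
    "camera_raw_v2": "render_suite_9",   # format -> rendering software
    "render_suite_9": "os_acme_7",       # software -> operating system
    "os_acme_7": "chipset_x",            # OS -> hardware
}

# A vulnerability factor that has just been retired.
DEPRECATED = {"os_acme_7"}

def is_endangered(item: str) -> bool:
    """An item is endangered if it, or anything it depends on, is deprecated."""
    node = item
    while node is not None:
        if node in DEPRECATED:
            return True
        node = DEPENDS_ON.get(node)
    return False

print(is_endangered("camera_raw_v2"))  # True: its operating system is gone
print(is_endangered("chipset_x"))      # False: nothing below it is flagged
```

When any link in the chain is deprecated, everything that depends on it gets flagged as endangered, which is the moment to migrate, emulate, or at least keep the bits alive until something can be done.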
They also may be thinking about technical metadata but technical metadata in the sense of interoperability with machines. And so you want to know that the software can work with this or with this system or whatever, and that's why this camera can then work with a certain system, and that's all because of the technical metadata behind the scenes. What they're not thinking about is the metadata that is required to keep that content alive. And that's all those obsolescence factors, and in order to monitor all that obsolescence as we do in the DUDE, is where you need to be able to validate a particular format. And you know immediately, yeah, this was shot with this camera, and it's a certain kind of raw format, it's this version of it, which can only be used in this particular system. >> A lot of complex variables that are moving very very quickly. >> A lot of metadata, yeah. >> I mean in the typical bit of technical metadata we extract off a file, we'll get over 400 bits of metadata and that's not even the descriptive metadata. >> 400 bits, 400 different classifications >> 400 different elements of metadata. And we just pull it off the file. >> Jeff: Wow. >> And if that's not complicated enough, we were talking a little bit before we turned the cameras on about virtual reality and a whole different way of really describing that experience. Probably experience is a better word than asset because there is no asset until you engage with what the software is feeding into your experience. >> It's kind of virtual metadata when you kind of think about it because it's like, so there's a code that creates the software for the virtual reality to all work, it's all required, but the actual experience that is what the human, the person who's using the software and how they're interacting with it, and so that metadata about your experience in the content is in your head. 
Unless you're recording it as you're going, your experience, and so then there's an output of it, but otherwise it's all in your head, in your experience. >> It's fascinating. The other piece we've heard a number of times here is, especially now with all the different content distribution methods, there's many many flavors of the same file. So are you keeping track of all the different variants as well? >> Yeah. And so in fact in the research for the DUDE, 'cause it's humans who are doing the research to add the data to the DUDE, they'll say okay, great, this one software works with all these different operating systems except for this one package that went out, so it's somewhere in the middle, so we can't even say this range from here to here, and we'll work with it, oh no, but there's always an exception in between. So it's very complicated. >> So it's complicated and expensive in a lot of versions, and storage is getting cheaper every day, but it's not free >> right >> and managing is not free, and so it begs a value question, and I'm sure you can bring up all kinds of sad tales of phenomenal assets that were lost in the past. But how are people thinking about the value of these assets so that they feel comfortable making the investment in this preservation and archiving. >> Yeah. Two different mindsets I think that people have to just start adjusting to. One is they're just creating so much data they need to start doing appraisal and retention policies on them. You can't save everything, you shouldn't have to save everything. So that means you should really in reality set those policies at the point of when you're shooting, when you're creating it, so that it's automated, so that it's not at the end of a huge project when you have a petabyte of data there. That's not the time to choose what you want to keep. You need to set that policy in advance and try to automate it. >> So are there best practices? What are some of the best practices? 
Or are there some reference points that people should kind of start from, I guess? >> I think the bottom line that they should be thinking about is, let's say that in a hundred years... so thinking about Paramount. Paramount just had its 100-year anniversary. And they were able to go back to their original nitrates and digitization, and they're showing films that were made a hundred years ago. So what about the content being created now? What if in a hundred years you want to be able to have your own one-hundred-year retrospective? What would you need in order to be able to render the file that you're creating now in order to show it then? So what elements do you need to keep in case you need to restore it or recreate it? So that's one thing you have to think about. >> That feels like it could be a complete rabbit hole though. >> It could be. >> So that's why you have to think about the bottom line, the hundred years. Now of course in a hundred years, who knows, 'cause with all of this artificial intelligence and all of this automated capture, there could be systems that will just recreate it for you. So, you know, I'll be out of business, as I call it, the virtual Linda. I'll be out of a gig in a hundred years. >> So this is a fascinating area. How did you get involved in this area? >> I started out as a creator, so I was a composer and a filmmaker way back when, but then I got into the archival community, the archival field. So I've been working in audiovisual, film, video, audio, and then digital. Really starting in 2000, all my work's been in digital format, doing that preservation, because all of this content is important to me, and whether it's your own personal home videos or images of your kids when they were born, it's all digital, or whatever, to a studio product, a station, government documents, it doesn't really matter.
If that content is important to you, it should be preserved, because it documents your personal history, it documents our cultural history, it documents governments going forward for evidence, for law enforcement, all of that. If it has to be preserved, you have to really focus on that and on how to keep it alive. And it's all important, and that's why I got into it. >> And as you spoke, you're involved in some really interesting cultural heritage preservation, which is a completely different kind of value chain than a movie or my home video of the kids. I wonder if you can kind of talk us through that use case that you described earlier, 'cause this is a very different way to think about virtual reality, preservation, and digital assets. >> Yeah. So I also do some consulting work, and I'm working with this organization in Dunhuong, China, which is in the western part of China, so that's out in the Gobi Desert, far out. So what this organization is in charge of are these caves that were created by Buddhist monks starting in three sixty four A.D., going up to around eleven hundred A.D. Hundreds of caves out in the desert, carved out of sandstone, and the monks would then paint murals, beautiful, incredible murals showing Buddhist culture, history, and the culture of the time. You can see how people lived, how they farmed, 'cause they have that representation on the murals. So the Dunhuong Academy came to me and said they're doing digital capture of the caves, high-res capture of the murals, and they said, Linda, these caves are fifteen-hundred years old. We know they will not be around in fifteen-hundred years, so these digital assets must be around in fifteen-hundred years, 'cause those will be the only representations of these caves that are there. So I'm helping them build a digital repository to keep those digital images alive. Because if they are, they consider them to be the embodiment of the caves.
So I've seen some great examples of virtual reality implementations in the cultural heritage environment, again thinking about some of these critical places around us, in the world and the environment. They won't be around in fifteen-hundred years, either because humans have destroyed them, through the environment, or just natural deterioration and destruction. So what virtual reality can do is go out and capture those environments, capture those sites, so that we can experience them, or people can experience them when those sites are no longer around. If the humans are still around in fifteen-hundred years. >> Fascinating. And what a great application of virtual reality. >> Yes, absolutely. It's my favorite. And entertainment is fun, to pretend you're somewhere, but it's not just to go to a different site, go to a different place. >> I want to shift gears just a little bit. As you've done all this archiving and you look at these old movies, 'cause we're here at NAB and it's all about media entertainment, I'm curious if you have any kind of historical perspective of how the storytelling has changed over time. Is there a consistent thread that you see or just reflection as you've spent so much time with this historical archive footage, that you could share with the audience, that maybe will get them to go look at the ... that aren't opening this weekend at your local cineplex. >> Okay, so think about film. So film in the early days was basically just a representation of theater. Because that was the moving art form of the time. And so it was really static, just one camera standing there and people would act in front of the camera. And then of course that changed what with D.W. Griffith and others to mold the intercutting into the show and then things happening at the same time in different locations, that was really radical in 1912, 1913, just over a hundred years ago. 
And then you go into the golden age of cinema in the '30s and the spectacle, and so it's more, and so now we're in the age of virtual reality where instead of we're being told a story, it's more like we are part of the story and going through that. And we'll see how if people still want to go back and return to "tell me a story," just like when we were little kids we all wanted "tell me a story daddy and mommy," kind of thing so when we're in the theater maybe we want to be told that and just be engrossed in somebody else's story and relax our brains instead of feeling like gosh I just want to rest and relax, do I have to interact with this thing? >> Right. Do I have to work? I'd rather have somebody who's really good at it, like Quentin Tarantino, tell me his interpretation of this story. >> So I'm really curious to see, it's still new with virtual reality and augmented reality to see how it's going to really expand. And people ... it might just be a fad, I know people who don't want to hear that, but it has all these other great uses as a cultural heritage or in gaming and that kind of thing it's totally fun, but for narrative, sometimes you just want a story. >> Well Linda, you're doing great work, so we have to let you get back to the booth so that more people can take advantage and keep track, and I think the word that you used a number of times, keep these things alive for future consumption, not just in cold storage in a vault someplace. >> Yes, absolutely. >> Alright, well thanks again Linda for stopping by. >> Thank you. Thanks so much Jeff. >> Alright. Linda Tadic. I'm Jeff Frick. We're at NAB 2017, you're watching theCube, and we'll be back after this short break. Thanks for watching. (lively music)
Brian Lillie, Equinix | NAB Show 2017
[Announcer] Live from Las Vegas. It's theCUBE. Covering NAB 2017. Brought to you by HGST. >> Welcome back everybody, Jeff Frick here with theCUBE. We're at NAB 2017 with a hundred thousand of our closest friends, but we actually do have one of my friends here, who I can't believe we haven't had on theCUBE since ServiceNow Knowledge 2013. >> That's right. That's right. >> Just down the road at the Cosmopolitan. Brian Lillie, he is now the Chief Customer Officer and EVP of Technology Services at Equinix. >> Brian, it's always great to see you. >> Jeff, it's always a good thing to be on theCUBE. And I love NAB. Love it! >> What do you think? You've been coming here for a while. What's kind of your takeaway, what's the vibe? >> Well, the vibe feels as innovative and as exciting as ever. And I really think people are starting to hit a tipping point where they're seeing what's possible. What's possible with the cloud, possible with increased collaboration. When I first started coming here a few years ago, I saw very few of these kinds of projects. Now we're seeing tons of innovative approaches to using the cloud, using our facilities, using some of our network providers that are really innovating around this vertical. >> Yeah, it's pretty interesting, Brian, because this is our first time for theCUBE being here. And what's surprising me is how many of the macro trends that we see time and time again at all the other shows, about increasing capacity, flexibility, democratization of data, democratization of assets, all these kinds of typical IT themes, are being executed here within the media entertainment industry, both on the creative side as well as the production side. >> That's right. That's very well said. I think this industry, really more than many, is very, very collaborative. You know, in everything from acquisition to pre-production, production, post-production, delivery.
It feels like a community that wants to share, wants to learn, and sees that they don't necessarily own all the best ideas. And we're seeing some young, innovative startups from all over the world, everywhere from Europe to Asia, coming up with ideas that the big houses, the big players, are starting to see as viable. And when you talk about these being maybe IT trends, I think some of them are really secular trends. The fact that consumers want their content anytime, anywhere, on any device. >> Jeff: Right, right. >> Really, if you work from the customers backwards, everybody else has to adjust to that. And we're parents. >> Jeff: Right, right. >> We see what our kids want. And it's really driving, I think, the whole industry. >> And good stuff for you. You guys at Equinix made a big bet on cloud a long time ago. And the fact of the matter is, we're surrounded by all this crazy hardware, both on the production side and the data center side. No one is buying this. You don't just take this stuff home anymore and plug it in. It's just too big and too expensive. As you said, I think what's interesting about the media business is everybody comes together around a project. When the project's over, they go away. How many people has Quentin Tarantino employed directly? Probably not that many. But the guy kicks out a lot of big-budget movies. >> That's right. I think when you think about the creation of a production, like a QT movie, wherever that set is, it's ephemeral. You go, you set up, and it's got big data needs, it's high bandwidth, low latency, you've got to get the data. In some cases centrally, but in some cases you're processing at the edge. But it's very cloud-like. We're seeing a lot of this unfold.
We're seeing these players not only in the centers where it makes sense to consolidate, but we're actually seeing some of this kit show up in our data centers in a distributed mode, where they say, some information, some equipment, we want to keep behind our firewalls on our premises, which could be an Equinix cage or their own. But then, I want to absolutely connect to multiple clouds. I want to use the tools in Azure, the tools in Amazon, the tools in Google and others to further enhance our abilities. And so it's truly this hybrid, best of breed, I've got a lot of tools in my toolkit, some cloud, some on premises. And there has never been a better time to be in this industry. >> Right. >> You see a lot of industries, you've got a lot of customers, so how do you see them compare? Financial services, entertainment, et cetera, are they all progressing pretty much down the same path at the same rate, or do you see some significant laggards or significant people ahead of the curve? >> Well, I would say that financial services is way ahead, to be frank. Financial services has been doing this for a long time. When we built Equinix, it really started with the networks at the core. And the first vertical to take advantage of that was financial services, where they said, hey, I want low-latency routes between New York and London, low-latency routes between Chicago and New York. And so they've been doing that and then building communities of interest where they could reach all the folks in their digital supply chain. On the financial services side, guys like Bloomberg and Reuters said, I can reach all my customers in one place, and I can direct connect to them. So they built early. The content guys did see it right after that, guys like Yahoo, and, if you remember, Myspace. >> Jeff: Right, right. >> So it's wonderful to see Facebook video here. I mean, here's now Facebook, real-time video, live at NAB, and with a big presence.
So I think content and digital media has been a little bit slower to move. But it's one of these ramps. >> Jeff: Right, right. >> And over the last two years, I think they have been the fastest-accelerating vertical, using the cloud and interconnection to build their brand, to build their business. >> Right. It's interesting, because some of our other guests were talking about how the theme last year here was a lot of VR. >> Brian: Yes. >> It was all about the VR theme. But now we're hearing about machine learning, and metadata, and a lot more kind of traditional themes; it's not necessarily just about the VR and the 360. >> Brian: Yup, yup. >> To add more value to these assets, to be able to distribute them better, to have the metadata, to create an experience for that individual person, >> Yup. >> even within the context of a bigger asset, to have these smaller ones. It's a pretty interesting trend. >> Yeah, it's spot on. I think VR, virtual reality and augmented reality, >> Jeff: Yeah, I think so. >> is the future. I mean, it's the future. I think what maybe people are realizing is that it's really early days. But data we have, and this whole notion of data science and analytics that you can put around the customer experience in real time, in situ. >> Right. >> They're like, we can do that now. >> Whereas virtual reality needs the massive bandwidth, the storage, the compute. Because it's no longer that you're watching the movie in the third person; you are the movie. You are the experience, you're in it. And that's just going to require massive compute, that in my opinion, only the cloud can do. [Jeff] Right, right. >> So I think it's a little bit further off, but I think VR and AR is the wave, it's the future. >> And certainly the AR, I think, is really cool, because there's so much potential there. So from a data center perspective, you guys are sitting right at the heart of this thing.
And you're taking advantage of these tremendous Moore's law impacts on not only compute and storage but networking. It's got to be phenomenal to see the increased demand. I always think of the old Microsoft-Intel back and forth, you know, back in the day, >> Brian: Right, right. >> You get a better microprocessor, well, Microsoft's OS eats up another 80% of that, one back and forth. But now we're really hitting huge, huge efficiencies in these core components that are enabling ridiculous scale that you could never even imagine before. >> I think the Intel-Microsoft example or analogy is a really, really interesting one, because in fact, when you look at companies like Mesosphere and Google's Kubernetes and these others that are calling themselves the data center operating system, which is operating containers, with the move to microservices, all this technology that's coming, that's making compute more ubiquitous, where you can run workloads anywhere. The fact that we sit, we feel privileged cuz we sit in the middle of not only all the networks, but of the clouds, the multi-clouds. >> Right, right. >> And whether you're a producer, or you're in production, you're in delivery, you're an over-the-top guy, where you want to be is where you can connect very directly with little latency and high security and high reliability, to the clouds you need, to the networks you need, to the partners you need. I think that's just a powerful thing. Now the operating system is how do we make that easy, how do we create the easy button. >> Right, right. >> For these folks to access these resources. And what's the value we provide as that neutral, in-the-middle provider that brings people together. You know, I was at an event last night, and DPP, Mark from DPP was there. We were talking about the question of who owns this new business model. He said he saw a panel on Sunday, because it's transforming in front of us. >> Jeff: Right, right. >> And it's an excellent question.
I don't know who owns it, but I know we see it. And we're seeing people talk about it. I think the community owns it. They own what this new business model looks like, and we're just listening to our customers and letting them lead us. >> Jeff: Right. >> To the place we need to go. >> Interesting. So we're running a little low on time. Just want to get kind of... what are your priorities for 2017? >> Well, priorities in this area are really to make cloud ubiquitous globally. It's to push that out to the edge, make that available in as many markets, to as many customers as we can, with our big partners, with Google and Amazon and Microsoft and Oracle and all the rest. That's a big priority. Second is this notion of the easy button. How can we add value, how can we take friction out of the system to make collaboration and communication within this industry that much easier, that much faster? Those are our two big ones in particular here. And I'm delighted to see this vertical just taking off with the cloud. >> Yeah. Pretty exciting times. >> Brian: It's a great time. >> Alright, I've got to embarrass you before I let you go, Brian. Never have I met an executive that takes such pride in losing good employees to better jobs. I just want to compliment you on that. (Brian laughs) I know you take pride in CIOs all over the industry that were once your charges. So I want to give you a shout-out for that. >> Okay. Alright, he's Brian Lillie, keep working for him. Don't take the other CIO jobs just yet, but if you do, he'll be happy to mentor you. >> Brian: I will help you get there. >> Alright, thanks for stopping by. He's Brian Lillie, I'm Jeff Frick. You're watching theCUBE from NAB 2017. We'll be right back after this short break. >> Brian: Thanks Jeff. >> Good to see you, buddy. (techno music)
SUMMARY :
Brought to you by HGST. We're at NAB 2017 with a hundred thousand of our closest friends. That's right. Brian Lillie, he is now the Chief Customer Officer Jeff, it's always a good thing to be on theCUBE. What do you think, you've been coming here for awhile. And I really think that, on the creative side and as well as the production side. And that we're seeing some young innovative startups everybody else has to adjust to that. And it's really driving I think the whole industry. And the fact of the matter is, I think when you think about the creation of a production, And I can direct connect to them. And with a big presence. and interconnection to build their brand, about the theme I guess last year, here was a lot of VR. It's all about the VR theme. have these small ones, they're pretty interesting trend. I think VR, virtual reality I mean it's the future. that in my opinion, only the cloud can do. But I think VR and AR is And certainly in the AR, I think is really cool ridiculous scale that you could never even imagine before. but of the clouds, the multi-clouds. to the clouds you need, to the networks you need, in the middle provider I think the community owns it. Just want to get kind of what are your priorities for 2017. And I'm delighted to see Alright, I got to embarrass you before I let you go Brian. Don't take the other CIO jobs just yet, but if you do, We'll be right back after this short break. Good to see you buddy.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Brian Lillie | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Brian | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
Europe | LOCATION | 0.99+ |
Reuters | ORGANIZATION | 0.99+ |
Quentin Tarantino | PERSON | 0.99+ |
Sunday | DATE | 0.99+ |
Mark | PERSON | 0.99+ |
Asia | LOCATION | 0.99+ |
Chicago | LOCATION | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
London | LOCATION | 0.99+ |
Bloomberg | ORGANIZATION | 0.99+ |
New York | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
first time | QUANTITY | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
last night | DATE | 0.98+ |
NAB Show 2017 | EVENT | 0.98+ |
one place | QUANTITY | 0.98+ |
both | QUANTITY | 0.97+ |
2013 | DATE | 0.97+ |
NAB 2017 | EVENT | 0.97+ |
80% | QUANTITY | 0.96+ |
DPP | ORGANIZATION | 0.95+ |
one | QUANTITY | 0.94+ |
Myspace | ORGANIZATION | 0.92+ |
two big ones | QUANTITY | 0.89+ |
Mesophere | ORGANIZATION | 0.89+ |
hundred thousand | QUANTITY | 0.88+ |
Cosmopolitan | ORGANIZATION | 0.87+ |
last two years | DATE | 0.85+ |
few years ago | DATE | 0.85+ |
NAB | EVENT | 0.84+ |
Moore | PERSON | 0.82+ |
third | QUANTITY | 0.8+ |
theCUBE | ORGANIZATION | 0.65+ |
Asure | ORGANIZATION | 0.62+ |
EVP | PERSON | 0.55+ |
Kubernetes | TITLE | 0.46+ |
360 | QUANTITY | 0.44+ |
HGST | DATE | 0.4+ |
ServiceNow | ORGANIZATION | 0.39+ |
Robin Matlock - VMware Influencer Event - theCUBE
>> It's theCUBE. Here is your host, John Furrier. >> Hi, I'm John Furrier with theCUBE. We are on the ground in San Francisco for VMware's big launch in February. This is the one cloud, any application, any device launch in San Francisco. All the influencers, the whole press corps here in the enterprise. We had Quentin Hardy on stage. I'm here with Robin Matlock, who's the CMO of VMware. Great to see you. >> Hi John, great to be here. >> Pat Gelsinger, Quentin Hardy, Geoffrey Moore of Crossing the Chasm, awesome years. All the influencers, all the press were here. Tell us about the event. >> Well, this is all about really coming together and taking a moment to sit back and see all of the great products and technologies that we're bringing to market. So we launched one cloud, any application, any device, and essentially this is discussing a whole framework or architecture of how we're delivering and harnessing the power of all of your cloud and computing environments into one common architecture for one cloud, how that supports both traditional apps and modern apps, and is delivered on any device for your consumers. >> We'll be doing a bunch of CrowdChats throughout the month, and you guys have a lot of activities planned, but this is really the kickoff. You had a sales meeting last week, you've got Partner Exchange going on, earnings were good, business is good, but there's a lot of competition right now. So I want you to comment on the big announcement, because Pat Gelsinger said this is the biggest vSphere launch in history, but it's not just vSphere, it's a whole platform. It's complicated. How do you put it all together for the crowd out there? >> You know, it's a very dynamic marketplace right now, and businesses are changing and transforming. Every industry we know is undergoing really significant transformations. We think there's no better time to be in the industry. We feel extremely well-positioned. We have laid out a foundation for the data center of the future. It's based on the software-defined architecture. We have tremendous opportunity now to help our customers harness all of their cloud silos into one cloud. We think really we've got a great offering, and it's just a great time to launch. >> So you've got VMware... I'm sure at some point you've got VMworld coming up. What's the planning like? Give us a little teaser on what's happening tomorrow, this year. >> John, I'm just trying to get through Partner Exchange right now, starting tomorrow. We're on day 7 of 10 days of intense launch and events and activities with our shareholders, with our sales force, with our partners. So right now I'm living for the moment. We have a great offering, some of the biggest news to come out of the company in a long time. >> I'm sure it's on your plate, but it's around the corner. Talk about the partners. Change is one of the things that's really important in ecosystems, and VMware's ecosystem is really, really impressive, but it's changing, it's growing, you guys are growing. What's new? Give us the update on the ecosystem. >> You bet. I mean, our philosophy first of all is that we can't do this alone, that we have to team and partner wisely, and we are surrounded by the richest ecosystem in the industry, bar none. Now I do believe that it is transforming. As consumption models are changing, as technologies are changing, as cloud is stepping in, it does require new types of services and new types of partners. So we're talking more service providers, more ISVs, more SaaS providers, but all of us coming together as one large ecosystem and ensuring that our customers have a unified experience. >> What are you seeing as the trends for the partners? Is it more channel, more software, more VARs? What's the mix? Recent service providers, that's new, is that right? >> Yeah, I mean, we've been in all of these various partner types for a long time. I do think, though, that the mobile cloud era is putting more emphasis on services, on cloud services, on consulting services, helping companies transform their operations. That requires process transformation, people transformation. So I do think system integrators and ISVs... there are definitely new partner types that I think are getting a day in the sun. >> So I've got to ask you: I'm really impressed with the VMware culture. I'm a big fan of VMware, living in Palo Alto, being a local boy. You've done an amazing job, you have an amazing campus. What's the culture like now at VMware? You guys are at the heart of Silicon Valley, and there are a lot of things going on in Silicon Valley right now, some that are really great and some that are not going so great. What's going on with VMware, what's going on with your culture? Can you give us an update? >> You know, I've been there almost six years, and I think the VMware culture is stronger than it's ever been. Our culture is anchored around our values, and it's really clear. They're EPIC: execution, passion, integrity, customers and community. You will talk to any VMware employee and they feel that in their heart. That's what we are, first and foremost. It's more about how we do what we do. Technology is great, but at the end of the day it's all about our values. >> And it kind of shows. Your campus is just so beautiful, it just shows. Okay, so the next question, the final question, is how do you market this complexity to customers? Obviously it's changing for you guys product-wise, we heard the whole announcements, and it's changing for your customers. How do you stay on top of the marketing, and what is your strategy to market to the customers? Because you now have more stakeholders than the IT guys you used to have out there in the software-defined data center. What's new, and what are some of the marketing opportunities you have? >> Well, it's a great question. At the end of the day, our customers want business outcomes. They want real value that solves critical business problems, and I think although our portfolio really is complex and diversifying, what we ultimately deliver for our customers is getting quite simple. We help them deliver one cloud for any application on any device. We help them solve their business mobility problems. >> You have a new term, liquid. Liquid cloud, liquidity. Did that come from Pat? >> No, no, no, we're all behind that. Liquid is really just describing the context of the environment we're in. That's the world around us. The rigid business structures of the past are giving way to much more fluid, dynamic business models. It's a liquid world, it's real time, and I think that speaks to it. >> And of course storage, a lot of announcements. So to summarize, final question: what's the bottom line for this event? What's the main takeaway? >> The main takeaway is that VMware continues to innovate. I mean, we are really fearless innovators, and we are delivering tremendous innovation that is helping deliver a brave new model of IT that is instant, fluid and secure. >> A brave new world. We are here on the ground with Robin Matlock, CMO at VMware. This is On the Ground, I'm John Furrier with theCUBE. Thanks for watching.
**Summary and sentiment analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Robyn Matlock | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Robin Matlock | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
san francisco | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
February | DATE | 0.99+ |
John furrier | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
vmware | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
10 days | QUANTITY | 0.99+ |
John furrier | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.98+ |
Quintin Hardy | PERSON | 0.98+ |
tomorrow this year | DATE | 0.98+ |
Silicon Val | LOCATION | 0.97+ |
vSphere | TITLE | 0.96+ |
one cloud | QUANTITY | 0.96+ |
both | QUANTITY | 0.95+ |
almost six years | QUANTITY | 0.95+ |
robin Matlock | PERSON | 0.94+ |
a day | QUANTITY | 0.92+ |
first | QUANTITY | 0.92+ |
one large ecosystem | QUANTITY | 0.91+ |
one | QUANTITY | 0.86+ |
one common architecture | QUANTITY | 0.86+ |
jeffrey | PERSON | 0.86+ |
gelsinger | PERSON | 0.78+ |
lot | QUANTITY | 0.75+ |
lot of things | QUANTITY | 0.75+ |
quentin hardy | PERSON | 0.68+ |
7 | QUANTITY | 0.65+ |
a lot of competition | QUANTITY | 0.64+ |
vmworld | ORGANIZATION | 0.64+ |
day | QUANTITY | 0.63+ |
VMware Influencer | EVENT | 0.63+ |
john | PERSON | 0.62+ |
el | ORGANIZATION | 0.56+ |
announcement | QUANTITY | 0.52+ |
Sun | DATE | 0.49+ |