Dan Walsh, Red Hat | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners.

>> Welcome back, this is SiliconANGLE Media's live coverage, wall to wall, of KubeCon and CloudNativeCon here in Austin, Texas. Got the house banner rocking all day. I'm Stu Miniman, happy to be joined on the program by Dan Walsh, who's a consulting engineer with Red Hat. Rocking the red hat. Dan, thanks so much for joining us.

>> Pleasure to be here.

>> Alright, so Red Hat has a strong presence at the show. We had Clayton on yesterday, a top contributor, who actually won an award for all the contribution he's done here. We're going through a lot of angles. Why don't you start by telling us your role, what you've been doing at Red Hat.

>> So at Red Hat I'm a consulting engineer, which basically means I lead a team of about 20 engineers, and we work on the base operating system. Basically anything to do with containers from the operating system on down, so kernel engineers, but everything underneath Kubernetes. Traditionally, for the last four and a half years, I've been working on the Docker project as well as other container-type efforts. We've added things like file system support to Docker, lots of kernel changes, and we've been working forever on user namespaces, things like that. More recently, though, we started on something else. OpenShift and Kubernetes were built on top of Docker originally, and they found over time that the Docker base was changing in ways that were continuously breaking Kubernetes. So about a year and a half ago we started to work on a project called CRI-O. A little history: if you go back, Kubernetes was originally built on top of Docker. But CoreOS came to Kubernetes and wanted to get rkt support into Kubernetes. And rather than add rkt support, Kubernetes decided to define an interface, the CRI, the container runtime interface, which is an API that Kubernetes calls out to to run containers. So rkt could implement the container runtime interface, and they actually built a shim for the Docker API. But we decided at that time to basically build our own, and we called it CRI-O: the container runtime interface for OCI images. The plan was to build a very minimalist daemon that could support Kubernetes, and Kubernetes alone. We don't support any other orchestrators or anything else. It's totally based on the Kubernetes CRI, so our versioning matches up with Kubernetes: Kubernetes 1.8, you get CRI-O 1.8; Kubernetes 1.9, you get CRI-O 1.9.
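To make the CRI idea Walsh describes concrete, here is a minimal Go sketch of the kind of interface the kubelet calls out to. The method names are abbreviated from the upstream CRI definition; the request and response types are simplified placeholders rather than the actual generated gRPC API, so treat it as an illustration of the shape of the interface, not the real thing.

```go
package cri

import "context"

// A sketch of the Container Runtime Interface (CRI) idea: the kubelet
// calls an API like this, and any runtime that implements it -- a shim
// over the Docker API, rkt, or CRI-O -- can run pods for Kubernetes.
// Method names are abbreviated from the upstream CRI; the types are
// simplified placeholders, not the generated gRPC API.

// PodSandboxConfig describes the pod-level sandbox to create.
type PodSandboxConfig struct {
	Name      string
	Namespace string
	// cgroup, security, and networking settings omitted
}

// ContainerConfig describes one container to run inside a sandbox.
type ContainerConfig struct {
	Image   string
	Command []string
	// mounts, environment, and resource limits omitted
}

// RuntimeService covers the pod and container lifecycle.
type RuntimeService interface {
	RunPodSandbox(ctx context.Context, cfg *PodSandboxConfig) (podID string, err error)
	CreateContainer(ctx context.Context, podID string, cfg *ContainerConfig) (containerID string, err error)
	StartContainer(ctx context.Context, containerID string) error
	StopContainer(ctx context.Context, containerID string, timeoutSeconds int64) error
	RemovePodSandbox(ctx context.Context, podID string) error
}

// ImageService covers pulling and managing images from registries.
type ImageService interface {
	PullImage(ctx context.Context, image string) (imageRef string, err error)
	RemoveImage(ctx context.Context, imageRef string) error
}
```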
>> So Dan, we've been talking about this. Red Hat made a pretty strong bet on Kubernetes relatively early on. Red Hat is very open, everything you do is 100% open source. Why, for CRI-O, why only Kubernetes? There are other orchestrators out there that are open source.

>> Well, let's take a step back. One of our goals in my group was to break apart what it means to run a container. If you think about when I run a container, what do I need? I need a standard container image format, and the OCI image bundle format defines that. The next thing I need is the ability to pull an image from a container registry to the host. So we built a library called containers/image that actually implements all of the capabilities of moving container images around, basically at a command-line or a library level. We built a tool on top of containers/image called Skopeo, which gives you that at the command line: I can move an image from one container registry to another, I can move images from registries to different kinds of storage, I can move an image directly from a container registry into a Docker daemon. The next step you need when you want to run a container is storage. You need to take that container image and put it on disk, and in the case of containers you do that on top of what's called a copy-on-write file system; you need to be able to have a layering file system. So we created another project called containers/storage that allows you to basically store those images on disk. The last step for running a container is actually to launch an OCI runtime, and the OCI runtime specification and runc take care of that. So we have the four building blocks for what it means to run a container as separate components. We're building other tools around those, but we built one that was focused on Kubernetes. And again, the reason Red Hat bet on Kubernetes is we felt it had the best long-term potential, and judging by this show I think we made a sane bet. But we will work with others. These are all fully open source projects, and we actually have contributors coming in to these low-level tools. For instance, Pivotal is a major contributor to containers/image, and they're using it for pulling images into their base. Other projects are using these pieces too, so it's not just Kubernetes. It's just that CRI-O is a daemon for Kubernetes.
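A rough sketch of the pipeline those building blocks add up to: pull the image, unpack its layers onto copy-on-write storage, then hand the resulting root filesystem to an OCI runtime. The helper functions and the example image reference below are hypothetical stand-ins, not the real containers/image, containers/storage, or runc APIs.

```go
package main

import (
	"context"
	"fmt"
	"log"
)

// An illustrative pipeline for "what it means to run a container":
// pull the image, unpack its layers onto copy-on-write storage, then
// hand the resulting root filesystem to an OCI runtime. The helpers
// below are hypothetical stand-ins for the real containers/image,
// containers/storage, and runc projects; their signatures are invented.

func pullImage(ctx context.Context, ref string) (imageID string, err error) {
	// Fetch the manifest and layers from a registry,
	// e.g. "docker://registry.example.com/app:latest" (hypothetical image).
	return "sha256:deadbeef", nil
}

func unpackLayers(imageID string) (rootfs string, err error) {
	// Stack the image layers on a copy-on-write filesystem (overlayfs, etc.)
	// and return the merged root filesystem for the container.
	return "/var/lib/containers/storage/overlay/deadbeef/merged", nil
}

func runOCI(rootfs string, args []string) error {
	// Generate an OCI runtime spec pointing at rootfs, then invoke the
	// runtime, which configures namespaces and cgroups in the kernel and
	// execs the container process.
	fmt.Printf("would launch %v in %s\n", args, rootfs)
	return nil
}

func main() {
	ctx := context.Background()

	imageID, err := pullImage(ctx, "docker://registry.example.com/app:latest")
	if err != nil {
		log.Fatal(err)
	}
	rootfs, err := unpackLayers(imageID)
	if err != nil {
		log.Fatal(err)
	}
	if err := runOCI(rootfs, []string{"/usr/bin/app"}); err != nil {
		log.Fatal(err)
	}
}
```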
>> Yeah Dan, it's really interesting. Listening to Clayton's keynote this morning, he talked about one of the goals you have at Red Hat being to make that underlying infrastructure boring, so that everything above it can rely on it and it just works. There's a lot of work that goes on under there. It's like the plumbers and the mechanics down underneath making sure it all works.

>> A lot of times when I give talks, the number one thing I'm trying to teach people is that containers are not anything really significantly different. Containers are just processes on a Linux system. Let me take a step back: I define a container as something that has cgroups associated with it for resource constraints, it has some security constraints associated with it, and it has these things called namespaces, which are a virtualization layer that gives you a different view of the processes. Now boot up a regular RHEL system right now and look at every process on it: they all have cgroups associated with them, they all have security constraints associated with them, and they all have namespaces associated with them. If you went to PID 1, if you looked at /proc/1/ns, you would see the namespaces associated with PID 1. So that means that every process on Linux is in a container, by the definition of a container being those three things. All that happens on the system is that you toggle those: you can tighten them, or change some of the namespaces and stuff like that, and that gives you the feel of the virtualization. But the bottom line is they're all containers. So all the tools like Docker, rkt, CRI-O, runc, any one of those tools, are basically just going into the kernel, configuring the kernel, and then launching the container's PID 1. And from that point on it's just the kernel that's running them.

We at Red Hat have a t-shirt that we often wear that says Linux is containers and containers are Linux, and that actually proves the point. The bottom line is the operating system is key, and my team, the developers I work with, and the open source community are all about how we can make containers better. How can we further constrain these processes? How can we create new namespaces? How can we create new cgroups, new stuff like that? It's all low-level stuff.
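Walsh's point that every Linux process already carries namespaces is easy to verify. Below is a minimal sketch (Linux-only) that lists the namespace links for the current process; looking at /proc/1/ns instead shows the same thing for PID 1.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// Every Linux process has namespace links under /proc/<pid>/ns -- the same
// files Walsh points to at /proc/1/ns for PID 1. This lists them for the
// current process. Linux-only: on other systems /proc/self/ns won't exist.
func main() {
	dir := "/proc/self/ns"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range entries {
		target, err := os.Readlink(filepath.Join(dir, entry.Name()))
		if err != nil {
			continue
		}
		// Prints lines like: pid -> pid:[4026531836]
		fmt.Printf("%s -> %s\n", entry.Name(), target)
	}
}
```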
>> Dan, give us some flavor of the customer conversations you're having at the show here. Where are they? We know it's a spectrum of where they are, but what are some of the commonalities that you're hearing?

>> At Red Hat our customers run the gamut. We have customers who can barely get off RHEL 5, which came out 12 years ago, to sort of the leading-edge customers. And the funny thing is, a lot of these are in the same companies. Most of our customers at this point are just beginning to move into the container world. They might have a few containers running, or they had their developers insisting, hey, this container stuff is cool, I want to start playing with it. But getting them from that step to the step of, say, Kubernetes, or to the step of OpenShift, is sort of a big leap. My fear with a lot of this is that a lot of people are concentrating too much on the containers. The bottom line is what people need to do is develop applications, and secure applications. My history is very based in heavy security. So really we face a lot of customers who sort of have home-grown environments, and their engineers come in and say, oh, I want to do a Docker build, or I want to talk to the Docker socket. And I always look at that and question it: you're supposed to be building apps, you're building banking apps, or military apps, or medical apps. They should be concentrating on that and not so much on the containers. And that's actually the beauty of OpenShift. You can set up OpenShift workloads in such a way that their interaction to build a container is just a git check-in. You don't have to go out and understand what it means to build a container; you don't have to get the knowledge of what it means to be able to build a container and things like that.

>> Dan, you bring up a really good point. At this show, with most of the customers I'm talking to, it's really about the speed for them to be able to deliver on the applications. Yes, there are the people building all the tooling and the projects here, and there are many customers involved with it. But we've gone further up the stack, where it's closer to the application and less to that underlying infrastructure.

>> And the other thing customers are looking for, in my case, as I said, I have a strong background in security, I did SELinux for like 13 years. Most of my time talking to customers is about security: how can we actually confine containers, how do we keep them under control, especially when they go to multi-tenancy. And some good things, I don't know if you're going to talk to Kata? Have you heard about the Kata project?

>> So we've talked to a couple people, Kata coming out of the open--

>> Clear Containers and--

>> Yeah, Clear Containers from Intel.

>> Yeah, and I think getting to those levels of using hardware isolation really helps out in--

>> It's interesting, because when first looking at it, it's like, wait, is it kind of a lightweight VM, is it a container? Where does that fit in?

>> They're really just containers, because a lightweight VM would actually be booting up an init system and running logging and all these other things. With a Kata container or, I'm more familiar with Clear Containers, a Clear Container is literally just running a very small init system, and then it launches runc to actually start up the container. So it has almost no operating system inside of the lightweight VM, as opposed to running regular virtual machines.

>> Dan, would love your take on, you know, you talked about security. Security of containers, the role of security in the cloud native space. What are you seeing, and what do we need to work on even more as an industry?

>> It's funny, because my world view is at a much lower level than other security people that we talk to. There are other security people who will be looking at network isolation and role-based access control inside of Kubernetes. I look at it as basically multi-tenancy: running multiple containers with different workloads, and what happens if one container gets hacked, how does that affect the other containers that are running, and how do I protect the services? So over the years when we've been working with Docker, I got SELinux support in, we've gotten seccomp support in. We're trying to take advantage of everything in the Linux kernel to further tighten the security. But the bottom line is that a process inside of the container is talking to the real kernel on the host. Any vulnerability in the host kernel could lead to an escalation and a breakout. So that's why, no matter what you say, a hypervisor-isolated container, a separate container running inside of a VM, is always going to be more secure. On the other hand, with containers, in a lot of cases you want to have some interaction; if you go all the way to VMs, you lose a lot of that. So you really have to cover the gamut. A lot of times I'll tell people to look at containers as not being a zero-sum game. You don't have to throw away all your VMs to move to containers. I tell people the most secure way to run an application is on separate physical hardware. The second most secure is in a VM. The third most secure is inside a container, and you can go on down the line. But there's nothing to say that you can't run your containers inside of separate VMs, inside of separate physical machines. So you can set up your environment in such a way: say you have your web front end sitting inside of VMs in a (mumbles) zone on separate physical hardware, and you set up your databases or your credit card data on separate physical machines, separate VMs, and separate containers inside of them. So you can build up these really high levels of security based on containers, virtualization, and physical hardware. I can go on forever on this stuff.

>> Dan Walsh, really appreciate you sharing some of the ways that Red Hat is trying to help some of those underlying pieces become boring, so the customers won't have to worry about them.

>> That's really what it's about. If you know what's going on at the host level, then I haven't done my job. Our goal is to basically take that host level and make it disappear, so you can work with your higher-level orchestration layer.
>> Well Dan, it's great to catch up with you, thanks so much for joining us. We'll be back with lots more coverage here from KubeCon 2017 in Austin, Texas. I'm Stu Miniman and you're watching theCUBE. (electronic music)

Published Date : Dec 7 2017

SUMMARY :

Dan Walsh, consulting engineer at Red Hat, joins Stu Miniman at KubeCon and CloudNativeCon 2017 in Austin to talk about the container work happening underneath Kubernetes. He explains the origin of CRI-O, a minimalist daemon built against the Kubernetes container runtime interface, and the building blocks his team has split out: containers/image and Skopeo for pulling and moving images, containers/storage for copy-on-write layers, and runc for launching OCI runtimes. Walsh argues that containers are simply Linux processes defined by cgroups, security constraints, and namespaces, discusses where customers are on their container journeys, and closes on container security, from SELinux and seccomp to hardware-isolated approaches like Kata and Clear Containers and layering containers, VMs, and physical hardware.

SENTIMENT ANALYSIS :

ENTITIES

Entity                         Category        Confidence
Dan Walsh                      PERSON          0.99+
Dan                            PERSON          0.99+
Stu Miniman                    PERSON          0.99+
Red Hat                        ORGANIZATION    0.99+
Clayton                        PERSON          0.99+
100%                           QUANTITY        0.99+
Linux Foundation               ORGANIZATION    0.99+
13 years                       QUANTITY        0.99+
Austin, Texas                  LOCATION        0.99+
KubeCon                        EVENT           0.99+
Two                            QUANTITY        0.99+
SiliconANGLE Media             ORGANIZATION    0.99+
KubeCon 2017                   EVENT           0.99+
three things                   QUANTITY        0.99+
Docker                         TITLE           0.99+
CloudNativeCon                 EVENT           0.98+
Kubernetes                     TITLE           0.98+
Linux                          TITLE           0.98+
Austin Texas                   LOCATION        0.98+
yesterday                      DATE            0.98+
OpenShift                      TITLE           0.98+
theCUBE                        ORGANIZATION    0.98+
one container                  QUANTITY        0.97+
about a year and a half ago    DATE            0.97+
about 20 engineers             QUANTITY        0.97+
one                            QUANTITY        0.97+
first                          QUANTITY        0.97+
Kata                           TITLE           0.96+
third                          QUANTITY        0.96+
four building components       QUANTITY        0.96+
12 years ago                   DATE            0.96+
Red Hat                        TITLE           0.94+
Crio one                       TITLE           0.94+
CloudNativeCon 2017            EVENT           0.93+
second                         QUANTITY        0.93+
Kubernetes one                 TITLE           0.91+
nine                           TITLE           0.9+
Crio                           TITLE           0.9+
Scopio                         TITLE           0.88+
Docker                         ORGANIZATION    0.86+
SE Linux                       TITLE           0.81+
eight                          TITLE           0.81+