
Search Results for SELinux:

Keynote: Enabling Business and Developer Success | Open Cloud Innovations


 

(upbeat music) >> Hello, and welcome to this startup showcase. It's great to be here and talk about some of the innovations we are doing at AWS and how we work with our partner community, especially our open source partners. My name is Deepak Singh. I run our compute services organization, which is a very vague way of saying that I run a number of things that are connected together through compute. Very specifically, I run our container services organization, so for those of you who are into containers: ECS, EKS, Fargate, ECR, App Runner, those are all teams within my org. I also run the Amazon Linux and Bottlerocket teams, so anything AWS does with Linux, both externally and internally, as well as our high-performance computing team. And, perhaps very relevant to this discussion, I run the Amazon open source program office. I've been at AWS for over 13 years, almost 14, involved with compute in various ways, including EC2. That has given me a vantage point for seeing how our customers use the services that we build for them, how they leverage various partner solutions, and, along the way, how AWS itself has gotten involved with open source. I'll try to talk to you about some of those factors and how they impact how you consume our services. So why don't we get started?

For many of you, there are two ways to look at AWS and open source, and Amazon in general. One is the number of contributors we may have; the other is the number of repositories we contribute to. Those are just a couple of measures, and there are people I work with on a regular basis who will remind you that those are not perfect measures; sometimes you could contribute to just one thing and have outsized impact because of the nature of that thing. But be that as it may, increasingly we look at different ways in which we can help contribute to and enhance open source, because we consume a lot of it as well. I'll talk about it very specifically from the space I work in, the container space in particular, where we've worked a lot with people in the Kubernetes community and the broader CNCF community, as well as with small projects that our customers might have gotten started with. One example I like talking about is Argo CD from Intuit. We were very actively involved with helping them figure out what to do with it, and it was great to see how Intuit and others came together to think about GitOps at the Kubernetes level. And while those are their projects, we've always been involved with them. So we try to figure out what's important to our customers, how we can help, and then engage based on that.

Let's talk about that a little bit more. Here are some examples of the kinds of open source projects that Amazon and AWS contribute to. They range from OpenJDK, where we now have our own distribution of Java, the Corretto open source project, to projects like Rust, where we are very active in the Rust Foundation, including in a leadership role, to the Robot Operating System, just to pick a few. We collaborate with Facebook and are actively involved with the PyTorch project, and there are many others; you can see all the logos here. We participate either because the projects are important to us as AWS, in the services that we run, or because they're important to our customers, in the services they consume or the open source projects they care about.
How we make those decisions often depends on the importance of that particular project at that point in time, how much impact it's having for AWS customers, or whether we feel that contributing to that project is critical because it helps us build more robust services. I'll give a somewhat different example: you may have heard us talk about our next generation of Amazon Linux, Amazon Linux 2022, which is based on Fedora as its upstream. One of the reasons we made this decision is that it allows us to participate in the Fedora project and make sure the upstream project is robust and stays robust. What that ends up meaning is that Amazon Linux 2022 will be a robust operating system with the kinds of capabilities our customers are asking for. That's just one example of how we think about it. Similarly, the Python Software Foundation is something we work with very closely because so many of our customers use Python. So we help run PyPI; if you're a Python developer (I happen to be a Ruby one), you know it, and lots of our customers use Python. Helping the Python project be robust by making sure PyPI is available to everybody is something we provide credits for and support in other ways. So it's not just code; contributing can take many forms, but in the end, code and operations are where we hang our hat.

Good examples of this are projects that we create and open source because it makes sense to open source some of the core primitives or foundations that are part of our own services, as well as projects that we contribute to. I'll talk about both, and about things near and dear to my heart; there are many examples, but I've picked the two I like talking about. The first of these is Firecracker. Many of you have heard about it. Firecracker, for those of you who don't know, is a very lightweight virtual machine monitor that allows you to run micro VMs. Why was this important? Many years ago, when we started Lambda and, quite honestly, Fargate (and Fargate still runs quite a bit in that mode), we used to have to run on VMs like everything else, and finding the right VM for the size of task or the size of function that somebody asks for requires provisioning capacity ahead of time. It also wastes a lot of capacity, because a Lambda function is small; even if you find the smallest VM possible, that can be challenging, and a lot of resources end up being wasted. VMs also take a certain amount of time to start, because they have to do a whole bunch of things before the operating system and the virtual machine spin up. So we asked ourselves: can we do better? Can we come up with something that lets us create right-sized, very lightweight, very fast-booting virtual machines, micro virtual machines as we ended up calling them? That's what led to Firecracker, and we open sourced the project. Today Firecracker is used not just by AWS Lambda and Fargate but by a number of other folks; there are companies like Fly.io using it, and we know of people using Firecracker to run Kubernetes on premises on bare metal, as an example. So we've seen a lot of other folks embrace it and use it as the foundation for building their own serverless services and their own container services.
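For readers who want a concrete sense of what driving Firecracker looks like, here is a minimal Python sketch that talks to the REST API Firecracker exposes over a Unix socket, using only the standard library. The socket path, kernel image, and rootfs file names are placeholders, and the payloads should be checked against the Firecracker API docs for the version you run; this illustrates the public API, not how Lambda or Fargate drive it internally.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP client that talks to Firecracker's API server over a Unix socket."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def api_put(socket_path, path, body):
    """Issue one PUT request against the Firecracker API and return the HTTP status."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("PUT", path, body=json.dumps(body),
                     headers={"Content-Type": "application/json"})
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection closes cleanly
        return resp.status
    finally:
        conn.close()

# Assumes a firecracker process was started beforehand, e.g.:
#   firecracker --api-sock /tmp/firecracker.sock
# and that the kernel and rootfs paths below exist (placeholders).
SOCK = "/tmp/firecracker.sock"
api_put(SOCK, "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
api_put(SOCK, "/boot-source", {"kernel_image_path": "vmlinux.bin",
                               "boot_args": "console=ttyS0 reboot=k panic=1"})
api_put(SOCK, "/drives/rootfs", {"drive_id": "rootfs",
                                 "path_on_host": "rootfs.ext4",
                                 "is_root_device": True,
                                 "is_read_only": False})
api_put(SOCK, "/actions", {"action_type": "InstanceStart"})
```

Each PUT configures one piece of the microVM (machine size, boot source, root drive), and the final action boots it; that small, scriptable surface is what makes right-sized, fast-booting microVMs practical to create on demand.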
And we think there's a lot of value and learning that we can bring to the table, because we get the experience of operating at scale, and that other people can bring to the table, because they may have specific requirements that we may not find as important from an AWS perspective. So that's Firecracker.

An example of a project where we contribute because we feel it's fundamentally important to us is containerd. We've been involved with containerd from the beginning. Today we have a whole team that does nothing else but contribute to containerd, because containerd underlies Fargate and underlies our Kubernetes offerings, and it's increasingly being used by customers directly, where they're running containerd instead of a full Docker or similar container engine. What that has allowed us to do is focus on what's important so that we can operate containerd at scale, keep it robust and secure, and add capabilities to it that AWS customers need, often manifested through Fargate and Kubernetes. In the end, it's a win-win for everybody: it makes containerd better, and if you want to use containerd for yourself on AWS, you still benefit from all the work that we're doing. The decision we took was that, since it's so important to us and our customers, we wanted a team that lived and breathed containerd and made sure it stays super robust. And there are many, many examples like that of projects we end up participating in, either by joining a project that exists or by open sourcing our own.

Here are some examples of the open source projects that we have created from an AWS and Amazon perspective, and there are quite a few. When I was looking at this list I was surprised; not quite surprised, because I've seen the reports before, but every time I do, I have to recount and say that's a lot more than one would have thought, even though I've been looking at it for a long time. Examples in my world alone are things like the work we do with Amazon Linux and Bottlerocket, a container host operating system that's been open sourced from day one. Firecracker is something we talked about. We have a project called AWS ParallelCluster, which allows you to spin up high performance computing clusters on AWS using the kind of schedulers you may be used to, like Slurm, and that's an open source project. We have plenty of open source projects in the web development space and in the security space, and more recently things like the Open 3D Engine, which we are very excited about and which we open sourced a few months ago. So there are a number of these projects covering everything from tooling to developer and application frameworks, all the way to databases, analytics, and machine learning. And you'll notice that in a few areas, containers as an example, machine learning as an example, our default is to go with an open source option: where we can open source, where it makes sense for us to do so, and where we feel the broader community might benefit from it, that's our default stance.

The CNCF, the Cloud Native Computing Foundation, is something that we've been involved with quite a bit. We contribute to Kubernetes, we contribute to Envoy, and I talked about containerd a bit. We've also contributed projects like cdk8s, which marries the AWS Cloud Development Kit with Kubernetes and is now a CNCF sandbox project. Those are some of the areas; CNCF is such a wide surface area.
We don't contribute to everything, but we definitely participate actively in CNCF. With projects like etcd that are critical to EKS for us, we are very, very active in how the project evolves, but we also try to see which projects are important to our customers who are running Kubernetes on AWS, maybe by themselves or through some other project. Envoy is a good example. Kubernetes itself is a good example, because in the end we want people running Kubernetes on AWS to be successful even if they are not using our services, and we can help them or work on the projects that are important to them. That's how we think about the world, and it's worked pretty well for us. We've done a bunch of work on the Kubernetes side to make sure we can integrate and solve customer problems: everything from the work we've done with Graviton, our Arm processor, to a virtual GPU plugin that allows you to share NVIDIA GPU resources, to the Elastic Fabric Adapter, the network device for high performance computing that you can use with Kubernetes on AWS, along with things that directly impact Kubernetes customers, like the cdk8s project I talked about, the work we do on the container networking interface, and the AWS Controllers for Kubernetes, an open source project that allows you to use other AWS services directly from Kubernetes clusters. Again, you'll notice I say Kubernetes, not EKS, our managed Kubernetes service, because we want you to be successful with Kubernetes on AWS whether you're using our managed service, running your own, or using some third-party service. Similarly, we've worked with Prometheus, and we now have a managed Prometheus service. And at re:Invent last year, we announced the general availability of Karpenter, a provisioning and auto-scaling engine for Kubernetes, which is also an open source project. Here's the beauty of Karpenter: you don't have to be using EKS to use it. Anyone running Kubernetes on AWS can leverage it. We focus on the AWS provider, but we've built it in such a way that if you wanted to take Karpenter and implement it on premises or on another cloud provider, that would be completely okay; that's how it's designed, and we anticipated people may want to do that.

I talked a little bit about Bottlerocket, our Linux-based open source operating system. What we have done with Bottlerocket is focus on security and the needs of customers who want to run orchestrated containers; it's very focused on that problem. So, for example, Bottlerocket only has the essential software needed to run containers. SELinux is enabled by default (I just noticed the slide says "SE Linux," but I'm sure Linus Torvalds will be pretty happy to see that it's enabled by default), we use things like dm-verity, and it has a read-only root file system with no shell; you can't access one by default, though you can install it if you wanted to. We also allow you to create different build types, variants as we call them; you can create a variant for a non-AWS environment as well, and if you have your own homegrown container orchestrator, you can create a variant for that. It's designed to be used in many different contexts, and all of that is open source. And then we use The Update Framework to publish to a secure repository and provide a transactional way of updating the software; that's something we didn't invent, but we have embraced it wholeheartedly.
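As a small aside on the hardening properties Deepak just listed, here is a hedged Python sketch that checks two of them, SELinux in enforcing mode and a read-only root filesystem, on whatever Linux host it runs on. It relies only on standard kernel interfaces (the selinuxfs enforce flag and statvfs), so it is a generic check, not a Bottlerocket-specific tool.

```python
import os

def selinux_enforcing() -> bool:
    """True if SELinux is loaded and in enforcing mode (per /sys/fs/selinux/enforce)."""
    try:
        with open("/sys/fs/selinux/enforce") as f:
            return f.read().strip() == "1"
    except OSError:
        return False  # selinuxfs not mounted, so SELinux is not enabled on this host

def root_is_read_only() -> bool:
    """True if the root filesystem is mounted read-only."""
    return bool(os.statvfs("/").f_flag & os.ST_RDONLY)

if __name__ == "__main__":
    print("SELinux enforcing:", selinux_enforcing())
    print("Read-only rootfs: ", root_is_read_only())
```

On a stock general-purpose distribution both checks will typically report False; the point of a purpose-built container host is that properties like these hold by default rather than being opt-in.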
Bottlerocket is completely open source, and we have partners like Aqua, who develop security tools for containers. For them, something like Bottlerocket is a natural partnership: people are running a container host operating system, and they can use Aqua's tooling to make sure they have a secure end-to-end environment. And we see many more examples like that.

You may think serverless at AWS is all about AWS proprietary technology, because Lambda is a proprietary service. But if you peek under the covers, that's not necessarily true. Lambda runs on top of Firecracker, and as we've talked about, Firecracker is an open source project, so the foundation of Lambda in many ways is open source. Because Lambda runs at such extreme scale, and one of the things Firecracker is really good at is running at scale, this also lets people build their own Firecracker-based services at scale: you can have confidence that, as long as your workload fits Firecracker's design parameters, the battle hardening and robustness are being proven out day to day by services running at scale, like Lambda and Fargate. For those of you who don't know serverless: in the end, our goal with serverless is to make sure you don't have to think about all the infrastructure your applications run on and can focus on business logic as much as possible. That's how we think about it, and serverless has become its own, quote-unquote, sort of environment. The number of partners and open source frameworks and tools that have spun up around serverless (by which I mostly mean Lambda, API Gateway, and services like that) is pretty high. There are open source projects like Zappa and the Serverless Framework; so many have come up that make it easier for our customers to consume AWS services like Lambda and API Gateway. We've also done some of our own tooling and frameworks: the Serverless Application Model, AWS Chalice if you're a Python developer, and open source runtimes for Lambda for Rust and other languages. There are a number of tools that we have open sourced. So in general, you'll find that the tooling and runtimes we build tend to always be open sourced, and we will often take some of the guts of the things we use to build our systems, like Firecracker, and open source them, while the control planes of AWS services may end up staying proprietary, which is the case with Lambda.
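To make the serverless tooling just mentioned a bit more concrete, here is the kind of minimal application you would write with AWS Chalice, the open source Python serverless framework referenced above. The app name and routes are illustrative placeholders; packaging the functions and wiring up Lambda and API Gateway is done by the `chalice deploy` command rather than by this code.

```python
# app.py -- a minimal AWS Chalice application (deploy with `chalice deploy`)
from chalice import Chalice

app = Chalice(app_name="hello-serverless")

@app.route("/")
def index():
    # Chalice turns this handler into a Lambda function fronted by API Gateway.
    return {"hello": "world"}

@app.lambda_function()
def standalone(event, context):
    # A plain Lambda function with no HTTP route attached.
    return {"received_keys": sorted(event.keys())}
```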
Increasingly, our customers build their applications and leverage the broader AWS Partner Network. The AWS Partner Network (APN) is a network of trusted partnerships we've built: when you go to the APN website and find a partner, you know that partner meets a certain set of criteria that AWS has developed, and you can rely on them for your own business. So whether you're a tiny business that needs some function fulfilled that you don't have the resources for, or a large enterprise that wants to keep leveraging, in the cloud, applications you've been using on premises for a long time, you can go to the APN, find that partner, and bring their solution in as part of your cloud infrastructure. It could even be a systems integrator, for example, to help you solve a specific development problem. Increasingly, one of the things we like to do is work with a partner community that is full of open source providers.

There are so many great ones, and we have a panel discussion with many of these partners as well, who make it easier for you to build applications on AWS, all open source or built on open source, but I'd like to call out a couple of them. The first is Tidelift. Tidelift, for those of you who don't know, is a company that provides SaaS-based tools to curate, track, and manage open source catalogs, and they have a whole network of maintainers and providers they help. If you're an independent open source developer or a small team, you should probably get to know Tidelift; they provide benefits and capabilities as a developer and maintainer that are pretty unique and really help, and I've seen a number of people in our open source community embrace Tidelift, quite honestly, even before they were part of the APN. As part of the partner network, they get to participate in things like ISV Accelerate, and they're officially an Advanced Tier partner because they migrated their SaaS offering onto AWS. In the end, if you're part of the open source supply chain, whether as a maintainer or a developer, I would recommend working with Tidelift, because their goal is making all of you who are developing open source solutions, especially on AWS, more successful. That's why I enjoy this partnership and am looking to do a lot more, because as a company we want to make sure open source developers don't feel unsupported; all you have to do is read various forums to see that it's often challenging to be a maintainer, especially of a small project. So helping with licensing and license management, and with security issue identification and remediation, helping these maintainers, is a big part of what Tidelift does, and it was great to see them join the partner network.

Another partner I'd like to call out is Sysdig. I got introduced to them many years ago when they first launched; they were super interested in some of our serverless stuff, and we've been trying to figure out how we can work together because our customers are interested in the capabilities Sysdig provides. Over the last few years, we found a number of areas where we can collaborate. I know Sysdig primarily as a security company: people use Sysdig to secure their builds, do threat detection and response, continuously validate their posture, get a continuous analytics signal on how they're doing, and monitor performance. At the end of it, it's a SaaS platform, and they have a very nice open source security stack. The one I'm most familiar with, and I think most of you are probably familiar with, is Falco, which has been a super popular CNCF project; it's at something like 37 or 40 million downloads by now, which is pretty cool. They have been a great partner, because we had to make sure their solution works on Fargate, which is not a natural place for their software to run, but there was enough demand and interest from our customers that both companies leaned in to make sure it could be successful. So last year Sysdig earned an AWS security competency (we have a number of specific competencies for our partners), and their integration with Security Hub is great. Partners who lean in the way Sysdig has, toward making our customers successful and working with us, are the best partners that we have.
And there are a number of open source companies out there, built on open source, whose entire portfolio is built on open source software, or who are active participants like we are, that we love working with on a day-to-day basis. So the thing I would like to leave you with as we wind down this presentation is that AWS is constantly looking for partnerships, because our partners enable our customers. Those could be with companies like Redis, with MongoDB, with Confluent, with Databricks. Your default reaction might be, "Hey, these are companies that maybe compete with AWS," but no, I think we are partners as well. From where I sit, lower down the stack, where people run on top of the services that I own, on Linux, containers, and EC2, these partners are just as important customers as any AWS service or any third-party external customer. So it's not a zero-sum game; we look forward to working with all these companies and open source projects. From an AWS perspective, a big part of where my open source program office spends its time is making it easy for our developers to contribute to open source and making it easy for AWS teams to decide when to open source software or participate in open source projects. Over the last few years, we've made significant changes to reduce that friction, and I think you can see it in the results I showed you earlier in this talk. And the last thing, one of the most important things that I say, and I'll keep saying, is that what we do as AWS is carry the pager. There are a lot of open source projects out there, and operationalizing them and running them at scale is not easy, and for whatever reason that may not have anything to do with the software itself. Our core competency is taking that software, becoming really good at operating it, becoming experts at operating it, and then ideally taking that expertise and experience in operating it and contributing it back upstream, because that makes it better for everybody. I think you'll see us do a lot more of that going forward; we've been doing it for the last few years, and in the container space we do it every day. I'm excited about the possibilities. With that, thank you very much, and I hope you enjoy the rest of the showcase.

>> Okay, welcome back. We just had the closing keynote from Deepak Singh, vice president of compute services. Deepak, a great keynote: great wisdom and insight from that session, very notable highlights and cutting-edge trends and product information. Thanks for sharing.

>> Anytime, it's always good to be here. It's too bad we're still doing this virtually, but it's always good to talk to you, John.

>> Hopefully we'll get through this pretty quickly; I want to jump right in, because we don't have a lot of time, and get to some quick questions. You've brought up good things: open source innovation, going to the next level. You've seen the rise of superclouds and super apps developing in open source, and you're seeing big companies contributing; you mentioned Argo and Intuit. You're seeing that dynamic where companies are forming around these projects. This is a rising tide; this is actually real. It's not the old school model of, okay, here's a project, and then someone manages support and commercialization of it. It's actually platforms at cloud scale. This is next-gen.

>> Yeah, and actually I think it started a few years ago.
We can talk about a company that you're very familiar with as part of this event, which is Armory. Many years ago, Netflix spun off a project called Spinnaker. Spinnaker is a CI/CD system that was developed at Netflix for their own purposes, but they chose to open source it, and since then it's become very popular with customers who want to use it, even on premises, and you have a company that has spun up around it. I think what makes this world very unique is that you have very large companies, like Facebook, that build things for themselves, like Vitess, or Netflix with Spinnaker, and open source them. You can have a lot of discussion about why they chose to do so, but increasingly that's becoming the default: when Amazon or Netflix or Facebook, or Meta, I guess we call them these days, builds something for themselves, for their own needs, the first question we ask ourselves is, should it be open sourced? And increasingly we are all saying yes. And here's what happens because of that. It gives an opportunity, depending on how you open source it, for innovation through commercial deployments, so you get SaaS companies that are going to take that product and make it relevant and useful to a very broad number of customers. You build partnerships with cloud providers like AWS, because our customers love this open source project and they need help, and they may choose an AWS managed service, or they may end up working with a partner on a day-to-day basis, and we want to work with that partner because they're making our customers successful, which is one reason all of us are here. So you have this innovation coming from large companies, whether they are consumer companies like Meta or infrastructure companies like us, and innovation happening in open source projects that ends up in companies being spun up, and that fosters the innovation and the flywheel that's happening right now. And as you said, this is unique; you never saw this happen before from so many different directions.

>> It really is a nice progression on the business model side as well. You mentioned Argo, which is a great organic thing that Intuit developed. We just interviewed Codefresh; they presented here in the showcase as well. You're seeing the formation around these projects develop in the community at a different scale. Look at Codefresh: Intuit did Argo, and they're not just supporting it, they're building a platform. So you're seeing the dynamics of tools now emerging into platforms. You mentioned Lambda, which is proprietary to AWS and, as your talk covered, powered by open source. So again, open source combined with cloud scale allows for new potential super applications, or superclouds, that are developing. This is a new phenomenon. This isn't just lift-and-shift and host on the cloud; this is actually a production developer workflow.

>> Yeah, and you're seeing consumers, large companies, enterprises, and startups. It used to be that only startups would be comfortable adopting some of these solutions, but now you see companies of all sizes doing so. And as I said, it's not just software; software as a service is increasingly becoming the way these are delivered to customers. I actually think the innovation is just getting going, which is why we have this.
We have so many partners here who are all inventing and innovating on top of open source, whether it's developed by them or by a broader community.

>> Yeah. I liked the containerd point you made there; you guys have driven a lot of changes, and again, with cloud scale and open source you're seeing the dynamics change, whether you're enabling that or seeing really big change come out of it. So let's take Snowflake, a big customer of AWS. They started out as a startup too, but they weren't a traditional data warehouse; they were bringing data-warehouse-like functionality, doing everything differently and making it consumable for the cloud, and hence they're huge. So that's a disruption of an incumbent leader in a sector, and then you've got new capabilities emerging. What are your thoughts, Deepak? Can you share your vision on how you see both the disruption of existing leaders, the old guard, if you will, as you guys call them, and the new capabilities as these new platforms emerge with net-new functionality?

>> Yeah. I'll speak from the side of the world I've lived in over the last few years, which is containers and serverless. If you go to any enterprise and ask them, do you want to modernize your infrastructure, do you want to take advantage of automated software delivery, continuous delivery, infrastructure as code, modern observability, all of them will say yes. But they are also still large enterprises with enterprise-level requirements. I'm using the word enterprise a lot, and it's usually a trigger word for me because so many customers have similar requirements, but I'm using it here to mean a large company with a lot of existing software and existing practices. I think the innovation that's coming, and I see a lot of companies doing this, is saying: "We understand the problems you want to solve. We understand the world you live in, which could be regulated. You want to use all these new modalities. How do we allow you to use all of them, keep the advantages of switching to Lambda or to a service running on Fargate, but give you the same capabilities?" And I'll bring up Sysdig here, because we work so closely with them on Falco, and I just talked about them in my keynote. They could have just said, "Oh no, we'll just support EC2 and be done with it." Instead they said, "No, we're going to make sure that serverless containers in particular are something we're really good at, because our customers want to use them, even though it requires us to think differently." And they ended up developing new things like Falco that are born in this new world but understand the requirements of the old world, if you get what I'm saying. I think that's a real example.

>> Yeah. Well, first of all, they're smart, so that was pretty obvious to most people who know them; they could see how to connect the dots on serverless, which is a great point, but not everyone can see that. Again, this is what's new, and Sysdig was practically founded in his backyard, as I found out in my interview; a great founder, and they went and did a new thing. So it was very easy to connect the dots there. Again, that's the trend.
Well, I've got to ask, since they're doing that for serverless: you mentioned Graviton in your speech, and what came out of re:Invent this past year was all the innovation going on at the compute level with Graviton, at many levels in the silicon. How should companies and open source developers think about how to innovate with Graviton?

>> Yeah, you've seen examples of people blogging and tweeting about how fast their applications run on Graviton and the price-performance benefits they get, whether it's in observability or other places. Graviton is something AWS is embracing across the compute portfolio. Obviously you can go find the Graviton2 EC2 instances and run on them, and that'll be great. But we know that many of our customers are building new applications on containers and serverless, increasingly with things like Fargate, where they don't want to operate the underlying infrastructure. So a big part of what we're doing is making sure Graviton is available to you on every compute modality. You've been able to run it on EC2 for a while, and you've been able to use ECS and EKS and run on Graviton almost since launch. But we want to take it a step further: Elastic Beanstalk has been around for a decade, but you can now use it with Graviton. People running ECS on Fargate can now use Graviton. Lambda customers can pick Graviton as well. So we're taking the price-performance benefits you get from Graviton and putting them across the entire compute portfolio. What that means is that with every high-level service built on that compute infrastructure, you get the price-performance benefits and the lower power consumption of Arm processors. So I'm personally excited like crazy, and this is Graviton2; Graviton3 is coming.

>> That's incredible. It's an opportunity like serverless was; it's pretty obvious, and hopefully everyone will jump on it. Final question, as the time's ticking here; I want to get your thoughts quickly. If you look at what's happened with containers over the past, say, eight years, since the original release of Docker, if you will, to how that's evolved, and then the introduction of Kubernetes and the cloud native wave we're seeing now, how would you describe the relationship between the success of Docker and what we're seeing now with Kubernetes and the cloud native construct? What's different, and why is this combination so successful?

>> Yeah. I often say that containers would have, let me rephrase that: what I say is that people would have adopted the modern way of running applications whether containers came around or not, but the fact that containers came around made that migration and that journey so much more efficient for people. That goes right from the first talk where Solomon announced Docker, with customers starting to use it and getting interested, all the way to the more advanced orchestration we have now for containers across the board, and there are so many ways you can do that, Kubernetes being the most well known. Here's the thing that I think has changed.
I think what Kubernetes and Docker, the whole modern way of building applications, have done is take people who would have spent years adopting these practices and bring those practices right to their fingertips, building them into the APIs, and in the case of Kubernetes, building an entire software world around them. The number of decisions people have to make has, in many ways, gotten smaller. There are so many options that the number of choices can feel higher, but the speed at which they can get to a result, a production version of an application that works for them, is way better. I have not seen anything like what I've seen in the last six, seven, eight years in terms of how quickly a company that you would think would never adopt modern technology has been able to go from "this is interesting" to getting into production really quickly. And I think it's because the tooling makes it so. You see that in the adoption, right from the fact that you could do docker run and docker build so easily back in the day, all the way to all the advanced orchestration you can do with container orchestrators today, which takes a lot of that work away as well. There has never been a better time to be a developer, independent of whatever you're trying to build, and I think containers are a big, central part of why that's happened.

>> It's like a recipe: the combination of cloud scale, the timing of Kubernetes, and the containerization concepts just exploded into a beautiful thing. It creates more opportunities, and, well, challenges, which are opportunities that are net new, but it solves the automation piece we're seeing. Again, it only makes things go faster.

>> Yes.

>> And that's the key trend. Deepak, thank you so much for coming on. We're seeing tons of open cloud innovations thanks to the success of your team at AWS and to you being great participants in the community. We're seeing innovations from startups, and you guys are helping enable that; of course, they want to live on their own, be successful, and build their superclouds and super apps. So thank you for spending the time with us. Appreciate it.

>> Yeah, anytime, and thank you. This is a great event, and I look forward to people running software and building applications using AWS services and all these wonderful partners that we have.

>> Awesome, great stuff. Great startups, great next-generation leaders emerging. When startups get successful, they become the modern software application platforms out there, powering business and changing the world. This is theCUBE, and you're watching the AWS Startup Showcase, season two, episode one: Open Cloud Innovations. I'm John Furrier, your host. See you next time.

Published Date : Jan 26 2022


Robyn Bergeron, Red Hat and Thomas Anderson, Red Hat | Red Hat Summit 2021 Virtual Experience


 

(upbeat electronic music) >> Hello, and welcome back to theCUBE's coverage of Red Hat Summit 2021, virtual. I'm John Furrier, in Palo Alto with remote interviews for our virtual conference. We've got two great guests, CUBE alumni: Tom Anderson, VP of the Ansible Automation Platform, and Robyn Bergeron, Senior Manager for the Ansible Community, community architect and all the great things involved. Robyn, great to see you. Tom, thanks for coming back on Red Hat Summit, here, virtual. Good to see you.

>> Thanks for having us.

>> So since last Summit, what are the updates on the Ansible community and the Automation Platform? Tom, we'll start with you: Automation Platform, what are the big updates?

>> Yeah, so since last Summit a lot has happened in Ansible land, if you will. Last time, I remember talking to you about content collections, the packaging and distribution format for Ansible content. We put a lot of effort into bringing out Ansible content collections for the community as well as for commercial users, and last year we launched a certified content program, working with our partners to certify the content collections they create. We co-certify them, working together to make sure they're developed and tested against a proper spec, so that both of us can provide them to our customer bases with confidence that they're going to work and perform properly, and so that Red Hat and the partner co-support them in our customers' production environments. That was a big deal. The other thing we announced, late last fall, was the private automation hub. Our customers obviously appreciate being able to go to Ansible Galaxy or to the Ansible automation hub to grab these content collections, these integrations, and bring them down into their environment. But they wanted a way, a methodology or a repository, where they can curate content from different sources and then manage that automation content across their environment, leaning a little bit into automation content as code, if you will. So we launched the private automation hub, which sits in our customers' infrastructure, whether that's in the cloud, on premises, or both, and lets them grab content from Galaxy, from the Ansible automation hub on cloud.redhat.com, as well as their internally developed content, and manage and provide it across their organization, governed by a set of policies. So lots of stuff going on, which is really advanced considering the amount of content and the number of collections we provide and have certified for our customers, along with the ability to curate and manage that content across teams.

>> I want to drill down on some of that unification of teams, which is a big message, as well as operating at scale, because that's a super value proposition you guys have, and I want to get into that. But Robyn, I want to come back to you on the community. So much has gone on. We're now into the pandemic for almost a year and a half; it's been a productivity boom, and developers have been working at home for a long time, so it's not a new workflow for them, but you've seen a lot more productivity. What has changed in the community since last Summit, going virtual to virtual again between these event windows? You guys have a lot going on. What's new in the community?
Give us an update.

>> Yeah, well, if we go back to around this time last year, we were wrapping up the old model where, when you installed Ansible, you got all the modules, everything all together. That was great for new users who don't want to have to figure things out; it helps them get up and running quickly. But from a community perspective, trying to manage that level of complexity turned out to be pretty hard, so the move to collections was actually great, not just from an end user perspective but also from a community perspective. We came out with Ansible 2.10 last fall, the first release of Ansible where collections were fully instantiated; they were available on Galaxy, but you could also get them as part of the Ansible community distribution. Fast forward to now: we just had the Ansible 3.0 release in February, and we're looking at Ansible 4.0 in early May. So there's been a lot of activity, and a lot has improved, honestly, as a result of the changes we've made. It's made it a lot easier for contributors to get in with a smaller group that's more their size and to identify their interested peers in the community. So it's been a boon for us, honestly. The pandemic otherwise has, I think, taught all of us, certainly you, John, about the amazing things we can do virtually. A lot of our meetups have pivoted to being virtual meetups and things like that, and it's been great to see how easily the community has been able to pivot around this sort of event. I hope we don't have to keep practicing it forever, but in the meantime it's enabled us to continue to get things done; thank goodness for every video platform on Earth.

>> Yeah, well, we appreciate it. We'll come back and talk more about that in the future, the best practices, what we all learned, and the stories. But I want to come back to you on the persona side of Ansible, because one of the things we talked about last time that seems to be gaining a lot of traction is the multiple personas, so I want to hold on to that; we'll come back to it. Tom, back to you. We're at Red Hat Summit. You have AnsibleFest, which is your own event where you drill down on all of this, so users watching know you have your own community, but now you're part of Red Hat, part of IBM (IBM Think is also happening soon), and Red Hat Summit is still a unique event. How is Ansible fitting into the big picture? The value proposition of unifying teams is really consistent with Red Hat's overarching theme, which is operating at scale, OpenShift, as Robyn just mentioned. Where is the Automation Platform going this year? What's the story here at Red Hat Summit for the Automation Platform?

>> Yeah, that's a great question. We've seen, during the pandemic, how it has accelerated some existing trends that we already saw. One of those is really around the democratization of application and infrastructure work: more people delivering infrastructure and applications, independent of each other. Which is great.
Faster and more agile, and all those other good words that apply. But what that also brings up is the opportunity for duplication of work, replication of effort, not reusing things that other teams may already have, and not complying with all of the configuration and compliance policies. So it has really brought Ansible into focus even more here, because of the common automation backplane and common language that Ansible provides across these different teams and personas. The great thing about what we supply for these different personas, whether it's application developers, infrastructure owners, network engineers, SecOps teams, or GitOps teams (there are so many of these personas out there who now all want independent access to infrastructure and to deploying infrastructure), is that Ansible provides the kind of leverage each of those communities needs, whether it's APIs, CLIs, event-based automation, webhooks, service catalogs, and so on; all of those interfaces, or modalities if you will, are accessible with Ansible automation. So it's really allowed us to be the connective tissue, or glue, across these different silos or domains of the organization. Tying it to OpenShift specifically, one of the things we talked about last fall at AnsibleFest was the integration between the Ansible Automation Platform, our Advanced Cluster Management product, and our OpenShift platform, which lets cloud native applications running on OpenShift talk to an Ansible automation operator running on that same platform and do things off-platform for customers who are already using Ansible. So it connects their cloud native platforms with their existing systems and infrastructure: systems of record, network systems, ticketing systems, you name it. With all of those integrations, Ansible has become the connective glue across these different environments, tying together traditional IT, cloud IT, cloud native, you name it. So it's really been fun, and it's been an exciting time for us, inside the portfolio and out.

>> That's a great point. Connective tissue is a great way to describe some of these platform benefits, because you guys have been working on this platform for a really long time, and the benefits are being seen in the market, certainly as people have to move faster with more agility. Robyn, I want to come back to you, because he brought up this idea of personas. We all know DevOps and infrastructure as code; it's been our religion for over a decade or more. But now the word DevSecOps is prevalent in all the conversations; security is now weaved in here. How are you seeing that play out in the community? And then, Tom, if you can give some color commentary too on the Automation Platform and how security fits in. With DevOps, everything's being operationalized at scale, we get that, and that's one of your value propositions, but DevSecOps has a persona. More people want more sec: dev is great, more ops and standardization, more developers, agile standards, and then security. DevSecOps. What's your take?

>> I thought it was DevNetSecOps? (man chuckling)

>> Okay, I forgot net; put net in there. Well, the network's abstracted away, as we say.

>> Yeah! Well, from my perspective, people in their jobs are all over the place, right?
The more they can feel like they're being efficient and doing great stuff at work, the happier they are to bring as many people into the fold as possible. And normally security has always been, a bit like networking, this isolated special group over here, one of the traditional IT bottlenecks that keeps us from getting anything done. But on a community level we see folks who are interested in security all the time. I know we've certainly done quite a bit of work with some folks at IBM around one of their products, which I assume Tom will get into in just a moment. From a community perspective, we've seen people writing playbooks and roles, and now collections, for all of the traditional government and compliance testing, things like the NIST standards and all of that kind of stuff. It's part of the network effect, and it's actually a great place for automation hub. For folks who are on premises, any of our customers are really going to start to see lots of value in how it can connect folks inside the organization organically, through the place where I'm doing my Ansible things, and let them find each other. It takes you from having silos of automation everywhere to a real internal network of Ansible friends and Ansible power users who can work together and collaborate, just the same way we do in open source.

>> Yeah. And Tom, IT modernization requires security. What's your take on this? Because you've got clusters, a lot of clusters, advanced cluster management issues, and you've got to deal with the modern apps that are coming. IT has got to evolve. What's your take on all this?

>> Yeah. Not only does IT have to evolve, but it's about the integration of IT into the rest of the environment, to be able to respond. One of the areas we've put a lot of effort into is curating solutions around security automation, and we've talked about that in the past: the idea of connecting SecOps teams that are doing intrusion detection or threat hunting with responding in an automated way to those threat detections. So you connect SecOps with ITOps, which have traditionally been siloed operations and siloed teams. Now, with the curated Ansible security automation solution that we brought to market with our partners, those two teams are connected in a seamless way. We've done a lot of work with our friends at IBM around this area, because they have deep security capabilities and products in their portfolio, and we've done a lot of work with many of our other partners, whether it's CyberArk or Microsoft or whoever. Traditionally, Ansible has done a great job on compliance around configuration enforcement, setting configuration. Security automation then took us into connecting SecOps with IT, and now, along with our advanced cluster management integration with Ansible, we're starting to ask: what are the things inside that DevSecOps workflow that may require integration, automation, or packaged automation with other parts of the environment?
So we're bringing all of those pieces together as we move forward, which is really exciting for us.

>> Okay, I've got to ask you guys the number one question I get all the time and see in the marketplace; it's kind of a combo question: how do I accelerate the automation of my cloud native development alongside my traditional infrastructure? As people stand up greenfield cloud projects and then integrate the cloud with their on-premises, traditional infrastructure, how do I accelerate across those two environments? How do I accelerate the automation?

>> It's a great question for us, and it's what we were talking about at the last AnsibleFest: bringing Ansible together with our Advanced Cluster Management product and the OpenShift platform. Ansible is already in widespread use for automating both traditional and cloud native infrastructure, whether it's cloud infrastructure or on-premises storage, compute, and network, you name it; customers are using Ansible to automate all kinds of pieces of infrastructure. Being able to tie that to their new cloud native initiatives, without having to redo all of the work they've already done, and integrate that infrastructure automation with their cloud native stuff, substantially accelerates what I call the operationalization of their cloud native platforms with their existing IT infrastructure and existing IT ecosystem. I believe that's where the Ansible Automation Platform plays a key role: connecting those pieces together without having to redo all the work that's already been done and invested in.

>> Robyn, what's your take on this? This is what people are working on in the trenches: they realize cloud benefits, they've got some cloud native action, and then they've got the traditional environment, and they've got to get them connected and automated.

>> Yeah, absolutely. The beauty of Ansible from an end user perspective is how easy it is to learn and how easy the language is to learn. And that portability: it doesn't matter how much of a rocket scientist you are, everybody appreciates simplicity. Everybody appreciates being able to hand something simple to somebody else and let other people get things done; it's not quite English, but Ansible is definitely quite readable. And when we started to work on all the Ansible operators, one of the main goals was making sure that the simplicity we have in Ansible is brought over directly into the operators. So just because it's cloud native doesn't mean you suddenly have to learn a whole set of new languages; Ansible is just as portable there as it is in any other part of your IT organization, infrastructure, or whatever it is you have going on.

>> Well, there's a lot of action going on here at Red Hat Summit 2021. One thing I wanted to bring up, in the context of the show, is the success and the importance of you guys having Ansible collections. This has come up multiple times as we've talked about those personas, and you've got these new contributors, people contributing content, as open source continues to grow and be phenomenal. Value proposition. Touch on this concept of collections: what are the updates, why is it important, and why should folks pay attention to it and continue to innovate with collections?
>> From a commercial perspective, or a product perspective, collections have made it a lot easier for contributors to create, deploy, and distribute content. As Robyn mentioned earlier, previous iterations of Ansible had all of that integration, all of those collections, within one big group; we called it "batteries included" back in the day. That meant contributors' content was deployed with the base Ansible distribution, so they had to wait for the next version of Ansible to come out; that's when their content would get redistributed. By decoupling the platform, or engine, from the collections, the individual sets of related integrations, each can move at its own pace. Users and customers can get the content they need at the pace their contributors can keep up with, instead of having to wait for the next version of the shipping product to get a new version of the integration they really like. So again, decoupling those things allows them to move at different paces. The engine, or the platform itself, needs to be stable, performant, and secure, and it's going to move on a certain lifecycle. The content itself, for all the different cloud, hub, and network providers and platforms, can now move at its own pace; each of those has its own lifecycle. That allows us to get more functionality into our customers' hands a lot quicker. And then, with our certified partner program, where we support that content, certified and supported content helps deliver the value we bring to our customers with the subscription. It's the ecosystem of partners we work with, who certify and support the content we ship and support with our customers. Customers benefit both from access to the technology and from the value added in terms of integration, testing, and support.

>> Robyn, what's your take from the community side? I see custom automation connecting here; there's a lot of action going on with collections.

>> Yeah, absolutely. It's been interesting. Tom just mentioned how, previously, everything had to be released all at once. And if you think about it: sure, I have Ansible installed, but how often do I have to, even as a regular person (I'm not a system administrator these days), click that button to update my Mac or my Linux machine, or my Windows machine, or the operating system on my telephone? Every one of these devices or programs that Ansible connects to is operating and developing at its own pace. So when a new version of, we'll call it Red Hat Enterprise Linux, comes out, if there are new changes or new features we want to be able to connect to, it's not really helpful when we're not releasing for another six months. So it's really helped us, from a community angle, to have each of these collections working in concert with, for example, the Linux subsystems that are actually making the things that get turned into collections, like SELinux or systemd. Those things move at their own pace; we can update them at our own pace in collections, and then people can update those collections without having to wait another six or eight months, or whatever it is, for a new version of Ansible to come out.
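For readers who want to try the collections workflow Tom and Robyn describe, here is a small, hedged Python sketch that uses the ansible-runner library to invoke a module by its fully qualified collection name and pull the SELinux facts Robyn alludes to. The scratch directory and host pattern are illustrative; a third-party collection would first be installed with `ansible-galaxy collection install <namespace>.<collection>` or pulled from a private automation hub, whereas ansible.builtin ships with ansible-core.

```python
# collections_demo.py -- run one module ad hoc through the ansible-runner library.
# Assumes `pip install ansible-core ansible-runner` on a Linux host.
import tempfile

import ansible_runner

# Gather only the SELinux-related facts from the local machine, addressing the
# module by its fully qualified collection name (<namespace>.<collection>.<module>).
result = ansible_runner.run(
    private_data_dir=tempfile.mkdtemp(prefix="runner-demo-"),  # scratch dir for artifacts
    host_pattern="localhost",               # implicit localhost, local connection
    module="ansible.builtin.setup",
    module_args="filter=ansible_selinux",
)

print("status:", result.status)  # e.g. "successful" or "failed"
print("rc:", result.rc)

# Runner events are dicts; per-host module output lives under event_data["res"].
for event in result.events:
    res = event.get("event_data", {}).get("res")
    if res and "ansible_facts" in res:
        print(res["ansible_facts"].get("ansible_selinux"))
```

The same pattern works for any certified or community collection: install it, reference its modules by FQCN, and the collection can move on its own release cadence, exactly the decoupling described above.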
We can update those at our own pace in collections, and then people can update those collections without having to wait another six months, or eight months, or whatever it is, for a new version of Ansible to come out. It's really made it easier for all of those developers of content to work on their content and their Ansible relationships almost in sync, and to make sure it's not, "I'm going to do it over here, and then I'm going to come back over here and fix everything later." It's more of a continuous development process. >> So the contributor experience is better then, you'd say? >> I'm sorry? >> The contributor experience is better then? >> Oh, absolutely. Yeah, 100%. I wouldn't say it's instant gratification, but certainly the ability to have a little bit more independence, and to be able to release things as you see fit and not be gated by the entire rest of the project, is amazing for those folks. >> All right, so I'll put you on the spot, Robin. If I'm a developer, bottom-line me: what's in it for me? Why should I pay attention to collections? What's the bottom line? >> Well, Ansible is a platform, and Ansible benefits from network effects. The reason we've gotten as big as we have is sort of like a snowball rolling downhill, right? The more people that latch onto what you're doing, the more people benefit, and the more additional folks want to join in. So if I was working on any other product that I'd consider having automated with Ansible, the biggest thing I would look at is: what are those people also using? Are they automating it with Ansible? And I can guarantee you, 99% of the time, everything else that people are using is also being automated with Ansible. So you'd be crazy not to want to participate and make sure you're providing the best Ansible experience for your application, because for every application or device that we can connect to, there are probably 20 other competitors making similar applications that folks might consider in lieu of yours if you're not providing Ansible content for it. >> Hey, make things easier and simple to use, and you reduce the steps it takes to do things. That's a winning formula, Tom. When you make things that good, you get the network effect. But this highlights what you mentioned earlier about connective tissue. When you use words like "connective tissue," it implies an organization, not just a mechanism. It's not just software, it's people. It's a people experience here in the automation platform. >> Robin: Yep. >> This seems to be the bottom line. What's your take? What's your bottom-line view? I'm a developer, what's in it for me? Why should I pay attention to the automation platform? >> What Robin just said is it, to me: more people using the Automation Platform. Crossing those domains and silos, as kind of connective tissue across those teams and personas, means those contributors, those developers creating automation content, are getting it into the hands of more people across the organization in a more simplified way, by using Ansible automation. Those personas get access to the automation itself faster, and they can get value from it quicker, without having to go back to other folks.
They don't have to reinvent the wheel in terms of automation. (man speaking faintly) They don't want to know about the details of what it takes to configure the network or configure the storage elements. They rely on those automation developers and contributors that have built that for them. That's one of the powers of the platform, across those teams and those silos. Whether we're talking about ITOps, SecOps, or NetworkOps, being able to do all of these tasks with the same language and the same automation content means running faster and focusing on core responsibilities without worrying. >> Robin, you wanted to talk about something in the community, any updates? I think Navigator, you mentioned you wanted to give a plug for that? >> Absolutely! Much like any other platform in the universe, if you don't have really great tools for developing content, you're kind of dead in the water, right? Or you're leaving it to fate. So we've been working on a new project, not part of the product yet; it's sort of in a community, exploratory phase. Release early, release often, or minimum viable product, I guess, might be the other way to describe it currently. It's called Ansible Navigator. It's a TUI, which is like a GUI, but with a sort of terminal user interface look to it. It's an interface where you can develop content all in one window. You have your documentation accessible to you, all of your test results available to you in one window, rather than: I'm going to do something here, and then I'm going to go over there, and now I'm not sure, so now I'm going to go over here and look at the docs instead. It's all in one place. I know the folks who have seen it already have been excited, (woman squealing) but it's definitely in early, community stages right now. We can give you the link: it's github.com/ansible/ansible-navigator >> A TUI versus a GUI, versus a command line interface. >> Yeah! >> How do you innovate on the command line? It's a CUI, or a? >> Yeah! >> You know, there are so many IDEs out there, and I think Tom can probably talk to some of this, how that might relate to VS Code or many of the other traditional developer IDEs that are out there. The goal is certainly to be able to integrate with some of those other pieces. But it's one of those things where, if everybody's using the same tool and we can start to enforce higher levels of quality and standards through that tool, there are benefits for everyone. Tom, I don't know if you want to add on to that in any way? >> Yeah, it's just one of our focus areas here, which is making it as easy as possible for contributors to create Ansible automation content. And part of that is the development tooling around Ansible that lets developers and contributors use their IDEs to build and deploy automation content. So I'm really focused on making that contributor's life easier. >> Well, thanks for coming on, Tom and Robin. Thanks for sharing the insight here at Red Hat Summit 21, virtual. You guys continue to do a great job with the success of the platform, which has been consistently growing and earning great satisfaction with developers, and now ops teams, and sec teams, and net teams.
You know, unifying these teams is certainly a huge priority for enterprises, because at the end of the day, cloud scale is all about operating, which means more standards and more operations. That's what you guys are doing. So congratulations on the continued success. Thanks for sharing. >> Thanks for having us. >> Okay, I'm John Furrier here in theCUBE. We are remote with CUBE Virtual for Red Hat Summit 2021. Thanks for watching. (upbeat electronic music)

Published Date : Apr 28 2021



Dan Walsh, Red Hat | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Welcome back, this is SiliconANGLE Media's live, wall-to-wall coverage of KubeCon and CloudNativeCon here in Austin, Texas. Got the house banner rocking all day. I'm Stu Miniman, happy to be joined on the program by Dan Walsh, who's a consulting engineer with Red Hat. Rocking the red hat, Dan, thanks so much for joining us. >> Pleasure to be here. >> Alright, so Red Hat has a strong presence at the show; we had Clayton on yesterday, a top contributor, who actually won an award for all the contributions he's done here. We're going through a lot of angles. Why don't you start by telling us your role, what you've been doing at Red Hat. >> So at Red Hat I'm a consulting engineer, which basically means I lead a team of about 20 engineers, and we work on the base operating system, basically anything to do with containers from the operating system on down. So kernel engineers, but everything underneath Kubernetes. Traditionally, for the last four and a half years, I've been working on the Docker project as well as other container-type efforts. We've added things like file system support to Docker, lots of kernel changes; we've been working forever on user namespaces, things like that. More recently, though, we started to work on something new: OpenShift and Kubernetes were built on top of Docker originally, and they found over time that the Docker base was changing in ways that were continuously breaking Kubernetes. So about a year and a half ago we started to work on a project called CRI-O. A little history: if you go back, Kubernetes was originally built on top of Docker. But CoreOS came to Kubernetes and wanted to get rkt support into Kubernetes. And rather than add rkt support directly, Kubernetes decided to define this interface, basically the CRI, the container runtime interface, which is an API that Kubernetes calls out to to run containers. So rkt could build to the container runtime interface; they actually built a shim for the Docker API as well. But we decided at that time to basically build our own, and we called it CRI-O. It's the container runtime interface for OCI images. The plan was to build a very minimalist daemon that could support Kubernetes, and Kubernetes alone. We don't support any other orchestrators or anything else; it's totally based on the Kubernetes CRI. So our versioning matches up with Kubernetes: Kubernetes 1.8, you've got CRI-O 1.8; Kubernetes 1.9, you've got CRI-O 1.9. >> So Dan, we've been talking about this. Red Hat made a pretty strong bet on Kubernetes relatively early. Red Hat is very open, everything you do is 100% open source. Why, with CRI-O, only Kubernetes? There are other orchestrators out there that are open source. >> Well, let's take a step back. One of the goals in my group was to break down, sort of, what it means to run a container. If you think about when I run a container, what do I need? I need a standard container image format, and the OCI image bundle format defines that. The next thing I need is the ability to pull an image from a container registry to the host. So we built a library called containers/image that implements all of the capabilities of moving container images back and forth, basically at a command line or a library level.
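As a concrete taste of that pull step, the command line tool built on top of that library (skopeo, which Dan introduces next) can copy an image from a registry down to local disk without any daemon involved. This is a hedged sketch driving it from Python; it assumes skopeo is installed, and the image reference and destination directory are illustrative.

```python
# Hedged sketch: copying an image from a registry to the host with skopeo,
# no container daemon involved. Assumes skopeo is on PATH; the image
# reference and the destination OCI-layout path are illustrative.
import subprocess

src = "docker://docker.io/library/alpine:latest"  # registry transport
dst = "oci:/tmp/alpine-oci:latest"                # local OCI layout on disk

# Read the image's metadata straight from the registry.
subprocess.run(["skopeo", "inspect", src], check=True)

# Pull it: registry -> local disk.
subprocess.run(["skopeo", "copy", src, dst], check=True)
```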
We built a tool on top of that called Skopeo, which gives us a basic command line: I can move from one container registry to another, I can move images from container registries into different kinds of storage, and I can move directly from a container registry into a Docker daemon. The next step you need when you want to run a container is storage. You need to take that container image and put it on disk, and in the case of containers you do that on top of what's called a copy-on-write file system; you need to have a layering file system. So we created another project called containers/storage that allows you to basically store those images on disk. The last step for running a container is actually to launch an OCI runtime, and the OCI runtime specification and runc take care of that. So we have the four building blocks for what it means to run a container, each kept separate. We're building other tools around them, but we built one that was focused on Kubernetes. And again, the reason Red Hat bet on Kubernetes is that we felt it had the best long-term potential, and judging by this show I think we made a sane bet. But we will work with others. These are all fully open source projects. We actually have contributors coming in who are contributing to these low-level tools. For instance, Pivotal is a major contributor to containers/image, and they're using it for pulling images into their platform. We have other products and projects using them, so it's not just Kubernetes. It's just that CRI-O is a daemon for Kubernetes. >> Yeah Dan, it's really interesting. Listening to Clayton's keynote this morning, he talked about how one of the goals you have at Red Hat is making that underlying infrastructure boring, so that everything above it can rely on it and it just works. There's a lot of work that goes on under there. So it's like the plumbers and the mechanics down underneath making sure it all works. >> A lot of times when I give talks, the number one thing I'm always trying to teach people is that containers are not anything really significantly different. Containers are just processes on a Linux system. If you booted up a regular RHEL system right now and looked at PID 1 of the system... let me take a step back. I define a container as something that has cgroups associated with it for resource constraints, it has some security constraints associated with it, and it has these things called namespaces, which are a virtualization layer that gives you a different view of the processes. If you look at every process on a Linux system, they all have cgroups associated with them, they all have security constraints associated with them, and they all have namespaces associated with them. So if you went to PID 1, if you went to /proc/1/ns, you would see the namespaces associated with PID 1. That means that every process on Linux is in a container, by the definition of a container being those three things. All that happens on the system is that you toggle those: you can tighten them or change some of the namespaces and things like that, and that gives you the feel of the virtualization. But the bottom line is they're all containers. So all the tools like Docker, rkt, CRI-O, runc, or any one of those tools are basically just going into the kernel, configuring the kernel, and then launching PID 1 of the container on the system. From that point on, it's just the kernel that's taking care of them.
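Dan's point is easy to check on any Linux machine. Below is a minimal sketch, assuming a Linux system with procfs mounted; it prints the namespaces, cgroup membership, and security label of the current process, the same three ingredients he uses to define a container. On a machine without SELinux the label read may be empty or fail, which the sketch tolerates.

```python
# Minimal sketch (Linux only): an ordinary process already has namespaces,
# cgroups, and a security label, so "containers are just processes".
import os

pid = "self"  # swap in "1" to inspect PID 1, if you have permission to do so

# Namespaces: each entry under /proc/<pid>/ns is a symlink naming one namespace.
for ns in sorted(os.listdir(f"/proc/{pid}/ns")):
    print("namespace:", ns, "->", os.readlink(f"/proc/{pid}/ns/{ns}"))

# cgroup membership for the same process.
with open(f"/proc/{pid}/cgroup") as f:
    print("cgroups:", f.read().strip())

# Security label (e.g. an SELinux context); may be empty on other systems.
try:
    with open(f"/proc/{pid}/attr/current") as f:
        print("security label:", f.read().strip("\x00\n") or "(none)")
except OSError:
    print("security label: unavailable on this system")
```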
We at Red Hat have a t-shirt that we often wear that says "Linux is containers and containers are Linux," and that actually proves the point. So the bottom line is the operating system is key, and my team, the developers I work with, and the open source community are all about: how can we make containers better? How can we further constrain these processes? How can we create new namespaces? How can we create new cgroups, new stuff like that? It's all low-level stuff. >> Dan, give us some flavor of some of the customer conversations you're having at the show here. Where are they? We know it's a spectrum of where they are, but what are some of the commonalities that you're hearing? >> At Red Hat our customers run the gamut. We have customers who can barely get off RHEL 5, which came out 12 years ago, through to sort of the leading-edge customers. And the funny thing is, a lot of these are in the same companies. Most of our customers at this point are just beginning to move into the container world. They might have a few containers running, or their developers are insisting, hey, this container stuff is cool, I want to start playing with it. But getting them from that step to the step of, say, Kubernetes, or getting them to the step of OpenShift, is sort of a big leap. My fear with a lot of this is that people are concentrating too much on the containers. The bottom line is what people need to do is develop applications, and secure applications. My history is heavily based in security. So really we face a lot of customers who have sort of home-grown environments, and their engineers come in and say, oh, I want to do a Docker build, or I want to talk to the Docker socket. And I always look at that and question it: you're supposed to be building apps; you're building banking apps, or military apps, or medical apps. They should be concentrating on that and not so much on the containers. And that's actually the beauty of OpenShift. You can set up OpenShift workloads in such a way that their interaction to build a container is just a Git check-in. You don't have to go out and understand what it means to build a container; you don't have to have the knowledge of what it takes to build a container, and things like that. >> Dan, you bring up a really good point. At this show, with most of the customers I'm talking to, it's really about the speed at which they can deliver on the applications. Yes, there are the people building all the tooling and the projects here, and there are many customers involved with that, but we've gone further up the stack, where it's closer to the application and less about that underlying infrastructure. >> And the other thing customers are looking for, in my case, as I said, I have a strong background in security; I did SELinux for like 13 years. Most of my time talking to customers is about security: how can we actually confine containers, how do we keep them under control, especially when they go to multi-tenancy. And some good things, I don't know, are you going to talk to Kata? Have you heard about the Kata project? >> So we've talked to a couple of people, Kata coming out of the Open-- >> Clear Containers and-- >> Yeah, Clear Containers from Intel.
>> Yeah, and I think getting to those levels of using hardware isolation really helps out in-- >> It's interesting, because when first looking at it, it's like, wait, is it kind of a lightweight VM, or is it a container? Where does that fit in? >> They're really just containers, because a lightweight VM would actually be booting up an init system and running logging and all these other things. With a Kata container, or, I'm more familiar with Clear Containers, a Clear Container is literally just running a very small init system, and then it launches runc to actually start up the container. So it has almost no operating system inside of the lightweight VM, as opposed to running regular virtual machines. >> Dan, would love your take on, you talked about security. Security of containers, the role of security in the cloud native space. What are you seeing, and what do we need to work on even more as an industry? >> It's funny, because my world view is at a much lower level than the other security people that we talk to. There are other security people who look at network isolation and role-based access control inside of Kubernetes. I look at it as basically multi-tenancy: running multiple containers with different workloads, and what happens if one container gets hacked, how does that affect the other containers that are running, and how do I protect the services? So over the years as we've been working with Docker, I got SELinux support in, and we've gotten seccomp support in. We're trying to take advantage of everything in the Linux kernel to further tighten the security. But the bottom line is that a process inside of the container is talking to the real kernel on the host. Any vulnerability in the host kernel could lead to an escalation and a breakout. So that's why, no matter what you say, a hypervisor-isolated container, a separate container running inside of a VM, is always going to be more secure. That being said, on the other hand, with containers, in a lot of cases you want to have some interaction, and if you go all the way to a VM you get really bad isolation. So you really have to cover the gamut. A lot of times I'll tell people to look at containers as not being a zero-sum game. You don't have to throw away all your VMs to move to containers. I tell people the most secure way to run an application is on separate physical hardware. The second most secure is in a VM. The third most secure is inside a container, and you can go on down the line. But there's nothing to say that you can't run your containers inside of separate VMs, on separate physical machines. So you can set up your environment in such a way that, say, you have your web front end sitting inside of VMs in the (mumbles) zone on separate physical hardware, and you set up your databases or your credit card data on separate physical machines, separate VMs, and separate containers inside of those. So you can build up these really high levels of security based on containers, virtualization, and physical hardware. I can go on forever on this stuff. >> Dan Walsh, really appreciate you sharing some of the ways that Red Hat is trying to help some of those underlying pieces become boring, so the customers won't have to worry about them. >> That's really what it's about. If you have to know what's going on at the host level, then I haven't done my job. Our goal is to basically take that host level and make it disappear, so you can work at your higher-level orchestration layer.
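The controls Dan lists are exposed directly by the container engines when you launch a container. Here is a hedged sketch that layers several of them onto a single container; it assumes podman is installed (docker takes essentially the same flags), and the image, seccomp profile path, and SELinux level are illustrative values rather than recommendations.

```python
# Hedged sketch: tightening one container with the kernel controls Dan mentions.
# Assumes podman is installed; image, profile path, and label are illustrative.
import subprocess

cmd = [
    "podman", "run", "--rm",
    "--cap-drop=ALL",                                # drop all Linux capabilities
    "--security-opt", "seccomp=/usr/share/containers/seccomp.json",  # syscall filter
    "--security-opt", "label=level:s0:c100,c200",    # pin an SELinux MCS level
    "--read-only",                                   # read-only root filesystem
    "docker.io/library/alpine:latest", "id",
]
subprocess.run(cmd, check=True)
```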
>> Well Dan, it's great to catch up with you, thanks so much for joining us. We'll be back with lots more coverage here from KubeCon 2017 in Austin, Texas. I'm Stu Miniman, and you're watching theCUBE. (electronic music)

Published Date : Dec 7 2017

