Securing Your Cloud, Everywhere
>>Welcome to our session on security, titled Securing Your Cloud, Everywhere. With me is Brian Langston, senior solutions engineer at Mirantis, who leads security initiatives for Mirantis's most security-conscious customers. Our topic today is security, and we're setting the bar high by talking in some depth about the requirements of the most highly regulated industries. So, Brian, for regulated industries, what do you perceive as the benefits of the evolution from classic infrastructure-as-a-service to container orchestration? >>Yeah, the adoption of container orchestration has given rise to five key benefits. The first is accountability. Think about the evolution of DevOps and the security-focused version of that team, DevSecOps. These two competencies have emerged to provide, among other things, accountability for the processes they oversee and the outputs they enable. The second benefit is auditability. Logging has always been around, but the pervasiveness of logging data within container environments allows for the definition of audit trails in new and interesting ways. The third area is transparency. Organizations that have well-developed container orchestration pipelines are much more likely to have a higher degree of transparency in their processes. This helps development teams move faster, helps operations teams identify and resolve issues more easily, and helps simplify the observation and certification of security operations by security organizations. Next is quality. Several decades ago, Toyota revolutionized the manufacturing industry when it implemented the philosophy of continuous improvement. Included within that philosophy was a dependency and trust in the process: as the process was improved, so was the quality of the output. Similarly, the refinement of the process of container orchestration yields a higher-quality output. The four things I've mentioned ultimately point to a natural outcome, which is speed. When you don't have to spend so much time wondering who does what or who did what, when you have clear visibility into your processes, and because you can continuously improve the quality of your work, you aren't wasting time in a process that produces defects or spending time in wasteful rework phases. You can move much faster, and we've seen this to be the case with our customers. >>So what is it specifically about container orchestration that gives these benefits? I guess I'm really asking: why are these benefits emerging now, around these technologies? What's enabling them? >>Right. I think it boils down to four things related to orchestration pipelines that are also critical components of successful security programs for our customers and related industries. The first one is policy. One of the core concepts in container orchestration is this idea of declaring what you want to happen, or declaring the way you want things done. One place where declarations are made is policies. So as long as we can define what we want to happen, it's much easier to do complementary activities like enforcement, which is our second enabler. Tools that allow you to define a policy typically have a way to enforce that policy. Where this isn't the case, you need to have a way of enforcing and validating the policy's objectives. Mirantis tools allow custom policies to be written and also enforce those policies. The third enabler is the idea of a baseline. Having a well-documented set of policies and processes allows you to establish a baseline.
It allows you to know what's normal. Having a baseline allows you to measure against it as a way of evaluating whether or not you're achieving your objectives with container orchestration. The fourth enabler is continuous assessment, which is about measuring constantly, back to what I said a few minutes ago about the Toyota way. Measuring constantly helps you see whether your processes and your target end state are being delivered; as your output deviates from the baseline, your adjustments can be made more quickly. So these four concepts, I think, can really make or break your compliance status. >>It's a really interesting way of thinking about compliance. I had previously thought of compliance mostly as a matter of legally declaring something and then trying to do it. But at this point, we have methods beyond legal boilerplate for asserting what we want to happen, as you say, and this is actually opening up new ways to detect deviation and enforce against failure to comply. That's really exciting. So, you've touched on the benefits of container orchestration here, and you've provided some thoughts on what the drivers and enablers are. Where does Mirantis fit in all this? How are we helping enable these benefits? >>Right. Well, our goal at Mirantis is ultimately to make the world's most compliant distribution. We understand what our customers need, and we have developed our product around those needs, and I can describe a few key security aspects of our product. Mirantis promotes this idea of building and enabling a secure software supply chain. The simplified version of that, as it pertains directly to our product, follows a build-ship-run model. At the build stage is Docker Trusted Registry. This is where images are stored, following numerous security best practices. Image scanning is an optional but highly recommended feature to enable within DTR. Image tags can be regularly pruned so that you have the most current, validated images available to your developers. The second, or middle, stage is the ship stage, where Mirantis enforces policies that follow industry best practices, as well as custom image promotion policies that our customers can write and align to their own internal security requirements. The third and final stage is the run stage, and at this stage we're talking about the engine itself. Docker Engine - Enterprise is the only container runtime with FIPS 140-2 validated cryptography, and it has many other security features built in. Communications across the cluster, across the container platform, are all secure by default. So this build-ship-run model is one way our products help support the idea of a secure supply chain. There are other aspects of the secure supply chain that are more customer-specific that I won't go into, but that's how our product can help. So that's the first big area, the secure supply chain; the second big area is the STIG certification. A STIG is basically an implementation or configuration guide, published by the U.S. government for products used by the U.S. government. It's not exclusive to them, but customers that value security highly, especially in a regulated industry, will understand the significance and value that the STIG certification brings. In achieving the certification, we've demonstrated compliance, or alignment, with a very rigid set of guidelines.
Our FIPS validation of the cryptography and the STIG certification are third-party attestations that our product is secure, whether you're using our product as a government customer, as a customer in a regulated industry, or something else. >>I did not understand what the STIG really was, so that's helpful, because this is not something that people in the industry by and large talk about. I suspect that's because these things are hard and time-consuming to get, so they don't tend to bubble up to the top of marketing speak the way glitzy new features do, features that may or may not >>be secure. >>So then, moving on: how has container orchestration changed how your customers approach compliance assessment and reporting? >>Yeah, this has been an interesting experience and observation as we've worked with some of our customers in these areas. I'll call out three areas. One is the integration of assessment tooling into the overall development process. The second is assessment frequency, and the third is how results are being reported, which includes what data needs to go into the reporting. There are very likely others that could be addressed, but those are three things that I have noticed personally in working with customers. >>What do you mean, exactly, by integration of assessment tooling? >>Yeah. Our customers all generally have some form of a development pipeline and process, with various third-party and open-source tools that can be inserted at various phases of the pipeline to do things like static source code analysis, host scanning, image scanning, and other activities. What's not very well established in some cases is how everything fits within the overall pipeline framework. So it too often ends up as a conversation with us about what commands should be run, with what permissions, where in the environment things should run, how the code that does the scanning gets there, where the data goes once the scan is done, and how it will be consumed. These are real areas where we can help our customers understand what integration of assessment tooling really means. >>It is fascinating to hear this, and maybe we can come back to it at the end. But what I'm picking out of the way you speak about this is a kind of re-emergence of the Japanese innovations in factory-floor productivity: just-in-time delivery, the Toyota miracle, and that kind of thing. Yesterday, Anders Wallgren from CloudBees, the CI/CD expert, told me that one of the things he likes to tell his consultees and customers is to put a GoPro on the head of your code and figure out where it's going and how it's spending its time, which is very reminiscent of those 1950s time-and-motion studies, isn't it, that pioneered accelerating the factory floor in the industrial America of the mid-century. The idea that we should be coming back around to this and doing it at light speed with code now is quite fascinating. >>Yeah, it's funny how many of those same principles are really transferable from 50, 60, 70 years ago to today. Quite fascinating. >>So, getting back to what you were just talking about, integrating assessment tooling, it sounds like that's very challenging. And you mentioned assessment frequency and reporting.
What is it about those areas that has required adaptation? >>So, assessment frequency. In legacy environments, if we think about what those looked like not too long ago, compliance assessment used to be a relatively infrequent activity in the form of some kind of audit, whether a friendly peer review, an intercompany audit, or a formal third-party assessment. In many cases these were big, lengthy reviews, full of interview questions, requests for information, periods of data collection, and then the actual review itself. One of the big drawbacks to this lengthy, infrequent engagement is that vulnerabilities would sometimes go unnoticed, or unmitigated, until these reviews caught them. But in this era of container orchestration, with the decomposition of everything in the software supply chain and with clearer visibility of the various inputs to the build life cycle, our customers can now focus on what tooling and processes can be assembled together in the form of a pipeline that allows constant inspection of a continuous flow of code from start to finish. And they're asking how our product can integrate into their pipelines and their QA frameworks to help simplify this continuous assessment framework. So that addresses the frequency challenge. Now, regarding reporting: our customers have had to reevaluate how results are being reported and the data that's needed in the reporting. The root of this change is in the fact that security has multiple stakeholder groups, and I'll just focus on two of them. One is development, and their primary focus, if you think about it, is really about finding and fixing defects; that's what they're focused on as they push code. The other group is the security project management office, or PMO. This group is interested in what security controls are at risk due to those defects. So the data that you need for these two stakeholder groups is very different, but because it's also related, it requires a different approach to how the data is expressed, formatted, and ultimately integrated, sometimes with different data sources, to be able to serve both use cases. >>Mhm. So how does Mirantis help improve the rate of compliance assessment, as well as this question of the need for differential data presentation? >>Right. So we've developed and exposed APIs that help report the compliance status of our product as it's implemented in our customers' environments. Through these APIs, we express the data in industry-standard formats using OSCAL. OSCAL is a relatively new project out of NIST; it's really all about standardizing a set of formats for expressing control information. In this way our customers can get machine- and human-readable information related to compliance, and that data can then be massaged into other tools or downstream processes that our customers might have. What I mean by downstream processes is this: if you're a development team and you have the inspection tools and the processes to gather findings, defects related to your code, a downstream process might be the ticketing system, like Jira, that logs a formal defect for that finding. But it all starts with having a common, standard way of expressing the scan output, such that both development teams and the security PMO groups can benefit from the data.
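To make the downstream-processing idea concrete, here is a minimal sketch of the consuming side. The endpoint, token, and JSON field names are illustrative assumptions, not Mirantis's published API, and real OSCAL assessment-results documents are nested more deeply than this; the point is only that one machine-readable report can feed the two stakeholder views described above.

# Hypothetical endpoint returning OSCAL-style assessment results, simplified.
curl -s -H "Authorization: Bearer $TOKEN" \
  https://compliance.example.internal/api/oscal/assessment-results \
  -o assessment.json

# Developer view: findings as a defect list, ready to push into a ticketing system.
jq '[.results[].findings[] | {defect: .title, component: .target}]' assessment.json

# Security-PMO view: which controls those same findings put at risk.
jq '[.results[].findings[] | .controls[]] | unique' assessment.json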
So essentially we've been following this philosophy of transparency in security. What we mean by that is that security isn't, or should not be, a black box of information only accessible and consumable by security professionals. Assessment is happening proactively in our product, and it's happening automatically. We're bringing security out of obscurity by exposing the aspects of our product that ultimately have a bearing on your compliance status, and then making that information available to you in very user-friendly ways. >>It's fascinating. I have been excited about OSCAL since first hearing about it. It seems extraordinarily important to have what is, in effect, a query capability, something that lets different people, for different reasons, formalize and ask questions of a system that is constantly in flux. Very, very powerful. So, regarding security, what do you see as the basic requirements for container infrastructure and tools for use in production by the industries that you are working with? >>Right. So obviously the tools and infrastructure are going to vary widely across customers, but to generalize, I would refer back to the concept I mentioned earlier of a secure software supply chain. There are several guiding principles behind it that are worth mentioning. The first is to have a strategy for ensuring code quality. What this means is being able to do static source code analysis. Static analysis tools are largely language-specific, so there may be a few different tools that you'll need to manage. The second point is to have a framework for doing regular testing, or even slightly more formal security assessments. There are plenty of tools that can help a company get started doing this. Some of these are scanning engines like OpenSCAP, which implements SCAP, another NIST standard. OpenSCAP can use CIS Benchmarks as inputs, and these tools do a very good job of summarizing and visualizing output. Along the same lines as CIS Benchmarks: there are many, many benchmarks published, and if you look at your own container environment, there are very likely benchmarks covering the core platform, the building blocks of your container environment. There are benchmarks for Ubuntu, for Kubernetes, for Docker, and the list is always growing. In fact, Mirantis is editing the benchmark for containerd, so that will be a formal CIS Benchmark coming out very shortly. The next item would be defining security policies that align with your organization's requirements. There are a lot of things that come out of the box, that come standard, that come default, in various products, including ours; but we also give you, through our product, the ability to write your own policies that align with your own organization's requirements. Minimizing your attack surface is another key area. What that means is only deploying what's necessary. Pretty common sense, but sometimes it's overlooked. This means enabling only required ports and services and nothing more. And it's related to the concept of least privilege, which is the next thing I would suggest focusing on. Least privilege is related to minimizing your attack surface: it's about only allowing permissions to those people or groups that are absolutely necessary. Within the container environment, you'll likely have heard of the deny-all approach, which is what's recommended here: deny everything first, and then explicitly allow only what you need; a minimal sketch of that pattern follows.
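As a concrete illustration of that deny-all baseline, here is a generic Kubernetes example, not a Mirantis-specific feature; the namespace and labels are made up. First everything is denied, then one flow is explicitly allowed.

# 1) Default-deny all ingress and egress for every pod in the namespace.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments        # hypothetical namespace
spec:
  podSelector: {}            # empty selector matches every pod
  policyTypes: ["Ingress", "Egress"]
EOF

# 2) Explicitly allow only what is needed: frontend pods reaching the API on 8080.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: payments
spec:
  podSelector:
    matchLabels: {app: api}
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels: {app: frontend}
      ports:
        - protocol: TCP
          port: 8080
EOF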
That's a very common thing that's sometimes overlooked in some of our customer environments. And finally, the idea of defense in depth, which is about minimizing your blast radius by implementing multiple layers of defense that are also in line with your own risk management strategy. Following these basic principles, and adapting them to your own use cases and requirements, can, in our experience with our customers, go a long way toward a secure software supply chain. >>Thank you very much, Brian. That was pretty eye-opening. I had the privilege of listening to it from the perspective of someone who has been working behind the scenes on the Launchpad 2020 event, so I'd like to use that privilege to recommend a few things to our listeners, if you're interested in this, and certainly if you work within one of these regulated industries in a development role, all of which will be easy for you to do today, since everything is available once it's been presented. Check out Matt Bentley's live presentation on the secure supply chain, where he demonstrates one possible example of a secure supply chain that permits image signing, scanning, and content trust. You may want to check out the session I conducted with Anders Wallgren at CloudBees, who talks about these industrial-efficiency, factory-floor, time-and-motion models for assessing where software is, in order to understand what policies can and should be applied to it. And you will probably want to frequent the tutorial sessions in that track, to see how Docker Enterprise Container Cloud implements many of these concentric security policies in order to provide, as you say, defense in depth. There's a lot going on in there, and it's fascinating to see it all expressed. Brian, thanks again. This has been really, really educational. >>My pleasure. Thank you. >>Have a good afternoon. >>Thank you too. Bye.
ON DEMAND SPEED K8S DEV OPS SECURE SUPPLY CHAIN
>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain, and how we can gain a cloud-like experience with the flexibility, speed, and security of modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre-sales team here at Mirantis. I've spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack, from the infrastructure on up to the application layer. When building a secure supply chain for container-focused platforms, I generally see two different mindsets in terms of where responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application-team approach. This approach follows more of the DevOps philosophy, where developer and application teams are the owners of their applications from development through their life cycle, all the way to production. I would refer to this as more of a self-service model of application delivery and promotion when deploying to a container platform. This is fairly common in organizations where full-stack responsibilities have been delegated to the application teams. Even in organizations where full-stack ownership doesn't exist, I see the self-service application deployment model work very well in lab, development, or non-production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to compliance and regulatory needs. Developers are responsible for their application development. This can either include Docker at the development layer, or be a more traditional throw-it-over-the-wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where container platforms can be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner in which application teams would consume it. When examining the two approaches, there are pros and cons to each. Process, controls, and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security, and control concerns, which lead to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycle of inefficient resource utilization. And a true infrastructure-as-code experience requires DevOps-related coding skills that teams often have in pockets, but that maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this.
Docker Enterprise Container Cloud provides the foundation for that cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our professional services team and your operations teams would otherwise spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model, no matter if it is full-stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today, with Lens, to allow for multi-cluster visibility that is both developer- and operator-friendly. Lens provides immediate feedback for the health of your applications, observability for your clusters, fast context switching between environments, and the freedom to choose the best tool for the task at hand, whether it is graphical-user-interface or command-line-interface driven. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues, and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency, and faster adoption. This all adds up to delivering business value to end users and to the overall perceived value. Now let's see how we can actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of the developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, the Docker Trusted Registry for our secure container registry, and the Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with your teams of developers and operators to design a system that provides a fast, consistent, and secure experience for your developers, one that works for any application, brownfield or greenfield, monolith or microservice. Onboarding teams can be simplified with integrations into enterprise authentication services; calls to GitHub repositories; Jenkins access and jobs; Universal Control Plane and Docker Trusted Registry teams and organizations; Kubernetes namespaces with access control; and Docker Trusted Registry namespaces with access control, image scanning, and promotion policies. So now let's take a look and see what it looks like from the CI/CD process, including Jenkins. Let's start with Docker Desktop. From the Docker Desktop standpoint, we'll be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process of Docker container utilization at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository; in this case, we'll be using GitHub. Then Jenkins picks up: it will check out that code from our source code repository, build our Docker containers, test the application with the built image, and then take the image and push it to our Docker Trusted Registry. From there, we can scan the image to make sure it doesn't have any vulnerabilities, and then we can sign it. Once we've signed our images and deployed our application to dev, we can test our application deployed in a real environment. Jenkins will then test the deployed application, and if all tests show good, it will promote our Docker image to production. A command-line sketch of those stages follows.
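Here is that pipeline flattened into plain commands, a sketch only: the registry host, repository names, and test script are hypothetical, and in the demo these steps run as Jenkins stages rather than by hand.

# Build, then test against the freshly built image.
git clone https://github.com/example-org/simple-nginx.git && cd simple-nginx
docker build -t dtr.example.com/dev/simple-nginx:"$BUILD_NUMBER" .
./run-tests.sh dtr.example.com/dev/simple-nginx:"$BUILD_NUMBER"   # hypothetical test harness

# Push with Docker Content Trust enabled, so the push also signs the tag;
# DTR scans the image on push when scanning is enabled for the repository.
export DOCKER_CONTENT_TRUST=1
docker push dtr.example.com/dev/simple-nginx:"$BUILD_NUMBER"

# Deploy to the dev cluster; once integration tests pass, promote.
kubectl --context dev apply -f k8s/
# In the demo, promotion to the prod repository is a DTR promotion policy;
# a manual equivalent would be retagging and pushing:
docker tag dtr.example.com/dev/simple-nginx:"$BUILD_NUMBER" \
           dtr.example.com/prod/simple-nginx:"$BUILD_NUMBER"
docker push dtr.example.com/prod/simple-nginx:"$BUILD_NUMBER"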
So now let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change we want to make in our application: our marketing team says we need to change "Containerized NGINX" to something more Mirantis-branded. So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. Here's our application: we have our code loaded, and we're going to be able to use Docker Desktop in our local environment, with the Docker Desktop plugin for Visual Studio Code, to build our application inside of Docker without needing to run any command-line-specific tools. Here, with our code, we'll be able to interact with Docker, make our changes, see them live, and quickly check whether our changes actually made the impact we're expecting on our application. So let's find the affected titles in our application and change "Containerized NGINX" to the Mirantis-branded name, both in the title and on the front page of the application. Now that we've saved that change to our application, we can take a look at our code here in VS Code. And, as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code takes care of automatically building our application. So now we have a Docker image that has everything our application needs inside of it. Here we can just right-click on the image tag that we just created and choose Run; this will interactively run the container for us. And then, once our container is running, we can right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we've verified that our application is working as expected, we can stop our container. And then, from here, we can make that change live by pushing it to our source code repository. So we'll write a commit message saying that we updated to our Mirantis branding; we'll commit that change and then push it to our source code repository. Again, in this case we're using GitHub as our source code repository. So here in VS Code, we'll have that pushed to our source code repository, and then we'll move on to our next environment, which is Jenkins. Jenkins is going to pick up those changes to our application: GitHub notifies Jenkins that there's a change, and Jenkins checks out the code and builds our Docker image using the Dockerfile. So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, running our tests, pushing the image to our Docker Trusted Registry, scanning it, signing it in our Docker Trusted Registry, and then deploying to our development environment. For anyone following along without VS Code, the equivalent command-line loop is sketched below.
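This is the same inner loop the VS Code right-click actions drive, as plain CLI; the image name and port mapping are assumptions for illustration.

docker build -t simple-nginx:local .               # build from the Dockerfile
docker run --rm -d -p 8080:80 simple-nginx:local   # run the container locally
# Open http://localhost:8080 in a browser to eyeball the change, then:
docker stop "$(docker ps -q --filter ancestor=simple-nginx:local)"

git add .
git commit -m "Update to Mirantis branding"
git push origin main    # the GitHub webhook then triggers the Jenkins job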
So let's take a look at that development environment as it's been deployed. Here, we can see that our title has been updated in our application, so we can verify that it looks good in development. If we jump back to Jenkins, we'll see that Jenkins runs our integration tests for the development environment. Everything worked as expected, so it promoted that image to our production repository in our Docker Trusted Registry. Then we also sign that image: we're attesting that, yes, we've signed off that it has made it through our integration tests and is deployed to production. So here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. Now let's take a look at our Docker Trusted Registry, where we can see the namespace for our application and our simple-NGINX repository. From here, we'll be able to see information about the application image that we've pushed into the registry, such as the image signature and when it was pushed by whom, and we'll also be able to see the scan results for our image. In this case, we can see that there are vulnerabilities in our image, so let's take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. These image layers give us details about where the vulnerabilities are located and what those vulnerabilities actually are. If we click on a vulnerability, we can see specific information about it, giving us details about the severity and more about what exactly is vulnerable inside of our container. One of the challenges you often face around vulnerabilities is how exactly to remediate them in a secure supply chain, so let's take a look at that. In the example we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to find the source of that layer and update it. One of the ways we can help secure that as part of the supply chain is to look at where we get the base layers of our images. Docker Hub provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to security. The official images on Docker Hub are curated by Docker, open-source projects, and other vendors. One of the most important use cases is around how you get base images into your environment: it is much easier to consume the base operating-system layer images than to build your own and try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull it into our own Docker Trusted Registry using our mirroring feature. Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can scan them to ensure that they meet our security requirements and then, based on the scan result, promote each image to a public repository, where we can sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment. A sketch of what consuming that curated content looks like follows.
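On the consuming side, the only visible change for developers is where base layers come from. This is a sketch with a hypothetical internal registry and tag; the point is that the FROM line references the curated DTR mirror rather than Docker Hub directly.

cat > Dockerfile <<'EOF'
# Base layer from the curated, scanned, signed internal mirror.
FROM dtr.example.com/official/alpine:3.12
RUN apk add --no-cache nginx
COPY site/ /usr/share/nginx/html/
EOF

# With content trust on, pulls of the curated image also verify signatures
# (assuming the registry's signing infrastructure is configured).
export DOCKER_CONTENT_TRUST=1
docker pull dtr.example.com/official/alpine:3.12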
So from here, we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we provide that secure content for our developers in our own Docker Trusted Registry. In this case, we're looking at the Alpine image that we've mirrored into our Docker Trusted Registry. Here, we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. Then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry automatically. From here, we can take a look at the promotions to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based on the vulnerabilities that are found, or not found, inside the Docker image. How our actual users consume this content is by looking at the official images that we've made public to them. Here again, looking at our Alpine image, we can look at the tags that exist and see the content that has been made available. We've pulled in all sorts of content from Docker Hub; in this case, we've even pulled in the multi-architecture images, which we can scan thanks to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens gives developers a quick, opinionated view that focuses on how they would want to view, manage, and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's filter down to the application that we just deployed to our development environment. Here, we can see the pod for our application, and when we click on it, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see that Lens gives us the ability to quickly switch contexts between different clusters we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make them much simpler to consume and version. In this case, let's take a look at the application that we just built and deployed: our simple NGINX application has been bundled up as a Helm chart and made available through Lens. Here, we can just click on the description of our application to see more information about the Helm chart, so we can publish whatever information may be relevant about our application. And with one click, we can install our Helm chart. Here, it will show us the actual details of the Helm chart, so before we install it, we can look at those individual components.
In this case, we can see that it creates an ingress rule, and the chart tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to, and here we're going to run a quick test, because we're trying to deploy the application from Docker Hub. In our Universal Control Plane, we've turned on Docker Content Trust policy enforcement, so this is actually going to fail to deploy: because we're trying to deploy our application from Docker Hub, the image hasn't been properly signed in our environment, and the Docker Content Trust policy enforcement prevents us from deploying the Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository, and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, where it has a proper signature. We can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see that simple NGINX application, and we'll get details about the actual deployed Helm chart. The nice thing is that Lens provides us this capability with Helm to see all of the components that make up our application. This view gives us a single pane of glass into that specific application, so that we know all of the components created inside of Kubernetes. There are specific details that can help us access the application, such as the ingress rule that we just talked about, but it also shows us the resources, such as the service, the deployment, and the ingress, that have been created within Kubernetes for the application to exist. So, to recap: we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations-controlled processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
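For reference, the repository swap in that content-trust step might look like this at the command line; the chart path and values key are assumptions about how the demo chart is parameterized, not its actual contents.

# Fails under UCP's content trust enforcement: the Docker Hub image is
# unsigned in this environment.
helm install simple-nginx ./charts/simple-nginx --namespace dev \
  --set image.repository=docker.io/example/simple-nginx

# Succeeds: the DTR copy of the image carries a valid signature.
helm install simple-nginx ./charts/simple-nginx --namespace dev \
  --set image.repository=dtr.example.com/prod/simple-nginx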
Speed K8S Dev Ops Secure Supply Chain
>>this session will be reviewing the power benefits of implementing a secure software supply chain and how we can gain a cloud like experience with flexibility, speed and security off modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre sales team here. Um Iran. Tous I spent the last six years working with customers on their container ization journey. One thing almost every one of my customers is focused on how they can leverage the speed and agility benefits of contain arising their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we could provide flexibility all layers of the stack from the infrastructure on up to the application layer. When building a secure supply chain for container focus platforms, I generally see two different mindsets in terms of where the responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits the organization's goals around how modern applications are built and delivered. Yeah. First, let's take a look at the developer or application team approach. This approach follows Mawr of the Dev ops philosophy, where a developer and application teams are the owners of their applications. From the development through their life cycle, all the way to production. I would refer this more of a self service model of application, delivery and promotion when deployed to a container platform. This is fairly common organizations where full stack responsibilities have been delegated to the application teams, even in organizations were full stack ownership doesn't exist. I see the self service application deployment model work very well in lab development or non production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers and other organizations. There's a strong separation between responsibilities for developers and I T operations. This is often do the complex nature of controlled processes related to the compliance and regulatory needs. Developers are responsible for their application development. This can either include doctorate the development layer or b'more traditional throw it over the wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where we can take container platforms and be delivered as a service to other consumers inside of the I T organization. This is fairly prescriptive, in the manner of which application teams would consume it. When examining the two approaches, there are pros and cons to each process. Controls and appliance are often seen as inhibitors to speak. Self service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self service is great without visibility into the utilization and optimization of those environments, it continues the cycles of inefficient resource utilization and the true infrastructure is a code. Experience requires Dev ops related coding skills that teams often have in pockets but maybe aren't ingrained in the company culture. 
Luckily for us, there is a middle ground for all of this Doc Enterprise Container Cloud provides the foundation for the cloud like experience on any infrastructure without all of the out of the box security and controls that are professional services Team and your operations team spend their time designing and implementing. This removes much of the additional work and worry Run, ensuring that your clusters and experiences are consistent while maintaining the ideal self service model, no matter if it is a full stack ownership or easing the needs of I T operations. We're also bringing the most natural kubernetes experience today with winds to allow for multi cluster visibility that is both developer and operator friendly. Let's provides immediate feedback for the health of your applications. Observe ability for your clusters. Fast context, switching between environments and allowing you to choose the best in tool for the task at hand. Whether is three graphical user interface or command line interface driven. Combining the cloud like experience with the efficiencies of a secure supply chain that meet your needs brings you the best of both worlds. You get Dave off speed with all the security controls to meet the regulations your business lives by. We're talking about more frequent deployments. Faster time to recover from application issues and better code quality, as you can see from our clusters we have worked with were able to tie these processes back to real cost savings, riel efficiency and faster adoption. This all adds up to delivering business value to end users in the overall perceived value. Now let's look at see how we're able to actually build a secure supply chain. Help deliver these sorts of initiatives in our example. Secure Supply chain. We're utilizing doctor desktop to help with consistency of developer experience. Get hub for our source Control Jenkins for a C A C D. Tooling the doctor trusted registry for our secure container registry in the universal control playing to provide us with our secure container run time with kubernetes and swarm. Providing a consistent experience no matter where are clusters are deployed. You work with our teams of developers and operators to design a system that provides a fast, consistent and secure experience for my developers that works for any application. Brownfield or Greenfield monolith or micro service on boarding teams could be simplified with integrations into enterprise authentication services. Calls to get help repositories. Jenkins Access and Jobs, Universal Control Plan and Dr Trusted registry teams and organizations. Cooper down his name space with access control, creating doctor trusted registry named spaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the C I c D process, including Jenkins. So let's start with Dr Desktop from the doctor desktop standpoint, what should be utilizing visual studio code and Dr Desktop to provide a consistent developer experience. So no matter if we have one developer or 100 we're gonna be able to walk through the consistent process through docker container utilization at the development layer. Once we've made our changes to our code will be able to check those into our source code repository in this case, abusing Get up. 
Then, when Jenkins picks up, it will check out that code from our source code repository, build our doctor containers, test the application that will build the image, and then it will take the image and push it toward doctor trusted registry. From there, we can scan the image and then make sure it doesn't have any vulnerabilities. Then we consign them. So once we signed our images, we've deployed our application to Dev. We can actually test their application deployed in our real environment. Jenkins will then test the deployed application, and if all tests show that is good, will promote the r R Dr and Mr Production. So now let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as is deployed today. Here, we can see that we have a change that we want to make on our application. So marketing Team says we need to change containerized injure next to something more Miranda's branded. So let's take a look at visual studio coat, which will be using for I D to change our application. So here's our application. We have our code loaded, and we're gonna be able to use Dr Desktop on our local environment with our doctor desktop plug in for visual studio code to be able to build our application inside of doctor without needing to run any command line. Specific tools here is our code will be able to interact with docker, make our changes, see it >>live and be able to quickly see if our changes actually made the impact that we're expecting our application. Let's find our updated tiles for application and let's go and change that to our Miranda sized into next. Instead of containerized in genetics, so will change in the title and on the front page of the application, so that we save. That changed our application. We can actually take a look at our code here in V s code. >>And as simple as this, we can right click on the docker file and build our application. We give it a name for our Docker image and V s code will take care of the automatic building of our application. So now we have a docker image that has everything we need in our application inside of that image. So here we can actually just right click on the image tag that we just created and do run this winter, actively run the container for us and then what's our containers running? We could just right click and open it up in a browser. So here we can see the change to our application as it exists live. So once we can actually verify that our applications working as expected, weaken, stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here we're going to go ahead and make a commit message to say that we updated to our Mantis branding. We will commit that change and then we'll push it to our source code repository again. In this case we're using get Hub to be able to use our source code repository. So here in V s code will have that pushed here to our source code repository. And then we'll move on to our next environment, which is Jenkins. Jenkins is gonna be picking up those changes for our application, and it checked it out from our source code repository. So get Hub Notifies Jenkins. That there is a change checks out. The code builds our doctor image using the doctor file. 
So we're getting a consistent experience between the local development environment on our desktop and then and Jenkins or actually building our application, doing our tests, pushing in toward doctor trusted registry, scanning it and signing our image. And our doctor trusted registry, then 2.4 development environment. >>So let's actually take a look at that development environment as it's been deployed. So here we can see that our title has been updated on our application so we can verify that looks good and development. If we jump back here to Jenkins, will see that Jenkins go >>ahead and runs our integration tests for a development environment. Everything worked as expected, so it promoted that image for production repository and our doctor trusted registry. Where then we're going to also sign that image. So we're signing that. Yes, we have signed off that has made it through our integration tests, and it's deployed to production. So here in Jenkins, we could take a look at our deployed production environment where our application is live in production. We've made a change automated and very secure manner. >>So now let's take a look at our doctor trusted registry where we can see our game Space for application are simple in genetics repository. From here we will be able to see information about our application image that we've pushed into the registry, such as Thean Midge signature when it was pushed by who and then we'll also be able to see the scan results of our image. In this case, we can actually see that there are vulnerabilities for our image and we'll actually take a look at that. Dr Trusted registry does binary level scanning, so we get detailed information about our individual image layers. From here, these image layers give us details about where the vulnerabilities were located and what those vulnerabilities actually are. So if we click on the vulnerability, we can see specific information about that vulnerability to give us details around the severity and more information about what, exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how, exactly we would remediate that and secure supply chain. So let's take a look at that and the example that we were looking at the vulnerability is actually in the base layer of our image. In order to pull in a new base layer of our image, we need to actually find the source of that and updated. One of the ways that we can help secure that is a part of the supply chain is to actually take a look at where we get our base layers of our images. Dr. Help really >>provides a great source of content to start from, but opening up docker help within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to the security of those images. The official images from Docker, However, curated by docker, open source projects and other vendors, one of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than building your own and also trying to maintain them instead of just blindly trusting the content from doctor. How we could take a set >>of content that we find useful, such as those base image layers or content from vendors, and pull that into our own Dr trusted registry using our rearing feature. 
Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can scan them to ensure that they meet our security requirements and then, based on the scan result, promote each image to a public repository, where we can sign the images and make them available to our internal consumers. This allows us to provide a set of curated content that we know is secure and controlled within our environment. So from here we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets.

Now let's take a look at how we can provide that secure content for developers in our own Docker Trusted Registry. In this case, we're looking at the Alpine image that we've mirrored into our Docker Trusted Registry. Here we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to be able to scan them. Here we set up mirroring, and we can quickly turn it on by making it active. Then our image mirroring will pull content from Docker Hub and make it available in our Docker Trusted Registry automatically. From here, we can take a look at the promotions to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based on the vulnerabilities that are found, or not found, inside the Docker image (a manual, command-line equivalent of this flow is sketched below). The way users actually consume this content is by looking at the public, to them "official", images that we've made available. Again looking at our Alpine image, we can look at the tags that exist and see the content that has been made available. We've pulled in all sorts of content from Docker Hub; in this case, we have even pulled in the multi-architecture images, which we can scan thanks to the binary-level nature of our scanning solution.

Now let's take a look at Lens. Lens gives developers a quick, opinionated view focused on how they would want to view, manage, and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's filter down to the application that we just deployed to our development environment. Here we can see the pod for our application, and when we click on it, we get instant, detailed feedback about the components and resources that this pod is using. We can also see that Lens gives us the ability to quickly switch context between the different clusters we have access to. With that, we also have the ability to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package applications, especially more complex ones, making it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed: our simple NGINX application.
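The mirroring and promotion just described are configured inside DTR itself, but a manual command-line equivalent helps show what the registry is doing on our behalf; the dtr.example.com host and the staging/ and official/ repository names are placeholders:

    # Pull the upstream base image from Docker Hub
    docker pull alpine:3.12

    # Retag into the staging area of our own registry and push;
    # DTR scans the image as soon as it lands
    docker tag alpine:3.12 dtr.example.com/staging/alpine:3.12
    docker push dtr.example.com/staging/alpine:3.12

    # A promotion policy then copies clean images to the public repository
    # automatically; done by hand, promotion is another tag-and-push,
    # this time signed so internal consumers can trust it
    docker tag dtr.example.com/staging/alpine:3.12 dtr.example.com/official/alpine:3.12
    DOCKER_CONTENT_TRUST=1 docker push dtr.example.com/official/alpine:3.12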
It has been bundled up as a Helm chart and made available through Lens. We can click on the description of our application to see more information about the Helm chart, so we can publish whatever information is relevant about our application, and with one click we can install the Helm chart. Lens shows us the actual details of the Helm chart, so before we install it, we can look at its individual components. In this case, we can see that it creates an ingress rule, and it tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to. And in this case, we're going to do a quick test, because we're trying to deploy the application from Docker Hub, and in our Universal Control Plane we've turned on Docker Content Trust policy enforcement. This is actually going to fail to deploy, because the image we're pulling from Docker Hub hasn't been properly signed in our environment. The Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub; we have to go through our approved process, through our secure supply chain, so we can be sure we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository, and click install, it will install the Helm chart with our Docker image pulled from our DTR, which carries a proper signature. We can then see in the Helm chart releases view that our application has been successfully deployed. From there, we can see the simple NGINX application and get details about the actual deployment and Helm chart. The nice thing is that Lens provides this capability with Helm: being able to see all the components that make up our application from this view gives us a single pane of glass into that specific application, so that we know everything it created inside of Kubernetes. There are specific details that help us access the application, such as the ingress rule we just talked about, but it also shows us the resources, such as the service, the deployment, and the ingress, that have been created within Kubernetes to make the application exist; a command-line cross-check of those resources is sketched below.

So to recap: we've covered how we can offer all the benefits of a cloud-like experience while offering flexibility around DevOps and operations-controlled processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
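For reference, the install we just clicked through in Lens could be sketched with the Helm CLI as well; the chart directory, release name, and image.repository value key are assumptions about how such a chart might be laid out:

    # A chart pulling its image straight from Docker Hub would be blocked
    # here, since UCP's content trust enforcement only admits signed images;
    # pointing the chart at the signed image in our DTR deploys cleanly
    helm install nginx-demo ./simple-nginx-chart --namespace dev \
      --set image.repository=dtr.example.com/official/nginx

    # Confirm the release, and check the signature on the image it uses
    helm list --namespace dev
    docker trust inspect --pretty dtr.example.com/official/nginx:latest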
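And the command-line cross-check of the resources mentioned above, assuming the hypothetical dev namespace and the standard Helm instance label:

    # The service, deployment, and ingress the chart created for the app
    kubectl get svc,deploy,ingress -n dev -l app.kubernetes.io/instance=nginx-demo

    # Drill into the running pod, much as Lens does when you click on it
    kubectl get pods -n dev
    kubectl describe pod -n dev -l app.kubernetes.io/instance=nginx-demo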