Kirsten Newcomer, Red Hat
(upbeat music)

>> Hello everyone, my name is Dave Vellante, and we're digging into the many facets of the software supply chain and how to better manage digital risk. I'd like to introduce Kirsten Newcomer, who is the Director of Cloud and DevSecOps Strategy at Red Hat. Hello Kirsten, welcome.

>> Hello Dave, great to be here with you today.

>> Let's dive right in. What technologies and practices should we be thinking about that can help improve the security posture within the software supply chain?

>> I think the most important thing for folks to think about is adopting DevSecOps. And while organizations talk about DevSecOps, and many folks have adopted DevOps, they tend to forget the security part of DevSecOps. For me, DevSecOps is both DevSec, how do I shift security left into my supply chain, and SecOps, which is a better understood and more common piece of the puzzle, but then closing the loop between the issues that are discovered in production and feeding that back to the development team to ensure that we're really addressing the supply chain.

>> Yeah, I heard a stat. I don't know what the source is, I don't know if it's true, but it probably is, that around 50% of the organizations in North America don't even have a SecOps team. Now, of course, that probably includes a lot of smaller organizations, but without a SecOps team they're certainly not doing DevSecOps. So what are organizations doing for supply chain security today?

>> Yeah, I think the most common practice that people have adopted is vulnerability scanning. They'll do that as part of their development process. They might do it at one particular point, they might do it at more than one point. But one of the challenges that we see, first of all, is that that's the only security gate they've integrated into their supply chain, into their pipeline. So they may be scanning code that they get externally, they may be scanning their own code. The second challenge is that the results take so much work to triage. This is static vulnerability scanning. You get information that is not in full context, because you don't know whether a vulnerability is truly exploitable unless you know how exposed that particular part of the code is to the internet, for example, or to other aspects of the environment. And so it's a real challenge for organizations who are only looking at static vulnerability data to figure out what the right steps are to manage those findings. And there's no way we're going to wind up with zero vulnerabilities in the code that we're all working with today. Things just move too quickly.

>> Is that idea of vulnerability scanning almost like sampling, where you may or may not find the weakest link?

>> I would say that it's more comprehensive than that. The vulnerability scanners that are available are generally pretty strong, but again, it's a static picture. A lot of them rely on the NVD database, which typically is going to give you the worst-case scenario, and by nature can't account for things like whether the software you're scanning was built with controls and mitigations built in. It's just going to tell you, this is the package, and these are the known vulnerabilities associated with that package. It's not going to tell you whether there were compile-time flags that may have mitigated that vulnerability. And so it's almost overwhelming for organizations to prioritize that information and really understand it in context. And so when I think about the closed-loop feedback, you really want not just that static scan, but also analysis that takes into account the configuration of the application, the runtime environment, and any mitigations that might be present there.
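To make that "single security gate" point concrete, here is a rough sketch of what a container-image scan step in a pipeline can look like, written as a Tekton Task. Trivy as the scanner, the task name, and the severity thresholds are illustrative assumptions on my part, not tools named in this conversation.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: image-vulnerability-scan
spec:
  params:
    - name: IMAGE
      type: string
      description: Container image reference to scan
  steps:
    - name: trivy-scan
      image: docker.io/aquasec/trivy:latest
      script: |
        #!/bin/sh
        # Fail the task (and any pipeline that wraps it) when HIGH or
        # CRITICAL findings are reported for the image.
        trivy image --severity HIGH,CRITICAL --exit-code 1 "$(params.IMAGE)"
```

A gate like this fails the pipeline on HIGH or CRITICAL findings, which is exactly the static, context-free signal described above: useful, but it says nothing about how exploitable a finding is in the running environment.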
>> I see, thank you for that. So, given that digital risk and software supply chains are now front and center, we read about them all the time now, how do you think organizations are responding? What's the future of the software supply chain going to look like?

>> That's a great one. So I think organizations are scrambling. We've certainly, at Red Hat, seen an increase in questions about Red Hat's own supply chain security, and we've got lots of information that we can share and make available. But I think we're also starting to see this strong increased interest in software bills of materials. I actually started working with automation and standards around software bills of materials a number of years ago. I participated in the Linux Foundation's SPDX project, and there are other projects like CycloneDX. I think all organizations that deliver software are going to need to provide SBOMs, and consumers of our software should be looking for SBOMs, to help them understand what they're getting and to build transparency across projects. And to facilitate automation, you can leverage the data in that software package list to get a quick view of vulnerabilities. Again, you don't have that runtime context yet, but it perhaps saves you the step of having to do the initial scanning. And then there are additional things that folks are looking at. Attested pipelines are going to be key for building your custom software. As you pull code in and your developers build their solutions, their applications, being able to vet the steps in your pipeline and attest that nothing has happened to that pipeline is really going to be key.

>> So the software bill of materials is going to give you a granular picture of your software, and then what, the chain of provenance, if you will?

>> Well, depending on the format, an SBOM absolutely can provide a chain of provenance. But there's another thing when we think about it from the security angle. So there's the provenance: where did this come from, who provided it to me? But also, with that bill of materials, that list of packages, you can leverage tooling that will give you vulnerability information about those packages. At Red Hat we don't think vulnerability info should be included in the SBOM, because vulnerability data changes every day. But it saves you a step potentially. You don't necessarily have to be so concerned about doing the scan; you can pull data about known vulnerabilities for those packages without a scan. Similarly, the attestation in the pipeline, that's about things like ensuring that the code you pull into your pipeline is signed. Signatures are in many ways a more important piece for establishing provenance and building trust.
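Since SPDX is one of the formats mentioned here, and it can be serialized as YAML among other formats, a heavily trimmed, illustrative SBOM fragment might look roughly like the following. The application name, package, version, and namespace values are invented for the example, and a real SPDX document carries more required fields.

```yaml
spdxVersion: SPDX-2.2
dataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
name: example-app-1.0-sbom
documentNamespace: https://example.com/spdx/example-app-1.0   # invented
creationInfo:
  created: "2021-10-01T00:00:00Z"
  creators:
    - "Tool: example-sbom-generator"   # invented tool name
packages:
  - name: openssl
    SPDXID: SPDXRef-Package-openssl
    versionInfo: 1.1.1k
    downloadLocation: NOASSERTION
    licenseConcluded: NOASSERTION
```

Tooling can then read the package list out of a document like this and look up known vulnerabilities for each entry, which is the "saves you a step" point above, while the vulnerability data itself stays outside the SBOM.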
>> Got it. So I was talking to a CISO the other day, and I was asking her, okay, what are your main challenges, kind of the standard analyst questions, if you will. She said, look, I've got great people, but I just don't have enough depth of talent to handle the challenges, so I'm always sort of playing catch-up. That leads one to the conclusion that automation is potentially an answer to that problem, but at the same time, people have said to me, sometimes we put too much faith in automation. So hey Kirsten, help me square the circle. I want to automate because I lack the talent, but it's not sufficient. What are your thoughts on automation?

>> So I think in the world we're in today, especially with cloud-native applications, you can't manage without automation, because things are moving too quickly. So the way that you assess whether automation is meeting your goals becomes critical. Looking for external guidance, such as NIST's Secure Software Development Framework, can help. But again, I think you should look for an opinionated position from the vendors, from the folks you're working with, from your advisors, on what the appropriate set of gates is. We've talked about vulnerability scanning, but analyzing the configuration data for your apps is just as important. And so I think we have to work together as an industry to figure out what the key security gates are and how we audit the automation, so that I can validate that automation and be comfortable that it is actually meeting the needs. But I don't see how we move forward without automation.

>> Excellent, thank you. We were forced into digital without a lot of thought. It's a spectrum, some organizations are in better shape than others, but many had to just dive right in without a lot of strategy. And now people have sat back and said, okay, let's be more planful, more thoughtful. And then of course you've got the supply chain hacks, et cetera. How do you think the whole narrative and the strategy is going to change? How should it change the way in which we create, maintain, and consume software, as both organizations and individuals?

>> Yeah. So again, I think there's going to be, and there already is, a demand for more transparency from software vendors. This is a place where SBOMs play a role, but there's also a lot of conversation out there about zero trust. So what does that mean? You have to have a relationship with your vendor that provides transparency, so that you can assess the level of trust. You also have to, in your organization, determine, to your point earlier about people with skills and automation, how do you trust but verify? This is not just with your vendor, but also with your internal supply chain. So trust but verify remains key. That's a concept that's been around for a while. Cloud native doesn't change that, but it may change the tools that we use. And we may also decide where our trust boundaries are. Where are we comfortable trusting, and where do we think zero trust is a more applicable frame to apply? But I do come back to the automation piece, and again, it is hard for everybody to keep up. I think we have to break down silos, we have to ensure that teams are talking across those silos so that we can leverage each other's skills. And we need to think about managing everything as code. What I like about everything as code, including security, is that it creates auditability in new ways. If you're managing your infrastructure and your security policies with a GitOps-like approach, it provides visibility and auditability, and it enables your dev team to participate in new ways.
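As a small, hypothetical illustration of a security policy managed "as code": here is a Kubernetes NetworkPolicy that could live in a git repository, get reviewed like any other change, and be applied by whatever GitOps tooling a team already uses. The namespace and repository path are invented for the example.

```yaml
# policies/payments/default-deny-ingress.yaml (hypothetical repo path)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments        # invented namespace
spec:
  podSelector: {}            # matches every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules are listed, so all inbound traffic is denied
```

Because the policy is just a file in git, its history shows who changed what and when, which is the visibility and auditability being described here.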
>> So when you're talking about zero trust, I think, okay, I can't just trust users, I've got to verify users, machines, employees, my software, my partners.

>> Yep.

>> Every possible connection point.

>> Absolutely. And this is where both attestation and identity become key. The SolarWinds team has done a really interesting set of things with their supply chain in response to the hack they were dealing with. They're now using Tekton Chains to ensure that they have attested every step in their supply chain process, and that they can replicate that with automation. So they're doing a combination: yes, we've got humans who need to interact with the chain, and then we can validate every step in that chain. And then workload identity is a key thing for us to think about too. How do we assert identity for the workloads that are being deployed to the cloud, and verify, whether that's with SPIFFE and SPIRE or related projects, that the workload is the one we meant to deploy? And also runtime behavioral analysis. I know we've been talking about the supply chain, but again, I think we have to close this loop. You can't just think about shifting security left. And I know you mentioned earlier that a lot of teams don't have SecOps, but there are solutions available that help assess behavior at runtime, and that information can be fed back to the app dev team to help them adjust, verify, and validate: where do I need to tighten my security?

>> I'm glad you brought up SolarWinds, Kirsten, and what they're doing. As I remember, after 9/11 everyone was afraid to fly, but it was probably the safest time in history to fly. And the same analogy applies here. SolarWinds has probably learned more about this than most, and its reputation took a huge hit. But if you compared what SolarWinds has learned and applied, and the speed at which they've done it, with some other software suppliers, you might find that they've actually done a better job. It's just, unfortunately, that they were hit by something we had never seen before. To me it was like Stuxnet, we'd never seen anything like it before, and then boom, we've entered a whole new era. I'll give you the last word, Kirsten.

>> No, just to agree with you. I think, again, as an industry, it's pushed us all to think harder and more carefully about where we need to improve and what tools we need to build to help ourselves. Again, SBOMs have been around for a good 10 years or so, but they are enjoying a resurgence of importance. Signing, image signing, manifest signing, that's been around for ages, but we haven't made it easy to integrate it into the supply chain, and that's work that's happening today. Similarly, that attestation of a supply chain, of a pipeline, that's happening. So I think as an industry we've all recognized that we need to step up, and there's a lot of creative energy going into improving this space.

>> Excellent. Kirsten Newcomer, thanks so much for your perspectives. Excellent conversation.

>> My pleasure, thanks so much.

>> You're welcome. And you're watching theCUBE, the leader in tech coverage. (soft music)
Andrea Hall & Andrew Block, Red Hat
(upbeat music)

>> Okay, we're here talking about how you can better understand and manage the risks associated with the digital supply chain. In this day and age, where software comes from so many different places and sources throughout the ecosystem, how can organizations manage the risks associated with our dependence on software? With me now are two great guests: Andrea Hall, who is a specialist solution architect and project manager for security and compliance at Red Hat, focusing on the public sector, and Andrew Block, who is a distinguished architect at Red Hat Consulting. Folks, welcome.

>> Welcome.

>> Thank you. Thanks for having us.

>> You're very welcome. Andrea, let's start with you. Let's talk about regulations. What exists today that we should be aware of, that organizations should be paying attention to?

>> Oh sure, so the thing that comes to mind first, being in the US, is the presidential executive order on cybersecurity that came out a few months ago. Organizations are really paying attention to that. And in the US it's having a ripple effect on policy, but we're also seeing policy considerations pop up in other countries, Australia and England. The supply chain is a big focus right now, of course, but we see these changes coming down the road as more and more government organizations try to secure their critical infrastructure.

>> Is there kind of a leadership, or put another way, is somebody seeing what the UK does and saying, okay, we're going to follow that template? Or is it just a variety and a mishmash with no sort of consolidation? How is that playing out?

>> I see a lot of organizations basing their requirements on (indistinct). However, each organization has its own nuances. Each agency has its own nuances in how it wants them implemented.

>> Andrew, maybe you could chime in here. What are you seeing when you talk to customers that are tuned into this issue?

>> Well, as Andrea just mentioned, having that north star in terms of regulations is fundamentally great for them, because many of them, especially in regulated industries, look to these regulations for how they apply their own policies. So at least they have some guidance on how to move forward, because as we all know, the secure software supply chain is making news every day, and how to react to it is something that I know all their leaders are asking themselves, especially those IT leaders.

>> Andrea, when I talk to practitioners, sometimes they're frustrated. They understand they have to comply, they know new regulations are coming out, but sometimes it's hard for them to keep up. It would be helpful if you're sitting across the table from somebody who's frustrated and they ask you, what are your expectations? What are the trends in regulations? How do you see the current regulations evolving to specifically accommodate the digital supply chain and the security exposures and corollary requirements there?

>> We see a lot of organizations struggling, in the sense of trying to understand what the policy actually wants. Definitions are still a little bit vague, but implementation is also difficult, because sometimes organizations will add more tools to their toolkit, adding a layer of complexity there. Really, automation has to be pulled in. That's key to implementing this, instead of adding more workload and more burden to your folks. It's really important for these organizations to pull stakeholders in the organization together.
So the IT leaders bring together the developers and the security operations folks to sit at the same table, and they talk about whether what's proposed to be implemented will affect the mission in any way or disrupt operations. It's important for everybody to be on the same page so it doesn't slow anything down as you're trying to roll it out.

>> And one of the things here is that we're seeing a lot of change with these new regulations, and for a lot of organizations any type of change is scary. That is one area where they're looking for guidance, not only in the tooling, but also in how they apply it in the organization.

>> I'll add on.

>> Please.

>> I'll add onto that and say organizations really need to take into account the people side of things too. People need to understand what the impact is to the organization, so that they don't try to find the loopholes; they're buying into what needs to be done, and they understand the why behind it. For example, if you walk into your house, you normally close the door behind you. Security needs to be seen that way as well: it's the culture and it's the habit, and it's ingrained in the fabric of the organization to live this way, not just to implement the tools to do it.

>> Right, and the number of doors you have in your infrastructure is a lot more than just a couple. Andrew mentioned guidance, and governments are obviously taking a more active role. I mean, sometimes I'm a cynic. President Biden signs an executive order, but the swipe of a pen doesn't really give us enough to go on. Do you think, Andrea, that we're going to see new guidance from governments in the very near future? What are you expecting?

>> I expect to see more conversations happening. I know that the agencies who developed the policies are pulling together stakeholders and getting input. But I do see, in the not too distant future, that mandates will be rolling out, yes.

>> Well, so Andrew, and Andrea if you have a thought on this as well, how do you see organizations dealing with adopting these new policies?

>> Slowly. Don't boil the ocean is one thing I tell every one of them, because a lot of this tooling and a lot of these concepts are foreign to them, brand new. How they adopt them and how they implement them needs to be done in a very agile fashion, very slow and prescriptive. Find one area of improvement, work on it, and build upon it. Because not only does that normally make your organization more successful and secure, it also helps your organization from a morale standpoint. One thing I need to emphasize is: don't blame anyone. Because a lot of times when you're going through this, you're reassessing your own supply chain. You might find areas where improvements need to be made. Don't blame things that may have occurred in the past; see how you can benefit from those lessons learned in the future.

>> It's interesting you say that, the blame game. It used to be that failure meant you got fired, and that obviously has changed. As many have said, you know you're going to have incidents. It's how you respond to those incidents and what you learn from them. Do you have, Andrew, any insights from specifically working with customers on securing their software supply chain? What can you tell us about what leading practitioners are doing today?

>> They're going in and assessing what their software components consist of,
using tools like an SBOM, a software bill of materials, to understand where all the components of their ecosystem come from and what their lineage is. We're hearing almost every single day about new vulnerabilities being introduced in various software packages. By having that understanding of what is in your ecosystem, you can better understand how to mitigate those concerns moving forward.

>> Andrea, Andrew was just saying, one of the things is you don't just dive in. You've got to be careful, there are going to be ripple effects, is what I'm inferring. But at the same time, there's a mandate to move quickly. Are there things that could accelerate the adoption of regulation, or even the creation of regulations and that guidance, in your view? What could accelerate this?

>> As far as accelerating it goes, I think it's having those conversations proactively with the stakeholders in your organization and understanding the environment, like Andrew said. Go ahead and get that baseline. And just know that whatever changes you make may be audited down the road, because we're moving towards this kind of third-party verification that you're actually implementing things in order to do business with another organization. If organizations see the gravity of this, I think they will try to speed things up. And I think that if organizations, and the people in those organizations, understand the why that I talked about earlier, and they understand how things like SolarWinds or the oil pipeline disruption that happened earlier this year had a personal effect, those cyber events will help your organization move forward. Again, everybody's bought into the concept, everybody's working towards the same goals, and they understand the why behind it.

>> In addition to that, having tooling available that makes it easy for them. You have a lot of individuals for whom this is all foreign. Providing that base level of tooling that aligns to a lot of the regulations that might be applicable within their realm and their domain makes it easier for them to start complying, and takes some of the burden off of them so they can be successful.

>> So it's a hard problem, because Andrew, how do you deal with the comment about more tools? I look at the Optiv map, if you've seen that, and it makes your eyes cross. You've got so many tools, so much fragmentation, and you're introducing new tools. Can automation help? Is there hope for consolidation of that tools portfolio?

>> Right now this space is very emergent, it's very fluid, to be honest, because some of these mandates are only a year or two old. But as things mature over the course of time, I do see these types of tooling starting to consolidate. Right now it seems like every vendor has a tool that tries to address this. It's about having the people work together, and having more regulations come out that allow us to redefine and standardize on certain tools, like ISO standards. There are certain ones I mentioned previously, like SBOMs: there's now an ISO standard on SBOMs where there wasn't one previously. So as more and more of these regulations come out, it makes it easier to provide a recommended set of tooling that an organization can leverage, instead of vendor A versus vendor B.

>> Andrea, I said before that I'm a cynic, but I'll give you the last word, give us some hope. I mean, obviously public policy is very important.
A partnership between governments and industry, both the practitioners and the organizations that are buying these tools, as well as the technology industry, has got to work together in an ecosystem. Give us some hope.

>> The hope, I think, will come from realizing that as you're doing this, as you are implementing these changes, you're trying to prevent those future incidents from happening. There's some assurance that you're doing everything that you can do here. It can be daunting, I'll put it that way. It can be really daunting for organizations, but just know that organizations like Red Hat are doing what we can to help you down the road.

>> And really it's just continuing this whole shift-left mentality. The supply chain is just one component, but introducing DevSecOps, security at the beginning, is really what will make organizations successful, because this is not just a technology problem, it's a people issue as well. And being able to package them all up together will help organizations as a whole.

>> Yeah, so that's a really important point. You hear that term, shift left. For years, people have said, hey, you can't just bolt security on as an afterthought, that's problematic. And that's the answer to that problem, right? Shifting left, meaning designing it in at the point of code, infrastructure as code, DevSecOps. That's where it starts, right?

>> Exactly. Being able to have security at the forefront, and then have everything afterwards propagate from your security mindset.

>> Excellent. Okay, Andrea, Andrew, thanks so much for coming on the program today.

>> Thank you for having us.

>> You're very welcome, and thanks for watching. This is Dave Vellante for theCUBE, your global leader in enterprise tech coverage. (soft music)
Chris Wright, Red Hat
(gentle music)

>> Narrator: From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat.

>> Hey, welcome back everybody, Jeff Frick here with theCUBE. Welcome back to our continuous coverage of AnsibleFest 2020. We're not in person this year, as everybody knows, but we're back covering the event. We're excited to be here, and really, our next guest, we've had him on a lot of times. He's super insightful, coming right off the keynote, diving into some really interesting topics that we're excited to get into. It's Chris Wright, he's the Chief Technology Officer of Red Hat. Chris, great to see you.

>> Hey, great to see you. Thanks for having me on.

>> Absolutely. So let's jump into it. I mean, you covered so many topics in your keynote. The first one, though, that just jumps off the page is automation, and really rethinking automation. You know, I remember talking to a product manager at a hyperscaler many months ago, and he talked about the process of them mapping out their growth and trying to figure out how they were going to support it in their own data center. And he basically figured out, we cannot do this at scale without automation. So the hyperscalers have been doing it, but it's really kind of a new approach for enterprises to incorporate new and more automation into what they do every day.

>> It's a fundamental part of scaling. And I think we've learned over time that, one, we need programming interfaces on everything. So that's a critical part of beginning the automation journey: now you have a programmatic way to interact with all the things out there. But the other piece is creating confidence, knowing that when you're automating, you're taking tasks away from humans, which are actually error prone, and typing on the keyboard is not always the greatest way to get things done. The confidence that those automation scripts or playbooks are going to do the right things at the right time, and creating a business and a mindset around infusing automation into everything you do, is a pretty big journey for the enterprise.

>> Right. And that's one of the topics you talked about as well, and it comes up all the time with digital transformation or software development: the shift in focus from a destination to a journey. You talked very specifically about the need to think about automation as a journey, a process, and even a language, and to really bake it into as many processes as you possibly can. I'm sure that shocks a lot of people and probably scares them, but really that's the only way to achieve the types of scale we're seeing out there.

>> Well, I think so. And part of what I was trying to highlight is the notion that a business is filled with people with domain expertise. Everybody brings something to the table. If you're a business analyst, you understand the business part of what you're providing. If you're the technologist, you really understand the technology. There's a partner ecosystem coming in with critical parts of the technology stack. And when you want to bring this all together, you need to have a common way to communicate. What I was really trying to point out is that a language for communication across all those different cross-functional parts of your business is critical.
Number one, and number two, that language can actually be an automation language. And so choosing that language wisely, and obviously we're at AnsibleFest, so we're going to be talking a lot about Ansible in this context, treating that language wisely is part of how you build the end-to-end, internalized view of what automation means to your business.

>> Right. I wrote down a bunch of quotes from your talk: Ansible is the language of automation, and automation should be a primary communication language. Again, a very different kind of language than we usually hear about. It's more than a tool, but a process, a constant process, and it should be an embedded component of any organization. So you're really talking about automation as a first-class citizen, not some last step for the most advanced teams, or the last thing applied only to the simplest tasks, but really a fundamental core of the way you think about everything that you do. A very different way to think about things, and probably really appropriate as we come out of 2020 into this kind of new world where everyone has distributed teams. The forcing function for better tooling, wrapped in better culture, has never been greater than we're seeing today.

>> I completely agree with that. And that domain expertise, I think we understand well in certain areas. So, for example, application developers rely on one another. Maybe you, as an application developer, are consuming a service from somebody else in your microservices architecture, and so you're dependent on that other engineering team's domain expertise. Maybe that's even the database service, and you're not a DBA or an engineer that builds schemas for databases. So we kind of get that notion of encapsulating domain expertise in building and delivering applications, the notion of the CI/CD pipeline, which itself is automating how you build and deliver applications. That notion of encapsulating domain expertise across a series of different functions in your business can go much broader than just building and delivering the application. It's running your business. And that's where it becomes fundamental. It becomes a process, that's the journey. It's not the destination, it's the journey that matters. And I've seen some really interesting ways that people work on this and try to approach it from the angle of, how do you change your mindset? Here's one example that I thought was really unique. I was speaking with a customer who quite literally automated their existing process. What they did was automate everything from generating the emails to the PDFs, which would then be shared as basically printed-out documents for how they walked through business change when they're making a change in their product. And the reason they did that was not because it was the most efficient model at all. It was the way they could get the teams comfortable with automation. If it produced the same artifacts they were already used to, then it created confidence, and then they could evolve the model to streamline it, because printing out a piece of paper to review it is not going to be the efficient way to (indistinct) change your business.
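To make the idea of a shared automation language concrete, here is a minimal, hypothetical Ansible playbook sketch. The inventory group, package, and service names are assumptions for the example; the point is that an operator, a developer, and a business analyst can all read what it intends to do.

```yaml
---
# site.yml -- run with: ansible-playbook -i inventory site.yml
- name: Web tier baseline that several teams can read and review
  hosts: webservers          # assumes an inventory group named "webservers"
  become: true
  tasks:
    - name: Ensure the web server package is installed
      ansible.builtin.package:
        name: httpd
        state: present

    - name: Ensure the web server is enabled and running
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Stored in version control, a playbook like this doubles as documentation of intent, which is part of what makes it work as a communication language across silos.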
>> Well, just to follow up on that, because I think what probably scares a lot of people about automation is, one, exception handling: can you get all the edge cases and the use cases? So in the example you just talked about, how did they deal with that? And then I think the other one is simply control. Do I feel confident enough that I can get the automation to a place where I'm comfortable handing over control? I'm just curious, in that case you just outlined, how did they deal with those two factors?

>> Well, they always enabled a human checkpoint, especially in the beginning. So it was sort of a trust-but-verify model, and over time you can look at the things that you really understand well and start with those, and the things that have more gray zones, where the exceptions may be the rule or may be a critical part of the decision-making process, those can be flagged as needing real human intervention. That's a way to evolve and iterate, and not start with the notion that everything has to be automated. You can do it piecemeal and grow over time, and you'll build confidence, and you'll understand how to flag those exceptions, where you actually need to change the process itself because you may have bottlenecks that don't really make sense for the business anymore, and where you can incorporate the exception handling into the automation itself.

>> Right, that's great. Thank you for sharing that example. I want to shift gears a little bit, because another big topic that you covered in your keynote, and that we talk about all the time on theCUBE, is Edge. Everybody knows what a data center is, everybody knows what a public cloud is, and there are lots of conversations around hybrid cloud and multicloud, et cetera. But this new thing is Edge, and I think people talk about Edge in kind of an esoteric way, but you just nailed it very simply: moving the compute to where the data is collected and/or consumed. I thought that was super elegant, but what you didn't get into is all the complexity that that implies. Data centers are pristine environments: the environment is controlled, the network is controlled, the security is controlled. Then you have the vision of an Edge device, and the one everyone always likes to use is, let's say, a wind farm. Those things are out in crazy, harsh conditions, and there's still a balancing act as to what information gets stored, processed, and used locally, and what has to go back to the data center, because the Edge is not a substitute for the data center, it's really an extension of the data center. Or maybe the data center is actually an extension of the Edge; maybe that's a better way to think of it. We've had all these devices out there, and now suddenly we're connecting them, bringing them into a network and under control. And I just think the Edge represents such a big shift in the way we're going to see compute change, probably as fundamental, I would imagine, as the cloud shift has been.

>> I believe it is. I absolutely believe it's as big a change in the industry as the cloud has been. The cloud really created scale, it created automation, programmatic interfaces to infrastructure and higher-level services. But it was also built around a premise of centralization. I mean, clouds themselves are distributed, and so you can create availability zones and resilient applications, but there's still a sense of centralization.
Edge is really embracing the notion that data production is only going up and to the right, and the way to scale processing that data, and turning that data into insights and information that's valuable for your business, is to bring compute closer to the data. Not really a new concept, but the scale at which it's happening is what's really changing how we think about building infrastructure and building the support behind all that processing. And it's that scale that requires automation, because you're just not going to be able to manage thousands or tens of thousands, or in certain scenarios even millions, of devices without putting automation at the forefront. It's critical.

>> Right. And we can't talk about Edge without talking about 5G, and I laugh every time I'm watching football on Sundays and they have the 5G commercials on, talking about my handset and how I can order my food to get delivered faster to my house, completely missing the point. 5G is about machine-to-machine communication, and the scale and the speed and the volume of machine-to-machine is so fundamentally different from humans talking voice to voice. And that's really the big driver: to instrument, as you said, all these machines, all these devices. There have been sensors on them forever, but now the ability to actually connect them, pull them into this network, and start to use the data and control the machines is a huge shift in the way things are going to happen going forward.

>> A couple of things are important in there. Number one, that data production and sensors and bringing compute closer to data, what that represents is bringing the digital world and the physical world closer together. We'll experience that at a personal level with how we communicate. We're already distributed in today's environment, and the ways we can augment our human connections through a digital medium are really going to be important to how we maintain those human connections. And then on the enterprise side, we're building this infrastructure in 5G, and when you think about it from a consumer point of view, ordering your pizza faster really isn't the right way to think about it. There are a couple of key characteristics of 5G. Greater bandwidth, so you can just push more packets through the network. Lower latency, so you're closer to the data. And higher connection density and more reliable connections. That combination of characteristics makes it really valuable for enterprise businesses. You can bring your data and compute close together, and you have these highly reliable and dense connections that allow for device proliferation, and that's the piece that's really changing where the world is going. I like to think of it in a really simple way: 4G, the cloud, and the smartphone created a world that today we take for granted, and 10 years ago we really couldn't imagine what it would look like. 5G, device proliferation, and Edge computing today are building the footprint for what we can't really imagine now, but will be taking for granted 10 years from now. So we're at this great kind of change inflection point in the industry.

>> I always have to take a moment to call out (indistinct).
I think it's the most underappreciated law, and it's been stolen by other people and repackaged in many ways, but it's basically that we overestimate the impact of these things in the short term, and we way, way underestimate the impact in the long term. And I think of your story in the keynote, about how once we had digital phones and smartphones, we don't even think twice about looking at a map, seeing where we are and where the closest store is, whether it's open, and whether there's a review. I mean, the infrastructure to put that together, kind of an API-based economy pulling together all these bits and pieces, and the sheer expectation of performance and how fast that information is going to be delivered to me. I think we still take it for granted. As you said, it's like magic, and we never thought of all the different applications of these interconnected apps, enabled by an always-on device that's always connected and knows where we are. It's a huge change. And as you say, when we think about 5G ten years from now, oh my goodness, where are we going to be?

>> It's hard to imagine. It really is hard to imagine, and I think that's okay. What we're doing today is introducing everything that we need to help businesses evolve and take advantage of that. And that scale is a fundamental characteristic of the Edge. So automating to manage that scale is the only way you're going to be successful, and extending what we've learned in the data center out to the Edge, using the same tools, the things we already understand, really is a great way to evolve a business. And that's where that common language, and the discussions I was trying to generate around Ansible, is a great tool. But it's not just the tool, it's the whole process, the mindset, the culture change, the way you change how you operate your business. That's what's going to allow us to take advantage of a future where my clothes are full of sensors and you can look through a video camera and tell immediately that I'm happy with this conversation. That's a very different kind of augmented reality than we have today, and maybe it's a bad example, but it's hard to imagine really what it will be like.
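Extending the same tools from the data center out to the Edge is the sort of thing that can be sketched in a few lines of Ansible. The playbook below is a hedged illustration, not anything described in this conversation: the "edge_gateways" inventory group, the template, and the service name are all invented, but the pattern of rolling a change across a large fleet a small batch at a time is how that kind of scale gets managed.

```yaml
---
# Roll a configuration change across a large edge fleet a few nodes at a time.
- name: Update edge gateways in small batches
  hosts: edge_gateways        # assumes an inventory group of edge devices
  become: true
  serial: "10%"               # only 10% of the fleet is touched per batch
  max_fail_percentage: 5      # abort the rollout if a batch starts failing
  tasks:
    - name: Apply the latest gateway configuration
      ansible.builtin.template:
        src: gateway.conf.j2
        dest: /etc/gateway/gateway.conf
      notify: Restart gateway service

  handlers:
    - name: Restart gateway service
      ansible.builtin.service:
        name: gateway
        state: restarted
```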
So when you're automating something you've already got a process, you already understand how something works, it's turning that something into an automated script or playbook in the Ansible context and trusting that it's going to do the right thing. There's another important part of trust which is getting more to the people part. And I've learned this a lot from open source communities collaboration and communities are fundamentally built around trust and human trust relationships. And the change in process, trusting not only that the tools are going to the right job but that people are really assuming good intent and working with or trying to build for the right outcomes for your business. I think that's a really important part of the overall picture. And then finally that trust is extended to knowing that that change for the business isn't going to compromise your job. So thinking differently about what is your job? Is your job to do the repetitive task or is your job to free up your time from that repetitive task to think more creatively about value you can bring to the business. And that's where I think it's really challenging for organizations to make changes because you build a personal identity around the jobs that you do and making changes to those personal identities really gets to the core of who you are as a person. And that's why I think it's so complicated. The tools almost start the easy part, it's the process changes and the cultural changes, the mindset changes behind that which is difficult but more powerful in the end. >> Well, I think people process tools the tech is always the easy part relative to culture and people in changing the way people do things and as you said, who their identity is, how they get kind of wrapped into what they do and what they think their value is and who they are. So to free them up from that that's a really important point. Well, Chris, I always love having you on, thank you for coming on again, sharing your insight, great keynote. And give your the last word about AnsibleFest 2020. What are you looking forward to take away from this little show? >> Well, number one, my personal hope is that the conversation that I was trying to sort of ignite through the keynote is an opportunity for the community to see where Ansible fits in the Edge and automation and helping really the industry at large scale. And that key part of bringing a common language to help change how we communicate internally is the message I was hoping to impart on the AnsibleFest Community. And so hopefully we can take that broader and appreciate the time here to really amplify some of those messages. >> All right, great. Well, thanks a lot Chris and have a great day. >> Thanks Jeff, thank you. >> All right. He's Chris, I'm Jeff you're watching theCUBE and our ongoing coverage of AnsibleFest 2020. Thanks for watching we'll see you next time. (gentle music)
Robyn Bergeron, Red Hat AnsibleFest
>> Good morning, good afternoon, and good evening, wherever you are. My name is Robyn Bergeron, and I'm very, very, very truly excited to welcome all of you to this year's AnsibleFest, whether you're joining us for the first time or you've attended in the past. It's wonderful to know that you're all out there watching from the office in your home, maybe the makeshift office in your closet or dining room, or maybe even your actual office. Even though I've attended many AnsibleFests in the past, I continue to find it to be one of the most educational and interesting events I've ever been to, and I hope you do as well. One of the most exciting things about AnsibleFest for me this year is our theme, which was shared in the opening remarks: automate to connect. As a community architect and as the manager of the Ansible community team, facilitating the connections that enable our community is the most important thing we do every day. And scaling what we can all accomplish together is becoming incredibly important as Ansible becomes one of the most active open source projects in the world. There's no better example of the growth and the scale of the Ansible community than seeing just how many of you we were able to connect with today at our very first virtual AnsibleFest, with tens of thousands of folks attending over the course of our time together, and even more on their own schedules. We're now able to connect with more members of our community than we ever have before. And the statement "we are all the community" continues to be as true as ever for Ansible. I truly believe that every connection counts. I really believe that each and every one of us has the ability to participate in the Ansible community in countless ways, whether it's through code, through sharing with your friends or coworkers, or through helping others all over the world. And no matter how big we become as a community, we want to make sure that those connections, and your sense of being part of this community, stay alive and as full of potential as they always have been. We want that because that very potential, and taking advantage of all of those connections and opportunities, is what enables innovation to happen. And we see that innovation happening in Ansible every single day. We all know that open source communities produce some of the most innovative and most popular software that exists today. But being a community doesn't come simply by virtue of being open source, right? And neither does innovation. Growing the community requires frameworks that enable these connections to exist, between developers and users, code and tools. And when we are able to combine those frameworks with opportunities and ideas, that's where innovation can actually flourish, and that's the place where the benefits of open source truly shine. In Ansible, from the very beginning, we strove to ensure that all of those frameworks and ingredients for such a successful project were present. We made it easy to learn and get started with. We made sure it was at least minimally useful and that it could grow over time. We built the tool itself with a modular plugin architecture that would make it easy to contribute to. In turn, all of those contributions enabled Ansible to become even more useful, connecting and automating even more technologies, which then made it useful to even more people. And this is really open source innovation at its finest, right?
Thousands of users and contributors working together, developing feedback loops, all working to build software that everybody loves. But doing it well, and having some good fortune and timing along the way, has also meant that we've gotten fairly large and incredibly active as a community. To put this level of scale in more relatable terms that we can probably all understand, since we've all been on video calls lately: imagine that you've made something and you would like to get feedback about the thing that you've made. So you invite a hundred people to your video call, and you want all of them to provide feedback, because you want that feedback, you need the feedback, you want to act on the feedback. Are you actually going to get that feedback on the call? Will the folks you invited truly feel like they were heard? Or will you spend the whole call saying, I'm sorry, are you trying to talk? Are you, are you on mute, maybe? Could you unmute yourself? No, no, not you, the other person. Can you try again? For us in Ansible, we want to make sure that every voice counts and every contribution matters. Every single bug reported, every improvement in usability, every question answered, every emoji reaction really does count. Over its history, Ansible has had more than 13,000 individual voices speak at least once, and many of those individuals have done so hundreds, and some even thousands, of times. And for every single connection an individual makes, multiple automated processes and communications occur, fanning out to even more members of our community. And for end users, while we know that adding new ways to automate with Ansible increases its usefulness, we've heard, particularly as folks get more experienced, that they want things like being more selective and flexible in what they choose, rather than having all 6,800 modules included for them. The collections concept is the innovative answer to improving our contributor process and the end user experience, and to ensuring they both scale in smarter ways as we move into the future. It's really the result of more than a year of work. In a nutshell, collections are a new way to package and make use of the content that you connect to Ansible, modules and even roles, and to do so in more dynamic and flexible ways. There are a couple of things that really excite me about collections. Number one, it's easier to contribute, right? Collections can live in their own individual repositories, which for the Ansible community makes it a lot easier for folks to find and connect with the content that they care about, and to connect with the users and contributors in that more human-scaled community. The second thing is that, for users, it's now easier than ever to use Ansible in all the ways that you want to or need to. Since collections can be packaged and made available on their own schedules, you can update or upgrade them as frequently or as infrequently as you'd like, or you can upgrade a more minimal Ansible installation without updating your collections. Now, I know what you're all saying. What do we want? Collections. When do we want them? Yesterday. Woo. Well, behold, I am super proud to announce the release of Ansible 2.10, which arrives in late September. And yes, the collections curated by the Ansible community for inclusion in Ansible are in 2.10.
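As a small, hypothetical sketch of what that looks like in practice: a collection is installed on its own schedule with ansible-galaxy, and its content is referenced by fully qualified collection name in a playbook. The collection, file path, and values below are just examples.

```yaml
# Install the collection separately from the core engine, on its own cadence:
#   ansible-galaxy collection install community.general
---
- name: Use content from a collection via its fully qualified name
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Set a value in an INI file with a module shipped in the collection
      community.general.ini_file:
        path: /tmp/example.ini
        section: demo
        option: installed_from
        value: community.general
```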
It required changes to our build and release processes and extraordinary amounts of work under the hood, and it was a significant part of our feedback loop as well. We finished the work on the collections concept, but the important thing about 2.10 is this: as an upstream community that continually develops new technologies, we really see collections as a significant part of the future of Ansible. So getting early feedback on your experiences using collections is incredibly important to our community. 2.10 is the first release where we're actually able to start broadly gathering that information, and as part of the process we can hear your feedback a lot better now as well. Now, we all know that this year has been interesting for all of us. Right here, I have the 2020 dumpster fire. Yes, it's been the best. We've all had to adapt and change in lots of ways, right? At home, at work, in public, at school, and in the communities that we love. I always like to remind people that contributing code is not the only way to contribute or participate in the community. Our Ansible meetup communities have been growing in size and membership over the past number of years. We now have more than 260 groups all over the world, still looking for that one in Antarctica, self-organizing and creating Ansible content to share in their local communities. This is really an aspect of the Ansible community that I've always loved: so many humans recognize how sharing information in their own local areas and collaborating to teach each other new things helps to improve their own communities for the better. But this year they've also reminded me that humans are incredibly resilient, right? We bounce back from life-altering situations, we adapt to new situations, and we always form new connections and points of collaboration along the way. A ton of these groups, still wanting to share their experiences and teach each other, pivoted their meetups in a variety of ways. Since March, there have been more than 54 virtual Ansible meetups all over the world. And all those organizers are starting to see the patterns that connect local and not-so-local meetup members in the best ways, whether that's by common language, time zone, or country rather than a city. They're all coming together, adapting as best as they can given the year. For me personally, one of the most important lessons that I draw, as a community person, over and over again, is that connecting contributors to opportunities isn't necessarily about deciding what I think should be available as opportunities. It's about making sure that the doors are open for anyone to create the connections and opportunities that nobody had yet started. This year in Ansible, we've seen this in action with the creation of a diversity and inclusion working group, which meets regularly to explore all the ways in which we can improve, do outreach, and be more inclusive. The group was also key in our project's work to improve the inclusiveness of our code and language itself, which began many months ago. I can honestly say that all of these activities, initiated by all the passionate folks in the Ansible community, are things that make me so proud to call Ansible my home. And it really does make me proud to see how much we're maturing as a community. In my role, I've had the privilege of being able to meet and chat with Ansible users all over the world.
Sometimes in person, sometimes on the internet. One of the discussions I had a few years ago was with someone who was a systems administrator, which was a job I had numerous years ago, so lots of empathy; it was the nineties for me. Anyway, I chatted with him and he told me how much he loves Ansible, how it's finally a tool that he could get started with, be productive with, and feel good about very quickly; he was able to start solving problems. He told me about all the things he had accomplished, how he'd been able to change things for the better for himself at work and for his coworkers, and how they're actually finally getting ahead. I asked how long they'd been using Ansible, and he said, well, it's been about a year. But then he also said this: a year ago, I wasn't really sure that I could go on at my job. There was so much to do, I was fighting fires, I was on call constantly, nothing ever ended, I was deep underwater, and I was missing out on my family and their lives, all of their milestones. And he said, this year I actually got to go to my daughter's fifth birthday. He had missed her fourth birthday the year before because he was at work fighting fires. And he said, you know, Ansible has changed my life. I can see my kids again on the weekends, like a normal person should. Hearing stories like that, that's the stuff that makes me very proud, and humbled, to be in this community, and one of the main reasons why I go to work every single day and feel great about what I do. And those stories aren't really all that uncommon. It's really our shared love of automation, our shared embrace of this universal language and tool that we call Ansible, that has helped so many people to improve their personal lives, their work lives, their careers, and the world for all of us. It truly does connect us in so many amazing ways. It's not just shared code, it's a shared passion, and that kind of connection is something that can change our own worlds, or the world for all of us. If you're attending AnsibleFest live this week, I hope you'll take time to connect with others in the event platform, to meet the other automation users in your community and our experts, ask questions, or share your own experiences. And I hope to bump into you on the internet, too, and hear your stories about your own connections to automation and the Ansible community. Thank you so much, and I hope you enjoy the event.
Matthew Jones v2 ITA Red Hat Ansiblefest
>> Welcome back to AnsibleFest. I'm Matthew Jones, the architect of the Ansible Automation Platform. And today I want to talk to you a little bit about what we've got coming in 2021, and some of the things that we're working on for the future. I really want to cover some of the work that we're doing on scale and flexibility, and how we're going to focus on that for the next year. I also want to talk about how we're going to help you grow, manage, and use your content on the automation platform. And then finally, I want to look a little bit beyond the automation platform itself. So, last year we introduced Ansible Content Collections. Earlier this year, we introduced the Ansible Automation Hub on Red Hat Cloud. And yesterday you heard Richard mention private automation hub, which is coming later this year. Automation Hub and Ansible Tower, this is really what the automation platform means for us. It's bringing together that content with the ability to execute, run, and manage that content, and that's really important. So what we really want to do is help you bring Red Hat and partner content that you trust together with community content from Galaxy that you may need, and bring this together with content that you develop for yourself: your roles, your collections, the automation that you actually do. We want to give you control over that content, help you curate that content, and build a community around your automation. We want to focus on a seamless experience with this automation from Ansible Tower and from Automation Hub for the automation platform itself, and make it accessible to the automation and infrastructure that you're managing. Now that we've talked about content a little bit, I want to talk about how you run Ansible. Today in Ansible Tower, we use virtual environments to manage the actual execution of Ansible, and virtual environments are okay, but they have some drawbacks. Primarily, they're not very portable. It's difficult to manage dependencies and the version of Ansible. Sometimes those dependencies conflict with the other systems that are on the infrastructure itself, even Ansible Tower. So what we've done is created a new system that we call execution environments. Execution environments are container-based, and what we're doing is bringing the flexibility and portability of containers to these Ansible execution environments. The goal really is portability, and we want to be able to leverage the tools that the community develops, as well as the tools that Red Hat provides, to produce these container images and use them effectively. At Ansible we've developed a tool called Ansible Builder. Ansible Builder will let you bring content collections together with the version of Ansible and Red Hat's base container image, so that you can put together your own images for execution environments. And you'll be able to host these on your own private registry infrastructure. If you don't already have a container registry solution, Automation Hub itself provides that registry. The idea here is that, unlike today, where your virtual environments and your production execution environments diverge a little bit from what your content developers and your automation developers experience, we want to give you the same experience between your production environments and your development environments, all the way through your test and validation workloads. Red Hat's also going to provide some prebuilt execution environments.
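As a rough illustration of the Ansible Builder workflow described here, a definition file might look something like the sketch below. The schema shown is based on early ansible-builder releases and should be treated as an assumption; the requirements files, registry name, and image tag are placeholders.

```yaml
# execution-environment.yml -- input to ansible-builder (illustrative)
version: 1
dependencies:
  galaxy: requirements.yml      # collections to bake into the image
  python: requirements.txt      # extra Python libraries those collections need
  system: bindep.txt            # OS-level packages, resolved against the base image

additional_build_steps:
  prepend: |
    RUN pip3 install --upgrade pip setuptools
```

Building and publishing the image could then be as simple as running something like `ansible-builder build --tag registry.example.com/my-ee:1.0` and pushing the result to whatever registry you use, including the one provided by Automation Hub.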
We want to have some continuity between the experience that you have today on Ansible Tower and what you'll have next year, once we bring execution environments into production. We want you to be able to trust the version of Ansible that's running in your execution environments, and to know that you have the content that you expect. At the same time, we're going to provide a version of the execution environment that's just the base execution environment; all it has is Ansible. This will let you take it, using Ansible Builder, together with the collections that you've developed and that you need in your automation, and combine them, without having to bring in things that you don't need or don't want, into a very opinionated container image. If you're interested in execution environments and you want to know how they're built and how you'll use them, we actually have them available for you to use today. Shane McDonald and Adam Miller are giving a talk later that walks through how to build execution environments and how you'll use them. You can use this to make sure that you're ready for execution environments coming to the automation platform next year. Now that we've talked about how we build execution environments, I want to talk about how execution runs in your infrastructure. Today, when you deploy Ansible Tower, you're deploying a monolithic web application. Your execution capability is tied up in how you actually deploy Ansible Tower. This makes scaling Ansible Tower and your automation workloads difficult, and everything has to be co-located in the same data center. Isolated nodes solve this a little bit, but they bring their own opinionated challenges in setting up SSH and having direct connectivity between the control nodes and the execution nodes themselves. We want to make this more flexible and easier to use. So one of the things that we've created and been working on over the last year is something that we call receptor. Receptor is an overlay network, an Automation Mesh. The goal here is to separate the execution capability of your Ansible content from the control plane capability, where you manage the web infrastructure, the users, the role-based access control. We want to draw a line between those. We want you to be able to deploy execution environments anywhere. Chris Wright earlier today mentioned edge. Well, edge, cloud, we want you to be able to manage data centers anywhere in the world, and you can do this with the Automation Mesh. The Automation Mesh connects your control plane with those execution nodes, anywhere in the world. Another thing the Automation Mesh brings is that we're going to be able to draw lines between the control plane itself and each Automation Mesh node. This means that if you have an outage or a problem on your network or your infrastructure, as long as a path can be drawn between the control plane and the node that needs to execute the Ansible work, the Automation Mesh can route around problems. The way the Automation Mesh is deployed also allows it to fit more closely with the ingress and egress policies that you have across your infrastructure. It doesn't matter which direction the Automation Mesh itself connects in; once the connection is established, automation will be able to flow from the control systems to the execution nodes and get responses back.
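For a sense of what this separation looks like on a single mesh node, here is a hypothetical sketch of a receptor configuration for a lightweight execution node at a remote site. The node IDs, addresses, and work command are illustrative, and the exact configuration keys may differ between receptor releases, so treat this as a shape rather than a reference.

```yaml
# receptor.conf on an execution node at a remote site (illustrative)
---
- node:
    id: exec-site-berlin              # illustrative node name
- log-level: info
# Dial out to the control plane; only an outbound connection is needed,
# which is what lets the mesh fit existing egress policies.
- tcp-peer:
    address: controller.example.com:27199
- control-service:
    service: control
# Advertise a unit of work this node can run on behalf of the controller.
- work-command:
    worktype: ansible-runner
    command: ansible-runner
    params: worker
    allowruntimeparams: true
```

The key design point is that the node only needs to establish a connection in one direction; once it is part of the mesh, work can be routed to it from the control plane regardless of which side initiated the link.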
Now, this all works together with the content collections that we mentioned earlier, the execution environments that we were just talking about, and your container registries. All of these work together with these Automation Mesh nodes, which are very lightweight and very simple systems. This means you can scale execution capacity up and down as your needs increase or decrease. You don't need to keep a lot of extra capacity around just in case you automate more, just because you're not sure when your execution capacity needs will change. This fits into an automated system for scaling your infrastructure and your execution capacity. Now that we've talked about the content that you manage, and how and where that execution is performed, I want to look a little bit beyond the automation platform itself. Specifically, I want to talk about how the automation platform works with OpenShift and Kubernetes. We have an existing installer for Ansible Tower that will deploy to OpenShift and Kubernetes, and we support OpenShift and Kubernetes as a first-class system for deploying Ansible Tower. But I mentioned Automation Hub and Ansible Tower together; that is what the automation platform is for us. So we want to take that installer and replace it with an operator-based, full-lifecycle approach to deploying and managing the automation platform on OpenShift. This operator will be available in OperatorHub, so there's no need to manage complex YAML files that represent the deployment. Since it's available in OperatorHub, you have one place you can go to manage deployments, upgrades, backup and restore. And all of this works seamlessly with the container groups feature that we introduced last year. But I want to take this a little bit beyond just deploying and upgrading the automation platform from the operator; we want to look at what other capabilities we can get out of those operators. So beyond deploying and upgrading, we're also creating resource operators and CRDs that will allow other systems running in OpenShift or Kubernetes to directly manage resources within the automation platform, anything from triggering jobs to getting the status of jobs back. We want to enable that capability if you're using OpenShift and Kubernetes. The first place we're starting with this is Red Hat's Advanced Cluster Management system. Advanced Cluster Management brings together the ability to install and manage OpenShift and Kubernetes clusters, as well as applications and products, managing their life cycle across your clusters. What we really want to do is give you the ability to connect traditional and container-based workloads together. You're already using the Ansible Automation Platform to manage workloads with Ansible. When you combine it with Advanced Cluster Management and OpenShift and Kubernetes, you have a full system: you can manage across clouds, across clusters, anywhere in the world. And this brings me back to one of the areas of focus for us. Our goal is complete end-to-end automation. We want to connect your people, your domains, and your processes. We want to help you deliver for yourself and your customers by expanding the capabilities of the Ansible Automation Platform. And we want to make it a seamless experience to both curate and control the content for your organization, and to run that content, and run Ansible itself, using the full suite of the Ansible Automation Platform.
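To give a feel for the operator-driven model described above, here is a hypothetical sketch of the kind of custom resource another system might create to trigger a job on the platform. The API group, kind, and field names are assumptions for illustration only, not a published schema.

```yaml
# A Kubernetes custom resource asking a resource operator to launch
# a job template on the automation platform (illustrative schema).
apiVersion: automation.example.com/v1alpha1   # placeholder API group
kind: AnsibleJob
metadata:
  name: provision-new-cluster-nodes
  namespace: automation
spec:
  connection_secret: controller-access        # credentials for reaching the platform
  job_template_name: Provision Cluster Nodes  # job template to launch
  extra_vars:
    cluster_name: prod-east-1
```

The point of the pattern is that anything able to create Kubernetes objects, a GitOps pipeline, Advanced Cluster Management, or another operator, can drive automation jobs and watch their status without talking to the platform's API directly.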
So the Advanced Cluster Management team is giving a talk later where you'll actually be able to see Advanced Cluster Management and the Ansible Automation Platform working together. Don't forget to check out Adam and Shane's talk on execution environments, how they're built and how you can use them. Thank you for coming to AnsibleFest, and we'll see you next time.
Chris Wright v2 ITA Red Hat Ansiblefest
>> If you want to innovate, you must automate at the edge. I'm Chris Wright, chief technology officer at Red Hat, and that's what I'm here to talk to you about today. So welcome to day two of AnsibleFest 2020. Let me start with a question: do you remember 3G, when you first experienced mobile data connections? The first time that the internet on a mobile device was available to everyone? It took forever to load a page, but it was something entirely different. It was an exciting time. And then came 4G, and suddenly data connections actually became usable. Together with the arrival of smartphones, people were suddenly online all the time. The world around us changed immensely. Fast forward to today, and things are changing yet again: 5G is entering the market. And it's an evolution that brings about a fundamental change in how connections are made and what will be connected. Now it's not only people who are online all the time; devices are entering the stage, sensors, industrial robots, cars, maybe even the jacket you're wearing. And with this revolutionary change in telecommunications technology, another trend moves into the picture: the rise of edge computing. That's what I'll be focusing on today. So what is edge computing, exactly? Well, it's all about data. Specifically, moving compute closer to the producers and consumers of data. Let's think about how data was handled in the past. Previously, everything was collected, stored, and processed in the core of the data center. Think of server racks, one after the other. This was the typical setup, and it worked as long as the environment was similarly traditional. However, with the new way devices are connected and how they work, we have more and more data created at the edge and processed there immediately. Gathering and processing data takes place close to the application users and close to the systems generating the data. The fact that data is processed where it is created means that the computing itself now moves out to the edge as well, outside of the traditional data center boundaries and into the hands of application users, sometimes literally into the hands of people. The smartphone next to you is one good example. Data sources are more distributed. Data is generated by your mobile phone, by your thermostat, by your doorbell, and data distribution isn't just happening at home; it's happening in businesses too. It's at the assembly line, high on top of a cell tower, by a pump deep down in a well, and at the side of a train track, every few miles for thousands of miles. This leads to more distributed computing overall. Platforms are pushed outside the data center. Devices are spread across huge areas in inaccessible locations, and applications run on demand close to the data. Often even the ownership of the devices is with other parties, and data gathering and processing is only partially under our direct control. That is what we mean by edge computing. And why is this even interesting for us, for our customers? To say it with the words of a customer, edge computing will be a fundamental enabling technology within industrial automation. Transitioning how you handle IT from a traditional approach towards a distributed computing model like edge computing isn't necessarily easy. Let's imagine how a typical data center works right now. We own the machines, create the containers, run the workloads, and carefully decide what external services we connect to and where the data flows.
This is the management sphere we know and love. Think of your primary OpenShift cluster, for example. With edge computing, we don't have this level of ownership, knowledge, or control. The servo motors in our assembly line are black boxes controlled only via special APIs. The small devices next to our train tracks run an embedded operating system that does not run our default system management software. And our doorbell is connected to a cloud that we do not control at all. Yet we still need to be able to exercise control; our business processes suddenly depend on what is happening at the edge. That doesn't mean we throw away our ways of running data centers; in fact, the opposite is true. Our data centers are the backbone of our operations. In the data center, we still tie everything together and run our core workloads. But with edge computing, we have more to manage. To do so, we have to leave our comfort zones and reach into the unknown. To be successful, we need to get data, tools, and processes under management and connect them back to our data center. Let's take train tracks as an example. We're in charge of a huge network: thousands of miles of tracks zig-zagging across the country. We have small boxes next to the train tracks every few miles, which collect data on the passing trains, take care of signaling, and so on. These trackside devices are extremely rugged, and they're doing their jobs in the coldest winter nights and the hottest summer days. One challenge in our operation is that if we lose connection to one box, we have to stop all traffic on that track segment: no signal, no traffic. So we reroute all of the traffic, passengers, cargo, you name it, via other track segments. And while those segments suddenly have unexpected traffic, congestion, and so on, we send a maintenance team out to figure out why we lost the signal, do root cause analysis, repair what needs to be fixed, and make sure it all works again. Only then can we reopen the segment. As you can imagine, just bringing a maintenance team out there takes time, and finding the root issue and solving it also takes time. All the while, traffic is rerouted. This can amount to a lot of money lost. Now imagine these little devices get a software update and are now able to report not only the signals sent across the tracks, but also the signal quality. With those additional data points, we can get to work. We can see trends, and the device itself can act on those trends. If the signal quality is getting worse over time, the device itself can generate an event, and from this event we can trigger follow-up actions. We can get our team out there in time, investigating everything before the track goes down. Of course, the question here is: how do you even update the device in the first place? And how do you connect such an event to your maintenance team? There are three things we need in order to properly tie events and everything together and answer this challenge. First, we need to be able to connect through the last mile. We need to reach out from our comfort zones, down the tracks, and talk to a device running a special embedded OS on a chip architecture we don't have in our data center. And we have thousands of them, so we need to manage at the edge in a way suited to its scale. Besides connecting, we need the skills to address our individual challenges of edge computing. While the train track example is a powerful image, your challenge might be different.
Your boxes might be next to an assembly line, or on a shipping container, or a unit under an antenna. Finally, the edge is about the interaction of things, without our data center or humans in the equation at all. As I mentioned previously, in the end there is an event generated by the little box. We have to take that event and first increase the signal strength temporarily between this box and the boxes on either side, to buy us some more time. Then we ask the corporate CMDB for the actual location of that box, put all this information into a ticket, and assign the ticket to the maintenance team at high priority to make sure they get out there soon. As you can see, our success here critically depends on our ability to create an environment with the right management skills and technical capabilities, one that can react decentrally in a secure and trusted way. And how do we do these three things? With automation. Yeah, that might not come as much of a surprise, right? However, there is a catch. Automation as a single technology product won't cut it. It's tempting to say that an automation product can solve all these problems; hey, we're at a tech conference, right? But that's not enough. Edge computing is not simple, and the solution to its challenges is not simply a tool that we buy three buckets full of and spread across our data center and devices. Automation must be more than a tool. It must be a process, constantly evolving, iterating on and on. We only have a chance if we embed automation as a fundamental component of the organization and use it as a central means to reach out to the last mile. And the process must not focus on technology itself, but on people: the people who are in charge of the edge IT as well as the people in charge of the data center IT. Automation can't be a handy tool that is used occasionally; it should become the primary language for all people involved to communicate in. This leads to cooperation and common ground to further evolve the automation, and at the same time ensures that people build and improve the necessary skills. With the processes and the people aligned, we can shed light on the automation technology itself. We need a tool set that is capable of doing more than automating an island here and a pocket there. We need a platform powerful enough to provide the capabilities we need and support the various technologies, devices, and services out at the edge. If we connect these three findings, we come to a conclusion. To automate the edge, we need a cultural change that embraces automation in a new and fundamental way, as a new language, integrating across teams and technology alike. Such a unified automation language speaks natively with the world out there as well as with our data centers, at any scale. This very same language is spoken by domain experts, by application developers, and by us as automation experts, to pave the way for the next iteration of our business. And this language has the building blocks to create new interfaces, tools, and capabilities, to integrate with the world out there and translate events and needs into new actions, becoming the driving motor of IT at the edge and evolving it further. And yes, we have this language right here, right now: it is the Ansible language. If we come back to our train track one more time, it's Ansible that can reach out and talk to our thousands of little boxes sitting next to the train tracks.
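As a thought experiment, the reaction described above could be expressed as an Ansible playbook along these lines. Every host group, command, variable, and endpoint here is a stand-in, since the real trackside devices, CMDB, and ticketing system would each have their own modules or collections.

```yaml
# degraded-signal.yml -- illustrative playbook triggered by a
# "signal quality degrading" event from a trackside box
- hosts: "{{ affected_box }}"          # the box that raised the event
  gather_facts: false
  tasks:
    - name: Temporarily boost signal strength on neighbouring boxes
      ansible.builtin.command: >-
        trackctl boost --segment {{ segment_id }} --duration 4h
      delegate_to: "{{ item }}"
      loop: "{{ neighbour_boxes }}"    # hypothetical inventory variable

    - name: Look up the physical location of the box in the CMDB
      ansible.builtin.uri:
        url: "https://cmdb.example.com/api/assets/{{ inventory_hostname }}"
        return_content: true
      register: asset
      delegate_to: localhost

    - name: Open a high-priority ticket for the maintenance team
      ansible.builtin.uri:
        url: "https://tickets.example.com/api/issues"
        method: POST
        body_format: json
        body:
          priority: high
          summary: "Signal quality degrading on segment {{ segment_id }}"
          location: "{{ asset.json.location | default('unknown') }}"
      delegate_to: localhost
```

The value of writing it this way is that the trackside domain expert, the operations team, and the business side can all read the same workflow, which is exactly the shared-language point being made here.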
In the Ansible language, the domain experts for the boxes can natively work together with the train operations experts and the business intelligence people. Together, they can combine their skills to write workflows in a language they can all understand, where the deep-down domain knowledge is encapsulated away. And the Ansible platform offers the APIs and components to react to events in a secure and trusted way. If there's one thing I'd like you to take away from this, it is this: edge computing is complex enough. But luckily we have the right language, the right tools, and an awesome community, here with all of you, at our fingertips, to build upon and grow even further. So let's not worry about the tooling; we have that covered. Instead, let's focus on making that tool great. We need to become able to execute automation anywhere we need it: at the edge, in the cloud, in other data centers. In the end, just as with serverless functions, the location where the code is actually running should not matter to us anymore. Let's hear this from someone who is right at the core of the development of Ansible. Over to Matt Jones, our automation platform architect.