Chris Wright, Red Hat | Red Hat Summit 2022
(bright upbeat music) >> We're back at the Red Hat Summit at the Seaport in Boston, theCUBE's coverage. This is day two. Dave Vellante and Paul Gillin. Chris Wright is here, the chief technology officer at Red Hat. Chris, welcome back to theCUBE. Good to see you. >> Yeah, likewise. Thanks for having me. >> You're very welcome. We've got a lot of ground to cover here, Chris. You were saying today in your keynote that, you know, Andreessen's "software is eating the world" has already played out. Software ate the world, is what you said. And now we have to think about AI. AI is eating the world. What does that mean? What's the implication for customers and developers? >> Well, a lot of implications. I mean, to start with, just acknowledging that software isn't this future dream. It is the reality of how businesses run today. It's an important part of understanding what you need to invest in to make yourself successful, essentially, as a software company, where all companies are building technology to differentiate themselves. Take all that discipline, everything we've learned in that context, and bring in AI. So, we have a whole new set of skills to learn, tools to create, and disciplined processes to build around delivering data-driven value into the company, just the way we've built software value into companies. >> I'm going to cut right to the chase, because I would say data is eating software. Data and AI, to me, are like, you know, kissing cousins. So here's what I want to ask you as a technologist. We have the application development stack, if you will. And it's separate from the data and analytics stack. All we talk about is injecting AI into applications, making them data-driven. You just used that term. But they're two totally separate stacks, organizationally and technically. Are those worlds coming together? Do they have to come together in order for the AI vision to be real? >> Absolutely. So, totally agree with you on the data piece.
It's inextricably linked to AI and analytics and all of the, kind of, machine learning that goes on in creating intelligence for applications. The application connection to a machine learning model is fundamental. So, you've got to think about not just the software developer or the data scientist; there's also a line of business in there that's saying, "Here are the business outcomes I'm looking for." It's that trifecta that has to come together to make advancements and really make change in the business. So, you know, some of the folks we had on stage today were talking about exactly that: how do you bring together those three different roles? And there's technology that can help bridge the gaps. So, we look at what we call intelligent applications. Embed intelligence into the application. That means you surface a machine learning model with APIs to make it accessible to applications, so that developers can query a machine learning model. You need to do that with some discipline and rigor around, you know, what it means to develop this thing, lifecycle it, and integrate it into this bigger picture. >> So the technology is capable of coming together. You know, Amanda Purnell is coming on next. >> Oh, great. >> 'Cause she was talking about, you know, getting insights in the hands of nurses, and they're not coders. >> That's right. >> But they need data. But I feel very strongly that it's an organizational challenge, more so. I think you're confirming it's not really a technical challenge. I can insert a column into the application development stack and bring TensorFlow in, or AI or data, whatever it is. It's not a technical issue. Is that fair? >> Well, there are some technical challenges. So, for example, data scientists: a scarce skill set within any business. So, how do you scale data scientists into the developer population, which will be a large population within an organization?
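The "intelligent application" pattern described in that answer, surfacing a machine learning model behind an API so that developers can query it without knowing the ML internals, might look like the following minimal sketch. The names here (the `ModelService` wrapper, the toy model) are illustrative assumptions, not an actual Red Hat or OpenShift API.

```python
# Illustrative sketch: wrap a trained model behind a small, versioned API
# surface so application developers query a plain request/response contract
# instead of TensorFlow or PyTorch internals.

class ModelService:
    """Surfaces a machine learning model behind a stable, versioned API."""

    def __init__(self, model, version):
        self.model = model        # any callable standing in for a trained model
        self.version = version    # models are lifecycled and versioned like code

    def predict(self, features):
        # The application sees only this contract, not the ML framework.
        return {"model_version": self.version, "score": self.model(features)}

def toy_risk_model(features):
    # Stand-in for a real trained model (e.g., TensorFlow or PyTorch).
    return 1.0 if features.get("readmissions", 0) >= 3 else 0.0

service = ModelService(toy_risk_model, version="1.0.3")
print(service.predict({"readmissions": 3}))
# → {'model_version': '1.0.3', 'score': 1.0}
```

The point of the API boundary is exactly the one made above: a nurse-facing application, or any developer, consumes a versioned prediction contract, while the data scientist can retrain and re-release the model behind it independently.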
So, there are tools that we can use to bring those worlds together. So, you know, it's not just TensorFlow; it's the entire workflow and platform of how you share the data, train the models, and then deploy models into a runtime production environment. That looks similar to software development processes, but it's slightly different. So, that's where a common platform can help bridge the gaps between that developer world and the data science world. >> Where is Red Hat's position in this evolving AI stack? I mean, you're not into developing tool sets like TensorFlow, right? >> Yeah, that's right. If you think about a lot of what we do, it's aggregating content together, bringing a distribution of tools, giving flexibility to the user, whether that's a developer, a system administrator, or a data scientist. So our role here is, one, make sure we work with our hardware partners to create accelerated environments for AI. So, that's sort of an enablement thing. The other is to bring together those disparate tools into a workflow and give a platform that enables data scientists to choose: is it PyTorch, is it TensorFlow, what's the best tool for you? And assemble that tool into your workflow, and then proceed with training, doing inference, tuning and, you know, lather, rinse, repeat. >> So, you want to make your platform as receptive as possible, right? You're not trying to pick winners in what languages to work with or what frameworks? >> Yeah, that's right. I mean, picking winners is difficult. The world changes so rapidly. So we make big bets on key areas, and certainly TensorFlow would be a great example. A lot of community traction there. But our goal isn't to say that's the one tool that everybody should use. It's just one of the many tools in your toolbox. >> There are risks of not pursuing this, from an organization's perspective. A customer, they kind of get complacent and, you know, they could get disrupted, but there's also an industry risk.
If the industry can't deliver this capability, what are the implications if the industry doesn't step up? I believe the industry will, just 'cause it always does. But what about customer complacency? We certainly saw that a lot with digital transformation, and COVID sort of forced us to march to digital. What should we be thinking about in terms of the implications of not leaning in? >> Well, I think that the disruption piece is key, because there's always that spectrum of businesses. Some are more leaning in, invested in the future. Some are more laggards and kind of wait and see. Those leaning in tend to be separating themselves, the wheat from the chaff. So, that's an important way to look at it. Also, if you think about it, many data science experiments fail within businesses. I think part of that is not having the rigor and discipline around connecting not just the tools and data scientists together, but also looking at what business outcomes you're trying to drive. If you don't bring those things together, then it can be too academic, and the business doesn't see the value. And so there's also the question of transparency. How do you understand why a model is predicting you should take a certain action or do a certain thing? As an industry, I think we need to focus on bringing tools together, bringing data together, and building better transparency into how models work. >> There's also a lot of activity around governance right now, AI governance. Particularly removing bias from ML models. Is that something that you are guiding your customers on? Or, how important do you feel this is at this point of AI's development? >> It's really important. I mean, the challenge is finding it and understanding it. You know, we bring data that may already be carrying a bias into a training process and build a model around that. How do you understand what the bias is in that model?
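One concrete, if deliberately simplistic, way to start answering "what is the bias in that model?" is to compare the model's positive-outcome rates across groups, a demographic-parity style check. The sketch and its data below are hypothetical; real fairness auditing uses much richer methods than this single ratio.

```python
# Hypothetical sketch of a demographic-parity check: compare a model's
# positive-prediction rate across groups. A large gap is a signal worth
# investigating, not a full fairness audit.

def selection_rates(predictions, groups):
    """Positive-prediction rate per group, e.g. {'a': 0.75, 'b': 0.25}."""
    rates = {}
    for g in sorted(set(groups)):
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(member_preds) / len(member_preds)
    return rates

def disparate_impact(rates):
    """Ratio of the lowest group rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]                  # model's yes/no decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # a sensitive attribute
rates = selection_rates(preds, groups)
print(rates)                    # → {'a': 0.75, 'b': 0.25}
print(disparate_impact(rates))  # → 0.3333333333333333 (far from parity)
```

A check like this is the kind of thing a shared platform can bake into the model lifecycle, so that "finding the bias" becomes a routine gate rather than an afterthought.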
There are a lot of open questions there, and academic research trying to understand how you can ferret out, you know, essentially biased data and make it less biased or unbiased. Our role is really just bringing the toolset together so that you have the ability to do that as a business. So, we're not necessarily building the next machine learning algorithm or models or ways of building transparency into models, as much as building the platform and bringing the tools together that can give you that for your own organization. >> So, that brings up the question of architectures. I've been sort of a casual or even active observer of data architectures over the last, whatever, 15 years. They've been really centralized. Our data teams are highly specialized. You mentioned data scientists, but there are data engineers and data analysts, very hyper-specialized roles that don't really scale that well. So there seems to be a move, and we'll talk about edge. We're going to talk about edge. The ultimate edge, which is space, very cool. But data is distributed by its very nature. We have this tendency to try to force it into this, you know, monolithic system. And I know that's a pejorative, but for good reason. So I feel like there's this push in organizations to enable scale, to decentralize data architectures. Okay, great. And put data in the hands of those business owners that you talked about earlier, the domain experts that have business context. That brings up two problems: you need infrastructure that's self-service, in that instance, and you need, to your point, automated and computational governance. Those are real challenges. What do you see in terms of the trends to decentralize data architectures? Is it even feasible? Everybody wants a single version of the truth and a centralized data team, right? And those seem to be at odds. >> Yeah, well, I think we're coming from a history informed by centralization. That's what we understand.
That's what we kind of gravitate towards, but the reality, as you put it, is that the world's just distributed. So, what we can do is look at federation. So, it's not necessarily centralization, but creating connections between data sources, which requires some policy and governance. Like, who gets access to what? And also think about those domain experts maybe being the primary source of surfacing a model, where you don't necessarily have to know how it was trained or what the internals are. You're using it more to query it; you know, the domain expert produces this model, and you're in a different part of the organization, just leveraging some work that somebody else has done. Which is how we build software: reusable components. So, you know, I think building that mindset into data, and the whole process of creating value from data, is going to be a really critical part of how we roll forward. >> So, there are two things in your keynote. One that I was kind of in awe of: you wanted to be an astronaut when you were a kid. You know, I mean, I watched the moon landing and I was like, "I'm never going up into space." So, I'm in awe of that. >> Oh, I got the space helmet picture and all that. >> That's awesome, really, you know, hat's off to you. The other one really pissed me off, which was that you're a better skier 'cause you got some device in your boot. >> Oh, it's amazing. >> And the reason it angered me is 'cause I feel like it's the mathematicians taking over baseball, you know. Now you're saying you're a better skier because of that. But those are two great edge examples, and there's a billion of them, right? So, talk about your edge strategy. Kind of, your passion there, how you see that all evolving. >> Well, first of all, we see the edge as a fundamental part of the future of computing. So in that centralization-decentralization pendulum swing, we're definitely on the path towards distributed computing, and that is edge, and that's because of data.
And also because of the compute capabilities that we have in hardware. Hardware gets more capable, lower power, and can bring certain types of accelerators into the mix. And you really create this world where what's happening in a virtual context and what's happening in a physical context can come together through this distributed computing system. Our view is, that's hybrid. That's what we've been working on for years. The difference is, maybe originally it was focused on data center, cloud, multi-cloud, and now we're just extending that view out to the edge, and you need the same kind of consistency for development and for operations at the edge that you do in that hybrid world. So that's really where we're placing our focus, and then it gets into all the different use cases. And you know, really, that's the fun part. >> I'd like to shift gears a little bit, 'cause another remarkable statistic you cited during your keynote was a Forrester study that said 99% of all applications now have open source in them. What are the implications of that for those who are building applications? In terms of license compliance and, more importantly, I think, confidence in the code that they're borrowing from open source projects. >> Well, I think, first and foremost, it says open source has won. Those were audited code bases, which means mission-critical code bases. We see that it's pervasive, it's absolutely everywhere. And that means developers are pulling dependencies into their applications based on all of the genius that's happening in open source communities, which I think we should celebrate. Right after we're finished celebrating, we've got to look at what the implications are, right? And that shows up as: are there security vulnerabilities that become ubiquitous because we're using similar dependencies? What is your process for vetting code that you bring into your organization and push into production?
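A process for "vetting code you bring into your organization" usually starts with the dependencies themselves. The following is a hedged, minimal sketch: the advisory data is invented for illustration, and real pipelines use dedicated scanners backed by advisory databases such as OSV rather than a hard-coded list.

```python
# Hedged sketch of dependency vetting: check pinned requirements against a
# known-vulnerability list. The advisory data below is invented for
# illustration; real pipelines query advisory databases (e.g., OSV) instead.

KNOWN_VULNS = {("libfoo", "1.2.0"): "CVE-XXXX-0001 (made-up advisory)"}

def parse_requirements(lines):
    """Parse 'name==version' pins into (name, version) tuples."""
    pins = []
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip(), version.strip()))
    return pins

def audit(pins, vulns=KNOWN_VULNS):
    """Return the subset of pins that have known advisories."""
    return {pin: vulns[pin] for pin in pins if pin in vulns}

reqs = ["libfoo==1.2.0", "libbar==2.0.1", "# a comment"]
print(audit(parse_requirements(reqs)))
# → {('libfoo', '1.2.0'): 'CVE-XXXX-0001 (made-up advisory)'}
```

Running a check like this in CI, for every dependency bump, is one way an enterprise turns the "ubiquitous dependency" risk mentioned above into a routine, auditable gate.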
You know that process for the code you author; what about your dependencies? And I think that's an important part of understanding, and certainly there are some license implications. What are you required to do when you use that code? You've been given that code under a license from the open source community; are you compliant with that license? Some of those are reasonably well understood. Some of those are, you know, newer to the enterprise. So I think we have to look at this holistically and really help enterprises build safe application code that goes into production and runs their business. >> We saw Intel up in the keynotes today. We heard from Nvidia; both companies are coming on. We know you've done a lot of work with ARM over the years. I think Graviton was one of the announcements this week. So, love to see that. I want to run something by you as a technologist. The premise is, you know, we used to live in this CPU-centric world. We marched to the cadence of Moore's Law, and now we're seeing the combinatorial factors of CPU, GPU, NPU, accelerators, and other supporting components, with IO and controllers and NICs all adding up. It seems like we're shifting from a processor-centric world to a connectivity-centric world on the hardware side. So, first of all, do you buy that premise? And does hardware matter anymore, with all the cloud? >> Hardware totally matters. I mean, the cloud tried to convince us that hardware doesn't matter, and it actually failed. And the reason I say that is because if you go to a cloud, you'll find hundreds of different instance types that are all reflections of different types of assemblies of hardware. Faster IO, better storage, certain sizes of memory. All of that is a reflection of the fact that applications need certain types of environments for acceleration, for performance, to do their job.
Now, I do think there's an element of: we're decomposing compute into all of these different sorts of accelerators, and the only way to bring that back together is connectivity through the network. But there are also SoCs when you get to the edge, where you can integrate the entire system onto a pretty small device. I think the important part here is, we're leveraging hardware to do interesting work on behalf of applications. That makes hardware exciting. And as an operating system geek, I couldn't be more thrilled, because that's what we do. We enable hardware; we get down into the bits and bytes, poke registers, and bring things to life. There's a lot happening in the hardware world, and applications can't always follow it directly. They need that level of indirection through a software abstraction, and that's really what we're bringing to life here. >> We've now seen hardware-specific AI, you know, AI chips and AI SoCs, emerge. How do you make decisions about what you're going to support, or do you try to support all of them? >> Well, we definitely have a breadth view of support, and we're also just driven by customer demand. Where our customers are interested, we work closely with our partners. We understand what their roadmaps are. We plan together ahead of time, we know where they're making investments, and we work with our customers on what the best chips are to support their business needs, and we focus there first. But it ends up being a pretty broad list of hardware that we support. >> I could pick your brain for an hour. We didn't even get into supercloud, Chris. But thanks so much for coming on theCUBE. It's great to have you. >> Absolutely, thanks for having me. >> All right. Thank you for watching. Keep it right there. Paul Gillin, Dave Vellante, theCUBE's live coverage of Red Hat Summit 2022 from Boston. We'll be right back. (mellow music)
Chris Wright, Red Hat | Red Hat Summit 2021 Virtual Experience
>> Welcome back to theCUBE's coverage of Red Hat Summit 2021 Virtual. I'm John Furrier, host of theCUBE. We're here in Palo Alto, remote, with our great guest, a CUBE alumni who's been on many times: Chris Wright, senior vice president and CTO of Red Hat. Chris, great to see you. Always a pleasure to have you on the screen here, too. We're not in person, but thanks for coming in remote. >> Yeah, you bet. Glad to be here. >> Not only are we talking about speeds and feeds and digital transformation going under the hood here; we're going to talk about Red Hat's expanded collaboration with Boston University to help fund education and research for open source projects. So you guys have a huge relationship with Boston University. Talk about this continued commitment. What's the news, what's the story? >> Well, we have a couple of different things going on, and the relationship we have with BU is many years in, so this itself isn't brand new. One of the things that's important to highlight here is we are giving something north of $550 million worth of software to BU, really in pursuit of powering and running scaled infrastructure that's part of the open hybrid cloud. And that's an important piece, which we can touch on a little bit as we continue this conversation. The other one is, like I said, this isn't a new relationship with BU, and what we're doing now is really expanding the relationship. So we've built a great connection directly with BU, and we're substantially expanding that. The original relationship we had was a $5 million relationship spread over five years; now we're talking about a $20 million relationship spread over five years. So, really a significant expansion. And of course that expansion is connected to some of the work that we plan to do together in this open hybrid cloud infrastructure and research space.
So a lot of things are coming together at once to really advance the Red Hat Collaboratory at BU, that combined effort bringing, you know, cloud research and open source and all these things together. >> And a lot is actually going on. So basically, the Boston area has a lot of universities, but I love the shirt you're wearing, with its "Red Hat: innovation in the open." This is kind of one of those things. You also mentioned this huge subscription of software, a grant that's going to BU; just a huge number, real value for Boston University. But you also have another project that's been going on, the collaborative research and education agreement called the Red Hat Collaboratory. Okay, this was in place. You mentioned that. How's that tying in? Because that was pre-existing. Now you've got the grant, you've got your funding, more and more research. Talk about how this connects into the open cloud initiative, because this is kind of interesting. You're now bringing hybrid cloud kind of research and practical value in. AIOps is hot. You can't go anywhere these days without having great observability. Cloud native is more and more complex, and you've got these young students and researchers dying to get their hands on it. Take us through the connection between the Collaboratory and the open cloud. >> So the Collaboratory is a clever name that just speaks to collaboration and laboratory-type research. And initially, the Collaboratory's focus was on the infrastructure running the cloud and some of the application workloads that can run on top of an open cloud infrastructure that's very data-centric. And so this is an opportunity for multidisciplinary work, looking at modeling for healthcare, for example, for how you can improve imaging, and we've had great results in this collaboration.
We've talked at times about the relationship with the Boston Children's Hospital and the ChRIS project (not related to me, just a similar acronym that spells ChRIS). And these things come together in part through connecting relationships to academia, where academic research is increasingly built on and around open source software. So if you think of two parallel worlds: open source software development, just the activity of building open source software, brings so many people together and moves so quickly that if you're not directly connected to that as an academic researcher, you risk producing academic research results that aren't relevant, because it's hard for them to connect back to these large, fast-moving projects, which may have invented a solution to the problem you've been focused on as an academic if you're not directly connected. So we see academia and open source coming together to build really a next generation of understanding of the scientific endeavor. >> The operations you're talking about here, though, this is significant because there's dollars behind it, right? There's real money; it's not just the software, it's a center, it's a joint operation. >> That's right. And so when you think about just the academic research of producing ideas that manifest themselves as code and software projects, we want to make sure we're first connecting the software projects to open source communities, with our own engineering experience, bringing code into these open source projects to just advance the feeds and speeds, the kind of functionality, the state of the art of the actual project. We're also taking this to a new level with this expanded relationship, and that is: when you operate software as a cloud, a critical part of the software is the operationalization of that software. So software just sitting there on the shelf doesn't do anybody any good.
Even if the shelf is an open source project, it's a tarball waiting for you to download. If you don't ever grab it and run it, it's not doing anybody any good. And if the challenge of running it is substantial enough that it stops you from using that software, you've created a barrier to the value that's locked inside that project. The focus here is: how can we take the operations experience of running a cloud, which itself is a big, complex distributed system, and tie some of those experiences back into the projects that are used to build that infrastructure? So you're taking not just the output of the project, but also the understanding of what it takes to run a project, and bringing that understanding, and even the automation and code associated with that, back into the project. So you're operationalizing this open source software, and you're building a deeper understanding of what it means to operate things at scale, including data and data sets that you can use to build models that show how you can create remediation and closed-loop systems with AI and machine learning, you know, sort of synthesizing all the data that you generate out of a big distributed infrastructure and feeding that back into the operations of that same infrastructure. So, a lot going on there at the same time: operationalization as an open source initiative, but also really advancing the understanding of AI and data-centric operations, so AIOps and closed-loop remediation. >> Yeah, I mean, DevOps, developers and operations, to operationalize it. And certainly cloud native put an emphasis on day-two operations, which leads to a lot more research, a lot more student work on understanding the coding environment. So with that, I've got to ask you about this Massachusetts-focused open cloud initiative, because you guys are talking about this open cloud initiative including the Massachusetts Open Cloud. What is that? What is the Massachusetts Open Cloud?
It sounds like you're offering a kind of open consortium, not just BU but other institutions. >> That's right. So the MOC, the Massachusetts Open Cloud, is itself a cross-organizational collaboration bringing together five different academic institutions in New England, in Massachusetts: it's BU, it's Harvard, MIT, Northeastern, and UMass, coming together to support a common set of infrastructure, which is cloud. It's a cloud that runs in a data center, and it serves a couple of different purposes. One is research on clouds directly. So what does it mean to run a cloud? What does it look like from a research point of view to understand large-scale distributed systems? And then the other is more on top: when you have a cloud, you can run workloads, and those workloads scale out to do, say, data processing, looking at the implications across different fields, which could be natural sciences, could be medicine, could be even political science or social science. It's really a multidisciplinary view of what it means to leverage a cloud and run data-centric workloads on top. So, two different areas that are a focus for the MOC, and this becomes a sort of vehicle for collaboration between Red Hat, BU, and the Red Hat Collaboratory. >> So I have to ask, only because I'm a big fan of the area and I went to one of those schools: is there like a Beanpot for technical hackathons, where you get all the schools matched up against each other on the Mass Open Cloud and compete for who gets bragging rights in the tech community there? >> It's a great question. Not yet. But I'll jot that down here and follow up on that. >> Happy to sponsor. We'll do the play-by-play coverage, you know. >> Great, I love that. Yeah, kind of Twitch TV style. The one thing that there is, which is very practical, is academic research grants themselves are competitive, right?
People are vying for research dollars: they put together proposals, bring those proposals to the agency that's giving out grants, and winning those grants is certainly prestigious. It's important, as research institutes continue to fund the work that they're doing. Now, through the work we've done to date with BU, we've been associated with almost $15 million and 20 papers. So there's a lot of work; you can't quite call the play-by-play. It's a... >> Scoreboard. I mean, you can put numbers on the board. That's one of the things you can measure. But let me ask you on those grants. So you're saying, and this is just BU, you guys actually have data on the impact of the relationship in terms of grants and papers and academic work like that. >> That's right. That's right. And so those numbers that I'm giving you are examples of how we've worked together with BU to help their faculty generate grant dollars that then fund some of the research that's happening there, together with Red Hat engineers and on infrastructure like the Massachusetts Open Cloud. >> That's a good way to look at the scoreboard. It's a good point. If you don't mind me asking, on this data that you have: are all those projects contributing to open source, or do they have to be? Or is it just generic? Are all the papers around BU part of open source research? In other words, I'm trying to think, if I'm in open source, has this contributed to me as an open source community? >> Yeah, it's a big and complex question, because there's so much research that can happen through a research institution.
And those research grants tend to be governed with agreements, and some of those agreements have intellectual property rights front and center, and might require things like open source software as a result. The stuff that we're working on clearly is in that focus area of open source software, and research activities that help kind of propel our understanding forward of what it means to do large-scale distributed systems creation and then operation. So how do you develop software that does it? How do you run the software that builds these big, large distributed systems? So we're focused in that area. Some of the work that we've facilitated through that focus includes integrating non-open source software that might be part of, say, medical imaging. So, for example, the work we've done with the Boston Children's Hospital doesn't require 100% of the pieces to be open source, but all the infrastructure there to support it is. And so we're learning how we can build integrated pipelines for data analysis and image analysis and data sharing across different institutions at the open source project level. Well, maybe we have a specific imaging program that is not generated from this project, and of course that's okay with us. >> You know, Chris, you bring up a good point with all those conversations. I could see this really connecting the dots. Most computer science programs, most engineering programs haven't really traditionally focused on IT at the scale we're talking about, because we look at cloud scale, and now, scaling with hybrid, there's real engineering going on to think about the large scale. We know all the big hyperscalers, right? So it's not just IT provisioning, you know, network connection and doing some IT work. We're talking about large scale. So I have to ask you, as you guys look at these relationships with academia, like BU and others: how are the students responding to this?
Are you guys seeing any specific graduate level advancements? Because you're talking about operational roles that are becoming so important, whether it's cybersecurity or cloud, because once you're more data driven, you need to have all this new scale engineered up. >>How do you look at that? >>There's two different pieces that I would highlight. One is just the data science itself. So schools still need to produce data scientists. And having data is a big part of being a data scientist, and knowing what your goals are with that data and then experimenting with different techniques, whether it's algorithms or tools, is a big part of being a data scientist, sort of spelunking through the data. So we're helping produce data. We're looking at data science efforts around data that's used to operationalize infrastructure, which is an interesting data science endeavor by itself. The other piece is really what you highlighted, which is there's an emergence of a skill set in the industry, often referred to as SRE, site reliability engineering. It is an engineering discipline. And if you back up a little bit and you start thinking about what are the underlying principles behind large scale distributed systems, you get to some information theory and computer science. So this isn't just something that you might think of as some simple training on a few key tools and knowing how to interpret a dashboard, and you're good to go. This is a much more sophisticated view of what it means to really operate large scale infrastructure, which to date there aren't a lot of these large scale infrastructures available to academics to research, because they're commercial endeavors. >>And they're new, too. I was talking to some young folks, my son's and daughter's age, and I was saying, you know, architecting a building, a skyscraper, isn't trivial. You can't just do that overnight.
There's a lot of engineering that goes on in that science, but you're bringing kind of operating systems theory, systems thinking, to distributed computing. I mean, that's a combination of an interdisciplinary shift, and you've got, I won't say civil engineering, but the concept is there: you've got structure, you've got networks, they're changing, and then you've got software. So again, a completely new area. >>That's right, and there's not a lot of curriculum that even explores this space. So that's one of the opportunities: there's a great program that really focuses on that space of site reliability engineering, or operationalizing software. And then the other piece that I'm really excited about is connecting to open source communities, so that as we build software, we have a way to run and operationalize that software that doesn't have to be directly tied to a commercial outlet. So products running in the cloud will have a commercial SLA and commercial agreements between the user and the producer of that service. How do you do that in an open source context? How do you leverage a community, bring that community's software to a community run service, learn through the running of that service how to best architect the service itself, and then operationalize that service with tooling and automation? How do you bring that into the open source community? And that's something that we've been referring to as the Operate First initiative. How do you get the operationalization of software really thought of as a primary focal point in the software project, where you normally think about the internals of software, the features, the capabilities, the functionality, and less about the operationalization? So it's an important shift at the open source project level, which is something that I think will be really interesting and will reap a lot of rewards in open source communities directly. >>Yeah, speed and durability.
Certainly having that reliability is great. You know, I love talking with you guys at Red Hat because, you know, software, you know, open source, and you know, operating systems, and it all comes together in this modern era. What a great fit, great work you're doing with Boston University and the Mass Open Cloud initiative. Congratulations on that. I've got to ask you about this Red Hat Graduate Fellows program you have, because this kind of speaks to what you guys are doing. You have this Red Hat Graduate Fellows network and the work that's being done. Does that translate into Red Hat at all from an engineering standpoint? How does that work together? >>Basically, what we do is we support PhD students and post docs. So there's real direct support to the university, that is the Red Hat Graduate Fellows program, and our focus there is connecting those academics, the faculty members and the students, to our engineers to work together on key research initiatives that we think will help drive open source software agendas forward. It's really broad, it can be in all different areas, from security to virtualization to operating systems to cloud and distributed systems. And one of the things that we've discovered is it creates a great relationship with the university, and we find students that will be excited to leave university and come into the industry workforce and work at Red Hat. So there is a direct talent relationship between the work that we do at BU and the talent that we can bring into Red Hat, which is awesome. We know these people, we've worked well with them, but also we're kind of expanding the understanding of open source across, you know, more and more of academia, which I think is really valuable and important for Red Hat.
It also goes out to the industry at large, helping bring a set of skills to the industry. Whether these are students that come into Red Hat or go elsewhere into the industry, these are important skills to have: how do you work in open source communities, how do you operationalize software at scale? These are important things. >>They expand the territory, if you will, in terms of systems thinking. We just talked about great collaboration. You guys do a great job, Chris. Great to have you on. A quick final word from you on this year's Red Hat Summit. I know it's virtual again; I wish we could be in person, but we're starting to come out of COVID, kind of post-COVID right around the corner. What's the update? How would you describe the current state of Red Hat? Obviously you guys still have that vibe, you're still pumping strong, a lot going on. What's the current bumper sticker? What's the vibe? >>Well, in many ways, because we're so large and distributed, the last year has been, I can't say business as usual, because it's been an impact on everybody, but it hasn't required us to fundamentally change. And as we work across open source communities, there's been a lot of continuity that's come through a workforce that's gone completely distributed. People are anxious to get to the next phase, whatever back to normal means, and people at Red Hat are no different. So we're looking forward to what it can mean to spend time with colleagues in offices, we're looking forward to what it means to spend time together with our friends and families and travel and all those things. But from a business point of view, Red Hat's focus on the open hybrid cloud and that distributed view of how we work with open source communities, that's something that's only continued to grow and pick up over the course of the last year.
So it's clearly an important area for the industry, and we've been busier than ever the last year. So, interesting times for everybody. >>Well, it's great to see, and I love how the culture maintains its relevance, its coolness, the intersection between software, open source, and systems. Great work, and congratulations, Chris. Thanks for coming on. >>Thank you. >>All right. I'm John Furrier here with theCUBE for Red Hat Summit 2021. Thanks for watching.
Chris Wright, Red Hat | AnsibleFest 2020
(gentle music) >> Narrator: From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. Welcome back to our continuous coverage of AnsibleFest 2020. We're not in person this year, as everybody knows, but we're back covering the event. We're excited to be here, and really, our next guest, we've had him on a lot of times. He's super insightful, coming right off the keynote, driving into some really interesting topics that we're excited to get into. It's Chris Wright, the Chief Technology Officer of Red Hat. Chris, great to see you. >> Hey, great to see you. Thanks for having me on. >> Absolutely. So let's jump into it. I mean, you covered so many topics in your keynote. The first one, though, that just jumps off the page, right, is automation, and really rethinking automation. You know, I remember talking to a product manager at a hyperscaler many months ago, and he talked about the process of them mapping out their growth and trying to figure out how they were going to support it in their own data center. And he basically figured out, we cannot do this at scale without automation. So I think the hyperscalers have been doing it, but really it's kind of a new approach for enterprises to incorporate new and more automation in what they do every day. >> It's a fundamental part of scaling. And I think we've learned over time that, one, we need programming interfaces on everything. So that's a critical part of the beginning of the automation journey. So now you have a programmatic way to interact with all the things out there. But the other piece is just creating real confidence, knowing that when you're automating, you're taking tasks away from humans, which are actually error prone, and typing on the keyboard is not always the greatest way to get things done.
The confidence that those automation scripts or playbooks are going to do the right things at the right time, and creating really a business and a mindset around infusing automation into everything you do, is a pretty big journey for the enterprise. >>Right. And that's one of the topics you talked about as well. And you know, it comes up all the time with digital transformation or software development, this kind of shift of the focus from, you know, it's a destination to it's a journey. And you talked very specifically about how you need to think about automation as a journey and as a process and even a language, and really bake it into as many processes as you possibly can. I'm sure that shocks a lot of people and probably scares them, but really that's the only way to achieve these types of scales that we're seeing out there. >> Well, I think so. And part of what I was trying to highlight is the notion that a business is filled with people with domain expertise. So everybody brings something to the table. You're a business analyst, you understand the business part of what you're providing. You're the technologist, you really understand the technology. There's a partner ecosystem coming in with critical parts of the technology stack. And when you want to bring this all together, you need to have a common way to communicate. And what I was really trying to point out is that a language for communication across all those different cross-functional parts of your business is critical, number one, and number two, that language can actually be an automation language. And so, choosing that language wisely matters. Obviously we're talking at AnsibleFest, so we're going to be talking a lot about Ansible in this context. Treating that language wisely is part of how you build the end-to-end, sort of internalized view of what automation means to your business. >> Right.
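What that shared automation language looks like on the page can be sketched with a minimal Ansible playbook. The inventory group and package name here are hypothetical, purely for illustration, not something from the keynote:

```yaml
# Minimal Ansible playbook sketch. The "webservers" group and "httpd"
# package are illustrative assumptions, not taken from the interview.
- name: Ensure the web tier is installed and running
  hosts: webservers          # hypothetical inventory group
  become: true
  tasks:
    - name: Install the web server package
      ansible.builtin.package:
        name: httpd
        state: present

    - name: Ensure the service is enabled and started
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Because each task declares a desired state rather than a sequence of commands, the playbook doubles as readable documentation, which is part of what lets it serve as a cross-functional language between analysts and engineers.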
I wrote down a bunch of quotes from what you talked about: you know, Ansible is the language of automation, and automation should be a primary communication language. Again, a very different kind of language than we usually hear about. Now, it's more than a tool, but a process, a constant process, and should be an embedded component of any organization. So, I mean, you're really talking about automation as a first class citizen, not kind of this last thing for the most advanced, or potentially last thing for the most simple things where we can apply this process, but really a fundamental core of the way you think about everything that you do. Really a very different way to think about things, and probably really appropriate, you know, as we come out of 2020 in this kind of new world where, you know, everyone talked about distributed teams, and well, now you have distributed teams. And so, you know, the forcing function on better tooling, that's really wrapped in better culture, has never been greater than we're seeing today. >> I completely agree with that. And that domain expertise, I think we understand well in certain areas. So for example, application developers, they rely on one another. So you, maybe as an application developer, are consuming a service from somebody else in your microservices architecture, and so you're dependent on that other engineering team's domain expertise. Maybe that's even the database service, and you're not a DBA or an engineer that really builds schemas for databases. So we kind of get that notion of encapsulating domain expertise in building and delivering applications. That notion, the CI/CD pipeline, which itself is automating how you build and deliver applications, that notion of encapsulating domain expertise across a series of different functions in your business, can go much broader than just building and delivering the application. It's running your business. And that's where it becomes fundamental. It becomes a process.
That's the journey, you know, not the end state. It's not the destination, it's the journey that matters. And I've seen some really interesting ways that people actually work on this and try to approach it from the "How do you change your mindset?" angle. Here's one example that I thought was really unique. I was speaking with a customer who quite literally automated their existing process, and what they did was automate everything from generating the emails to the PDFs, which would then be shared as basically printed out documents for how they walked through business change when they're making a change in their product. And the reason they did that was not because that was the most efficient model at all. It was the way they could get the teams comfortable with automation. If it produced the same artifacts that they were already used to, then it created confidence, and then they could sort of evolve the model to streamline it, because printing out a piece of paper to review it is not going to be the efficient way to (indistinct) change your business. >>Well, just to follow up on that, right? 'Cause I think what probably scares a lot of people about automation, one is exception handling: can you get all the edge cases in the use cases? So in the one you just talked about, how do they deal with that? And then I think the other one is simply control. Do I feel confident enough that I can get the automation to a place where I'm comfortable handing over control? And I'm just curious, in that case you just outlined, how do they deal with those two factors? >>Well, they always enabled a human checkpoint, especially in the beginning. So it was sort of that trust-but-verify model, and over time you can look at the things that you really understand well and start with those, and the things that have more kind of gray zones, where the exceptions may be the rule or maybe the critical part of the decision making process.
Those can be flagged as needing real human intervention. And that's a way to sort of evolve and iterate, and not start off with the notion that everything has to be automated. You can do it piecemeal and grow over time, and you'll build confidence, and you'll understand how to flag those exceptions, where you actually need to change your process itself, because you may have bottlenecks that don't really make sense for the business anymore, and where you can incorporate the exception handling into the automation, essentially. >>Right, that's great. Thank you for sharing that example. I want to shift gears a little bit, 'cause another big topic that you covered in your keynote, that we talk about all the time on theCUBE, is Edge. So everybody knows what a data center is, everybody knows what a public cloud is, you know, lots of conversations around hybrid cloud and multicloud, et cetera, et cetera. But this new thing is Edge, and I think people talk about Edge in kind of an esoteric way, but I think you just nailed it. I mean, you just nailed it very simply: moving the compute to where the data is collected and/or consumed. You know, I thought that was super elegant, but what you didn't get into is all the complexity of what that means. I mean, data centers are pristine environments. They're very, very controlled: the environment's controlled, the network is controlled, the security is controlled. And then you have the vision of an Edge device, and the one everyone always likes to use is, let's say, a wind farm. Those things are out in crazy harsh conditions, and then there's still this balancing act as to what information does get stored and processed and used, and then what does have to go back to the data center, because it's not a substitute for the data center, it's really an extension of the data center, or maybe the data center is actually an extension of the Edge.
Maybe that's a better way to think of it, but we've had all these devices out there, and now suddenly we're connecting them, bringing them into a network, and adding control. And I just think the Edge represents such a big shift in the way we're going to see compute change, probably as fundamental, I would imagine, as the cloud shift has been. >> I believe it is, I absolutely believe it's as big a change in the industry as the cloud has been. The cloud really created scale, it created automation, programmatic interfaces to infrastructure and higher level services. But it also was built around a premise of centralization. I mean, clouds themselves are distributed, and so you can create availability zones and resilient applications, but there's still a sense of centralization. Edge is really embracing the notion that data production is kind of only up and to the right, and the way to scale processing that data, and turning that data into insights and information that's valuable for your business, is to bring compute closer to data. Not really a new concept, but the scale at which it's happening is what's really changing how we think about building infrastructure and building the support behind all that processing, and it's that scale that requires automation. Because you're just not going to be able to manage thousands or tens of thousands, or in certain scenarios even millions of devices, without putting automation at the forefront. It's critical. >>Right. And we can't talk about Edge without talking about 5G, and I laugh every time I'm watching football on Sundays and they have the 5G commercials on, talking about my handset and how I can order my food to get delivered faster at my house, like, completely missing the point. 5G is about machine to machine communication, and the scale and the speed and the volume of machine to machine is so fundamentally different than humans talking voice to voice.
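Managing fleets at the scale described here, thousands to millions of devices, is typically done in controlled batches rather than all at once. A sketch of what that looks like as an Ansible playbook, using the real `serial` and `max_fail_percentage` play keywords; the inventory group and package names are hypothetical:

```yaml
# Sketch of a batched edge rollout. The "edge_gateways" group and
# "sensor-collector" package are illustrative assumptions.
- name: Update edge gateways in small batches
  hosts: edge_gateways       # hypothetical group, possibly thousands of devices
  serial: "10%"              # touch only 10% of the fleet at a time
  max_fail_percentage: 5     # abort the rollout if more than 5% of a batch fails
  become: true
  tasks:
    - name: Push the new collector package
      ansible.builtin.package:
        name: sensor-collector
        state: latest

    - name: Restart the collector service
      ansible.builtin.service:
        name: sensor-collector
        state: restarted
```

The `serial` and `max_fail_percentage` keywords give the rollout the kind of trust-but-verify safety net discussed earlier: a bad update stops after a small slice of the fleet instead of taking down every device.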
And that's really the big driver to instrument, as you said, all these machines, all these devices. There have already been sensors on them forever, but now the ability to actually connect them, pull them into this network, and start to use the data and control the machines is a huge shift in the way things are going to happen going forward. >> A couple of things are important in there. Number one, that data production and sensors and bringing compute closer to data, what that represents is bringing the digital world and the physical world closer together. We'll experience that at a personal level with how we communicate. We're already distributed in today's environment, and the ways we can augment our human connections through a digital medium are really going to be important to how we maintain our human connections. And then on the enterprise side, we're building this infrastructure in 5G that, when you think about it from a consumer point of view and ordering your pizza faster, it really isn't the right way to think about it. A couple of key characteristics of 5G: greater bandwidth, so you can just push more packets through the network; lower latency, so you're closer to the data; and higher connection density and more reliable connections. And that combination of characteristics makes it really valuable for enterprise businesses. You can bring your data and compute close together, and you have these highly reliable and dense connections that allow for device proliferation, and that's the piece that's really changing where the world's going. I like to think of it in a really simple way, which is: 4G and the cloud and the smartphone created a world that today we take for granted, and 10 years ago we really couldn't imagine what it looked like. 5G, device proliferation, and Edge computing today are building the footprint for what we can't really imagine now, but will be taking for granted 10 years from now.
So we're at this great kind of change inflection point in the industry. >> I always have to take a moment to call out (indistinct). I think it's the most underappreciated law, and it's been stolen by other people and repackaged in many ways, but it's basically: we overestimate the impact of these things in the short term, and we way, way, way underestimate the impact in the long term. And I think of your story in the keynote: once we had digital phones and smartphones, we don't even think twice about looking at a map, and where are we, and where is a store close by, and are they open, and is there a review? I mean, the infrastructure to put that together, kind of an API based economy pulling together all these bits and pieces, and the built-in expectation of performance and how fast that information is going to be delivered to me. I think we still take it for granted. As you said, I think it's like magic, and we never thought of all the different applications of these interconnected apps, enabled by an always-on device that's always connected and knows where we are. It's a huge change. And as you say, when we think about 5G 10 years from now, oh my goodness, where are we going to be? >>It really is hard to imagine, and I think that's okay. And what we're doing today is introducing everything that we need to help businesses evolve and take advantage of that, and that scale is a fundamental characteristic of the Edge. And so automating to manage that scale is the only way you're going to be successful, and extending what we've learned in the data center out to the Edge, using the same tools, the things we already understand, really is a great way to evolve a business. And that's where that common language comes in, and the discussions that I was trying to generate around Ansible as a great tool. But it's not just the tool, it's the whole process, the mindset.
The culture change, the way you change how you operate your business, that's going to allow us to take advantage of a future where my clothes are full of sensors and you can look through a video camera and tell immediately that I'm happy with this conversation. That's a very different kind of augmented reality than we have today, and maybe it's a bad example, but it's hard to imagine really what it will be like. >> So, Chris, I just want to close on a slight shift. We've been talking a lot about technology, but you talk about culture all the time, and really it's about the people. I think a number of times in the keynote you reinforced that this is about people and culture. And I just had InaMarie Johnson on, the Chief Diversity Officer from Zendesk, and she said culture eats strategy for breakfast. Great line. So I wonder if you can talk about the culture, because it's very different, and you've seen it in open source at Red Hat for a long time: really a shifting culture around open source, the shifting culture around DevOps and continuous delivery, and change is a good thing, not a bad thing, and we want to be able to change our code frequently and push out new features. So again, as you think of automation and culture, what comes to mind, and what should people be thinking about when they think about the people and less about the technology? >> Well, there's a couple of things. Some will reinforce what we already touched on, which is the notion of creating confidence in the automation. So there's an element of trust associated with that, and that's maybe more trusting the technology. So when you're automating something, you've already got a process, you already understand how something works. It's turning that something into an automated script, or a playbook in the Ansible context, and trusting that it's going to do the right thing. There's another important part of trust, which is getting more to the people part.
And I've learned this a lot from open source communities: collaboration and communities are fundamentally built around trust, and human trust relationships. And the change in process means trusting not only that the tools are going to do the right job, but that people are really assuming good intent and working toward the right outcomes for your business. I think that's a really important part of the overall picture. And then finally, that trust is extended to knowing that that change for the business isn't going to compromise your job. So think differently about what is your job: is your job to do the repetitive task, or is your job to free up your time from that repetitive task to think more creatively about the value you can bring to the business? And that's where I think it's really challenging for organizations to make changes, because you build a personal identity around the jobs that you do, and making changes to those personal identities really gets to the core of who you are as a person. And that's why I think it's so complicated. The tools are almost the easy part; it's the process changes and the cultural changes, the mindset changes behind that, which are difficult but more powerful in the end. >> Well, I think with people, process, tools, the tech is always the easy part relative to culture and people, and changing the way people do things, and as you said, who their identity is, how they get kind of wrapped into what they do and what they think their value is and who they are. So to free them up from that, that's a really important point. Well, Chris, I always love having you on. Thank you for coming on again, sharing your insight. Great keynote. And give us your last word about AnsibleFest 2020. What are you looking to take away from this little show?
>>Well, number one, my personal hope is that the conversation I was trying to sort of ignite through the keynote is an opportunity for the community to see where Ansible fits in the Edge and automation, and in really helping the industry at large scale. And that key part of bringing a common language to help change how we communicate internally is the message I was hoping to impart on the AnsibleFest community. And so hopefully we can take that broader, and I appreciate the time here to really amplify some of those messages. >> All right, great. Well, thanks a lot, Chris, and have a great day. >> Thanks Jeff, thank you. >> All right. He's Chris, I'm Jeff. You're watching theCUBE and our ongoing coverage of AnsibleFest 2020. Thanks for watching, we'll see you next time. (gentle music)
Chris Wright, Red Hat | AnsibleFest 2020
>> Narrator: From around the globe, it's theCube. With digital coverage of AnsibleFest 2020. Brought to you by Red Hat. (twinkly music) >> Hey, welcome back, everybody. Jeff Frick here with theCube. Welcome back to our continuous coverage of AnsibleFest 2020. We're not in-person this year, as everybody knows, but we're back covering the event. We're excited to be here, and really our next guest... We've had him on a lot of times. He's super insightful. Coming right off the keynote, diving into some really interesting topics that we're excited to get into, and it's Chris Wright. He's the chief technology officer of Red Hat. Chris, great to see you. >> Hey, great to see you. Thanks for having me on. >> Absolutely. So let's jump into it. I mean, you covered so many topics in your keynote. The first one though, that just jumps off the page, right, is automation, and really rethinking automation. And I remember talking to a product manager at a hyperscaler many moons ago, and he talked about the process of them mapping out their growth and trying to figure out how they were going to support it in their own data center. And he just basically figured out we cannot do this at scale without automation. So I think the hyperscalers have been doing it, but really it's kind of a new approach for enterprises to incorporate new, and more, automation into what they do every day. >> It's a fundamental part of scaling, and I think we've learned over time that, one, we need programming interfaces on everything. So that's a critical part of beginning of the automation journey, so now you have a programmatic way to interact with all the things out there. 
But the other piece is just creating, really, confidence in knowing that when you're automating and you're taking tasks away from humans, which are actually error-prone, and typing on a keyboard is not always the greatest way to get things done, the confidence that those automation scripts, or playbooks, are going to do the right things at the right time. And so creating, really, a business and a mindset around infusing automation into everything you do is a pretty big journey for the enterprise. >> Right. And that's one of the topics you talked about as well, and it comes up all the time with digital transformation or software development; this kind of shift the focus from kind of it's a destination to it's a journey. And you talked very specifically that you need to think about automation as a journey, and as a process, and even a language, and really bake it into as many processes as you possibly can. I'm sure that shocks a lot of people and probably scares them, but really that's the only way to achieve the types of scales that we're seeing out there. >> Well, I think so. And part of what I was trying to highlight is the notion that a business is filled with people with domain expertise. So everybody brings something to the table. You're a business analyst. You understand the business part of what you're providing. You're the technologist. You really understand the technology. There's a partner ecosystem coming in with a critical parts of the technology stack. When you want to bring this all together, you need to have a common way to communicate. What I was really trying to point out is a language for communication across all those different cross-functional parts of your business is critical, number one, and number two, that language can actually be an automation language. And so choosing that language wisely... Obviously, we're talking at AnsibleFest, so we're going to be talking a lot about Ansible in this context. 
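To make that "language" concrete: the playbooks being discussed are short, declarative YAML documents. Here is a minimal sketch (the host group and service name are invented for illustration, not taken from the interview):

```yaml
---
# Minimal Ansible playbook sketch. "webservers" and "nginx" are
# illustrative placeholders, not details from the conversation.
- name: Keep the web tier in a known-good state
  hosts: webservers
  become: true
  tasks:
    - name: Ensure the web server package is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure the service is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Each task declares a desired end state rather than a sequence of keystrokes, so running the playbook twice is safe. That idempotence is a large part of the confidence in automation being described here.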
Choosing that language wisely is part of how you build the end-to-end sort of internalized view of what automation means to your business. >> Right. I mean, I wrote down a bunch of quotes that you talked about. "Ansible is the language of automation, and automation should be a primary communication language." Again, very different kind of language that we don't hear. And that it's "more than a tool, but a process, a constant process, and should be an embedded component of any organization." So I mean, you're really talking about automation as a first class citizen, not kind of this last thing for the most advanced, or potentially last thing for the most simple things where we can apply this process, but really needs to be a fundamental core of the way you think about everything that you do. Really a very different way to think about things, and probably really appropriate as we come out of 2020 in this kind of new world where everyone liked distributed teams. Well, now you have distributed teams, and so the forcing function on better tooling that's really wrapped in better culture has never been greater than we're seeing today. >> I completely agree with that. That domain expertise I think we understand well in certain areas. So for example, application developers, they rely on one another. So you're, maybe as an application developer, consuming a service from somebody else in your microservices architecture, and so you're dependent on that other engineering team's domain expertise. Maybe that's even the database service, and you're not a database, a DBA, or an engineer that really builds schemas for databases. We kind of get that notion of encapsulating domain expertise in the building and delivering of applications. 
That notion, the CI/CD pipeline, which itself is automating how you build and deliver applications, that notion of encapsulating domain expertise across a series of different functions in your business can go much broader than just building and delivering the application. It's running your business. And that's where it becomes fundamental. It becomes a process that's the journey. Not the end state. And it's not the destination. It's the journey that matters. And I've seen some really interesting ways that people actually work on this and try to approach it from the "how do you change your mindset?" Here's one example that I thought was really unique. I was speaking with a customer who quite literally automated their existing process, and what they did was automate everything from generating the emails to the PDFs, which would then be shared as basically printed out documents for how they walked through business change when they're making a change in their product. And the reason they did that was not because that was the most efficient model at all. It was that was the way they could get the teams comfortable with automation. If it produced the same artifacts that they were already used to, then it created confidence, and then they could sort of evolve the model to streamline it, because printing out a piece of paper to review, it is not going to be the efficient way to make changes in your business. >> Well, just to follow up on that, right, cause I think what probably scares a lot of people about automation... One is exception handling, right? And can you get all the edge cases in the use case. So in the one you just talked about, how do they deal with that? And then I think the other one is just simply control. Do I feel confident enough that I can get the automation to a place that I'm comfortable to hand over control? And I'm just curious, in that case you just outlined, how do they deal with kind of those two factors? 
>> Well, they always enabled a human checkpoint. Especially in the beginning. So it was sort of "trust but verify" that model, and over time you can look at the things that you really understand well and start with those, and the things that have more kind of gray zones, where the exceptions may be the rule, or may be the critical part of the decision making process, those can be sort of flagged as "needs real kind of human intervention," and that's a way to sort of evolve, and iterate, and not start off with the notion that everything has to be automated. You can do it piecemeal and grow over time, and you'll build confidence, and you'll understand where... How to flag those exceptions, where you actually need to change your process itself, because you may have bottlenecks that don't really make sense for the business anymore, and where you can incorporate the exception handling into the automation, essentially. >> Right. That's great. Thank you for sharing that example. I want to shift gears a little bit, cause another big topic that you covered in your keynote that we talk about all the time on theCube is edge, right? So everybody knows what a data center is. Everybody knows what a public cloud is. Lots of conversations around hybrid cloud and multi cloud, et cetera, et cetera, et cetera... But this new thing is edge, and I think people talk about edge in kind of an esoteric way, but I think you just nailed it. I mean, you just nailed it. It's very simply moving the compute to where the data is collected and/or consumed. I thought that was super elegant, but what you didn't get into on all the complexity is what that means, right? I mean, data centers are pristine environments that... They're very, very controlled. The environment's controlled. The network is controlled. The security is controlled, and you have the vision of an edge device. And the one everyone always likes to use is say like a wind farm, right? 
Those things are out in crazy harsh conditions, and then there's still this balancing act as to what information does get stored, and processed, and used, and then what does have to go back to the data center, because it's not a substitute for the data center. It's really an extension of the data center, or maybe the data center is actually an extension of the edge. Maybe that's a better way to think of it, but we've had all these devices out there. Now, suddenly we're connecting them and bringing them into a network and adding control. And I just thought the edge represents such a big shift in the way we're going to see compute change. Probably as fundamental, I would imagine, as the cloud shift has been. >> I believe it is. I absolutely believe it's as big a change in the industry as the cloud has been. The cloud really created scale. It created automation, programmatic interfaces to infrastructure and higher level services. But it also was built around a premise of centralization. I mean, clouds themselves are distributed, and so you can create availability zones and resilient applications, but there's still a sense of centralization. Edge is really embracing the notion that data production is kind of only up and to the right, and the way to scale, processing that data, and turning that data into insights and information that's valuable for a business, is to bring compute closer to data. It's not really a new concept, but the scale at which it's happening is what's really changing how we think about building infrastructure and building the support behind all that processing. And it's that scale that requires automation, because you're just not going to be able to manage thousands, or tens of thousands, or in certain scenarios even millions of devices, without putting automation at the forefront. It's critical. >> Right. 
And we can't talk about edge without talking about 5G, and I laugh every time I'm watching football on Sundays and they have the 5G commercials on talking about my handset, that I can order my food to get delivered faster at my house, completely missing the point, right? 5G's about machine-to-machine communication, and the scale, and the speed, and the volume of machine-to-machine is so fundamentally different than humans talking voice-to-voice. And that's really this big driver to instrument, as you said, all these machines, all these devices. There's been sensors on them forever, but now the ability to actually connect them, and pull them into this network, and start to use the data, and control the machines is a huge shift in the way things are going to happen going forward. >> Well, it's a couple of things that are important in there. Number one, that data production, and sensors, and bringing compute closer to data, what that represents is bringing the digital world and the physical world closer together. We'll experience that at a personal level with how we communicate. We're already distributed in today's environment, and the ways we can augment our human connections through a digital medium are really going to be important to how we maintain our human connections. And then on the enterprise side, we're building this infrastructure in 5G that when you think about it from a consumer point of view and ordering your pizza faster, it really isn't the right way to think about it. Couple of key characteristics of 5G: greater bandwidth, so you can just push more packets through the network; lower latency, so you're closer to the data; and higher connection density and more reliable connections, and that kind of combination of characteristics make it really valuable for enterprise businesses. You can bring your data and compute close together. 
You have these highly reliable and dense connections that allow for device proliferation, and that's the piece that's really changing where the world's going. I like to think of it in a really simple way, which is 4G, and the cloud, and the smartphone created a world that today we take for granted. 10 years ago, we really couldn't imagine what it looked like. >> 5G- >> Jeff: Like tomorrow... Excuse me. >> Device proliferation, and edge computing today is building the footprint for what we can't really imagine what we will be taking for granted in 10 years from now. So we're at this great kind of change in inflection point in the industry. >> Yeah. I have to always take a moment to call out a Amara's law. I think it's the most underappreciated law. It's been stolen by other people and repackaged many ways, but it's basically we overestimate the impact of these things in the short term, and we way, way, way, way kind of underestimate the impact in the longterm. And I think your story in they keynote about once you had digital phones and smartphones, we don't even think twice about looking at a map, and where are we, and where's a store close by, and are they open, and is there a review? I mean, the infrastructure to put that together, kind of an API-based economy, which is pulling together all these bits and pieces... (scoffs) The stupid rely... Expectation, right, of performance, and how fast that information's going to be delivered to me. I think we so take it for granted. As you say, I think it's like magic, and we never thought of all the different applications of these interconnected apps enabled by an always-on device that's always connected and knows where we are. It is a huge change, and as you say that when we think about 5G... (chuckling) 10 years from now. Oh, my goodness. Where are we going to be? >> It's hard to imagine? I mean, it really is hard to imagine, and I think that's okay. 
And what we're doing today is introducing everything that we need to help businesses evolve. Take advantage of that. And that scale of the edge is... It's a fundamental characteristic of the edge, and so automating to manage that scale is the only way you're going to be successful, and extending what we've learned in the data center out to the edge using the same tools, the things we already understand, really is a great way to evolve a business. And that's where that common language and the discussions that I was trying to generate around Ansible as a great tool. But it's not just the tool, it's the whole process, the mindset, the culture change, the way you change how you operate your business that's going to allow us to take advantage of the future where my clothes are full of sensors and you can look through a video camera and tell immediately that I'm happy with this conversation. That's a very different kind of augmented reality than we have today. Maybe it's a bad example, but it's hard to imagine really what it'll be like. >> So Chris, I just want to close on a slight shift, right? We've been talking a lot about technology, but you talk about culture all the time, and really, it's about the people. And I think a number of times in the keynote you reinforced this is about people and culture. And I just had I'm InaMarie Johnson on, the chief diversity officer from Zendesk. And she said culture eats strategy for breakfast. Great line. So I wondered if you can talk about the culture, because it's very different and you've seen it in opensource from Red Hat for a long time, really, a shift in culture around opensource, the shift in culture around devops, and continuous delivery, and "change is a good thing, not a bad thing," and we want to be able to change our code frequently and push out new features. 
So again, as you think of automation and culture, what kind of comes to mind, and what should people be thinking about when they think about the people and less about the technology? >> Well, there's a couple of things. I'll reinforce what we already touched on, which is the notion of creating confidence in the automation. There's an element of trust associated with that, and that's maybe more trusting the technology. So when you're automating something, you've already got a process. You already understand how something works. It's turning that something into an automated script, or playbook in the Ansible context, and trusting that it's going to do the right thing. There's another important part of trust, which is getting more to the people part, and I've learned this a lot from open source communities. Collaboration and communities are fundamentally built around trust, and human trust relationships, and the change in process: trusting not only that the tools are going to do the right job, but that the people are really assuming good intent and trying to build for the right outcomes for your business. I think that's a really important part of the overall picture. And then finally, that trust is extended to knowing that that change for the business isn't going to compromise your job, right? So thinking differently about what is your job. Is your job to do the repetitive task, or is your job to free up your time from that repetitive task to think more creatively about value you can bring to the business? That's where I think it's really challenging for organizations to make changes, because you build a personal identity around the jobs that you do, and making changes to those personal identities really gets to the core of who you are as a person. That's why I think it's so complicated. The tools are almost the easy part.
It's the process changes and the cultural changes, the mindset changes behind that which is difficult, but more powerful in the end. >> Yeah. Yeah. Well, I think people, process, tools... The tech is always the easy part relative to culture, and people, and changing the way people do things, and as you said, who their identity is, how they get kind of wrapped into what they do, and what they think their value is, and who they are. So to free them up from that, that's a really important point. Well, Chris, I always love having you on. Thank you for coming on again, sharing your insight. Great keynote, and give me the last word about AnsibleFest 2020. What are you looking forward to take away from this little show? >> Well, number one, my personal hope is that the conversation that I was trying to sort of ignite through the keynote is an opportunity for the community to see where Ansible fits in the edge and automation, and helping, really the industry at large, scale. And that key part of bringing a common language to help change how we communicate internally is the message I was hoping to impart on the AnsibleFest community, and so hopefully we can take that broader. Appreciate the time here to really amplify some of those messages. >> All right. Great. Well, thanks a lot, Chris, and have a great day. >> Thanks, Jeff. Thank you. >> All right. He's Chris. I'm Jeff. You're watching theCube, and our ongoing coverage of AnsibleFest 2020. Thanks for watching. We'll see you next time. (twinkly music)
Chris Wright, Red Hat | AnsibleFest 2020 Keynote
>> If you want to innovate, you must automate at the edge. I'm Chris Wright, chief technology officer at Red Hat. And that's what I'm here to talk to you about today. So welcome to day two of AnsibleFest 2020. Let me start with a question: do you remember 3G, when you first experienced mobile data connections? The first time that internet on a mobile device was available to everyone? It took forever to load a page, but it was something entirely different. It was an exciting time. And then came 4G, and suddenly data connections actually became usable. Together with the arrival of smartphones, people were suddenly online all the time. The world around us changed immensely. Fast forward to today, things are changing yet again: 5G is entering the market. And it's an evolution that brings about fundamental change in how connections are made and what will be connected. Now it's not only the people anymore who are online all the time; devices are entering the stage: sensors, industrial robots, cars, maybe even the jacket you're wearing. And with this revolutionary change in telecommunications technology, another trend moves into the picture: the rise of edge computing. And that's what I'll be focusing on today. So what is edge computing exactly? Well, it's all about data. Specifically, moving compute closer to the producers and consumers of data. Let's think about how data was handled in the past. Previously, everything was collected, stored and processed in the core of the data center. Think of server racks, one after the other. This was the typical setup. And it worked as long as the environment was similarly traditional. However, with the new way devices are connected and how they work, we have more and more data created at the edge and processed there immediately. Gathering and processing data takes place close to the application users, and close to the systems generating data.
The fact that data is processed where it is created means that the computing itself now moves out to the edge as well. Outside of the traditional data center barriers into the hands of application users. Sometimes, literally into the hands of people. Look at your smartphone next to you, is one good example. Data sources are more distributed. The data is generated by your mobile phone, by your thermostat, by your doorbell, and data distribution isn't just happening at home, it's happening in businesses too. It's at the assembly line, high on top of a cell tower, by a pump deep down in a well, and at the side of a train track, every few miles for thousands of miles. This leads to more distributed computing overall. Platforms are pushed outside the data center. Devices are spread across huge areas in inaccessible locations, and applications run on demand close to the data. Often even the ownership of the devices is with other parties. And data gathering and processing is only partially under our direct control. That is what we mean by edge computing. And why is this even interesting for us, for our customers? To say it with the words of a customer, edge computing will be a fundamental enabling technology within industrial automation. Transitioning how you handle IT from a traditional approach, towards a distributed computing model, like edge computing, isn't necessarily easy. Let's imagine how a typical data center works right now. We own the machines, create the containers, run the workloads and carefully decide what external services we connect to, and where the data flows. This is the management sphere we know and love. Think of your primary OpenShift cluster for example. With edge computing, we don't have this level of ownership, knowledge or control. The servo motors in our assembly line are black boxes controlled only via special APIs. 
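Devices that are black boxes with only an API can still be driven from the same automation language. A hedged sketch using Ansible's `uri` module follows; the endpoint, motor ID, token variable, and response fields are all assumptions for illustration, not a real vendor API:

```yaml
---
# Illustrative only: the controller URL, API path, and JSON fields
# are invented; a real device would document its own interface.
- name: Query a servo motor controller through its REST API
  hosts: localhost
  gather_facts: false
  vars:
    controller_url: "https://servo-controller.example.com"
  tasks:
    - name: Read the status of motor 42
      ansible.builtin.uri:
        url: "{{ controller_url }}/api/v1/motors/42/status"
        method: GET
        headers:
          Authorization: "Bearer {{ api_token }}"
        return_content: true
      register: motor_status

    - name: Report the torque reading
      ansible.builtin.debug:
        msg: "Torque: {{ motor_status.json.torque }}"
```

The point is not the specific API: once the device's interface is wrapped in tasks like these, it speaks the same language as everything else under management.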
The small devices next to our train tracks run an embedded operating system, which does not run our default system management software. And our doorbell is connected to a cloud, which we do not control at all. Yet we still need to be able to exercise control; our business processes suddenly depend on what is happening at the edge. That doesn't mean we throw away our ways of running the data centers; in fact, the opposite is true. Our data centers are the backbone of our operations. In the data center, we still tie everything together and run our core workloads. But with edge computing, we have more to manage. To do so, we have to leave our comfort zones and reach into the unknown. To be successful, we need to get data, tools and processes under management and connect it all back to our data center. Let's take train tracks as an example. We're in charge of a huge network: thousands of miles of tracks zig-zagging across the country. We have small boxes next to the train tracks every few miles, which collect data on the passing trains, take care of signaling and so on. These trackside boxes are extremely rugged devices, doing their jobs on the coldest winter nights and the hottest summer days. One challenge in our operation is that if we lose connection to one box, we have to stop all traffic on this track segment: no signal, no traffic. So we reroute all of the traffic, passengers, cargo, you name it, via other track segments. And while the track segments now suddenly have unexpected traffic congestion and so on, we send a maintenance team to figure out why we lost the signal, do root cause analysis, repair what needs to be fixed and make sure it all works again. Only then can we reopen the segment. As you can imagine, just bringing a maintenance team out there takes time; finding the root issue and solving it also takes time. And all the while, traffic is rerouted. This can amount to a lot of money lost.
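The reachability problem just described is itself a natural first automation target. A hedged sketch that probes every trackside box from the control node and flags the silent ones (the inventory group name and port number are illustrative assumptions):

```yaml
---
# Illustrative sketch: "trackside_boxes" and port 830 are assumptions.
- name: Probe every trackside box for basic reachability
  hosts: trackside_boxes
  gather_facts: false
  tasks:
    - name: Try to reach the box's management port from the control node
      ansible.builtin.wait_for:
        host: "{{ inventory_hostname }}"
        port: 830
        timeout: 5
      delegate_to: localhost
      register: probe
      ignore_errors: true

    - name: Flag a segment that needs attention
      ansible.builtin.debug:
        msg: "Box {{ inventory_hostname }} is not answering; its segment may need to be closed"
      when: probe is failed
```

Running something like this on a schedule turns "we lost the signal" from a surprise into a report, before trains have to stop.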
Now imagine these little devices get a new software update and are now able to report not only the signals sent across the tracks, but also the signal quality. And with those additional data points, we can get to work. Subsequently, we can see trends, and the device itself can act on these trends. If the signal quality is getting worse over time, the device itself can generate an event, and from this event, we can trigger follow-up actions. We can get our team out there in time, investigating everything before the track goes down. Of course, the question here is: how do you even update the device in the first place? And how do you connect such an event to your maintenance team? There are three things we need in order to properly tie these events together and answer this challenge. First, we need to be able to connect through the last mile. We need to reach out from our comfort zones, down the tracks, and talk to a device running a special embedded OS on a chip architecture we don't have in our data center. And we have thousands of them. We need to manage at the edge in a way suited to its scale. Besides connecting, we need the skills to address our individual challenges of edge computing. While the train track example is a powerful image, your challenge might be different. Your boxes might be next to an assembly line, or on a shipping container, or in a unit under an antenna. Finally, the edge is about the interaction of things, without our data center or humans in the equation at all. As I mentioned previously, in the end there is an event generated by the little box. We have to take the event and first increase the signal strength temporarily between this box and the other boxes on either side, to buy us some more time. Then we ask the corporate CMDB for the actual location of that box, put all this information into a ticket, and assign the ticket to the maintenance team at high priority to make sure they get out there soon.
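Strung together, that response (raise signal strength on the neighbors, look the box up in the CMDB, open a high-priority ticket) could be a single event-triggered playbook. Everything below, URLs, payloads and field names included, is an illustrative assumption rather than a real system:

```yaml
---
# Illustrative sketch of the event response; all endpoints and
# fields are invented for the example. "event" is assumed to be
# supplied by whatever system delivered the degraded-signal event.
- name: React to a degraded-signal event from a trackside box
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Temporarily raise signal strength on the neighboring boxes
      ansible.builtin.uri:
        url: "https://edge-gateway.example.com/boxes/{{ item }}/signal"
        method: POST
        body_format: json
        body:
          level: high
          ttl_minutes: 120
      loop: "{{ event.neighbor_ids }}"

    - name: Ask the corporate CMDB for the box's physical location
      ansible.builtin.uri:
        url: "https://cmdb.example.com/api/devices/{{ event.box_id }}"
        return_content: true
      register: cmdb_record

    - name: Open a high-priority ticket for the maintenance team
      ansible.builtin.uri:
        url: "https://tickets.example.com/api/issues"
        method: POST
        body_format: json
        body:
          summary: "Signal degradation at box {{ event.box_id }}"
          location: "{{ cmdb_record.json.location }}"
          priority: high
          assignee: trackside-maintenance
```

Each task encapsulates one domain (radio gear, CMDB, ticketing), which is exactly the cross-team collaboration the keynote is arguing for.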
As you can see, our success here critically depends on our ability to create an environment with the right management skills and technical capabilities, one that can react decentrally in a secure and trusted way. And how do we do these three things? With automation. Yeah, it might not come as much of a surprise, right? However, there is a catch: automation as a single technology product won't cut it. It's tempting to say that an automation product can solve all these problems. Hey, we're at a tech conference, right? But that's not enough. Edge computing is not simple, and the solution to its challenges is not simply a tool that we buy three buckets full of and spread across our data center and devices. Automation must be more than a tool. It must be a process, constantly evolving, iterating on and on. We only have a chance if we embed automation as a fundamental component of the organization, and use it as a central means to reach out to the last mile. And the process must not focus on the technology itself, but on people: the people who are in charge of the edge IT, as well as the people in charge of the data center IT. Automation can't be a handy tool that is used occasionally; it should become the primary language for all people involved to communicate in. This leads to cooperation and common ground to further evolve the automation, and at the same time ensures that the people build and improve the necessary skills. With the processes and the people aligned, we can shed light on the automation technology itself. We need a tool set that is capable of doing more than automating an island here and a pocket there. We need a platform powerful enough to provide the capabilities we need and support the various technologies, devices, and services out at the edge. If we connect these three findings, we come to a conclusion: to automate the edge, we need a cultural change that embraces automation in a new and fundamental way.
As a new language, integrating across teams and technology alike. Such a unified automation language speaks natively with the world out there as well as with our data centers, at any scale. And this very same language is spoken by domain experts, by application developers and by us as automation experts, to pave the way for the next iteration of our business. And this language has the building blocks to create new interfaces, tools and capabilities, to integrate with the world out there and translate the events and needs into new actions, being the driving motor of the IT at the edge and evolving it further. And yes, we have this language right here, right now. It is the Ansible language. If we come back to our train tracks one more time, it is Ansible that can reach out and talk to our thousands of little boxes sitting next to the train tracks. With the Ansible language, the domain experts for the boxes can natively work together with the train operations experts and the business intelligence people. Together, they can combine their skills to write workflows in a language they can all understand, and where the deep-down domain knowledge is encapsulated away. And the Ansible platform offers the APIs and components to react to events in a secure and trusted way. If there's one thing I'd like you to take away from this, it is: edge computing is complex enough, but luckily we have the right language, the right tools, and an awesome community right here with you at our fingertips, to build upon and grow even further. So let's not worry about the tooling, we have that covered. Instead, let's focus on putting that tooling to great use. We need to become able to execute automation anywhere we need it: at the edge, in the cloud, in other data centers. In the end, just like with serverless functions, the location where the code is actually running should not matter to us anymore.
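To make the scale point concrete, here is a rough Python sketch, not actual Ansible code, of how an edge event could be mapped to a named automation workflow and fanned out to thousands of boxes in manageable batches. The playbook file names and the batch size are invented for illustration:

```python
# Hypothetical mapping from edge events to automation workflows; in an
# Ansible-based setup each entry would name a playbook or rulebook action.
EVENT_ACTIONS = {
    "signal_degrading": "boost-neighbor-signal.yml",
    "signal_lost": "reroute-and-dispatch.yml",
}

def batches(hosts, size):
    """Split a large device inventory into rolling batches, so we never
    touch every box at once (similar in spirit to Ansible's `serial`
    keyword for rolling updates)."""
    for i in range(0, len(hosts), size):
        yield hosts[i:i + size]

def plan_run(event_type, hosts, batch_size=100):
    """Return the workflow to run and the host batches to run it against."""
    playbook = EVENT_ACTIONS[event_type]
    return [(playbook, batch) for batch in batches(hosts, batch_size)]

inventory = [f"box-{n:04d}" for n in range(1, 251)]   # 250 trackside boxes
plan = plan_run("signal_degrading", inventory)
print(len(plan), plan[0][0], plan[0][1][:2])
```

Batching is the design choice that makes "manage at the edge in a way suited to its scale" tractable: a bad change hits one slice of the fleet, not all of it.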
Let's hear about this from someone who is right at the core of the development of Ansible. Over to Matt Jones, our automation platform architect.
Chris Wright, Red Hat | Red Hat Summit 2020
From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat.

>> Welcome back. This is theCUBE's coverage of Red Hat Summit 2020, the event of course happening digitally, and we're bringing in the guests from where they are around the globe. Happy to welcome back to the program one of the keynote speakers, who is also a many-time CUBE alumnus: Chris Wright, the senior vice president and chief technology officer at Red Hat. Chris, it is great to see you, and we've got almost matching hats. You have a real red fedora; I've got one that, you know, the Red Hat Kubernetes team, the OpenShift team, gives out in Europe. So in case anybody in the Red Hat community wonders: yes, I've been a longtime member of the community. I think I got my original Red Hat baseball cap probably 15 years ago, but the hat that I have is not one of the nice felt ones. Good to see you. All right, so we've got to wait a little bit for your keynote, but there are so many topics I want to get to with you. Of course, as I mentioned, it's pretty obvious everyone's remote right now; these are, you know, special times we are living in. So bring us inside a little bit, you know, your organization, your group, your community, what this means and how everybody is doing.

>> Well, I mean, it'd be hard not to acknowledge that there's a major global event happening right now, and COVID is really changing how we operate and how we work. From a Red Hat perspective, our number one priority is just employee safety and employee health, and so we were quick to send our folks home and have everybody work from home. What's interesting from a Red Hat point of view, I think, and even if you broaden that out to open-source communities, is the distributed nature of open-source development; specifically, the engineering teams at Red Hat are pretty distributed, kind of mirroring those open-source communities that we participate in. So on the one hand you can say, well, things haven't changed substantially in the sense of how we operate in upstream communities. But on the other hand, people working from home is a whole new set of challenges. I mean, my kids are 12 and 14, but, you know, say you have toddlers, that's a real distraction, or you have a working environment at home that's crowded with multiple people. It can really change how you approach your daily work life. So creating that balance has been really important, and for our teams we talk a lot about empathy: think about how you're supporting one another. And again, when you broaden that out to the larger communities, I think a really important aspect of open-source development is crossing corporate boundaries and being inclusive of such a broad set of contributors that there's a built-in resiliency associated with open-source communities, which I think is fantastic. And then when you add to that the enthusiasm around just doing great things, there are a lot of interesting activities that are collaborative in nature, that are community-based, that are trying to address the COVID crisis, whether it's 3D printing of supplies or contact-tracing applications that help people understand where they've come across COVID. A lot of cool stuff happening that's inspired by a real challenge to the entire globe.

>> Yeah. Okay, Chris, one of my favorite things the last few years at Summit has been talking to companies that are going through their journey of what we usually call digital transformation. What we have always said from the research side is that what separates the people that have successfully gone through this is data: they become data-driven, and data is such an important piece of what they're doing. I think everyone has been getting a real crash course on data, because not only businesses but, you know, governments and the entire globe are now watching the daily data, trying to understand data sources. Bring us inside as to the importance of data and where that intersects with everything Red Hat is doing.

>> Well, those are great examples. It's sometimes a little depressing, but the notion is that data is a critical part of decision-making, and access to quality data in real time is what helps us make better, more effective, and more efficient decisions. When you look at the amount of data being produced, it just keeps growing; it's on an exponential growth curve. And when you look at the commensurate amount of compute power associated with all of that data, it's also growing, which is maybe an obvious statement. What it says is: we are gathering more and more data, and the degree to which we can pull meaningful insights out of that data is really how much we can impact our companies' value and differentiation. In the context of something like COVID, that means vaccine discoveries and shortening times to field trials; in a more business context, it's about how quickly you can respond to your customers' needs. And we see a really dynamic shift with the workforce all working from home; that puts a real strain on the infrastructure. We're here supporting infrastructure builders, and the amount of data they can collect to efficiently operate infrastructure is critical at a time when people are distributed and getting access into lab environments is challenging. So I think there's a lot to be said for the amount of data being produced and then how we analyze it. We think of it in terms of bringing data to applications; historically, they kind of lived in separate silos. Bringing the data sources, data processing, and model development all onto a common platform is a really powerful thing that's happening in the industry today, which is exciting. So we're bringing data in as a central actor, is how I like to describe it.

>> Yeah, well, I'm really glad you connected that discussion of data to the applications. As you know, my background really is on the infrastructure side, and the concern I have a lot of times is that as infrastructure people we talk about the bits and bytes, we talk about the infrastructure, but the only reason we have infrastructure is to run those applications and deal with that data. I was hoping you could connect the dots for us. In the keynote, one of the main things discussed was the open hybrid cloud, and I had a great discussion about it on theCUBE. So with that setup of applications and data, how does that intersect with what Red Hat calls the open hybrid cloud, and what differentiates Red Hat's position from some of the other discussions we hear in the industry about cloud?

>> The open hybrid cloud is a platform; I think that's the best way to think of it. And that platform spans different types of infrastructure: public clouds, on-premises data centers, the enterprises' own environments, and, I think importantly, increasingly out to the edge. So the notion of where you deploy isn't coupled to "what platform do I have to develop to in order to do that deployment?" And when we talk about extending out to the edge, that means you're getting closer to those data sources. Bringing the data in, doing the associated inference, and making decisions close to that data, where latency really can matter, is a big part of what that open hybrid cloud platform brings to the market and to our customers. When you think about an application developer, typically they're trying to enable some behavior or feature or functionality, and the more we can use data to drive that behavior or functionality, the more personalized and intelligent the application is. So the connection between the data sources, the data processing, the data science behind data cleansing and model generation, and the associated models that can be easily accessed by applications, that's the real power, the real value we're working to deliver for our customers so they can change their business. We actually do this internally; it's how we operate. We collect data, we use data to make decisions, we use data in our product release process, and the platform we've created is a data processing, analytics, and machine learning platform that we use internally. We also make it externally available as an open-source project, the Open Data Hub. So open, data, and hybrid cloud are all intertwined at this point.

>> Yeah, one of the things that has really been highlighted to me at Summit this year is that connection. We always knew Red Hat had a strong developer community out there, but you think back to Linux: Linux has ties directly into the application. You look across the portfolio, and it's not the app-dev team over here and the infrastructure team over there wondering how to operate all of these various pieces; Ansible, for example, has connections into all the various roles. So I want you to comment, from your CTO role looking over the entire portfolio, on that discussion of how roles are changing, and how organizations make sure they're not a bunch of functions out of sync but are really coming together to respond to the business needs and move forward at the speed today's world requires.

>> Well, I think the early stages of that were well captured by the DevOps phrase: bringing developers and operations closer together. It's not always clear what that means, and in some cases, around the notion of a platform, who operates an application and who operates the platform, there's been some question in the industry about exactly what that means. We're thinking of it today, to stick with the buzzwords, in the DevSecOps context, and even what I would call AI DevSecOps: data- and intelligence-infused DevSecOps. The idea is that developers are just trying to move rapidly, so the degree to which the underlying infrastructure is simply there to support application development is what the operations teams are trying to provide. Developers need, at the same time, access to tooling, consistency from test environments through to production environments, and also access to those data models I was talking about earlier. So bringing that all together on the DevOps or DevSecOps side means asking: how can you build a platform that gives the right business-specific guidelines and guardrails that allow developers to move as quickly as possible without getting themselves into trouble, without, as a concrete example, inadvertently creating a security vulnerability by pulling in an old dependency? Bringing these things together is what's really important, and it's a big part of what we're focused on. The operational side is being infused with intelligence: that's data and telemetry you're gathering at the platform level, and using models to inform how you operate the system. Then, if you go up a level to the application development CI/CD pipeline: where can you make intelligent recommendations to developers as they're pulling in dependencies or even writing code, and give easy access to the data science workflow, so that what you're delivering is a well-integrated model with an application whose lifecycle and maintenance are well understood?

>> So, Chris, this is the seventh year we've had theCUBE at Red Hat Summit. Of course, Red Hat itself has a large portfolio, but beyond Red Hat there are countless open-source projects out there, and you have a huge partner ecosystem. You were just talking about DevOps; I've got one of those charts sitting at my desk that shows DevOps tooling, some of the platforms and all the various pieces, and I think there are only, you know, 50 or 80 different tools on it. How are Red Hat and the community overall helping customers deal with this challenging world? We've got the paradox of choice out there. We understand that everybody needs something a little bit different, but how are you helping to give a little structure and guidance in this ever-changing world?

>> Well, I think it's one of the values of pulling content together. If you think of a set of components being brought together as curation, then we're helping curate the content, and assembling pieces together turns out to be a lot of work, especially when you want to lifecycle-manage those components together. So one basic thing we're doing is bringing together an entire distribution of content: it's not just Linux, it's not just Kubernetes; it's Linux and Kubernetes engineered together with a set of supporting tooling for logging, monitoring, and CI pipelines, all brought together in a context that is opinionated or prescriptive. What we also focus on is understanding that every enterprise has its own legacy, history, and set of investments. So that process of bringing together an opinionated stack also needs to incorporate flexibility: where can we plug in a CI pipeline your enterprise already has, or your monitoring and logging tools? That kind of flexibility allows us to combine best-of-breed components from the open-source communities with a whole set of ecosystem partners. And going back to that Open Data Hub conversation, there are a lot of data-centric tools in the Open Data Hub open-source project, and we have commercial partners that can support things like, say, Spark, as a concrete example, or TensorFlow. Those are open-source projects, but they're not coming from Red Hat; they're coming from our ecosystem partners. Combine all that into something engineered to work together, and you're taking a lot of the friction out of the system so developers can just move quickly.

>> All right, Chris, give us a little preview: what are people going to see in the keynote? Some people will be watching this interview live, but others will see it afterwards. I believe edge is one of the pieces you'll be touching on, but give us a little of what we can expect.

>> Well, you'll have to come to the keynote to get the full experience, but what we're trying to talk through is how data is fundamentally changing business. We talk through that storyline starting with how it impacts Red Hat: at one level we're an enterprise, we have our own business needs, and we use data to drive how we operate. We also see that the platforms we're building really help our customers harness the value of data and change their own business. And in that context, we look at some ways those business changes have industry-wide effects. We talk about things like 5G and artificial intelligence, and where these things come together, especially in edge computing, a really interesting space where they all converge. So it's that broad storyline: data is something we use to change how we operate; something we build into platforms so our customers can change how they operate; and ultimately those changes have major impacts across the industry, which is pretty exciting.

>> Yeah, I'm curious, Chris. I think back a few years ago, I would have been interviewing you about NFV, and many of those themes feel like they were setting the table for the discussion we've been having about 5G. Do you agree with that? What's the same and what's different from what we were looking at five years ago?

>> Very much so, and I love that question, because it touches on something I think is really important: it's very much an evolution. In the tech world we talk so much about disruption, and I think we overplay it. What's interesting is that technology evolution, consistently changing and moving forward, gives rise at points in time to really interesting convergences of change that can be disruptive. As a concrete example, NFV historically was about improving the operational efficiencies of the service providers building networks, and helping them move more rapidly so they could introduce new services. Most of that was focused on 4G, and most of it on the core of the network. Today we're introducing 5G; across the industry, the discussions are moving, technology-wise, to where containers fit into this new world, and the discussion at the network level is not only in the core but all the way out to the edge. And when you look at the edge, where you have a portion of the network operating as software, you have a platform like OpenShift that can also host enterprise- or consumer-facing applications. So all those early stages of NFV are culminating in a place today where the technology supports a total software infrastructure for the network, and you can use that same cloud running the network to power enterprise- or consumer-facing applications. That's pretty far from where we were in the early days of NFV; very much an evolution. Take it one step further: 4G, smart devices, and cloud computing gave rise to a set of disruptive businesses ten years ago. Those businesses did not exist then; today we can't imagine life without them. 5G, device proliferation, not just smartphones but a whole set of new devices, and edge computing are the ingredients for that same next wave of innovation, where ten years from now there will be businesses we won't be able to imagine our lives without. So we're at a really interesting inflection point, partially through this evolution of technology, and I think it's really exciting.

>> All right, Chris, last question. There are always so many different pieces going on; Red Hat strikes a nice balance, without as much of the hoopla around announcements, but so much of what it does is built on open source. There are always things I run across where I think, I need to look down that rabbit hole a little: what was that Quarkus thing? I think I'd heard that word before. Or all the projects at the CNCF where Red Hat's involved. So in the last minute here, give us any areas where people should say, hey, go google this, go look up this project, other cool things you and your team are working on that you want to highlight.

>> Well, you've mentioned one, which is Quarkus. Often we talk about infrastructure, but I think Quarkus is a really cool project that is developer-focused. It's in the Java space, and it's bringing Java from an enterprise development platform into a modern language that can be used to build cloud-native applications or even serverless functions. I think serverless is a critical space; we've been talking for quite some time about all the ways serverless can be impactful, and we're in a place now where Knative, as a project, is maturing, and the world around it is getting more sophisticated. We have a serverless offering as part of the OpenShift platform, so paying attention to what's happening in the Knative space is really important. There's also a whole new set of management challenges in the security and multi-cluster space; we're bringing technology to bear there, and as Red Hat we will bring it out as open-source projects. So look to the open-source communities around things like ACM, Advanced Cluster Management, for multi-cluster managed environments, which are the norm at this point. Those are some examples of things I think are important. And then there's a world of data-focused work: all of the data science tools, too many to enumerate, but that, I think, is an example where open source is leading the industry in terms of where those tools are developed and the coverage and access developers have to them.

>> All right, thank you so much, Chris. Always a pleasure to catch up with you, and definitely looking forward to your keynote.

>> All right, thank you.

>> All right, lots more coverage. Check out theCUBE.net; you can see all the interviews after they've gone out live, they will be on demand. All those projects Chris mentioned, we've had deep dives on all of them. Hit up Chris or myself on Twitter if you have any follow-up; we always love to hear the feedback. I'm Stu Miniman, and as always, thank you for watching theCUBE. (upbeat music)
Chris Wright, Red Hat | AWS re:Invent 2019
From Las Vegas, it's theCUBE, covering AWS re:Invent 2019, brought to you by Amazon Web Services and Intel, along with its ecosystem partners.

>> Welcome back to the Sands. Here we are, live in Las Vegas, along with Justin Warren. I'm John Walls, and you're watching theCUBE's coverage of AWS re:Invent 2019, day one. Andy Jassy was on the keynote stage this morning for a couple of hours, and now it's a jam-packed show. Chris Wright joins us, the CTO at Red Hat, making his way toward CUBE Hall of Fame status. We're getting there; this is probably, what, 50 appearances, I think. Good to see you.

>> Good to see you. Yeah, always a pleasure.

>> First off, let's just talk about the broad landscape right now: the pace of innovation that's going on, what's happening in the open cloud, catching up to that acceleration. If you're a legacy enterprise, you've got all these guys that are born over here and moving at warp speed, and you've got to play catch-up. Talk about maybe that friction, if you will, and what people are learning in terms of trying to get caught up to the folks that have a head start.

>> Well, I think, number one, the way I like to frame it is that open source is the source of innovation for the industry. Part of that is the collaborative model, bringing different people together across the industry to build technology together; it's hard to compete with that pace and speed. The challenge, of course, is, as you describe, how do you consume that? How do you bring it into the enterprise, which has a whole business running on infrastructure that has been sustaining it for potentially decades? There's that impedance mismatch of needing to go quickly, to keep abreast of the technology changes, while honoring the fact that your core business is already running on key technology. So I think looking at how you bring in platforms that support the newer technologies, as well as create connections to, or even support, existing applications, is a great way to bridge that gap. And then partnering with people who can build a bridge, an impedance match between your speed and the speed of innovation, is a great way to harness the power without exposing yourself to the ragged edges as much.

>> Sure. Yeah, talk to us a bit more about enterprise experience with open source. Red Hat has a long heritage of providing open source to the enterprise, and pretty much sits out there as a unique example of how you make money with open source. Enterprises have lots of open source they're using every day; Linux has come into the enterprise left, right, and center, but there's a lot more open-source technology that enterprises are using today. Give us a flavor of how enterprises are coming to grips with how open source helps sustain their business.

>> Well, in one sense it's that innovation engine, bringing new technology. In another sense, what we've experienced in the Linux space is almost driving a commoditization of infrastructure: switching away from the traditional vertically integrated stack of a RISC/UNIX environment to providing choice. You have a common platform you can target all your applications to, and that creates independence from the underlying hardware; that's something that provides real value to the enterprise. That notion continues to play out today as infrastructure changes: it's not just hardware, it's virtualized data centers, it's public clouds. How do you create that consistency for developers to target their applications to, and for the operations teams to manage? Through leveraging open source and bringing a common platform into your environment. As you go up the stack, I think you get more and more proliferation of ideas and choices, from developer tools to modules and dependencies; most software stacks today have some open source included inside. Even when you're building exclusively on top of a platform that's open-source based, you're probably also including open source in your application. So it's a whole variety, from building your key infrastructure to supporting your enterprise applications.

>> And you mentioned openness, which is a very important thing to Red Hat, and one thing Red Hat has been speaking of lately is the open hybrid cloud. Maybe you can explain that to us: what is the open hybrid cloud, and what does Red Hat mean by it?

>> Sure. Open hybrid cloud, for us, starts with open: our platforms are built from open-source projects, and we work across literally thousands of them, bringing those together into the products that build our platform. We also create an open ecosystem, fostering partnerships and collaboration at every level, from the developer level up through our commercial partnerships. The hybrid piece is about where you deploy this infrastructure: inside your data center on bare-metal servers, inside your data center virtualized, in a private cloud, across multiple public clouds, and increasingly out to the edge. That notion of "what is the data center?" really encompasses all those different footprints. And the cloud piece means giving a cloud-like experience: from an operations point of view, simple to operate, with us doing everything we can to help operators manage that infrastructure; from a developer point of view, surfacing functionality as services and APIs and giving developers a self-service environment, like a cloud. So it's across all of that.

>> You talk about data and the edge; there's so much computing going on out there, staying closer to the source, right? We're not bringing it back in; you're leaving it out there. That adds a whole new level of complexity...
I would think and scale you know massive amounts what everything is happening out there so what are you seeing in that in that in terms of handling that complexity and addressing challenges that you see coming as this growth is tremendous growth continues well one it's how do you manage all of that infrastructure so I think having some consistency is a great way to manage that so using the same platform across all of those different environments including the edge that's really going to give you a direct benefit to targeting your applications to that same common platform having the ability to recognize some dependencies so maybe you have a dependency on a data set and that data sets supplied from sources that are in an edge location we can codify that and then enable developers to build applications you know do test dev Prada cross a variety of environments pushing all the way out to an edge deployment where you know thinking you're taking in a lot of data you may be building models in a scale out environment internally in your private cloud or out in the public cloud taking those models deploying those to the edge for inference in real time to make real-time decisions based on data flows through the system and that's that's the world that we live in today so managing that complexity is critical automation for managing that consistency common platforms I think are key tools that we can use to to help build up that that rich in person just from an industry perspective so who does who's that applied to in your mind right what kind of industry is looking at this and saying all right this is this is a an opportunity but also a challenge for us and something we really need to address what's the array there do you think honestly I see it across almost all market verticals so we look at the world or a platform centric view from from a RedHat perspective so we look at the world across industries what I find interesting in the edge use cases is they tend to get more 
vertically specific so in a manufacturing case you know maybe you're dealing with a manufacturing line which is a set of applications and a set of devices which looks quite different from a retail office or branch office environment some similar problems but very different environments and then you take the service providers networks the telco network out of the edge and that looks quite different from a manufacturing floor so you know it's a it's a wide variety of vertically oriented solutions drawing from some common platform technologies containers Linux you know how do you do automation across all of those environments that machine learning tools those are the things that I think are consistent but you get all a lot of very vertically focused use cases yeah I'm now in the canine today that that Andy was mentioning that they love open source and when we're here at Amazon and and he likes to talk about the compatibility that and customer choice is also very important to Amazon's wit tell us a little bit about how openness interacts with somewhere like ADA we're actually we're here at reinvent which is an ADA where show so how does Red Hat and AWS work together how do you coexist in this ecosystem and get the benefits of open source technologies we could exist in a number of different ways one would be as engineers working together in open source communities building technology another is we have commercial partnerships so we run our platforms on top of AWS so we bring customers to AWS which is a shared you know we have a shared benefit there and then there's also areas where we have competitive offerings so it's you know it's a full spectrum kind of the modern world of the buzzword co-op petitioner or whatever you know it I really think when you look in the open source communities engineers thrive on building great technologies together independent of any kind of corporate boundaries commercially people develop relationships that are complicated today and we have 
a great working relationship we've run a lot of our cloud customers on Amazon but again there's there's areas where we're both invested in kubernetes ours is openshift there's a zk s so customers have a choice in that context yeah sorry is that in that context that there are some in the open-source community who view cloud as possibly a bit of a villain and certain things we've seen some some dynamics around some particular providers around the debt the database face I went I went name 50 particular players but we've seen some competitive moves in in that place so do you see cloud is it the villain or is it an enabler of open-source technologies well it's definitely an enabler now there's a complicated scenario and this like is it a villain which is how do we create sustainable communities and in the context where a technology is developed largely by one vendor and it's monetized largely by another vendor it's not going to be a very sustainable model so we just have to focus on how are we building technology together and building it in a sustainable way and part of that is making the contributions back into the community to help the project's themselves grow and thrive part of it is having a great diversity of contributors into the into the project and recognizing that business models change and you know the world evolves yeah that doesn't introduce an element of risk it's been around for a while that enterprise are a little bit concerned about open source oh well who's really behind this will this project or software still be here in six months that seems to be decreasing as as the commercial support for particular open source projects and initiatives come to me and we see the rise of foundations and so on that try to give a little bit of an underpinning to some of these projects particularly ones that are critical for the supportive of enterprise technologies do you see enterprises maturing in their view of open source do they do they see it as no no that we 
understand that this is definitely a sustainable technology whereas these other ones like yeah that one's not quite there yet or do they still need a lot of assistance in making that kind of decision I've been at it for a couple of decades so in the beginning there was a lot of evangelism that this is safe it's consumable by the enterprise it's not some kind of crazy idea to bring open-source you're not gonna lose your intellectual property or things like that those days I mean I'm sure you could find an exception but those days are largely over in this in the sense that open source has gone mainstream so I would say open source is one most large enterprises have an open-source strategy they consider open source as critical to not only how they source software from vendors but also how they build their own applications so the world has really really evolved and now it's really a question of where are you partnering with vendors to build infrastructure that's critical to your business but not your differentiator and where are you leveraging open source internally for your to differentiate your business I think that's a more sophisticated view it's not the safety question it's not is it is it legally you know that you're bringing legal concerns into the picture it's really a much different conversation and people in the enterprise are looking how can we contribute to these projects so that's really it's pretty exciting actually so so what do you think it is then in the maturation process then as it did is it in the adolescent years is it growing into young adulthood you said you've been at it for a long time and it's more acceptable but where are we you think on that in that arc you know what in terms of adapting or or adopting if you will that philosophy probably depends on where you are in the layer of the stack and so the lower you get into the infrastructure the more commonplace it is the closer you get to differentiated value and something that's really unique 
there's less reason to even build those applications as open source if it's only you consuming it you know pretty pretty broad spectrum there I think that in general we're in some level of adulthood it's a very mature world in the open-source communities and what's interesting today is how we change business models around deploying and consuming open source technologies and then a next generation of technology will be very data-centric data drives a whole set of questions there's policy and governance around data placement there's model training and model exchanging and where models come from data or the models open source is the data shareable you know that it sets a whole new wave of questions that I think in that context it's much earlier so that's our next interview by the way with Chris next time down the road thanks for the time as always really good to see you and I know you're you're awfully busy this week so we really do appreciate you carving out a little slice of time glad to do face press yeah thank this right over Red Hat CTO back with Justin and John live on the cube here at AWS reinvent 2019
Chris Wright, Red Hat | Red Hat Summit 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> Good to have you back here on the Cube as we continue our coverage, live at Red Hat Summit 2019. This is day three of our coverage; we've been with you since Tuesday. And now, just fresh off the keynote stage, joining Stu Miniman and myself is Chris Wright, VP and chief technology officer at Red Hat. Good job there, Chris. Thanks for being with us this morning. >> Thank you. Glad to be here. >> Great. Among your central themes, you talked about this new cycle of innovation, those components, and how they're integrating to create all these great opportunities. So, if you would, just share for those at home who didn't have an opportunity to see the keynote this morning what you were talking about, how these pieces play together, and where that lies with Red Hat. >> Yeah, you bet. So I think an important first concept is that a lot of what we're doing is laying a foundation, or a platform. Red Hat's focus is in the platform space, so we think of it as building this platform upon which you build and innovate. And what we're seeing is that a critical part of the future is data, so we're calling the keynote data-centric; it's the data-centric economy. Along with that is machine learning: all the intelligence that comes with it, the insights you're deriving, the insights you're grabbing from that data. It introduces some interesting challenges around data and privacy and what we do with that data. I mean, we're all personally aware of this; you see the Cambridge Analytica stuff, and we all have concerns about our own data. When you combine all of that with techniques for how we can create insights from data without compromising privacy, we're really pushing the envelope into full distributed systems, edge deployments, data coming from everywhere, and the insights that go along with that. So it's a really exciting time, built on a consistent platform like OpenShift. >> So, Chris, I always love getting to dig in with you, because that big trend of distributed systems is something we've been working on for quite a long time. But we fully agree: you said data is at the center of everything, and there's that role of even more distributed systems, the multi-cloud world. Customers have their stuff everywhere, and getting their arms around it, managing it, being able to leverage and take advantage of that data, is super challenging. So help us understand some of the areas that Red Hat and the communities are looking at to solve those problems. Where are we, what's going well, and what's still left to work on? >> Well, there's a couple of different aspects. Number one, we're building these big, complex systems. Distributed systems are challenging; distributed systems engineers are really solving hard problems, and we have to make that accessible to everybody's operations teams. It's one of the things that I think the cloud taught us: when you sort of outsource your operations to somebody else, you get this encapsulated operational excellence. We need to bring that to wherever your workloads are running. And so we talked a lot about AIOps: how you harness the value of the data that's coming out of this complex infrastructure, feed it through models, gain insights, and then predict. Really, ultimately, we're looking at autonomic computing, how we can create autonomous clouds, things that are operating themselves as much as possible with minimal human intervention, so we get massive scale. I think that's one of the key pieces. The other one is really talking to a different audience: the developers. Developers are trying to incorporate similar types of intelligence into their applications. You're making recommendations; you're trying to personalize applications for end users. They need easy access to that data. They need easy access to trained models. So how do we do that? How do we make that challenging, data-scientist-centric workflow accessible to developers? >> Yeah, just some of the challenges out there. I think about ten, fifteen years ago: you'd talk to people, and it was like, well, I had my central source of truth and it was a database. You talk to most companies now and it's like, well, I've got at least a dozen different databases and all my different flavors of them, whether in the cloud or in my environment, and things like AIOps trying to help people get involved with them. You talked a little bit in your keynote about some of the partners you're working with. So how do you bring these together and simplify them when they're getting even more and more fragmented? >> Well, it's part of the challenge of innovation. I think there's a natural cycle. Creativity spawns new ideas, and new ideas are encapsulated in projects, so there's a wave of expansion in any kind of new technology time frame. Then, ultimately, you see some contraction as we get really clear winners and the best ideas. In the container orchestration space, Kubernetes is a great example of that. We had a lot of proliferation of different ways of doing it; today we're consolidating as an industry around Kubernetes. So what we're doing is building a platform, building a rich ecosystem around that platform, and bringing in our partners who have specific solutions, whether it's the top side of the house, talking to the operations teams, or giving developers easy access to data and trained models through some of the partners we had today, like Perceptive Labs and H2O.ai. Bringing it to a common platform, I think, is a critical part of helping the industry move forward, and ultimately we'll see where these best-of-breed tools come into play. >> Maybe help a little bit in terms of practical application. You've got open source, where you've got this community development going on, and then people customize based on their individual needs. All well and great, right? But how does the inverse happen, where somebody who does some customization comes up with a revelation of some kind that applies back to the general community? I'm thinking of a case like the Boston Children's Hospital imaging work we saw, which actually related to another industry somehow and gave them an aha moment that maybe they weren't expecting, with open source as the driver. >> Yeah, I think what we showed today were some examples where, if you distill it down to the core, there are some common patterns. There's data, there's streaming data, there's the data processing, and there's a connection of that processed data or trained model to an application. So we've been building an open source project called Open Data Hub, where we can bring people together to collaborate on what tools need to be in this kind of framework or stack. And as we do that, we're talking to banks looking at anti-money-laundering and fraud detection, and we're talking to hospitals looking at completely different use cases, like HCA Healthcare, which is using data to reduce the amount of time nurses need to spend gathering information from patients and to clearly identify sepsis concerns. Totally different applications, similar framework. So getting that industry-level collaboration is the key, and having common platforms, common tools, and a place to rally around these bigger problems is exactly how we do that through open source. >> So Linux sits at an interesting place in the stack. You talked about the one commonality and everything like that, but we're actually at a time of proliferation at the hardware level. I'm an infrastructure and hardware guy by background, and it was like, oh, I thought we were going to homogenize everything, standardize everything, and now you're showing off NVIDIA stuff and all these pieces. There are all these new things, everything from the mainframe through the latest Arm processors. Give us a little insight into how your teams are geeking out, making sure they provide that commonality yet can take advantage of some of the cool, awesome stuff out there that's enabling the next wave of innovation. >> Yeah, so I share that infrastructure geekness with you, so I'm stoked that we're in this cycle of hardware innovation. I'll say something that may sound controversial. If we go back in time just five years or a little more, the focus was around cloud computing and bringing a massive number of apps to the cloud, and the cloud had kind of a T-shirt-size, small-medium-large view of the world of compute. It created this notion that compute is homogeneous. It's a lie. If you go to a cloud provider today and count the number of different machine types, or instance types, they have, it's not just three; it's a big number. And those are all specialized: for I/O throughput, for storage acceleration, for big memory. It's all these different use cases that are required for the full set of applications. Maybe you get eighty percent in a common core, but there's a whole bunch of specific use cases that require performance optimizations that are unique. And what we're seeing, I think, is Moore's law and the laws of physics kind of colliding a little bit, and the way to get increased acceleration is through specialized hardware. So we see things like TPUs from Google, we see Intel doing DL Boost, we've got GPUs and even FPGAs, and the operating system is there to give a consistent application runtime while enabling all those hardware components and bringing it all together, so applications can leverage the performance acceleration without having to be tied directly to it. >> Yeah, you actually wrote about that in one of your blog posts: how hardware plays this hugely important role. You also talked about innovation and change happening incrementally, and that's not how we tend to think about it; we think of big bangs, right? But you pointed out that in open source it really is step by step by step, whereas we think of disruption as being very dramatic, and there's nothing sexy about step by step. Yet that's how we get to disruption. >> I kind of hate "innovation" and "disruption"; they're buzzwords. On the one hand, that's what captures attention; on the other, it's not necessarily clear what they mean. I like the idea that in open source we make everyday, incremental improvements, and it's the culmination of all these improvements over time that unlocks new opportunities. People ask me all the time, what's the future? What do we do, and what's going on? You know, we're kind of doing the same thing we've been doing for a long time. You think about microservices as a way to encapsulate functionality and share and reuse it with other developers; well, object-oriented programming decades ago was really trying to establish that same capability for developers. So the technologies change, we're building on our history, and we're always incrementally improving. You bring it all together, and yes, occasionally you can apply that in a business case that totally disrupts an industry and changes the game. But I really want to encourage people to think about what incremental changes they can make to create something fundamentally new. >> All right, I need to poke at that a little bit, Chris, because there's one thing I see when I look back a decade or two in my career. We used to talk about things like intelligence and automation; those have been around my entire career. But you look at today, and when we talk about intelligence and automation, it's not what we were doing then; the degree of change is different. If we'd looked at it before, it would have seemed like science fiction had arrived. So sometimes, when we're going step by step, we lose sight of things making step-function improvements, and now there are massive changes. So I'd love your opinion there. >> Yeah, well, I think it's a combination. I talk about the perpetual pursuit of excellence. Pick a field; we were talking about management, data, and how you apply that data. We've been working towards autonomic computing for decades; the concepts and the research are old, but the details, technologies, and tools we have today are quite different. I'm not sure that's always a major step function; I think part of it is this incremental change. And then you look at the amount of processing power in the GPU today. This is a problem the industry has been working on for quite a long time, and at some point we realized, hey, the vector processing capabilities in the GPU really suit the machine learning matrix-multiplication workload, a real-world use case. That was a fundamental shift that unlocked a whole bunch of opportunity in terms of how we harness data and turn it into knowledge. >> Yes. So are there any areas you look at now, after all this work, where you feel we're getting to those tipping points, where waves of technology are coming together to really enable some massive change? >> I do think our ability to generate data, move it around efficiently, have access to it from a processing capability, and turn it into a model has so fundamentally changed in the past couple of decades that we're tapping into the next generation of what's possible. Things like the holy grail of a self-healing, self-optimizing, self-driving cluster are not as science fiction as they felt twenty years ago. >> It's kind of exciting. You've talked about the past and the present, but there's very much a place in the future here, right? What does that future look like, from that AI perspective? It's a little scary sometimes to some people. So how are you going about working with your partners to bring them along and accept certain notions that maybe five or six years ago would have been a little tough to swallow or feel comfortable with? >> Yeah, well, there's a couple of different dimensions there. One is finding tasks that computers are great at to augment the tasks that humans are great at. In the example we had today, and I love that example, it was: let's have computers crunch numbers, and let nurses do what they do best, which is provide care and empathy for the patients. So it's not taking the nurse's job away; in fact, it's taking away the part that is drudgery. It's computation. >> What did you call it, machine-enhanced human intelligence? >> Right. There are a couple of different ways of looking at that, with the idea that we're not necessarily trying to eliminate humans from the loop. We're trying to get humans to do what they do best and take away the drudgery, because computers are awesome at repetitive tasks and big number crunching. I think that's one piece. The other piece, really, from that developer point of view, is how you make it easily accessible. And then the step that needs to come after that is understanding the black box: what happens inside the machine learning model, and how is it creating the insights it's creating? There's definitely work to be done there, and work that's already underway, to help understand what's really behind the insight, so that we don't just blindly trust. That can create problems when we're introducing data that itself might already be biased; we assume that because we gave the data to a computer, which is seemingly unbiased, it's going to give us an unbiased result. Garbage in, garbage out, right? So we've got to be really thoughtful about what the models are and what the data is that we're feeding them. >> It makes perfect sense. Thanks for the time. Good job on the keynote stage again this morning. I know you've got a busy afternoon scheduled as well, so we'll cut you loose. But thank you again. Always good to see you. >> Yeah, I always enjoy being here. >> All right, that's Chris Wright joining us from Red Hat. We're back with more from Red Hat Summit 2019. You're watching it live here on the Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Chris Wright | PERSON | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
eighty percent | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Red Hat | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Colin | PERSON | 0.99+ |
Lynn | PERSON | 0.99+ |
Cooper Netease | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.98+ |
one piece | QUANTITY | 0.98+ |
Cambridge Analytica | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
Roy | PERSON | 0.97+ |
twenty years ago | DATE | 0.97+ |
ITT | ORGANIZATION | 0.97+ |
this morning | DATE | 0.96+ |
ten | DATE | 0.96+ |
five six years ago | DATE | 0.96+ |
Tuesday | DATE | 0.96+ |
one thing | QUANTITY | 0.96+ |
three | QUANTITY | 0.96+ |
HC Healthcare | ORGANIZATION | 0.95+ |
Day three | QUANTITY | 0.95+ |
decades ago | DATE | 0.95+ |
two decades | QUANTITY | 0.95+ |
Kino | ORGANIZATION | 0.94+ |
one step | QUANTITY | 0.94+ |
fifteen years ago | DATE | 0.93+ |
past couple of decades | DATE | 0.93+ |
One | QUANTITY | 0.93+ |
Boston | LOCATION | 0.88+ |
Open Data Hub | TITLE | 0.87+ |
a decade | QUANTITY | 0.87+ |
each | QUANTITY | 0.86+ |
stew | PERSON | 0.82+ |
Red Hat Summit 2019 | EVENT | 0.81+ |
twenty nineteen rots | QUANTITY | 0.8+ |
decades | QUANTITY | 0.79+ |
a dozen | QUANTITY | 0.79+ |
Red Hat Summit | EVENT | 0.79+ |
wave of | EVENT | 0.76+ |
Moore | PERSON | 0.75+ |
pieces | QUANTITY | 0.74+ |
Septus sepsis | OTHER | 0.7+ |
waves of | EVENT | 0.68+ |
Khun | ORGANIZATION | 0.67+ |
forty nineteen | EVENT | 0.64+ |
twenty | QUANTITY | 0.61+ |
Minutemen | PERSON | 0.61+ |
Wharton Red Hat Summit | ORGANIZATION | 0.56+ |
big | ORGANIZATION | 0.55+ |
Cem Cem | ORGANIZATION | 0.53+ |
red hat | ORGANIZATION | 0.5+ |
nineteen | EVENT | 0.48+ |
Cube | ORGANIZATION | 0.45+ |
Scott Sneddon, Juniper Networks & Chris Wright, Red Hat | KubeCon 2018
>> Live from Seattle, Washington, it's the Cube, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the CloudNative computing foundation and its ecosystem partners. (background crowd chatter) >> Okay welcome back everyone, live here in Seattle for KubeCon and CloudNativeCon. This is the Cube's coverage, I'm John Furrier with Stu Miniman. We've got two great guests, Chris Wright, CTO of Red Hat, and Scott Sneddon, who's the senior director of cloud at Juniper Networks, breaking down, winding down day one of three days of coverage here. Rise of Kubernetes, rise of cloud natives, certainly impacting IT, open source communities, and developers. Guys, thanks for coming on the Cube. Appreciate it. It's good to see you. >> Yeah, good to see you. >> Welcome to the Cube. Okay, so, talk about the relationship between Red Hat and Juniper. Why we're here, what are we talking about? >> Well, we're here to talk about a combined solution. So, Red Hat's bringing kind of the software platform infrastructure piece and Juniper's bringing a networking component that ties it together. >> Yeah. >> So, we do have a fairly, well, in tech terms a relatively long history of working together. We've had a partnership for a little more than two years on some telco cloud initiatives around OpenStack, using the right OpenStack platform with Contrail, Juniper's Contrail solution, as an SDN layer for these telco cloud deployments. And have had a lot of success with that partnership. A lot of large and small to medium telcos around the world have deployed that. Earlier this year at the OpenStack summit in Vancouver, we announced an expanded partnership to start to address some enterprise use cases. And, you know, naturally OpenShift is the lead technology that we wanted to tie in with around enterprise adoption of cloud and some alternatives to some of the legacy platforms that are out there.
>> And we were talking earlier in the Cube here, we always get kind of the feel of the show, Kubernetes maturing? But it's kind of two worlds colliding and working together. A systems kind of view, almost like operating systems. The network systems, all kinds of systems thinking. And then just apps. Okay, the old app thing. So this old legacy world that we all lived in is kind of happening in really dynamic ways, with the apps not thinking about what's below it. This is really kind of where you guys have a tailwind with Juniper. >> Yeah. >> Because you still got to make things dynamic, you still got latency, on premises not going away. You got IOT, so networking plays a really big thing as software starts figuring things out with Kubernetes. Let's talk about that. Where is that value? How's it expanding? Cause clearly you still need to move packets from A to B. >> Yeah. >> Be more efficient with it. Apps going to have policy. >> The, well, I mean you've still got to, the network has always been the foundation of technology, or at least for the last 20 plus years. And as cloud has been adopted, really we've seen network scale drive in different ways. The mega scalers that have built infrastructure that we've been enabling for quite a while, and have been working with those customers as well. We've been developing a lot of simplified architecture just for the physical plumbing to connect these things together. But what we've seen, and is more and more important, is, you know, it's all about the app, the app is the thing that's going to consume these things. And the app developer doesn't necessarily want to worry about IP addresses and port numbers and firewall rules and things like that, so how could we just more simply abstract that? And so, you know, we've been developing automation aimed at the network for quite a while, but I think more and more it's becoming more important that the application can just consume that without having to direct the automation at the app.
And so, you know, groups like the CloudNative foundation and a lot of the work with Kubernetes on network policy let us use CloudNative primitives, and then we can translate into the network primitives that we need to deploy to move packets, you know, IP addresses and subnets. >> And Chris, talk about the multi-cloud dynamic here, because again, the day of things are moving around, the standardization around those core value propositions. You mentioned networking and software networks, all kinds of software, you know, variations under the covers. I'm a customer, I have multiple clouds now. This is going to be a core requirement. So you got to have a clean integration between it. >> There's really two things. If you look at a modern application, you got your traditional monolithic application, and as you tease it apart into components and services, there's only one thing that reconnects them, and that's the network. And so ensuring that that's as easy to use, so an application developer's focus is around the app and not around network engineering, is fundamental to a single cluster. And then if you have multiple clusters and you're trying to take advantage of different specialties in different clouds, or geo replication, or things like this, that also requires the network to reconstitute those applications across the different multiple clouds. If you expect your application engineers to become experts in networking, you're just sort of setting everybody up with misset expectations. >> It slows things down, requires all these other tasks you got to do. I mean it's like a rock fetch. You don't want to do it. Okay, stack a bunch of rocks, move them from there to there. I mean, this is what the holy grail of this infrastructure as code really is. >> Yeah. >> Yeah. I mean, that's the goal. >> Help connect the dots for us. When you look at multi-cloud, networking obviously is a very critical component. What're your customers looking for? How does this solution go to market for your company?
Absolute ease of use is top of the list. So, it can't be overly complicated. Because we're already building complex systems, these are big distributed systems, and you're adding multiple clusters and trying to connect them together. So ease of use is important. And then something that's dynamic and reflects the current application requirements, I think, is also really important. So that you don't over utilize resources in a cloud to maintain sort of a static connection that isn't actually needed at that moment. I'm sure you probably have a different perspective. >> Yeah, I mean, this is the whole concept of SDN and network virtualization, a lot of the buzzwords that have been around for a few years now, is the ability to deliver on demand network services that are turned on when the application asks for it and are turned off when the application's done with it. We can create dynamic connections as applications scale. And then with a lot of the newer things we've been doing around Contrail and with Red Hat, the ability to extend those application environments with networking and security into various cloud platforms. So, you know, if it's running on top of an OpenStack environment or in a public cloud, or some other bare metal infrastructure, we're going to make sure that the network and security primitives are in place when the application needs it, and then get deprovisioned or pulled out when they go away. >> Being at a show like this, I don't think we need to talk too much about open source, because that's really core and fundamental to what we're doing here, but I guess, how does that play into customers? We've been watching the slow change in the networking world, you know, I'm a networking guy by background, used to measure changes in networks in decades, and now it feels like we're moving a tiny bit faster, >> Little bit. >> What're we seeing is--? >> Well, I mean the history of openness in networking was the IETF >> Standards. >> and IEEE and standards bodies, right?
How do we interact? We're going to have our little private playground, and then we'll make sure at the protocol layer we can interact with each other, and we call that openness. But the new openness is open source and transparency into the platform, and the ability to contribute and participate. And so Juniper shifted a lot of our focus, I mean we still have our own silicon and the operating system we built on our routers and switches, but we've also taken the Contrail platform, open sourced it a few years ago, it's now called the Tungsten Fabric project under the Linux Foundation. And we're active participants in a community. And our customers really demand that. The telcos are driving towards an open source model, more and more enterprises want to be able to consume open source software with support, which is where we come in, but also be able to have an understanding of what's going on under the covers, to participate if that's a possibility. But really driving interoperability through a different way than just a protocol interaction and a standards body. >> I can see how Kubernetes can be a great fit for you guys at Juniper, clearly out of the box you have this kind of inter cloud, inter networking paradigm that you're used to, right? How does the relationship with Red Hat take it to the next level? What specifically are you guys partnering on, and what's that impact on customers? Can you just give a quick explanation, take a minute to explain the Juniper Red Hat-- >> Well, a lot of it comes down to usability and ease of use, right? I mean what Red Hat's done with OpenShift is developed a platform leveraging Kubernetes heavily, to make Kubernetes easier to use, with a great support model and a lot of tooling built on top of that to make it more easily deployable and easier for developers to develop on top of.
What we're doing with Contrail is providing a supported version of our open source project, and then tying these things together with some installation tools and packaging, and most importantly a support model, that lets a customer have the proverbial single throat to choke. >> Have you ever had customers that can run beautifully on your platform? >> Yeah yeah, and the installation process is seamless, it's a knob at install time to consume Contrail or some other networking stack, and they can call Red Hat for support, and they'll escalate to Juniper when appropriate and vice versa. And we've got all those things in place. >> I think one of the things that we have, like, shared vision on is the ease of use, and then if you think about two separate systems with a plug in, there's going to be some integration that needs to happen, and we're looking at how much automation can we do to keep those integrations always functional, so that if we need to do upgrades, we can do those together instead of abandoning one side or the other. And I think another area where we have shared vision is the multi-cloud space, where we really see the importance for our customer base to get applications deployed to the right locations. And that could be taking advantage of different pricing structures in different clouds, or it could be hardware features or functionality. Especially as we get into edge computing and really creating a different view of a computing fabric, which isn't quite so, you know, client server or cloud centralized, but much more distributed. >> I like how you said that, Chris, earlier, about how when you decompose that monolithic app it connects with the network. That's also the other way around. Little pieces can come together and work with the network and then form in real time, whether it's IOT data coming into the data center, or pushing compute data to the edge, you got to have that network interaction. This is a real CloudNative evolution, this is the core.
>> Yeah, and I think another piece that we haven't touched on as much, Scott mentioned it, was the security component. >> Yeah, explain that. >> Again, as you decompose that application into components, you surface those components with APIs. Those were internal APIs; when they're now exposed externally, security really matters. And having simple policy that describes not just the connectivity topology but who can speak to whom is pretty fundamentally important. So that you maintain a security posture and a risk profile that's acceptable. >> And then I think what's really important is, as your traditional enterprise starts to adopt these CloudNative models, you've got a security team there that might not necessarily be up to speed or on board. So you've got to have tooling and visualization and analytics to be able to present to them that policies are being enforced correctly and are compliant and all those things. >> Yeah, and they're tough customers too. They're not going to, they expect really rock solid capability. >> They don't let you just deploy a big flat network with no policy-- >> Hey, what about the APIs? Surface areas exposed in the IOT space. >> Yeah. >> Right. >> You got to nail it down. >> Yeah absolutely, so that's a lot of what we're bringing to the table here, a lot of Juniper's history around developing security products. >> Take a minute to explain, I want to give you some time to get a plug in for Juniper. I've been following you guys for a long time. Junos back in the old days, Contrail. Juniper has had a software, big time software view. >> Yeah. >> Explain the DNA of software at Juniper. >> You know, in the early days of Juniper, we weren't the first network vendor on the market. There was already somebody on the market in the mid 90s that had a pretty solid stronghold on carrier and enterprise networking. We had to come in with a better model. Let's make the box easier to use and simpler. Let's make the interface a little more structured and understandable.
Let's make it programmable, right? I mean the first feature request for Junos was to have a CLI, because the first interaction to it was just an API call. And that was out of the box from day one. We had to write a user interface to it just to fit in to the existing network world in the mid 90s. And so we've always been really proud of the Junos operating system that runs on our boxes. We've really been proud that we've had this one Junos concept of a common operating system on every network device that we deliver. As we've started to virtualize those network devices for NFV and things like that, it's again that same operating system that we deliver. Contrail came to us through acquisition, so it's not Junos in and of itself, but it's still leveraging a lot of those same fundamentals around model driven configuration management, understandable APIs, and the openness that we've always had. >> The cloud operating model that everyone's going to, the common operating model fits in that unification vision that you guys have had. >> Yeah, absolutely. >> And really early, by the way, it was before SDN was SDN, I think that was SDN's kind of like-- >> I like to dry, I-- >> Should have called it SDN. >> Right, I described SDN as just a big distributed router, and really we've had big distributed routers for a long time. >> John, we are in Seattle, everything we're talking about in tech is hipster. >> Chris, great stuff. Great to have you on, Scott. Great smart commentary. CTO of Red Hat, you guys are winning. Congratulations on the bets you made on Kubernetes early, >> Yeah. >> CoreOS, great acquisition, great team there, and some news there about some dealings back into the CNCF, so I mean, you've got it-- >> A lot going on. >> A lot going on. And yeah, big news with that other thing, I can't remember what it was, it was some big-- >> Something in there. >> Something for a million dollars. >> Great news out there. Thanks for coming out, appreciate it. Good to see you. >> Good to see you.
>> Alright, breaking down day one coverage. I'm John Furrier, Stu Miniman. Day two starts tomorrow. Three days of wall to wall coverage of KubeCon. And they're shutting down the hall. Be right back and see you tomorrow. Thanks for watching. (techy music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Scott | PERSON | 0.99+ |
Chris Wright | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Scott Sneddon | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John | PERSON | 0.99+ |
tomorrow | DATE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Juniper Networks | ORGANIZATION | 0.99+ |
Juniper | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
CloudNativeCon | EVENT | 0.99+ |
KubeCon | EVENT | 0.99+ |
Vancouver | LOCATION | 0.99+ |
Three days | QUANTITY | 0.99+ |
mid 90s | DATE | 0.99+ |
Seattle, Washington | LOCATION | 0.99+ |
two things | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
three days | QUANTITY | 0.98+ |
Earlier this year | DATE | 0.97+ |
Day two | QUANTITY | 0.97+ |
single cluster | QUANTITY | 0.96+ |
first interaction | QUANTITY | 0.96+ |
telco | ORGANIZATION | 0.95+ |
day one | QUANTITY | 0.95+ |
more than two years | QUANTITY | 0.95+ |
two separate systems | QUANTITY | 0.95+ |
single | QUANTITY | 0.92+ |
CloudNative | ORGANIZATION | 0.92+ |
KubeCon 2018 | EVENT | 0.91+ |
two great guests | QUANTITY | 0.9+ |
two worlds | QUANTITY | 0.88+ |
CloudNative primitives | TITLE | 0.88+ |
few years ago | DATE | 0.88+ |
Juniper's history | ORGANIZATION | 0.88+ |
a million dollars | QUANTITY | 0.87+ |
andCloudNativeCon North America 2018 | EVENT | 0.83+ |
Junos | ORGANIZATION | 0.82+ |
first feature | QUANTITY | 0.82+ |
SDN | ORGANIZATION | 0.82+ |
last 20 plus years | DATE | 0.8+ |
IEEE | ORGANIZATION | 0.78+ |
one side | QUANTITY | 0.75+ |
CTO | PERSON | 0.69+ |
Red Hat | TITLE | 0.65+ |
Cube | COMMERCIAL_ITEM | 0.64+ |
Junos | TITLE | 0.63+ |
Linux | TITLE | 0.6+ |
theOpenStack summit | EVENT | 0.59+ |
techy | PERSON | 0.54+ |
Cube | ORGANIZATION | 0.54+ |
OpenStack | TITLE | 0.53+ |
Seattle | EVENT | 0.52+ |
CloudNative | TITLE | 0.52+ |
OpenStackplatform | ORGANIZATION | 0.49+ |
Cloud | TITLE | 0.49+ |
Cube | PERSON | 0.49+ |
Juniper Red Hat | ORGANIZATION | 0.49+ |
Chris Wright, Red Hat | Red Hat Summit 2018
>> Narrator: Live from San Francisco. It's theCUBE! Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Alright welcome back, this is theCUBE's exclusive coverage of Red Hat Summit 2018. I'm John Furrier, the co-host of theCUBE, with John Troyer, co-founder of TechReckoning Advisory Firm. Next guest is Chris Wright, Vice President and Chief Technology Officer of Red Hat. Great to see you again, thanks for joining us today. >> Yeah, great to be here. >> Day one of three days of CUBE coverage. You got, yesterday, had sessions over there in Moscone South, yet in classic Red Hat fashion, good vibes, things are rocking. Red Hat's got a spring to their step, making some good calls technically. >> Chris: That's right. >> Kubernetes is one notable, the CoreOS acquisition, really interesting range. This gives, I mean I think people are now connecting the dots from the tech side, but also now on the business side, saying "Okay, we can see now some, a wider market opportunity for Red Hat". Not just doing its business with Linux software, you're talking about a changing modern software architecture for application developers. I mean, this is a beautiful thing, I mean. >> Chris: It's not just apps, but it's the operator, you know, operations side as well, so we've been at it for a long time. We've been doing something that's really similar for quite some time, which is building a platform for applications, independent from the underlying infrastructure. In the Linux days it was X86 hardware, you know, you get this heterogeneous hardware underneath, and you get a consistent, standardized application run time environment on top of Linux. Kubernetes is helping us do that at a distributed level.
And it's taken some time for the industry to kind of understand what's going on, and we've been talking about hybrid cloud for years, and you really see it real and happening and it's in action, and for us that distributed layer around Kubernetes, which just lights up how you manage distributed applications across complex infrastructure, makes it really real. >> Yeah, it's also timing's everything too, right? I mean, good timing, that helps, the evolution of the business. You always have these moments and these big waves where you can kind of see clunking going on, people banging against each other, and you know, the glue layer's developing, and then all of a sudden it snaps into place, and then it just scales, right? So you're starting to see that, we've seen this in other ways, TCP/IP, Linux itself, and you guys are certainly making that comparison, being Red Hat, but what happens next is usually an amazing growth phase. Again, small little moves, and move the ball down the field, and then boom, it opens up. As a CTO, you have to look at that 20 mile stare now, what's next? What's that wave coming that you're looking at in the team that you have on Red Hat's side and across your partners? What's the wave next? >> Well, there's a lot of activity going on that's beyond what we're building today. And so much of it, first of all, is happening in Open Source. So that itself is awesome. Like, we're totally tuned into these environments, it's core to who we are, it's our DNA to be involved in these Open Source communities, and you look across all of the different projects, and things like machine learning and blockchain, which are really kind of native Open Source developments, become really relevant in ways that we can change how we build functionality and build business, and build business value in the future.
So, those are the things that we look at: what's emerging out of the Open Source communities, what's going to help continue to accelerate developers' ability to quickly build applications, operations teams' ability to really give that broad scale, policy level view of what's going on inside your infrastructure to support those applications, and all the data that we're gathering and needing to sift through and build value from inside the applications. That's very much where we're going. >> Well, I think we had a really good example of machine learning used in an everyday enterprise application this morning. They kicked off the keynote talking about optimizing the schedule and what sessions were in what rooms, you know, using an AI tool, right? >> Chris: That's right. >> And so, that's reality as you look at, is that going to be the new reality, as you're looking into the future, of building in these kind of machine learning opportunities into everyday business applications that, you know, in the yesteryear would've been just some, I don't know, Visual Basic, or whatever, depending on how far back you look, right? You know, is that really going to be a reality in the enterprise? It seems so. >> It is, absolutely. And so what we're trying to do is build the right platforms, and build the right tools, and then interfaces to those platforms and tools, to make it easier and easier for developers to build, you know, what we've been calling "Intelligent Apps", or applications that take advantage of the data, and the insights associated with that data, right in the application. So, the scheduling optimization that you saw this morning in the keynote is a great example of that.
Starting with a basic rules engine and augmenting that with machine learning intelligence is one example, and we'll see more and more of that as the sophisticated tools that are coming out of Open Source communities building machine learning platforms start to specialize and make it easier and easier to do specific machine learning tasks within an application. So you don't have to be a data scientist and an app developer all in one, you know, there's different roles and different responsibilities, and how do we build, develop, and lifecycle manage models is one question, and how do we take advantage of those models in applications is another question, and we're really looking at that from a Red Hat perspective. >> John F: And the enterprises are always challenged, they always (mumbles). Cloud native speaks to both now, right? So you got hybrid cloud and now multi-cloud on the horizon, set up perfectly with OpenShift's kind of position in that, kind of the linchpin, but you got, they're still two different worlds. You got the cloud-native, born in the cloud, and that's pretty much any startup these days, and then you've got legacy apps with containers. So the question that people are asking is, okay, I get the cloud-native, I see the benefits, I know what the investment is, let's do it upfront, benefits are horizontally scalable, asynchronous, et cetera, et cetera, but I got legacy. I want to do micro-services, I want to do serverless. Do I re-engineer that or just containers? What's the technical view and recommendation from Red Hat when the CIO or the enterprise says, "Hey, I want to go cloud native over here for the new stuff, but I got all this old stuff, what do I do?". Do I invest in more re-engineering, or just containerize it, what's the play? >> I think you got to ask kind of always why? Why you're doing something. So, we hear a lot, "Can I containerize it?", often the answer is yes.
A different question might be, "What's the value?", and so, a containerized application, whether it's an older application that's stateful or whether it's a newer cloud-native application, (mumbles), horizontally scalable, and all the great things. There's value potentially in just the automation around the APIs that allow you to lifecycle manage the application. So if the application itself is still continuing to change, we have some great examples with some of our customers, like KeyBank, doing what we call the "fast moving monolith". So it's still a traditional application, but it's containerized, and then you build a CICD model around it, and you have automation in how you deliver and deploy to production. There's value there. There's also value in your existing system, and maybe building some different services around the legacy system to give you access, API access, to data in that system. So there are different ways to approach that problem; I don't think there's a one size fits all. >> So Chris, some of this is also a cultural and a process shift. I was impressed this morning, we've already talked with two Red Hat customers, Macquarie and Amadeus, and you know, Macquarie was talking about, "Oh yeah, we moved 40 applications in a year, you know, onto OpenShift", and it turns out they had already started to be containerized and dockerized, and oh yeah yeah, you know, that is standard operating procedure for that set of companies. There's a long tail of folks who are still dealing with the rest of the stuff we've had to deal with, the stack we've had to deal with for years. How is Red Hat, how are you looking at this kind of cultural shift? It's nice that it's real, right? It's not like we're talking about microservices, or some sort of future, you know, Jetsons sort of thing that's going to save us all; it's here today and they're doing it. You know, how are you helping companies get there? >> So we have a practice that we put in place that we call the "Open Innovation Lab".
And it's very much an immersive practice to help our customers first get experience building one of these cloud native applications. So we start with a business problem: what are you trying to solve? We take that through a workshop, which is a multi-week workshop, really to build, on top of a platform like OpenShift, real code that's really useful for that business, and those engineers that go through that process can then go back to their company and be kind of the change agent for how do we build the internal cultural shift and the appreciation for Agile development methodologies across our organization, starting with something practical, tangible, and real. That's one great example of how we can help, and I think part of it is just helping customers understand it isn't just technology. I'm a technologist, so there's part of me that feels pain to say that, but the practical reality is there's whole organizational shifts, there's mindset and cultural changes that need to happen inside the organization to take advantage of the technology that we put in place. >> John F: And roles are changing too. You see the system admin, kind of administrative things, getting automated away, moving to more of an operating role. I heard some things last week at KubeCon in Copenhagen, Denmark, and I want to share some quotes and get your reaction. >> Alright. >> This is the hallway, I won't attribute the names, but these were quotes. Quote: "I need to get away from VPNs and firewalls. I need user and application layer security with unphishable access, otherwise I'm never safe". Second quote: "Don't confuse lift and shift with running a cloud-native global platform. Lots of actors in this system already running seamlessly. Versus say a VMware running environment, where a vCenter running in a data center is an example of a lift and shift".
So the comments are, one, for (mumbles) cloud you need to have some sort of security model, and then two, you know, we did digital transformation before with VMs, that was a different world, but the new world's not a lift and shift, it's a re-architecture to a cloud-native global platform. Your reaction to those two things, and what that means to customers as they think about what they're going to look like as they build that bridge to the future. >> The security piece is critical. So every CIO that we're talking to, it's top of mind; nobody wants to be on the front page of The Wall Street Journal for the wrong reasons. And so understanding, as you build a micro-services software architected application, the components themselves are exposed as services, those services have APIs that become potentially part of the attack surface. Thinking of it in terms of VPNs and firewalls is the kind of traditional way that we manage security at the edge. Hardened at the edge, soft in the middle isn't an acceptable way to build a security policy around applications that are internally exposing parts of their APIs to other parts of the application. So, looking at it from an application use case perspective, which portions of the application need to be able to talk to one another, and it's part of why something like Istio is so exciting, because it builds right in to the platform the notion of mutual authentication between services. So that you know you're talking to a service that you're allowed to talk to.
Encryption associated with that, so that you get another level of security for data in motion. And all of that is not looking at what is the VPN, or what is the VLAN tag, or what is the encapsulation ID, and thinking layer two, layer three security. It's really application layer, and thinking in terms of that policy: which pieces of the application have to talk to each other, and nobody else can talk to that service unless it's, you know, understood that that's an important part of how the application works. So I really agree, and you could even say DevSecOps, to me, is something that I've come around to. Initially I thought it was a bogus term, but now I see the value in considering security at every step of building, testing and delivering an application. Lift and shift, totally different topic. What does it mean to lift and shift? And I think there's still, some people want to say there's no value in lift and shift, and I don't fully agree. I think there's still value in moving, and modernizing the platform without changing the application, but ultimately the real value does come in re-architecting, and so there's that balance. What can you optimize by moving? And where does that free up resources to invest in that real next generation application re-architecting? >> So Chris, you've talked about machine learning, right? Huge amounts of data. You've just talked about security, we've talked about multi-cloud. To me that says we might have an issue in the future with the data layer. How are people thinking about the data layer, where it lives, on-prem, in the cloud? Think about GDPR compliance, you know, all that sort of good stuff. You know, how are you and Red Hat, how are you asking people to think about that? >> So, data management is a big question.
We build storage tooling; we understand how to put the bytes on disk, and persist, and maintain the storage. It's a different question what the data services are, and what the data governance, or policy around placement, is, and I think it's a really interesting part of the ecosystem today. We've been working with some research partners in the Massachusetts Open Cloud and Boston University on a project called "Cloud Dataverse", and it has a whole policy question around data. 'Cause there, scientists want to share data sets, but you have to control and understand who you're sharing your data sets with. So, it's definitely a space that we are interested in, and understand that there's a lot of work to be done there, and GDPR just kind of shines a light right on it and says policy and governance around where data is placed is actually fundamental and important. And I think it's an important part, because you've seen some of the data issues recently in the news, and you know, we've got to get a handle on where data goes, and ultimately, I'd love to see a place where I'm in control of how my data is shared with the rest of the world. >> John F: Yeah, certainly the trend. So a final question for you. Open Source, absolute greatness going on, more and more good things are happening in projects, bigger than ever before. I mean, machine learning's a great example, seeing not just code snippets but whole code bases, you know, TensorFlow jumps out at me (mumbles). What are you doing here this year that's new and different from an Open Source standpoint, but also from a Red Hat standpoint, that's notable, that people should pay attention to? >> Well, one of the things that we're focused on is that platform layer. How do we enable a machine learning workload to run well on our platform? So it starts actually at the very bottom of the stack, hardware enablement.
You've got to get GPUs functional, you've got to get them accessible to virtual machine based applications and container based applications, so that's kind of table stakes. Accelerate a machine learning workload to make it usable and valuable to an enterprise by reducing the training and inference times for a machine learning model. Some of the next questions are, how do we embed that technology in our own products? So you saw Access Insights this morning, talking about how we take machine learning, look at all of the data that we're gathering from the systems that our customers are deploying, and then derive insights from those, and then feed those back to our customers so they can optimize the infrastructure that they're building and running and maintaining. And then, you know, the next step is that intelligent application. How do we get that machine learning capability into the hands of the developer, and pair the data scientists with the developers so you build these intelligent applications, taking advantage of all the data that you're gathering as an enterprise, and turning that into value as part of your application development cycle? So those are the areas that we're focused on for machine learning, and you know, some of that is partnering, you know, talking through how do we connect some of these services from OpenShift to the cloud service providers that are building some of these great machine learning tools. >> Any new updates on (mumbles) the success of Red Hat just in the past two years? You see the growth, that correlates, that was your (mumbles) OpenShift, and good calls there, positioned perfectly. Analysts, financial analysts are really giving you guys a lot of props on Wall Street about the potential revenue growth opportunities on the business side. What's it like now at Red Hat?
I mean, do you look back and say, "Hey, it was only like three years ago we did this"? I mean, the vibes are good. I mean, share some inside commentary on what's happening inside Red Hat. >> It's really exciting. I mean, we've been working on these things for a long time. And the simplest example I have is the combination of tools like the JBoss Middleware Suite and Linux, where they could run well together, and we have a lot of customers that combine those. But when you take it to the next step, and you build containerized services and you distribute those broadly, you've got a container platform, you've got middleware components, you know, even providing functionality as services, you see how it all comes together, and that's just so exciting internally. And at the same time we're growing. And a big part of-- >> John F: Customers are using it. >> Customers are using it, so putting things into production is critical. It's not just exciting technology, it's in production. The other piece is we're growing, and as we grow, we have to maintain the core of who we are. There's some humility that's involved, there are some really core Open Source principles that are involved, and making sure that as we continue to grow, we don't lose sight of who we are is a really important thing for our internal culture. >> John F: Great community driven, and great job. Chris, thanks for coming on theCUBE, appreciate it. Chris Wright, CTO of Red Hat, sharing his insights here on theCUBE. Of course, bringing you all the live action as always here in San Francisco at Moscone West, for Red Hat Summit 2018. We'll be right back. (electronic music) (intense music)
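Chris's point about Istio building mutual authentication right into the platform translates to a small amount of configuration in current Istio releases. A minimal, hedged sketch — the namespace, app label, and service account names below are hypothetical, chosen only for illustration:

```yaml
# Require mutual TLS for every workload in the namespace,
# so services cryptographically prove their identity to each other.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: shop          # hypothetical namespace
spec:
  mtls:
    mode: STRICT
---
# Application-layer policy: only the "checkout" workload may call
# the "payments" service; any other caller is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-checkout
  namespace: shop
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/shop/sa/checkout"]
```

With `PeerAuthentication` in STRICT mode, services must present mesh-issued certificates to talk to each other, and the `AuthorizationPolicy` expresses exactly the "which portions of the application need to talk to one another" question at the application layer rather than as VLAN tags or firewall rules.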
Chris Wright, Red Hat | Open Source Summit 2017
(lively, bouncy music) >> Host: Live from Los Angeles, it's The Cube, covering Open Source Summit North America 2017, brought to you by the.
Wrap with Stephanie Chan | Red Hat Summit 2022
(upbeat music) >> Welcome back to theCUBE. We're covering Red Hat Summit 2022. We're going to wrap up now, Dave Vellante, Paul Gillin. We want to introduce you to Stephanie Chan, who's our new correspondent. Stephanie, one of your first events, your very first CUBE event. So welcome. >> Thank you. >> Up from NYC. Smaller event, but intimate. You got a chance to meet some folks last night at some of the after parties. What are your overall impressions? What'd you learn this week? >> So this has been my first in-person event in over two years. And even though, like you said, it's on the smaller scale, roughly around 1000 attendees versus its usual eight to 10,000 attendees, there's so much energy, and excitement, and openness in these events and sessions. Even before and after the sessions people have been mingling and socializing and hanging out. So, I think a lot of people appreciate these in-person events and are really excited to be here. >> Cool. So, you also sat in some of the keynotes, right? Pretty technical, right? Which is kind of new to sort of your genre, right? I mean, I know you got a financial background but, so what'd you think of the keynotes? What'd you think of the format, the theater in the round? Any impressions of that? >> So, I think there are three things that are really consistent in these Red Hat Summit keynotes. There's always a history lesson. There's always, you know, emphasis on the culture of openness. And there are also inspirational stories about how people utilize open source. And I found a lot of those examples really compelling and interesting. For instance, people use open source in (indistinct), and even in space. So I really enjoyed, you know, learning about all these different people and stories. What about you guys? What do you think were the big takeaways and the best stories that came out of the keynotes? >> Paul, want to start? >> Clearly the Red Hat Enterprise Linux 9 is a major rollout.
They do that only about every three years, so that's a big deal to this audience. I think what they did in the area of security, with rolling out sigstore, which is a major new, I think an important new project that was sort of incubated at Red Hat, and they're trying to create an open source ecosystem around that now. And the alliances. I'm usually not that much on partnerships, but the Accenture and the Microsoft partnerships do seem to be significant to the company. And, finally, the GM partnership, which I think was maybe kind of the bombshell that they sort of rushed in at the last minute, but I think has the biggest potential impact on Red Hat and its partner ecosystem, as it's really going to anchor their edge architecture going forward. So I didn't see it so much on the product front, but the sense of Red Hat spreading its wings, and partnering with more companies, and seeing itself as really the center of an ecosystem, indicates that they are, you know, they're in a very solid position in their business. >> Yeah, and also, like, the pandemic has really forced us into this new normal, right? So customer demand is changing. There has been the shift to remote. There's always going to be a new normal, according to Paul, and open source carries us through that. So how do you guys think Red Hat has helped its portfolio through this new normal and the shift? >> I mean, when you think of Red Hat, you think of Linux. I mean, that's where it all started. You think OpenShift, which is the application development platform. Linux is the OS. OpenShift is the application development platform for Kubernetes. And then of course, Ansible is the automation framework. And I agree with you, ecosystem is really the other piece of this. So, I mean, I think you take those three pieces and extend that into the open source community. There's a lot of innovation that's going on around each of those, but ecosystems are the key.
We heard from Stefanie Chiras that that's fundamental. I mean, you can't do this without those gap fillers and those partnerships. And then another thing that's notable here is, you know, this was, I mean, IBM was just another brand, right? I mean, if anything it was probably a sub-brand. I mean, you didn't hear much about IBM. You certainly had no IBM presence, even though they're right across the street running Think. No Arvind presence, no keynote from Arvind, no, you know, Big Blue washing. And so, I think that's a testament to Arvind himself. We heard that from Paul Cormier, he said, hey, this guy's been great, he's left us alone. And he's allowed us to continue innovating. It's good news. IBM has not polluted Red Hat. >> Yes, I think that Red Hat was, as I said at the opening, I think Red Hat is kind of the tail wagging the dog right now. And their position seems very solid in the market. Clearly the market has come to them in terms of their evangelism of open source. They've remained true to their business model. And I think that gives them credibility that, you know, a lot of other open source companies have lacked. They have stuck with the plan for over 20 years now and have really not changed it, and it's paying off. I think they're emerging as a company that you can trust to do business with. >> Now I want to throw in something else here. I thought the conversation with IDC analyst Jim Mercer was interesting, when he said that they surveyed customers and they wanted to get the security from their platform vendor, versus having to buy these bespoke tools. And it makes a lot of sense to me. I don't think that's going to happen, right? Because you're going to have an identity specialist. You're going to have an endpoint specialist. You're going to have a threat detection specialist. And they're going to be best of breed. You know, Red Hat's never going to be all of those things.
What they can do is partner with those companies through APIs, through open source integrations; they can add them in as part of the ecosystem and maybe be the steward of that. Maybe that's the answer. They're never going to be the best at all those different security disciplines. There's no way in the world, Red Hat, that's going to happen. But they could be the integration point. And that would be, that would be a simplifying layer to the equation. >> And I think it's smart. You know, they're not pretending to be an identity and access management or an anti-malware company, or even a zero trust company. They are sticking to their knitting, which is operating system and developers. Evangelizing DevSecOps, which is a good thing. And, that's what they're going to do. You know, you have to admire this company. It has never gotten outside of its swim lane. I think it's understood really well what it wants to be good at. And, you know, in the software business knowing what not to do is more important than knowing what to do. It's companies that fail that are usually the ones that get overextended; this company has never overextended itself. >> What else do you want to know? >> And a term that kept popping up was multicloud, or otherwise known as metacloud. We know what the cloud is, but- >> Oh, supercloud, metacloud. >> Supercloud, yeah, here we go. We know what the cloud is, but what does metacloud mean to you guys? And why has it been so popular in these conversations? >> I'm going to boot this to Dave, because he's the expert on this. >> Well, expert or not, but I mean, again, we've coined this term supercloud. And the idea behind the supercloud, or what Ashesh called metacloud, I like his name, 'cause it allows Web 3.0 to come into the equation. But the idea is that instead of building on each individual cloud and having compatibility with that cloud, you build a layer across clouds.
So you do the hard work as a platform supplier to hide the underlying primitives and APIs from the end customer, or the end developer; they can then add value on top of that. And that abstraction layer spans on-prem, clouds, across clouds, ultimately out to the edge. And it's a new value layer that builds on top of the hyperscale infrastructure, or existing data center infrastructure, or emerging edge infrastructure. And the reason why that is important is because it's so damn complicated, number one. Number two, every company's becoming a software company, a technology company. They're bringing their services through digital transformation to their customers. And you've got to have a cloud to do that. You're not going to build your own data center. That's like Charles Wang says, not Charles Wang. (Paul laughing) Charles Phillips. We were just talking about CA. Charles Phillips. Friends don't let friends build data centers. So that supercloud concept, or what Ashesh calls metacloud, is this new layer that's going to be powered by ecosystems and platform companies. And I think it's real. I think it's- >> And OpenShift, OpenShift is a great, you know, key card for them, or leverage for them, because it is perhaps the best known Kubernetes platform. And you can see here they're really doubling down on adding features to OpenShift, security features, scalability. And they see it as potentially this metacloud, this supercloud abstraction layer. >> And what we said is, in order to have a supercloud you've got to have a super-PaaS layer, and OpenShift is that super-PaaS layer. >> So you had conversations with a lot of people within the past two days. Some people include companies, from Verizon, Intel, Accenture. Which conversation stood out to you the most? >> Which, I'm sorry. >> Which conversation stood out to you the most? (Paul sighs) >> The conversation with Stu Miniman was pretty interesting because we talked about culture.
And really, he has a lot of credibility in that area because he's not a Red Hat veteran. You know, he hasn't been at Red Hat forever, he's fairly new to the company. And I got a sense from him that the culture there really is what they say it is. It's a culture of openness, and that's, you know, that's as important as technology for a company's success. >> I mean, this was really good content. I mean, there were a lot, I mean, Stefanie's awesome. Stefanie Chiras, we're talking about the ecosystem. Chris Wright, you know, digging into some of the CTO stuff. Ashesh, who coined metacloud, I love that. The whole in-vehicle operating system conversation was great. The security discussion that we just had. You know, the conversations with Accenture were super thoughtful. Of course, Paul Cormier was a highlight. I think that one's going to be a well viewed interview, for sure. And, you know, I think that the customer conversations are great. Red Hat did a really good job of carrying the keynote conversations, which were abbreviated this year, to theCUBE. >> Right. >> I give 'em a lot of kudos for that. And because theCUBE, it allows us to double click, go deeper, peel the onion a little bit, you know, all the buzz words and cliches. But it's true. You get to clarify some of the things you heard, which were, you know, the keynotes were scripted, but tight. And so we had some good follow up questions. I thought it was super useful. I know I'm leaving somebody out, but- >> We're also able to interview representatives from Intel and Nvidia, which at a software conference you don't typically do. I mean, there's the assimilation, the combination of hardware and software. It's very clear that, and this came out in the keynote, that Red Hat sees that hardware matters. It matters. It's important again. And it's going to be a source of innovation in the future. That came through clearly. >> Yeah.
The hardware matters theme, you know, in the old days you would have an operating system and the hardware were intrinsically linked. MVS on the mainframe, VAX/VMS on the Digital minicomputers. DG had its own operating system. Wang had its own operating system. Prime with PRIMOS. You remember these days? >> Oh my God. >> Right? (Paul laughs) And then of course Microsoft. >> And then x86, everything got abstracted. >> Right. >> Everything became x86, and now it's all atomizing again. >> Although WinTel, right? I mean, MS-DOS and Windows were intrinsically linked for many, many years with Intel x86. And it wasn't until, you know, well, and then, you know, Sun Solaris, but it wasn't until Linux kind of blew that apart. And the internet is built on the LAMP stack. And of course, Linux is the fundamental foundation for Red Hat. So my point is that the operating system and the hardware have always been very closely tied together. Whether it's security, or I/O, or registers and memory management, everything controlled by the OS is very close to the hardware. And so that's why I think you've got an affinity in Red Hat to hardware. >> But Linux is breaking that bond, don't you think? >> Yes, but it still has to understand the underlying hardware. >> Right. >> You heard today how they're taking advantage of Nvidia, and the AI capabilities. You're seeing that with ARM, you're seeing that with Intel. How you can optimize the operating system to take advantage of new generations of CPU, and NPU, and GPU, and XPU, you know, across the board. >> Yep. >> Well, I really enjoyed this conference, and it really stressed how important open source is to a lot of different industries. >> Great. Well, thanks for coming on. Paul, thank you. Great co-hosting with you. And thank you. >> Always, Dave. >> For watching theCUBE. We'll be on the road, next week we're at KubeCon in Valencia, Spain. We're at VeeamON. We got a ton of stuff going on. Check out thecube.net.
Check out siliconangle.com for all the news, and Wikibon.com, where we publish weekly, our breaking analysis series. Thanks for watching everybody. Dave Vellante, for Paul Gillin and Stephanie Chan. Thanks to the crew. Shout out, Andrew, Alex, Sonya. Amazing job, Sonya. Steven, thank you guys for coming out here. Mark, good job corresponding. Go to SiliconANGLE, Mark's written some great stuff. And thank you for watching. We'll see you next time. (calm music)
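The supercloud, or metacloud, layer Dave described — do the hard work of hiding each cloud's primitives behind one API so value can be built on top — is essentially an adapter pattern. A minimal sketch in Python, with made-up provider names and a single storage operation standing in for the real breadth of cloud SDK calls:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """One adapter per underlying cloud; callers never see its primitives."""
    @abstractmethod
    def put_object(self, bucket: str, key: str, data: bytes) -> str: ...

class AlphaCloud(CloudProvider):          # hypothetical provider A
    def put_object(self, bucket, key, data):
        # A real adapter would call provider A's SDK here.
        return f"alpha://{bucket}/{key}"

class BetaCloud(CloudProvider):           # hypothetical provider B
    def put_object(self, bucket, key, data):
        # A real adapter would call provider B's SDK here.
        return f"beta://{bucket}/{key}"

class SuperCloud:
    """The cross-cloud layer: one API, any registered provider underneath."""
    def __init__(self, providers: dict):
        self.providers = providers

    def put_object(self, cloud: str, bucket: str, key: str, data: bytes) -> str:
        return self.providers[cloud].put_object(bucket, key, data)
```

The application codes against `SuperCloud` only; adding a cloud means adding one adapter, not touching every caller — which is the "new value layer" argument in miniature.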
Tushar Katarki & Justin Boitano | Red Hat Summit 2022
(upbeat music) >> We're back. You're watching theCUBE's coverage of Red Hat Summit 2022 here in the Seaport in Boston. I'm Dave Vellante with my co-host, Paul Gillin. Justin Boitano is here. He's the Vice President of Enterprise and Edge Computing at NVIDIA. Maybe you've heard of him. And Tushar Katarki, who's the Director of Product Management at Red Hat. Gentlemen, welcome to theCUBE, good to see you. >> Thank you. >> Great to be here, thanks. >> Justin, you were in the keynote this morning. You got interviewed and shared your thoughts on AI. You encouraged people to think bigger on AI. I know it's kind of self-serving, but why? Why should we think bigger? >> When you think of AI, I mean, it's a monumental change. It's going to affect every industry. And so when we think of AI, you step back, you're challenging companies to build intelligence and AI factories, and factories that can produce intelligence. And so it, you know, forces you to rethink how you build data centers, how you build applications. It's a very data centric process where you're bringing in, you know, an exponential amount of data. You have to label that data. You've got to train a model. You've got to test the model to make sure that it's accurate and delivers business value. Then you push it into production, it's going to generate more data, and you kind of work through that cycle over and over and over. So, you know, just as Red Hat talks about, you know, CI/CD of applications, we're talking about CI/CD of the AI model itself, right? So it becomes a continuous improvement of AI models in production, which is a big, big business transformation. >> Yeah, Chris Wright was talking about basically take your typical application development, you know, pipeline and life cycle, and apply that type of thinking to AI. He was saying those two worlds have to come together. Actually, you know, the application stack and the data stack, including AI, need to come together. What's the role of Red Hat?
What's your sort of posture on AI? Where do you fit with OpenShift? >> Yeah, so we're really excited about AI. I mean, a lot of our customers obviously are looking to take that data and make meaning out of it, and AI is definitely a big, important tool. And OpenShift, and our approach to Open Hybrid Cloud, really forms a successful platform to base all your AI journey on, with partners such as NVIDIA, with whom we are working very closely. And so the idea really is, as Justin was saying, you know, the end to end: when you think about the life of a model, you've got data, you mine that data, you create models, you deploy it into production. That whole thing, what we call CI/CD, as he was saying, DevOps, DevSecOps, and the hybrid cloud that Red Hat has been talking about, all of that, with OpenShift at the center, forms a good basis for that. >> So somebody said the other day, I'm going to ask you, is NVIDIA a hardware company or a software company? >> We are a company that people know for our hardware but, you know, predominantly now we're a software company. And that's what we were on stage talking about. I mean, ultimately, a lot of these customers know that they've got to embark on this journey to apply AI, to transform their business with it. It's such a big competitive advantage going into, you know, the next decade. And so the faster they get ahead of it, the more they're going to win, right? But some of them, they're just not really sure how to get going. And so a lot of this is we want to lower the barrier to entry. We built this program, we call it Launchpad, to basically make it so they get instant access to the servers, the AI servers, with OpenShift, with the MLOps tooling, with example applications. And then we walk them through examples like, how do you build a chatbot? How do you build a vision system for quality control? How do you build a price recommendation model?
And they can do hands on labs and walk out of, you know, Launchpad with all the software they need, I'll say the blueprint for building their application. They've got a way to have the software and containers supported in production, and they know the blueprint for the infrastructure and operating that at scale with OpenShift. So more and more, you know, to come back to your question, we're focused on the software layers and making that easy to help, you know, either enterprises build their apps or work with our ecosystem and developers to buy, you know, solutions off the shelf. >> On the hardware side though, I mean, clearly NVIDIA has prospered on the backs of GPUs, as the engines of AI development. Is that how it's going to be for the foreseeable future? Will GPUs continue to be core to building and training AI models, or do you see something more specific to AI workloads?
>> You did bring up an interesting point about DPUs and MPUs, and sort of the variations of GPUs that are coming about. Do you see those different PU types continuing to proliferate? >> Oh, absolutely. I mean, we've done a bunch of work with Red Hat, and we've got a, I'll say a beta of OpenShift 4.10 that now supports DPUs as, I'll call it the control plane, like software-defined networking offload in the data center. So it takes all the software-defined networking off of CPUs. When everybody talks about, I'll call it software-defined, you know, networking in core data centers, you can think of that as just a CPU tax up to this point. So what's nice is it's all moving over to the DPU to, you know, offload and isolate it from the x86 cores. It increases the security of the data center. It improves the throughput of your data center. And so, yeah, DPUs, we see everybody copying that model. And, you know, to give credit where credit is due, I think, you know, companies like AWS, you know, they bought Annapurna, they turned it into Nitro, which is the foundation of their data centers. And everybody wants the, I'll call it democratized version of that to run their data centers. And so every financial institution and bank around the world sees the value of this technology, but running in their own data centers. >> Hey, everybody needs a Nitro. I've written about it. The Annapurna acquisition, $350 million. I mean, peanuts in the grand scheme of things. It's interesting, you said Moore's law is dead. You know, we have that conversation all the time. Pat Gelsinger promised that Moore's law is alive and well. But the interesting thing is when you look at the numbers, that's, you know, Moore's law, we all know it, doubling of the transistor densities every 18 to 24 months. Let's say that that promise he made is true.
What I think the industry maybe doesn't appreciate, I'm sure you do, being in NVIDIA, when you combine what you were just saying, the CPU, the GPU, Paul, the MPU, accelerators, all the XPUs you're talking about, I mean, look at Apple with the M1, I mean 6X in 15 months versus doubling every 18 to 24. The A15 is probably averaging, over the last five years, a 110% performance improvement each year versus the historical Moore's law, which is 40%. It's probably down to the low 30s now. So it's a completely different world that we're entering now. And the new applications are going to be developed on these capabilities. It's just not your general purpose market anymore. From an application development standpoint, what does that mean to the world? >> Yeah, I mean, yeah, it is a great point. I mean, from an application, I mean first of all, I mean, just talk about AI. I mean, they are all very compute intensive. They're data intensive, and you have to move all that data into compute to crunch those numbers. I mean, I'd say you need all the PUs that you mentioned in the world. And also there are other concerns that will augment that, right? Like, you know, security is so important, so we want to secure everything. Cryptography is going to take off to new levels, you know. For example, in the case of DPUs, we are talking about, you know, can that be used to offload your encryption and firewalling, and so on and so forth. So I think there are a lot of opportunities, even from an application point of view, to take advantage of this capacity. So I'd say we'll never run out of the need for PUs, if you will. >> So is OpenShift the layer that's going to simplify all that for the developer? >> That's right. You know, so one of the things that we worked on with NVIDIA was that we developed this concept of an operator for GPUs, but you can use that pattern for any of the PUs.
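The compounding arithmetic cited above is easy to check. A quick sketch, using only the figures quoted in the conversation (doubling every 18 to 24 months, and Apple's claimed 6X in 15 months), shows how dramatic the gap is:

```python
# Annualized growth rate implied by "Nx improvement every M months".
# The specific figures (2x per 18-24 months, 6x in 15 months) are the
# ones quoted in the conversation, not independently verified.

def annual_growth(factor, months):
    """Compound growth rate per 12 months."""
    return factor ** (12.0 / months) - 1.0

moores_law_slow = annual_growth(2, 24)   # ~0.414 -> ~41% per year
moores_law_fast = annual_growth(2, 18)   # ~0.587 -> ~59% per year
m1_claim = annual_growth(6, 15)          # ~3.19  -> ~319% per year

print(f"{moores_law_slow:.1%} {moores_law_fast:.1%} {m1_claim:.1%}")
```

So a classic Moore's-law cadence works out to roughly 41 to 59 percent a year, while the 6X-in-15-months claim annualizes to over 300 percent, which is the "completely different world" being described.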
And so the idea really is that, how do you, yeah-- (all giggle) >> That's a new term. >> Yeah, it's a new term. (all giggle) >> XPUs. >> XPUs, yeah. And so that pattern becomes very easy for GPUs or any other such accelerators to be easily added as capacity. And for the Kubernetes scheduler to understand that there is that capacity, so that an application which says, I want to run on a GPU, then it becomes very easy for it to run on that GPU. And so that's the abstraction, to your point, about how we are making that happen. >> And to add to this. So the operator model, it's this, you know, open source model that does the orchestration. So Kubernetes will say, oh, there's a GPU in that node, let me run the operator, and it installs our entire run time. And our run time now, you know, it's got a MIG configuration utility. It's got the driver. It's got, you know, telemetry and metering of the actual GPU and the workload, you know, along with a bunch of other components, right? They get installed in that Kubernetes cluster. So instead of somebody trying to chase down all the little pieces and parts, it just happens automatically in seconds. We've extended the operator model to DPUs and networking cards as well, and we have all of those in the operator hub. So for somebody that's running OpenShift in their data centers, it's really simple to, you know, turn on Node Feature Discovery, point to the operators, and when you see new accelerated nodes, the entire run time is automatically installed for you. So it really makes, you know, GPUs and our networking, our advanced networking capabilities, really first-class citizens in the data center. >> So you can kind of connect the dots and see how the NVIDIA and Red Hat partnership is sort of aiming at the enterprise. I mean, NVIDIA, obviously, they've got the AI piece. I always thought maybe 25% of the compute cycles in the data center were wasted doing storage offloads, networking offloads, or security.
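The scheduling flow described above, where an application declares that it wants a GPU and Kubernetes matches it to capacity the operator has advertised, can be pictured with a toy sketch. The node and pod data below are made up for illustration, and the real Kubernetes scheduler is far more involved; the only point is the matching of a requested extended resource against advertised node capacity.

```python
# Toy sketch of the scheduling idea described above: a pod asks for an
# extended resource such as "nvidia.com/gpu", and only nodes that
# advertise that capacity are eligible. The node/pod data is made up;
# the real Kubernetes scheduler is far more involved.

nodes = {
    "worker-1": {"cpu": 16, "nvidia.com/gpu": 0},  # plain x86 node
    "worker-2": {"cpu": 16, "nvidia.com/gpu": 4},  # GPU node the operator set up
}

pod = {"name": "training-job", "requests": {"cpu": 4, "nvidia.com/gpu": 1}}

def schedule(pod, nodes):
    """Return the first node with enough capacity for every request."""
    for name, capacity in nodes.items():
        if all(capacity.get(res, 0) >= amt
               for res, amt in pod["requests"].items()):
            return name
    return None

print(schedule(pod, nodes))  # -> "worker-2": the only node advertising GPUs
```

The operator's job, in this picture, is to be the thing that makes "worker-2" advertise `nvidia.com/gpu: 4` at all, by installing the driver and runtime automatically when the hardware is discovered.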
I think Jensen says it's 30%, probably a better number than I have. But so now you're seeing a lot of new innovation in new hardware devices that are attacking that with alternative processors. And then my question is, what about the edge? Is that a BlueField out at the edge? What does that look like to NVIDIA and where does OpenShift play? >> Yeah, so when we talk about the edge, we always have to start by talking about which edge we're talking about, 'cause it's everything outside the core data center. I mean, some of the trends that we see with regard to the edge is, you know, when you get to the far edge, it's single nodes. You don't have the guards, gates, and guns protection of the data center. So you start having to worry about physical security of the hardware. So you can imagine there's really stringent requirements on protecting the intellectual property of the AI model itself. You spend millions of dollars to build it. If I push that out to an edge data center, how do I make sure that that's fully protected? And that's the area where we just announced a new processor that we call Hopper H100. It supports confidential computing so that you can basically ensure that model is always encrypted in system memory, across the PCI bus to the GPU, and it's run in a confidential way on the GPU. So you're protecting your data, which is your model, plus the data flowing through it, you know, in transit, while it's stored, and then in use. So that really adds to that edge security model. >> I wanted to ask you about the cloud, correct me if I'm wrong. But it seems to me that AI workloads have been slower than most to make their way to the cloud. There are a lot of concerns about data transfer capacity and even cost. Do you see that? First of all, do you agree with that? And secondly, is that going to change in the short term? >> Yeah, so I think there's different classes of problems.
So we'll take, there's some companies where their data's generated in the cloud, and we see a ton of, I'll say, adoption of AI by cloud service providers, right? Recommendation engines, translation engines, conversational AI services that all the clouds are building. That's all, you know, our processors. There's also problems that enterprises have where now I'm trying to take some of these automation capabilities, but I'm trying to create an intelligent factory where I want to, you know, merge kind of AI with the physical world. And that really has to run at the edge, 'cause there's too much data being generated by cameras to bring that all the way back into the cloud. So, you know, I think we're seeing mass adoption in the cloud today. I think at the edge a lot of businesses are trying to understand how do I deploy that reliably and securely and scale it. So I do think, you know, there's different problems that are going to run in different places, and ultimately we want to help anybody apply AI where the business is generating the data. >> So obviously very memory intensive applications as well. We've seen you, NVIDIA, architecturally kind of move away from the traditional, you know, x86 approach, to take better advantage of memories, where obviously you have relationships with Arm. So you've got a very diverse set of capabilities. And then all these other components come into use; it used to just be a kind of x86-centric world. And now it's all these other supporting components to support these new applications and it's... How should we think about the future? >> Yeah, I mean, it's very exciting for sure, right? Like, you know, the future, the data is out there at the edge, the data can be in the data center. And so we are trying to weave a hybrid cloud footprint that spans that. I mean, you heard Paul come here, talk about it. But, you know, we've talked about it for some time now.
And so the paradigm really is that, be it an application, and when I say application, it could even be an AI model as a service, you can think about that as an application. How does an application span that entire paradigm from the core to the edge and beyond is where the future is. And, of course, there are a lot of technical challenges, you know, for us to get there. And I think partnerships like this are going to help us and our customers get there. So the world is very exciting. You know, I'm very bullish on how this will play out, right? >> Justin, we'll give you the last word, closing thoughts. >> Well, you know, I think a lot of this is, like I said, how do we reduce the complexity for enterprises to get started, which is why Launchpad is so fundamental. It gives, you know, access to the entire stack instantly, with like hands-on curated labs for both IT and data scientists. So they can, again, walk out with the blueprints they need to set this up and, you know, start on a successful AI journey. >> Just to position it, is Launchpad more of a sandbox, more of a school, or more of an actual development environment? >> Yeah, think of it as, again, it's really for trial, like hands-on labs to help people learn all the foundational skills they need to, like, build an AI practice and get it into production. And again, it's like, you don't need to go champion to your executive team that you need access to expensive infrastructure and, you know, and bring in Red Hat to set up OpenShift. Everything's there for you so you can instantly get started. Do kind of a pilot project and then use that to explain to your executive team everything that you need to then go do to get this into production and drive business value for the company. >> All right, great stuff, guys. Thanks so much for coming to theCUBE. >> Yeah, thanks. >> Thank you for having us. >> All right, thank you for watching. Keep it right there, Dave Vellante and Paul Gillin.
We'll be back right after this short break at the Red Hat Summit 2022. (upbeat music)
Matthew Jones | Red Hat AnsibleFest
>> Welcome back to AnsibleFest. I'm Matthew Jones, I'm the architect of the Ansible Automation Platform. And today I want to talk to you a little bit about what we've got coming in 2021, and some of the things that we're working on for the future. Today, I really want to cover some of the work that we're doing on scale and flexibility, and how we're going to focus on that for the next year. I also want to talk about how we're going to help you grow and manage and use your content on the automation platform. And then finally, I want to look a little bit beyond the automation platform itself. So, last year we introduced Ansible Content Collections. Earlier this year, we introduced the Ansible Automation Hub on Red Hat Cloud. And yesterday you heard Richard mention the private automation hub that's coming later this year. And Automation Hub, Ansible Tower, this is really what the automation platform means for us. It's bringing together that content with the ability to execute and run and manage that content; that's really important. And so what we really want to do is help you bring Red Hat and partner content that you trust together with community content from Galaxy that you may need, and bring this together with content that you develop for yourself, your roles, your collections, the automation that you actually do. And we want to give you control over that content, and help you curate that content and build a community around your automation. We want to focus on a seamless experience with this automation from Ansible Tower and from Automation Hub for the automation platform itself, and make it accessible to the automation and infrastructure that you're managing. Now that we've talked about content a little bit, I want to talk about how you run Ansible. Today in Ansible Tower, you use virtual environments to manage the actual execution of Ansible, and virtual environments are okay, but they have some drawbacks. Primarily, they're not very portable.
It's difficult to manage dependencies and the version of Ansible. Sometimes those dependencies conflict with the other systems that are on the infrastructure itself, even Ansible Tower. So what we've done is created a new system that we call execution environments. Execution environments are container-based. And what we're doing is bringing the flexibility and portability of containers to these Ansible execution environments. And the goal really is portability. And we want to be able to leverage the tools that the community develops, as well as the tools that Red Hat provides, to be able to produce these container images and use them effectively. At Ansible we've developed a tool called Ansible Builder. Ansible Builder will let you bring content collections together with the version of Ansible and Red Hat's base container image so that you can put together your own images for execution environments. And you'll be able to host these on your own private registry infrastructure. If you don't already have a container registry solution, Automation Hub itself provides that registry. The idea here is that, unlike today, where your virtual environments and your production execution environments diverge a little bit from what your developers, your content developers and your automation developers experience, we want to give you the same experience between your production environments and your development environments, all the way through your test and validation workloads. Red Hat's also going to provide some prebuilt execution environments. We want to have some continuity between the experience that you have today on Ansible Tower and what you'll have next year, once we bring execution environments into production. We want you to be able to trust the version of Ansible that's running on your execution environments, and know that you have the content that you expect.
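The Ansible Builder workflow just described starts from an execution environment definition. As an illustration only, here is a sketch of what such a definition might contain, written as Python dicts; the actual ansible-builder input is a YAML file, and the collection names and file names here are placeholders, not a tested configuration.

```python
# Illustrative sketch of an execution environment definition, expressed
# as Python dicts. The real ansible-builder input is a YAML file; the
# collection names and requirements files below are placeholders.

execution_environment = {
    "version": 1,
    "dependencies": {
        "galaxy": "requirements.yml",   # collections to bake into the image
        "python": "requirements.txt",   # Python deps those collections need
    },
}

galaxy_requirements = {
    "collections": [
        {"name": "community.general"},
        {"name": "ansible.posix"},
    ],
}

# The point of the container-based approach: dev, test, and production
# all run the same image, so the environments can't silently diverge.
names = [c["name"] for c in galaxy_requirements["collections"]]
print(names)
```

Whatever the exact schema, the shape is the idea: declare the collections and Python dependencies once, build one image, and run that same image everywhere.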
At the same time, we're going to provide a version of the execution environment that's just the base execution environment. All it has is Ansible. This will let you take, using Ansible Builder, the collections that you've developed, that you need in your automation, and combine them, without having to bring in things that you don't need or that you don't want in your automation, and build them together into a very opinionated container image. If you're interested in execution environments and you want to know how these are built and how you'll use them, we actually have them available for you to use today. Shane McDonald and Adam Miller are giving a talk later with a walkthrough of how to build execution environments and how you'll use them. You can use this to make sure that you're ready for execution environments coming to the automation platform next year. Now that we've talked about how we build execution environments, I want to talk about how execution runs in your infrastructure. So today when you deploy Ansible Tower, you're deploying a monolithic web application. Your execution capability is tied up in how you actually deploy Ansible Tower. This makes scaling Ansible Tower and your automation workloads difficult, and everything has to be co-located together in the same data center. Isolated nodes solve this a little bit, but they bring their own sort of challenges in setting up SSH and having direct connectivity between the control nodes and the execution nodes themselves. We want to make this more flexible and easier to use. And so one of the things that we've created and been working on over the last year is something that we call Receptor. Receptor is an overlay network that's an Automation Mesh. And the goal here is to separate the execution capability of your Ansible content from the control plane capability, where you manage the web infrastructure, the users, the role-based access control.
We want to draw a line between those. We want you to be able to deploy execution environments anywhere. Chris Wright earlier today mentioned edge. Well, edge or cloud, we want you to be able to manage data centers anywhere in the world, and you can do this with the Automation Mesh. The Automation Mesh connects your control plane with those execution nodes anywhere in the world. Another thing that the Automation Mesh brings is that we're going to be able to draw the lines between the control plane itself and each Automation Mesh node. This means that if you have an outage or a problem on your network and on your infrastructure, as long as you can draw a line between the control plane itself and the node that needs to execute the Ansible work, the Automation Mesh can route around problems. The Automation Mesh, in the way it's deployed, also allows this to fit closer with the ingress and egress policies that you have in your infrastructure. It doesn't matter which direction the Automation Mesh itself connects in. Once the connection is established, automation will be able to flow from the control systems to the execution nodes and get responses back. Now, this all works together with the content collections that we mentioned earlier, the execution environments that we were just talking about, and your container registries. All of these work together with these Automation Mesh nodes. They're very lightweight and very simple systems. This means you can scale up and scale down execution capacity as your needs increase or decrease. You don't need to keep around a lot of extra capacity just in case you automate more, just because you're not sure when your execution capacity needs will increase and decrease. This fits into an automated system for scaling your infrastructure and scaling your execution capacity. So that covers the content that you use and manage, and how and where that execution is performed.
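The "route around problems" behavior described above can be pictured as simple path-finding over the mesh topology: as long as some path exists between the control plane and an execution node, work can flow, and losing a relay just changes the route. A toy sketch follows; the topology is invented for illustration, and Receptor's actual routing protocol is more sophisticated than this breadth-first search.

```python
# Toy sketch of the mesh idea: if any path exists between the control
# plane and an execution node, work can flow; lose a relay and the mesh
# routes around it. The topology is invented; Receptor's real routing
# protocol is more sophisticated than this breadth-first search.
from collections import deque

mesh = {
    "control": ["relay-a", "relay-b"],
    "relay-a": ["control", "edge-1"],
    "relay-b": ["control", "edge-1"],
    "edge-1":  ["relay-a", "relay-b"],
}

def find_route(mesh, src, dst, down=()):
    """Breadth-first search that skips failed nodes."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in mesh.get(path[-1], []):
            if peer not in seen and peer not in down:
                seen.add(peer)
                queue.append(path + [peer])
    return None

print(find_route(mesh, "control", "edge-1"))                    # via relay-a
print(find_route(mesh, "control", "edge-1", down={"relay-a"}))  # via relay-b
```

Only when every path is cut does delivery fail, which is exactly the property that lets execution capacity live far from the control plane.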
I want to look a little bit beyond the actual automation platform itself. And specifically, I want to talk about how the automation platform works with OpenShift and Kubernetes. Now, we have an existing installer for Ansible Tower that will deploy to OpenShift and Kubernetes, and we support OpenShift and Kubernetes as a first-class system for deploying Ansible Tower. But I mentioned Automation Hub and Ansible Tower as, this is what the automation platform is for us. So we want to take that installer and replace it with an operator-based, full-lifecycle approach to deploying and managing the automation platform on OpenShift. This operator will be available in OperatorHub. So there's no need to manage complex YAML files that represent the deployment. Since it's available in OperatorHub, you have one place that you can go to manage deployments, upgrades, backup and restore. And all of this works seamlessly with the container groups feature that we introduced last year. But I want to take this a little bit beyond just deploying and upgrading the automation platform from the operator. We want to look at what other capabilities we can get out of those operators. So beyond just deploying and upgrading, we're also creating resource operators and CRDs that will allow other systems running in OpenShift or Kubernetes to directly manage resources within the automation platform. Anything from triggering jobs to getting the status of jobs back, we want to enable that capability if you're using OpenShift and Kubernetes. The first place we're starting with this is Red Hat's Advanced Cluster Management system. Advanced Cluster Management brings together the ability to manage OpenShift and Kubernetes clusters, to install them and manage them, as well as applications and products, managing the life cycle of those across your clusters. So what we really want to do is give you the ability to connect traditional and container-based workloads together.
You're already using the Ansible Automation Platform to manage workloads with Ansible. When you add Advanced Cluster Management and OpenShift and Kubernetes, now you have a full system. You can manage across clouds, across clusters, anywhere in the world. And this sort of brings me back to one of the areas of focus for us. Our goal is complete end-to-end automation. We want to connect your people, your domains and your processes. We want to help you deliver for you and your customers by expanding the capabilities of the Ansible Automation Platform. And we want to make this a seamless experience to both curate content, control the content for your organization, and run the content and run Ansible itself using the full suite of the Ansible Automation Platform. So the Advanced Cluster Management team is giving a talk later where you'll actually be able to see Advanced Cluster Management and the Ansible Automation Platform working together. Don't forget to check out Adam and Shane's talk on execution environments, how those are built and how you can use those. Thank you for coming to AnsibleFest, and we'll see you next time.
theCUBE Insights | Red Hat Summit 2019
>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> Welcome back here on theCUBE, joined by Stu Miniman, I'm John Walls, as we wrap up our coverage of the Red Hat Summit here in 2019. We've been here in Boston all week, three days, Stu, of really fascinating programming, on one hand the keynotes showing quite a diverse ecosystem that Red Hat has certainly built, and we've seen that array of guests reflected as well here on theCUBE. And you leave with a pretty distinct impression about the vast reach, you might say, of Red Hat, and how they've diversified their offerings and their services. >> Yeah, so, John, as we've talked about, this is the sixth year we've had theCUBE here. It's my fifth year doing it, and I'll be honest, I've worked with Red Hat for 19 years, but the first year I came, it was like, all right, you know, I know lots of Linux people, I've worked with Linux people, but, you know, I'm not in there in the terminal doing all this stuff, so it took me a little while to get used to. Today, I know not only a lot more people in Red Hat and the ecosystem, but where the ecosystem has matured and where the portfolio has grown. There have been some acquisitions on the Red Hat side. There's a certain pending acquisition that is kind of a big deal that we talked about this week. But Red Hat's position in this IT marketplace, especially in the hybrid and multi-cloud world, has been fun to watch, and I really enjoyed digging into it with you this week and, John Walls, I'll turn the camera to you because- >> I don't like this. (laughing) >> It was your first time on the program. Yeah, you know- >> I like asking you the questions. >> But we have to do this, you know, three days of Walls to Miniman coverage. So let's get the Walls perspective. >> John: All right. >> On your take. You've been to many shows.
>> John: Yeah, no, I think what's interesting about what I've seen here at Red Hat, at least the impression I got, is this willingness to adapt to the marketplace. There are a lot of command and control models: this is the way it's going to be, and this is what we're going to give you, and you're gonna have to take it and like it. And Red Hat's just on the other end of that spectrum, right? It's very much a company that's built on an open source philosophy. And it's been more of, what has the marketplace wanted? What have you needed? And now, how can we work with you to build it and make it functional? And then we're gonna offer it to a lot of people, and we're gonna make a lot of money doing that. And so, I think to me, that's at least what I got talking to Jim Whitehurst, you know, about his philosophy and where he's taken this company, and he has made it obviously a very attractive entity. IBM certainly thinks so, to the tune of $34 billion. But you see that. >> Yeah, you know, some companies say, oh well, you know, it's the leadership from the top. Well, Jim's philosophy, though, it is The Open Organization. Highly recommend the book, it was a great read. We've talked to him about it on the program, but very much it's 12, 13 thousand people at the company. They're very much opinionated, they go in there, they have discussions. It's not like, well okay, one person passes this down. It's, we're going to debate and argue and fight. Doesn't mean we come to a full consensus, but open source at the core is what they do, and therefore the community drives a lot of it. They contribute it all back upstream, but, you know, we know what Red Hat's doing. It's fascinating to talk to Jim about. Yeah, you know, on the days where I'm thinking glass half empty, it's, you know, wow, they're not yet quite a four-billion-dollar company, and look what an impact they've had.
They did a study with IDC and said ten trillion dollars of the economy is touched through RHEL. But on the half full days, they're having a huge impact outside. He said the $34 billion that IBM's paying is actually a bargain- >> It's a great deal! (laughing) >> for where they're going. But big announcements. RHEL 8, which had been almost five years in the works there. Some good advancements there. But the highlight for me this week really was OpenShift. We've been watching OpenShift since the early days, really pre-Kubernetes. It had a good vision and gained adoption in the marketplace, and was the open source choice for what we called PaaS back then. But when Kubernetes came around, it really helped solidify where OpenShift was going. It is the delivery mechanism for containerization and that container cluster management, and Red Hat has a leadership position in that space. I think that for almost every customer that we talked to this week, John, OpenShift was the underpinning. >> John: Absolutely. >> You would expect that RHEL's underneath there, but OpenShift is the lever for digital transformation. And that was something that I really enjoyed talking about. DBS Bank from Singapore, and Delta, and UPS. We talked about their actual transformation journeys, from both the technology and the organizational standpoint, and OpenShift really was the lever to give them that push. >> You know, another thing, I know you've been looking at this and watching this for many, many years. There's certainly the evolution of open source, but we talked to Chris Wright earlier, and he was talking about the pace of change and how it really is incremental. And yet, if you're on the outside looking in, you think, gosh, technology is just changing so fast, it's so crazy, it's so disruptive, but to hear it from Chris, not so. You don't go A to Z, you go A to B to C to D to D point one. (laughing) It takes time.
And there's a patience almost and a cadence that has this slow revolution that I'm a little surprised at. I sense they, or got a sense of, you know, a much more rapid change of pace and that's not how the people on the inside see it. >> Yeah. Couple of comments back on that. Number one is we know how much rapid change there is going on because if you looked at the Linux kernel or what's happening with Kubernetes and the open source, there's so much change going on there. There's the data point thrown out there that, you know, I forget, that 75% or 95% of all the data in the world was created in the last two years. Yet, only 2% of that is really usable and searchable and things like that. That's a lot of change. And the code base of Linux in the last two years, a third of the code is completely overhauled. This is technology that has been around for decades. But if you look at it, if you think about a company, one of the challenges that we had is if they're making those incremental changes, and slowly looking at them, a lot of people from the outside would be like, oh, Red Hat, yeah that's that little Linux company, you know, that I'm familiar with and it runs on lots of places there. When we came in six years ago, there was a big push by Red Hat to say, "We're much more than Linux." They have their three pillars that we spent a lot of time through from the infrastructure layer to the cloud native to automation and management. Lots of shows I go to, Ansible's all over the place. We talked about OpenShift 4 is something that seems to be resonating. Red Hat takes a leadership position, not just in the communities and the foundations, but working with their customers to be a more trusted and deeper partner in what they're doing with digital transformation. There might have been little changes, but, you know, this is not the Red Hat that people would think of two years or five years ago because a large percentage of Red Hat has changed.
One last nugget from Chris Wright there, is, you know, he spent a lot of time talking about AI. And some of these things can become buzzwords in these environments, but, you know, he hit a nice cogent message with the punchline that machines enhance human intelligence because these are really complex systems, distributed architectures, and we know that the people just can't keep up with all of the change, and the scope, and the scale that they need to handle. So software should be able to be helping me get my arms around it, as well as where it can automate and even take actions, as long as we're careful about how we do it. >> John: Sure. There's one other point, at least, I want to pick your brain about, and that's really the power of presence. The fact that we have the Microsoft CEO on the stage. Everybody thought, well (mumbles) But we heard it from guest after guest after guest this week, saying how cool was that? How impressive was that? How monumental was that? And, you know, it's great to have that kind of opportunity, but the power of Nadella's presence here, it's unmistakable in the message that it sent to this community. >> Yeah, you know, John, you could probably do a case study talking about culture and the power of culture because, I talked about Red Hat's not the Red Hat that you know. Well, the Satya Nadella led Microsoft is a very different Microsoft than before he was on board. Not only are they making great strides in, you know, we talk about SaaS and public cloud and the like, but from a partnership standpoint, Microsoft of old, you know, Linux and Red Hat were the enemy and you know, Windows was the solution and they were gonna bake everything into it. Well, Microsoft partnered with many more companies. Partnerships and ecosystem, a key message this week. We talked about Microsoft with Red Hat, but, you know, announcement today was, surprised me a little bit, but when we think about it, not too much.
OpenShift supported on VMware environments, so, you know, VMware, in that family of Dell, has competitive solutions against OpenShift and, you know, in virtualization too. You know, Red Hat has, you know, RHV, the Red Hat Virtualization. >> John: Right, right, right. >> The old days of the lines and the swim lanes, as one of our guests talked about, really aren't there anymore. Customers are living in a heterogeneous, multi-cloud world and the customers are gonna go and say, "You need to work together, or you're not gonna be there." >> Azure. Right, also we have Azure compatibility going on here. >> Stu: Yeah, deep, not just something tested, but deep integration. I can go to Azure and buy OpenShift. I mean, to say it's, you know, not just in the marketplace, but a deep integration. And yeah, there was a little poke, if our audience caught it, from Paul Cormier. And said, you know, Microsoft really understands enterprise. That's why they're working tightly with us. Uh, there's a certain other large cloud provider that created Kubernetes, that has their own solution, that maybe doesn't understand enterprise as much and aren't working as closely with Red Hat as they might. So we'll see what response there is from them out there. We always love on theCUBE to, you know, look at the horses on the track and where they're racing, but, you know, more and more all of our worlds are cross-pollinating. You know, the AI and AI Ops stuff. The software ecosystems because software does have this unifying factor that the API economy, and having all these things work together, more and more. If you don't, customers will go look for solutions that do provide the full end-to-end solution they're looking for. >> All right, so we're, I've got a couple in mind as far as guests we've had on the show. And we saw them in action on the keynote stage too.
Anybody that jumps out at you, just like, wow, that was cool, not that we, we love all of our children, right? (laughing) But every once in awhile, there's a story or two that does stand out. >> Yeah, so, it is so tough, you know. I loved, you know, the stories. John, I'm sure I'm going to ask you, you know, Mr. B and what he's doing with the children. >> John: Right, Franklin Middle School. >> And the hospitals with Dr. Ellen and the imaging of the brains. You know, those tech for good are phenomenal. For me, you know, the CIOs that we had on our first day of program. Delta was great and going through transformation, but, you know, our first guest that we had on, was DBS Bank in Singapore and- >> John: David Gledhill. >> He was so articulate and has such a good story about, I took outsourced environments. I didn't just bring it into my environment, say okay, IT can do it a little bit better, and I'll respond to business. No, no, we're going to totally restructure the company. Not just "we're a software company." We're a technology company, and we're gonna learn from the Googles of the world and the like. And he said, we want to be considered there, you know, what was his term there? It was like, you know, "Live more, bank less." I mean, what- >> Joyful banking, that was another of his. >> Joyful banking. You don't think of a financial institution as, you know, we want you to think less of the bank. You know, that's just a powerful statement. Total reorganization and, as we mentioned, of course, OpenShift, one of those levers underneath helping them to do that. >> Yeah, you mentioned Dr. Ellen Grant, Boston Children's Hospital, I think about that. She's in fetal neuroimaging and a Professor of Radiology at Harvard Medical School. The work they're doing in terms of diagnostics through imaging is spectacular. I thought about Robin Goldstone at the Livermore Laboratory, about our nuclear weapon monitoring and efficacy of our monitoring.
>> Lawrence Livermore. So good. And John, talk about the diversity of our guests. We had expats from four different countries, phenomenal accents. A wonderful slate of brilliant women on the program. From the customer side, some of the award winners that you interviewed. The executives on the program. You know, Stefanie Chiras, always great, and Denise who were up on the keynote stage. Denise with her 3D printed, new Red Hat logo earrings. Yeah, it was an, um- >> And a couple of old Yanks (laughing). Well, I enjoyed it, Stu. As always, great working with you, and we thank you for being with us as well. For now, we're gonna say so long. We're gonna see you at the next Red Hat Summit, I'm sure, 2020 in San Francisco. Might be a, I guess a slightly different company, but it might be the same old Red Hat too, but they're going to have 34 billion dollars behind them at that point and probably riding pretty high. That will do it for our CUBE coverage here from Boston. Thanks so much for joining us. For Stu Miniman, and our entire crew, have a good day. (funky music)
Mark Little & Mike Piech, Red Hat | Red Hat Summit 2019
>> Voiceover: Live from Boston, Massachusetts, it's the CUBE. Covering your Red Hat Summit 2019. Brought to you by Red Hat. >> And welcome back to our coverage here on the CUBE Red Hat Summit 2019. We're at the BCEC in Beantown, Boston, Massachusetts playing host this week to some 9000 strong attendees, packed keynotes. Just a great three days of programming here and educational sessions. Stu Miniman and I'm John Walls. We're joined by Mike Piech, who's the VP and general manager of Middleware at Red Hat. Mike, good to see you today. >> Great to be back. >> And Mark Little, VP of engineering Middleware at Red Hat. Mark, good to see you as well, sir. >> You too. >> Yeah. First off, let's just talk about your ideas at the show here. Been here for a few days. As we've seen on the keynote stage, wide variety of first off, announcements and great case studies, great educational sessions. But your impressions of what's going on and some of the announcements we've heard about this week. >> Well, sure. I mean definitely some very big announcements with RHEL 8 and OpenShift 4. So as Middleware we're a little bit more in sort of guerrilla mode here while some of the bigger announcements take a lot of the limelight. But nevertheless those announcements and the advances that they represent are very important for us as Middleware. Particularly OpenShift 4 as sort of the next layer up from OpenShift which the developers sort of touch and feel and live and breathe on a daily basis. We are the immediate beneficiaries of much of the advances in OpenShift and so that's something that, we as the Middleware guys sort of make real for the enterprise application developer.
It has a benefit for Middleware as well, but also for Red Hat as a whole. Who would've thought it? >> John: Who would have thought it, right? Yeah, we actually just had Marco Bill-Peter on and he was talking about, he's like "Look, we've actually had some of our support people up in Redmond now for a couple of years." And we had Chris Wright on earlier and he says "You know, sometimes we go to these shows and you get the big bang announcement. It's like, well, really we're working incrementally along the way and in open source you can watch it. Sure, sometimes you get the new chipset or there's a new this or that. But you know, it's very very small things." So in the spirit of that, maybe, you know, give us the updates since last time we got together. What's happening in the Middleware space as you said. If we build up the stack, you know, we got RHEL 8, we got OpenShift 4 and you're sitting on top.
And as we've sort of gone through the, particularly the last year or so, it's really become apparent from what our customers tell us and from what we really see as the opportunities in the cloud-native world. The value that we bring is engineering all these pieces together, right? So that it's not simply a list of these disparate, disconnected, independent services but rather Middleware in the world of cloud native re-imagined. It is capabilities that when engineered together in the right way they make for this comprehensive, unified, cohesive environment within which our customers can develop applications and run those applications. And for the developer, you get developer productivity and then at runtime, you're getting operational reliability. So there really is a sort of a dual-sided value proposition there. And this notion of Middleware engineered together for the cloud is what the application environment idea is all about. >> Yeah. I'd add kinda one of the things that ties into that which has been big for us at least at summit this year is an effort that we kicked off or we announced two months ago called Quakers and as you all know a lot of what we do within Middleware, within Red Hat is based on Java and Java is still the dominant language in the enterprise but it's been around for 20 years. It developed in a pre-cloud era and that made lots of assumptions on the way in which the Java language and the JVM on which it runs would develop which aren't necessarily that conducive for running, in a cloud environment, a hybrid cloud environment and certainly public cloud environment based on Linux containers and Kubernetes. So, we've been working for a number of years in the upstream open JDK community to try and make Java much more cloud-native itself. And Quakers kind of builds on that. It essentially is what we call a kub-native approach where we optimize all of the Middleware stack upfront to work really really well in Kubernetes and specifically on OpenShift. 
And it's all Java though, that's the important thing. And now if people look into this they'll find that we're showing performance figures and memory utilization that is on a par with some of the newer languages like Go for instance, very very fast. Typically your boot time has gone from seconds to tens of milliseconds. And people who have seen it demonstrated have literally been blown away cause it allows them to leverage the skills that they've had invested in their employees to learn Java and move to the cloud without telling them "You guys are gonna have to learn a completely new language and start from scratch" >> All right, so Mark, if I get it right cause we've been at the Kubernetes show for a bunch of years but this is, you're looking at kinda the application side of what's happening in those Kubernetes environments >> Mark: Yeah. So many times we've talked about the platforms and the infrastructure down but it's the app piece on top. Super important. I know down in the DevZone people were buzzing around all the Quarkus stuff. What else for people that are you know, looking at that kinda cloud-native containerization space? What other areas that they should be looking at when it comes to your space?
The second thing is, as I mentioned, Istio which is a sidecar approach. I won't go into details on that but again Istio the aim behind that is to remove from the application developer some of the non-functional business logic that they had to put in there like "How do I use a messaging service? How do I secure this endpoint and push it down the infrastructure?" So the security servers, the messaging servers, the cashing servers et cetera. They move out of the business logic and they move into Istio. But from our point of view, it's our security servers that we've been working on for years, it's our transactional servers that we've been working on for years. So, these are bullet-proof implementations that we have just made more cloud-native by embedding them in a way in Istio and like I said, enabling them with knative. >> I think we'd mentioned that Chris Wright was on earlier and one of the things he talked about was, this new data-eccentric focus and how, that's at the core so much of what enterprise is doing these days. The fact that whenever speed is distributed, they are and you've got so many data inputs come in from, so to a unified user trying to get their data the way they wanna see it. You might want it for a totally other reason, right? I'm just curious, how does that influence or how has that influenced your work in terms of making sure that transport goes smoothly? Because you do have so much more to work with in a much more complex environment for multiple uses that are unique, right? >> (Mike) Yeah. >> It's not all the same. >> Huge, huge impact for sure. The whole idea of decomposing an application into a much larger number of much smaller pieces than was done in the past has many benefits probably one of the most significant being the ability to make small changes, small incremental changes and afford a much more trial and error approach to innovation versus more macro-level planning waterfall as they call it. 
But one of the implications of that is now you have a large number of entities. Whether they be big or small, there's a large number of them running within the estate. And there's the orchestration of them and the interconnection of them for sure but it's an n-squared relationship, right. The more of these entities you have, the more potential connections between each of them you have to somehow structure and manage and ensure are being done securely and so on. So that has really driven the need for new ways of tying things together, new ways essentially of integration. It has definitely amplified the need for disciplines like API management, for example. It has driven a lot of increased demand for an event-driven approach where you're streaming in realtime and distributing events to many receivers and dealing with things asynchronously and not depending on round-trip times for everything to be consistent and so on. So, there's just a myriad of implications there that, at a very detailed technical level, drive some of the things that we're doing now.
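(Mike's n-squared point is simple arithmetic: the number of potential point-to-point connections among n services is n choose 2, which grows quadratically. A quick sketch, with hypothetical service counts:)

```python
def connection_pairs(n: int) -> int:
    """Potential point-to-point links among n services: n choose 2."""
    return n * (n - 1) // 2

# Decomposing one application into many small services multiplies the
# number of connections that must be structured, managed, and secured.
for n in (10, 100, 1000):
    print(f"{n} services -> {connection_pairs(n)} potential connections")
    # 10 -> 45, 100 -> 4950, 1000 -> 499500
```

This is why, as the service count climbs, ad hoc wiring gives way to disciplines like API management and event-driven integration.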
That's certainly what we're doing because at the end of the day people are building these apps to use that data. >> Well, gentlemen, I know you have another engagement. We're gonna cut you loose but I do wanna say you're the first guests to get applause. (guests laugh) >> From across all the way there. People at home can't hear but, so congratulations. You've been well received already. >> I think they're clearly tuned in to the renaissance of the job in here. >> Yes. >> Thank you both. >> Thanks for the time. >> Mark: Thanks so much. >> We appreciate that. Back with more, we are watching a Red Hat summer 2019 coverage live on the CUBE. (Upbeat music)
David Levine, Red Hat | Red Hat Summit 2018
>> Announcer: Live from San Francisco, it's theCUBE, covering Red Hat Summit 2018, brought to you by Red Hat. >> Hello everyone, welcome back to theCUBE's exclusive coverage of Red Hat Summit 2018 in San Francisco, Moscone West. I'm John Furrier, my co-host John Troyer, and we are here with David Levine, Assistant General Counsel of Red Hat, we've got the lawyer in the house. Who's billing for this hour? >> Exactly. >> Welcome to theCUBE. >> Thank you, John, it's good to be here. >> So, obviously the legal challenges, putting GDPR aside, which I don't want to get on that rant, we're not going to talk about, is licenses. In open source, this has been an enabler but also an inhibitor for many in not knowing what license to use or what code is, licenses mean for them, their role in the community, all of this stuff could be a morass of gray area, or just no one's educated in some cases, right? So it's tough. >> And that's what I do. I mean my job is to help bring some order to what you describe as some morass, right. How do we help reassure especially enterprises that it's safe to go in the water, it's safe to use open source. Red Hat is an open source company, our entire business is built on open source, and that sort of has a couple aspects to it. One is on the development side, you know we collaborate in the development of software, but what really enables that are licenses, open source licenses. And much of Red Hat's software is built on top of a particular license, which is called the copyleft license. It's known as the GPL or the General Public License. And it's a great tool to foster collaboration, right? What copyleft means is if I create a piece of software under a copyleft license and I give this software to you, I give it to you with all my copyrights. 
So you have the right to copy it, to distribute it to John, to improve upon it, but the only requirement is if you give it to John, you have to give it to him with the same rights and you have to give him the source code, and if you improve upon it, you have to license the improvements to him under those same rights. So it's this whole virtuous circle, right? I create something, I give it to you, you're able to continue to improve on it, you redistribute it, and we all get to share... >> Furrier: So if I create value, do you get that back? >> If you decide to distribute it to me, you don't have to, >> OK. >> David: But if you distribute it to someone else, then you have to give it back with all those same rights. >> Furrier: So you're paying it forward, basically all the rights forward, >> Exactly. >> Furrier: A dose of good ethos. But then if I improve upon, I create a derivative work, whatever the legal jargon is, >> Right, right. >> Furrier: And I have, this is a magic secret sauce, ten percent of it is magic secret sauce, now I distribute that product, I pass along the license. >> David: Correct. >> Including my secret sauce. >> David: If you decide to, there's nothing that requires you to do it, so a lot of our customers sort of build their secret sauce internally, they keep it within their companies and it doesn't go out any further than that, and that's perfectly fine, but if you decide to distribute it, you have to continue to... >> What does that mean, >> Furrier: Distribute, distribute the software to a partner or the product itself? >> David: It could be both. >> So the product is sold publicly as a service, say a cloud service, and I've got some secret sauce. >> David: So if it's a service, it's a great question and it goes into legal issues, but generally speaking, if you're providing a service that's not a distribution, so I don't really have access to the software. >> Furrier: That's actually a really good thing for developers. >> Yeah. 
>> Well, it's an issue, we are now in a service-oriented world so that's a, we are, maybe that's one of the next things that we as a technology community and an IT industry have to deal with. Certainly, it seems though, David, before we get into the new news here and the specifics of the new development, but open source was scary... A generation or two ago. It seems like, at this point especially in cloud, it's the new normal. Is that as you, inside Red Hat, as you all look at your landscape, it doesn't seem like you have, do you have big Fortune 100 lawyers coming in and yelling at you now versus ten years ago? >> It's a great point. So I've been at Red Hat for 13 years now, so I've seen sort of tremendous change over the years, and when I started in 2005, we were having a lot of discussions with customers about the copyleft aspects of the GPL, you know, this requirement to give back, and there were companies that were concerned about this, but over time, they've become more sophisticated and they're realizing that, notwithstanding what their lawyers were telling them, it really wasn't that dangerous, and I have very few of those conversations today. Most people get it. >> Furrier: And also a lot's changed since that time, I mean right now I think people are seeing the benefits of projects being out in the open, where it's fostering great collaboration. And the productization piece can still exist >> Yeah. >> With that, so that dynamic between productization, AKA commercialization, and open source projects is interesting. So you could almost make the argument, it's easier to be compliant if you just make everything open source because, rather than just re-engineering any fixes, the community can do it for you. >> David: Absolutely. >> So this efficiency's already been proven. >> David: Absolutely. 
And you know, customers are concerned about compliance with all of the obligations under the open source licenses, and one of the things that I try to tell customers is if you take open source, you build it into a product, rather than spend a lot of time focusing on pulling out the obligations into a separate file, just make the source code available, republish it and you get to participate, you get to push your contributions upstream and so you have a whole community that's supporting the contributions that you described. >> Furrier: Okay, so what's the big news here? That GPL, version 2, okay, so first of all, what's the current situation? You guys made a quick tweak in this GPL 2-3 situation, what was the current situation, what was the motivation? Why the change? What's the impact? >> David: So I talked earlier about the GPL, and the GPL has very exacting requirements. I mentioned that if you're going to distribute the software to John, you have to give him the source code, and you have to include a copy of the license. Understanding what is source code, what has to accompany it, depending on how you're distributing the software, that's not always an easy question, and so companies don't always get it right. And one of the challenges with GPL, version 2 is that there is no grace period, and so if you miss something, if you make a mistake in the way that you've tried to meet your license obligations, your license is terminated and you're a copyright infringer, sort of, right at that point in time, and that scares a lot of our customers, it scares enterprises. They need more predictability, they want some level of fairness. >> This is the grace period you're talking about. >> David: Yeah, this is the grace period. So, there's no grace period in version 2 of the General Public License. That problem was fixed when they came out ten years ago with version 3 of the GPL.
So version 3 included this grace period in it, but the challenge is that a lot of code today remains GPL, version 2, so what do we do with that large existing code base? And so, the solution was to adopt the cure provision, or the grace provision from GPL, version 2, I'm sorry, version 3, for GPL, version 2 code. Stop me if I'm speaking too quickly or if I'm getting too technical. So the idea is >> Let's rewind just back 30 seconds. So, do a little playback. So, if we can apply GPL, v.3 to the v.2 code, >> So, the cure period. >> Oh, just the cure period. >> So I'm adopting >> David: the cure period. >> Got it. >> David: So, the license stays the same, the only difference is, I've said that if you fail to meet your obligation to John when you redistribute, I'm going to give you 30 days to fix the problem. >> Furrier: So essentially you grandfather in the v.2 with the grace period. >> We're giving this grace period. >> Troyer: And this is a corporate promise. This doesn't change the license, this is a corporate promise. >> So it's a promise >> David: by any copyright holder, so in my example to you, I'm the sole copyright holder here, but in the Linux kernel, there are thousands of copyright holders. So the Linux kernel developers back in October adopted this same approach, adopted the GPL v.3 cure period for the Linux kernel, which continues to be licensed under GPL, version 2. And then in December, Red Hat led a group of companies that included IBM, Google, Facebook, we all adopted it for our own copyrights. So, we together, those four companies own a lot of copyrights to open source code. And then again in March, six more companies joined us: SAP, Microsoft, Cisco, HPE, SUSE, CA Technologies, and at the Red Hat Summit today, we're asking developers to do the same thing.
We want to show that it's a movement, that we want to cooperate in enforcement, because we think ultimately if we want more people to join the open source ecosystem, we can do that by making enforcement more predictable. >> Furrier: And so what specifically are you asking startups? What's the ask for developers? >> For developers, if you go to, we have a site on GitHub, so it's the GPL Cooperation Commitment, so gplcc.github.io/gplcc. >> And what do they do, just take a guess? >> And you go there, and there's the statement, the same commitment that the companies made, and you go in and add your name to the bottom of the file and submit a pull request, like developers know how to do on GitHub, and your name will be added as a supporter. >> Into the record. >> David: It would apply to every new copy. >> That gives them the primary source (mumbles), or write... >> David: It gives anyone who takes that code, has that peace of mind. >> Furrier: Well, great stuff, great one-on-one on the GPL v.2, v.3 grace period, it's super cool you guys are doing that. It's just such a hassle, I'm sure the complaints have been crazy. The bigger question for me as I look at, 'cause I love that the innovation comes from open source, we're seeing that both on the collaborative side in the project, but also people are really productizing open source and it's running everything. The question is, where do I have code that I, you know, people are programming like crazy, they're slinging code like it's nobody's business right now. So, I might be afraid I'd be liable if I'm an enterprise or a startup that, through venture capital or an M&A process where something's going on, wait a minute, we can't actually sell this because that's his code over there. You didn't comply with the license, so there's always these tripwires in the mind, and sometimes that's just fear, this is a general kind of license hygiene practice. What's your take on that?
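(As an aside, the contributor step David describes above — add your name to the statement file and open a pull request — can be sketched as below. This is a hedged illustration only: the file name, its format, and the example name are assumptions, not the GPLCC repository's actual layout.)

```python
# Hedged sketch of the edit a supporter would make before committing and
# opening a pull request on GitHub. File name and format are assumptions,
# not the real GPLCC repository layout.
from pathlib import Path

supporters = Path("supporters-demo.md")
supporters.write_text("GPL Cooperation Commitment supporters:\n")

# Append your own name at the bottom of the file, as described above
with supporters.open("a") as f:
    f.write("Jane Developer\n")

print(supporters.read_text().splitlines()[-1])  # -> Jane Developer
```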
What's your advice to entrepreneurs, to enterprise developers, to be safe? What should they do as their approach? >> David: That's a great question. I mean, what you want to know is where's my code coming from? And you have, it's a license issue, but it's also a product security issue. If you're taking something from someone, they took it from two places down the food chain, what's the provenance of that code? So, just like from a security perspective... >> Furrier: I've seen M&As go south because of this. >> Yes, so you want to know the source of your code, get it from a trusted source. Make sure that you understand what the license terms are. One of the things that we're trying to encourage developers to do is make sure you attach a license to it, because if you don't, a user or startup's not going to know what rights they have. And that can become problematic if they have a liquidity event. >> Furrier: Okay, so here's my next question. So, the next question is obviously open source is growing and people are joining projects and/or creating projects. So this is a hypothetical: I have a project and I want to donate to CUBE code, to the open source CUBE community. Do I just ship the code, do I have to pick the license, what's the best license? And then I want to also have in mind that I might use Linux and other things, so I have code I've written, proprietary code I want to open up, I've got to pick a license, like, do I just go like that and pick the license out of a hat, or... >> Lots of times, it's sort of dictated for you. So it depends on the ecosystem that you're working with. I mean, if you're working in the Linux kernel ecosystem, generally it's going to be GPL, version 2. So you have to look at what other projects you're working with, is this part of a particular project that already has an existing license? And then it's a philosophical point. I mentioned before, the GPL is a copyleft license.
It forces sharing, right, so it protects John's rights downstream from you, but there are other licenses that are permissive and give you lots of rights, but you could decide what you want to do with it downstream. So if you're okay with people taking your code downstream from you and making it proprietary, then using a permissive license is fine. But if you want to ensure this virtuous circle, then you want to pick a copyleft license. >> Troyer: Paul, do you think we have reached the end stage of open source licenses here? Are you, you know, GPL v.3 is ten years old, and after we started from MIT and Apache, and I could probably list a couple of others and I haven't even been paying attention, so, are we settled down, are we about done? Are you looking for things? >> David: That's a great question. So I was at a conference two weeks ago in Barcelona put on by the Free Software Foundation in Europe, and one of the conference sessions was The Future of Copyleft. You know, is there going to be another copyleft license? Do we need GPL, version 4? It's, you look at what the GPL has done and how many projects are governed by it, and how it's forced this collaboration, it's done amazing things, but it's pretty complicated. So is there a simpler way of accomplishing the same objectives? But I don't know that people have the stomach... >> Furrier: And the answer is? >> Uh, (laughing). I'll come back next year and let you know what I learn... >> Were you worried about, and now I'm going to ask, have to ask this, ask me how you can support open source licensing, so I'll ask you: how can you and me support open source licensing? >> David: So, take the GPL v.3 Cure Commitment, commit your name to supporting greater stability and predictability and fairness in the way enforcement takes place. So, I mean it's an exciting project. It's kind of fun to pull the whole community together. 
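(David's advice above — know the provenance of your code and always attach a license so downstream users know their rights — is commonly implemented today with machine-readable SPDX identifiers in file headers. The interview doesn't name SPDX specifically, so the sketch below is a generic illustration of that convention; the identifier chosen is just an example, not a recommendation.)

```python
# Generic illustration of a machine-readable license header, one common way
# to follow David's advice to attach a license to your code. SPDX is an
# assumption on our part; the interview doesn't name a specific convention.
# SPDX-License-Identifier: GPL-2.0-or-later

def spdx_identifier(header_line: str) -> str:
    """Return the SPDX identifier from a header comment line, if present."""
    prefix = "# SPDX-License-Identifier:"
    if header_line.startswith(prefix):
        return header_line[len(prefix):].strip()
    return "UNKNOWN"

print(spdx_identifier("# SPDX-License-Identifier: GPL-2.0-or-later"))  # -> GPL-2.0-or-later
```

Tools that scan a codebase for these one-line identifiers are exactly how acquirers audit license provenance during the M&A scenarios Furrier describes.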
>> It's quite an accomplishment, too if you think about open source principles are now, again, we don't want to skew other events, but okay, this beginning of another generation of open source greatness certainly, remember the glory days when there was a Tier 2 citizen in the enterprise, you guys made it Tier 1 but now it's going to a whole other level with Cloud-Native, and you're seeing open source ethos being applied to other markets, not just software development. So, you're starting to see the success create this circle of innovation. Have you guys had the "pinch me" moments inside Red Hat, saying, "hey, this is actually working, and really well"? >> David: I think just a couple touchpoints, I mean, I think, look at where Microsoft has come, right? When I joined Red Hat, that wasn't a friendly relationship, but now they've embraced it. Who would have thought 15 years ago that we'd see Microsoft on board and we have. And your point about where else is open source going; one of my colleagues spoke about a year ago to seed developers who were interested in open sourcing seeds, because there was concern about seeds becoming patented and not being able to grow food. And so, thinking about ways to open up the market in seeds. >> Productization is a great thing. >> Yeah, absolutely. >> On the legal front, what's on the horizon? Any hurdles you see, opportunities, challenges that your guys are working on? Obviously, there's always the legal framework, we just commented before you came on with Chris Wright about Blockchain and some of the tokenization around content, we might even see a token economics model in software down the road. So, a lot of interesting legal things happening to rights if you open them up. What's your thoughts on the future? >> So, one of the areas that we're focused on, as is Red Hat, is containers. So what does it mean if you put open source software layers in a container? What does it mean if there are proprietary layers in there? 
Does it mean if you add, if you take my open source software, add a proprietary component, package it in a container and give that container to John, what does it mean for your proprietary layer? Is that, does that have to be licensed under the GPL? And so we spent a lot of time thinking about that a number of years ago and luckily concluded that it may improve the situation as opposed to adding any concerns, so we're thinking about the impact of open source licensing and containers, ensuring, again to your point earlier, what's the provenance of the code? There's so much code now available, making sure that there is a license associated with it. >> It's almost, you just declare all code free. (all laughing) >> Absolutely. >> Well certainly a lot of new things you're seeing, societal change is impacted, you've got self-driving cars and all kinds of new things that are just mind-blowing on a legal framework standpoint. First-time challenges, so you're busy, you're always going to have an interesting job. >> I really think that I have the best job in Red Hat, because I get to think about these things. What does it mean from a licensing perspective? What are the new issues that we're going to face as the technology evolves, the market evolves? And... >> Furrier: Super important, I mean there's tripwires in there, and again, if you don't think about it probably, I know or I've seen from experience, great companies lose big-time acquisition opportunities because of some faulty code on a license, and it's just killed things, and I've seen enterprises get (laughing). I mean, little weird things could happen, you've just got to be on top of it. >> David: I mean, look at what Tesla did in open sourcing their patents, making their patented technology available so that, to help the whole autonomous car industry. We've been doing a lot of work in the patent area as well to ensure that patents don't become an inhibitor to the change that you've described. 
>> Furrier: It's a great conversation, provocative, legal and open source software. These are competitive advantages and opportunities, not challenges and compliance, old-school guarded secrets. Open it up and good things happen. David, thanks for coming on theCUBE. Thanks for sharing the insights on the legal perspectives of licenses as open source software continues to power the globe on a global basis, the global economy, and the technology innovation coming. It's theCUBE, bringing you all the live action here in San Francisco. We'll be right back with more after this short break. (upbeat music) (inspirational music)