Sathish Balakrishnan, Red Hat | Google Cloud Next OnAir '20
>> (upbeat music) >> Announcer: From around the globe, it's theCUBE, covering Google Cloud Next OnAir '20. (upbeat music) >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of Google Cloud Next OnAir '20, of course the nine-week, distributed, all-online program that Google Cloud is doing. We're going to be talking about, of course, multi-cloud. Google, of course, had a big piece in multi-cloud: they took what was originally Borg, they built Kubernetes, they made that open source and gave it to the CNCF. And one of Google's partners, and a leader in that space, is of course Red Hat. Happy to welcome to the program Sathish Balakrishnan, he is the Vice President of Hosted Platforms at Red Hat. Sathish, thanks so much for joining us. >> Thank you. It's great to be here with you on Google Cloud Native Insights. >> Alright. So I teed it up, of course: we talk about hybrid, multi-cloud, and open, and the two companies I probably think of the most, and that have probably said the most about the open cloud, are Google and Red Hat. So maybe if we could start with Hosted Platforms: help us understand what that is, and what is the relationship between Red Hat, the OpenShift team, and Google Cloud? >> Absolutely, great question. I think Google has been an amazing partner for us. We have a lot of things going on with them upstream in the community. We've been with Google on the Kubernetes project since the beginning, and we're the second biggest contributor to Kubernetes, so we have great relationships upstream. We also made Red Hat Enterprise Linux as well as OpenShift available on Google Cloud, so we have customers using both of those offerings, as well as our other offerings, on Google Cloud. And more recently, with our hosted offerings, we actually manage OpenShift on multiple clouds. We relaunched our OpenShift Dedicated offering on Google Cloud back at Red Hat Summit, and there's a lot of interest in the offering. We had offered it back in 2017 with OpenShift 3, we just relaunched it with OpenShift 4, and we've received considerable interest in the Google Cloud OpenShift Dedicated offering. >> Yeah, Sathish, maybe it makes sense if we talk about the maturation of open source solutions. Managed services have seen really tremendous growth, something we've seen especially in the cloud space. Maybe you could walk us through a little bit of that: what are you hearing from customers, and how does Red Hat think about managed solutions? >> Absolutely, Stu, that's a good question. As we see it, customers are looking at multiple infrastructure footprints, be it the public cloud or on-prem. And when they go to the cloud, there's this concept of: I want something to be managed. OpenShift, as you know, is Red Hat's hybrid cloud platform, and with OpenShift, everything we strive to do is about enabling the vision of the open hybrid cloud. But the open hybrid cloud is all about choice, so we want to make sure customers have both the managed as well as the self-managed option. If you really look at it, Red Hat has multiple offerings from a managed standpoint. One, as you know, is OpenShift Dedicated, which runs on AWS and Google, and, as I mentioned earlier,
we relaunched the Google Cloud service at Red Hat Summit back in May, and that's actually getting a lot of traction. We also have a joint offering with Azure that we announced a couple of years back, and there's a lot of interest in that offering, as well as the new offering we announced post-Summit, Amazon Red Hat OpenShift, which is another native offering that we have on Amazon. Having spoken about these offerings, if you really look at Red Hat's evolution as a managed service provider in the public cloud, we've been doing this since 2011. That's kind of surprising to a lot of people, but we've been running OpenShift Online, which is a multi-tenant PaaS solution, since 2011, and we are one of the earliest providers of managed Kubernetes: along with Google Kubernetes Engine, GKE, we launched our OpenShift Dedicated offering back in 2015. So we've been doing managed Kubernetes since OpenShift 3.1. We have a lot of experience with the management of Kubernetes, and through the evolution of OpenShift we've now made it available on pretty much all the clouds, so that customers have the exact same experience they would get on any one cloud across all clouds, as well as on-prem. Customers now have a choice of a self-managed OpenShift or a completely managed OpenShift. >> Yeah. You mentioned choice, and one of the challenges we have right now is really the paradox of choice. If you look at the Kubernetes space, there are dozens of offerings. Of course, every cloud provider has their offering; Google's got GKE, they have Anthos, they have management tools around there. You talked a bit about the experience and all the customers you have, and as one of the cloud providers likes to say, there's no compression algorithm for experience. So what is it about Red Hat OpenShift that really differentiates it in the marketplace from so many of the other offerings, either from the public cloud providers or some of the new startups we should know about? >> Yeah, I think that's an interesting question. It all starts with the fact that it's completely open source, and we are a completely open source company, so there is no proprietary software that we put into OpenShift. OpenShift, even though it has the oc command, is basically native Kubernetes underneath, so you can use native Kubernetes as you choose, on any Kubernetes offering you have, be it GKE, EKS, or any of the other things that are out there. That's how we think about Kubernetes and providers: Red Hat does not believe in open core, it believes completely in open source, and everything we have is open source. The value prop for Red Hat has always been the value of the subscription: we make sure that Kubernetes is taken from the upstream project and is completely productized and available for the enterprise to consume. Beyond that, with the managed offering we provide a lot more benefits on top. First, we are actually customer zero for OpenShift. What does that mean? We will not release OpenShift if we can't run OpenShift Dedicated, or any of our other managed OpenShift services, on that release really, really well.
So you won't get a software version out there that we haven't run ourselves. The second thing is we actually run a lot of workloads within Red Hat that depend on our managed OpenShift offerings. For example, our billing systems and all of those internal things that are important for Red Hat run on managed OpenShift. Those are important services for Red Hat, and we have to make sure they are running really, really well, so we provide that second layer of enterprise assurance. Then, having put OpenShift Online out there in public, we have 4 million applications and a million developers using it. That means it's been out there on the internet, and there are security holes that are constantly being found and plugged. So that's another benefit you get from having a product that runs as a managed service but that enterprises can also use themselves. From an OpenShift standpoint, the real difference is that we add a lot of other things on top of Kubernetes without compromising the native Kubernetes APIs. That helps customers not have to worry about how they're going to get a CI/CD pipeline, or how they have to wire up a bunch of things inside and outside of Kubernetes. Then you have technologies like Istio and service mesh that really help customers abstract away the containerization layer underneath. So those are some of the benefits that we provide with OpenShift. >> Yeah. So Sathish, as you said, there are lots of options when it comes to Kubernetes, and even from Red Hat you've got different consumption models there. If I look inside your portfolio, if it's something that I want to put on my infrastructure and I run OpenShift Container Platform, is that significantly different from the managed platform? Maybe give us a little compare and contrast. What do I have to do as a customer? Is the code base the same? Can I do hybrid environments between them, and what does that mean? >> That's a really, really good question. As I've said, we add a lot of things on top of Kubernetes, but if you just want to use Kubernetes, you can use that too. What we've done with our managed offering is take OpenShift Container Platform and manage it for you. We make sure you get a completely managed service; we handle the patching of the worker nodes and other things, which is, again, another difference we have from the native Kubernetes services. We actually give cluster-admin functionality to customers, which allows them to choose all the options they need from OpenShift Container Platform. So from a code base standpoint, it's exactly the same thing. The only thing is, it's a little bit opinionated to start with, when we deploy the cluster for the customer, and then the customer, if they want, can choose how to customize it. What this really does is take away any of the challenges the customer may have around how to install and provision a cluster, which we've already simplified a lot with OpenShift, but with managed OpenShift, it's really just a click.
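[Editor's note: Balakrishnan's point that the managed and self-managed offerings share the same code base also implies they expose the same Kubernetes APIs. Below is a minimal sketch of that idea using the official Kubernetes Python client; the kubeconfig context names are hypothetical, and this is illustrative rather than anything OpenShift-specific.]

```python
# Minimal sketch: the same client code runs unchanged against a self-managed
# OpenShift Container Platform cluster and a managed OpenShift Dedicated
# cluster, because both expose the standard Kubernetes APIs.
from kubernetes import client, config

for context_name in ["ocp-on-prem", "osd-on-gcp"]:  # assumed kubeconfig contexts
    api_client = config.new_client_from_config(context=context_name)
    core = client.CoreV1Api(api_client=api_client)
    nodes = core.list_node()
    versions = {n.status.node_info.kubelet_version for n in nodes.items}
    print(f"{context_name}: {len(nodes.items)} nodes, kubelet versions {versions}")
```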
>> Great. Sathish, well, I've got the trillion-dollar question for you. One of the things we've been looking at for years, of course, is: what do I keep in my data center? What do I move to the cloud? How do I modernize it? We understand it's a complex and nuanced solution, but you talk to a lot of customers. So here in 2020, what are the trends? What are some of the pieces where you're seeing change and movement that might not have been the case a year ago? >> I think this is an interesting question, and it's an evolving one. It's something where, if you ask ten people, you'll get ten answers, but I'll try to generalize what I've seen from all the customer conversations I've been involved in. I think one thing is very clear: the world is hybrid. As much as anybody may want to say I'm going to go to a single cloud, or I'm going to just be on-prem, it is inevitable that you're going to end up with multiple infrastructure footprints, whether that's multi-cloud, or on-prem plus a single cloud, or on-prem plus multiple clouds. So the main thing we've been noticing, what customers are asking on the whole, is: how do I make sure that my developers are not confused by all these different environments? How do I give them a consistent way to develop and build their applications, and not really worry about what the infrastructure is, what the footprint is that they're actually servicing? That's really, really important. And in terms of things we've seen from customers, I think you always start with compliance requirements and data regulations. You've got to figure out: what compliance do I need, and does the infrastructure or the platform I'm going to meet the compliance requirements I have? What are the data regulations? What is the data I'm going to be storing? Is it going to meet the data sovereignty rules that my country or my geo has? I've got to make sure I worry about that. Then I've got to figure out whether I'm moving to the cloud from the data center, or from one cloud to another cloud. Am I just doing a lift and shift? Am I doing a transformation? What is it that I really need to worry about? In addition to the transformation, they've got to figure out: do I need to do that, or do I not? And then you've got to figure out where your data is going to sit, what your database is going to look like, whether you need to connect to some legacy system that you have on-prem, and how you do that. You have to figure all of that out, given all of these complexities. This is really, really common for any large enterprise that has an enterprise IT footprint that is multi-cloud, that's in multiple geographies, servicing millions of customers. And Red Hat has a lot of experience helping with all of these things. We have Open Innovation Labs, which is a really awesome experience for customers, where they take a small project and figure out how to change things, not only from a technology standpoint but also culturally, because a lot of this is cultural. It's not just moving from one infrastructure to another, but also learning how to do things differently. Then we have things like the container adoption program, which is about: how do you take a big legacy monolithic application? How do you containerize it? How do you break it into microservices? How do you make sure you're getting the real benefits out of moving to the cloud or moving to a container platform?
And then we have a bunch of other things, like how you get started with OpenShift and all of that. So we've had a lot of experience, with our 2,400-plus customers, doing this kind of really heavy workload migration and lifting, so that customers really get the benefits they expect out of OpenShift. >> Yeah. So Sathish, if I think about Google, specifically Google Cloud, one of the main reasons we hear customers use Google is to have access to the data services and the AI services they have. So how does that tie into what we were just talking about? If I use OpenShift and I'm living in Google Cloud, can I access all of those cloud native services? Are there any nuances I need to think about to be able to really unleash the innovation of the platform I'm tying into? >> Absolutely you can, and it's a great question. Customers are always wondering: hey, if I use OpenShift, am I going to be locked out of using the cloud services? And if anything, Red Hat is anti-lock-in. We want to make sure that you can use the best services you need for your enterprise strategy as well as for your applications. To that end, we developed the Operator Framework, which Google has been a very early supporter of. They've built a lot of operators around their services, so you can use those operators to manage the life cycle of those services right from OpenShift. You can connect to an AI service if you want, that's absolutely fine, and you can connect to database services as well, and leverage all of those things while your application runs on OpenShift on Google Cloud. Beyond that, we recognize that when you're talking about the open hybrid cloud, you've got to make sure customers can leverage services that are the same across different clouds. So you can leverage the Google services from on-prem as well, if you choose to, and we have a large catalog of operators in OperatorHub, as well as in the Red Hat Marketplace, that you can go and leverage from third-party ISVs, so that you have the same consistent experience if you choose to, a consistent experience that's not tied to one cloud. You can do that as well. But we want customers to be able to use any service they want, right from OpenShift, without any restrictions.
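[Editor's note: the Operator Framework pattern described above, where an operator watches a custom resource and manages an external service's life cycle from the cluster, can be sketched as follows with the official Kubernetes Python client. The group, kind, and field names are made up for illustration; they are not a real Google or Red Hat API.]

```python
# Hypothetical sketch: create a custom resource that a cloud-service operator
# would reconcile, e.g. provisioning a managed database and reporting status.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the OpenShift/Kubernetes cluster
custom = client.CustomObjectsApi()

database_request = {
    "apiVersion": "example.databases.io/v1alpha1",   # illustrative group/version
    "kind": "ManagedDatabase",                        # illustrative kind
    "metadata": {"name": "orders-db", "namespace": "team-a"},
    "spec": {"provider": "gcp", "tier": "small", "region": "us-central1"},
}

custom.create_namespaced_custom_object(
    group="example.databases.io",
    version="v1alpha1",
    namespace="team-a",
    plural="manageddatabases",
    body=database_request,
)
```

In this pattern the operator, not the application, owns provisioning, credential wiring, and day-2 operations for the external service; the application only sees the resulting Secret or ConfigMap.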
>> Yeah. One of the other things we've heard a lot from Google over the last year or so has been about helping customers, especially with those mission-critical, business-critical applications, things like SAP, and you talked a bit about databases. What advice would you give customers these days, as they're looking at increasing or moving forward in their cloud journeys? >> I think that's an interesting question, because customers really have to look at: what is their IT and technology strategy? What are the different initiatives they have? Is it digital transformation? Is it cloud native development? Is it just containerization, or do they have an overarching theme? They've got to really figure that out, and I'm sure they're looking at it and know which one is the higher priority, when all of them are interrelated in some ways. They've also got to figure out how they're going to expand into new business. Because, as we said, IT is basically what is driving business now; software is eating the world, and software services are eating even more of it. So you've got to figure out: what are your business needs? Do you need to be more agile? Do you need to enter new businesses? Those are important things. BMW is a great example: they use OpenShift Container Platform as well as OpenShift Dedicated. They are a hundred-plus-year-old car company, and guess what they're trying to do: they're now building connected-car infrastructure. That's the main thing they're trying to build, so that they can service the cars in any geo. So in one swoop, they went from being a car manufacturing company to focusing on being a SaaS, an edge, and an IoT company. If you really look at it, the cars are like the internet of things on an edge computer, and what does that use case require? That use case can no longer be served from just one data center in Munich; they have to build a global platform of data centers, or they can really easily go to the cloud. And then they need to make sure that their application developers, as they start to run on multiple clouds and multiple geographies, have the same abstraction layer, so that they can deploy fast and develop fast and don't have to worry about the underlying infrastructure. That's basically why they started using OpenShift, and why they're big supporters of OpenShift; I think it's the right fit for their use case. So it really depends on what the customer is looking for, but irrespective of that, OpenShift fits in nicely, because what it does is provide you that commonality across all infrastructure footprints. It gives you all the productivity gains, and it allows you to connect to any service that you want, anywhere, because we are agnostic to that, and as well, we bring a whole lot of services through the Red Hat Marketplace, so you can leverage those too. >> Well, Sathish Balakrishnan, thank you so much for the updates. Great to hear about the progress you've got with your customers, and thank you for joining us for the Google Cloud Next OnAir event. >> Thank you, Stu. It's been great talking to you, and I look forward to seeing you in person one day. >> Alright. I'm Stu Miniman, and thank you, as always, for watching theCUBE. (upbeat music)
Clayton Coleman, Red Hat | Google Cloud Next OnAir '20
>> Announcer: From around the globe, covering Google Cloud Next. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Google Cloud Next. Happy to welcome back to the program one of our CUBE alumni, Clayton Coleman. He's the architect for Kubernetes and OpenShift with Red Hat. Clayton, thanks for joining us again, great to see you. >> Good to see you. >> Alright. So of course, one of the challenges in 2020 is that we'd love to be able to get the community together, and while we can't do it physically, we do get to do it through all of the virtual events and online forums. Of course, we had theCUBE at Red Hat Summit, at KubeCon for the European show, and now Google Cloud. So give us your state of the state, 2020, for Kubernetes. Of course it was Google taking the technology from Borg, a few people working on it, and this project has just had massive impact. So where are we with the community and Kubernetes today? >> So, 2020 has been a crazy year for a lot of folks. A lot of what I've been spending my time on is taking feedback from people in this time of change and concern and worry and a huge shift to the cloud, and working with them to make sure that we have a really good foundation in Kubernetes, that the ecosystem is healthy, and that things are moving forward there. So there are a ton of exciting projects. I will say the pandemic has had an impact on the community, and so in many places we've reacted by slowing down our schedules or focusing more on the things that people are really worried about, like quality and bugs and making sure that the stuff just works. I will say this year has been a really interesting one in open source. There's been much more focus, I think, on how we start to tie this stuff together, and on new use cases and new challenges coming in. The original Kubernetes was very focused on helping you bring your applications together and giving you common abstractions for working with them. We went through a phase where we made it easy to extend Kubernetes, which brought a whole bunch of new abstractions. And I think now we're starting to see the challenges and the needs of organizations, companies, and individuals operating not just in Kubernetes, but across multiple locations, across placement; edge has been huge in the last few years. And so the projects in and around Kubernetes are reacting to that. They're starting to bridge many of these disparate locations, different clouds, multi-cloud, hybrid cloud, connecting enterprises to data centers or connecting data centers to the cloud, helping workloads be a little bit more portable in and of themselves, but also helping workloads move. And then I think we're really starting to ask those next big questions about what comes next for making applications really come alive in the cloud, where you're not as focused on the hardware, you're not focused on the details, you're focused on abstractions like reliability and availability, not just in one cluster but in multiple. So that's been a really exciting transition in many of the projects that I've been following. Certainly projects like Istio that I've been dealing with, spanning clusters and connecting existing workloads in; each step along the way, I see people broaden their scope about what they want open source to help them with. >> Yeah, it's been fascinating to watch just the breadth of the projects that can tie in and leverage Kubernetes. You brought up edge computing, and I want to get into some of the future pieces, but before we do, let's look at Kubernetes itself. 1.19 is kind of where we are at, and I already see some folks talking about 1.20. Can you just talk about the base project itself, contributions to it, how the upstream works, and how customers should think about their Kubernetes environment? Obviously Red Hat, with OpenShift, has had a very strong position, and you've got thousands of customers now using it. All of the cloud providers have their Kubernetes flavor, but you also partner with them. So walk us through a little bit about the open source, the project, and those dynamics. >> The project is really healthy. I think we've gotten through a couple of big transitions over the last few years. We've moved on from the original bootstrap steering committee, which I was on, trying to help with the governance model, and the full bootstrap committee has handed off responsibility to new participants. There's been a lot of growth in project governance and community governance. I think there's huge credit due to the folks on the steering committee today, and the folks who are part of contributor experience, for standardizing and formalizing Kubernetes as its own thing. I think we've really moved into being a community-managed project. We've developed a lot of maturity around that, and the folks involved in helping Kubernetes be successful have actually been able to help others within the CNCF ecosystem, and other open source projects outside of CNCF, be successful. So that angle is going phenomenally well. Contribution is up. I think one of the tension points we've talked about is that Kubernetes is maturing. 1.19 spent a lot of time on stability, and while there are definitely lots of interesting new things in a few areas like storage, and we have Ingress v2 coming up on the horizon, and dual-stack support has been hotly anticipated by a lot of on-premise folks looking to make the transition to IPv6, I think we've been a little less focused on chasing features and more focused on just making sure that Kubernetes is maturing responsibly, now that we have a really successful ecosystem of integrators and vendors, and the conformance efforts in Kubernetes. There's been some great work there. I happen to be involved in the architecture conformance definition group, and there's been some amazing participation from that group of people, who have made real strides in growing the testing efforts, so that not only can you look at two different Kubernetes vendors, but you can compare them in meaningful ways. That's actually helped us with our test coverage in Kubernetes. There's been a lot of focus on making sure that upgrades work well, that we've reduced the flakiness of our test suites, and that when contributors come into Kubernetes, they're not presented with a confusing mass of instructions but have a really clear path to make their first contribution, and their next contribution, and the one after that. So from a project maturity standpoint, I think 2020 has been a great year for the project, and I want to see that continue. >> Yeah. One of the things we talked quite a bit about, both at Red Hat Summit and at KubeCon + CloudNativeCon Europe, was operators. And I believe there were some updates also about how operators can work with Google Cloud. So can you give us that update? >> Sure. There's been a lot of growth in both the client tooling and the libraries and frameworks that make it easy to integrate with Kubernetes. Those integrations are about patterns that make operations teams more productive, but it takes time to develop the domain expertise in operationalizing large groups of software. So over the last year, the controller-runtime project, which is an outgrowth of the Kubernetes SIG API Machinery work, an offshoot intended to standardize and make it easier to write integrations with Kubernetes, has taken that next step. Going beyond that, Red Hat has worked with others in the community around the Operator SDK, unifying that project and trying to get it aligned with others in the ecosystem. Almost all of the cloud providers have written operators. Google has been an early adopter of the controller and operator pattern and has continued to put time and effort into helping make the community successful. And we're really appreciative of everyone who's come together to take some of those ideas from Kubernetes and extend them, whether it's running databases as a service on top of Kubernetes or integrating directly with the cloud. Most of that work, almost all of it, benefits everybody in the ecosystem. I think there's some future work that we'd like to see: folks from a number of places have gone even further and tried to boil Kubernetes down into simpler mechanisms that you can integrate with, a little bit more of a beginner's approach, a simplification, a domain-specific operator kind of idea, and that really does accelerate people getting up to speed with building these sorts of integrations. But at the end of the day, one of the things I really see is the increasing integration between the public clouds and the Kubernetes running on top of those clouds, through capabilities that make everybody better off. So whether you're using a managed service on a particular cloud, or whether you're running the elements of that managed open source software using an open source operator on top of Kubernetes, there are a lot of abstractions there that are really productive for admins. You might use the managed service for your production instances, but you want throwaway database instances for developers. There's a lot of experimentation going on, so it's almost difficult to say what the most interesting part is. Operators are really more of an enabling technology. I'm really excited to see that increasing glue that makes automation and dev ops teams more productive, just because they can rely increasingly on open source and managed service offerings from the large cloud providers working well together. >> Yeah. You mentioned that we're seeing all these other projects tying into Kubernetes, and we're seeing Kubernetes going into broader use cases, things like edge computing. What, from an architectural standpoint, needs to be done to make sure that Kubernetes can be used there and meets the performance and simplicity needs of these various use cases? >> That's a good question. There's a lot of complexity in some areas of what you might do in a large application deployment that doesn't make sense in edge deployments, but you get advantages from having a reasonably consistent environment. I think one of the challenges everybody is going through is: what is that reasonable consistency? What are the tools? One of the challenges, obviously, as we have more and more clusters, is that a lot of the approaches around edge involve a single cluster on a single machine, a fairly beefy but remote computer, that you still need to keep in sync with your application deployment. You might have a different life cycle for the types of hardware that you're rolling out, whether it's regional or tied to when someone can go out to that particular site to update the software. Sometimes it's connected, sometimes it isn't. So I think a need that is becoming really clear is that there are a lot of abstractions missing above Kubernetes, and everyone's approaching this differently. We've got GitOps and centralized config management. We have architectures where you boot up and go check some remote cloud location for what you should be running. I think there are some productive abstractions that haven't been explored sufficiently yet: over the next couple of years, how do you treat a whole bunch of clusters as a pool of compute where you're not really focused on the details of where a cluster is, or how can you define applications that can easily move from your data center out to the edge or back up to the cloud, and get those benefits of Kubernetes in all those places? And this is so early that what I see in open source, and what I see with people deploying this, is that everyone is approaching it subtly differently, but you can start to see some of those patterns emerge: you need reproducible bundles of applications, things that Helm can do, or that you can do just very simply with Kubernetes. Not every edge location needs an ingress controller or a way to move traffic onto that cluster, because its job is to generate traffic and send it somewhere else. But then that puts more pressure on the places where you're feeding that data to your APIs, whether that's a cloud or something within a private data center: you need enough commonality across those clusters and across your applications that you can reason about what's going on. So there's a huge amount of space here, and I don't think it's just going to be Kubernetes. In fact, I want to say I think we're starting to move to that phase where Kubernetes is just part of the platform that people are building or need to build, and the question is what we can do to build the tools that help you stitch together compute across a lot of footprints, and parts of applications across a lot of footprints. There's a bunch of open source projects that are trying to drive toward that today, projects like Knative, with the work being done with eventing in Knative. And eventing matters hugely: when we talk about edge, we'd almost be remiss not to talk about moving data, and when you talk about moving data, you want streams of data and you want to react to data with compute. Knative and Istio are both great examples of technologies within the Kube ecosystem that are starting to broaden out from being just about one Kube cluster to a world where we really need to stitch together a consistent mindset of development, even if we have a reasonably consistent Kubernetes across all those footprints. >> Yeah. Well, Clayton, so important. There are so many technologies out there, and it stops being about that technology; it's just a given, an underlying piece of it. We don't talk about the internet, we don't talk as much about Linux anymore, because it's just in the fabric of everything we do, and it sounds like we're saying that's where we're getting to with Kubernetes. I'd love to pull on that thread. You mentioned that you're hearing some patterns starting to emerge out there. So when you're talking to enterprises, especially here in 2020, when lots of companies have suddenly had to accelerate those transformational projects they were doing so that they can move faster and keep up with the pace of change, what should enterprises be working on? What feedback are you hearing from customers? What are some of those themes that you can share, and what should everybody else be getting ready for? >> The most common pattern, I think, is that many people still find a need to build platforms, or standardization of how they do application development, across fairly large footprints. I think what they're missing, and this is what everyone's kind of building on their own today and where there's a real opportunity within the community, is abstractions around location: not really about clusters or machines, but something broader than that. Whether it's folks who need to be resilient across clouds, or folks who are looking to bring together disparate footprints to accelerate their move to the cloud or to modernize their on-premise stack, they're looking for abstractions that are productive, that let them say: I don't really want to worry too much about the details of clusters or machines or applications; I'm talking about services and where they run, and I need to stitch those deeply into some environments but not others. That pattern has been something we've been exploring for a long time within the community. The Open Service Broker project has been a long-running effort to genericize one type of interface; operators and some of the abstractions in Kubernetes for extending Kubernetes in new dimensions are another. What I'm seeing is that people are building layers on top, through continuous deployment and continuous integration, building their own APIs, building their own services that really hide these details. I think there's a really rich opportunity within open source to observe what's going on and to offer some supporting technologies that bridge clouds and bridge locations, let you deal with compute at a little bit more of an abstract level, and really double down on making services run well. I think we're kind of ready to make the transition to say, officially, it's not just about applications, which is what we've been saying for a long time: I've got these applications and I'm moving them. We want to flip it around and be service focused, and services have a couple of characteristics: the details of where they run matter less than the guarantees that you're providing for your customers. We lack a lot of open source tools that make it easier to build and run services, not just to consume open source as dependencies or to run open source software, but the things that make our applications more resilient in and of themselves. I think Kubernetes was a good start. I really see organizations struggling with that today: you're going to have multiple locations, you're going to have the need to dramatically move workloads. What are the tools that the whole open source ecosystem can collaborate on to help accelerate that transition? >> Well, Clayton, you teed up the last thing I want to ask you. We're here at the Google Cloud show, and when you talk about ecosystem and community, Google and Red Hat are both very active participants. So you collaborate with a lot of people from Google, I'm sure. Give our audience a little bit of insight into Google's participation and what you've been seeing from them over the last couple of years. >> Google has been a great partner and a great member of the ecosystem for Red Hat. We've worked really closely with them on Istio and Knative and a number of other projects, and as always, I'm continually impressed by the ability of the folks I've worked with from Google to really take a community focus and to concentrate on actually solving use cases. There's always the desire to create drama around technology or strategy or business in open source; really, we're all coming together to work on common goals. I really want to thank the folks I've worked with at Google over the years, who've been key participants. They've believed very strongly in enabling users. Regardless of business or technology, it's about making sure that we're improving software for everyone. One of the beauties of working on an open source project like Kubernetes is that everyone can get some benefit out of it, and the sum of all the individual contributions is much larger than what simple math would imply. I think Kubernetes has been a huge success, and I want to see more successes like that, working with Google and others in the open source ecosystem around infrastructure as a service and this broadening domain of places where we can collaborate to make it easier for developers, operations teams, dev ops, and sec ops to just get their jobs done. There's a lot more to do, and I think open source is the best way to do that. >> Alright. Well, Clayton Coleman, thank you so much for the update. It's really great to catch up. >> It was a pleasure. >> Alright. Stay tuned for lots more coverage of Google Cloud Next 2020, virtually. I'm Stu Miniman. Thank you for watching theCUBE.
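[Editor's note: one way to picture the "clusters as a pool of compute" and GitOps-style config management Coleman describes is a small script that pushes one desired state to every cluster context in a kubeconfig. This is a hedged sketch of the idea, not how any particular GitOps tool works; it relies on the official Kubernetes Python client, and the image, names, and contexts are made up.]

```python
# Sketch: apply the same Deployment to every cluster listed in the kubeconfig,
# treating the set of clusters as one pool rather than managing each by hand.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def desired_deployment() -> client.V1Deployment:
    container = client.V1Container(name="edge-agent", image="registry.example.com/edge-agent:1.2.3")
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="edge-agent", namespace="default"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "edge-agent"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-agent"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    apps = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx["name"]))
    try:
        apps.create_namespaced_deployment(namespace="default", body=desired_deployment())
    except ApiException as err:
        if err.status == 409:  # already exists, so converge it to the desired state
            apps.replace_namespaced_deployment(
                name="edge-agent", namespace="default", body=desired_deployment()
            )
        else:
            raise
    print(f"reconciled edge-agent on cluster context {ctx['name']}")
```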
Ranga Rangachari, Red Hat | Google Cloud Next 2019
>> Announcer: Live from San Francisco, it's theCUBE, covering Google Cloud Next '19. Brought to you by Google Cloud and its ecosystem partners. >> We're back at Google Cloud Next, at the new, improved Moscone Center. This is day two of theCUBE's coverage of Google's big cloud show. theCUBE is a leader in live tech coverage, my name is Dave Vellante, and I'm here with my co-host Stu Miniman. John Furrier is walking the floor, checking out the booth space. Ranga Rangachari is here, he's the Vice President and General Manager of Cloud Storage and Hyperconverged Infrastructure at Red Hat. Ranga, good to see you again. >> Hi Dave, hi Stu, good to see you again too. >> Thanks for coming on. This show, it's growing nicely, and good thing Moscone is new and improved. How's the show going for you? >> The show's going really good. I just had a chance to walk around the booths, and there have been a lot of interesting conversations, at the Red Hat booth too, a lot of interesting conversations with customers. >> A lot of tailwinds these days for Red Hat. We talk about that a lot on theCUBE, this whole notion of hybrid cloud; you guys have been on that since the early days. >> Yeah. >> Multi-cloud, omni-cloud, hyper-converged infrastructure, it's in your title. It's like all the moons are lining up for you guys. Is it just luck, skill, great prediction powers? What's your take? >> Well, I think it's a combination of those, but more importantly, it's about listening to our customers. I think that's what gives us, today, the permission to talk to our customers about some of these things they're doing, because when we talk to them, it's not just about solving today's problems, but also where they're headed, and anticipating where they're going and the ability to meet their needs. That's what it is, I think. >> So the Google partnership, we were talking earlier, it started 10 years ago with the hypervisor. >> Yup. >> And it's really evolved. Where is it today, from your perspective? >> Well, I think the cooperation in the technical community continues to go very well, and a couple of data points: one is on Kubernetes, which started four, five years ago, and that's going really strong. But more importantly, as the industry matures, there are what I would call special interest groups that are starting to emerge in the Kubernetes community. One thing that we are paying very close attention to is the storage SIG, which is about the ability to federate storage across multiple clouds, and how you do it seamlessly within the framework of Kubernetes, as opposed to trying to create a hack, or a one-off that some vendors have attempted to do. So we try to take a very holistic view of it and make sure, I mean, the industry we are in is trying to drive volumes, and volume drives standards, so I think we pay very, very close attention-- >> And the objective there is: leave the data in place if possible, provide secure access and fast access, provide high-speed data movement if necessary, protect the data in motion. That is a complex problem. >> It is, and that's why I think it's very important that the community together solves the problem, not just one vendor. It's about how you facilitate it; the holy grail is how you facilitate data portability and application portability across these hybrid clouds. And a lot of the things that you talked about are part and parcel of that, but what users don't wanna do is stitch them together. They want a simple, easy way.
And the most common example that we often get asked about is: can I migrate my data from one cloud to the other, or from on-prem to a public cloud, based on certain policies? That's a prototypical example of how federated storage and other things can help with that. >> Ranga, bring us inside some of those customer conversations, 'cause we talk on theCUBE, customers always say I want multi-vendor, yes, I don't want lock-in, portability is a good thing. But at the end of the day, if some of these things are a science experiment or they're difficult, well, sometimes it's easier just to kind of stick with a similar environment. We know the core of Red Hat: if I build on top of RHEL, then I know it can work lots of places. So where are customers at, and how does that fit into this whole discussion of multi-cloud? >> So, what I can give you is a perspective on the hybrid cloud, the product strategy that we've been on for the better part of a decade now, which is around facilitating the hybrid cloud. If you look at the storage nature, the data nature, of the conversations, it's almost two sides of the same coin. Developers want storage to be invisible: they don't wanna be in the business of stitching their LUNs and doing zone masking and all that stuff. But at the same time, they want storage to be ubiquitous. So they want it to be invisible, and they want it to be ubiquitous. That's one of the key themes we hear from our customers. >> Come on, Ranga, you guys are announcing storage-less storage this year, right? >> Yeah, (laughs) exactly. (laughs) So that's a great point. The other part that we are also seeing from our customer conversations, and let me give you kind of the Red Hat, inside-out perspective: for any product, anything that we release to the market, the first filter we run it through is, will it help our customers with their open hybrid cloud journey? That becomes the filter for any new features we add and any go-to-market motion, so that there is a tremendous amount of impedance match, if you will, between where we're going and how customers can succeed with their open hybrid cloud journey. >> So, in thinking about some of the discussions you're having with customers on their hybrid cloud strategy specifically, what are those conversations like? What are the challenges that they're having? It's a maturity spectrum, obviously, but what are you seeing at each level of the spectrum, and where are some of those formulation and execution challenges? >> So, as the industry evolves and the technology matures, the conversations change; 12 or 24 months ago it was a dramatically different conversation. It was all around: help me get there. Now people really understand, and most of the conversations that we see, and even the other industry players are seeing this, start with on-prem looking out, as opposed to the cloud looking in. Customers say: look, I've invested a tremendous amount of assets and intellectual horsepower into building my on-prem infrastructure and making it solid; now give me the degree of freedom to move certain workloads to one or many of these public clouds. That's a huge shift in the conversations we have with customers. If you click one or a couple of levels below, the conversation turns to things like security, as you pointed out. How do you ensure that if I move my workload, my overall corporate compliance requirements aren't in any way compromised?
So that's one aspect. The other aspect is manageability: can I really manage this infrastructure from a proverbial single pane of glass? So now the conversations are less theoretical; it's more, I've started the journey, help me make this journey successful. >> So when you talk about the perspective of: I've built up this on-prem infrastructure, I've invested a ton in it, and now help me connect, I can see a mindset that would say think cloud first. Of course, the practical reality says I've got all this technical debt. So how much of that is gonna be a potential pitfall down the road for some of these companies, in your view? >> Well, I think it's not so much technical debt. In one way you could call it technical debt, but the other aspect is, how do you really leverage the investment that you've made without having to just say, well, I'm gonna do things differently? That's why I think the conversations we have with our customers are mutually beneficial: we can help them, but by the same token they can help us understand where some of the roadblocks are. And through our products and through our services, we can help them circumvent or mitigate some of those-- >> And those assets aren't depreciated on the books; they've gotta get a return on them, right? >> So, Ranga, we know that one of the areas where Red Hat and Google end up working a lot together is in the Cloud Native Computing Foundation. >> Yep. >> Bring us up to speed as to where we are with that storage discussion, 'cause I think back to when Docker launched: it was, oh, it's gonna be wonderful and everything, but we all lived through virtualization, and we had to fix networking and storage challenges there. Networking seemed to go a little further along, and there have been a few different viewpoints as to how storage should be looked at in the containerized, Kubernetes and Istio world that we're moving towards today. >> So one example that illustrates storage being at the center of this is a project called Rook.io. If you're familiar with it, think of it as kind of sitting between the storage infrastructure and Kubernetes. And that is getting a tremendous amount of traction, not just in the community, but even within the CNCF; I could be wrong here, but my understanding is it's a project that's in the incubation phase right now. So we are seeing a lot of industry commitment to that Rook project, and you're gonna see real, live use cases where customers are now able to fulfill the vision of data portability and storage portability across these multiple hybrid clouds. >> So Kubernetes is obviously taking off, although again, there's a maturity spectrum: some customers are diving in, and others maybe not so much. What are you seeing as some of the potential blockers? How are people getting started? Can you just download the code and go? What are you seeing there? >> That's a very interesting question, because we look at it as projects versus products. Kubernetes is a project: a phenomenal amount of velocity, a phenomenal amount of innovation. But once you deploy it in your production environment, things like security and life cycle management all have to be in place before somebody deploys it.
That's why the tremendous amount of market acceptance we've had with OpenShift is a proof point that it is kind of the best Kubernetes out there, because it's enterprise-ready: people can deploy it, people can use it, people can scale with it, and not be worried about things like life cycle management and security, all the things that come into play when you deal with an upstream project. So what we've seen from customers is that people start to dabble: they'll look at Kubernetes, what's going on, and understand where the areas of innovation are. But once they start to say, look, I've got it deployed for some serious workloads, they look for a vendor who can provide all the necessary ingredients for them to be successful. >> We were having a good discussion earlier about customers' perspectives: I wanna get as much out of that asset as I possibly can. You said something that interested me, and I wanna go back to it. Customers want options to be able to migrate to various clouds. My question is, do you sense that that's because they wanna manage their risk, they want an exit strategy? Or are they actively moving more than once? Maybe they wanna go once and then run in the cloud. Or are you seeing a lot of active movement of that data? >> I think the first-order bit in those discussions is about the workloads: what workload do they wanna run? And once they decide that, then, for instance with Google Cloud, the ML and AI types of workloads lend themselves very well to the Google Cloud infrastructure. So when a customer says, look, this is the workload I wanna run on-prem, but I want the elastic capability to run it on one of these public clouds, often the decision criterion seems to be what workload it is and where's the best place to run it. And then the rest of the stuff comes into play. >> So, Ranga, let's step back for a second. I come out of this show, Google Cloud, this year, and I'm hearing open, multi-cloud; it reminds me of words I've heard going to Red Hat Summit every year. Help us kind of squint through a little bit as to where Red Hat sits. If I'm the C-suite of an enterprise customer today, where does Red Hat fit in the partnership with customers, and where do the partners fit into that overall story? >> So, our view is, let's look at it from the customer in. Practically every customer that we talk to wants to embark on an open hybrid cloud strategy, and I wanna stress the open part of it, because it's easier to say, okay, let me go build a hybrid cloud; the more difficult part is how you facilitate it through an open hybrid cloud story. That's the march, if you will, that we've been on for the last five-plus years, and in that business strategy and technology strategy we've been unwavering. On the partner side, we truly believe that for us to be successful, and for our customers to be successful, we need an ecosystem of partners. The cloud providers are absolutely a critical ingredient and a critical component of the overall strategy, and I think together with our partners, our core technology, and our go-to-market routes, we can really solve our customers' problems; we are solving them today, and we think we can continue to solve them over time. >> You talk about open, and open has a lot of different definitions. And again, UNIX was supposedly open once. (laughs) I see that potentially as one real, solid differentiator of Red Hat.
I mean, your philosophy on open. What do you see as your differentiators in the marketplace? >> Well, I think the first is obviously open, like you said. The second part is, I think I hinted upon it earlier, which is, projects are good. I think they are almost a fountain of ideas and things, but I think where we spend a tremendous amount of hours and energy is to transform it from the upstream project into a product. And if you go back to Red Hat Linux, I think we've shown that Linux was in the same kind of state, in a way, 10, 20 years ago. And I think what we've shown to the industry is, by being solely committed and focused on making these projects enterprise ready, I think we've shown the market the way and made it successful. So I think for us, the next wave, whether it's Kubernetes, whether it's other things, it's a very similar recipe book, nothing dramatically different. But fundamentally what we want to do is help our customers take advantage of those innovations, yet not compromise on what they need in their enterprise data centers. >> The recipe book is similar, but you've gotta make bets. You've made some pretty good bets over the years. >> Yep. >> We could debate about OpenStack, but I mean, even there. But that's not an easy thing for an open source company to do. 'Cause you've gotta pick your poison, you have to provide committers. What's the secret sauce there? >> Well, I think, first off, the number one secret sauce from our perspective is to add more technical and intellectual horsepower to these communities. And not so much for the sake of community; it's about, does it solve a real business problem for our customers? That's the way we go about it, because in the open source community, I don't even know, hundreds of thousands of open source projects are out there. And we, and our office of the CTO, pay very close attention to all the projects out there, identify the ones that have promise, not just from our perspective but from the customers' perspective, and invest in those areas. And a lot of them have succeeded, so we think we'll do well in that. >> Alright, so, Ranga, one of the biggest announcements this week is Anthos from Google. Wanna get your viewpoint as to where that fits. >> I think it's a good announcement. I haven't read through all the details, but part of it is, I think it validates, to a certain extent, what Red Hat has been talking about for the last five, seven years, which is you need a unified way to deploy, manage, provision your infrastructure, not just on public clouds, but a seamless way to connect to on-prem. And I think Anthos is a validation of how we've been thinking about the work. So we think it's great. We think it's really good. >> Ranga Rangachari, thanks so much for coming back on theCUBE. >> Thank you, David! >> It's always a pleasure. >> Thank you again, Stu. >> We have Red Hat Summit coming up in early May; theCUBE will be there, Stu will be co-hosting. You're watching theCUBE, day two of Google Cloud Next 2019 from Moscone. We'll be right back. (upbeat music)
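To make the Rook.io point from this conversation a bit more concrete: Rook is a Kubernetes operator, so "sitting between the storage infrastructure and Kubernetes" in practice means declaring a storage cluster as a custom resource and letting the operator reconcile the underlying infrastructure to match it. The sketch below is a minimal, hedged illustration using the upstream Rook/Ceph conventions; the image tag, namespace, and field values are assumptions for illustration and do not come from the interview.

```python
# Illustrative sketch: declaring a Ceph cluster through Rook's custom resource,
# so Kubernetes (via the Rook operator) drives the storage layer declaratively.
# Field values, versions, and namespaces are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod

ceph_cluster = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephCluster",
    "metadata": {"name": "rook-ceph", "namespace": "rook-ceph"},
    "spec": {
        "cephVersion": {"image": "ceph/ceph:v14"},   # assumed image tag
        "dataDirHostPath": "/var/lib/rook",
        "mon": {"count": 3},                          # three monitors for quorum
        "storage": {"useAllNodes": True, "useAllDevices": True},
    },
}

# The Rook operator watches this custom resource and reconciles the
# underlying storage infrastructure to match the declared state.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="ceph.rook.io",
    version="v1",
    namespace="rook-ceph",
    plural="cephclusters",
    body=ceph_cluster,
)
```

Because the storage cluster is expressed as data rather than as site-specific tooling, the same declaration can be applied to any conformant Kubernetes cluster, which is the mechanism behind the data and storage portability claim made above.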
Mike Evans, Red Hat | Google Cloud Next 2019
>> Announcer: Live from San Francisco, it's the Cube, covering Google Cloud Next nineteen. Brought to you by Google Cloud and its ecosystem partners. >> We're back at Google Cloud Next twenty nineteen. You're watching the Cube, the leader in live tech coverage. I'm Dave Vellante with my co-host Stu Miniman. John Furrier is also here. Day two of our coverage, hashtag GoogleNext19. Mike Evans is here. He's the vice president of technical business development at Red Hat. Mike, good to see you. Thanks for coming back in the Cube. >> Great to be here. >> So, you know, we're talking hybrid cloud, multi-cloud. You guys have been on this OpenShift for half a decade. You know, there were a lot of deniers, and now it's a real tailwind for you, and the whole world is jumping on that bandwagon. It's gotta make you feel good. >> Yeah, no, it's nice to see everybody echoing a similar message, which we believe is what the customers' demand and interest is. So that's a great validation. >> So how does that tie into what's happening here? What's going on with the show? >> It's interesting. And let me take a step back, because I've been working with Google on their cloud efforts for almost ten years now. And it started back when Google, when they were about to get in the cloud business, had to decide whether they were going to use KVM as their hypervisor. And that was a time when we had just switched to, made a big bet on, KVM because of its alignment with the Linux kernel. But it was controversial, and we helped them do that. And I looked back on my email recently, and that was two thousand nine. That was ten years ago, and that was early stages. And then, since that time, you know, the cloud market has obviously boomed. Again, I was sort of looking back ahead of this discussion and saying, you know, in two thousand six and two thousand seven is when we started working with Amazon, with RHEL on their cloud, back when everyone thought there's no way a bookseller is going to make an impact in the world, etcetera. And as I just play it forward to today, looking at thirty thousand people here, and, you know, what's sort of evolved, I'm just fascinated that, you know, open source is now obviously fully mainstream. And there's no more doubters, and it's the engine for everything. >> Mike, maybe, you know, bring us inside. So KVM, the underpinning, we know well is, you know, core to the multi-cloud strategy of Red Hat. And there's a lot that you've built on top of it. Speak a little bit to some of the engineering relationships going on, joint customers that you have, and kind of the value proposition. You know, Red Hat in general is agnostic to where it lives, but there's got to be special work that gets done in a lot of places. >> RHEL on Google, yeah, yeah, yeah. Through the years, we've really done a lot of work to make sure that RHEL as a foundation works really well on GCP. So that's been a really consistent effort, whether it's around optimization for performance, security, elements like that, so that that provides a nice base for anybody who wants to move any workload or application from on-prem over there, or from another cloud. And that's been great. And then, beyond that, you know, we've also worked with them, obviously, the upstream community dynamics have been really productive between Red Hat and Google, and Google has been one of the most productive and positive contributors and participants in open source.
And so we worked together on probably ten or fifteen different projects, and it's a constant interaction between our upstream developers where we share ideas and agree on direction. >> So obviously Kubernetes is a big one. You know, when you see the list, it's Google and Red Hat right there. Give us a couple of examples of some of the other ones. >> I mean, again, KVM is also a foundation, and one that people kind of forget about these days. But it still is a very pervasive technology and continuing to gain ground. You know, there's the Knative stuff, there's the Istio stuff, and the ML and AI side, which is a whole fascinating category in my mind as well. >> I'm kind of a real student of industry history, and so I like to talk to folks who have been there and try to get it right. But there was sort of this gestation period from two thousand six to two thousand nine in cloud. Yeah, well, like you said, it was a bookseller. And then even in the downturn, a lot of CFOs said, hey, capex to opex, boom! And then coming out of the downturn, it was shadow IT around that two thousand nine timeframe. But it was, like you say, a hypervisor discussion, you know, we're going to put VMware in our cloud, and you had a lot of traditional companies fumbling with their cloud strategies. And you had the big data craze, and obviously open source was a huge part of that. And then containers, which, of course, have been around since, well, Linux. Yeah, yeah, and I guess with Docker, boom, it started to go crazy. And now it's like this curve is reshaping with AI and sort of a new era of data. Thoughts on sort of the accuracy of that little historical narrative, and why that big uptick with containers? >> Well, a couple of things there. One, the data, the whole data evolution, and this is a fascinating one. I'm coming up on nineteen years there, so I've seen a lot of the elements of that history, and one of the constant questions we would always get, sometimes from investors, is why don't you guys buy a database company? You know, years ago. And we would, you know, we didn't always look at it. Or why aren't you guys doing a Hadoop distribution, when that became, or Spark, etcetera. And we always looked at it and said, you know, we're a platform company, and if we were to pick any one database, it would only cover some percentage, and there's so many, and then it just kind of upsets the others. So we've decided we're going to focus not on the data layer. We're going to focus on the infrastructure and the application layer, and work down from it and support the things underneath. So it's consistent now with the ML and AI explosion, where, you know, Google was a pioneer of ML. They've got some of the best services, and then we've been doing a lot of work with NVIDIA in the last two years to make sure that all the GPUs, wherever they're run, hybrid, private cloud, on multiple clouds, that those are enabled in RHEL and enabled in OpenShift. Because what we see happening, and NVIDIA does also, is right now all the applications being developed for ML are written by extremely technical people. When you write to TensorFlow and things like that, you've kind of got to be able to write at a C compiler level. So we're working with them to bring OpenShift up to become the sort of more mass mainstream tool to develop
AI- and ML-enabled apps. Because the value of having RHEL underneath OpenShift is that every piece of hardware in the world is supported, right, and every cloud. And then when we add that GPU enablement to OpenShift and middleware and our storage, everything inherits it. So to me, the most valuable piece of real estate that we own in the industry is actually RHEL, and then everything builds upon that. >> It's interesting, what you said about the database. Of course, we had a long discussion about that this morning. You're right, though, Mike, you either have to be, like, really good at one thing, like a DataStax or Cassandra or a Mongo, and there's a zillion others that I'm not mentioning, or you've got to do everything, you know, like the cloud guys are doing out there. You know, every one of them's got an operational, you know, analytics, RDS, NoSQL, I mean, one of each, you know, and then you have to partner with them. So I would imagine you looked at that as well and said, how are we going to do all that? >> Right. And there's only, you know, there's so many competitive dynamics coming at us, and, you know, we've always been in the mode where we've been the little guy battling against the big guys, whoever it may be, whether it was, you know, Sun, IBM and HP, Unix in the early days. Oracle was our friend for a while, then they became, you know, not an enemy, but a competitor on the Linux side. And Amazon was an early friend, and then, though, they did their own Linux. So that's a normal operating model for us, to have this, you know, big competitive dynamic with a partnering-- >> Dynamic. You gotta win it in the marketplace, the customers say. Come on, guys. >> Right, we'll figure it out together. >> Figure it out together. We talked earlier about hybrid cloud. We talked about multi-cloud, and some people think those are the same thing, but I think they're actually, you know, different. Yeah, hybrid, you think of, you know, on-prem and public, and hopefully some level of integration and common data plane and control plane, and multi-cloud has sort of evolved from multi-vendor. How do you guys look at it? Is multi-cloud a strategy? How do you look at hybrid? >> Yeah, I mean, it's simple in my mind, but I know the words, the terms, get used by a lot of different people in different ways. You know, hybrid cloud to me is just that straightforward: being able to run something on premise, being able to run something in a public cloud, and have it be somewhat consistent or shareable or movable. And then multi-cloud is being able to do that same thing with multiple public clouds. And then there's a third variation on that, which is, you know, wanting to do an application that runs in both and shares information, which I think, you know, you saw that in the Google Anthos announcement, where they're talking about their service running on the other two major public clouds. That's the first of any sizable company. I think that's going to be the norm, because it's become more normal, wherever the infrastructure is that a customer's using, if Google has a great service, they want to be able to tell the user to run it on the infrastructure of their choice. >> Yeah, so, like, you brought up Anthos, and at the core it's GKE. So it's the Kubernetes we've been talking about, and, as they said, it works with AWS and works with Azure.
But it's GKE on top of those public clouds. Maybe give us a little bit of, you know, compare and contrast of that versus OpenShift. OpenShift lives in all of these environments, too, but they're not fully compatible. How does that work? >> So, on Anthos, which was announced yesterday, two high-level comments. I guess one is, as we talked about at the beginning, it's a validation of what our message has been: hybrid cloud is of value, multi-cloud is of value. So that's a productive element, to help promote that vision and that concept at a macro level; we talked about all of it. It puts us in a competitive environment, more with Google than it was yesterday or two days ago. But again, that's our normal world. We partnered with IBM and HP and competed against them on Unix. We partnered with Microsoft and compete with them. So that's normal. That said, you know, we believe, with OpenShift having five-plus years in market and over a thousand customers and very wide deployments, and already running in Google, Amazon and Microsoft clouds, already there and solid, and people doing real things with it, plus coming from the position of an independent software vendor, we think that is a more valuable position for multi-cloud than a single cloud vendor. So that's, you know, welcome to the party, in a sense. And going on-prem, I say welcome to the jungle for all these public cloud companies going on-prem. You know, it's a lot of complexity when you have to deal with, you know, American Express's infrastructure, Bank of Hong Kong's infrastructure, Ford Motors' infrastructure, and it's a-- >> Right, right. You know, Google before only had to run on Google servers in Google data centers. Everything's a very clean environment, one temperature. >> And enterprise customers have a little bit different demands in terms of versioning and when they upgrade and how long they let things run, so there's a lot of differences. >> Actually, that was one of the things Corey Quinn was doing some analysis with us on there. And Google, for the most part, is, if we decide to pull something, you've got kind of a one-year window, you know? How does Red Hat look at that? >> I mean, my guess is they'll evolve over time as they get deeper in it. Or maybe they won't. Maybe they have a model where they think they will gain enough share on their own. But I mean, we were built on enterprise DNA, and we've evolved to cloud and hybrid multi-cloud DNA. We love, again, we love when people say I'm going to the cloud, because when they say they're going to the cloud, it means they're doing new apps or they're modifying old apps. And we have a great shot at landing that business when they say we're doing something new. >> Well, right, right. Even whether it's on-prem or in the public cloud, right? When they say they'll go to the cloud, they talk about the cloud experience, right? And that's really what your strategy is, to bring that cloud experience to wherever your data lives. Exactly. So talking about that multi-cloud, or omni-cloud, when we sort of look at the horses on the track, and you say, okay, you've got VMware going after that, you've got, you know, IBM and Red Hat going after that, now Google, sort of, huge cloud provider, you know, doing that. Wherever you look, there's Red Hat.
Of course, I know you can't talk much about the IBM integration, but an IBM executive once said to me, Stu, that we're like a recovering alcoholic: we learned our lesson from mainframe, we are open, we're committed to open. So we'll see. But Red Hat is everywhere, and your strategy presumably has to stay that sort of open, neutral course going forward. >> I'll give you a couple of examples from a while ago, probably five, six years ago, when the cloud stuff was still early. I had two CEO conference calls in one day, and one was with a big graphics, you know, Hollywood graphics company. After we explained all of our cloud stuff, you know, we had nine people on the call explaining all our cloud, and the guy said, okay, let me just tell you, the biggest value you bring to me is having RHEL as my single point of sanity, so I can move this stuff wherever I want. I just attach all my applications, I attach third-party apps and everything, and then I can move it wherever we want. So RHEL was that big for them, and I still think that's true. And then there was another large gaming company who was trying to decide how to move forty thousand servers from their own cloud to a public cloud, and how they were going to do it. And they had, you know, the head of servers, the head of security, the head of databases, the head of networking, the heads of nine different functions there, and they were all in disagreement at the end. And the CEO said at the end of the day, Mike, I've got, like, a headache. I need some vodka and Tylenol now. So give me one simple piece of advice: how do I navigate this? I said, if you just write every app to RHEL and JBoss, and this was before OpenShift, no matter where you want to run them, RHEL and JBoss will be there. And he said, excellent advice, that's what we're doing. So there's something really beautiful about the simplicity of that, which a lot of people overlook, with all the hand-waving of Kubernetes and containers and fifty versions of Kubernetes certified and, you know, etcetera. So I think there's something really beautiful about that. We see a lot of value in that single point of sanity, and allowing people flexibility at, you know, a pretty low cost, to use RHEL as your foundation. >> Open source, hybrid cloud, multi-cloud, omni-cloud, all tailwinds for Red Hat. Mike, we'll give you the final word, a bumper sticker on Google Cloud Next, or any other final thoughts. >> To me, it's great to see thirty thousand people at this event. It's great to see Google getting more and more invested in the cloud and more and more invested in the enterprise. I think they've had great success in a lot of non-enterprise accounts, probably more so than the other clouds, and now they're coming this way. They've got great technology, our engineers love working with their engineers, and now we've got a more competitive dynamic. And like I said, welcome to the jungle. >> We've got Red Hat Summit coming up, Stu. Early May, is it? >> Absolutely, back in Beantown, Dave. >> Nice. Okay, I'll be in London then, and right at Summit time in Boston in May. Good deal. Mike, thanks very much for coming on. >> Thank you. >> It's great to see you. >> Good to see you. >> All right, everybody, keep it right there. Stu and I will be back. John Furrier is also in the house. You're watching the Cube, Google Cloud Next twenty nineteen. We'll be right back.
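As a concrete aside on the GPU enablement for ML workloads that Mike describes in this interview, the usual mechanism on Kubernetes or OpenShift is an extended resource request in the pod spec, which the scheduler uses to place the container on a GPU node. The sketch below is a minimal, assumption-laden illustration: the `nvidia.com/gpu` resource name follows the common NVIDIA device-plugin convention, and the namespace, image, and training script are placeholders, not details taken from the interview.

```python
# Minimal sketch: a pod that requests one GPU for a containerized ML workload.
# Assumes the NVIDIA device plugin is installed and an "ml-demo" namespace exists.
from kubernetes import client, config

config.load_kube_config()

gpu_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="tf-training", namespace="ml-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="tensorflow/tensorflow:latest-gpu",  # assumed image
                command=["python", "train.py"],             # assumed entrypoint
                resources=client.V1ResourceRequirements(
                    # The extended resource request steers scheduling onto a GPU node.
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-demo", body=gpu_pod)
```

The point of the pattern is the one made in the conversation: the data scientist asks for a GPU in the workload description, and the platform, whether on-prem or on any public cloud, is responsible for finding and exposing the hardware.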
Jonathan Donaldson, Google Cloud | Red Hat Summit 2018
(upbeat electronic music) >> Narrator: Live from San Francisco, it's The Cube, covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hey, welcome back, everyone. We are here live, The Cube in San Francisco, Moscone West, for the Red Hat Summit 2018 exclusive coverage. I'm John Furrier, the cohost of The Cube. I'm here with my cohost, John Troyer, who is the co-founder of Tech Reckoning, an advisory and community development firm. Our next guest is Jonathan Donaldson, Technical Director, Office of the CTO, Google Cloud. A former Cube alumni, formerly at Intel, been on before, now at Google Cloud for almost two years. Welcome back, good to see you. >> Good to see you too, it's great to be back. >> So, had a great time last week with the Google Cloud folks at KubeCon in Denmark. Kubernetes, rocking the world. Really, when I hear the words de facto standard and abstraction layers, I start to get, my bells go off, let me look at that. Some interesting stuff. You guys have been part of that from the beginning, with the CNCF, Google, Intel, among others. Really created a movement, congratulations. >> Yeah, thank you. It really comes down to the fact that we've been running containers for almost a dozen years. Four billion a week, we launch and collapse. And we know that at some point, as Docker and containers really started to take over the new way of developing things, that everyone is going to run into that scalability wall that we had run into years and years and years ago. And so Craig and the team at Google, again, I wasn't at Google at this time, but they had a really, let's take what we know from internally here and let's take those patterns and let's put them out there for the world to use, and that became Kubernetes. And so I think that's really the massive growth there, is that people are like, "Wow, you've solved a problem, "but not from a science project. "It's actually from something "that's been running for a decade." >> Internally, that's called Borg. That's the tooling that Google used, that their SREs, site reliability engineers, used to massively provision and manage. And they're all software engineers, so it's not like they're operators. They're all Google engineers. But I want to take a minute, if you can, to explain. 'Cause you're new to Google Cloud. You're in the industry, you've been around, you helped form the CNCF, which is the Cloud Native Computing Foundation. You know cloud, you know tech. Google's changed a lot, and Google Cloud specifically has a narrative of, they're one big cloud and they have an application called Google, and enterprises are different. You've been there now for almost a year or more. >> Jonathan: Little over a year, yeah. >> What's Google Cloud like right now? Break the myths down around Google Cloud. What's the current status? I know personally, a lot of cloud DNA is coming in from the industry. They've been hiring, making some great progress. Take a minute to explain the Google Cloud.
Really, at that point in time, no one cloud understood any of the enterprise specifically. And so what they did is they started hiring in people like myself and others that are in the group that I'm in. They're former CIOs of large enterprise companies or former VPs of engineering, and really our job in the Office of the CTO for Google Cloud is to help with the product teams, to help them build the products that enterprises need to be able to use the public cloud. And then also work with some of those top enterprise customers to help them adopt those technologies. And so I think now that if you look at Google Cloud, they understand enterprise really, really well, certainly from the product and the technology perspective. And I think it's just going to get better. >> I interviewed Jennifer Lynn, I had a one-on-one with her. I didn't publish it, it was more of a briefing. She runs Product Management, all on security side. >> Jonathan: Yeah, she's fantastic. >> So she's checking the boxes. So the table stakes are set for Google. I know you got to do some basic things to catch up to get in the cloud. But also you have partnerships. Google Next is coming up, The Cube will be there. Red Hat's a partner. Talk about that relationship with Red Hat and partners. So you're very partner-centric with Google Cloud. >> Jonathan: We are. >> And that's important in the enterprise, but so what-- >> Well, there tends to be two main ares that we focus on, from what we consider the right way to do cloud. One of them is open source. So having, which again, aligns perfectly with Red Hat, is putting the technologies that we want customers to use and that we think customers should use in open source. Kubernetes is an example, there's Istio and others that we've put out that are examples of those. A lot of the open source projects that we all take for granted today were started from white papers that we had put out at one point in time, explaining how we did those things. Red Hat, from a partner perspective, I think that that follows along. We think that the way that customers are going to consume these technologies, certainly enterprise customers are, through those partners that they know and trust. And so having a good, flourishing ecosystem of partners that surround Google Cloud is absolutely key to what we do. >> And they love multicloud too. >> They love multicloud. >> Can't go wrong with it. >> And we do too. The idea is that we want customers to come to Google Cloud and stay there because they want to stay there, because they like us for who we are and for what we offer them, not because they're locked into a specific service or technology. And things like Kubernetes, things like containers, being open sourced allows them to take their tool chains all the way from their laptop to their own cloud inside their own data center to any cloud provider they want. And we think hopefully they'll naturally gravitate towards us over time. >> One of the things I like about the cloud is that there's a flywheel, if you will, of expertise. Like I look at Amazon, for instance. They're getting a lot of metadata of the kinds of workloads that are on their cloud, so they can learn from that and turn that into an advantage for them, or not, or for their customers, and how they could do that. That's their business decision. Google has a lot of flywheel action going on. A lot of Android devices connected in the Google system. You have a lot of services that you can bring to bear in the cloud. 
How are you guys looking at, say, from a security standpoint alone, that would be a very valuable service to have. I can tap into all the security goodness of Google around what spear phishing is out there, things of that nature. So are you guys thinking like that, in terms of services for customers? How does that play out? >> So where we, we're very consistent on what we consider is, privacy is number one for our customers, whether they're consumer customers or whether they're enterprise customers. Where we would use data, you had mentioned a lot of things, but where we would use some data across customer bases are typically for security things, so where we would see some sort of security impact or an attack or something like that that started to impact many customers. And we would then aggregate that information. It's not really customer information. It's just like you said, metadata, themes, or trends. >> John Furrier: You're not monetizing it. >> Yeah, we're not monetizing it, but we're actually using it to protect customers. But when a customer actually uses Google Cloud, that instance is their hermetically sealed environment. In fact, I think we just came out recently with even the transparency aspects of it, where it's almost like the two key type of access, for if our engineers have to help the customer with a troubleshooting ticket, that ticket actually has to be opened. That kind of unlocks one door. The customer has to say, "Yes," that unlocks the other door. And then they can go in there and help the customer do things to solve whatever the problem is. And each one of those is transparently and permanently logged. And then the customer can, at any point in time, go in and see those things. So we are taking customer privacy from an enterprise perspective-- >> And you guys are also a whole building from Google proper, like it's a completely different campus. So that's important to note. >> It is. And a lot of it just chains on from Google proper itself. If you understood just how crazy and fanatical they are about keeping things inside and secret and proprietary. Not proprietary, but not allowing that customer data out, even on the consumer side, it would give a whole-- >> Well, you got to amplify that, I understand. But what I also see, a good side of that, which is there's a lot of resources you're bringing to bear or learnings. >> Yeah, absolutely. >> The SRE concept, for instance, is to me, really powerful, because Google had to build that out themselves. This is now a paradigm, we're seeing a cloud scale here, with the Cloud Native market bringing in all-new capabilities at scale. Horizontally scalable, fully synchronous, microservices architecture. This future is a complete game-changer on functionality at the different scale points. So there's no longer the operator's room, provisioning storage here. >> And this is what we've been doing for years and years and years. That's how all of Google itself, that's how search and ads and Gmail and everything runs, in containers all orchestrated by Borg, which is our version of Kubernetes. And so we're really just bringing those leanings into the Google Cloud, or learnings into Google Cloud and to our customers. >> Jonathan, machine learning and AI have been the big topic this week on OpenShift. Obviously that's a big strength of Google Cloud as well. Can you drill down on that story, and talk about what Google Cloud is bringing on, and machine learning on OpenShift in general? Give us a little picture of what's running. 
>> Yeah, so I think they showed some of the service broker stuff. And I think, did they show some of the Kubeflow stuff, which is taking some machine learning and Kubernetes underneath OpenShift. I think those are very, very interesting for people that want to start getting into using AutoML, which is kind of roll-your-own machine learning, or even the voice or vision APIs to enhance their products. And I think that those are going to be keys. Easing the adoption of those, making them really, really easy to consume, is what's going to drive the significant ramp on using those types of technologies. >> One of the key touchpoints here has been the fact that this stuff is real-world and production-ready. The fact that the enterprise architecture now rolling out apps within days or weeks. One of those things that's now real is ML. And even in the opening keynote, they talked about using a little bit of it to optimize the scheduling and what sessions were in which rooms. As you talk to enterprises, it does seem like this stuff is being baked into real enterprise apps today. Can you talk a little bit about that? >> Sure, so I certainly can't give any specific examples, because what I think what you're saying is that a lot of enterprises or a lot of companies are looking at that like, "Oh, this is our new secret sauce." It always used to be like they had some interesting feature before, that a competitor would have to keep up with or catch up with. But I think they're looking at machine learning as a way to enhance that customer experience, so that it's a much more intimate experience. It feels much more tailored to whomever is using their product. And I think that you're seeing a lot of those types of things that people are starting to bake into their products. We've, again, this is one of these things where we've been using machine learning for almost 10 years inside Google. Things like for Gmail, even in the early days, like spam filtering, something just mundane like that. Or we even used it, turned it on in our data centers, 'cause it does a really good job of lowering the PUE, which is the power efficiency in data centers. And those are very mundane things. But we have a lot of experience with that. And we're exposing that through these products. And we're starting to see people, customers gravitate to grab onto those. Instead of having to hard code something that is a one to many kind of thing, I may get it right or I may have to tweak it over time, but I'm still kind of generalizing what the use cases are that my customers want to see, once they turn on machine learning inside their applications, it feels much more tailored to the customer's use cases. >> Machine learning as a service seems to be a big hot button that's coming out. How are you guys looking at the technical direction from the cloud within the enterprise? 'Cause you have three classes of enterprise. You have the early adopters, the power, front, cutting-edge. Then you have the fast followers, then you have everybody else. The everybody else and fast followers, they know about Kubernetes, some might not even, "What is Kubernetes?" So you have kind of-- >> Jonathan: "What containers?" >> A level of progress where people are. How are you guys looking at addressing those three areas, because you could blow them away with TensorFlow as a service. "Whoa, wowee, I'm just trying to get my storage LUNs "moving to a cloud operation system." There's different parts of this journey. Is there a technical direction that addresses these? 
What are you guys doing? >> So typically we'll work with those customers to help them chart the path through all those things, and making it easy for them to use and consume. Machine learning is still, unless you are a stats major or you're a math major, a lot of the algorithms and understanding linear algebra and things like that are still very complex topics. But then again, so is networking and BGP and things like OSPF back a few years ago. So technology always evolves, and the thing that you can do is you can just help pull people along the continuum there, by making it easy for them to use and to provide a lot of education. And so we work with customers on all ends of the spectrum. Even if it's just like, "How do I modernize my applications, "or how do I even just put them into the cloud?" We have teams that can help do that or can educate on that. If there are customers that are like, "I really want to go do something special "with maybe refactoring my applications. "I really want to get the Cloud Native experience." We help with that. And those customers that say, "I really want to find out this machine learning thing. "How can I actually make that an impactful portion of my company's portfolio?" We can certainly help with that. And there's no one, and typically you'll find in any large enterprise, because there'll be some people on each one of those camps. >> Yeah, and they'll also want to put their toe in the water here and there. The question I have for you guys is you got a lot of goodness going on. You're not trying to match Amazon speed for speed, feature for feature, you guys are picking your shots. That is core to Google, that's clear. Is there a use case or a set of building blocks that are highly adopted with you guys now, in that as Google gets out there and gets some penetration in the enterprise, what's the use, what are the key things you see with successes for you guys, out of the gate? Is there a basic building? Amazon's got EC2 and S3. What are you guys seeing as the core building blocks of Google Cloud, from a product standpoint, that's getting the most traction today? >> So I think we're seeing the same types of building blocks that the other cloud providers are, I think. Some of the differences is we look at security differently, because of, again, where we grew up. We do things like live migration of virtual machines, if you're using virtual machines, because we've had to do that internally. So I think there are some differences on just even some of the basic block and tackling type of things. But I do think that if you look at just moving to the cloud, in and of itself is not enough. That's a stepping stone. We truly believe that artificial intelligence and machine learning, Cloud Native style of applications, containers, things like service meshes, those things that reduce the operational burdens and improve the rate of new feature introduction, as well as the machine learning things, I think that that's what people tend to come to Google for. And we think that that's a lot of what people are going to stay with us for. >> I overheard a quote I want to get your reaction to. I wrote it down, it says, "I need to get away from VPNs and firewalls. "I need user and application layer security "with un-phishable access, otherwise I'm never safe." So this is kind of a user perspective or customer perspective. Also with cloud there's no perimeters, so you got phishing problems. Spear phishing's one big problem. Security, you mentioned that. 
And then another quote I had was, "Kubernetes is about running frameworks, "and it's about changing the way "applications are going to be built over time." That's where, I think, SRE and Istio is very interesting, and Kubeflow. This is a modern architecture for-- >> There's even KubeVirt out there, where you can run a VM inside a container, which is actually what we do internally too. So there's a lot of different ways to slice and dice. >> Yeah, how relevant is that, those concepts? Because are you hearing that as well on the customers? 'Cause that's pain point, but also the new modern software development's future way to do things. So there's pain point, I need some aspirin for that. And then I need some growth with the new applications being built and hiring talent. Is that consistent with how you guys see it? >> So which one should I tackle? So you're talking about. >> John Furrier: VPN, do the VPNs first. >> The VPNs first, okay. >> John Furrier: That's my favorite one. >> So one of the most, kind of to give you the backstory, so one of the most interesting things when I came to Google, having come from other large enterprise vendors before this, was there's no VPNs. We don't even have it on our laptop. They have this thing called BeyondCorp, which is essentially now productized as the Identity-Aware Proxy. Which is, it actually takes, we trust no one or nothing with anything. It's not the walled garden style of approach of firewall-type VPN security. What we do is, based upon the resource you're going to request access for, and are you on a trusted machine? So on one that corporate has given you? And do you have two-factor authentication that corporate, not only your, so what you have and what you know. And so they take all of those things into awareness. Is this the laptop that's registered to you? Do you have your two-factor authentication? Have you authenticated to it and it's a trusted platform? Boom, then I can gain access to the resources. But they will also look for things like if all of a sudden you were sitting here and I'm in San Francisco, but something from some country in Asia pops up with my credentials on it, they're going to slam the door shut, going, "There's no way that you can be in two places at one time." And so that's what the Identity-Aware Proxy or BeyondCorp does, kind of in a nutshell. And so we use that everywhere, internally, externally. And so that's one of the ways that we do security differently is without VPNs. And that's actually in front of a lot of the GCP technologies today, that you can actually leverage that. So I would say we take-- >> Just rethinking security. >> It's rethinking security, again, based upon a long history. And not only that, but what we use internally, from our corporate perspective. And now to get to the second question, yeah. >> Istio, Kubeflow, is more of the way software gets run. One quote from one of the ex-Googlers who left Google then went out to another company, she goes, she was blown away, "This is the way you people ship software?" Like she was a fish out of water. She was like, "Oh my god, where's Borg?" "We do Waterfall." So there's a new approach that opens doors between these, and people expect. That's this notion of Kubeflow and orchestration. So that's kind of a modern, it requires training and commitment. That's the upside. Fix the aspirin, so Identity Proxy, cool. Future of software development architecture. 
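To make the BeyondCorp / Identity-Aware Proxy model Jonathan describes a little more concrete, here is a purely illustrative sketch of the kind of per-request decision it implies: device trust, strong identity, and context are all evaluated on every access, with no network perimeter standing in for security. This is explanatory pseudo-logic written for this article, not Google's implementation or API.

```python
# Illustrative zero-trust access check in the spirit of BeyondCorp:
# every request is judged on identity, device, and context, not network location.
# This is explanatory pseudo-logic only, not Google's actual implementation.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    resource: str
    device_registered: bool      # corporate-managed laptop registered to this user?
    two_factor_verified: bool    # "what you have and what you know" both checked?
    geo_location: str            # where the request appears to originate
    last_known_location: str     # where the user was recently seen

def allow(request: AccessRequest) -> bool:
    # 1. The device must be trusted and registered to the requesting user.
    if not request.device_registered:
        return False
    # 2. Strong identity: password plus a second factor.
    if not request.two_factor_verified:
        return False
    # 3. Context: a location that contradicts where the user was just seen is
    #    treated as an anomaly and denied (a crude stand-in for impossible-travel checks).
    if request.geo_location != request.last_known_location:
        return False
    # Access is granted per resource, per request; there is no VPN "inside" to reach.
    return True

# A registered laptop with 2FA in San Francisco gets in; the same credentials
# popping up from another country at the same time do not.
print(allow(AccessRequest("sam", "payroll-app", True, True, "SF", "SF")))         # True
print(allow(AccessRequest("sam", "payroll-app", True, True, "elsewhere", "SF")))  # False
```

The design point is the one Jonathan makes: the "walled garden" disappears, and every resource request carries its own proof of identity, device health, and plausibility.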
>> I think one of the strong things that you're going to see in software development is I think the days of people running it differently in development, and then sandbox and testing, QA, and then in prod, are over. They want to basically have that same experience, no matter where they are. They want to not have to do the crossing your fingers if it, remember, now it gets reddited or you got slash-dotted way back in the past and things would collapse. Those days of people being able to put up with those types of issues are over. And so I think that you're going to continue to see the development and the style of microservices, containers, orchestrated by something that can do auto scaling and healing, like Kubernetes. You're going to see them then start to use that base layer to add new capabilities on top, which is where we see Kubeflow, which is like, hey, how can I go put scalable machine learning on top of containers and on top of Kubernetes? And you even see, like I said, you see people saying, "Well, I don't really want to run "two different data planes and do the inception model. "If I can lay down a base layer "of Kubernetes and containers, then I can run "bare metal workloads against the bare metal. "If I need to launch a virtual machine, "I'll just launch that inside the container." And that's what KubeVirt's doing. So we're seeing a lot of this very interesting stuff pop. >> John Furrier: Yeah, creativity. >> Creativity. >> Great, talk about your role in the Office of the CTO. I know we got a couple of minutes left. I want to get out there, what is the role of the CTO? Bryan Stevens, formerly a Red Hat executive. >> Yeah, Bryan's our CTO. He used to run a big chunk of the engineering for Google Cloud, absolutely. >> And so what is the office's charter? You mentioned some CIOs, former CIOs are in there. Is it the think tank? Is it the command and control ivory tower? What's the role of the office? >> So I think a couple of years ago, Diane Greene and Bryan Stevens and other executives decided if we want to really understand what the enterprise needs from us, from a cloud perspective, we really need to have some people that have walked in those shoes, and they can't just be Diane or can't just be Bryan, who also had a big breadth of experience there. But two people can't do that for every customer for every product. And so they instituted the Office of the CTO. They tapped Will Grannis, again, had been in Boeing before, been in the military, and so tapped him to build this thing. And they went and they looked for people that had experience. Former VPs of Engineering, former CIOs. We have people from GE Oil and Gas, we have people from Boeing, we have people from Pixar. You name it, across each of the different verticals. Healthcare, we have those in the Office of the CTO. And about, probably, I think 25 to 30 of us now. I can't remember the exact numbers. And really, what our day to day life is like is working significantly with the product managers and the engineering teams to help facilitate more and more enterprise-focused engineering into the products. And then working with enterprise customers, kind of the big enterprise customers that we want to see successful, and helping drive their success as they consume Google Cloud. So being the conduit, directly into engineering. >> So in market with customers, big, known customers, getting requirements, helping facilitate product management function as well. >> Yeah, and from an engineering perspective. 
So we actually sit in the engineering organization. >> John Furrier: Making sure you're making the good bets. >> Jonathan: Yes, exactly. >> Great, well thanks for coming on The Cube. Thanks for sharing the insight. >> Jonathan: Thanks for having me again. >> Great to have you on, great insight, again. Google, always great technology, great enterprise mojo going on right now. Of course, The Cube will be at Google Next this July, so we'll be having live coverage from Google Next here in San Francisco at that time. Thanks for coming on, Jonathan. Really appreciate it, looking forward to more coverage. Stay with us for more of day three, as we start to wrap up our live coverage of Red Hat Summit 2018. We'll be back after this short break. (upbeat electronic music)
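One technical thread from this conversation worth pinning down is the KubeVirt point: launching a virtual machine as just another workload that Kubernetes schedules, so teams do not have to run "two different data planes." The sketch below is a hedged illustration of that idea; the API group/version, image, and field values follow upstream KubeVirt conventions as assumptions and are not details given in the interview.

```python
# Hedged sketch: a KubeVirt VirtualMachine declared as a Kubernetes custom
# resource, so a VM runs alongside, and is scheduled like, containers.
# Group/version and field values are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1alpha3",   # assumed API version
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm", "namespace": "demo"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # containerDisk packages the VM image inside a container image
                    "containerDisk": {"image": "kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

# The KubeVirt operator turns this declaration into a running VM on the cluster.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3", namespace="demo",
    plural="virtualmachines", body=vm,
)
```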
Sam Ramji, Google Cloud Platform - Red Hat Summit 2017
>> Announcer: Live, from Boston, Massachusetts, it's the Cube. Covering Red Hat Summit 2017. Brought to you by Red Hat. (futuristic tone) >> Welcome back to the Cube's coverage of the Red Hat Summit here in Boston, Massachusetts. I'm your host, Rebecca Knight, along with my co-host Stu Miniman. We are welcoming right now Sam Ramji. He is the Vice President of Product Management Google Cloud Platforms. Thanks so much for joining us. >> Thank you, Rebecca, really appreciate it. And Stu good to see you again. >> So in your keynote, you talked about how this is the age of the developer. You said this is the best time in history to be a developer. We have more veneration, more cred in the industry. People get us, people respect us. And yet you also talked about how it is also the most challenging time to be a developer. Can you unpack that a little bit for our viewers? >> Yeah, absolutely. So I think there's two parts that make it really difficult. One is just the velocity of all the different pieces, how fast they're moving, right? How do you stay on top of all the different latest technology, right? How do you unpack all of the new buzzwords? How do you say this is a cloud, that's not a cloud? So you're constantly racing to keep up, but you're also maintaining all of your old systems, which is the other part that makes it so complex. Many old systems weren't built for modernization. They were just kind of like hey, this is a really cool thing, and they were built without any sense of the history, or the future that they'd be used in. So imagine the modern enterprise developer who's got a ship software at high rates of speed, support new business initiatives, they've got to deliver innovation, and they have to bridge the very new with the very old. Because if your mobile app doesn't talk to your mainframe, you are not going to move money. It's that simple. There's layers of technology architecture. In fact, you could think of it as technology archeology, as I mentioned in the keynote, right, this we don't want to create a new genre of people called programmer archeologists, who have to go-- >> I'm picturing them just chipping away. >> Sam: I don't think it'll be as exciting as Indiana Jones. >> No. >> Digging through layers of the stack is not really what people want to be doing with their time. >> Sam: Temple of the lost kernel. >> I love it. >> So Sam, it's interesting to kind of see, I was at the Google Cloud event a couple months ago, and here you bring up the term open cloud, which part of me wants to poke a hole in that and be like, come on, everybody has their cloud. Come on, you want to lock everybody in, you've got the best technology, therefore why isn't it just being open because it's great to say open and maybe people will trust you. Help explain that. >> Puppies, freedom, apple pie, motherhood, right. >> Stu: Yeah, yeah. (laughs) >> So there's a couple sides to that. One, we think the cloud is just a spectacular opportunity. We think about 1.2 trillion dollars in current spend will end up in cloud. And the cloud market depending on how you measure it is in the mid 20 billions today. So there's just unbounded upside. So we don't have to be a aspirational monopolist in order to be a successful business. And in fact, if you wind the clock forward, you will see that every market ends up breaking down into a closed system and a closed company, and an open platform. And the open platforms tend to grow more slowly, sort of exponential versus logarithmic, is how we think about it. 
So it's a pragmatic business strategy. Think about Linux in '97. Think about Linux in 2002. Think about Linux in 2007. Think about Linux in 2012. Think about Linux today. Look at that rate. It's the only thing that you're going to use. So open is very pragmatic that way. It's pragmatic in another direction which is customer choice. Customers are going to come for things that give them more options. Because your job is to future proof your business, to create what in the financial community call optionality. So how do you get that? In 2011, about eight other people and I created a nonprofit called the Open Cloud Initiative. And the Initiative is long since dead, we didn't fund it right, we kind of got these ideas baked, and then moved on. >> Stu: There's another OCI now. >> That's right, it's the Open Container Initiative. But we had three really crisp concepts there. We said number one, an open cloud will be based on open source. There won't be stuff that you can't get, can't replicate, can't build yourself. Second, we said, it'll have open access. There'll be no barriers to entry or exit. There won't be any discrimination on which users can or can't come in, and there won't be any blockers to being able to take your stuff out. 'Cause we felt that without open access, the cloud would be unsafe at any speed, to borrow a quote from Ralph Nader. And then third, built on an open ecosystem. So if you are assuming that you have to be able to be open to tens of thousands of different ideas, tens of thousands of different software applications, which are maybe database infrastructure, things that as a cloud provider, you might want to be a first party provider of. Well those things have to compete, or trade off or enrich each other in a consistent way, in a way that's fair, which is kind of what we mean when we say open ecosystem, but being able to be pulled through is going to give you that rate of change that you need to be exponential rather than logarithmic. So it's based on some fairly durable concepts, but I welcome you to poke holes in it. >> So we did an event with MIT a little while back. We had Marshall Van Alstyne, professor at BU who I know you know. He's an advisor at Cloud Foundry, and he talked about those platforms and it was interesting, you know, with the phone system you had Apple who got lots of the money, smaller market share as opposed to Android, which of course comes out of Google, has all of the adoption but less revenue. So, not sure it's this, yeah. >> Interestingly, we've run those curves, and you kind of see that same logarithmic versus exponential shift happening in Android. So we've seen, I don't have the latest numbers on the top of my head, but that is generating billions of dollars of third party revenue now. So share does shift over time in favor of openness and faster innovation. >> So let's bring it back to Red Hat here, because if I talk to all the big public cloud guys, Microsoft has embraced open source. >> And they're not just guys, actually, there's lots of women. >> Rebecca: Yes, thank you. >> Stu: I apologize. >> Sorry, I'm in a little bit of a jam here, where I'm trying to tell people the collective noun for technologists is not guys. >> Stu: Okay. >> It could be people, it could be folks, internally we use squirrels from time to time, just to invite people in. >> So, when I talk to the cloud squirrels, Microsoft has embraced open source. Amazon has an interesting relationship. >> I was there when that happened. 
>> You and I both know the people that they've brought in who have very good credibility in the open source community that are helping out Amazon there. Is it Kubernetes that makes you open because I look at what Red Hat's doing, we say okay, if I want to be able to live across many clouds or in my own data centers, Kubernetes is a layer to do that. It comes back to some of the things like Cloud Foundry. Is that what makes it open because I have choice, or is there more to it that you want to cover from an open cloud standpoint, from a Google standpoint? >> Open and choice effectively is a spectrum of effort. If it's incredibly difficult, it's the same as not having a choice. If it's incredibly easy, then you're saying actually, you really are free to come and go. So Kubernetes is kind of the brightest star in the solar system of open cloud. There's a lot of other technologies, new things that are coming out, like istio and pluri. I don't want to lose you in word soup. Linker D, container D, a lot of other things, because this is a whole new field, a whole fabric that has to come to bear, that just like the internet, can layer on top of your existing data centers or your existing clouds, that you can have other applications or other capabilities layered on top of it. So this permission-less innovation idea is getting reborn in the cloud era, not on top of TCP/IP, we take that for granted, but on top of Kubernetes and all of the linked projects. So yeah, that's a big part of it. >> I want to continue on with that idea of permission-less innovation and talk about the culture of open source, particularly because of what you were saying in the keynote about how it's not about the code, it's about the community. And you were using words like empathy and trust, and things that we don't necessarily think of as synonymous with engineers. >> Sam: Isn't it? >> So, can you just talk a little bit about how you've seen the culture change, particularly since your days at Microsoft, and now being at Google, in terms of how people are working together? >> Absolutely, so the first thing is why did it change? It became an economic imperative. Let's look at software industry competition back in the 90s. In general, the biggest got the mostest. If you could assemble the largest number of very intelligent engineers, and put them all on the same project, you would overwhelm your competition. So we saw that play out again and again. Then this new form of collaboration came around, not just birthed by Linux, but also Apache and a number of other things, where it's like oh, we don't have to work for the same company in order to collaborate. And all of a sudden we started seeing those masses grow as big as the number of engineers who went a single company. Ten thousand people, ten thousand engineers, share the copyright to the Linux kernel. At no point have they worked at the same company. At no point could a company have afforded to get all of them together. So this economic imperative that marks what I think of as the first half of the thirty years of open source that we've been in. The second half has been more us all waking up, and realizing open source has got to be inclusive. A diverse world needs diverse solutions built by diverse people. How do we increase our empathy? How do we increase our understanding so that we can collaborate? 
Because if we think each other is a jerk, if we get turned off of building our great ideas into software because some community member has said something that's just fundamentally not cool, or deeply hurtful, we are human beings and we do take our toys away, and say I'm not going to be there. >> That's the crux of it too. >> It's absolutely a cutthroat industry, but I think one of the things I'm seeing, I've been in Silicon Valley for 22 years, less three years for a stint at Microsoft, I've actually started to see the community become more self-reflective and like, if we can have cutthroat competition in corporations, we don't have to make that personal. 'Cause every likelihood of open source projects is you're employed as a professional engineer at a company, and that employment agreement might change. Especially in containers, right? Great container developers you'll see they move from one company to another, whether it's a giant company like Google, or whether it's a big startup like Docker, or any range of companies. Or Red Hat. So, this sort of general sense that there is a community is starting to help us make better open source, and you can't be effective in a community if you don't have empathy and you don't start focusing on understanding code of conduct community norms. >> Sam, I'm curious how you look at this spectrum of with this complexity out there, how much will your average customer, and you can segment it anywhere you want, but they say, okay I'm going to engage with this, do open source, get involved, and what spectrum of customers are going to be like, well, let me just run it on Google because you've got a great platform, I'm not going to have Google engineers and you guys have lots of smart people that can do that in any of the platform. How do you see that spectrum of customer, is it by what their business IT needs are, is it the size of the customer, is there a decision tree that you guys have worked out yet to try to help end users with what do they own, what do they outsource? It's in clouds more than outsourcing these days. The deal of outsourcing was your mess for less, and this should be somewhat more transformational and hopefully more business value, right? >> Yeah, Urs Hölzle, who's our SVP of Technical Infrastructure, says, the cloud is not a co-location facility. It is different, it is not your server that you shipped up and you know, ran. It's an integrated set of services that should make it incredibly easy to do computing. And we have tons of very intelligent women and men operating our cloud. We think about things like how do you balance velocity and reliability? We have a discipline called site reliability engineering. We've published a book on it, a community is growing up around that, it's sort of the mainstream version of dev ops. So there are a bunch of components that any company at any size can adopt, as long as you need both velocity and reliability. This has always been the tyranny of the or. If I can move fast I can break things, but even Mark Zuckerberg recently said you know, move fast and break fewer things. Kind of a shift, 'cause you don't want to break a lot of people's experience. How do you do that, while making sure that you have high reliability? It really defies simple classification. We have seen companies from startups to mom and pop shops, all the way to giant enterprises adopting cloud, adopting Google cloud platform. One of the big draws is of course, data analytics. 
Google is a deeply data intensive business, and we've taken that to eleven basically with machine learning, which is why it was so important to take TensorFlow, offer that as open source, and be able to move AI forward. Any company, at any size, that wants to do high speed, high scale data analytics is coming to GCP. We've seen it basically break down into, what's the business value, how close is it to the decision maker, and how motivated is an engineer to learn something different and give cloud a try. >> Because the engineer has to get better at working with the data, understanding the data, and deriving the right insights from the data. >> You're exactly right. Engineers are people, and people need to learn, and they need to be motivated to change. >> Sam, last question I have for you is, you've been involved in many different projects. We look at it from the outside and say, okay, how much should be company driven, how much does a foundation get involved? We've seen certain foundations that have done very well, and others that have struggled. It's very interesting to watch Google. We'd give you good marks, as we've talked about on the Cube so far. Kubernetes seems to be going well. Great adoption. Google participates, but not too much, and Red Hat I think would agree with that. So congratulations on that piece. >> Sam: Thank you. >> What's your learnings that you've had as you've been involved in some of these various initiatives, a couple of foundations. We interviewed you when you were back at Cloud Foundry, and things like that, so, what have you learned that you might want to say, hey, here's some guidelines. >> Yeah, so I think the first guideline is that the core purpose of a foundation is bootstrapping trust. So where trust is missing, then you will need that in order to create better contribution and higher velocity in the project. If there's trust there, if there's a benevolent dictator and everyone says that person's fine or that company's fine, then you won't necessarily need a foundation. You've seen a lot of changes in open source startups, dot coms that are also a dot org, shifting to models where you say well, this thing is actually so big it needs to not be owned by any one company. And therefore, to get the next level of contribution, we need to be able to bring in giant companies, then we create trust at that next level. So foundations are really there for trust. It's really important to be strong enough to get something off the ground, and this is the challenge we had at Cloud Foundry: it was a VMware project and then a Pivotal project, and many people believed this is great open source, but it's not an open community, but the technology had to keep working really well. So how do we have a majority contributor and start opening up, in a thoughtful process, bringing people in, until you can say our target is to have the main contributor be less than 50% of the code commits. 'Cause then the majority is really coming from the community. Other projects that have been around for longer, maybe they started out with no majority. Those organizations, those projects tend to be self-organizing, and what they need is just a foundation to build a place that people can contribute money to, so the community can have events. So there's two very different types of organizations. 
One's almost like a charity, to say I really care about this popular open source project, and I want to be able to give something back, and others are more like a trade association, which is like, we need to enable very complex coordination between big companies that have a lot at stake, in which case you'll create a different class of foundation. >> Great, well Sam Ramji, thank you so much for being with us here on the Cube. I'm Rebecca Knight, and for your host Stu Miniman, please join us back in a bit. (futuristic tone)
SUMMARY :
Brought to you by Red Hat. He is the Vice President of Product Management And Stu good to see you again. also the most challenging time to be a developer. and they have to bridge the very new with the very old. what people want to be doing with their time. and here you bring up the term open cloud, Stu: Yeah, yeah. And the cloud market depending on how you measure it but being able to be pulled through is going to give you and it was interesting, you know, and you kind of see that same logarithmic So let's bring it back to Red Hat here, And they're not just guys, actually, Sorry, I'm in a little bit of a jam here, just to invite people in. Microsoft has embraced open source. or is there more to it that you want to cover So Kubernetes is kind of the brightest star and talk about the culture of open source, share the copyright to the Linux kernel. and you can't be effective in a community and you guys have lots of smart people that can do that how close is it to the decision maker, Because the engineer has to get better at working and they need to be motivated to change. and others that have struggled. what have you learned that you might want to say, shifting to models where you say well, I'm Rebecca Knight, and for your host Stu Miniman,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rebecca Knight | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave Schneider | PERSON | 0.99+ |
Sam Ramji | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
David Schneider | PERSON | 0.99+ |
Frank Sleuben | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Mike Scarpelli | PERSON | 0.99+ |
Marshall Van Alstyne | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
CJ Desai | PERSON | 0.99+ |
Sam | PERSON | 0.99+ |
 | ORGANIZATION | 0.99+ |
2007 | DATE | 0.99+ |
2012 | DATE | 0.99+ |
ServiceNow | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
2002 | DATE | 0.99+ |
2011 | DATE | 0.99+ |
John Donahoe | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Mike Scarpelli | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
22 years | QUANTITY | 0.99+ |
Urs Hölzle | PERSON | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
Mark Zuckerberg | PERSON | 0.99+ |
two parts | QUANTITY | 0.99+ |
second half | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
less than 50% | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
'97 | DATE | 0.99+ |
first half | QUANTITY | 0.99+ |
Android | TITLE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Red Hat Summit | EVENT | 0.99+ |
Linux | TITLE | 0.99+ |
 | ORGANIZATION | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
Cloud Foundry | ORGANIZATION | 0.98+ |
Ten thousand people | QUANTITY | 0.98+ |
a year ago | DATE | 0.98+ |
eleven | QUANTITY | 0.98+ |
ten thousand engineers | QUANTITY | 0.98+ |
90s | DATE | 0.98+ |
15 | QUANTITY | 0.98+ |
OCI | ORGANIZATION | 0.98+ |
Matthew Jones v2 ITA Red Hat Ansiblefest
>> Welcome back to AnsibleFest. I'm Matthew Jones, I'm the architect of the Ansible Automation Platform. And today I want to talk to you a little bit about what we've got coming in 2021, and some of the things that we're working on for the future. Today, I really want to cover some of the work that we're doing on scale and flexibility, and how we're going to focus on that for the next year. I also want to talk about how we're going to help you grow and manage and use your content on the Automation platform. And then finally, I want to look a little bit beyond the automation platform itself. So, last year we introduced Ansible Content Collections. Earlier this year, we introduced the Ansible Automation Hub on Red Hat Cloud. And yesterday you heard Richard mentioned on private automation hub that's coming later this year. And automation hub, Ansible tower, this is really what the automation platform means for us. It's bringing together that content, with the ability to execute and run and manage that content, that's really important. And so what we really want to do, is we want to help you bring Red Hat and partner content that you trust together with community content from galaxy that you may need, and bring this together with content that you develop for yourself, your roles, your collections, the automation that you actually do. And we want to give you control over that content and help you curate that content and build a community around your automation. We want to focus on a seamless experience with this automation from Ansible Tower and from Automation Hub for the automation platform itself, and make it accessible to the automation and infrastructure that you're managing. Now that we've talked about content a little bit, I want to talk about how you run Ansible. Today an Ansible Tower, use virtual environments to manage the actual execution of Ansible, and virtual environments are okay, but they have some drawbacks. Primarily they're not very portable. It's difficult to manage dependencies and the version of Ansible. Sometimes those dependencies conflict with the other systems that are on the infrastructure itself, even Ansible Tower. So what we've done is created a new system that we call execution environments. Execution environments are container-based. And what we're doing is bringing the flexibility and portability of containers to these Ansible execution environments. And the goal really is portability. And we want to be able to leverage the tools that the community develops as well as the tools that Red Hat provides to be able to produce these container images and use them effectively. At Ansible we've developed a tool called Ansible Builder. Ansible builder will let you bring content collections together with the version of Ansible and Red Hats base container image so that you can put together your own images for execution environments. And you'll be able to host these on your own private registry infrastructure. If you don't already have a container registry solution, Automation Hub itself provides that registry. The idea here is that, unlike today where your virtual environments and your production execution environments diverge a little bit from what your developers, your content developers and your automation developers experience, we want to give you the same experience between your production environments and your development environments, all the way through your test and validation workloads. Red Hat's also going to provide some prebuilt execution environments. 
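As a rough illustration of the execution environment workflow described above (not code from the talk), the sketch below generates a minimal ansible-builder style definition and builds an image from it. The file keys follow the ansible-builder 1.x layout as best understood here, and every registry, image name, and collection listed is a placeholder.

```python
# Illustrative only: keys follow the ansible-builder 1.x definition format as
# understood here; registry names and the collection list are placeholders.
import subprocess
import yaml  # pip install pyyaml

ee_definition = {
    "version": 1,
    "build_arg_defaults": {
        # Assumed base image reference; substitute the base EE you actually
        # pull from registry.redhat.io or your private Automation Hub registry.
        "EE_BASE_IMAGE": "registry.example.com/ansible/base-ee:latest",
    },
    # Collections to bake into the image; python/system dependency files
    # (requirements.txt, bindep.txt) can be added under this key as well.
    "dependencies": {"galaxy": "requirements.yml"},
}

with open("execution-environment.yml", "w") as fh:
    yaml.safe_dump(ee_definition, fh, sort_keys=False)

with open("requirements.yml", "w") as fh:
    yaml.safe_dump({"collections": [{"name": "community.general"}]}, fh, sort_keys=False)

# Build and tag the image so it can be pushed to a private registry
# (the talk notes Automation Hub itself can serve as that registry).
subprocess.run(
    ["ansible-builder", "build",
     "--tag", "registry.example.com/acme/custom-ee:1.0",
     "--file", "execution-environment.yml"],
    check=True,
)
```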
We want to have some continuity between the experience that you have today on the Ansible tower and what you'll have next year, once we bring execution environments into production. We want you to be able to trust the Ansible, the version of Ansible that's running on your execution environments, and that you have the content that you expect. At the same time, we're going to provide a version of the execution environment, that's just the base execution environment. All it has is Ansible. This will let you take those using Ansible builder, take the collections that you've developed, that you need in your automation and combine them without having to bring in things that you don't need, or that you don't want in your automation and build them together into a very opinionated, container image. If you're interested in execution environments and you want to know how these are built and how you'll use them, we actually have them available for you to use today. Shane McDonald and Adam Miller are giving a talk later with a walk through how to build execution environments and how you'll use them. You can use this to make sure that you're ready for execution environments coming to the automation platform next year. Now that we've talked about how we build execution environments, I want to talk about how execution runs in your infrastructure. So today when you deploy Ansible tower, you're deploying a monolithic web application. Your execution capability is tied up into how you actually deploy Ansible tower. This makes scaling Ansible tower and your automation workloads difficult, and everything has to be co-located together in the same data center. Isolated nodes solve this a little bit, but they bring about their own sort of opinionated challenges in setting up SSH and having direct connectivity between the control nodes and the execution nodes themselves. We want to make this more flexible and easier to use. And so one of the things that we've created over the last year and that we've been working on over the last year is something that we call receptor. Receptor is an overlay network that's an Automation Mesh. And the goal here is to separate the execution capability of your Ansible content from the control plane capability, where you manage the web infrastructure, the users, the role-based access control. We want to draw a line between those. We want you to be able to deploy execution environments anywhere. Chris Wright earlier today mentioned Edge. Well Edge Cloud, we want you to be able to manage data centers anywhere in the world, and you can do this with the Automation Mesh,. The Automation Mesh connects your control plane with those execution nodes, anywhere in the world. Another thing that the Automation Mesh brings is, we're going to be able to draw the lines between the control plane themselves and each Automation Mesh node. This means that if you have an outage or a problem on your network and on your infrastructure, if you can draw a line between the control plane itself and the node that needs to execute, the sensible work, the Automation Mesh can route around problems. The Automation Mesh in the way it's deployed, also allows this to fit closer with ingress and egress policies that you have between your infrastructure. It doesn't matter which direction the Automation Mesh itself connects in. Once the connection is established, automation will be able to flow from the control systems to the execution nodes and get responses back. 
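A purely conceptual sketch of that routing behavior follows. This is not Receptor code, just a toy graph walk: if a control-plane node and an execution node are connected through more than one intermediate hop, losing one hop still leaves a usable path.

```python
# Toy model of an overlay mesh: nodes and the peers they can reach.
# Names are made up; this only illustrates "route around problems."
from collections import deque

mesh = {
    "control": {"hop-a", "hop-b"},
    "hop-a": {"control", "edge-exec"},
    "hop-b": {"control", "edge-exec"},
    "edge-exec": {"hop-a", "hop-b"},
}

def route(src, dst, down=frozenset()):
    """Breadth-first search for a path from src to dst that avoids down nodes."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in mesh[path[-1]] - set(down) - seen:
            seen.add(peer)
            queue.append(path + [peer])
    return None

print(route("control", "edge-exec"))                  # one healthy path, e.g. via hop-a
print(route("control", "edge-exec", down={"hop-a"}))  # hop-a is down, traffic shifts to hop-b
```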
Now, this all works together with automation of the content collections that we mentioned earlier, the execution environments that we were just talking about and your container registries. All of these work together with these Automation Mesh nodes. They're very lightweight and very simple systems. This means you can scale up and scale down execution capacity as your needs increase or decrease. You don't need to keep around a lot of extra capacity just in case you automate more, just because you're not sure when your execution capacity needs will increase and decrease. This fits into an automated system for scaling your infrastructure and scaling your execution capacity. Now that we've talked about the content that you use to manage, and how that execution is performed and where that execution is performed. I want to look a little bit beyond the actual automation platform itself. And specifically, I want to talk about how the automation platform works with OpenShift and Kubernetes. Now we have an existing installer for Ansible tower that we'll deploy to OpenShift Kubernetes, and we support OpenShift and Kubernetes as a first-class system for deploying Ansible tower. But I mentioned automation hub and Ansible tower as this is what the automation platform is for us. So we want to take that installer and replace it with an operator-based full life cycle approach to deploying and managing the automation platform on OpenShift. This operator will be available in OperatorHub. So there's no need to manage complex YAML files that represent the deployment. Since it's available in OperatorHub, you have one place that you can go to manage deployments, upgrades, backup and restore. And all of this work seamlessly with the container groups feature that we introduced last year. But I want to take this a little bit beyond just deploying and upgrading the automation platform from the operator. We want to look at what other capabilities that we can get out of those operators. So beyond just deploying and upgrading, we're also creating a resource operators and CRDs that will allow other systems running in OpenShift or Kubernetes to directly manage resources within the automation platform. Anything from triggering jobs and getting the status of jobs back, we want to enable that capability if you're using OpenShift and Kubernetes. The first place we're starting with this, is Red Hats Advanced Cluster Management system. Advanced Cluster Management brings together the ability to manage OpenShift and Kubernetes clusters to install them and manage them, as well as applications and products in managing the life cycle of those across your clusters. So what we really want to do, is give you the ability to connect traditional and container-based workloads together. You're already using the Ansible automation platform to manage workloads with Ansible. When using Advanced Cluster Management and OpenShift and Kubernetes, now you have a full system. You can manage across clouds across clusters, anywhere in the world. And this sort of brings me back to one of the areas of focuses for us. Our goal is complete end-to-end automation. We want to connect your people, your domains and the processes. We want to help you deliver for you and your customers by expanding the capabilities of the Ansible automation platform. And we want to make this a seamless experience to both curate content, control the content for your organization, and run the content and run Ansible itself using the full suite of the Ansible automation platform. 
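To make the "CRDs that trigger jobs" idea concrete, here is a hedged sketch of what launching automation from inside a cluster could look like. The API group, kind, and field names are modeled on the Ansible resource operator and are assumptions; check the CRDs your operator actually installs before relying on them.

```python
# Assumed schema: tower.ansible.com/v1alpha1 AnsibleJob with a Secret holding
# controller credentials and the name of a job template to launch. Verify
# these names against the CRDs installed in your cluster.
import subprocess
import yaml  # pip install pyyaml

ansible_job = {
    "apiVersion": "tower.ansible.com/v1alpha1",
    "kind": "AnsibleJob",
    "metadata": {"name": "demo-launch"},
    "spec": {
        "tower_auth_secret": "controller-access",   # Secret with host + token (placeholder name)
        "job_template_name": "Demo Job Template",   # job template to run (placeholder name)
    },
}

with open("ansiblejob.yaml", "w") as fh:
    yaml.safe_dump(ansible_job, fh, sort_keys=False)

# The operator picks up the resource, launches the job, and writes status
# back onto it, so other in-cluster systems can watch for completion.
subprocess.run(["kubectl", "apply", "-f", "ansiblejob.yaml"], check=True)
```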
So the Advanced Cluster management team is giving a talk later where you'll actually be able to see Advanced cluster Management and the Ansible automation platform working together. Don't forget to check out Adam and Shane's talk on execution environments, how those are built and how you can use those. Thank you for coming to AnsibleFest, and we'll see you next time.
SUMMARY :
and the node that needs to
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matthew Jones | PERSON | 0.99+ |
Richard | PERSON | 0.99+ |
Adam Miller | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
Chris Wright | PERSON | 0.99+ |
last year | DATE | 0.99+ |
OpenShift | TITLE | 0.99+ |
2021 | DATE | 0.99+ |
Shane McDonald | PERSON | 0.99+ |
next year | DATE | 0.99+ |
Today | DATE | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
Shane | PERSON | 0.99+ |
AnsibleFest | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Kubernetes | TITLE | 0.98+ |
later this year | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
Earlier this year | DATE | 0.95+ |
Ansible Automation Hub | ORGANIZATION | 0.95+ |
Ansiblefest | EVENT | 0.91+ |
Red Hats | ORGANIZATION | 0.9+ |
Ansible Builder | TITLE | 0.9+ |
Automation Hub | ORGANIZATION | 0.89+ |
one | QUANTITY | 0.87+ |
OpenShift Kubernetes | TITLE | 0.86+ |
Ansible Tower | TITLE | 0.85+ |
one place | QUANTITY | 0.84+ |
Hat | ORGANIZATION | 0.84+ |
Ansible Automation | ORGANIZATION | 0.81+ |
Red Hat | TITLE | 0.75+ |
Ansible Tower | ORGANIZATION | 0.74+ |
earlier today | DATE | 0.72+ |
Automation Hub | TITLE | 0.71+ |
Ansible | TITLE | 0.69+ |
AnsibleFest | EVENT | 0.65+ |
Red Hat Cloud | ORGANIZATION | 0.62+ |
Red | EVENT | 0.6+ |
OperatorHub | ORGANIZATION | 0.59+ |
class | QUANTITY | 0.56+ |
Collections | ORGANIZATION | 0.55+ |
Edge | TITLE | 0.54+ |
Tower | COMMERCIAL_ITEM | 0.52+ |
ITA | ORGANIZATION | 0.52+ |
Joe Fitzgerald, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual
>> Announcer: From around the globe, it's the Cube, with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hi, and welcome back. I'm Stu Miniman, and this is the Cube's coverage of KubeCon + CloudNativeCon 2020, the Europe virtual edition. Of course, Kubernetes won the container wars, and as we went from managing a few containers to managing clusters, to many customers managing multiple clusters, things get more complicated. So to help understand those challenges and how solutions are being put out to solve them, happy to welcome back to the program one of our Cube alumni. Joe Fitzgerald is the vice president and general manager of the management business unit at Red Hat. Joe, good to see you again. Thanks so much for joining us. >> Thanks, Stu. Thanks for having me back. >> All right, so at Red Hat Summit, one of the interesting conversations you and I had was talking about Advanced Cluster Management, or ACM of course. That was some people and some technology that came over to Red Hat from IBM post acquisition. It was tech preview then, so give us the update. What's the news? And, you know, just level set for the audience, you know, what cluster management is. >> Sure. So Advanced Cluster Management, or ACM, basically is a way to manage multiple clusters across even different environments, right? As people have adopted Kubernetes, and you know, we have several thousand customers running OpenShift, they're starting to push it in some very, very big ways. And so what they run into is an issue of scale. They need better ways to manage and maintain those environments, and ACM is a huge way to help manage those environments. It was early availability back at Summit at the end of April, and in just a few months now it's generally available. We're super excited about that. >> Well, congratulations on moving that from technical preview to general availability so fast. What can you tell us? How many customers have used this? What have you learned in talking to them about this solution? >> So, first of all, we were really pleasantly surprised by the amount of people that were interested in the tech preview. A tech preview is not a product that's ready to use in production yet, so a lot of times accounts are not interested in it; they want to wait for the production version. We had over 100 customers in our tech preview, across not only geographies all over the world, Asia, America, Europe, but across all different verticals. There's a tremendous amount of interest in it. I think that just shows, you know, how applicable it is to these environments people are trying to manage. So tremendous uptake. We got great feedback from that, and in just a few months we incorporated that feedback into the now generally available product. So great uptake during the tech preview. >> Excellent. Bring us inside a little bit, you know, when would I use this solution? If I just have a single cluster, does it make sense for me? Is it only for multi-cluster? You know, what's the applicability of the offering? >> Yes, sir, even for single clusters. The things that ACM really does fall into three major areas, right? One is cluster lifecycle management. Of course, that would mean that you have more than one cluster, and as people grow, they do, for a number of reasons. 
Second, policy-based management: the ability to enforce configuration policies and enforce compliance across even your single cluster, to make sure it stays correct in terms of settings and configuration and things like that. And the other is application lifecycle management: the ability to deploy applications in a more advanced way. Even on a single cluster it helps, and it gets even better for multi-cluster, because you can deploy your apps to just the clusters that are tagged a certain way. So lots of capabilities for applications, even on a single cluster. We find even people that are running a single cluster need it, and as they deploy more and more clusters they definitely need it. >> That's great. And you mentioned you had feedback from customers. What are the things that, I guess, would be the biggest pain points that this solves for them, that they were struggling with in the past? >> Well, first of all, being able to sort of federate management of multiple clusters, right, as opposed to having to manage each cluster individually. But also the ability to do policy-based configuration management, to just express the way you want things to stay and have them stay that way, to adopt more of a GitOps methodology in terms of how they're managing their OpenShift environments. There's lots more feedback, but those were some of the ones that seemed to be fairly common across customers. >> Yeah, and you know, Joe, you've also got automation in the management suite. How do I think about this? How does this fit into the broader management automation that customers were using? >> Well, I think as people deploy these environments, there was a long conversation about the platform, right? But there's a lot of things that have to go with the platform, and Red Hat is actually very good about that, in terms of providing all the things that you would find necessary to make the platform successful in your environment. Right? Besides the platform, we need storage, development environments, management, automation, the ability to train on it; we have our Open Innovation Labs. There's lots of things beyond the platform that people acquire in order to be successful. In the case of management automation, ACM was a huge advancement in terms of how to manage these environments, but we're not done. We're going to continue to add more automation, integration with things like Ansible, more integration with observability and analytics. So we're far from done, but we want to make sure that OpenShift stays the best managed environment that's out there. I also do want to make a call out to the fact that, you know, this team has been working on this technology for the past couple of years. And so, you know, it's only been at Red Hat for five months. This technology is actually very mature, but it is quite an accomplishment for any company to take a new team and a new technology and, in five months, do what Red Hat does to it in terms of making it consumable for the enterprise. So kudos to the team. >> Well, and I know a piece of that is, you know, moving that along to be open source. So, you know, where are we with the solution now that it's GA? How does that fit into being open source? >> So parts of it are open source already, and we're in the process of open sourcing the rest of it. As you've seen over time, Red Hat has a perfect record here of acquiring technologies that were either completely closed source or open core, in some cases where part of it was open and part was closed. That was the case with Ansible a few years ago. 
But basically our strategy is that everything has to be open source. That takes time, going through all of the processes necessary to open source the parts of ACM, and we think we'll find lots of interest in the community around the different projects inside of it. >> Yeah. How about, you know, one of the bigger concerns talking to customers in general about Kubernetes, even more in 2020, is security. How does ACM help customers make sure that their environment is secure? >> Yeah, so you know, configuration policies and enforcement. You can actually say with ACM that you want things to be a certain way, and if somebody changes them, it will automatically either warn you about it or, with enforcement, set it back. So it's got some very strong security chops in terms of keeping the configurations just the way you want. That gets harder as you get more and more clusters. Imagine trying to keep everything at the same levels, settings, software, all the parts and pieces. So the fact that you have ACM, which can do this across any and all of your clusters, really takes the burden off people trying to maintain secure environments. >> Okay, and so it's generally available now. Anything you can share about how this solution is priced, how it fits into the broader OpenShift offerings? >> Yes. So it's an add-on for OpenShift, and it's priced very similarly to OpenShift in terms of the, you know, core pricing. One thing I do want to mention about ACM, which maybe doesn't come out just by a description of the product, is the fact that ACM was built from scratch for Kubernetes environments and optimized for OpenShift. We're seeing a lot of competition out there that's taking products that were built for other environments and trying to sort of bend and coerce them into managing Kubernetes environments. We don't think people are going to be successful at that; they haven't been successful to date. So one of the things that we find is sort of a competitive differentiator for ACM in the market is the fact that it was built from scratch, designed for Kubernetes environments. So it is really well designed for the environment it's trying to manage, and we think that's going to keep that competitive edge. >> Well, always, Joe, when you have a new architecture, you can take advantage of things. Any examples that you have of what a new architecture like this can do that an older architecture might struggle with or not be able to do? Even though when you look at the product sheet the words sound similar, when you get underneath the covers it's just not a good architectural fit. >> Yeah, so it's very similar to the shift from physical to virtual. You can't have a paradigm shift in the infrastructure and not have a corresponding paradigm shift in the management tools. So the way you monitor these environments, the way you secure them, the way they scale and expand, the way we do resource management, security, all those things are vastly different in this environment compared to, let's say, a virtual or physical environment. We've seen this many times in the past: a paradigm shift in the infrastructure or the application environment will drive a commensurate paradigm shift in management. That's what you're seeing here. So that's why we thought it was super important to have management that was built for these environments by design, so it's not trying to do sort of unnatural things to manage the environment. >> Yeah, I wondered, I'd love to hear just a little bit of your philosophy as to what's needed in this space. 
You know, I look back to previous generations, look at virtualization. You know, Microsoft did very well at managing their environment, VMware did the same for their environments. But, you know, we've had generations of times where solutions have tried to be management of everything, and that can be challenging. So, you know, what's Red Hat and ACM's position, and what do we need in the Kubernetes space, you know, today and for the next couple of years?
>>So I think the you know, the market and industry has decided communities is the platform of future right? And certainly we were one of the earliest to invest in container management platforms with open shift were one of the first to invest in communities. We have thousands of customers running open shift back Russell Industries on geography is so we bet on that a long time ago. Now we're betting on the management automation of those environments and bringing them to scale. And the other thing I think that redhead is unique on is that we think that people gonna want to run their kubernetes environments across all different kinds of environments, whether it's on premise visible in virtual multiple public clouds, where we have offerings as well as at the edge. Right. So this is gonna be an environment that's going to be very, very ubiquitous. Pervasive, deported scale. And so the management of a nation has become a necessity. And so but had investing in the right areas to make sure that enterprises continues communities particularly open shift in all the environments that they want at the scale. >>All right. Excellent. Well, Joe, I know we'll be catching up with you and your team for answerable fest. Ah, coming in the fall. Thanks so much for the update. Congratulations to you in the team on the rapid progression of ACM now being G A. >>Thanks to appreciate it, we'll see you soon. >>All right, Stay tuned for more coverage from que con club native con 2020 in Europe, the virtual addition on still minimum and thanks, as always, for watching the Cube.
SUMMARY :
Joe, good to see you again. Thanks for having me back. All right, so at Red Hat Summit, one of the interesting conversations you and I had, As people have adopted Kubernetes and you know, we have several thousand customers running OpenShift What have you learned in talking to I think that just shows you know, how applicable it Second, policy-based management: the ability to And you mentioned you had feedback from customers. express the way you want things to stay and have them stay that way, to adopt more of a GitOps Yeah, and you know, Joe, you've also got automation in the management suite. in terms of providing all the things that you would find necessary to make the platform successful And I know a piece of that is, you know, moving that along to be open source. and we're in the process of open sourcing the rest of it. As you've seen One of the bigger concerns talking to customers in general about Kubernetes configurations just the way you want. Now, anything you can share about how this solution is of the, you know, core pricing. Be able to do even though when you look So the way you monitor these environments, the way you secure them, the way they scale and expand, ACM's position and what do we need in the Kubernetes space, you know, So Kubernetes itself is the automation platform you talked about, you know, early on in the segment. Look, you know, what should we be expecting to see from ACM down the So they're going to continue to rev, you know, words you want customers to understand about where we are today and where we need to go down the road. So I think the, you know, the market and industry has decided Kubernetes is the platform of the future, right? Congratulations to you and the team on the rapid progression All right, stay tuned for more coverage from KubeCon + CloudNativeCon 2020 in Europe, the virtual edition
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Gerald | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Joe | PERSON | 0.99+ |
five months | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
America | LOCATION | 0.99+ |
Russell Industries | ORGANIZATION | 0.99+ |
Red Hat Cloud | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
mid October | DATE | 0.99+ |
each cluster | QUANTITY | 0.99+ |
Joe Fitzgerald | PERSON | 0.99+ |
single cluster | QUANTITY | 0.99+ |
over 100 customers | QUANTITY | 0.99+ |
Native Computing Foundation | ORGANIZATION | 0.99+ |
Asia | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Ansell | ORGANIZATION | 0.98+ |
KubeCon | EVENT | 0.98+ |
five form | QUANTITY | 0.98+ |
ACM | ORGANIZATION | 0.97+ |
single clusters | QUANTITY | 0.97+ |
more than one cluster | QUANTITY | 0.97+ |
end of April | DATE | 0.97+ |
today | DATE | 0.97+ |
Coop Khan | ORGANIZATION | 0.96+ |
1000 customers | QUANTITY | 0.95+ |
ansell | ORGANIZATION | 0.94+ |
second | QUANTITY | 0.94+ |
four | QUANTITY | 0.94+ |
Cooper navies | ORGANIZATION | 0.92+ |
first | QUANTITY | 0.92+ |
Cube | ORGANIZATION | 0.91+ |
Ecosystem Partners | ORGANIZATION | 0.9+ |
One thing | QUANTITY | 0.89+ |
Red hat | ORGANIZATION | 0.88+ |
few years ago | DATE | 0.87+ |
two | QUANTITY | 0.87+ |
red hat | ORGANIZATION | 0.87+ |
One | QUANTITY | 0.86+ |
Native Con Europe 2020 | EVENT | 0.85+ |
stew Minuteman | PERSON | 0.85+ |
CloudNativeCon Europe 2020 | EVENT | 0.82+ |
next couple of years | DATE | 0.79+ |
Red Hat Summit | EVENT | 0.79+ |
thousands of customers | QUANTITY | 0.78+ |
three major areas | QUANTITY | 0.75+ |
past couple of years | DATE | 0.74+ |
Summit | EVENT | 0.74+ |
redhead | ORGANIZATION | 0.7+ |
con 2020 | EVENT | 0.68+ |
que con cognitive con 2020 | EVENT | 0.66+ |
Ross | PERSON | 0.65+ |
Eso | ORGANIZATION | 0.61+ |
Mawr | ORGANIZATION | 0.56+ |
Red Hat | TITLE | 0.55+ |
ACM | TITLE | 0.53+ |
Cloud | ORGANIZATION | 0.43+ |