Ashesh Badani, Red Hat | Red Hat Summit 2020
>> Announcer: From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat.

>> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit, happening digitally, interviewing practitioners, executives, and thought leaders from around the world. Happy to welcome back to our program one of our CUBE alumni, Ashesh Badani, who's the Senior Vice President of Cloud Platforms with Red Hat. Ashesh, thank you so much for joining us, and great to see you.

>> Yeah, likewise, thanks for having me on, Stu. Good to see you again.

>> All right, so, Ashesh, since the last time we had you on theCUBE a few things have changed. One of them is that IBM has now finished the acquisition of Red Hat, and I've heard from you for a really long time, you know, OpenShift, it's anywhere and it's everywhere, but with the acquisition of Red Hat, it just means this only runs on IBM mainframes and IBM Cloud, and all things blue, correct?

>> Well, that's true for sure, right? So, Stu, you and I have been talking many, many times. As you know, we've been committed to hybrid multi-cloud from the very get-go, right? So, OpenShift is supported to run on bare metal, on virtualization platforms, whether they come from us, or VMware, or Microsoft Hyper-V, on private clouds like OpenStack, as well as AWS, Google Cloud, as well as on Azure. Now, with the completion of the IBM acquisition of Red Hat, we obviously always partnered with IBM before, but given, if you will, a little bit of a closer relationship here, you know, IBM's been very keen to make sure that they promote OpenShift in all their platforms. So as you can probably see, OpenShift on IBM Cloud, as well as OpenShift on Z on mainframe, so regardless of how you like OpenShift, wherever you like OpenShift, you will get it.

>> Yeah, so great clarification. It's not only on IBM, but of course, all of the IBM environments are supported, as you said, as well as AWS, Google, Azure, and the like. Yeah, I remember years ago, before IBM created their single, condensed conference of THINK, I attended the conference that would do Z, and Power, and Storage, and people would be like, you know, "What are they doing with that mainframe?" I'm like, "Well, you do know that it can run Linux." "Wait, it can run Linux?" I'm like, "Oh my god, Z's been able to run Linux for a really long time." So you want your latest Container, Docker, OpenShift stuff on there? Yeah, that can sit on a mainframe. I've talked to some very large, global companies that that is absolutely a part of their overall story. So, OpenShift--

>> Interesting you say that, because we already have customers who've been procuring OpenShift on mainframe, so if you've made the investment in mainframe, it's running machine learning applications for you, and you're looking to modernize some of the applications and services that run on top, OpenShift on mainframe now is an available option, which customers are already taking advantage of. So exactly right to your point, we're seeing that in the market today.

>> Yeah, and Ashesh, maybe it's good to kind of, you know, you've got a great viewpoint as to customers deploying across all sorts of environments, so you mentioned VMware environments, the public cloud environment. It was our premise a few years ago on theCUBE that Kubernetes gets baked into all the platforms, and absolutely, it's going to just be a layer underneath.
I actually think we won't be talking a lot about Kubernetes if you fast-forward a couple of years, just because it's in there. I'm using it in all of my environments. So what are you seeing from your customers? Where are we in that general adoption, and any specifics you can give us about, you know, kind of the breadth and the depth of what you're seeing from your customer base?

>> Yeah, so, you're exactly right. We're seeing that adoption continue on the path it's been on. So we've got now over 1700 customers for OpenShift, running in all of these environments that you mentioned, so public, private, a combination of the two, running on traditional virtualization environments, as well as ensuring that they run in public cloud at scale. In some cases managed by customers, in other cases managed by us on their behalf in a public cloud. So, we're seeing all permutations, if you will, of that in play today. We're also seeing a huge variety of workloads, and to me, that's actually really interesting and fascinating. So, earliest days, as you'd expect, people trying to play with microservices, so trying to build new services and run them, so cloud native, what have you. Then we moved to ensuring that we're supporting stateful applications, right, and now you're starting to see legacy applications move on, ensuring that we can run them, support them at scale, within the platform, 'cause customers are looking to modernize applications. We'll maybe talk in a few minutes also about lift-and-shift that we've got in play as well. But now also we're starting to see new workloads come on. So just most recently we announced some of the work that we're doing with a series of partners, from NVIDIA to emerging AI/ML, artificial intelligence and machine learning, frameworks and ISVs, looking to bring those to market. We've been ensuring that those are supported and can run with OpenShift. Right, our partnership with NVIDIA, ensuring OpenShift is supported on GPU-based environments for specific workloads, whether it be performance-sensitive or specific workloads that take advantage of underlying hardware. So now we're starting to see a wide variety, if you will, of application types, right, so numbers of customers increasing, types of workloads, you know, coming on increasing, and then the diversity of underlying deployment environments where they're running all these services.

>> Ashesh, such an important piece and I'm so glad you talked about it there. 'Cause you know my background's infrastructure and we tend to look at things as to "Oh well, I moved from a VM to a container, to cloud or all these other things," but the only reason infrastructure exists is to run my application, it's my data and my application that are the most important things out there. So Ashesh, let me get in some of the news that you got here, your team works on a lot of things, I believe one of them talks about some of those, those new ways that customers are building applications and how OpenShift fits into those environments.

>> Yeah, absolutely. So look, we've been on this journey as you know for several years now. You know, recently we announced the GA of OpenShift Service Mesh in support of Istio, and there's increasing interest as customers turning to microservices will take advantage of those capabilities that are coming in. At this event we're now also announcing the GA of OpenShift Serverless.
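For readers who want to see what that looks like in practice, here is a minimal sketch of the kind of Knative Service that OpenShift Serverless builds on. The namespace, service name, and container image are hypothetical, and the scale-to-zero annotation shown is the stock Knative autoscaling knob rather than anything OpenShift-specific.

```python
# Sketch only: a Knative Service of the kind OpenShift Serverless builds on.
# The name, namespace, and image are made up for illustration; the generated
# YAML could be applied with `oc apply -f -` on a cluster that has the
# serverless operator installed.
import yaml  # pip install pyyaml

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello-fn", "namespace": "demo"},
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    # Knative autoscaling: allow this revision to scale to zero pods
                    "autoscaling.knative.dev/min-scale": "0",
                    "autoscaling.knative.dev/max-scale": "5",
                }
            },
            "spec": {
                "containers": [
                    {
                        "image": "quay.io/example/hello-fn:latest",
                        "env": [{"name": "TARGET", "value": "OpenShift Serverless"}],
                    }
                ]
            },
        }
    },
}

print(yaml.safe_dump(knative_service, sort_keys=False))
```

With min-scale at zero, the service drops to zero pods when no requests arrive and is spun back up on demand; the serving and eventing pieces mentioned below are the two Knative sub-projects that handle request routing and event delivery.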
We're starting to see obviously a lot of interest, right, we've seen the likes of AWS spawn that in the first instance, but more and more customers are interested in making sure that they can get a portable way to run serverless in any Kubernetes environment, to take advantage of open source projects as building blocks, if you will, so primitives within Kubernetes to allow for serverless capabilities, allow for scale down to zero, supporting serving and eventing by having portable functions run across those environments. So that's something that is important to us and we're starting to see support for in the marketplace.

>> Yeah, so I'd love just, obviously I'm sure you've got lots of breakouts on OpenShift Serverless, but I've been talking to your team for a number of years, and people, it's like "Oh, well, just as cloud killed everything before it, serverless obviates the need for everything else that we were going to use before." Underlying OpenShift Serverless, my understanding, Knative either is the solution, or a piece of the solution. Help us understand what serverless environment this ties into, what this means for both your infrastructure team as well as your app dev team.

>> Yeah, great, great question, so Knative is the basis of our serverless solution that we're introducing on OpenShift to the marketplace. The best way for me to talk about this is there's no one size fits all, so you're going to have specific applications or services that will take advantage of serverless capabilities, there will be some others that will take advantage of running within OpenShift, there'll be yet others, we talked about the AI/ML frameworks, that will run with different characteristics, also within the platform. So now the platform is being built to help support a diversity, a multitude of different ways of interacting with it, so I think maybe Stu, you're starting to allude to this a little bit, right, so now we're starting to focus on, we've got a great set of building blocks, on the right compute, network, storage, a set of primitives that Kubernetes laid out, thinking of the notions of clustering and being able to scale, and we'll talk a little bit about management as well of those clusters. And then it changes to a, "What are the capabilities now that I need to build to make sure that I'm most effective, most efficient, with regard to these workloads that I bring on?" You're probably hearing me say workloads now several times, because we're increasingly focused on adoption, adoption, adoption, how can we ensure that when these 1700 plus, hopefully, hundreds if not thousands more customers come on, they can get the most variety of applications onto this platform, so it can be a true abstraction over all the underlying physical resources that they have, across every deployment that they put out.

>> All right, well Ashesh, I wish we could spend another hour talking about the serverless piece, I definitely am going to make sure I check out some of the breakouts that cover the pieces that we talked about, but I know there's a lot more that the OpenShift update adds, so what other announcements, news, do you have to cover for us?

>> Yeah, so a couple other things I want to make sure I highlight here, one is a capability called ACM, advanced cluster management, that we're introducing.
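As a rough illustration of what managing a fleet from a hub looks like at the API level, the open-sourced cluster management tooling registers each fleet member as a resource along the lines of the sketch below. The cluster name and labels are hypothetical, and the API group and fields are an assumption here that may differ between releases.

```python
# Sketch only: how a managed cluster might be registered with a hub such as
# Advanced Cluster Management. The API group/version and fields below are an
# assumption; check the release you are running before relying on them.
import yaml

managed_cluster = {
    "apiVersion": "cluster.open-cluster-management.io/v1",
    "kind": "ManagedCluster",
    "metadata": {
        "name": "prod-us-east",  # hypothetical cluster name
        "labels": {"cloud": "aws", "environment": "prod"},  # used for placement and policy
    },
    "spec": {
        "hubAcceptsClient": True,  # the hub agrees to manage this cluster
    },
}

print(yaml.safe_dump(managed_cluster, sort_keys=False))
```

Policies and placement rules can then target clusters by those labels, which is the mechanism behind applying policy consistently and keeping the right applications on the right clusters, as described next.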
So this was experimental work that was happening with the IBM team, working on cluster management capabilities; we'd been doing some of that work ourselves within Red Hat, as part of IBM and Red Hat coming together. We've had several folks from IBM actually join Red Hat, and so we're now open sourcing and providing this cluster management capability, so this is the notion of being able to run and manage these different clusters from OpenShift, at scale, across multiple environments, be able to check on cluster health, be able to apply policy consistently, provide governance, ensure that appropriate applications are running in appropriate clusters, and so on, a series of capabilities, to really allow for multiple clusters to be run at scale and managed effectively, so that's one set of, go ahead, Stu.

>> Yeah, if I could, when I hear about multicluster management, I think of some of the solutions that I've heard talked about in the industry, so Azure Arc from Microsoft, Tanzu from VMware, when they talk about multicluster management, it is not only the Kubernetes solutions that they're offering, but also, how do I at least monitor, if not even allow a little bit of control across these environments? So when you talk about cluster management, is that all the OpenShift pieces, or things like AKS, EKS, other options out there, how do those fit into the overall management story?

>> Yeah, that's absolutely our goal, right, so we've got to get started somewhere, right? So we obviously want to make sure that we bring into effect the solution to manage OpenShift clusters at scale, and then of course, as we would expect, multiple other clusters exist, from Kubernetes, like the ones you mentioned, from the cloud providers, as well as others from third parties, and we want the solution to manage those as well. But obviously we're going to sort of take steps to get to the endpoint of this journey, so yes, we will get there, we've got to get started somewhere.

>> Yeah, and Ashesh, any guidance, when you look at people, some of the solutions I mentioned out there, when they start out it's "Here's the vision." So what guidance would you give to customers about where we are, how fast they can expect these things to mature, and I know anything that Red Hat does is going to be fully open source and everything, what's your guidance out there as to what customers should be looking for?

>> Yeah, so we're at an interesting point, I think, in this Kubernetes journey right now. And so when we, if you will, started off, and Stu, you and I have been talking about this for at least five years if not longer, it was with this notion that we want to provide a platform that can be portable and successfully run in multiple deployment environments. And we've done that over these years. But all the while when we were doing that, we're always thinking about, what are the capabilities that are needed that are perhaps not developed upstream, but will be over time, but we can ensure that we can look ahead and bring that into the platform. And for a really long time, and I think we still do, right, we at Red Hat take a lot of stick, with people saying, "Hey look, you fork the platform." Our comeback to that has always been, "Look, we're trying to help solve problems that we believe enterprise customers have, we want to ensure that they're available open source, and we want to upstream those capabilities always, back into the community."
But, let's say, making available a platform without RBAC, role-based access control, well, it's going to be hard then for enterprises to adopt that, so we've got to make sure we introduce that capability, and then make sure that it's supported upstream as well. And there's a series of capabilities and features like that that we work through. We've always provided an abstraction within OpenShift to make it more productive for developers and administrators to use it. And we always also support working with kubectl, or the command line interface from kube, as well. And then we always hear back from folks saying, "Well, you've got your own abstraction, that might make that incompatible." Nope, you can use both kubectl or oc commands, whichever one is better for you, have at it, we're just trying to be more productive. And now increasingly what we're seeing in the marketplace is this notion that we've got to make sure we work our way up from not just laying out a Kubernetes distribution, but thinking about the additional capabilities, additional services that you can provide, that would be more valuable to customers, and I think Stu, you were making the point earlier, increasingly, the more popular and the more successful Kubernetes becomes, the less you will see and hear of it, which by the way is exactly the way it should be, because that becomes then the basis of your underlying infrastructure, you are confident that you've got a rock-solid bottom, and now you as a customer, you as a user, are focusing all of your energy and time on building the productive applications and services on top.

>> Yeah, great, great points there Ashesh, the vision people always talked about is "If I'm leveraging cloud services, I shouldn't have to worry about what version they're running." Well, when it comes to Kubernetes, ultimately we should be able to get there, but I know there's always a little bit of a delta between the latest and newest version of Kubernetes that comes out, and what the managed services, and not only managed services, what customers are doing in their own environment. Even my understanding, even Google, which is where Kubernetes came out of, if you're looking at GKE, GKE is not on the latest, what are we on, 1.19, from the community, Ashesh, so what's Red Hat's position on this, what version are you up to, how do you think customers should think about managing across those environments, because boy, I've got too many scars from interoperability history, go back 10 or 15 years and everything, "Oh, my server BIOS doesn't work on that latest kernel.org version of what we're doing for Linux." Red Hat is probably better prepared than any company in the industry to deal with that massive change happening from a code-base standpoint, I've heard you give presentations on the history of Linux and Kubernetes, and what's going forward, so when it comes to the release of Kubernetes, where are you with OpenShift, and how should people be thinking about upgrading from versions?

>> Yeah, another excellent point, Stu, you've clearly been following us pretty closely over the years, so where we came at this was, we actually learned quite a bit from our experience as a company with OpenStack. And so what would happen with OpenStack is, you would have customers that are on a certain version of OpenStack, and then they kept saying, "Hey look, we want to consume close to trunk, we want new features, we want to go faster."
And we'd obviously spent some time, from the release in the community to actually shipping our distribution into customers' hands; there's going to be some amount of time for testing and QE to happen, and some integration points that need to be certified, before we make it available. We often found that customers lagged, so there'd be, let's say, a small subset, if you will, within every customer, or several customers, who want to be consuming close to trunk; a majority actually want stability. Especially as time wore on, they were more interested in stability. And you can understand that, because now if you've got mission-critical applications running on it you don't necessarily want to go and put that at risk. So the challenge that we addressed when we actually started shipping OpenShift 4 last summer, so about a year ago, was to say, "How can we provide you basically a way to help upgrade your clusters, essentially remotely, so you can upgrade, if you will, your clusters, or at least be able to consume them at different speeds." So what we introduced with OpenShift 4 was this ability to give you over-the-air updates, so the best way to think about it is with regard to a phone. So you have your phone, your new OS upgrades show up, you get a notification, you turn it on, and you say, "Hey, pull it down," or you say at a certain point of time, or you can go off and delay it, do it at a different point in time. That same notion now exists within OpenShift. Which is to say, we provide you three channels, so there's a stable channel where you say, "Hey look, maybe this cluster's in production, no rush here, I'll stay at or even a little behind," there's a fast channel for "Hey, I want to be up on the latest and greatest," or there's a third channel which allows for essentially features that are being developed, or are still in an early stage of development, to be pushed out to you. So now you can start consuming these upgrades based on "Hey, I've got a dev team, on day one I get these quicker," or "I've got these applications that are stable in production, no rush here." And then you can start managing that better yourself. So now, if you will, those are capabilities that we're introducing into a Kubernetes platform, a standard Kubernetes platform, but adding additional value, to be able to have that be managed in a much better fashion that serves the different needs of different parts of an organization, allows for them to move at different speeds, but at the same time, gives you that same consistent platform regardless of where you are.
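To make the channel idea concrete: on an OpenShift 4 cluster the update channel is a field on the cluster-wide ClusterVersion resource, so moving a cluster between the stable and fast streams is a one-line spec change. The sketch below assumes the 4.4-era channel names; treat the exact values as an assumption and check the documentation for the release you are running.

```python
# Sketch: selecting an update channel on OpenShift 4 via the ClusterVersion
# resource (the object is conventionally named "version"). Channel names like
# "stable-4.4" / "fast-4.4" / "candidate-4.4" are the 4.4-era conventions and
# are an assumption here; adjust them for your release.
import yaml

channel_selection = {
    "apiVersion": "config.openshift.io/v1",
    "kind": "ClusterVersion",
    "metadata": {"name": "version"},
    "spec": {
        # "stable-4.4" for the no-rush production cluster,
        # "fast-4.4" for latest-and-greatest,
        # "candidate-4.4" for still-in-development bits
        "channel": "fast-4.4",
    },
}

print(yaml.safe_dump(channel_selection, sort_keys=False))
# The equivalent one-liner:
#   oc patch clusterversion version --type merge -p '{"spec":{"channel":"fast-4.4"}}'
```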
>> All right, so Ashesh, we started out the conversation talking about OpenShift anywhere and everywhere, so in the cloud, you talked about sitting on top of VMware, VM farms are very prevalent in the data centers, or bare metal. I believe, from what I saw, one of the updates for OpenShift is how Red Hat Virtualization is working with OpenShift there, and a lot of people out there are kind of staring at what VMware did with vSphere 7, so maybe you can set it up with a little bit of a compare and contrast as to how Red Hat's doing this rollout, versus what you're seeing your partner VMware doing, or how Kubernetes fits into the virtualization environment.

>> Yeah, I feel like we're both approaching it from different perspectives and the lens that we each come at it with, so if I can, the VMware perspective is likely "Hey look, there's all these installations of vSphere in the marketplace, how can we make sure that we help bring containers there," and they've come up with a solution that you can argue is quite complicated in the way they're achieving it. Our approach is a different one, right, so we always looked at this problem from the get-go with regard to containers as a new paradigm shift; it's not necessarily a revolution, because most companies that we're looking at are working with existing application services, but it's an evolution in the way you're thinking about the world, but this is definitely the long-term future. And so how can we then think about introducing this environment, this application platform, into the environment, and then be able to build new applications in it, but also bring existing applications to the fore? And so with this release of OpenShift, what we're introducing is something that we're calling OpenShift Virtualization, which is, if you have existing applications, certain VMs, how can we ensure that we bring those VMs into the platform; they've been certified, they've had security boundaries put around them, or certain constraints or requirements have been put around them by your internal organization, and we can keep all of those, but then still encapsulate that VM as a container, have that be run natively within an environment orchestrated by OpenShift, Kubernetes as the primary orchestrator of those VMs, just like it does with everything else that's cloud-native, or is running directly as containers as well. We think that's extremely powerful, for us to really bring now the promise of Kubernetes into a much wider market, so I talked about 1700 customers, you can argue that that 1700 is the early majority, or if you will, almost just scratching the surface of the numbers that we believe will adopt this platform. To get to, if you will, the next set of, whatever, five, 10, 20,000 customers, we'll have to make sure we meet them where they are. And so introducing this notion of saying "We can help migrate," with a series of tools that Red Hat's providing, these VM-based applications, and then have them run within Kubernetes in a consistent fashion, is going to be extremely powerful, and we're really excited about those capabilities, bringing that to our customers.

>> Well Ashesh, I think that puts a great exclamation point on how we go from these early days to the vast majority of environments. Ashesh, one thing, congratulations to you and the team on the growth, the momentum, all the customer stories, I'd love the opportunity to talk to many of the Red Hat customers about their digital transformation and how your cloud platforms have been a piece of it, so once again, always a pleasure to catch up with you.

>> Likewise, thanks a lot, Stuart, good chatting with you, and hope to see you in person soon sometime.

>> Absolutely, we at theCUBE of course hope to see you at events later in 2020. For the time being, we are of course fully digital, always online; check out theCUBE.net for all of the archives as well as the events, including all the digital ones that we are doing. I'm Stu Miniman, and as always, thanks for watching theCUBE. (calm music)
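For reference, the "encapsulate that VM as a container" approach Ashesh describes above builds on the upstream KubeVirt project, where a virtual machine is declared as just another Kubernetes resource and scheduled by the same control plane. The sketch below is illustrative only: the VM name, sizing, and disk image are hypothetical, and the exact API version varies across KubeVirt releases.

```python
# Sketch only: a KubeVirt-style VirtualMachine of the kind OpenShift
# Virtualization manages. Name, sizing, and image are made up, and the API
# version shown is an assumption that differs between KubeVirt releases.
import yaml

virtual_machine = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm", "namespace": "demo"},
    "spec": {
        "running": True,  # start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "cpu": {"cores": 2},
                    "resources": {"requests": {"memory": "4Gi"}},
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        "containerDisk": {"image": "quay.io/example/legacy-app-disk:latest"},
                    }
                ],
            }
        },
    },
}

print(yaml.safe_dump(virtual_machine, sort_keys=False))
```

Kubernetes schedules and restarts that VM the way it does any other workload, which is what lets one cluster orchestrate containers and lifted-and-shifted VMs side by side.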
Brian Gracely, Red Hat | KubeCon + CloudNativeCon EU 2019
>> Live, from Barcelona, Spain, it's theCUBE, covering KubeCon and CloudNativeCon Europe, 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners.

>> Welcome back. This is theCUBE at KubeCon CloudNativeCon 2019 here in Barcelona, Spain. I'm Stu Miniman, my co-host is Corey Quinn, and welcoming back to the program, friend of the program, Brian Gracely, who is the Director of Product Strategy at Red Hat. Brian, great to see you again.

>> I've been, I feel like I've been in the desert. It's three years, I'm finally back, it's good to be back on theCUBE.

>> Yeah well, I feel like we've been traveling parallel paths a lot. TheCUBE goes to a lot of events. We do a lot of interviews, but I think when you go to shows, you actually have more back-to-back meetings than we even do, so we feel you on the jet lag and a little bit of exhaustion. Thanks for making time.

>> Yeah, it's great. I had dinner with you two weeks ago, I did a podcast with Corey a week ago, and now, due to the magic of the internet, we're all here together in one place. It's good.

>> Absolutely. Well Brian, as we know, at a show like this we all want to hold hands and sing Kubernetes Kumbaya. It's wonderful to see that all of the old fights of the past have all been solved by software in the cloud.

>> They're all good, it's all good. Yeah, somebody said it's a cult. I think I heard Owen Rodgers say it's now officially a cult. Corey, you called it the Greek word for spending lots of money.

>> Uh yeah, it was named after the Kubernetes, the Greek god of spending money on cloud services.

>> So, Brian, you talk to a lot of customers here. As they look at this space, how do they look at it? There's still times that I hear them, "I'm using this technology and I'm using this technology, and gosh darn it, vendor, you better get together and make this work." So, open source, we'd love to say, is the panacea, but maybe not yet.

>> I don't think we hear that as much anymore because there is no more barrier to getting the technology. It's no longer, I get this technology from vendor A and I wish somebody else would support the standard. It's like, I can get it if I want it. I think the conversations we typically have aren't about features anymore, they're simply, my business is driven by software, that's the way I interact with my customer, that's the way I collect data from my customers, whatever that is. I need to do that faster and I need to teach my people to do that stuff. So the technology becomes secondary. I have this saying, and it frustrates people sometimes, but I'm like, there's not a CEO, a CIO, a CTO that you would talk to that wakes up and says, "I have a Kubernetes problem." They all go, "I have a, I have this business problem, I have that problem, it happens to be software." Kubernetes is a detail.

>> Yeah Brian, those are the same people who 10 years ago had a convergence problem, I never ran across them.

>> If you screw up a Kubernetes roll-out, then you have a Kubernetes problem. But it's entertaining though. I mean, you are the Director of Product Strategy, which is usually a very hard job, with the notable exception of one very large cloud company, where that role is filled by a post-it note that says simply, yes. So as you talk to the community and you look at what's going on, how are you having these conversations inform what you're building in terms of OpenShift?

>> Yeah, I mean, with strategy you can be one of two things.
You can either be really good at listening, or you can have a great crystal ball. I think Red Hat has essentially said, we're not going to be in the crystal ball business. Our business model is, there's a lot of options, we will go get actively involved with them, we will go scratch our knees and get scars and stuff. Our biggest thing is, I have to spend a lot of time talking to customers going, what do you want to do? Usually there's some menu that you can offer them right now and it's really a matter of, do you want it sort of half-baked? Are you willing to sort of go through the learning process? Do you need something that's a little more finalized? We can help you do that. And our big thing is, we want to put as many of those things kind of together in one stew, so that you're not having-- Not you, Stu, but other stews, thinking about like, I don't want to really think about them, I just want it to be monitored, I want the network to just work, I want scalability built in. So for us it's not so much a matter of making big, strategic bets, it's a matter of going, are we listening enough and piecing things together so they go, yeah, it's pretty close and it's the right level of baked for what I want to do right now.

>> Yeah, so Brian, an interesting thing there. There's still quite a bit of complexity in this ecosystem. Red Hat does a good job of giving adult supervision to the environment, but, you know, I think back to when RHEL came out, it was like, okay, great. Back in the day, I get a CD and I know I can run this. Today here, if I talk to every Kubernetes customer that I run across and say okay, tell me your stack and tell me what service mesh you're using, tell me which one of these projects you're doing and how you put them together, there's a lot of variation. So how do you manage that, the scale and growth, with the individual configurations that everybody still can do, even if they're starting to do public clouds and all those other things?
>> And that's, you know, not to make this a commercial, but that's basically what OpenShift 4 became, was much more opinions about what we think are best practices, based on about a thousand customers having done this. So we don't run into as many of the pick-your-stack things, we run into that next-level thing. Are we automating it enough? Do we scale it? How do we do statefulness? Stuff like that.

>> Yeah, I'm curious, in the Keynote this morning they called Kubernetes, you know, a platform of platforms. Did that messaging resonate with you and your customers?

>> Yeah, I think so, I mean, Kubernetes by itself doesn't really do anything, you need all this other stuff. So when I hear people say we deployed Kubernetes, I'm like, no you don't. You know, it's the engine of what you do, but you do a bunch of other stuff. So yeah, we like to think of it as like, we're platform builders, you should be a platform consumer, just like you're a consumer of Salesforce. They're a platform, you consume that.

>> Yeah, one of the points made in the Keynote was how one provider, I believe it was IBM, please yell at me if I got that one wrong, talks about using Kubernetes to deploy Kubernetes. Which on the one hand, is super cool and a testament to the flexibility of how this is really working. On the other, it's-- and thus the serpent devours itself, and it becomes a very strange question of, okay, then we're starting to see some weird things. Where do we start, where do we look? Indeed.com for a better job. And it's one of those problems that at some point you just can't wrap your head around complexities inside of complexities, but we've been dealing with that for 40 years.

>> Yeah, Kubernetes managing Kubernetes is kind of one of those weird words, like serverless, you're like, what does that mean? I don't, it doesn't seem to, I don't think you mean what you want it to mean. The simplest way we explain that stuff, so... A couple of years ago there was a guy named Brandon Philips who had started a company called CoreOS. He stood up at Kube--

>> I believe you'll find it's pronounced CoreOS, but please, continue.

>> CoreOS, exactly. Um, he stood up in the Seattle one when there was a thousand people at this event, or 700, and he said, "I've created this pattern, or we think there's a pattern that's going to be useful." The simplest way to think of it is, there's stuff that you just want to run, and I want essentially something monitoring it and keeping it in a loop, if you will. Kubernetes just has that built in. I mean, it's kind of built in to the concept, because originally Google said, "I can't manage it all myself." So that thing that he originally came up with, or codified, became what's now called operators. Operators is that thing now that's like okay, I have a stateful application. It needs to do certain things all the time, that's the best practice. Why don't we just build that around it? And so I think you heard in a lot of the Keynotes, if you're going to run storage, run it as an operator. If you're going to run a database, run it as an operator. It sounds like inception, Kubernetes running-- It's really just, it's a health loop that's going on all the time with a little bit of smarts that say hey, if you fail, fail this way. I always use the example, like if I go to Amazon and get RDS, I don't get a DBA, there's no guy that shows up and says, "Hey, I'm your DBA." You just get some software that runs it for you. That's all this stuff is, it just never existed in Kubernetes before.
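To ground the "health loop with a little bit of smarts" idea, here is a minimal sketch of the reconcile loop at the heart of an operator, written against the Python Kubernetes client. The Database custom resource, its API group, and the Deployment it creates are all hypothetical, and a production operator would be built with a framework such as the Operator SDK rather than a bare loop like this.

```python
# Minimal sketch of an operator-style reconcile loop: watch a hypothetical
# "Database" custom resource and make sure a matching Deployment exists.
# Real operators are usually built with the Operator SDK / controller-runtime;
# this bare loop only illustrates the observe-compare-correct idea.
import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException

GROUP, VERSION, PLURAL = "example.com", "v1", "databases"  # hypothetical CRD
NAMESPACE = "demo"


def desired_deployment(name: str) -> client.V1Deployment:
    """Build the Deployment we want to exist for a given Database object."""
    container = client.V1Container(name=name, image="quay.io/example/db:latest")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    return client.V1Deployment(metadata=client.V1ObjectMeta(name=name), spec=spec)


def reconcile_once(custom_api: client.CustomObjectsApi, apps_api: client.AppsV1Api) -> None:
    """One pass of the loop: for every Database CR, ensure its Deployment exists."""
    databases = custom_api.list_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL)
    for db in databases.get("items", []):
        name = db["metadata"]["name"]
        try:
            apps_api.read_namespaced_deployment(name, NAMESPACE)
        except ApiException as err:
            if err.status == 404:  # missing: the "if you fail, fail this way" smarts
                apps_api.create_namespaced_deployment(NAMESPACE, desired_deployment(name))
            else:
                raise


if __name__ == "__main__":
    config.load_kube_config()  # or load_incluster_config() when running in the cluster
    custom_api, apps_api = client.CustomObjectsApi(), client.AppsV1Api()
    while True:  # the health loop: observe, compare, correct, repeat
        reconcile_once(custom_api, apps_api)
        time.sleep(30)
```

That loop is all an operator really is: software that continuously checks what you asked for against what is actually running and fixes the difference, much the way a managed database service does it for you behind the scenes.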
Kubernetes has now matured enough to where they go, oh, I can play in that world, I can make that part of what I do. So it's less scary; it sounds sort of weird, inception-y. It's really just kind of what you've already gotten out of the public cloud, now brought to wherever you want it.

>> Well, one of the concerns that I'm starting to see as well is there's a level of hype around this. We've had a lot of conversations around Kubernetes today and yesterday, to the point where you can almost call this Kubernetes and friends instead of CloudNativeCon. And everyone has described it slightly differently. You see people describing it as systemd, as a kernel, sometimes as the way and the light, and someone on stage yesterday said that we all are familiar with the value that Kubernetes has brought to our jobs, and "our lives," I think, was the follow-up to that, which is a little strange. And I got to thinking about that. I don't deny that it has brought value, but what's interesting to me about this is I don't think I've heard two people define its value in the same terminology at all, and we've had kind of a lot of these conversations.

>> So obviously not a cult, because they would all be on message if it was a cult.

>> Yeah, yeah yeah yeah.

>> It's a cult with very crappy brand control, maybe. We don't know.

>> I always just explain it that like, you know, if I went back 10 years or something, people... Any enterprise said, hey, I would love to run like Google or like Amazon. Apparently for every one admin, I can manage a thousand servers, and in their own data centers it's like, well, I have one guy and he manages five, so I have cloud envy.

>> We tried to add a sixth and he was crushed to death. Turns out those racks have size and weight limits.

>> That's right, that's right. And so, people, they wanted this thing, they would've paid an arm and a leg for it. You move forward five years from that and it's like, oh, Google just gave you their software, it's now available for free. Now what are you going to do with it? I gave you a bunch of power. So yeah, depending on how much you want to drink the Kool-Aid you're like, this is awesome, but at the end of the day you're just like, I just want the stuff that's freely, publicly available, but for whatever reason, I can't be all in on one cloud, or I can't be all in on a public cloud, which, you believe that there's tons of economic value in it, there's just some companies that can't do that.

>> And I fully accept that. My argument has always been that it is, I think, a poor best practice. When you have a constraint that forces you to be in multiple cloud providers, yes, do it! That makes absolute perfect sense.

>> Right, if it makes sense, do it. And that's kind of what we've always said, look, we're agnostic to that. If you want to run it, if you want to run it in a disconnected mode on a cruise ship, great, if it makes sense for you. If you need to run, you know, like... The other thing that we see--

>> That cruise ship becomes a container ship.

>> Becomes a container ship. I had an interesting conversation with a bank last night. I had dinner with the bank. We were talking, they said, look, I run some stuff locally where I'm at, 'cause I have to, and then we put a ton of stuff in AWS. He told me this story about a batch processing job that cost him like $4 or $5 million today. He does a variant of it in Lambda, and it cost him like $50 a month.
So we had this conversation and it's going like, I love AWS, I want to be all in at AWS. And he said, here's my problem. I wake up every morning worried that I'm going to open the newspaper and Amazon, not AWS, Amazon, is going to have moved closer into the banking industry than they are today. And so I have to have this kind of backup plan, if you will. Backup's the wrong word, but sort of a contingency plan of, if they stop being my technology partner and they start becoming my competitor, which, there's arguments--

>> And for most of us I'd say that's not a matter of if, but when.

>> Right, right. And some people live with it great. Like, Netflix lives with it, right? Others struggle. That guy's not doing multi-cloud in the future, he's just going, I would like to have the technology that allows me, if that comes along. I'm not doing it to do it, I'd like that built in.

>> So Brian, just want to shift a little bit off of kind of the multi-cloud discussion. The thing that's interested me a lot, especially as I've talked to a number of the OpenShift customers, is, historically, infrastructure was the thing that slowed me down. We understand, oh, I want to modernize that. No, no, wait. The back-end thing, or you know, provisioning, these kinds of things take forever. The lever of this platform has been, I can move faster, I can really modernize my environment, and, whether that's in my data center or in one public cloud and a couple of others, it is that, you know, great lever to help me be able to do that. Is that the right way to think about this? You've talked to a lot of customers. Is that a commonality between them?

>> I think we see, I hate to give you a vendor answer, but we tend to see different entry points. So for the infrastructure people, I mean, the infrastructure people realize in some cases they're slow, and in a lot of cases the ones that are still slow, it's 'cause of some compliance thing. I can give you a VM in an hour, but I got to go through a process. They're the ones that are saying, look, my developers are putting stuff in containers, or they're downloading it, I just need to be able to support that. The developers obviously are the ones who are saying, look, business need, business problem, have budget to do something. That's usually the more important lever. Just faster infrastructure doesn't do a whole lot. But we find more and more where those two people have to be in the room. They're not making choices independently. But the ones that are successful, the ones that you hear case studies about, none of them are like, we're great at building containers. They're great at building software. Development drives it; infrastructure still tends to have a lot of the budget, so they play a role in it, but they're not dictating where it goes or what it does.

>> Yeah, any patterns you're seeing or things that customers can do to kind of move further along that spectrum?

>> I think, I mean, there's a couple of things, and whether you fit in this or not, number one, nobody has a container problem. Start with a business problem. That's always good for technology in general, but this isn't a refresh thing, this is some business problem. That business problem typically should be, I have to build software faster. We always say... I've seen enough of these go well and I've seen enough go poorly. These events are great. They're great in the sense of people see that there's progress, there's innovation.
They're also terrible, because if you're new walking into this, you feel like, man, everybody understands this, it must be pretty simple. And what'll happen is they start working on it and they realize, I don't know what I'm doing. Even if they're using OpenShift and we made it easy, they don't know what they're doing. And then they go, I'm embarrassed to ask for help. Which is crazy, because if you get into open source the community's all there to help. So it's always like, business problem, ask for help early and often, even if it embarrasses you. Don't go after low-hanging fruit, especially if you're trying to get further investment. Spinning up a bunch of web clusters or hello worlds, nobody cares anymore. Go after something big. It basically forces your organization to be all in. And then the other thing, and this is the thing that's never intuitive to IT teams, is, at the point where you actually made something work, you have to look more like my organization than yours, which is basically, you have to look like a software marketing company, because internally you're trying to convince developers to come use your platform, or to build faster, or whatever. You actually have to have internal evangelists, and for a lot of them, they're like, dude, marketing, eh, I don't want anything to do with that. But it's like, that's the way you're going to get people to come to your new way of doing things.

>> Great points, Brian. I remember 15 years ago, it was the first time I was like, wait, the CIO has a marketing person under him to help with some of those transformations? Some of the software rollouts to do?

>> Yeah, it's the reason they all want to come and speak at Keynotes, and they get at the end and they go, we're hiring. It's like, I got to make what I'm doing sound cool and attract 8,000 people to it.

>> Well, absolutely it's cool here. We really appreciate, Brian, you sharing all the updates here.

>> Great to see you guys again. It's good to be back.

>> Definitely don't be a stranger. So for Corey Quinn, I'm Stu Miniman. Getting towards the end. Two days live, wall-to-wall coverage here at KubeCon, CloudNativeCon 2019. Thanks for watching theCUBE. (rhythmic music)