Ashesh Badani, Red Hat | Red Hat Summit 2020
>> Announcer: From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat.

>> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit, happening digitally, interviewing practitioners, executives, and thought leaders from around the world. Happy to welcome back to our program one of our CUBE alumni, Ashesh Badani, who's the Senior Vice President of Cloud Platforms with Red Hat. Ashesh, thank you so much for joining us, and great to see you.

>> Yeah, likewise, thanks for having me on, Stu. Good to see you again.

>> All right, so, Ashesh, since the last time we had you on theCUBE a few things have changed. One of them is that IBM has now finished the acquisition of Red Hat, and I've heard from you for a really long time, you know, OpenShift, it's anywhere and it's everywhere, but with the acquisition of Red Hat, it just means this only runs on IBM mainframes and IBM Cloud, and all things blue, correct?

>> Well, that's true for sure, right? So, Stu, you and I have talked many, many times. As you know, we've been committed to hybrid multi-cloud from the very get-go, right? So, OpenShift is supported to run on bare metal, on virtualization platforms, whether they come from us, or VMware, or Microsoft Hyper-V, on private clouds like OpenStack, as well as on AWS, Google Cloud, and Azure. Now, with the completion of the IBM acquisition of Red Hat, we obviously always partnered with IBM before, but given, if you will, a little bit of a closer relationship here, you know, IBM's been very keen to make sure that they promote OpenShift on all their platforms. So as you can probably see, OpenShift on IBM Cloud, as well as OpenShift on Z, on the mainframe. So regardless of how you like OpenShift, wherever you like OpenShift, you will get it.

>> Yeah, so great clarification. It's not only on IBM, but of course all of the IBM environments are supported, as you said, as well as AWS, Google, Azure, and the like. Yeah, I remember years ago, before IBM created their single, condensed conference of THINK, I attended the conferences that would do Z, and Power, and Storage, and people would be like, you know, "What are they doing with that mainframe?" I'm like, "Well, you do know that it can run Linux." "Wait, it can run Linux?" I'm like, "Oh my god, Z's been able to run Linux for a really long time." So you want your latest container, Docker, OpenShift stuff on there? Yeah, that can sit on a mainframe. I've talked to some very large, global companies for which that is absolutely a part of their overall story. So, OpenShift--

>> Interesting you say that, because we already have customers who've been procuring OpenShift on the mainframe. So if you've made the investment in the mainframe, and it's running, say, machine learning applications for you, and you're looking to modernize some of the applications and services that run on top, OpenShift on the mainframe is now an available option, which customers are already taking advantage of. So exactly right to your point, we're seeing that in the market today.

>> Yeah, and Ashesh, maybe it's good to kind of, you know, you've got a great viewpoint as to customers deploying across all sorts of environments, so you mentioned VMware environments, the public cloud environment. It was our premise a few years ago on theCUBE that Kubernetes gets baked into all the platforms, and absolutely, it's going to just be a layer underneath.
I actually think we won't be talking a lot about Kubernetes if you fast-forward a couple of years, just because it's in there. I'm using it in all of my environments. So what are you seeing from your customers? Where are we in that general adoption, and any specifics you can give us about, you know, kind of the breadth and the depth of what you're seeing from your customer base?

>> Yeah, so, you're exactly right. We're seeing that adoption continue on the path it's been on. So we've got now over 1700 customers for OpenShift, running in all of these environments that you mentioned, so public, private, a combination of the two, running on traditional virtualization environments, as well as ensuring that they run in public cloud at scale. In some cases managed by customers, in other cases managed by us on their behalf in a public cloud. So we're seeing all permutations, if you will, of that in play today. We're also seeing a huge variety of workloads, and to me, that's actually really interesting and fascinating. In the earliest days, as you'd expect, people were trying to play with microservices, trying to build new services and bring them to market, so cloud-native, what have you. Then we ensured that we were supporting stateful applications. Now you're starting to see legacy applications move on, and we're ensuring that we can run them and support them at scale within the platform, because customers are looking to modernize applications. We'll maybe talk in a few minutes also about lift-and-shift, which comes into play as well. But now we're also starting to see new workloads come on. Just most recently we announced some of the work that we're doing with a series of partners, from NVIDIA to emerging AI/ML, artificial intelligence and machine learning, frameworks and ISVs, looking to bring those to market. We've been ensuring that those are supported and can run with OpenShift. Right, our partnership with NVIDIA is about ensuring OpenShift is supported on GPU-based environments for specific workloads, whether they're performance-sensitive or take advantage of the underlying hardware. So a wide variety, if you will, of application types is also something that we're starting to see: the number of customers increasing, the types of workloads coming on increasing, and then the diversity of underlying deployment environments where they're running all these services.

>> Ashesh, such an important piece, and I'm so glad you talked about it there. 'Cause you know my background's infrastructure, and we tend to look at things as "Oh well, I moved from a VM to a container, to cloud, or all these other things," but the only reason infrastructure exists is to run my applications; it's my data and my applications that are the most important things out there. So Ashesh, let me get into some of the news that you've got here. Your team works on a lot of things, and I believe one of them talks about some of those new ways that customers are building applications and how OpenShift fits into those environments.

>> Yeah, absolutely. So look, we've been on this journey, as you know, for several years now. Recently we announced the GA of OpenShift Service Mesh, in support of Istio, as we see increasing interest, as more and more microservices take advantage of those capabilities that are coming in. At this event we're now also announcing the GA of OpenShift Serverless.
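To make that announcement a bit more concrete for readers, here is a minimal sketch of the kind of scale-to-zero workload discussed next in the interview. It is not taken from the interview: the namespace, service name, and container image are hypothetical placeholders, and it assumes a cluster where Knative serving (the upstream project OpenShift Serverless is based on) is installed. The Python kubernetes client is used purely for illustration; the same manifest could be applied with oc or kubectl.

```python
# Sketch: a Knative Service that can scale down to zero when idle,
# created through the standard Kubernetes API. Names and image are
# hypothetical; assumes Knative serving is installed on the cluster.
from kubernetes import client, config

config.load_kube_config()  # reuses your existing oc/kubectl login

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "serverless-demo"},
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    # Allow scale-to-zero when idle, cap the burst scale-out.
                    "autoscaling.knative.dev/minScale": "0",
                    "autoscaling.knative.dev/maxScale": "10",
                }
            },
            "spec": {
                "containers": [
                    {
                        "image": "quay.io/example/hello:latest",
                        "env": [{"name": "TARGET", "value": "OpenShift Serverless"}],
                    }
                ]
            },
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="serverless-demo",
    plural="services",
    body=knative_service,
)
```

Once applied, the serving layer routes traffic to the revision and scales its pods to zero when idle, which is the behavior described in the discussion that follows.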
We're starting to see obviously a lot of interest, right. We've seen the likes of AWS spawn that in the first instance, but more and more customers are interested in making sure that they can get a portable way to run serverless in any Kubernetes environment, to take advantage of open source projects as building blocks, if you will, so primitives within Kubernetes that allow for serverless capabilities, allow for scale down to zero, and support serving and eventing, by having portable functions run across those environments. So that's something that is important to us, and that we're starting to see support for in the marketplace.

>> Yeah, so I'd love to just, obviously I'm sure you've got lots of breakouts on OpenShift Serverless, but I've been talking to your team for a number of years, and people, it's like, "Oh, well, just as cloud killed everything before it, serverless obviates the need for everything else that we were going to use before." Underlying OpenShift Serverless, my understanding is, Knative either is the solution, or a piece of the solution. Help us understand what serverless environments this ties into, and what this means for both your infrastructure team as well as your app dev team.

>> Yeah, great, great question. So Knative is the basis of our serverless solution that we're introducing on OpenShift to the marketplace. The best way for me to talk about this is there's no one size fits all, so you're going to have specific applications or services that will take advantage of serverless capabilities, there will be some others that will take advantage of running within OpenShift, and there'll be yet others, we talked about the AI/ML frameworks, that will run with different characteristics, also within the platform. So now the platform is being built to help support a diversity, a multitude of different ways of interacting with it. I think, maybe Stu, you're starting to allude to this a little bit, right: we've got a great set of building blocks, the right compute, network, and storage, a set of primitives that Kubernetes laid out, thinking of the notions of clustering and being able to scale, and we'll talk a little bit about management of those clusters as well. And then it changes to a, "What are the capabilities now that I need to build to make sure that I'm most effective, most efficient, with regard to these workloads that I bring on?" You're probably hearing me say workloads now several times, because we're increasingly focused on adoption, adoption, adoption: how can we ensure that when these 1700-plus, hopefully hundreds if not thousands more, customers come on, they can get the widest variety of applications onto this platform, so it can be a true abstraction over all the underlying physical resources that they have, across every deployment that they put out.

>> All right, well Ashesh, I wish we could spend another hour talking about the serverless piece. I definitely am going to make sure I check out some of the breakouts that cover the piece we just talked about, but I know there's a lot more that the OpenShift update adds, so what other announcements and news do you have to cover for us?

>> Yeah, so a couple other things I want to make sure I highlight here. One is a capability called ACM, advanced cluster management, that we're introducing.
There was experimental work happening with the IBM team on cluster management capabilities, and we'd been doing some of that work ourselves within Red Hat. As part of IBM and Red Hat coming together, we've had several folks from IBM actually join Red Hat, and so we're now open sourcing and providing this cluster management capability. This is the notion of being able to run and manage these different clusters from OpenShift, at scale, across multiple environments: be able to check on cluster health, be able to apply policy consistently, provide governance, ensure that appropriate applications are running in appropriate clusters, and so on, a series of capabilities to really allow for multiple clusters to be run at scale and managed effectively. So that's one set of... go ahead, Stu.

>> Yeah, if I could, when I hear about multicluster management, I think of some of the solutions that I've heard talked about in the industry, so Azure Arc from Microsoft, Tanzu from VMware. When they talk about multicluster management, it is not only the Kubernetes solutions that they're offering, but also, how do I at least monitor, if not even allow a little bit of control, across these environments? So when you talk about cluster management, is that only the OpenShift pieces, or do things like AKS, EKS, and other options out there fit in as well? How do those fit into the overall management story?

>> Yeah, that's absolutely our goal, right, but we've got to get started somewhere, right? So we obviously want to make sure that we first bring out the solution to manage OpenShift clusters at scale, and then of course, as you would expect, multiple other clusters exist, Kubernetes clusters like the ones you mentioned, from the cloud providers as well as others from third parties, and we want the solution to manage those as well. Obviously we're going to take steps to get to the endpoint of this journey, so yes, we will get there; we've got to get started somewhere.

>> Yeah, and Ashesh, any guidance? When you look at some of the solutions I mentioned out there, when they start out it's "Here's the vision." So what guidance would you give to customers about where we are, how fast they can expect these things to mature, and, I know anything that Red Hat does is going to be fully open source and everything, what's your guidance out there as to what customers should be looking for?

>> Yeah, so we're at an interesting point, I think, in this Kubernetes journey right now. When we, if you will, started off, and Stu, you and I have been talking about this for at least five years if not longer, it was with this notion that we want to provide a platform that can be portable and successfully run in multiple deployment environments. And we've done that over these years. But all the while we were doing that, we were always thinking about what capabilities are needed that are perhaps not developed upstream yet, but will be over time, so that we can look ahead and bring those into the platform. And for a really long time, and I think we still do, we at Red Hat take a lot of stick for it, people saying, "Hey look, you fork the platform." Our answer back to that has always been, "Look, we're trying to help solve problems that we believe enterprise customers have, we want to ensure that they're available open source, and we want to upstream those capabilities, always, back into the community."
But, let's say, making available a platform without RBAC, role-based access control: well, it's going to be hard then for enterprises to adopt that, so we've got to make sure we introduce that capability, and then make sure that it's supported upstream as well. And there's a series of capabilities and features like that that we work through. We've always provided an abstraction within OpenShift to make it more productive for developers and administrators to use, and we also always support working with kubectl, the command line interface from Kubernetes, as well. And then we always hear back from folks saying, "Well, you've got your own abstraction, doesn't that make it incompatible?" Nope, you can use either kubectl or oc commands, whichever one is better for you, have at it; we're just trying to be more productive. And now increasingly what we're seeing in the marketplace is this notion that we've got to make sure we work our way up from not just laying out a Kubernetes distribution, but thinking about the additional capabilities, the additional services that you can provide, that would be more valuable to customers. And I think, Stu, you were making the point earlier: increasingly, the more popular and the more successful Kubernetes becomes, the less you will see and hear of it, which by the way is exactly the way it should be, because that then becomes the basis of your underlying infrastructure. You are confident that you've got a rock-solid foundation, and now you as a customer, you as a user, are focusing all of your energy and time on building the productive applications and services on top.

>> Yeah, great, great points there, Ashesh. The vision people always talked about is, "If I'm leveraging cloud services, I shouldn't have to worry about what version they're running." Well, when it comes to Kubernetes, ultimately we should be able to get there, but I know there's always a little bit of a delta between the latest and newest version of Kubernetes that comes out, and what the managed services, and not only managed services, what customers are doing in their own environments. Even, my understanding, even Google, which is where Kubernetes came out of, if you're looking at GKE, GKE is not on the latest, what are we on, 1.19, from the community. So Ashesh, what's Red Hat's position on this, what version are you up to, and how do you think customers should think about managing across those environments? Because boy, I've got too many scars from interoperability history; go back 10 or 15 years and everything was, "Oh, my server BIOS doesn't work on that latest kernel.org version of what we're doing for Linux." Red Hat is probably better prepared than any company in the industry to deal with that massive change happening from a code-base standpoint. I've heard you give presentations on the history of Linux and Kubernetes, and what's going forward, so when it comes to the release of Kubernetes, where are you with OpenShift, and how should people be thinking about upgrading between versions?

>> Yeah, another excellent point, Stu; you've clearly been following us pretty closely over the years. Where we came at this was, we actually learned quite a bit from our experience as a company with OpenStack. And so what would happen with OpenStack is, you would have customers that are on a certain version of OpenStack, and then they kept saying, "Hey look, we want to consume close to trunk, we want new features, we want to go faster."
And we'd obviously spend some time from the release in the community to actually shipping our distribution into customers' hands; there's going to be some amount of time for testing and QE to happen, and some integration points that need to be certified, before we make it available. We often found that customers lagged, so there'd be, let's say, a small subset, if you will, within every customer, or several customers, who want to be consuming close to trunk, but a majority actually want stability. Especially as time wore on, they were more interested in stability. And you can understand that, because now if you've got mission-critical applications running on it, you don't necessarily want to go and put that at risk. So the challenge that we addressed when we actually started shipping OpenShift 4 last summer, so about a year ago, was to say, "How can we provide you basically a way to help upgrade your clusters, essentially remotely, so you can upgrade your clusters, or at least be able to consume them at different speeds?" So what we introduced with OpenShift 4 was this ability to give you over-the-air updates. The best way to think about it is with regard to a phone. You have your phone, your new OS upgrade shows up, you get a notification, you turn it on, and you say, "Hey, pull it down," or you say do it at a certain point in time, or you can go off and delay it, do it at a different point in time. That same notion now exists within OpenShift. Which is to say, we provide you three channels: there's a stable channel, where you say, "Hey look, maybe this cluster's in production, no rush here, I'll stay at or even a little behind"; there's a fast channel, for "Hey, I want to be on the latest and greatest"; and there's a third channel, which allows for features that are still being developed, or are still in an early stage of development, to be pushed out to you. So now you can start consuming these upgrades based on, "Hey, I've got a dev team, on day one I get these quicker," or "I've got these applications that are stable in production, no rush here." And then you can start managing that better yourself. So those are capabilities, if you will, that we're introducing into a Kubernetes platform, a standard Kubernetes platform, but adding additional value, to be able to have that be managed in a much better fashion, one that serves the different needs of different parts of an organization, allows them to move at different speeds, but at the same time gives you that same consistent platform regardless of where you are.

>> All right, so Ashesh, we started out the conversation talking about OpenShift anywhere and everywhere. So in the cloud, you talked about sitting on top of VMware, VM farms are very prevalent in the data centers, or bare metal. I believe, from what I saw, one of the updates for OpenShift is how Red Hat Virtualization is working with OpenShift there, and a lot of people out there are kind of staring at what VMware did with vSphere 7, so maybe you can set it up with a little bit of a compare and contrast as to how Red Hat's doing this rollout, versus what you're seeing your partner VMware doing, or how Kubernetes fits into the virtualization environment.
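The three-channel model described above maps onto a single object in an OpenShift 4 cluster, the ClusterVersion resource. The following sketch is not from the interview; it assumes the Python kubernetes client and the config.openshift.io API that OpenShift 4 exposes, and the channel name shown is illustrative (it should match the cluster's minor version). In practice the same change is usually made from the web console or the oc CLI.

```python
# Sketch: move a cluster between OpenShift 4 update channels
# (stable / fast / candidate) by patching the ClusterVersion object.
# "fast-4.4" is an illustrative channel name only.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# The cluster-scoped ClusterVersion resource is always named "version".
# Recent client versions send this dict body as a JSON merge patch.
api.patch_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="clusterversions",
    name="version",
    body={"spec": {"channel": "fast-4.4"}},
)

# Read back the current channel and any updates the cluster version
# operator has discovered over the air for that channel.
cv = api.get_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="clusterversions",
    name="version",
)
print(cv["spec"]["channel"], cv.get("status", {}).get("availableUpdates"))
```

The point of the sketch is the one made above: which updates a cluster is offered, and how quickly, is just a per-cluster setting, so a dev cluster can ride the fast channel while production stays on stable.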
>> Yeah, I feel like we're both approaching it from the different perspectives and lenses that we come at it with. So, if I can, the VMware perspective is likely, "Hey look, there are all these installations of vSphere in the marketplace, how can we make sure that we help bring containers there?" And they've come up with a solution that, you can argue, is quite complicated in the way they're achieving it. Our approach is a different one, right. We always looked at this problem from the get-go with regard to containers as a new paradigm shift. It's not necessarily a revolution, because most companies that we're looking at are working with existing applications and services, but it's an evolution in the way you're thinking about the world, and this is definitely the long-term future. And so how can we then think about introducing this application platform into the environment, and then be able to build new applications in it, but also bring existing applications into the fold? And so with this release of OpenShift, what we're introducing is something that we're calling OpenShift Virtualization. Which is: if you have existing applications, certain VMs, how can we ensure that we bring those VMs into the platform? They've been certified, there are data and security boundaries around them, or certain constraints or requirements have been put around them by your internal organization, and we can keep all of those, but then still encapsulate that VM as a container, have that run natively within an environment orchestrated by OpenShift, with Kubernetes as the primary orchestrator of those VMs, just like it is for everything else that's cloud-native or running directly as containers. We think that's extremely powerful, for us to really bring the promise of Kubernetes into a much wider market. So I talked about 1700 customers; you can argue that that 1700 is the early majority, or, if you will, almost just scratching the surface of the numbers that we believe will adopt this platform. To get, if you will, to the next set of, whatever, five, 10, 20,000 customers, we'll have to make sure we meet them where they are. And so introducing this notion of saying "we can help migrate" these VM-based applications, with a series of tools that we're providing, and then have them run within Kubernetes in a consistent fashion, is going to be extremely powerful, and we're really excited about those capabilities and about bringing them to our customers.

>> Well Ashesh, I think that puts a great exclamation point on how we go from these early days to the vast majority of environments. Ashesh, one thing, congratulations to you and the team on the growth, the momentum, all the customer stories. I'd love the opportunity to talk to many of the Red Hat customers about their digital transformation and how your cloud platforms have been a piece of it. So once again, always a pleasure to catch up with you.

>> Likewise, thanks a lot, Stuart, good chatting with you, and hope to see you in person soon sometime.

>> Absolutely. We at theCUBE of course hope to see you at events later in 2020; for the time being, we are of course fully digital, always online. Check out theCUBE.net for all of the archives as well as the events, including all the digital ones that we are doing. I'm Stu Miniman, and as always, thanks for watching theCUBE. (calm music)
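As a closing illustration of the OpenShift Virtualization capability discussed above, which builds on the upstream KubeVirt project, here is a rough sketch of a VirtualMachine object defined through the same Kubernetes API that manages containers. It is an assumption for illustration only, not something shown at the event: the API version, names, namespace, and the containerDisk image are placeholders and may differ from what a given OpenShift release ships.

```python
# Rough sketch: a KubeVirt-style VirtualMachine, the kind of object
# OpenShift Virtualization schedules alongside containers. API version,
# names, and the containerDisk image are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1alpha3",  # may be kubevirt.io/v1 on newer releases
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm", "namespace": "vm-demo"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # containerDisk: a VM disk image wrapped in a container image
                        "containerDisk": {"image": "quay.io/example/legacy-app-disk:latest"},
                    }
                ],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1alpha3",
    namespace="vm-demo",
    plural="virtualmachines",
    body=vm,
)
```

The takeaway is the one made in the interview: the VM becomes just another object the cluster schedules and manages, so existing VM-based workloads and cloud-native containers share one control plane.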