
Search Results for gke:

Jack Greenfield, Walmart | A Dive into Walmart's Retail Supercloud


 

>> Welcome back to SuperCloud2. This is Dave Vellante, and we're here with Jack Greenfield. He's the Vice President of Enterprise Architecture and the Chief Architect for the global technology platform at Walmart. Jack, I want to thank you for coming on the program. Really appreciate your time. >> Glad to be here, Dave. Thanks for inviting me and appreciate the opportunity to chat with you. >> Yeah, it's our pleasure. Now we call what you've built a SuperCloud. That's our term, not yours, but how would you describe the Walmart Cloud Native Platform? >> So WCNP, as the acronym goes, is essentially an implementation of Kubernetes for the Walmart ecosystem. And what that means is that we've taken Kubernetes off the shelf as open source, and we have integrated it with a number of foundational services that provide other aspects of our computational environment. So Kubernetes off the shelf doesn't do everything. It does a lot. In particular the orchestration of containers, but it delegates through API a lot of key functions. So for example, secret management, traffic management, there's a need for telemetry and observability at a scale beyond what you get from raw Kubernetes. That is to say, harvesting the metrics that are coming out of Kubernetes and processing them, storing them in time series databases, dashboarding them, and so on. There's also an angle to Kubernetes that gets a lot of attention in the daily DevOps routine, that's not really part of the open source deliverable itself, and that is the DevOps sort of CICD pipeline-oriented lifecycle. And that is something else that we've added and integrated nicely. And then one more piece of this picture is that within a Kubernetes cluster, there's a function that is critical to allowing services to discover each other and integrate with each other securely and with proper configuration provided by the concept of a service mesh. So Istio, Linkerd, these are examples of service mesh technologies. And we have gone ahead and integrated actually those two. There's more than those two, but we've integrated those two with Kubernetes. So the net effect is that when a developer within Walmart is going to build an application, they don't have to think about all those other capabilities where they come from or how they're provided. Those are already present, and the way the CICD pipelines are set up, it's already sort of in the picture, and there are configuration points that they can take advantage of in the primary YAML and a couple of other pieces of config that we supply where they can tune it. But at the end of the day, it offloads an awful lot of work for them, having to stand up and operate those services, fail them over properly, and make them robust. All of that's provided for. >> Yeah, you know, developers often complain they spend too much time wrangling and doing things that aren't productive. So I wonder if you could talk about the high level business goals of the initiative in terms of the hardcore benefits. Was the real impetus to tap into best of breed cloud services? Were you trying to cut costs? Maybe gain negotiating leverage with the cloud guys? Resiliency, you know, I know was a major theme. Maybe you could give us a sense of kind of the anatomy of the decision making process that went in. >> Sure, and in the course of answering your question, I think I'm going to introduce the concept of our triplet architecture which we haven't yet touched on in the interview here. 
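
To make the service mesh integration and the "primary YAML" configuration points described above a bit more concrete, here is a minimal, generic sketch of a Kubernetes Deployment that opts a workload into an Istio sidecar. The application name, image, and labels are placeholders, and this uses standard upstream Kubernetes and Istio conventions rather than Walmart's actual WCNP templates.

```python
# Illustrative only: a generic Kubernetes Deployment, rendered as YAML, with
# the standard Istio label that opts its pods into the service mesh.
import yaml  # PyYAML

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "checkout-service", "labels": {"app": "checkout-service"}},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "checkout-service"}},
        "template": {
            "metadata": {
                "labels": {
                    "app": "checkout-service",
                    # Upstream Istio convention for enabling sidecar injection per pod.
                    "sidecar.istio.io/inject": "true",
                },
            },
            "spec": {
                "containers": [{
                    "name": "checkout",  # hypothetical service name
                    "image": "registry.example.com/checkout:1.4.2",
                    "ports": [{"containerPort": 8080}],
                }],
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```
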
First off, just to sort of wrap up the motivation for WCNP itself which is kind of orthogonal to the triplet architecture. It can exist with or without it. Currently does exist with it, which is key, and I'll get to that in a moment. The key drivers, business drivers for WCNP were developer productivity by offloading the kinds of concerns that we've just discussed. Number two, improving resiliency, that is to say reducing opportunity for human error. One of the challenges you tend to run into in a large enterprise is what we call snowflakes, lots of gratuitously different workloads, projects, configurations to the extent that by developing and using WCNP and continuing to evolve it as we have, we end up with cookie cutter like consistency across our workloads which is super valuable when it comes to building tools or building services to automate operations that would otherwise be manual. When everything is pretty much done the same way, that becomes much simpler. Another key motivation for WCNP was the ability to abstract from the underlying cloud provider. And this is going to lead to a discussion of our triplet architecture. At the end of the day, when one works directly with an underlying cloud provider, one ends up taking a lot of dependencies on that particular cloud provider. Those dependencies can be valuable. For example, there are best of breed services like say Cloud Spanner offered by Google or say Cosmos DB offered by Microsoft that one wants to use and one is willing to take the dependency on the cloud provider to get that functionality because it's unique and valuable. On the other hand, one doesn't want to take dependencies on a cloud provider that don't add a lot of value. And with Kubernetes, we have the opportunity, and this is a large part of how Kubernetes was designed and why it is the way it is, we have the opportunity to sort of abstract from the underlying cloud provider for stateless workloads on compute. And so what this lets us do is build container-based applications that can run without change on different cloud provider infrastructure. So the same applications can run on WCNP over Azure, WCNP over GCP, or WCNP over the Walmart private cloud. And we have a private cloud. Our private cloud is OpenStack based and it gives us some significant cost advantages as well as control advantages. So to your point, in terms of business motivation, there's a key cost driver here, which is that we can use our own private cloud when it's advantageous and then use the public cloud provider capabilities when we need to. A key place with this comes into play is with elasticity. So while the private cloud is much more cost effective for us to run and use, it isn't as elastic as what the cloud providers offer, right? We don't have essentially unlimited scale. We have large scale, but the public cloud providers are elastic in the extreme which is a very powerful capability. So what we're able to do is burst, and we use this term bursting workloads into the public cloud from the private cloud to take advantage of the elasticity they offer and then fall back into the private cloud when the traffic load diminishes to the point where we don't need that elastic capability, elastic capacity at low cost. And this is a very important paradigm that I think is going to be very commonplace ultimately as the industry evolves. Private cloud is easier to operate and less expensive, and yet the public cloud provider capabilities are difficult to match. 
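
The bursting pattern just described can be sketched in a few lines of placement logic: fill the cheaper private cloud first, spill the overflow into public cloud capacity, and fall back as traffic subsides. This is purely illustrative, with made-up capacity and cost figures, not Walmart's actual placement engine.

```python
# A minimal sketch of burst placement: private cloud first, overflow to the
# cheapest public target. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    capacity_pods: int        # remaining schedulable pods at this location
    cost_per_pod_hour: float

def place(replicas_needed: int, private: Target, publics: list[Target]) -> dict[str, int]:
    """Fill the private cloud first, then burst the remainder to public
    targets ordered by cost."""
    plan: dict[str, int] = {}
    remaining = replicas_needed

    on_private = min(remaining, private.capacity_pods)
    plan[private.name] = on_private
    remaining -= on_private

    for target in sorted(publics, key=lambda t: t.cost_per_pod_hour):
        if remaining == 0:
            break
        burst = min(remaining, target.capacity_pods)
        plan[target.name] = burst
        remaining -= burst
    return plan

# Example: a spike that exceeds private-cloud headroom bursts into the cheaper
# of the two public targets.
print(place(
    replicas_needed=500,
    private=Target("private-cloud", capacity_pods=300, cost_per_pod_hour=0.01),
    publics=[Target("azure-east", 1000, 0.05), Target("gcp-east", 1000, 0.06)],
))
# -> {'private-cloud': 300, 'azure-east': 200}
```
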
>> And the triplet, the tri is your on-prem private cloud and the two public clouds that you mentioned, is that right? >> That is correct. And we actually have an architecture in which we operate all three of those cloud platforms in close proximity with one another in three different major regions in the US. So we have east, west, and central. And in each of those regions, we have all three cloud providers. And the way it's configured, those data centers are within 10 milliseconds of each other, meaning that it's of negligible cost to interact between them. And this allows us to be fairly agnostic to where a particular workload is running. >> Does a human make that decision, Jack or is there some intelligence in the system that determines that? >> That's a really great question, Dave. And it's a great question because we're at the cusp of that transition. So currently humans make that decision. Humans choose to deploy workloads into a particular region and a particular provider within that region. That said, we're actively developing patterns and practices that will allow us to automate the placement of the workloads for a variety of criteria. For example, if in a particular region, a particular provider is heavily overloaded and is unable to provide the level of service that's expected through our SLAs, we could choose to fail workloads over from that cloud provider to a different one within the same region. But that's manual today. We do that, but people do it. Okay, we'd like to get to where that happens automatically. In the same way, we'd like to be able to automate the failovers, both for high availability and sort of the heavier disaster recovery model between, within a region between providers and even within a provider between the availability zones that are there, but also between regions for the sort of heavier disaster recovery or maintenance driven realignment of workload placement. Today, that's all manual. So we have people moving workloads from region A to region B or data center A to data center B. It's clean because of the abstraction. The workloads don't have to know or care, but there are latency considerations that come into play, and the humans have to be cognizant of those. And automating that can help ensure that we get the best performance and the best reliability. >> But you're developing the dataset to actually, I would imagine, be able to make those decisions in an automated fashion over time anyway. Is that a fair assumption? >> It is, and that's what we're actively developing right now. So if you were to look at us today, we have these nice abstractions and APIs in place, but people run that machine, if you will, moving toward a world where that machine is fully automated. >> What exactly are you abstracting? Is it sort of the deployment model or, you know, are you able to abstract, I'm just making this up like Azure functions and GCP functions so that you can sort of run them, you know, with a consistent experience. What exactly are you abstracting and how difficult was it to achieve that objective technically? >> that's a good question. What we're abstracting is the Kubernetes node construct. That is to say a cluster of Kubernetes nodes which are typically VMs, although they can run bare metal in certain contexts, is something that typically to stand up requires knowledge of the underlying cloud provider. So for example, with GCP, you would use GKE to set up a Kubernetes cluster, and in Azure, you'd use AKS. 
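
As a rough illustration of abstracting that cluster-provisioning step, a platform layer might map one provider-agnostic spec onto provider-specific tooling. The GKE and AKS command strings below are the standard public CLIs; the private-cloud command is a made-up stand-in, since Walmart's internal tooling is not public.

```python
# A toy provider abstraction: one ClusterSpec, three provisioning backends.
from dataclasses import dataclass

@dataclass
class ClusterSpec:
    name: str
    region: str
    node_count: int

def provision_command(provider: str, spec: ClusterSpec) -> str:
    """Return the provider-specific command a platform layer might run (or
    swap for SDK calls) for a single provider-agnostic spec."""
    if provider == "gke":
        return (f"gcloud container clusters create {spec.name} "
                f"--region {spec.region} --num-nodes {spec.node_count}")
    if provider == "aks":
        return (f"az aks create --resource-group {spec.name}-rg --name {spec.name} "
                f"--location {spec.region} --node-count {spec.node_count}")
    if provider == "private":
        # Hypothetical command for an OpenStack-based private cloud.
        return f"private-cloud create-cluster {spec.name} {spec.region} {spec.node_count}"
    raise ValueError(f"unknown provider: {provider}")

spec = ClusterSpec(name="orders-east", region="us-east1", node_count=6)
for provider in ("gke", "aks", "private"):
    print(provision_command(provider, spec))
```
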
We are actually abstracting that aspect of things so that the developers standing up applications don't have to know what the underlying cluster management provider is. They don't have to know if it's GCP, AKS, or our own Walmart private cloud. Now, in terms of functions like Azure functions that you've mentioned there, we haven't done that yet. That's another piece that we have sort of on our radar screen that we'd like to get to, a serverless approach, and the Knative work from Google and the Azure functions, those are things that we see good opportunity to use for a whole variety of use cases. But right now we're not doing much with that. We're strictly container based right now, and we do have some VMs that are running in sort of more of a traditional model. So our stateful workloads are primarily VM based, but for serverless, that's an opportunity for us to take some of these stateless workloads and turn them into cloud functions. >> Well, and that's another cost lever that you can pull down the road that's going to drop right to the bottom line. Do you see a day or maybe you're doing it today, but I'd be surprised, but where you build applications that actually span multiple clouds or is there, in your view, always going to be a direct one-to-one mapping between where an application runs and the specific cloud platform? >> That's a really great question. Well, yes and no. So today, application development teams choose a cloud provider to deploy to and a location to deploy to, and they have to get involved in moving an application like we talked about today. That said, the bursting capability that I mentioned previously is something that is a step in the direction of automatic migration. That is to say we're migrating workloads to different locations automatically. Currently, the prototypes we've been developing and that we think are going to eventually make their way into production are leveraging Istio to assess the load incoming on a particular cluster and start shedding that load into a different location. Right now, the configuration of that is still manual, but there's another opportunity for automation there. And I think a key piece of this is that down the road, well, that's sort of a small step in the direction of an application being multi provider. We expect to see really an abstraction of the fact that there is a triplet even. So the workloads are moving around according to whatever the control plane decides is necessary based on a whole variety of inputs. And at that point, you will have true multi-cloud applications, applications that are distributed across the different providers and in a way that application developers don't have to think about. >> So Walmart's been a leader, Jack, in using data for competitive advantages for decades. It's kind of been a poster child for that. You've got a mountain of IP in the form of data, tools, applications, best practices that until the cloud came out was all on-prem. But I'm really interested in this idea of building a Walmart ecosystem, which obviously you have. Do you see a day or maybe you're even doing it today where you take what we call the Walmart SuperCloud, WCNP in your words, and point or turn that toward an external world or your ecosystem, you know, supporting those partners or customers that could drive new revenue streams, you know, directly from the platform? >> Great questions, Dave. So there's really two things to say here. The first is that with respect to data, our data workloads are primarily VM based.
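
As an aside on the Istio-based load shedding mentioned a moment ago: in upstream Istio, that kind of traffic shifting is typically expressed as weighted routes in a VirtualService. The host names and weights below are illustrative only, not Walmart's configuration.

```python
# Illustrative Istio VirtualService (rendered as YAML) that keeps most traffic
# local and sheds a fraction to a sister cluster in the same region.
import yaml

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "checkout-shed"},
    "spec": {
        "hosts": ["checkout.example.internal"],
        "http": [{
            "route": [
                {"destination": {"host": "checkout.local.svc.cluster.local"}, "weight": 80},
                {"destination": {"host": "checkout.remote.example.internal"}, "weight": 20},
            ],
        }],
    },
}

print(yaml.safe_dump(virtual_service, sort_keys=False))
```
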
I've mentioned before some VMware, some straight OpenStack. But the key here is that WCNP and Kubernetes are very powerful for stateless workloads, but stateful workloads tend to still be climbing a bit of a growth curve in the industry. So our data workloads are not primarily based on WCNP. They're VM based. Now that said, there is opportunity to make some progress there, and we are looking at ways to move things into containers that are currently running in VMs which are stateful. The other question you asked is related to how we expose data to third parties and also functionality. Right now we do have in-house, for our own use, a very robust data architecture, and we have followed the sort of domain-oriented data architecture guidance from Martin Fowler. And we have data lakes in which we collect data from all the transactional systems and which we can then use and do use to build models which are then used in our applications. But right now we're not exposing the data directly to customers as a product. That's an interesting direction that's been talked about and may happen at some point, but right now that's internal. What we are exposing to customers is applications. So we're offering our global integrated fulfillment capabilities, our order picking and curbside pickup capabilities, and our cloud-powered checkout capabilities to third parties. And this means we're standing up our own internal applications as externally facing SaaS applications which can serve our partners' customers. >> Yeah, of course, Martin Fowler really first introduced to the world Zhamak Dehghani's data mesh concept and this whole idea of data products and domain oriented thinking. Zhamak Dehghani, by the way, is a speaker at our event as well. Last question I had is edge, and how you think about the edge? You know, the stores are an edge. Are you putting resources there that sort of mirror this triplet model? Or is it better to consolidate things in the cloud? I know there are trade-offs in terms of latency. How are you thinking about that? >> All really good questions. It's a challenging area as you can imagine because edges are subject to disconnection, right? Or reduced connection. So we do place the same architecture at the edge. So WCNP runs at the edge, and an application that's designed to run at WCNP can run at the edge. That said, there are a number of very specific considerations that come up when running at the edge, such as the possibility of disconnection or degraded connectivity. And so one of the challenges we have faced and have grappled with, and done a good job of, I think, is dealing with the fact that applications go offline and come back online and have to reconnect and resynchronize; the sort of online/offline capability is something that can be quite challenging. And we have a couple of application architectures that sort of form the two core sets of patterns that we use. One is an offline/online synchronization architecture where we discover that we've come back online, and we understand the differences between the online dataset and the offline dataset and how they have to be reconciled. The other is a message-based architecture. And here in our health and wellness domain, we've developed applications that are queue based. So they're essentially business processes that consist of multiple steps where each step has its own queue.
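
A minimal sketch of that per-step queue pattern follows; it also anticipates the bandwidth point made next, by draining latency-sensitive steps first when the send budget is constrained. The step names and the priority rule are assumptions for illustration, not the actual health and wellness implementation.

```python
# Each step of a business process gets its own queue; under constrained
# bandwidth, latency-sensitive steps are drained first and the rest wait.
from collections import deque

class Step:
    def __init__(self, name: str, latency_sensitive: bool):
        self.name = name
        self.latency_sensitive = latency_sensitive
        self.queue = deque()

steps = [
    Step("capture-order", latency_sensitive=True),
    Step("payment-auth", latency_sensitive=True),
    Step("inventory-sync", latency_sensitive=False),
    Step("analytics-export", latency_sensitive=False),
]

def enqueue(step_name: str, message: str) -> None:
    next(s for s in steps if s.name == step_name).queue.append(message)

def drain(budget_messages: int) -> list:
    """Send up to budget_messages, latency-sensitive steps first; everything
    else stays queued until bandwidth is restored."""
    sent = []
    for step in sorted(steps, key=lambda s: not s.latency_sensitive):
        while step.queue and len(sent) < budget_messages:
            sent.append(f"{step.name}: {step.queue.popleft()}")
    return sent

enqueue("analytics-export", "hourly rollup")
enqueue("capture-order", "order #1001")
enqueue("payment-auth", "auth #1001")
print(drain(budget_messages=2))  # the two latency-sensitive messages go first
```
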
And what that allows us to do is devote whatever bandwidth we do have to those pieces of the process that are most latency sensitive and allow the queue lengths to increase in parts of the process that are not latency sensitive, knowing that they will eventually catch up when the bandwidth is restored. And to put that in a little bit of context, we have fiber lengths to all of our locations, and we have, I'll just use a round number, 10-ish thousand locations. It's larger than that, but that's the ballpark, and we have fiber to all of them, but when the fiber is disconnected, and it does get disconnected on a regular basis. In fact, I forget the exact number, but some several dozen locations get disconnected daily just by virtue of the fact that there's construction going on and things are happening in the real world. When the disconnection happens, we're able to fall back to 5G and to Starlink. Starlink is preferred. It's a higher bandwidth. 5G if that fails. But in each of those cases, the bandwidth drops significantly. And so the applications have to be intelligent about throttling back the traffic that isn't essential, so that they can push the essential traffic in those lower bandwidth scenarios. >> So much technology to support this amazing business which started in the early 1960s. Jack, unfortunately, we're out of time. I would love to have you back or some members of your team and drill into how you're using open source, but really thank you so much for explaining the approach that you've taken and participating in SuperCloud2. >> You're very welcome, Dave, and we're happy to come back and talk about other aspects of what we do. For example, we could talk more about the data lakes and the data mesh that we have in place. We could talk more about the directions we might go with serverless. So please look us up again. Happy to chat. >> I'm going to take you up on that, Jack. All right. This is Dave Vellante for John Furrier and the Cube community. Keep it right there for more action from SuperCloud2. (upbeat music)

Published Date : Feb 17 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Jack Greenfield | PERSON | 0.99+
Dave | PERSON | 0.99+
Jack | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Martin Fowler | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
Zhamak Dehghani | PERSON | 0.99+
Today | DATE | 0.99+
each | QUANTITY | 0.99+
One | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
today | DATE | 0.99+
two things | QUANTITY | 0.99+
three | QUANTITY | 0.99+
first | QUANTITY | 0.99+
each step | QUANTITY | 0.99+
First | QUANTITY | 0.99+
early 1960s | DATE | 0.99+
Starlink | ORGANIZATION | 0.99+
one | QUANTITY | 0.98+
a day | QUANTITY | 0.97+
GCP | TITLE | 0.97+
Azure | TITLE | 0.96+
WCNP | TITLE | 0.96+
10 milliseconds | QUANTITY | 0.96+
both | QUANTITY | 0.96+
Kubernetes | TITLE | 0.94+
Cloud Spanner | TITLE | 0.94+
Linkerd | ORGANIZATION | 0.93+
triplet | QUANTITY | 0.92+
three cloud providers | QUANTITY | 0.91+
Cube | ORGANIZATION | 0.9+
SuperCloud2 | ORGANIZATION | 0.89+
two core sets | QUANTITY | 0.88+
John Furrier | PERSON | 0.88+
one more piece | QUANTITY | 0.86+
two public clouds | QUANTITY | 0.86+
thousand locations | QUANTITY | 0.83+
Vice President | PERSON | 0.8+
10-ish | QUANTITY | 0.79+
WCNP | ORGANIZATION | 0.75+
decades | QUANTITY | 0.75+
three different major regions | QUANTITY | 0.74+


Madhura Maskasky & Sirish Raghuram | KubeCon + CloudNativeCon NA 2022


 

(upbeat synth intro music) >> Hey everyone and welcome to Detroit, Michigan. theCUBE is live at KubeCon CloudNativeCon North America 2022. Lisa Martin here with John Furrier. John, this event, the keynote that we got out of a little while ago was standing room only. The Solutions hall is packed. There's so much buzz. The community is continuing to mature. They're continuing to contribute. One of the big topics is Cloud Native at Scale. >> Yeah, I mean, this is a revolution happening. The developers are coming on board. They will be running companies. Developers, structurally, will be transforming companies with just, they got to get powered somewhere. And, I think, the Cloud Native at Scale speaks to getting everything under the covers, scaling up to support developers. In this next segment, we have two Kube alumni. We're going to talk about Cloud Native at Scale. Some of the things that need to be there in a unified architecture. Should be great. >> All right, it's going to be fantastic. Let's go under the covers here, as John mentioned, two alumni with us, Madhura Maskasky joins us, co-founder of Platform9. Sirish Raghuram, also co-founder of Platform9, joins us. Welcome back to theCUBE. Great to have you guys here at KubeCon on the floor in Detroit.
What does 5.6 do to push the mission forward for developers? How would you guys summarize that for people watching? What's in it for them right now? >> So it's, I think going back to what you just said, right, the breadth of applications that people are developing on top of something like Kubernetes and Cloud Native, is always growing. So, it's not just a number of clusters, but also the fact that different applications and different development groups need these clusters to be composed differently. So, a certain version of the application may require some set of build components, add-ons, and operators, and extensions. Whereas, a different application may require something entirely different. And, now, you take this in an enterprise context, right. Like, we had a major media company that worked with us. They have more than 10,000 pods being used by thousands of developers. And, you now think about the breadth of applications, the hundreds of different applications being built. How do you consistently build, and compose, and manage, a large number of Kubernetes clusters with a large variety of extensions that these companies are trying to manage? That's really what I think 5.6 is bringing to the table. >> Scott Johnston just was on here earlier as the CEO of Docker. He said there's more applications being pushed now than in the history of application development combined. There's more and more apps coming, more and more pressure on the system. >> And, that's where, if you go, there's this famous landscape chart of the CNCF ecosystem technologies. And, the problem that people here have is, how do they put it all together? How do they make sense of it? And, what 5.6 and Arlon and what Platform9 is doing is, it's helping you declaratively capture blueprints of these clusters, using templates, and be able to manage a small number of blueprints that helps you make order out of the chaos of these hundreds of different projects, that are all very interesting and powerful. >> So Project Arlon really helping developers reduce the configuration and the deployment complexities of Kubernetes at scale. >> That's exactly right.
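
To illustrate the blueprint idea in the exchange above, one can imagine a small set of named templates from which concrete clusters are stamped out with per-environment overrides. This is a conceptual sketch only, not Arlon's actual schema or API.

```python
# Conceptual sketch: a handful of blueprints capture how classes of clusters
# are composed; individual clusters are rendered from them.
BLUEPRINTS = {
    "web-frontend": {
        "kubernetes_version": "1.24",
        "addons": ["ingress-nginx", "cert-manager", "prometheus"],
        "node_pools": [{"name": "default", "count": 5, "machine": "medium"}],
    },
    "ml-batch": {
        "kubernetes_version": "1.24",
        "addons": ["gpu-operator", "prometheus"],
        "node_pools": [{"name": "gpu", "count": 2, "machine": "gpu-large"}],
    },
}

def render_cluster(name: str, blueprint: str, overrides: dict | None = None) -> dict:
    """Stamp out one concrete cluster definition from a named blueprint,
    optionally overriding a few fields for a specific environment."""
    spec = {"name": name, **BLUEPRINTS[blueprint]}
    spec.update(overrides or {})
    return spec

print(render_cluster("payments-prod", "web-frontend",
                     {"node_pools": [{"name": "default", "count": 12, "machine": "large"}]}))
```
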
So, someone needs to look at their environment and say, okay, maybe I should call platform9. What's it look like? >> So, we generally see customers fall into two ends of the barbell, I would say. One, is the advanced communities users that are running, I would say, typically, 30 or more clusters already. These are the people that already know containers. They know, they've container wise... >> Savvy teams. >> They're savvy teams, a lot of them are out here. And for them, the problem is, how do I manage the complexity at scale? Because, now, the problem is how do I scale us? So, that's one end of the barbell. The other end of the barbell, is, how do we help make Kubernetes accessible to companies that, as what I would call the mainstream enterprise. We're in Detroit in Motown, right, And, we're outside of the echo chamber of the Silicon Valley. Here's the biggest truth, right. For all the progress that we made as a community, less than 20% of applications in the enterprise today are running on Kubernetes. So, what does it take? I would say it's probably less than 10%, okay. And, what does it take, to grow that in order of magnitude? That's the other kind of customer that we really serve, is, because, we have technologies like Kube Word, which helps them take their existing applications and start adopting Kubernetes as a directional roadmap, but, while using the existing applications that they have, without refactoring it. So, I would say those are the two ends of the barbell. The early adopters that are looking for an easier way to adopt Kubernetes as an architectural pattern. And, the advanced savvy users, for whom the problem is, how do they operationally solve the complexity of managing at scale. >> And, what is your differentiation message to both of those different user groups, as you talked about in terms of the number of users of Kubernetes so far? The community groundswell is tremendous, but, there's a lot of opportunity there. You talked about some of the barriers. What's your differentiation? What do you come in saying, this is why Platform9 is the right one for you, in the both of these groups. >> And it's actually a very simple message. We are the simplest and easiest way for a new user that is adopting Kubernetes as an architectural pattern, to get started with existing applications that they have, on the infrastructure that they have. Number one. And, for the savvy teams, our technology helps you operate with greater scale, with constrained operations teams. Especially, with the economy being the way it is, people are not going to get a lot more budget to go hire a lot more people, right. So, that all of them are being asked to do more with less. And, our team, our technology, and our teams, help you do more with less. >> I was talking with Phil Estes last night from AWS. He's here, he is one of their engineer open source advocates. He's always on the ground pumping up AWS. They've had great success, Amazon Web Services, with their EKS. A lot of people adopting clusters on the cloud and on-premises. But Amazon's doing well. You guys have, I think, a relationship with AWS. What's that, If I'm an Amazon customer, how do I get involved with Platform9? What's the hook? Where's the value? What's the product look like? >> Yeah, so, and it kind of goes back towards the point we spoke about, which is, Kubernetes is going to increasingly get commoditized. 
So, customers are going to find the right home whether it's hyperscalers, EKS, AKS, GKE, or their own infrastructure, to run Kubernetes. And, so, where we want to be at, is, with a project like Arlon, Sirish spoke about the barbell strategy, on one end there is these advanced Kubernetes users, majority of them are running Kubernetes on AKS, right? Because, that was the easiest platform that they found to get started with. So, now, they have a challenge of running these 50 to 100 clusters across various regions of Amazon, across their DevTest, their staging, their production. And, that results in a level of chaos that these DevOps or platform... >> So you come in and solve that. >> That is where we come in and we solve that. And it, you know, Amazon or EKS, doesn't give you tooling to solve that, right. It makes it very easy for you to create those number of clusters. >> Well, even in one hyperscale, let's say AWS, you got regions and locations... >> Exactly >> ...that's kind of a super cloud problem, we're seeing, opportunity problem, and opportunity is that, on Amazon, availability zones is one thing, but, now, also, you got regions. >> That is absolutely right. You're on point John. And the way we solve it, is by using infrastructure as a code, by using GitOps principles, right? Where you define it once, you define it in a yaml file, you define exactly how for your DevTest environment you want your entire infrastructure to look like, including EKS. And then you stamp it out. >> So let me, here's an analogy, I'll throw out this. You guys are like, someone learns how to drive a car, Kubernetes clusters, that's got a couple clusters. Then once they know how to drive a car, you give 'em the sports car. You allow them to stay on Amazon and all of a sudden go completely distributed, Edge, Global. >> I would say that a lot of people that we meet, we feel like they're figuring out how to build a car with the kit tools that they have. And we give them a car that's ready to go and doesn't require them to be trying to... ... they can focus on driving the car, rather than trying to build the car. >> You don't want people to stop, once they get the progressions, they hit that level up on Kubernetes, you guys give them the ability to go much bigger and stronger. >> That's right. >> To accelerate that applications. >> Building a car gets old for people at a certain point in time, and they really want to focus on is driving it and enjoying it. >> And we got four right behind us, so, we'll get them involved. So that's... >> But, you're not reinventing the wheel. >> We're not at all, because, what we are building is two very, very differentiated solutions, right. One, is, we're the simplest and easiest way to build and run Cloud Native private clouds. And, this is where the operational complexity of trying to do it yourself. You really have to be a car builder, to be able to do this with our Platform9. This is what we do uniquely that nobody else does well. And, the other end is, we help you operate at scale, in the hyperscalers, right. Those are the two problems that I feel, whether you're on-prem, or in the cloud, these are the two problems people face. How do you run a private cloud more easily, more efficiently? And, how do you govern at scale, especially in the public clouds? >> I want to get to two more points before we run out of time. Arlon and Argo CD as a service. 
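
Before the Argo CD discussion continues below, here is a sketch of the "define it once, stamp it out" GitOps idea mentioned a little earlier: one declarative environment definition is rendered per region into a Git-tracked directory, where a GitOps controller would reconcile it. Paths, field names, and regions are illustrative.

```python
# Render one environment definition per region into a Git-tracked directory.
import pathlib
import yaml  # PyYAML

ENVIRONMENT = {                 # defined once
    "clusters": 3,
    "kubernetes_version": "1.24",
    "addons": ["metrics-server", "external-dns"],
}
REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]

out_dir = pathlib.Path("gitops/envs/devtest")
out_dir.mkdir(parents=True, exist_ok=True)

for region in REGIONS:          # stamped out per region
    manifest = {"region": region, **ENVIRONMENT}
    path = out_dir / f"{region}.yaml"
    path.write_text(yaml.safe_dump(manifest, sort_keys=False))
    print(f"wrote {path}")
```
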
We previously mentioned up coming into KubeCon, but, here, you guys couldn't be more relevant, 'cause Intuit was on stage on the keynote, getting an award for their work. You know, Argo, it comes from Intuit. That ArgoCon was in Mountain View. You guys were involved in that. You guys were at the center of all this super cloud action, if you will, or open source. How does Arlon fit into the Argo extension? What is Argo CD as a service? Who's going to take that one? I want to get that out there, because, Arlon has been talked about a lot. What's the update? >> I can talk about it. So, one of the things that Arlon uses behind the scenes, is it uses Argo CD, open source Argo CD as a service, as its key component to do the continuous deployment portion of its entire, the infrastructure management story, right. So, we have been very strongly partnering with Argo CD. We, really know and respect the Intuit team a lot. We, as part of this effort, in 5.6 release, we've also put out Argo CD as a service, in its GA version, right. Because, the power of running Arlon along with Argo CD as a service, in our mind, is enabling you to run on one end, your infrastructure as a scale, through GitOps, and infrastructure as a code practices. And on the other end, your entire application fleet, at scale, right. And, just marrying the two, really gives you the ability to perform that automation that we spoke about. >> But, and avoid the problem of sprawl when you have distributed teams, you have now things being bolted on, more apps coming out. So, this is really solves that problem, mainly. >> That is exactly right. And if you think of it, the way those problems are solved today, is, kind of in disconnected fashion, which is on one end you have your CI/CD tools, like Argo CD is an excellent one. There's some other choices, which are managed by a separate team to automate your application delivery. But, that team, is disconnected from the team that does the infrastructure management. And the infrastructure management is typically done through a bunch of Terraform scripts, or a bunch of ad hoc homegrown scripts, which are very difficult to manage. >> So, Arlon changes sure, as they change the complexity and also the sprawl. But, that's also how companies can die. They're growing fast, they're adding more capability. That's what trouble starts, right? >> I think in two ways, right. Like one is, as Madhura said, I think one of the common long-standing problems we've had, is, how do infrastructure and application teams communicate and work together, right. And, you've seen Argo's really get adopted by the application teams, but, it's now something that we are making accessible for the infrastructure teams to also bring the best practices of how application teams are managing applications. You can now use that to manage infrastructure, right. And, what that's going to do is, help you ultimately reduce waste, reduce inefficiency, and improve the developer experience. Because, that's what it's all about, ultimately. >> And, I know that you just released 5.6 today, congratulations on that. Any customer feedback yet? Any, any customers that you've been able to talk to, or have early access? >> Yeah, one of our large customers is a large SaaS retail company that is B2C SaaS. And, their feedback has been that this, basically, helps them bring exactly what I said in terms of bring some of the best practices that they wanted to adopt in the application space, down to the infrastructure management teams, right. 
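
For reference, this is roughly what a standard open source Argo CD Application resource looks like, the kind of object an Argo CD service reconciles from Git into a cluster. The repository URL and names are placeholders, and this reflects the upstream Argo CD API rather than Platform9's managed offering specifically.

```python
# Upstream Argo CD Application resource, rendered as YAML: the Git repo is the
# source of truth, and automated sync keeps the cluster converged to it.
import yaml

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "storefront", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://github.com/example/storefront-manifests.git",
            "targetRevision": "main",
            "path": "overlays/prod",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "storefront",
        },
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.safe_dump(application, sort_keys=False))
```
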
And, we are also hearing a lot of customers, that I would say, large scale public cloud users, saying, they're really struggling with the complexity of how to tame the complexity of navigating that landscape and making it consumable for organizations that have thousands of developers or more. And that's been the feedback, is that this is the first open source standard mechanism that allows them to kind of reuse something, as opposed to everybody feels like they've had to build ad hoc solutions to solve this problem so far. >> Having a unified infrastructure is great. My final question, for me, before I end up, for Lisa to ask her last question is, if you had to explain Platform9, why you're relevant and cool today, what would you say? >> If I take that? I would say that the reason why Platform9, the reason why we exist, is, putting together a cloud, a hybrid cloud strategy for an enterprise today, historically, has required a lot of DIY, a lot of building your own car. Before you can drive a car, or you can enjoy the car, you really learn to build and operate the car. And that's great for maybe a 100 tech companies of the world, but, for the next 10,000 or 50,000 enterprises, they want to be able to consume a car. And that's why Platform9 exists, is, we are the only company that makes this delightfully simple and easy for companies that have a hybrid cloud strategy. >> Why you cool and relevant? How would you say it? >> Yeah, I think as Kubernetes becomes mainstream, as containers have become mainstream, I think automation at scale with ease, is going to be the key. And that's exactly what we help solve. Automation at scale and with ease. >> With ease and that differentiation. Guys, thank you so much for joining me. Last question, I guess, Madhura, for you, is, where can Devs go to learn more about 5.6 and get their hands on it? >> Absolutely. Go to platform9.com. There is info about 5.6 release, there's a press release, there's a link to it right on the website. And, if they want to learn about Arlon, it's an open source GitHub project. Go to GitHub and find out more about it. >> Excellent guys, thanks again for sharing what you're doing to really deliver Cloud Native at Scale in a differentiated way that adds ostensible value to your customers. John, and I, appreciate your insights and your time. >> Thank you for having us. >> Thanks so much >> Our pleasure. For our guests and John Furrier, I'm Lisa Martin. You're watching theCUBE Live from Detroit, Michigan at KubeCon CloudNativeCon 2022. Stick around, John and I will be back with our next guest. Just a minute. (light synth outro music)

Published Date : Oct 28 2022

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Madhura Maskasky | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
John | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Lisa | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Sirish Raghuram | PERSON | 0.99+
Madhura | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Detroit | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Scott Johnston | PERSON | 0.99+
30 | QUANTITY | 0.99+
70% | QUANTITY | 0.99+
Sirish | PERSON | 0.99+
50 | QUANTITY | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Platform9 | ORGANIZATION | 0.99+
two problems | QUANTITY | 0.99+
Phil Estes | PERSON | 0.99+
100 tech companies | QUANTITY | 0.99+
less than 20% | QUANTITY | 0.99+
less than 10% | QUANTITY | 0.99+
Silicon Valley | LOCATION | 0.99+
Detroit, Michigan | LOCATION | 0.99+
First | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+
both | QUANTITY | 0.99+
Motown | LOCATION | 0.99+
first release | QUANTITY | 0.99+
more than 10,000 pods | QUANTITY | 0.99+
Docker | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
two alumni | QUANTITY | 0.99+
two ways | QUANTITY | 0.99+
Arlon | ORGANIZATION | 0.99+
5.6 | QUANTITY | 0.98+
Mountain View | LOCATION | 0.98+
One | QUANTITY | 0.98+
two more points | QUANTITY | 0.98+
one | QUANTITY | 0.98+
EKS | ORGANIZATION | 0.98+
last night | DATE | 0.98+
Cloud Native | TITLE | 0.98+
70 plus percent | QUANTITY | 0.97+
one end | QUANTITY | 0.97+
four | QUANTITY | 0.97+
90 plus percent | QUANTITY | 0.97+
DevTest | TITLE | 0.97+
Argo | ORGANIZATION | 0.97+
50,000 enterprises | QUANTITY | 0.96+
Kube | ORGANIZATION | 0.96+
two ends | QUANTITY | 0.96+
Intuit | ORGANIZATION | 0.96+
five reps | QUANTITY | 0.96+
today | DATE | 0.96+
Kubernetes | TITLE | 0.95+
GitOps | TITLE | 0.95+
Cloud Native | TITLE | 0.95+
platform9.com | OTHER | 0.95+
hundreds of different applications | QUANTITY | 0.95+

Andy Goldstein & Tushar Katarki, Red Hat | KubeCon + CloudNativeCon NA 2022


 

>>Hello everyone and welcome back to Motor City, Michigan. We're live from the Cube and my name is Savannah Peterson. Joined this afternoon with my co-host John Ferer. John, how you doing? Doing >>Great. This next segment's gonna be awesome about application modernization, scaling pluses. This is what's gonna, how are the next generation software revolution? It's gonna be >>Fun. You know, it's kind of been a theme of our day today is scale. And when we think about the complex orchestration platform that is Kubernetes, everyone wants to scale faster, quicker, more efficiently, and our guests are here to tell us all about that. Please welcome to Char and Andy, thank you so much for being here with us. You were on the Red Hat OpenShift team. Yeah. I suspect most of our audience is familiar, but just in case, let's give 'em a quick one-liner pitch so everyone's on the same page. Tell us about OpenShift. >>I, I'll take that one. OpenShift is our ES platform is our ES distribution. You can consume it as a self-managed platform or you can consume it as a managed service on on public clouds. And so we just call it all OpenShift. So it's basically Kubernetes, but you know, with a CNCF ecosystem around it to make things more easier. So maybe there's two >>Lights. So what does being at coupon mean for you? How does it feel to be here? What's your initial takes? >>Exciting. I'm having a fantastic time. I haven't been to coupon since San Diego, so it's great to be back in person and see old friends, make new friends, have hallway conversations. It's, it's great as an engineer trying to work in this ecosystem, just being able to, to be in the same place with these folks. >>And you gotta ask, before we came on camera, you're like, this is like my sixth co con. We were like, we're seven, you know, But that's a lot of co coupons. It >>Is, yes. I mean, so what, >>Yes. >>Take us status >>For sure. Where we are now. Compare and contrast co. Your first co con, just scope it out. What's the magnitude of change? If you had to put a pin on that, because there's a lot of new people coming in, they might not have seen where it's come from and how we got here is maybe not how we're gonna get to the next >>Level. I've seen it grow tremendously since the first one I went to, which I think was Austin several years ago. And what's great is seeing lots of new people interested in contributing and also seeing end users who are trying to figure out the best way to take advantage of this great ecosystem that we have. >>Awesome. And the project management side, you get the keys to the Kingdom with Red Hat OpenShift, which has been successful. Congratulations by the way. Thank you. We watched that grow and really position right on the wave. It's going great. What's the update on on the product? Kind of, you're in a good, good position right now. Yeah, >>No, we we're feeling good about it. It's all about our customers. Obviously the fact that, you know, we have thousands of customers using OpenShift as the cloud native platform, the container platform. We're very excited. The great thing about them is that, I mean you can go to like OpenShift Commons is kind of a user group that we run on the first day, like on Tuesday we ran. I mean you should see the number of just case studies that our customers went through there, you know? And it is fantastic to see that. I mean it's across so many different industries, across so many different use cases, which is very exciting. 
>>One of the things we've been reporting here in the Qla scene before, but here more important is just that if you take digital transformation to the, to its conclusion, the IT department and developers, they're not a department to serve the business. They are the business. Yes. That means that the developers are deciding things. Yeah. And running the business. Prove their code. Yeah. Okay. If that's, if that takes place, you gonna have scale. And we also said on many cubes, certainly at Red Hat Summit and other ones, the clouds are distributed computer, it's distributed computing. So you guys are focusing on this project, Andy, that you're working on kcp. >>Yes. >>Which is, I won't platform Kubernetes platform for >>Control >>Planes. Control planes. Yes. Take us through, what's the focus on why is that important and why is that relate to the mission of developers being in charge and large scale? >>Sure. So a lot of times when people are interested in developing on Kubernetes and running workloads, they need a cluster of course. And those are not cheap. It takes time, it takes money, it takes resources to get them. And so we're trying to make that faster and easier for, for end users and everybody involved. So with kcp, we've been able to take what looks like one normal Kubernetes and partition it. And so everybody gets a slice of it. You're an administrator in your little slice and you don't have to ask for permission to install new APIs and they don't conflict with anybody else's APIs. So we're really just trying to make it super fast and make it super flexible. So everybody is their own admin. >>So the developer basically looks at it as a resource blob. They can do whatever they want, but it's shared and provisioned. >>Yes. One option. It's like, it's like they have their own cluster, but you don't have to go through the process of actually provisioning a full >>Cluster. And what's the alternative? What's the what's, what's the, what's the benefit and what was the alternative to >>This? So the alternative, you spin up a full cluster, which you know, maybe that's three control plane nodes, you've got multiple workers, you've got a bunch of virtual machines or bare metal, or maybe you take, >>How much time does that take? Just ballpark. >>Anywhere from five minutes to an hour you can use cloud services. Yeah. Gke, E Ks and so on. >>Keep banging away. You're configuring. Yeah. >>Those are faster. Yeah. But it's still like, you still have to wait for that to happen and it costs money to do all of that too. >>Absolutely. And it's complex. Why do something that's been done, if there's a tool that can get you a couple steps down the path, which makes a ton of sense. Something that we think a lot when we're talking about scale. You mentioned earlier, Tohar, when we were chatting before the cams were alive, scale means a lot of different things. Can you dig in there a little bit? >>Yeah, I >>Mean, so when, when >>We talk about scale, >>We are talking about from a user perspective, we are talking about, you know, there are more users, there are more applications, there are more workloads, there are more services being run on Kubernetes now, right? So, and OpenShift. So, so that's one dimension of this scale. The other dimension of the scale is how do you manage all the underlying infrastructure, the clusters, the name spaces, and all the observability data, et cetera. So that's at least two levels of scale. 
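To make the "everybody gets a slice and can install their own APIs" point from the kcp discussion above concrete: inside an isolated workspace, a team can register a new API type with an ordinary CustomResourceDefinition, without asking a central cluster admin or colliding with another team's APIs. The CRD below is a generic Kubernetes example, not anything kcp-specific; the group and kind are invented purely for illustration.

```yaml
# Generic CRD a team might register inside its own logical slice.
# The group and kind are made up for illustration; nothing here is kcp-specific.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.team-a.example.com
spec:
  group: team-a.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

In a single shared cluster, two teams defining the same group and kind would conflict and both would need cluster-admin rights; removing that coordination overhead is the point of the partitioning described above.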
And then obviously there's a third level of scale, which is, you know, there is scale across not just different clouds, but also from cloud to the edge. So there is that dimension of scale. So there are several dimensions of this scale. And the one that again, we are focused on here really is about, you know, this, the first one that I talk about is a user. And when I say user, it could be a developer, it could be an application architect, or it could be an application owner who wants to develop Kubernetes applications for Kubernetes and wants to publish those APIs, if you will, and make it discoverable and then somebody consumes it. So that's the scale we are talking about >>Here. What are some of the enterprise, you guys have a lot of customers, we've talked to you guys before many, many times and other subjects, Red Hat, I mean you guys have all the customers. Yeah. Enterprise, they've been there, done that. And you know, they're, they're savvy. Yeah. But the cloud is a whole nother ballgame. What are they thinking about? What's the psychology of the customer right now? Because now they have a lot of choices. Okay, we get it, we're gonna re-platform refactor apps, we'll keep some legacy on premises for whatever reasons. But cloud pretty much is gonna be the game. What's the mindset right now of the customer base? Where are they in their, in their psych? Not the executive, but more of the the operators or the developers? >>Yeah, so I mean, first of all, different customers are at different levels of maturity, I would say in this. They're all on a journey how I like to describe it. And in this journey, I mean, I see a customers who are really tip of the sphere. You know, they have containerized everything. They're cloud native, you know, they use best of tools, I mean automation, you know, complete automation, you know, quick deployment of applications and all, and life cycle of applications, et cetera. So that, that's kind of one end of this spectrum >>Advanced. Then >>The advances, you know, and, and I, you know, I don't, I don't have any specific numbers here, but I'd say there are quite a few of them. And we see that. And then there is kind of the middle who are, I would say, who are familiar with containers. They know what app modernization, what a cloud application means. They might have tried a few. So they are in the journey. They are kind of, they want to get there. They have some other kind of other issues, organizational or talent and so, so on and so forth. Kinds of issues to get there. And then there are definitely the quota, what I would call the lag arts still. And there's lots of them. But I think, you know, Covid has certainly accelerated a lot of that. I hear that. And there is definitely, you know, more, the psychology is definitely more towards what I would say public cloud. But I think where we are early also in the other trend that I see is kind of okay, public cloud great, right? So people are going there, but then there is the so-called edge also. Yeah. That is for various regions. You, you gotta have a kind of a regional presence, a edge presence. And that's kind of the next kind of thing taking off here. And we can talk more >>About it. Yeah, let's talk about that a little bit because I, as you know, as we know, we're very excited about Edge here at the Cube. Yeah. What types of trends are you seeing? Is that space emerges a little bit more firmly? 
>>Yeah, so I mean it's, I mean, so we, when we talk about Edge, you're talking about, you could talk about Edge as a, as a retail, I mean locations, right? >>Could be so many things edges everywhere. Everywhere, right? It's all around us. Quite literally. Even on the >>Scale. Exactly. In space too. You could, I mean, in fact you mentioned space. I was, I was going to >>Kinda, it's this world, >>My space actually Kubernetes and OpenShift running in space, believe it or not, you know, So, so that's the edge, right? So we have Industrial Edge, we have Telco Edge, we have a 5g, then we have, you know, automotive edge now and, and, and retail edge and, and more, right? So, and space, you know, So it's very exciting there. So the reason I tag back to that question that you asked earlier is that that's where customers are. So cloud is one thing, but now they gotta also think about how do I, whatever I do in the cloud, how do I bring it to the edge? Because that's where my end users are, my customers are, and my data is, right? So that's the, >>And I think Kubernetes has brought that attention to the laggards. We had the Laed Martin on yesterday, which is an incredible real example of Kubernetes at the edge. It's just incredible story. We covered it also wrote a story about it. So compelling. Cuz it makes it real. Yes. And Kubernetes is real. So then the question is developer productivity, okay, Things are starting to settle in. We've got KCP scaling clusters, things are happening. What about the tool chains? And how do I develop now I got scale of development, more code coming in. I mean, we are speculating that in the future there's so much code in open source that no one has to write code anymore. Yeah. At some point it's like this gluing things together. So the developers need to be productive. How are we gonna scale the developer equation and eliminate the, the complexity of tool chains and environments. Web assembly is super hyped up at this show. I don't know why, but sounds good. No one, no one can tell me why, but I can kind of connect the dots. But this is a big thing. >>Yeah. And it's fitting that you ask about like no code. So we've been working with our friends at Cross Plain and have integrated with kcp the ability to no code, take a whole bunch of configuration and say, I want a database. I want to be a, a provider of databases. I'm in an IT department, there's a bunch of developers, they don't wanna have to write code to create databases. So I can just take, take my configuration and make it available to them. And through some super cool new easy to use tools that we have as a developer, you can just say, please give me a database and you don't have to write any code. I don't have to write any code to maintain that database. I'm actually using community tooling out there to get that spun up. So there's a lot of opportunities out there. So >>That's ease of use check. What about a large enterprise that's got multiple tool chains and you start having security issues. Does that disrupt the tool chain capability? Like there's all those now weird examples emerging, not weird, but like real plumbing challenges. How do you guys see that evolving with Red >>Hat and Yeah, I mean, I mean, talking about that, right? The software, secure software supply chain is a huge concern for everyone after, especially some of the things that have happened in the past few >>Years. Massive team here at the show. Yeah. 
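The "please give me a database without writing any code" workflow mentioned a little earlier in this exchange is typically expressed as a small claim manifest. The sketch below assumes the platform team has already published a composite resource type called PostgreSQLInstance through Crossplane; the API group, kind, and parameter names are illustrative assumptions, not built-in Crossplane resources.

```yaml
# Hypothetical developer-facing claim. It assumes a platform team has defined
# a PostgreSQLInstance composite type in Crossplane; the group, kind, and
# parameter names are illustrative, not something Crossplane ships by default.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: orders-db
  namespace: team-a
spec:
  parameters:
    storageGB: 20
    version: "14"
  writeConnectionSecretToRef:
    name: orders-db-conn   # credentials land here for the application to consume
```

The developer never sees the cloud provider APIs behind this; the platform team's composition decides where and how the database is actually provisioned.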
And just within the community, we're all a little more aware, I think, even than we were before. >>Before. Yeah. Yeah. And, and I think the, so to step back, I mean from, so, so it's not just even about, you know, run time vulnerability scanning, Oh, that's important, but that's not enough, right? So we are talking about, okay, how did that container, or how did that workload get there? What is that workload? What's the prominence of this workload? How did it get created? What is in it? You know, and what, what are, how do I make, make sure that there are no unsafe attack s there. And so that's the software supply chain. And where Red Hat is very heavily invested. And as you know, with re we kind of have roots in secure operating system. And rel one of the reasons why Rel, which is the foundation of everything we do at Red Hat, is because of security. So an OpenShift has always been secure out of the box with things like scc, rollbacks access control, we, which we added very early in the product. >>And now if you kind of bring that forward, you know, now we are talking about the complete software supply chain security. And this is really about right how from the moment the, the, the developer rights code and checks it into a gateway repository from there on, how do you build it? How do you secure it at each step of the process, how do you sign it? And we are investing and contributing to the community with things like cosign and six store, which is six store project. And so that secures the supply chain. And then you can use things like algo cd and then finally we can do it, deploy it onto the cluster itself. And then we have things like acs, which can do vulnerability scanning, which is a container security platform. >>I wanna thank you guys for coming on. I know Savannah's probably got a last question, but my last question is, could you guys each take a minute to answer why has Kubernetes been so successful today? What, what was the magic of Kubernetes that made it successful? Was it because no one forced it? Yes. Was it lightweight? Was it good timing, right place at the right time community? What's the main reason that Kubernetes is enabling all this, all this shift and goodness that's coming together, kind of defacto unifies people, the stacks, almost middleware markets coming around. Again, not to use that term middleware, but it feels like it's just about to explode. Yeah. Why is this so successful? I, >>I think, I mean, the shortest answer that I can give there really is, you know, as you heard the term, I think Satya Nala from Microsoft has used it. I don't know if he was the original person who pointed, but every company wants to be a software company or is a software company now. And that means that they want to develop stuff fast. They want to develop stuff at scale and develop at, in a cloud native way, right? You know, with the cloud. So that's, and, and Kubernetes came at the right time to address the cloud problem, especially across not just one public cloud or two public clouds, but across a whole bunch of public clouds and infrastructure as, and what we call the hybrid clouds. I think the ES is really exploded because of hybrid cloud, the need for hybrid cloud. >>And what's your take on the, the magic Kubernetes? What made it, what's making it so successful? >>I would agree also that it came about at the right time, but I would add that it has great extensibility and as developers we take it advantage of that every single day. 
And I think that the, the patterns that we use for developing are very consistent. And I think that consistency that came with Kubernetes, just, you have so many people who are familiar with it and so they can follow the same patterns, implement things similarly, and it's just a good fit for the way that we want to get our software out there and have, and have things operate. >>Keep it simple, stupid almost is that acronym, but the consistency and the de facto alignment Yes. Behind it just created a community. So, so then the question is, are the developers now setting the standards? That seems like that's the new way, right? I mean, >>I'd like to think so. >>So I mean hybrid, you, you're touching everything at scale and you also have mini shift as well, right? Which is taking a super macro micro shift. You ma micro shift. Oh yeah, yeah, exactly. It is a micro shift. That is, that is fantastic. There isn't a base you don't cover. You've spoken a lot about community and both of you have, and serving the community as well as your engagement with them from a, I mean, it's given that you're both leaders stepping back, how, how Community First is Red Hat and OpenShift as an organization when it comes to building the next products and, and developing. >>I'll take and, and I'm sure Andy is actually the community, so I'm sure he'll want to a lot of it. But I mean, right from the start, we have roots in open source. I'll keep it, you know, and, and, and certainly with es we were one of the original contributors to Kubernetes other than Google. So in some ways we think about as co-creators of es, they love that. And then, yeah, then we have added a lot of things in conjunction with the, I I talk about like SCC for Secure, which has become part security right now, which the community, we added things like our back and other what we thought were enterprise features needed because we actually wanted to build a product out of it and sell it to customers where our customers are enterprises. So we have worked with the community. Sometimes we have been ahead of the community and we have convinced the community. Sometimes the community has been ahead of us for other reasons. So it's been a great collaboration, which is I think the right thing to do. But Andy, as I said, >>Is the community well set too? Are well said. >>Yes, I agree with all of that. I spend most of my days thinking about how to interact with the community and engage with them. So the work that we're doing on kcp, we want it to be a community project and we want to involve as many people as we can. So it is a heavy focus for me and my team. And yeah, we we do >>It all the time. How's it going? How's the project going? You feel good >>About it? I do. It is, it started as an experiment or set of prototypes and has grown leaps and bounds from it's roots and it's, it's fantastic. Yeah. >>Controlled planes are hot data planes control planes. >>I >>Know, I love it. Making things work together horizontally scalable. Yeah. Sounds like cloud cloud native. >>Yeah. I mean, just to add to it, there are a couple of talks that on KCP at Con that our colleagues s Stephan Schemanski has, and I, I, I would urge people who have listening, if they have, just Google it, if you will, and you'll get them. And those are really awesome talks to get more about >>It. Oh yeah, no, and you can tell on GitHub that KCP really is a community project and how many people are participating. It's always fun to watch the action live to. Sure. 
Andy, thank you so much for being here with us, John. Wonderful questions this afternoon. And thank all of you for tuning in and listening to us here on the Cube Live from Detroit. I'm Savannah Peterson. Look forward to seeing you again very soon.

Published Date : Oct 27 2022

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John Ferer | PERSON | 0.99+
Stephan Schemanski | PERSON | 0.99+
Andy | PERSON | 0.99+
Char | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
John | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Andy Goldstein | PERSON | 0.99+
San Diego | LOCATION | 0.99+
five minutes | QUANTITY | 0.99+
Tushar Katarki | PERSON | 0.99+
Tuesday | DATE | 0.99+
thousands | QUANTITY | 0.99+
Satya Nala | PERSON | 0.99+
seven | QUANTITY | 0.99+
yesterday | DATE | 0.99+
two | QUANTITY | 0.99+
Edge | ORGANIZATION | 0.99+
Detroit | LOCATION | 0.99+
Motor City, Michigan | LOCATION | 0.99+
third level | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Cross Plain | ORGANIZATION | 0.99+
six store | QUANTITY | 0.99+
Cube | ORGANIZATION | 0.99+
one-liner | QUANTITY | 0.99+
One option | QUANTITY | 0.99+
Google | ORGANIZATION | 0.98+
OpenShift | TITLE | 0.98+
Covid | PERSON | 0.98+
one | QUANTITY | 0.98+
an hour | QUANTITY | 0.98+
Red Hat | ORGANIZATION | 0.98+
Telco Edge | ORGANIZATION | 0.98+
KubeCon | EVENT | 0.98+
first one | QUANTITY | 0.98+
CloudNativeCon | EVENT | 0.98+
Austin | LOCATION | 0.98+
OpenShift | ORGANIZATION | 0.97+
sixth co con. | QUANTITY | 0.97+
each step | QUANTITY | 0.97+
ES | TITLE | 0.97+
several years ago | DATE | 0.97+
today | DATE | 0.97+
Kubernetes | TITLE | 0.96+
first co con | QUANTITY | 0.96+
KCP | ORGANIZATION | 0.95+
One | QUANTITY | 0.95+
both leaders | QUANTITY | 0.94+
cosign | ORGANIZATION | 0.94+
two public clouds | QUANTITY | 0.94+
Community First | ORGANIZATION | 0.93+
one dimension | QUANTITY | 0.91+
Red Hat OpenShift | ORGANIZATION | 0.91+
first day | QUANTITY | 0.91+
Industrial Edge | ORGANIZATION | 0.9+
SCC | ORGANIZATION | 0.89+
each | QUANTITY | 0.89+
one thing | QUANTITY | 0.88+
customers | QUANTITY | 0.86+
NA 2022 | EVENT | 0.86+
GitHub | ORGANIZATION | 0.85+
single day | QUANTITY | 0.85+
a minute | QUANTITY | 0.83+
Red Hat Summit | EVENT | 0.79+
Cube Live | TITLE | 0.77+

Matt LeBlanc & Tom Leyden, Kasten by Veeam | VMware Explore 2022


 

(upbeat music) >> Hey everyone and welcome back to The Cube. We are covering VMware Explore live in San Francisco. This is our third day of wall to wall coverage. And John Furrier is here with me, Lisa Martin. We are excited to welcome two guests from Kasten by Veeam, please welcome Tom Laden, VP of marketing and Matt LeBlanc, not Joey from friends, Matt LeBlanc, the systems engineer from North America at Kasten by Veeam. Welcome guys, great to have you. >> Thank you. >> Thank you for having us. >> Tom-- >> Great, go ahead. >> Oh, I was going to say, Tom, talk to us about some of the key challenges customers are coming to you with. >> Key challenges that they have at this point is getting up to speed with Kubernetes. So everybody has it on their list. We want to do Kubernetes, but where are they going to start? Back when VMware came on the market, I was switching from Windows to Mac and I needed to run a Windows application on my Mac and someone told me, "Run a VM." Went to the internet, I downloaded it. And in a half hour I was done. That's not how it works with Kubernetes. So that's a bit of a challenge. >> I mean, Kubernetes, Lisa, remember the early days of The Cube Open Stack was kind of transitioning, Cloud was booming and then Kubernetes was the paper that became the thing that pulled everybody together. It's now de facto in my mind. So that's clear, but there's a lot of different versions of it and you hear VMware, they call it the dial tone. Usually, remember, Pat Gelter, it's a dial tone. Turns out that came from Kit Colbert or no, I think AJ kind of coined the term here, but it's since been there, it's been adopted by everyone. There's different versions. It's open source. AWS is involved. How do you guys look at the relationship with Kubernetes here and VMware Explore with Kubernetes and the customers because they have choices. They can go do it on their own. They can add a little bit with Lambda, Serverless. They can do more here. It's not easy. It's not as easy as people think it is. And then this is a skill gaps problem too. We're seeing a lot of these problems out there. What's your take? >> I'll let Matt talk to that. But what I want to say first is this is also the power of the cloud native ecosystem. The days are gone where companies were selecting one enterprise application and they were building their stack with that. Today they're building applications using dozens, if not hundreds of different components from different vendors or open source platforms. And that is really what creates opportunities for those cloud native developers. So maybe you want to... >> Yeah, we're seeing a lot of hybrid solutions out there. So it's not just choosing one vendor, AKS, EKS, or Tanzu. We're seeing all the above. I had a call this morning with a large healthcare provider and they have a hundred clusters and that's spread across AKS, EKS and GKE. So it is covering everything. Plus the need to have a on-prem solution manage it all. >> I got a stat, I got to share that I want to get your reactions and you can laugh or comment, whatever you want to say. Talk to big CSO, CXO, executive, big company, I won't say the name. We got a thousand developers, a hundred of them have heard of Kubernetes, okay. 10 have touched it and used it and one's good at it. And so his point is that there's a lot of Kubernetes need that people are getting aware. So it shows that there's more and more adoption around. You see a lot of managed services out there. 
So it's clear it's happening and I'm over exaggerating the ratio probably. But the point is the numbers kind of make sense as a thousand developers. You start to see people getting adoption to it. They're aware of the value, but being good at it is what we're hearing is one of those things. Can you guys share your reaction to that? Is that, I mean, it's hyperbole at some level, but it does point to the fact of adoption trends. You got to get good at it, you got to know how to use it. >> It's very accurate, actually. It's what we're seeing in the market. We've been doing some research of our own, and we have some interesting numbers that we're going to be sharing soon. Analysts don't have a whole lot of numbers these days. So where we're trying to run our own surveys to get a grasp of the market. One simple survey or research element that I've done myself is I used Google trends. And in Google trends, if you go back to 2004 and you compare VMware against Kubernetes, you get a very interesting graph. What you're going to see is that VMware, the adoption curve is practically complete and Kubernetes is clearly taking off. And the volume of searches for Kubernetes today is almost as big as VMware. So that's a big sign that this is starting to happen. But in this process, we have to get those companies to have all of their engineers to be up to speed on Kubernetes. And that's one of the community efforts that we're helping with. We built a website called learning.kasten.io We're going to rebrand it soon at CubeCon, so stay tuned, but we're offering hands on labs there for people to actually come learn Kubernetes with us. Because for us, the faster the adoption goes, the better for our business. >> I was just going to ask you about the learning. So there's a big focus here on educating customers to help dial down the complexity and really get them, these numbers up as John was mentioning. >> And we're really breaking it down to the very beginning. So at this point we have almost 10 labs as we call them up and they start really from install a Kubernetes Cluster and people really hands on are going to install a Kubernetes Cluster. They learn to build an application. They learn obviously to back up the application in the safest way. And then there is how to tune storage, how to implement security, and we're really building it up so that people can step by step in a hands on way learn Kubernetes. >> It's interesting, this VMware Explore, their first new name change, but VMWorld prior, big community, a lot of customers, loyal customers, but they're classic and they're foundational in enterprises and let's face it. Some of 'em aren't going to rip out VMware anytime soon because the workloads are running on it. So in Broadcom we'll have some good action to maybe increase prices or whatnot. So we'll see how that goes. But the personas here are definitely going cloud native. They did with Tanzu, was a great thing. Some stuff was coming off, the fruit's coming off the tree now, you're starting to see it. CNCF has been on this for a long, long time, CubeCon's coming up in Detroit. And so that's just always been great, 'cause you had the day zero event and you got all kinds of community activity, tons of developer action. So here they're talking, let's connect to the developer. There the developers are at CubeCon. So the personas are kind of connecting or overlapping. I'd love to get your thoughts, Matt on? 
>> So from the personnel that we're talking to, there really is a split between the traditional IT ops and a lot of the people that are here today at VMWare Explore, but we're also talking with the SREs and the dev ops folks. What really needs to happen is we need to get a little bit more experience, some more training and we need to get these two groups to really start to coordinate and work together 'cause you're basically moving from that traditional on-prem environment to a lot of these traditional workloads and the only way to get that experience is to get your hands dirty. >> Right. >> So how would you describe the persona specifically here versus say CubeCon? IT ops? >> Very, very different, well-- >> They still go ahead. Explain. >> Well, I mean, from this perspective, this is all about VMware and everything that they have to offer. So we're dealing with a lot of administrators from that regard. On the Kubernetes side, we have site reliability engineers and their goal is exactly as their title describes. They want to architect arch applications that are very resilient and reliable and it is a different way of working. >> I was on a Twitter spaces about SREs and dev ops and there was people saying their title's called dev ops. Like, no, no, you do dev ops, you don't really, you're not the dev ops person-- >> Right, right. >> But they become the dev ops person because you're the developer running operations. So it's been weird how dev ops been co-opted as a position. >> And that is really interesting. One person told me earlier when I started Kasten, we have this new persona. It's the dev ops person. That is the person that we're going after. But then talking to a few other people who were like, "They're not falling from space." It's people who used to do other jobs who now have a more dev ops approach to what they're doing. It's not a new-- >> And then the SRE conversation was in site, reliable engineer comes from Google, from one person managing multiple clusters to how that's evolved into being the dev ops. So it's been interesting and this is really the growth of scale, the 10X developer going to more of the cloud native, which is okay, you got to run ops and make the developer go faster. If you look at the stuff we've been covering on The Cube, the trends have been cloud native developers, which I call dev ops like developers. They want to go faster. They want self-service and they don't want to slow down. They don't want to deal with BS, which is go checking security code, wait for the ops team to do something. So data and security seem to be the new ops. Not so much IT ops 'cause that's now cloud. So how do you guys see that in, because Kubernetes is rationalizing this, certainly on the compute side, not so much on storage yet but it seems to be making things better in that grinding area between dev and these complicated ops areas like security data, where it's constantly changing. What do you think about that? >> Well there are still a lot of specialty folks in that area in regards to security operations. The whole idea is be able to script and automate as much as possible and not have to create a ticket to request a VM to be billed or an operating system or an application deployed. They're really empowered to automatically deploy those applications and keep them up. >> And that was the old dev ops role or person. That was what dev ops was called. So again, that is standard. I think at CubeCon, that is something that's expected. >> Yes. >> You would agree with that. >> Yeah. 
>> Okay. So now translating VM World, VMware Explore to CubeCon, what do you guys see as happening between now and then? Obviously got re:Invent right at the end in that first week of December coming. So that's going to be two major shows coming in now back to back that're going to be super interesting for this ecosystem. >> Quite frankly, if you compare the persona, maybe you have to step away from comparing the personas, but really compare the conversations that we're having. The conversations that you're having at a CubeCon are really deep dives. We will have people coming into our booth and taking 45 minutes, one hour of the time of the people who are supposed to do 10 minute demos because they're asking more and more questions 'cause they want to know every little detail, how things work. The conversations here are more like, why should I learn Kubernetes? Why should I start using Kubernetes? So it's really early day. Now, I'm not saying that in a bad way. This is really exciting 'cause when you hear CNCF say that 97% of enterprises are using Kubernetes, that's obviously that small part of their world. Those are their members. We now want to see that grow to the entire ecosystem, the larger ecosystem. >> Well, it's actually a great thing, actually. It's not a bad thing, but I will counter that by saying I am hearing the conversation here, you guys'll like this on the Veeam side, the other side of the Veeam, there's deep dives on ransomware and air gap and configuration errors on backup and recovery and it's all about Veeam on the other side. Those are the guys here talking deep dive on, making sure that they don't get screwed up on ransomware, not Kubernete, but they're going to Kub, but they're now leaning into Kubernetes. They're crossing into the new era because that's the apps'll end up writing the code for that. >> So the funny part is all of those concepts, ransomware and recovery, they're all, there are similar concepts in the world of Kubernetes and both on the Veeam side as well as the Kasten side, we are supporting a lot of those air gap solutions and providing a ransomware recovery solution and from a air gap perspective, there are a many use cases where you do need to live. It's not just the government entity, but we have customers that are cruise lines in Europe, for example, and they're disconnected. So they need to live in that disconnected world or military as well. >> Well, let's talk about the adoption of customers. I mean this is the customer side. What's accelerating their, what's the conversation with the customer at base, not just here but in the industry with Kubernetes, how would you guys categorize that? And how does that get accelerated? What's the customer situation? >> A big drive to Kubernetes is really about the automation, self-service and reliability. We're seeing the drive to and reduction of resources, being able to do more with less, right? This is ongoing the way it's always been. But I was talking to a large university in Western Canada and they're a huge Veeam customer worth 7000 VMs and three months ago, they said, "Over the next few years, we plan on moving all those workloads to Kubernetes." And the reason for it is really to reduce their workload, both from administration side, cost perspective as well as on-prem resources as well. So there's a lot of good business reasons to do that in addition to the technical reliability concerns. >> So what is those specific reasons? 
This is where now you start to see the rubber hit the road on acceleration. >> So I would say scale and flexibility that ecosystem, that opportunity to choose any application from that or any tool from that cloud native ecosystem is a big driver. I wanted to add to the adoption. Another area where I see a lot of interest is everything AI, machine learning. One example is also a customer coming from Veeam. We're seeing a lot of that and that's a great thing. It's an AI company that is doing software for automated driving. They decided that VMs alone were not going to be good enough for all of their workloads. And then for select workloads, the more scalable one where scalability was more of a topic, would move to Kubernetes. I think at this point they have like 20% of their workloads on Kubernetes and they're not planning to do away with VMs. VMs are always going to be there just like mainframes still exist. >> Yeah, oh yeah. They're accelerating actually. >> We're projecting over the next few years that we're going to go to a 50/50 and eventually lean towards more Kubernetes than VMs, but it was going to be a mix. >> Do you have a favorite customer example, Tom, that you think really articulates the value of what Kubernetes can deliver to customers where you guys are really coming in and help to demystify it? >> I would think SuperStereo is a really great example and you know the details about it. >> I love the SuperStereo story. They were a AWS customer and they're running OpenShift version three and they need to move to OpenShift version four. There is no upgrade in place. You have to migrate all your apps. Now SuperStereo is a large French IT firm. They have over 700 developers in their environment and it was by their estimation that this was going to take a few months to get that migration done. We're able to go in there and help them with the automation of that migration and Kasten was able to help them architect that migration and we did it in the course of a weekend with two people. >> A weekend? >> A weekend. >> That's a hackathon. I mean, that's not real come on. >> Compared to thousands of man hours and a few months not to mention since they were able to retire that old OpenShift cluster, the OpenShift three, they were able to stop paying Jeff Bezos for a couple of those months, which is tens of thousands of dollars per month. >> Don't tell anyone, keep that down low. You're going to get shot when you leave this place. No, seriously. This is why I think the multi-cloud hybrid is interesting because these kinds of examples are going to be more than less coming down the road. You're going to see, you're going to hear more of these stories than not hear them because what containerization now Kubernetes doing, what Dockers doing now and the role of containers not being such a land grab is allowing Kubernetes to be more versatile in its approach. So I got to ask you, you can almost apply that concept to agility, to other scenarios like spanning data across clouds. >> Yes, and that is what we're seeing. So the call I had this morning with a large insurance provider, you may have that insurance provider, healthcare provider, they're across three of the major hyperscalers clouds and they do that for reliability. Last year, AWS went down, I think three times in Q4 and to have a plan of being able to recover somewhere else, you can actually plan your, it's DR, it's a planned migration. You can do that in a few hours. >> It's interesting, just the sidebar here for a second. 
We had a couple chats earlier today. We had the influences on and all the super cloud conversations and trying to get more data to share with the audience across multiple areas. One of them was Amazon and that super, the hyper clouds like Amazon, as your Google and the rest are out there, Oracle, IBM and everyone else. There's almost a consensus that maybe there's time for some peace amongst the cloud vendors. Like, "Hey, you've already won." (Tom laughs) Everyone's won, now let's just like, we know where everyone is. Let's go peace time and everyone, then 'cause the relationship's not going to change between public cloud and the new world. So there's a consensus, like what does peace look like? I mean, first of all, the pie's getting bigger. You're seeing ecosystems forming around all the big new areas and that's good thing. That's the tides rise and the pie's getting bigger, there's bigger market out there now so people can share and share. >> I've never worked for any of these big players. So I would have to agree with you, but peace would not drive innovation. And in my heart is with tech innovation. I love it when vendors come up with new solutions that will make things better for customers and if that means that we're moving from on-prem to cloud and back to on-prem, I'm fine with that. >> What excites me is really having the flexibility of being able to choose any provider you want because you do have open standards, being cloud native in the world of Kubernetes. I've recently discovered that the Canadian federal government had mandated to their financial institutions that, "Yes, you may have started all of your on cloud presence in Azure, you need to have an option to be elsewhere." So it's not like-- >> Well, the sovereign cloud is one of those big initiatives, but also going back to Java, we heard another guest earlier, we were thinking about Java, right once ran anywhere, right? So you can't do that today in a cloud, but now with containers-- >> You can. >> Again, this is, again, this is the point that's happening. Explain. >> So when you have, Kubernetes is a strict standard and all of the applications are written to that. So whether you are deploying MongoDB or Postgres or Cassandra or any of the other cloud native apps, you can deploy them pretty much the same, whether they're in AKS, EKS or on Tanzu and it makes it much easier. The world became just a lot less for proprietary. >> So that's the story that everybody wants to hear. How does that happen in a way that is, doesn't stall the innovation and the developer growth 'cause the developers are driving a lot of change. I mean, for all the talk in the industry, the developers are doing pretty good right now. They've got a lot of open source, plentiful, open source growing like crazy. You got shifting left in the CICD pipeline. You got tools coming out with Kubernetes. Infrastructure has code is almost a 100% reality right now. So there's a lot of good things going on for developers. That's not an issue. The issue is just underneath. >> It's a skillset and that is really one of the biggest challenges I see in our deployments is a lack of experience. And it's not everyone. There are some folks that have been playing around for the last couple of years with it and they do have that experience, but there are many people that are still young at this. >> Okay, let's do, as we wrap up, let's do a lead into CubeCon, it's coming up and obviously re:Invent's right behind it. Lisa, we're going to have a lot of pre CubeCon interviews. 
We'll interview all the committee chairs, program chairs. We'll get the scoop on that, we do that every year. But while we got you guys here, let's do a little pre-pre-preview of CubeCon. What can we expect? What do you guys think is going to happen this year? What does CubeCon look? You guys our big sponsor of CubeCon. You guys do a great job there. Thanks for doing that. The community really recognizes that. But as Kubernetes comes in now for this year, you're looking at probably the what third year now that I would say Kubernetes has been on the front burner, where do you see it on the hockey stick growth? Have we kicked the curve yet? What's going to be the level of intensity for Kubernetes this year? How's that going to impact CubeCon in a way that people may or may not think it will? >> So I think first of all, CubeCon is going to be back at the level where it was before the pandemic, because the show, as many other shows, has been suffering from, I mean, virtual events are not like the in-person events. CubeCon LA was super exciting for all the vendors last year, but the attendees were not really there yet. Valencia was a huge bump already and I think Detroit, it's a very exciting city I heard. So it's going to be a blast and it's going to be a huge attendance, that's what I'm expecting. Second I can, so this is going to be my third personally, in-person CubeCon, comparing how vendors evolved between the previous two. There's going to be a lot of interesting stories from vendors, a lot of new innovation coming onto the market. And I think the conversations that we're going to be having will yet, again, be much more about live applications and people using Kubernetes in production rather than those at the first in-person CubeCon for me in LA where it was a lot about learning still, we're going to continue to help people learn 'cause it's really important for us but the exciting part about CubeCon is you're talking to people who are using Kubernetes in production and that's really cool. >> And users contributing projects too. >> Also. >> I mean Lyft is a poster child there and you've got a lot more. Of course you got the stealth recruiting going on there, Apple, all the big guys are there. They have a booth and no one's attending you like, "Oh come on." Matt, what's your take on CubeCon? Going in, what do you see? And obviously a lot of dynamic new projects. >> I'm going to see much, much deeper tech conversations. As experience increases, the more you learn, the more you realize you have to learn more. >> And the sharing's going to increase too. >> And the sharing, yeah. So I see a lot of deep conversations. It's no longer the, "Why do I need Kubernetes?" It's more, "How do I architect this for my solution or for my environment?" And yeah, I think there's a lot more depth involved and the size of CubeCon is going to be much larger than we've seen in the past. >> And to finish off what I think from the vendor's point of view, what we're going to see is a lot of applications that will be a lot more enterprise-ready because that is the part that was missing so far. It was a lot about the what's new and enabling Kubernetes. But now that adoption is going up, a lot of features for different components still need to be added to have them enterprise-ready. >> And what can the audience expect from you guys at CubeCon? Any teasers you can give us from a marketing perspective? >> Yes. We have a rebranding sitting ready for learning website. It's going to be bigger and better. 
So we're not no longer going to call it, learning.kasten.io but I'll be happy to come back with you guys and present a new name at CubeCon. >> All right. >> All right. That sounds like a deal. Guys, thank you so much for joining John and me breaking down all things Kubernetes, talking about customer adoption, the challenges, but also what you're doing to demystify it. We appreciate your insights and your time. >> Thank you so much. >> Thank you very much. >> Our pleasure. >> Thanks Matt. >> For our guests and John Furrier, I'm Lisa Martin. You've been watching The Cube's live coverage of VMware Explore 2022. Thanks for joining us. Stay safe. (gentle music)

Published Date : Sep 1 2022

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Matt LeBlanc | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Europe | LOCATION | 0.99+
John | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Pat Gelter | PERSON | 0.99+
Tom Leyden | PERSON | 0.99+
Matt | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Tom Laden | PERSON | 0.99+
Lisa | PERSON | 0.99+
Tom | PERSON | 0.99+
Veeam | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
one hour | QUANTITY | 0.99+
San Francisco | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
LA | LOCATION | 0.99+
Detroit | LOCATION | 0.99+
Joey | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
10 minute | QUANTITY | 0.99+
two people | QUANTITY | 0.99+
Last year | DATE | 0.99+
Jeff Bezos | PERSON | 0.99+
45 minutes | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
2004 | DATE | 0.99+
two guests | QUANTITY | 0.99+
Western Canada | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
7000 VMs | QUANTITY | 0.99+
Java | TITLE | 0.99+
97% | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
last year | DATE | 0.99+
third | QUANTITY | 0.99+
Kit Colbert | PERSON | 0.99+
Second | QUANTITY | 0.99+
today | DATE | 0.99+
20% | QUANTITY | 0.99+
CNCF | ORGANIZATION | 0.99+
two groups | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Tanzu | ORGANIZATION | 0.99+
Windows | TITLE | 0.99+
third day | QUANTITY | 0.99+
North America | LOCATION | 0.99+
dozens | QUANTITY | 0.99+
One | QUANTITY | 0.99+
over 700 developers | QUANTITY | 0.99+
learning.kasten.io | OTHER | 0.98+
AKS | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
Veeam | PERSON | 0.98+
VMware Explore 2022 | TITLE | 0.98+
VMWare Explore | ORGANIZATION | 0.98+
CubeCon | EVENT | 0.98+
One example | QUANTITY | 0.98+
Kubernetes | TITLE | 0.98+
three months ago | DATE | 0.98+
both | QUANTITY | 0.98+
EKS | ORGANIZATION | 0.97+
Lyft | ORGANIZATION | 0.97+
Today | DATE | 0.97+
Kasten | ORGANIZATION | 0.97+
this year | DATE | 0.97+
three times | QUANTITY | 0.97+
SuperStereo | TITLE | 0.97+
third year | QUANTITY | 0.96+

Manoj Sharma, Google Cloud | VMware Explore 2022


 

>>Welcome back everyone to theCUBE's live coverage here in San Francisco of VMware Explore 2022. I'm John Furrier with Dave Vellante, co-host of theCUBE. We're two sets, three days of wall-to-wall coverage, our 12th year covering VMware's annual conference, formerly VMworld, now VMware Explore. We're kicking off day two. Manoj Sharma, director of product management at Google Cloud, GCP. Manoj, thanks for coming on theCUBE. Good to see you. >>Yeah. Very nice to see you as well. >>It's been a while. Google Cloud Next is your event. We haven't been there because of the pandemic. Now you've got an event coming up in October. You want to give that plug? October 11th, it's going to be kind of a hybrid show. You guys at GCP are doing great. You're coming up in third place behind Amazon and Azure, but you've really nailed the developer, the AI, and the data piece in the cloud. And now with VMware, with multicloud, you guys are in the mix in the universal program that they've got here. It's been a partnership. Talk about the Google VMware relationship real quick. >>Yeah, no, I want to first address, you know, us being in third place. I think when customers think about cloud transformation, for them it's all about how you can extract value from the data, how you can transform your business with AI. And as far as that's concerned, we are in first place. Now coming to the VMware partnership, what we observed was, you know, first of all, there's a lot of data gravity built over the past 20 years in IT, and VMware has really standardized IT platforms. And when it comes to the data gravity, what we found was that customers want to extract the value that lives in that data, as I was just talking about, but they find it hard to change architectures and bring those architectures into the cloud native world, you know, with microservices and so forth. >>Especially when these applications have been built over the last 20 years with commercial off-the-shelf systems, you don't even know who wrote the code. You don't know what the IP address configuration is. And if you change anything, it can break your production. But at the same time, they want to take advantage of what the cloud has to offer: the self-service, the elasticity, the economies of scale, the efficiencies of operation. So we wanted to bring the cloud to where the customer is with this service. And, like I said, VMware was the de facto IT platform, so it was a no-brainer for us to say, you know what, we'll offer VMware in a native manner for our customers and bring all the benefits of the cloud into it to help them transform and take advantage of the cloud. >>It's interesting. And you called out the advantages of Google Cloud. One of the things that we've observed is, you know, VMware trying to be much more cloud native in their messaging and their positioning. They're trying to connect into that developer world for cloud native. I mean, Google, you guys have been cloud native literally from day one, just as a company. Yeah. Infrastructure-wise, I mean, DevOps and infrastructure as code was Google's DNA. You had Borg, which became Kubernetes.
Everyone kind of knows that in the history, if you, if you're in, in the, inside the ropes. Yeah. So as you guys have that core competency of essentially infrastructures code, which is basically cloud, how are you guys bringing that into the enterprise with the VMware, because that's where the puck is going. Right. That's where the use cases are. Okay. You got data clearly an advantage there, developers, you guys do really well with developers. We see that at say Coon and CNCF. Where's the use cases as the enterprise start to really figure out that this is now happening with hybrid and they gotta be more cloud native. Are they ramping up certain use cases? Can you share and connect the dots between what you guys had as your core competency and where the enterprise use cases are? >>Yeah. Yeah. You know, I think transformation means a lot of things, especially when you get into the cloud, you want to be not only efficient, but you also wanna make sure you're secure, right. And that you can manage and maintain your infrastructure in a way that you can reason about it. When, you know, when things go wrong, we took a very unique approach with Google cloud VMware engine. When we brought it to the cloud to Google cloud, what we did was we, we took like a cloud native approach. You know, it would seem like, you know, we are to say that, okay, VMware is cloud native, but in fact that's what we've done with this service from the ground up. One of the things we wanted to do was make sure we meet all the enterprise needs availability. We are the only service that gives four nines of SLA in a single site. >>We are the only service that has fully redundant networking so that, you know, some of the pets that you run on the VMware platform with your operational databases and the keys to the kingdom, you know, they can be run in a efficient manner and in a, in a, in a stable manner and, and, you know, in a highly available fashion, but we also paid attention to performance. One of our customers Mitel runs a unified communication service. And what they found was, you know, the high performance infrastructure, low latency infrastructure actually helps them deliver, you know, highly reliable, you know, communication experience to their customers. Right. And so, you know, we, you know, while, you know, so we developed the service from the ground up, making sure we meet the needs of these enterprise applications, but also wanted to make sure it's positioned for the future. >>Well, integrated into Google cloud VPC, networking, billing, identities, access control, you know, support all of that with a one stop shop. Right? And so this completely changes the game for, for enterprises on the outset, but what's more like we also have built in integration to cloud operations, you know, a single pane of glass for managing all your cloud infrastructure. You know, you have the ability to easily ELT into BigQuery and, you know, get a data transformation going that way from your operational databases. So, so I think we took a very like clean room ground from the ground of approach to make sure we get the best of both worlds to our customers. So >>Essentially made the VMware stack of first class citizen connecting to all the go Google tool. Did you build a bare metal instance to be able to support >>That? We, we actually have a very customized infrastructure to make sure that, you know, the experience that customers looking for in the VMware context is what we can deliver to them. 
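For readers who want to see what the "easily ELT into BigQuery" hand-off Manoj describes can look like in practice, here is a minimal Python sketch: an export from an operational database that has already landed in Cloud Storage is loaded into a raw BigQuery table and then reshaped with SQL. It assumes the google-cloud-bigquery client library and default credentials are available; the bucket, dataset, and table names are invented for illustration, and this is a sketch of the pattern, not the literal integration the service ships.

```python
# Hypothetical ELT hand-off: an operational-database export sitting in
# Cloud Storage is loaded into BigQuery, then reshaped with SQL.
# Names below (bucket, dataset, tables) are illustrative only.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

raw_table = "my_project.vmware_exports.orders_raw"        # assumed dataset/table
curated_table = "my_project.analytics.orders_by_region"   # assumed dataset/table

# 1) Load: pull the CSV export straight into a raw BigQuery table.
load_job = client.load_table_from_uri(
    "gs://example-gcve-exports/orders/*.csv",  # assumed export location
    raw_table,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,                        # infer the schema from the file
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()  # wait for the load to finish

# 2) Transform: reshape the raw rows into a curated reporting table.
transform_sql = f"""
CREATE OR REPLACE TABLE `{curated_table}` AS
SELECT region, DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM `{raw_table}`
GROUP BY region, order_date
"""
client.query(transform_sql).result()
print("ELT finished:", client.get_table(curated_table).num_rows, "rows")
```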
And, and like I said, you know, being able to manage the pets in, in addition to the cattle that, that we are, we are getting with the modern containerized workloads. >>And, and it's not likely you did that as a one off, I, I would presume that other partners can potentially take advantage of that, that approach as well. Is that >>True? Absolutely. So one of our other examples is, is SAP, you know, our SAP infrastructure runs on very similar kind of, you know, highly redundant infrastructure, some, some parts of it. And, and then, you know, we also have in the same context partners such as NetApp. So, so customers want to, you know, truly, so, so there's two parts to it, right? One is to meet customers where they already are, but also take them to the future. And partner NetApp has delivered a cloud service that is well integrated into the platform, serves use cases like VDI serves use cases for, you know, tier two data protection scenarios, Dr. And also high performance context that customers are looking for, explain >>To people because think a lot of times people understand say, oh, NetApp, but doesn't Google have storage. Yeah. So explain that relationship and why that, that is complimentary. Yeah. And not just some kind of divergence from your strategy. >>Yeah. Yeah. No. So I think the, the idea here is NetApp, the NetApp platform living on-prem, you know, for, for so many years, it's, it's built a lot of capabilities that customers take advantage of. Right. So for example, it has the sta snap mirror capabilities that enable, you know, instant Dr. Of between locations and customers. When they think of the cloud, they are also thinking of heterogeneous context where some of the infrastructure is still needs to live on prem. So, you know, they have the Dr going on from the on-prem side using snap mirror, into Google cloud. And so, you know, it enables that entry point into the cloud. And so we believe, you know, partnering with NetApp kind of enables these high performance, you know, high, you know, reliability and also enables the customers to meet regulatory needs for, you know, the Dr. And data protection that they're looking for. And, >>And NetApp, obviously a big VMware partner as well. So I can take that partnership with VMware and NetApp into the Google cloud. >>Correct. Yeah. Yeah. It's all about leverage. Like I said, you know, meeting customers where they already are and ensuring that we smoothen their journey into the future rather than making it like a single step, you know, quantum leap. So to speak between two words, you know, I think, you know, I like to say like for the, for the longest time the cloud was being presented as a false choice between, you know, the infrastructure as of, of the past and the infrastructure of the future, like the red pill and the blue pill. Right. And, you know, we've, I like to say, like, I've, you know, we've brought, brought into the, into this context, the purple pill. Right. Which gives you really the best of both tools. >>Yeah. And this is a tailwind for you guys now, and I wanna get your thoughts on this and your differentiation around multi-cloud that's around the corner. Yeah. I mean, everyone now recognizes at least multi clouds of reality. People have workloads on AWS, Azure and GCP. That is technically multi-cloud. Yeah. Now the notion of spanning applications across clouds is coming certainly hybrid cloud is a steady state, which essentially DevOps on prem or edge in the cloud. 
So, so you have, now the recognition that's here, you guys are positioned well for this. How is that evolving and how are you positioning yourself with, and how you're differentiating around as clients start thinking, Hey, you know what, I can start running things on AWS and GCP. Yeah. And OnPrem in a really kind of a distributed way. Yeah. With abstractions and these things that people are talking about super cloud, what we call it. And, and this is really the conversations. Okay. What does that next future around the corner architecture look like? And how do you guys fit in, because this is an opportunity for you guys. It's almost, it's almost, it's like Wayne Gretsky, the puck is coming to you. Yeah. Yeah. It seems that way to me. What, how do you respond to >>That? Yeah, no, I think, you know, Raghu said, yes, I did yesterday. Right. It's all about being cloud smart in this new heterogeneous world. I think Google cloud has always been the most open and the most customer oriented cloud. And the reason I say that is because, you know, looking at like our Kubernetes platform, right. What we've enabled with Kubernetes and Antho is the ability for a customer to run containerized infrastructure in the same consistent manner, no matter what the platform. So while, you know, Kubernetes runs on GKE, you can run using Anthos on the VMware platform and you can run using Anthos on any other cloud on the planet in including AWS Azure. And, and so it's, you know, we, we take a very open, we've taken an open approach with Kubernetes to begin with, but, you know, the, the fact that, you know, with Anthos and this multicloud management experience that we can provide customers, we are, we are letting customers get the full freedom of an advantage of what multicloud has to has to offer. And I like to say, you know, VMware is the ES of ISAs, right. Cause cuz if you think about it, it's the only hypervisor that you can run in the same consistent manner, take the same image and run it on any of the providers. Right. And you can, you know, link it, you know, with the L two extensions and create a fabric that spans the world and, and, and multiple >>Products with, with almost every company using VMware. >>That's pretty much that's right. It's the largest, like the VMware network of, of infrastructure is the largest network on the planet. Right. And so, so it's, it's truly about enabling customer choice. We believe that every cloud, you know, brings its advantages and, you know, at the end of their day, the technology of, you know, capabilities of the provider, the differentiation of the provider need to stand on its merit. And so, you know, we truly embrace this notion of money. Those ops guys >>Have to connect to opportunities to connect to you, you guys in yeah. In, in the cloud. >>Yeah. Absolutely >>Like to ask you a question sort of about database philosophy and maybe, maybe futures a little bit, there seems to be two camps. I mean, you've got multiple databases, you got span for, you know, kind of global distributed database. You've got big query for analytics. There seems to be a trend in the industry for some providers to say, okay, let's, let's converge the transactions and analytics and kind of maybe eliminate the need to do a lot of Elting and others are saying, no, no, we want to be, be, you know, really precise and distinct with our capabilities and, and, and have be spoke set of capability, right. Tool for the right job. Let's call it. What's Google's philosophy in that regard. 
And, and how do you think about database in the future? >>So, so I think, you know, when it comes to, you know, something as general and as complex as data, right, you know, data lives in all ships and forms, it, it moves at various velocities that moves at various scale. And so, you know, we truly believe that, you know, customers should have the flexibility and freedom to put things together using, you know, these various contexts and, and, you know, build the right set of outcomes for themselves. So, you know, we, we provide cloud SQL, right, where customers can run their own, you know, dedicated infrastructure, fully managed and operated by Google at a high level of SLA compared to any other way of doing it. We have a database born in the cloud, a data warehouse born in the cloud BigQuery, which enables zero ops, you know, zero touch, you know, instant, you know, know high performance analytics at scale, you know, span gives customers high levels of reliability and redundancy in, in, in a worldwide context. So with, with, with extreme levels of innovation coming from, you know, the, the, the NTP, you know, that happen across different instances. Right? So I, you know, I, we, we do think that, you know, data moves a different scale and, and different velocity and, and, you know, customers have a complex set of needs. And, and so our portfolio of database services put together can truly address all ends of the spectrum. >>Yeah. And we've certainly been following you guys at CNCF and the work that Google cloud's doing extremely strong technical people. Yeah. Really open source focused, great products, technology. You guys do a great job. And I, I would imagine, and it's clear that VMware is an opportunity for you guys, given the DNA of their customer base. The installed base is huge. You guys have that nice potential connection where these customers are kind of going where its puck is going. You guys are there now for the next couple minutes, give a, give a plug for Google cloud to the VMware customer base out there. Yeah. Why Google cloud, why now what's in it for them? What's the, what's the value parts? Give the, give the plug for Google cloud to the VMware community. >>Absolutely. So, so I think, you know, especially with VMware engine, what we've built, you know, is truly like a cloud native next generation enterprise platform. Right. And it does three specific things, right? It gives you a cloud optimized experience, right? Like the, the idea being, you know, self-service efficiencies, economies, you know, operational benefits, you get that from the platform and a customer like Mitel was able to take advantage of that. Being able to use the same platform that they were running in their co-located context and migrate more than a thousand VMs in less than 90 days, something that they weren't able to do for, for over two years. The second aspect of our, you know, our transformation journey that we enable with this service is cloud integration. What that means is the same VPC experience that you get in the, the, the networking global networking that Google cloud has to offer. >>The VMware platform is fully integrated into that. 
And so the benefits of, you know, having a subnet that can live anywhere in the world, you know, having multi VPC, but more importantly, the benefits of having these Google cloud services like BigQuery and span and cloud operations management at your fingertips in the same layer, three domain, you know, just make an IP call and your data is transformed into BigQuery from your operational databases and car four. The retailer in Europe actually was able to do that with our service. And not only that, you know, do do the operational transform into BigQuery, you know, from their, the data gravity living in VMware on, on VMware engine, but they were able to do it in, you know, cost effective, a manner. They, they saved, you know, over 40% compared to the, the current context and also lower the co increase the agility of operations at the same time. >>Right. And so for them, this was extremely transf transformative. And lastly, we believe in the context of being open, we are also a very partner friendly cloud. And so, you know, customers come bring VMware platform because of all the, it, you know, ecosystem that comes along with it, right. You've got your VM or your Zerto or your rubric, or your capacity for data protection and, and backup. You've got security from Forex, tha fortunate, you know, you've got, you know, like we'd already talked about NetApp storage. So we, you know, we are open in that technology context, ISVs, you know, fully supported >>Integrations key. Yeah, >>Yeah, exactly. And, and, you know, that's how you build a platform, right? Yeah. And so, so we enable that, but, but, you know, we also enable customers getting into the future, going into the future, through their AI, through the AI capabilities and services that are once again available at, at their fingertips. >>Soo, thanks for coming on. Really appreciate it. And, you know, as super clouds, we call it, our multi-cloud comes around the corner, you got the edge exploding, you guys do a great job in networking and security, which is well known. What's your view of this super cloud multi-cloud world. What's different about it? Why isn't it just sass on cloud what's, what's this next gen cloud really about it. You had to kind of kind explain that to, to business folks and technical folks out there. Is it, is it something unique? Do you see a, a refactoring? Is it something that does something different? Yeah. What, what doesn't make it just SAS. >>Yeah. Yeah. No, I think that, you know, there's, there's different use cases that customers have have in mind when they, when they think about multi-cloud. I think the first thing is they don't want to have, you know, all eggs in a single basket. Right. And, and so, you know, it, it helps diversify their risk. I mean, and it's a real problem. Like you, you see outages in, you know, in, in availability zones that take out entire businesses. So customers do wanna make sure that they're not, they're, they're able to increase their availability, increase their resiliency through the use of multiple providers, but I think so, so that's like getting the same thing in different contexts, but at the same time, the context is shifting right. There is some, there's some data sources that originate, you know, elsewhere and there, the scale and the velocity of those sources is so vast, you know, you might be producing video from retail stores and, you know, you wanna make sure, you know, this, this security and there's, you know, information awareness built about those sources. 
>>And so you want to process that data at the source and take instant decisions with that proximity. And that's why we believe in GDC, you know, with both the edge versions and the hosted versions. GDC stands for Google Distributed Cloud, where we bring the benefit and value of Google Cloud to different locations on the edge as well as on-prem. And so I think those kinds of contexts become important. And so I think not only do we need to be open and pervasive, but we also need to be compatible and also have the proximity to where information lives and value lives. >>Manoj, thanks for coming on theCUBE here at VMware Explore, formerly VMworld. Thanks for your time. >>Thank you so much. >>Okay. This is theCUBE. I'm John Furrier with Dave Vellante, live day two coverage here in the Moscone West lobby for VMware Explore. We'll be right back with more after the short break.
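To make the "same Kubernetes, any cloud" thread of this conversation a little more concrete, here is a hedged Python sketch that applies one Deployment to several clusters simply by switching kubeconfig contexts. The context names (a GKE cluster, a GCVE or on-prem cluster, and an EKS cluster) are invented, and Anthos layers its own fleet registration and config management on top of this raw Kubernetes-API mechanic; the sketch only shows the portability idea.

```python
# Minimal sketch: push the same Deployment to several clusters by switching
# kubeconfig contexts. Context names are made up; Anthos would normally
# handle fleet membership and config sync on top of this.
from kubernetes import client, config

DEPLOYMENT = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Assumed kubeconfig context names for a GKE, a GCVE/on-prem, and an EKS cluster.
CONTEXTS = ["gke-prod", "gcve-onprem", "eks-prod"]

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)   # point the client at one cluster
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default", body=DEPLOYMENT)
    print(f"applied hello-web to {ctx}")
```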

Published Date : Aug 31 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Europe | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
Raghu | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
Manoj Sharma | PERSON | 0.99+
October 11th | DATE | 0.99+
Wayne Gretsky | PERSON | 0.99+
October | DATE | 0.99+
two words | QUANTITY | 0.99+
two parts | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
less than 90 days | QUANTITY | 0.99+
BigQuery | TITLE | 0.99+
Dave | PERSON | 0.99+
12 year | QUANTITY | 0.99+
second aspect | QUANTITY | 0.99+
yesterday | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
CNCF | ORGANIZATION | 0.99+
2022 | DATE | 0.99+
20 years | QUANTITY | 0.99+
both | QUANTITY | 0.99+
more than a thousand VMs | QUANTITY | 0.99+
two sets | QUANTITY | 0.99+
both tools | QUANTITY | 0.99+
one | QUANTITY | 0.98+
over two years | QUANTITY | 0.98+
VMware | ORGANIZATION | 0.98+
One | QUANTITY | 0.98+
Coon | ORGANIZATION | 0.98+
three days | QUANTITY | 0.98+
both worlds | QUANTITY | 0.98+
first thing | QUANTITY | 0.98+
third place | QUANTITY | 0.98+
Moscone | LOCATION | 0.98+
over 40% | QUANTITY | 0.98+
first place | QUANTITY | 0.97+
Anthos | TITLE | 0.97+
GDC | ORGANIZATION | 0.96+
NetApp | TITLE | 0.96+
two camps | QUANTITY | 0.96+
VMware Explorer | ORGANIZATION | 0.95+
first address | QUANTITY | 0.95+
single step | QUANTITY | 0.95+
Kubernetes | TITLE | 0.95+
VMware | TITLE | 0.93+
single basket | QUANTITY | 0.93+
GCP | ORGANIZATION | 0.93+
tier two | QUANTITY | 0.92+
Mitel | ORGANIZATION | 0.92+
SQL | TITLE | 0.91+
single site | QUANTITY | 0.91+
OnPrem | ORGANIZATION | 0.91+
Google VMware | ORGANIZATION | 0.9+
Forex | ORGANIZATION | 0.88+
day one | QUANTITY | 0.88+
pandemic | EVENT | 0.87+
ISAs | TITLE | 0.87+
three specific things | QUANTITY | 0.86+
VMware Explorer | ORGANIZATION | 0.86+
Antho | TITLE | 0.86+

Haseeb Budhani, Rafay & Kevin Coleman, AWS | AWS Summit New York 2022


 

(gentle music) (upbeat music) (crowd chattering) >> Welcome back to The City That Never Sleeps. Lisa Martin and John Furrier in New York City for AWS Summit '22 with about 10 to 12,000 of our friends. And we've got two more friends joining us here today. We're going to be talking with Haseeb Budhani, one of our alumni, co-founder and CEO of Rafay Systems, and Kevin Coleman, senior manager for Go-to Market for EKS at AWS. Guys, thank you so much for joining us today. >> Thank you very much for having us. Excited to be here. >> Isn't it great to be back at an in-person event with 10, 12,000 people? >> Yes. There are a lot of people here. This is packed. >> A lot of energy here. So, Haseeb, we've got to start with you. Your T-shirt says it all. Don't hate k8s. (Kevin giggles) Talk to us about some of the trends, from a Kubernetes perspective, that you're seeing, and then Kevin will give your follow-up. >> Yeah. >> Yeah, absolutely. So, I think the biggest trend I'm seeing on the enterprise side is that enterprises are forming platform organizations to make Kubernetes a practice across the enterprise. So it used to be that a BU would say, "I need Kubernetes. I have some DevOps engineers, let me just do this myself." And the next one would do the same, and then next one would do the same. And that's not practical, long term, for an enterprise. And this is now becoming a consolidated effort, which is, I think it's great. It speaks to the power of Kubernetes, because it's becoming so important to the enterprise. But that also puts a pressure because what the platform team has to solve for now is they have to find this fine line between automation and governance, right? I mean, the developers, you know, they don't really care about governance. Just give me stuff, I need to compute, I'm going to go. But then the platform organization has to think about, how is this going to play for the enterprise across the board? So that combination of automation and governance is where we are finding, frankly, a lot of success in making enterprise platform team successful. I think, that's a really new thing to me. It's something that's changed in the last six months, I would say, in the industry. I don't know if, Kevin, if you agree with that or not, but that's what I'm seeing. >> Yeah, definitely agree with that. We see a ton of customers in EKS who are building these new platforms using Kubernetes. The term that we hear a lot of customers use is standardization. So they've got various ways that they're deploying applications, whether it's on-prem or in the cloud and region. And they're really trying to standardize the way they deploy applications. And Kubernetes is really that compute substrate that they're standardizing on. >> Kevin, talk about the relationship with Rafay Systems that you have and why you're here together. And two, second part of that question, why is EKS kicking ass so much? (Haseeb and Kevin laughing) All right, go ahead. First one, your relationship. Second one, EKS is doing pretty well. >> Yep, yep, yep. (Lisa laughing) So yeah, we work closely with Rafay, Rafay, excuse me. A lot of joint customer wins with Haseeb and Co, so they're doing great work with EKS customers and, yeah, love the partnership there. In terms of why EKS is doing so well, a number of reasons, I think. Number one, EKS is vanilla, upstream, open-source Kubernetes. So customers want to use that open-source technology, that open-source Kubernetes, and they come to AWS to get it in a managed offering, right? 
Kubernetes isn't the easiest thing to self-manage. And so customers, you know, back before EKS launched, they were banging down the door at AWS for us to have a managed Kubernetes offering. And, you know, we launched EKS and there's been a ton of customer adoption since then. >> You know, Lisa, when we, theCUBE 12 years, now everyone knows we started in 2010, we used to cover a show called OpenStack. >> I remember that. >> OpenStack Summit. >> What's that now? >> And at the time, at that time, Kubernetes wasn't there. So theCUBE was present at creation. We've been to every KubeCon ever, CNCF then took it over. So we've been watching it from the beginning. >> Right. And it reminds me of the same trend we saw with MapReduce and Hadoop. Very big promise, everyone loved it, but it was hard, very difficult. And Hadoop's case, big data, it ended up becoming a data lake. Now you got Spark, or Snowflake, and Databricks, and Redshift. Here, Kubernetes has not yet been taken over. But, instead, it's being abstracted away and or managed services are emerging. 'Cause general enterprises can't hire enough Kubernetes people. >> Yep. >> They're not that many out there yet. So there's the training issue. But there's been the rise of managed services. >> Yep. >> Can you guys comment on what your thoughts are relative to that trend of hard to use, abstracting away the complexity, and, specifically, the managed services? >> Yeah, absolutely. You want to go? >> Yeah, absolutely. I think, look, it's important to not kid ourselves. It is hard. (Johns laughs) But that doesn't mean it's not practical, right. When Kubernetes is done well, it's a thing of beauty. I mean, we have enough customer to scale, like, you know, it's like a, forget a hockey stick, it's a straight line up, because they just are moving so fast when they have the right platform in place. I think that the mistake that many of us make, and I've made this mistake when we started this company, was trivializing the platform aspect of Kubernetes, right. And a lot of my customers, you know, when they start, they kind of feel like, well, this is not that hard. I can bring this up and running. I just need two people. It'll be fine. And it's hard to hire, but then, I need two, then I need two more, then I need two, it's a lot, right. I think, the one thing I keep telling, like, when I talk to analysts, I say, "Look, somebody needs to write a book that says, 'Yes, it's hard, but, yes, it can be done, and here's how.'" Let's just be open about what it takes to get there, right. And, I mean, you mentioned OpenStack. I think the beauty of Kubernetes is that because it's such an open system, right, even with the managed offering, companies like Rafay can build really productive businesses on top of this Kubernetes platform because it's an open system. I think that is something that was not true with OpenStack. I've spent time with OpenStack also, I remember how it is. >> Well, Amazon had a lot to do with stalling the momentum of OpenStack, but your point about difficulty. Hadoop was always difficult to maintain and hiring against. There were no managed services and no one yet saw that value of big data yet. Here at Kubernetes, people are living a problem called, I'm scaling up. >> Yep. And so it sounds like it's a foundational challenge. The ongoing stuff sounds easier or manageable. >> Once you have the right tooling. >> Is that true? >> Yeah, no, I mean, once you have the right tooling, it's great. 
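As a concrete footnote to Kevin's point about customers coming to AWS for Kubernetes as a managed offering, here is a hedged boto3 sketch of the minimal API path: one call creates the EKS control plane and a loop polls until it is active. The IAM role ARN and subnet IDs are placeholders, and in practice most teams drive this through eksctl, Terraform, or CloudFormation rather than raw SDK calls.

```python
# Minimal sketch: stand up an EKS control plane with boto3 and wait for it.
# The IAM role ARN and subnet IDs below are placeholders.
import time
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
    },
)

# Poll until the managed control plane reports ACTIVE (typically several minutes).
while True:
    status = eks.describe_cluster(name="demo-cluster")["cluster"]["status"]
    print("cluster status:", status)
    if status == "ACTIVE":
        break
    time.sleep(30)
```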
I think, look, I mean, you and I have talked about this before, I mean, the thesis behind Rafay is that, you know, there's like 8, 12 things that need to be done right for Kubernetes to work well, right. And my whole thesis was, I don't want my customer to buy 10, 12, 15 products. I want them to buy one platform, right. And I truly believe that, in our market, similar to what vCenter, like what VMware's vCenter did for VMs, I want to do that for Kubernetes, right. And that the reason why I say that is because, see, vCenter is not about hypervisors, right? vCenter is about hypervisor, access, networking, storage, all of the things, like multitenancy, all the things that you need to run an enterprise-grade VM environment. What is that equivalent for the Kubernetes world, right? So what we are doing at Rafay is truly building a vCenter, but for Kubernetes, like a kCenter. I've tried getting the domain. I couldn't get it. (Kevin laughs) >> Well, after the Broadcom view, you don't know what's going to happen. >> Ehh. (John laughs) >> I won't go there! >> Yeah. Yeah, let's not go there today. >> Kevin, EKS, I've heard people say to me, "Love EKS. Just add serverless, that's a home run." There's been a relationship with EKS and some of the other Amazon tools. Can you comment on what you're seeing as the most popular interactions among the services at AWS? >> Yeah, and was your comment there, add serverless? >> Add serverless with AKS at the edge- >> Yeah. >> and things are kind of interesting. >> I mean, so, one of the serverless offerings we have today is actually Fargate. So you can use Fargate, which is our serverless compute offering, or one of our serverless compute offerings with EKS. And so customers love that. Effectively, they get the beauty of EKS and the Kubernetes API but they don't have to manage nodes. So that's, you know, a good amount of adoption with Fargate as well. But then, we also have other ways that they can manage their nodes. We have managed node groups as well, in addition to self-managed nodes also. So there's a variety of options that customers can use from a compute perspective with EKS. And you'll continue to see us evolve the portfolio as well. >> Can you share, Haseeb, can you share a customer example, a joint customer example that you think really articulates the value of what Rafay and AWS are doing together? >> Yeah, absolutely. In fact, we announced a customer very recently on this very show, which is MoneyGram, which is a joint AWS and Rafay customer. Look, we have enough, you know, the thing about these massive customers is that, you know, not everybody's going to give us their logo to use. >> Right. >> But MoneyGram has been a Rafay plus EKS customer for a very, very long time. You know, at this point, I think we've earned their trust, and they've allowed us to, kind of say this publicly. But there's enough of these financial services companies who have, you know, standardized on EKS. So it's EKS first, Rafay second, right. They standardized on EKS. And then they looked around and said, "Who can help me platform EKS across my enterprise?" And we've been very lucky. We have some very large financial services, some very large healthcare companies now, who, A, EKS, B, Rafay. I'm not just saying that because my friend Kevin's here, (Lisa laughs) it's actually true. Look, EKS is a brilliant platform. It scales so well, right. I mean, people try it out, relative to other platforms, and it's just a no-brainer, it just scales. 
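Kevin's Fargate point is worth pinning down with an example as well: once a cluster exists, a Fargate profile tells EKS which pods should run on serverless capacity so there are no nodes to manage. The sketch below uses boto3's EKS operations, but the profile name, pod execution role ARN, subnets, and namespace selector are all placeholders.

```python
# Minimal sketch: attach a Fargate profile to an existing EKS cluster so that
# pods in a chosen namespace run on serverless capacity (no nodes to manage).
# Role ARN, subnets, and namespace are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_fargate_profile(
    fargateProfileName="serverless-apps",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eksFargatePodRole",  # placeholder
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # private subnets (placeholders)
    selectors=[{"namespace": "apps"}],  # pods created in this namespace land on Fargate
)

print(eks.describe_fargate_profile(
    clusterName="demo-cluster",
    fargateProfileName="serverless-apps",
)["fargateProfile"]["status"])
```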
You want to build a big enterprise on the backs of a Kubernetes platform. And I'm not saying that's because I'm biased. Like EKS is really, really good. There's a reason why so many companies are choosing it over many other options in the market. >> You're doing a great job of articulating why the theme (Kevin laughs) of the New York City Summit is scale anything. >> Oh, yeah. >> There you go. >> Oh, yeah. >> I did not even know that but I'm speaking the language, right? >> You are. (John laughs) >> Yeah, absolutely. >> One of the things that we're seeing, also, I want to get your thoughts on, guys, is the app modernization trend, right? >> Yep. >> Because unlike other standards that were hard, that didn't have any benefit downstream 'cause they were too hard to get to, here, Kubernetes is feeding into real app for app developer pressure. They got to get cloud-native apps out. It's fairly new in the mainstream enterprise and a lot of hyperscalers have experience. So I'm going to ask you guys, what is the key thing that you're enabling with Kubernetes in the cloud-native apps? What is the key value? >> Yeah. >> I think, there's a bifurcation happening in the market. One is the Kubernetes Engine market, which is like EKS, AKS, GKE, right. And then there's the, you know, what, back in the day, we used to call operations and management, right. So the OAM layer for Kubernetes is where there's need, right. People are learning, right. Because, as you said before, the skill isn't there, you know, there's not enough talent available to the market. And that's the opportunity we're seeing. Because to solve for the standardization, the governance, and automation that we talked about earlier, you know, you have to solve for, okay, how do I manage my network? How do I manage my service mesh? How do I do chargebacks? What's my, you know, policy around actual Kubernetes policies? What's my blueprinting strategy? How do I do add-on management? How do I do pipelines for updates of add-ons? How do I upgrade my clusters? And we're not done yet, there's a longer list, right? This is a lot, right? >> Yeah. >> And this is what happens, right. It's just a lot. And really, the companies who understand that plethora of problems that need to be solved and build easy-to-use solutions that enterprises can consume with the right governance automation, I think they're going to be very, very successful here. >> Yeah. >> Because this is a train, right? I mean, this is happening whether, it's not us, it's happening, right? Enterprises are going to keep doing this. >> And open-source is a big driver in all of this. >> Absolutely. >> Absolutely. >> And I'll tag onto that. I mean, you talked about platform engineering earlier. Part of the point of building these platforms on top of Kubernetes is giving developers an easier way to get applications into the cloud. So building unique developer experiences that really make it easy for you, as a software developer, to take the code from your laptop, get it out of production as quickly as possible. The question is- >> So is that what you mean, does that tie your point earlier about that vertical, straight-up value once you've set up it, right? >> Yep. >> Because it's taking the burden off the developers for stopping their productivity. >> Absolutely. >> To go check in, is it configured properly? Is the supply chain software going to be there? Who's managing the services? Who's orchestrating the nodes? >> Yep. >> Is that automated, is that where you guys see the value? 
>> That's a lot of what we see, yeah. In terms of how these companies are building these platforms, is taking all the component pieces that Haseeb was talking about and really putting it into a cohesive whole. And then, you, as a software developer, you don't have to worry about configuring all of those things. You don't have to worry about security policy, governance, how your app is going to be exposed to the internet. >> It sounds like infrastructure is code. >> (laughs) Yeah. >> Come on, like. >> (laughs) Infrastructure's code is a big piece of it, for sure, for sure. >> Yeah, look, infrastructure's code actually- >> Infrastructure's sec is code too, the security. >> Yeah. >> Huge. >> Well, it all goes together. Like, we talk about developer self-service, right? The way we enable developer self-service is by teaching developers, here's a snippet of code that you write and you check it in and your infrastructure will just magically be created. >> Yep. >> But not automatically. It's going to go through a check, like a check through the platform team. These are the workflows that if you get them right, developers don't care, right. All developers want is I want to compute. But then all these 20 things need to happen in the back. That's what, if you nail it, right, I mean, I keep trying to kind of pitch the company, I don't want to do that today. But if you nail that, >> I'll give you a plug at the end. >> you have a good story. >> But I got to, I just have a tangent question 'cause you reminded me. There's two types of developers that have emerged, right. You have the software developer that wants infrastructures code. I just want to write my code, I don't want to stop. I want to build in shift-left for security, shift-right for data. All that's in there. >> Right. >> I'm coding away, I love coding. Then you've got the under-the-hood person. >> Yes. >> I've been to the engines. >> Certainly. >> So that's more of an SRE, data engineer, I'm wiring services together. >> Yeah. >> A lot of people are like, they don't know who they are yet. They're in college or they're transforming from an IT job. They're trying to figure out who they are. So question is, how do you tell a person that's watching, like, who am I? Like, should I be just coding? But I love the tech. Would you guys have any advice there? >> You know, I don't know if I have any guidance in terms of telling people who they are. (all laughing) I mean, I think about it in terms of a spectrum and this is what we hear from customers, is some customers want to shift as much responsibility onto the software teams to manage their infrastructure as well. And then some want to shift it all the way over to the very centralized model. And, you know, we see everything in between as well with our EKS customer base. But, yeah, I'm not sure if I have any direct guidance for people. >> Let's see, any wisdom? >> Aside from experiment. >> If you're coding more, you're a coder. If you like to play with the hardware, >> Yeah. >> or the gears. >> Look, I think it's really important for managers to understand that developers, yes, they have a job, you have to write code, right. But they also want to learn new things. It's only fair, right. >> Oh, yeah. >> So what we see is, developers want to learn. And we enable for them to understand Kubernetes in small pieces, like small steps, right. And that is really, really important because if we completely abstract things away, like Kubernetes, from them, it's not good for them, right. 
It's good for their careers also, right. It's good for them to learn these things. This is going to be with us for the next 15, 20 years. Everybody should learn it. But I want to learn it because I want to learn, not because this is part of my job, and that's the distinction, right. I don't want this to become my job because I want, I want to write my code. >> Do what you love. If you're more attracted to understanding how automation works, and robotics, or making things scale, you might be under-the-hood. >> Yeah. >> Yeah, look under the hood all day long. But then, in terms of, like, who keeps the lights on for the cluster, for example. >> All right, see- >> That's the job. >> He makes a lot of value. Now you know who you are. Ask these guys. (Lisa laughing) Congratulations on your success on EKS 2. >> Yeah, thank you. >> Quick, give a plug for the company. I know you guys are growing. I want to give you a minute to share to the audience a plug that's going to be, what are you guys doing? You're hiring? How many employees? Funding? Customer new wins? Take a minute to give a plug. >> Absolutely. And look, I come see, John, I think, every show you guys are doing a summit or a KubeCon, I'm here. (John laughing) And every time we come, we talk about new customers. Look, platform teams at enterprises seem to love Rafay because it helps them build that, well, Kubernetes platform that we've talked about on the show today. I think, many large enterprises on the financial service side, healthcare side, digital native side seem to have recognized that running Kubernetes at scale, or even starting with Kubernetes in the early days, getting it right with the right standards, that takes time, that takes effort. And that's where Rafay is a great partner. We provide a great SaaS offering, which you can have up and running very, very quickly. Of course, we love EKS. We work with our friends at AWS. But also works with Azure, we have enough customers in Azure. It also runs in Google. We have enough customers at Google. And it runs on-premises with OpenShift or with EKS A, right, whichever option you want to take. But in terms of that standardization and governance and automation for your developers to move fast, there's no better product in the market right now when it comes to Kubernetes platforms than Rafay. >> Kevin, while we're here, why don't you plug EKS too, come on. >> Yeah, absolutely, why not? (group laughing) So yes, of course. EKS is AWS's managed Kubernetes offering. It's the largest managed Kubernetes service in the world. We help customers who want to adopt Kubernetes and adopt it wherever they want to run Kubernetes, whether it's in region or whether it's on the edge with EKS A or running Kubernetes on Outposts and the evolving portfolio of EKS services as well. We see customers running extremely high-scale Kubernetes clusters, excuse me, and we're here to support them as well. So yeah, that's the managed Kubernetes offering. >> And I'll give the plug for theCUBE, we'll be at KubeCon in Detroit this year. (Lisa laughing) Lisa, look, we're giving a plug to everybody. Come on. >> We're plugging everybody. Well, as we get to plugs, I think, Haseeb, you have a book to write, I think, on Kubernetes. And I think you're wearing the title. >> Well, I do have a book to write, but I'm one of those people who does everything at the very end, so I will never get it right. (group laughing) So if you want to work on it with me, I have some great ideas. >> Ghostwriter. >> Sure! >> But I'm lazy. 
(Kevin chuckles) >> Ooh. >> So we got to figure something out. >> Somehow I doubt you're lazy. (group laughs) >> No entrepreneur's lazy, I know that. >> Right? >> You're being humble. >> He is. So Haseeb, Kevin, thank you so much for joining John and me today, >> Thank you. >> talking about what you guys are doing at Rafay with EKS, the power, why you shouldn't hate k8s. We appreciate your insights and your time. >> Thank you as well. >> Yeah, thank you very much for having us. >> Our pleasure. >> Thank you. >> We appreciate it. With John Furrier, I'm Lisa Martin. You're watching theCUBE live from New York City at the AWS NYC Summit. John and I will be right back with our next guest, so stick around. (upbeat music) (gentle music)
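One last sketch for this segment, of the self-service-with-guardrails workflow Haseeb describes, where a developer checks in a small snippet and the platform team's checks run before any infrastructure is created. The two rules shown (require resource limits, reject privileged containers) are invented examples of the kind of guardrail a platform team might enforce in CI; platforms such as Rafay, OPA Gatekeeper, or Kyverno express these as declarative policies rather than a script.

```python
# Minimal sketch of a platform-team guardrail check that could run in CI
# against a developer's requested Kubernetes manifest. The rules below are
# invented examples; real platforms express these as policies.
import sys
import yaml  # PyYAML

def check_deployment(manifest: dict) -> list:
    problems = []
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for c in containers:
        name = c.get("name", "<unnamed>")
        if "limits" not in c.get("resources", {}):
            problems.append(f"container {name}: missing resource limits")
        if c.get("securityContext", {}).get("privileged"):
            problems.append(f"container {name}: privileged containers are not allowed")
    return problems

if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # e.g. python check.py deployment.yaml
        docs = list(yaml.safe_load_all(f))
    failures = [p for doc in docs if doc for p in check_deployment(doc)]
    if failures:
        print("rejected by platform guardrails:")
        for p in failures:
            print(" -", p)
        sys.exit(1)
    print("manifest accepted; platform pipeline can apply it")
```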

Published Date : Jul 14 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Kevin Coleman | PERSON | 0.99+
Kevin | PERSON | 0.99+
John | PERSON | 0.99+
Rafay | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Haseeb | PERSON | 0.99+
John Furrier | PERSON | 0.99+
two | QUANTITY | 0.99+
EKS | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
New York City | LOCATION | 0.99+
Haseeb Budhani | PERSON | 0.99+
2010 | DATE | 0.99+
Rafay Systems | ORGANIZATION | 0.99+
20 things | QUANTITY | 0.99+
12 | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
two people | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
one platform | QUANTITY | 0.99+
two types | QUANTITY | 0.99+
MoneyGram | ORGANIZATION | 0.99+
15 products | QUANTITY | 0.99+
one | QUANTITY | 0.99+
OpenShift | TITLE | 0.99+
Rafay | ORGANIZATION | 0.99+
12 things | QUANTITY | 0.98+
today | DATE | 0.98+
Second one | QUANTITY | 0.98+
8 | QUANTITY | 0.98+
10, 12,000 people | QUANTITY | 0.98+
vCenter | TITLE | 0.98+
Detroit | LOCATION | 0.98+
12 years | QUANTITY | 0.98+
New York City Summit | EVENT | 0.97+
EKS A | TITLE | 0.97+
Kubernetes | TITLE | 0.97+

Breaking Analysis: Supercloud is becoming a thing


 

>> From The Cube studios in Palo Alto, in Boston, bringing you data driven insights from the cube and ETR. This is breaking analysis with Dave Vellante. >> Last year, we noted in a breaking analysis that the cloud ecosystem is innovating beyond the idea or notion of multi-cloud. We've said for years that multi-cloud is really not a strategy but rather a symptom of multi-vendor. And we coined this term supercloud to describe an abstraction layer that lives above the hyperscale infrastructure that hides the underlying complexities, the APIs, and the primitives of each of the respective clouds. It interconnects whether it's On-Prem, AWS, Azure, Google, stretching out to the edge and creates a value layer on top of that. So our vision is that supercloud is more than running an individual service in cloud native mode within an individual individual cloud rather it's this new layer that builds on top of the hyperscalers. And does things irrespective of location adds value and we'll get into that in more detail. Now it turns out that we weren't the only ones thinking about this, not surprisingly, the majority of the technology ecosystem has been working towards this vision in various forms, including some examples that actually don't try to hide the underlying primitives. And we'll talk about that, but give a consistent experience across the DevSecOps tool chain. Hello, and welcome to this week's Wikibon, Cube insights powered by ETR. In this breaking analysis, we're going to share some recent examples and direct quotes about supercloud from the many Cube guests that we've had on over the last several weeks and months. And we've been trying to test this concept of supercloud. Is it technically feasible? Is it business rational? Is there business case for it? And we'll also share some recent ETR data to put this into context with some of the players that we think are going after this opportunity and where they are in their supercloud build out. And as you can see I'm not in the studio, everybody's got COVID so the studios shut down temporarily but breaking analysis continues. So here we go. Now, first thing is we uncovered an article from earlier this year by Lori MacVittie, is entitled, Supercloud: The 22 Answer to Multi-Cloud Challenges. What a great title. Of course we love it. Now, what really interested us here is not just the title, but the notion that it really doesn't matter what it's called, who cares? Supercloud, distributed cloud, someone even called it Metacloud recently, and we'll get into that. But Lori is a technologist. She's a developer by background. She works at F-Five and she's partial to the supercloud definition that was put forth by Cornell. You can see it here. That's a cloud architecture that enables application migration as a service across different availability zones or cloud providers, et cetera. And that the supercloud provides interfaces to allocate, migrate and terminate resources... And can span all major public cloud providers as well as private clouds. Now, of course, we would take that as well to the edge. So sure. That sounds about right and provides further confirmation that something new is really happening out there. And that was our initial premise when we put this fourth last year. Now we want to dig deeper and hear from the many Cube guests that we've interviewed recently probing about this topic. We're going to start with Chuck Whitten. He's Dell's new Co-COO and most likely part of the Dell succession plan, many years down the road hopefully. 
He coined the phrase multi-cloud by default versus multi-cloud by design. And he provides a really good business perspective. He's not a deep technologist. We're going to hear from Chuck a couple of times today including one where John Furrier asks him about leveraging hyperscale CapEx. That's an important concept that's fundamental to supercloud. Now, Ashesh Badani heads products at Red Hat and he talks about what he calls Metacloud. Again, it doesn't matter to us what you call it but it's the ecosystem gathering and innovating and we're going to get his perspective. Now we have a couple of clips from Danny Allan. He is the CTO of Veeam. He's a deep technologist and super into the weeds, which we love. And he talks about how Veeam abstracts the cloud layer. Again, a concept that's fundamental to supercloud and he describes what a supercloud is to him. And we also bring with Danny the edge discussion to the conversation. Now the bottom line from Danny is we want to know is supercloud technically feasible? And is it a thing? And then we have Jeff Clarke. Jeff Clark is the Co-COO and Vice Chairman of Dell super experienced individual. He lays out his vision of supercloud and what John Furrier calls a business operating system. You're going to hear from John a couple times. And he, Jeff Clark has a dropped the mic moment, where he says, if we can do this X, we'll describe what X is, it's game over. Okay. So of course we wanted to then go to HPE, one of Dell's biggest competitors and Patrick Osborne is the vice president of the storage business unit at Hewlett Packet Enterprise. And so given Jeff Clarke's game over strategy, we want to understand how HPE sees supercloud. And the bottom line, according to Patrick Osborne is that it's real. So you'll hear from him. And now Raghu Raghuram is the CEO of VMware. He threw a curve ball at this supercloud concept. And he flat out says, no, we don't want to hide the underlying primitives. We want to give developers access to those. We want to create a consistent developer experience in that DevsSecOps tool chain and Kubernetes runtime environments, and connect all the elements in the application development stack. So that's a really interesting perspective that Raghu brings. And then we end on Itzik Reich. Itzik is a technologist and a technical team leader who's worked as a go between customers and product developers for a number of years. And we asked Itzik, is supercloud technically feasible and will it be a reality? So let's hear from these experts and you can decide for yourselves how real supercloud is today and where it is, run the sizzle >> Operative phrase is multi-cloud by default that's kind of the buzz from your keynote. What do you mean by that? >> Well, look, customers have woken up with multiple clouds, multiple public clouds, On-Premise clouds increasingly as the edge becomes much more a reality for customers clouds at the edge. And so that's what we mean by multi-cloud by default. It's not yet been designed strategically. I think our argument yesterday was, it can be and it should be. It is a very logical place for architecture to land because ultimately customers want the innovation across all of the hyperscale public clouds. They will see workloads and use cases where they want to maintain an On-Premise cloud, On-Premise clouds are not going away, I mentioned edge clouds, so it should be strategic. It's just not today. It doesn't work particularly well today. 
So when we say multi-cloud by default we mean that's the state of the world today. Our goal is to bring multi-cloud by design as you heard. >> Really great question, actually, since you and I talked, Dave, I've been spending some time noodling just over that. And you're right. There's probably some terminology, something that will get developed either by us or in collaboration with the industry. Where we sort of almost have the next, almost like a Metacloud, that we're working our way towards. >> So we manage both the snapshots and we convert it into the Veeam portable data format. And here's where the supercloud comes into play. Because if I can convert it into the Veeam portable data format, I can move that OS anywhere. I can move it from physical to virtual, to cloud, to another cloud, back to virtual, I can put it back on physical if I want to. It actually abstracts the cloud layer. There are things that we do when we go between clouds, some use BIOS, some use UEFI, but we have the data in backup format, not snapshot format, that's theirs, but we have it in backup format that we can move around and abstract workloads across all of the infrastructure. >> And your catalog is in control of that. Is that right? Am I thinking about that the right way? >> Yeah it is, 100%. And you know what's interesting about our catalog, Dave, the catalog is inside the backup. Yes. So here's what's interesting about the edge, two things: on the edge you don't want to have any state, if you can help it. And so containers help with that. You can have stateless environments, some persistent data storage. But we not only provide the portability in operating systems, we also do this for containers. And that's true. If you go to the cloud and you're using say EKS with relational database services, RDS, for the persistent data layer, we can pick that up and move it to GKE or move it to OpenShift On-Premises. And so that's why I call this the supercloud, we have all of this data. Actually, I think you termed the term supercloud. >> Yeah. But thank you for... I mean, I'm looking for a confirmation from a technologist that it's technically feasible. >> It is technically feasible and you can do it today. >> You said also technology and business models are tied together and an enabler. If you believe that then you have to believe that it's a business operating system that they want. They want to leverage whatever they can. And at the end of the day, they have to differentiate what they do. >> Well, that's exactly right. If I take that and what Dave was saying and I summarize it the following way, if we can take these cloud assets and capabilities, combine them in an orchestrated way to deliver a distributed platform, game over. >> We have a number of platforms that are providing, whether it's compute or networking or storage, running those workloads that they plumb up into the cloud; they have an operational experience in the cloud, and now they have data services that are running in the cloud for us in GreenLake. So it's a reality, we have a number of platforms that support that. We're going to have a set of big announcements coming up at HPE Discover. So we led with Alletra and we have a block service. We have VM backup as a service and DR on top of that. So that's something that we're providing today. GreenLake has over, I think it's actually over 60 services right now that we're providing in the GreenLake platform itself. Everything from security, single sign on, customer IDs, everything. So it's real. 
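The compute half of the EKS-to-GKE move Danny Allan describes above amounts to re-applying the same workload definition against a different cluster, while the data travels separately in the portable backup format. The sketch below is not Veeam's implementation — it is only a minimal illustration of that portability idea using the standard Kubernetes Python client, and the context names, namespace, and deployment name are assumptions.

```python
from kubernetes import client, config

def clone_deployment(name, namespace, src_context, dst_context):
    """Read a Deployment from one cluster and re-create it on another.

    Conceptual illustration only: the data layer (the piece Veeam handles in
    its portable backup format) would be restored separately into the target
    cloud's storage or database service.
    """
    # Client for the source cluster, e.g. an EKS context in your kubeconfig.
    src = client.AppsV1Api(config.new_client_from_config(context=src_context))
    dep = src.read_namespaced_deployment(name, namespace)

    # Strip server-populated fields so the object can be re-created cleanly.
    dep.metadata.resource_version = None
    dep.metadata.uid = None
    dep.metadata.creation_timestamp = None
    dep.status = None

    # Apply the same spec to the destination cluster, e.g. GKE or OpenShift.
    dst = client.AppsV1Api(config.new_client_from_config(context=dst_context))
    dst.create_namespaced_deployment(namespace=namespace, body=dep)

# Hypothetical kubeconfig context names -- substitute your own.
clone_deployment("orders-api", "prod", src_context="eks-prod", dst_context="gke-dr")
```

The point Danny emphasizes is that it is the backup format, not the cloud-specific snapshot format, that lets the data half of the move travel the same way.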
We have the proof point for it. >> Yeah. So I want to clarify something that you said because this tends to be very commonly confused by customers. I use the word abstraction. And usually when people think of abstraction, they think it hides capabilities of the cloud providers. That's not what we are trying to do. In fact, that's the last thing we are trying to do. What we are trying to do is to provide a consistent developer experience regardless of where you want to build your application. So that you can use the cloud provider services if that's what you want to use. But the DevSecOps tool chain, the runtime environment, which turns out to be Kubernetes, and how you control the Kubernetes environment, how you manage and secure and connect all of these things. Those are the places where we are adding the value. And so really the VMware value proposition is you can build on the cloud of your choice but providing these consistent elements, number one, you can make better use of your scarce developer or operator resources and expertise. And number two, you can move faster. And number three, you can just spend less as a result of this. So that's really what we are trying to do. We are not... So I just wanted to clarify the word abstraction. In terms of where are we? We are still, I would say, in the early stages. So if you look at what customers are trying to do, they're trying to build these greenfield applications. And there is an entire ecosystem emerging around Kubernetes. There is still, Kubernetes is not a developer platform. The developer experience on top of Kubernetes is highly inconsistent. And so those are some of the areas where we are introducing new innovations with our Tanzu Application Platform. And then if you take enterprise applications, what does it take to have enterprise applications running all the time, be entirely secure, et cetera. >> Well, look, the multi-clouds by default today are isolated clouds. They don't work together. Your data is siloed. It's locked up and it is expensive to move and make sense of it. So I think the word you and I were batting around before, this is an interconnected tissue. That's what the world needs. They need the clouds to work together as a single platform. That's the problem that we're trying to solve. And you saw it in some of our announcements here that we're starting to make steps on that journey to make multi-cloud work together much simpler. >> It's interesting, you mentioned the hyperscalers and all that CapEx investment. Why wouldn't you want to take advantage of a cloud and build on the CapEx and then ultimately have the solutions, machine learning as one area. You see some specialization with the clouds. But you start to see the rise of superclouds, Dave calls them, and that's where you can innovate on a cloud then go to the multiple clouds. Snowflake is one, we see a lot of examples of supercloud... >> Project Alpine was another one. I mean, it's early, but it's clearly where you're going. The technology is just starting to come around. I mean it's real. >> Yeah. I mean, why wouldn't you want to take advantage of all of the cloud innovation out there? >> Is that something that's, that supercloud idea, a reality from a technologist perspective? >> I think it is. So for example Katie Gordon, whom I believe you interviewed earlier this week, was demonstrating the Kubernetes data mobility aspect which is another project. 
That's exactly part of its rationale, the rationale of customers being able to move some of their Kubernetes workloads to the cloud and back and between different clouds. Why are we doing this? Because customers want to have the ability to move between different cloud providers, using a common API that will be able to orchestrate all of those things with a self-service that may be offered via the APEX console itself. So it's all around enabling developers and meeting them where they are today and also meeting them in tomorrow's world where they actually may have changed their mind to do those things. So yes, we are working on all of those different aspects. >> Okay. Let's take a quick look at some of the ETR data. This is an X-Y graph. You've seen it a number of times on breaking analysis, it plots the net score or spending momentum on the Y-axis and overlap or pervasiveness in the ETR dataset on the X-axis, used to be called market share. I think that term was off-putting to some people, but anyway it's an indicator of presence in the dataset. Now, that red dotted line is rarefied air, where anything above that line is considered highly elevated. Now you can see we've plotted Azure and AWS in the upper right. GCP is in there and Kubernetes. We've done that as reference points. They're not necessarily building supercloud platforms. We'll see if they ever want to do so. And Kubernetes of course is not a company, but we put 'em in there for context. And we've cherry picked a few players that we believe are building out or are important for supercloud build out. Let's start with Snowflake. We've talked a lot about this company. You can see they're highly elevated on the vertical axis. We see the data cloud as a supercloud in the making. You've got Pure Storage in there. Pure made public the early part of its supercloud journey at Accelerate 2019, when it unveiled a hybrid block storage service inside of AWS; it connects its On-Prem to AWS and creates that singular experience for Pure customers. We see Hashi, HashiCorp, as enabling infrastructure as code. So they're enabling infrastructure as code across different clouds and different locations. You see Nutanix. They're embarking on their multi-cloud strategy but doing so in a way that we think is supercloud-like now. Now Veeam, we were just at VeeamON. And this company has tied Dell for the number one revenue player in data protection. That's according to IDC. And we think it won't be long before it holds that position alone at the top, as it's growing faster than Dell in the space. We'll see, Dell is kind of waking up a little bit and putting more resource on that. But Veeam, they're a pure play vendor in data protection. And you heard their CTO, Danny Allan's view on supercloud, they're doing it today. And we heard extensive comments as well from Dell; that's clearly where they're headed. Project Alpine was an early example from Dell Technologies World of supercloud in our view. And HPE with GreenLake. Finally beginning to talk about that cross cloud experience. I think initially HPE has been more focused on the private cloud; we'll continue to probe. We'll be at HPE Discover later in the spring, actually end of June. And we'll continue to probe to see what HPE is doing specifically with GreenLake. Now, finally, Cisco, we put them on the chart. We don't have direct quotes from recent shows and events but this data really shows you the size of Cisco's footprint within the ETR data set that's on the X-axis. 
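For readers who want to picture the chart being described, here is a minimal sketch of how such an X-Y view could be mocked up in Python. The vendor positions are placeholder numbers, not actual ETR survey results, and the 40% level used for the dashed line is the "highly elevated" threshold typically cited in these segments rather than a figure taken from this episode.

```python
import matplotlib.pyplot as plt

# Placeholder positions only -- not actual ETR survey results.
vendors = {
    "AWS": (62, 56), "Azure": (60, 58), "GCP": (32, 46), "Kubernetes": (36, 52),
    "Snowflake": (16, 66), "Pure": (8, 36), "HashiCorp": (11, 48),
    "Nutanix": (9, 26), "Veeam": (13, 31), "Dell": (46, 21),
    "HPE": (30, 16), "Cisco": (52, 18),
}

fig, ax = plt.subplots(figsize=(8, 6))
for name, (presence, net_score) in vendors.items():
    ax.scatter(presence, net_score)
    ax.annotate(name, (presence, net_score), textcoords="offset points", xytext=(4, 4))

# Dashed red line at a 40% net score, marking "highly elevated" spending momentum.
ax.axhline(40, color="red", linestyle="--", linewidth=1)
ax.set_xlabel("Overlap / pervasiveness in the ETR dataset")
ax.set_ylabel("Net score (spending momentum)")
ax.set_title("Mock-up of the supercloud X-Y view (placeholder data)")
plt.show()
```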
Now the cut of this ETR data includes all sectors across the ETR taxonomy, which is not something that we commonly show, but you can see the magnitude of Cisco's presence. It's impressive. Now, they had better, Cisco that is, had better be building out a supercloud in our view or they're going to be left behind. And I'm quite certain that they're actually going to do so. So we have a lot of evidence that we're putting forth here and seeing in the marketplace what we said last year, the ecosystem is taking shape, supercloud is forming and becoming a thing. And really, in our view, it is the future of cloud. But there are always risks to these predictive scenarios and we want to acknowledge those. So first, look, we could end up with a bunch of bespoke superclouds. Now one supercloud is better than three separate cloud native services that do fundamentally the same thing from the same vendor. One for AWS, one for GCP and one for Azure. So maybe that's not all that bad. But to point number two, we hope there evolves a set of open standards for self-service infrastructure, federated governance, and data sharing that will evolve as a horizontal layer versus a set of proprietary vendor specific tools. Now, maybe a company like Veeam will provide that as a data management layer, or some of Veeam's competitors, or maybe it'll emerge again as open source. As well, and this is the next point, we see the potential for edge disruptions changing the economics of the data center. Edge in fact could evolve on its own, independent of the cloud. In fact, David Floyer sees the edge somewhat differently from Danny Allan. Floyer says he sees a requirement for distributed stateful environments that are ephemeral where recovery is built in. And I said, David, stateful? Ephemeral? Stateful ephemeral? Isn't that an oxymoron? And he responded that, look, if it's not ephemeral the costs are going to be prohibitive. He said the biggest mistake companies could make is thinking that the edge is simply an extension of their current cloud strategies. We're seeing that a lot. Dell largely talks about the edge as retail. Now, Telco is a little bit different, but back to Floyer's comments, he feels companies have to completely reimagine an integrated file and recovery system which is much more data efficient. And he believes that the technology will evolve with massive volumes and eventually seep into enterprise cloud and distributed data centers with better economics. In other words, as David Moschella recently wrote, we're about 15 years into the most recent cloud cycle and history shows that every 15 years or so, something new comes along that is a blind spot and highly disruptive to existing leaders. So number four here is really important. Remember, in 2007 before AWS introduced the modern cloud, IBM outspent Amazon and Google in R&D and CapEx and was really comparable to Microsoft. But instead of inventing cloud, IBM spent hundreds of billions of dollars on stock buybacks and dividends. And so our view is that innovation rewards leaders. And while it's not without risks, it's what powers the technology industry; it always has and likely always will. So we'll be watching that very closely, how companies choose to spend their free cash flow. Okay. That's it for now. Thanks for watching this episode of The Cube Insights, powered by ETR. Thanks to Stephanie Chan, who does some of the background research. Alex Morrison is on production and is going to compile all this stuff. Thank you, Alex. 
We're all remote this week. Kristen Nicole and Cheryl Knight do Cube distribution and social distribution and get the word out, so thank you. Robert Hof is our editor-in-chief. Don't forget to check out etr.ai for all the survey action. Remember I publish each week on wikibon.com and siliconangle.com and you can check out all the breaking analysis podcasts. All you have to do is search breaking analysis podcast, then you can pop in the headphones and listen while you're on a walk. You can email me at david.vellante@siliconangle.com. If you want to get in touch, DM me at DVellante, or you can always hit me up in a comment on our LinkedIn posts. This is Dave Vellante. Thank you for watching this episode of breaking analysis, stay safe, be well and we'll see you next time. (upbeat music)

Published Date : May 21 2022





Cisco: Simplifying Hybrid Cloud


 

>> The introduction of the modern public cloud in the mid 2000s, permanently changed the way we think about IT. At the heart of it, the cloud operating model attacked one of the biggest problems in enterprise infrastructure, human labor costs. More than half of IT budgets were spent on people, and much of that effort added little or no differentiable value to the business. The automation of provisioning, management, recovery, optimization, and decommissioning infrastructure resources has gone mainstream as organizations demand a cloud-like model across all their application infrastructure, irrespective of its physical location. This has not only cut cost, but it's also improved quality and reduced human error. Hello everyone, my name is Dave Vellante and welcome to Simplifying Hybrid Cloud, made possible by Cisco. Today, we're going to explore Hybrid Cloud as an operating model for organizations. Now the definition of cloud is expanding. Cloud is no longer an abstract set of remote services, you know, somewhere out in the clouds. No, it's an operating model that spans public cloud, on-premises infrastructure, and it's also moving to edge locations. This trend is happening at massive scale. While at the same time, preserving granular control of resources. It's an entirely new game where IT managers must think differently to deal with this complexity. And the environment is constantly changing. The growth and diversity of applications continues. And now, we're living in a world where the workforce is remote. Hybrid work is now a permanent state and will be the dominant model. In fact, a recent survey of CIOs by Enterprise Technology Research, ETR, indicates that organizations expect 36% of their workers will be operating in a hybrid mode. Splitting time between remote work and in office environments. This puts added pressure on the application infrastructure required to support these workers. The underlying technology must be more dynamic and adaptable to accommodate constant change. So the challenge for IT managers is ensuring that modern applications can be run with a cloud-like experience that spans on-prem, public cloud, and edge locations. This is the future of IT. Now today, we have three segments where we're going to dig into these issues and trends surrounding Hybrid Cloud. First up is DD Dasgupta, who will set the stage and share with us how Cisco is approaching this challenge. Next, we're going to hear from Manish Agarwal and Darren Williams, who will help us unpack HyperFlex which is Cisco's hyperconverged infrastructure offering. And finally, our third segment will drill into Unified Compute. More than a decade ago, Cisco pioneered the concept of bringing together compute with networking in a single offering. Cisco, frankly, changed the legacy server market with UCS, Unified Compute System. The X-Series is Cisco's next generation architecture for the coming decade and we'll explore how it fits into the world of Hybrid Cloud, and its role in simplifying the complexity that we just discussed. So, thanks for being here. Let's go. (upbeat music playing) Okay, let's start things off. DD Dasgupta is back on theCUBE to talk about how we're going to simplify Hybrid Cloud complexity. DD welcome, good to see you again. >> Hey Dave, thanks for having me. Good to see you again. >> Yeah, our pleasure. Look, let's start with big picture. Talk about the trends you're seeing from your customers. >> Well, I think first off, every customer these days is a public cloud customer. 
They do have their on-premise data centers, but, every customer is looking to move workloads, new services, cloud native services from the public cloud. I think that's one of the big things that we're seeing. While that is happening, we're also seeing a pretty dramatic evolution of the application landscape itself. You've got, you know, bare metal applications, you always have virtualized applications, and then most modern applications are containerized, and, you know, managed by Kubernetes. So I think we're seeing a big change in, in the application landscape as well. And, probably, you know, triggered by the first two things that I mentioned, the execution venue of the applications, and then the applications themselves, it's triggering a change in the IT organizations in the development organizations and sort of not only how they work within their organizations, but how they work across all of these different organizations. So I think those are some of the big things that, that I hear about when I talk to customers. >> Well, so it's interesting. I often say Cisco kind of changed the game in server and compute when it developed the original UCS. And you remember there were organizational considerations back then bringing together the server team and the networking team and of course the storage team as well. And now you mentioned Kubernetes, that is a total game changer with regard to whole the application development process. So you have to think about a new strategy in that regard. So how have you evolved your strategy? What is your strategy to help customers simplify, accelerate their hybrid cloud journey in that context? >> No, I think you're right Dave, back to the origins of UCS and we, you know, why did a networking company build a server? Well, we just enabled with the best networking technologies so, would do compute better. And now, doing something similar on the software, actually the managing software for our hyperconvergence, for our, you know, Rack server, for our blade servers. And, you know, we've been on this journey for about four years. The software is called Intersight, and, you know, we started out with Intersight being just the element manager, the management software for Cisco's compute and hyperconverged devices. But then we've evolved it over the last few years because we believe that a customer shouldn't have to manage a separate piece of software, would do manage the hardware, the underlying hardware. And then a separate tool to connect it to a public cloud. And then a third tool to do optimization, workload optimization or performance optimization, or cost optimization. A fourth tool to now manage, you know, Kubernetes and like, not just in one cluster, one cloud, but multi-cluster, multi-cloud. They should not have to have a fifth tool that does, goes into observability anyway. I can go on and on, but you get the idea. We wanted to bring everything onto that same platform that manage their infrastructure. But it's also the platform that enables the simplicity of hybrid cloud operations, automation. It's the same platform on which you can use to manage the, the Kubernetes infrastructure, Kubernetes clusters, I mean, whether it's on-prem or in a cloud. So, overall that's the strategy. Bring it to a single platform, and a platform is a loaded word we'll get into that a little bit, you know, in this conversation, but, that's the overall strategy, simplify. >> Well, you know, you brought platform. 
I like to say platform beats products, but you know, there was a day, and you could still point to some examples today in the IT industry where, hey, another tool we can monetize that. And another one to solve a different problem, we can monetize that. And so, tell me more about how Intersight came about. You obviously sat back, you saw what your customers were going through, you said, "We can do better." So tell us the story there. >> Yeah, absolutely. So, look, it started with, you know, three or four guys in getting in a room and saying, "Look, we've had this, you know, management software, UCS manager, UCS director." And these are just the Cisco's management, you know, for our, softwares for our own platforms. And every company has their own flavor. We said, we took on this bold goal of like, we're not, when we rewrite this or we improve on this, we're not going to just write another piece of software. We're going to create a cloud service. Or we're going to create a SaaS offering. Because the same, the infrastructure built by us whether it's on networking or compute, or the cyber cloud software, how do our customers use it? Well, they use it to write and run their applications, their SaaS services, every customer, every customer, every company today is a software company. They live and die by how their applications work or don't. And so, we were like, "We want to eat our own dog food here," right? We want to deliver this as a SaaS offering. And so that's how it started, we've being on this journey for about four years, tens of thousands of customers. But it was a pretty big, bold ambition 'cause you know, the big change with SaaS as you're familiar Dave is, the job of now managing this piece of software, is not on the customer, it's on the vendor, right? This can never go down. We have a release every Thursday, new capabilities, and we've learned so much along the way, whether it's to announce scalability, reliability, working with, our own company's security organizations on what can or cannot be in a SaaS service. So again, it's been a wonderful journey, but, I wanted to point out, we are in some ways eating our own dog food 'cause we built a SaaS application that helps other companies deliver their SaaS applications. >> So Cisco, I look at Cisco's business model and I compare, of course compare it to other companies in the infrastructure business and, you're obviously a very profitable company, you're a large company, you're growing faster than most of the traditional competitors. And, so that means that you have more to invest. You, can afford things, like to you know, stock buybacks, and you can invest in R&D you don't have to make those hard trade offs that a lot of your competitors have to make, so-- >> You got to have a talk with my boss on the whole investment. >> Yeah, right. You'd never enough, right? Never enough. But in speaking of R&D and innovations that you're intro introducing, I'm specifically interested in, how are you dealing with innovations to help simplify hybrid cloud, the operations there, improve flexibility, and things around Cloud Native initiatives as well? >> Absolutely, absolutely. Well, look, I think, one of the fundamentals where we're kind of philosophically different from a lot of options that I see in the industry is, we don't need to build everything ourselves, we don't. 
I just need to create a damn good platform with really good platform services, whether it's, you know, around, searchability, whether it's around logging, whether it's around, you know, access control, multi-tenants. I need to create a really good platform, and make it open. I do not need to go on a shopping spree to buy 17 and 1/2 companies and then figure out how to stich it all together. 'Cause it's almost impossible. And if it's impossible for us as a vendor, it's three times more difficult for the customer who then has to consume it. So that was the philosophical difference and how we went about building Intersight. We've created a hardened platform that's always on, okay? And then you, then the magic starts happening. Then you get partners, whether it is, you know, infrastructure partners, like, you know, some of our storage partners like NetApp or PR, or you know, others, who want their conversion infrastructures also to be managed, or their other SaaS offerings and software vendors who have now become partners. Like we did not write Terraform, you know, but we partnered with Hashi and now, you know, Terraform service's available on the Intersight platform. We did not write all the algorithms for workload optimization between a public cloud and on-prem. We partner with a company called Turbonomic and so that's now an offering on the Intersight platform. So that's where we're philosophically different, in sort of, you know, how we have gone about this. And, it actually dovetails well into, some of the new things that I want to talk about today that we're announcing on the Intersight platform where we're actually announcing the ability to attach and be able to manage Kubernetes clusters which are not on-prem. They're actually on AWS, on Azure, soon coming on GC, on GKE as well. So it really doesn't matter. We're not telling a customer if you're comfortable building your applications and running Kubernetes clusters on, you know, in AWS or Azure, stay there. But in terms of monitoring, managing it, you can use Intersight, and since you're using it on-prem you can use that same piece of software to manage Kubernetes clusters in a public cloud. Or even manage DMS in a EC2 instance. So. >> Yeah so, the fact that you could, you mentioned Storage Pure, NetApp, so Intersight can manage that infrastructure. I remember the Hashi deal and I, it caught my attention. I mean, of course a lot of companies want to partner with Cisco 'cause you've got such a strong ecosystem, but I thought that was an interesting move, Turbonomic you mentioned. And now you're saying Kubernetes in the public cloud. So a lot different than it was 10 years ago. So my last question is, how do you see this hybrid cloud evolving? I mean, you had private cloud and you had public cloud, and it was kind of a tug of war there. We see these two worlds coming together. How will that evolve on for the next few years? >> Well, I think it's the evolution of the model and I, really look at Cloud, you know, 2.0 or 3.0, or depending on, you know, how you're keeping terms. But, I think one thing has become very clear again, we, we've be eating our own dog food, I mean, Intersight is a hybrid cloud SaaS application. So we've learned some of these lessons ourselves. One thing is for sure that the customers are looking for a consistent model, whether it's on the edge, on the COLO, public cloud, on-prem, no data center, it doesn't matter. They're looking for a consistent model for operations, for governance, for upgrades, for reliability. 
They're looking for a consistent operating model. What (indistinct) tells me I think there's going to be a rise of more custom clouds. It's still going to be hybrid, so applications will want to reside wherever it most makes most sense for them which is obviously data, 'cause you know, data is the most expensive thing. So it's going to be complicated with the data goes on the edge, will be on the edge, COLO, public cloud, doesn't matter. But, you're basically going to see more custom clouds, more industry specific clouds, you know, whether it's for finance, or transportation, or retail, industry specific, I think sovereignty is going to play a huge role, you know, today, if you look at the cloud provider there's a handful of, you know, American and Chinese companies, that leave the rest of the world out when it comes to making, you know, good digital citizens of their people and you know, whether it's data latency, data gravity, data sovereignty, I think that's going to play a huge role. Sovereignty's going to play a huge role. And the distributor cloud also called Edge, is going to be the next frontier. And so, that's where we are trying line up our strategy. And if I had to sum it up in one sentence, it's really, your cloud, your way. Every customer is on a different journey, they will have their choice of like workloads, data, you know, upgrade reliability concern. That's really what we are trying to enable for our customers. >> You know, I think I agree with you on that custom clouds. And I think what you're seeing is, you said every company is a software company. Every company is also becoming a cloud company. They're building their own abstraction layers, they're connecting their on-prem to their public cloud. They're doing that across clouds, and they're looking for companies like Cisco to do the hard work, and give me an infrastructure layer that I can build value on top of. 'Cause I'm going to take my financial services business to my cloud model, or my healthcare business. I don't want to mess around with, I'm not going to develop, you know, custom infrastructure like an Amazon does. I'm going to look to Cisco and your R&D to do that. Do you buy that? >> Absolutely. I think again, it goes back to what I was talking about with platform. You got to give the world a solid open, flexible platform. And flexible in terms of the technology, flexible in how they want to consume it. Some of our customers are fine with the SaaS, you know, software. But if I talk to, you know, my friends in the federal team, no, that does not work. And so, how they want to consume it, they want to, you know, (indistinct) you know, sovereignty we talked about. So, I think, you know, job for an infrastructure vendor like ourselves is to give the world a open platform, give them the knobs, give them the right API tool kit. But the last thing I will mention is, you know, there's still a place for innovation in hardware. And I think some of my colleagues are going to get into some of those, you know, details, whether it's on our X-Series, you know, platform or HyperFlex, but it's really, it's going to be software defined, it's a SaaS service and then, you know, give the world an open rock solid platform. >> Got to run on something All right, Thanks DD, always a pleasure to have you on the, theCUBE, great to see you. >> Thanks for having me. >> You're welcome. 
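To make DD's point about managing Kubernetes clusters wherever they run a bit more concrete: operationally it comes down to one authenticated SaaS API that returns on-prem and public cloud clusters in the same inventory. The sketch below is a generic illustration of that pattern using Python's requests library; the host, endpoint path, token handling, and response fields are invented for illustration and are not Intersight's actual API, which has its own authentication scheme and object model.

```python
import os
import requests

# Hypothetical SaaS endpoint and token, for illustration only. The real
# Intersight API uses its own authentication scheme and resource paths.
BASE_URL = os.environ.get("MGMT_API_URL", "https://mgmt.example.invalid/api/v1")
TOKEN = os.environ.get("MGMT_API_TOKEN", "dummy-token")

def list_kubernetes_clusters():
    """Return every attached cluster -- on-prem, EKS, AKS, or GKE -- from one call."""
    resp = requests.get(
        f"{BASE_URL}/kubernetes/clusters",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for cluster in list_kubernetes_clusters():
        # One control-plane view: where the cluster runs no longer changes the workflow.
        print(cluster.get("name"), cluster.get("platform"), cluster.get("status"))
```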
In a moment, I'll be back to dig into hyperconverged, and where HyperFlex fits, and how it may even help with addressing some of the supply chain challenges that we're seeing in the market today. >> It used to be all your infrastructure was managed here. But things got more complex in distributing, and now IT operations need to be managed everywhere. But what if you could manage everywhere from somewhere? One scalable place that brings together your teams, technology, and operations. Both on-prem and in the cloud. One automated place that provides full stack visibility to help you optimize performance and stay ahead of problems. One secure place where everyone can work better, faster, and seamlessly together. That's the Cisco Intersight cloud operations platform. The time saving, cost reducing, risk managing solution for your whole IT environment, now and into the future of this ever-changing world of IT. (upbeat music) >> With me now are Manish Agarwal, senior director of product management for HyperFlex at Cisco, @flash4all, number four, I love that, on Twitter. And Darren Williams, the director of business development and sales for Cisco. MrHyperFlex, @MrHyperFlex on Twitter. Thanks guys. Hey, we're going to talk about some news and HyperFlex, and what role it plays in accelerating the hybrid cloud journey. Gentlemen, welcome to theCUBE, good to see you. >> Thanks a lot Dave. >> Thanks Dave. >> All right Darren, let's start with you. So, for a hybrid cloud, you got to have on-prem connection, right? So, you got to have basically a private cloud. What are your thoughts on that? >> Yeah, we agree. You can't have a hybrid cloud without that prime element. And you've got to have a strong foundation in terms of how you set up the whole benefit of the cloud model you're building in terms of what you want to try and get back from the cloud. You need a strong foundation. Hyperconversions provides that. We see more and more customers requiring a private cloud, and they're building it with Hyperconversions, in particular HyperFlex. Now to make all that work, they need a good strong cloud operations model to be able to connect both the private and the public. And that's where we look at Intersight. We've got solution around that to be able to connect that around a SaaS offering. That looks around simplified operations, gives them optimization, and also automation to bring both private and public together in that hybrid world. >> Darren let's stay with you for a minute. When you talk to your customers, what are they thinking these days when it comes to implementing hyperconverged infrastructure in both the enterprise and at the edge, what are they trying to achieve? >> So there's many things they're trying to achieve, probably the most brutal honesty is they're trying to save money, that's probably the quickest answer. But, I think they're trying to look in terms of simplicity, how can they remove layers of components they've had before in their infrastructure? We see obviously collapsing of storage into hyperconversions and storage networking. And we've got customers that have saved 80% worth of savings by doing that collapse into a hyperconversion infrastructure away from their Three Tier infrastructure. Also about scalability, they don't know the end game. So they're looking about how they can size for what they know now, and how they can grow that with hyperconvergence very easy. It's one of the major factors and benefits of hyperconversions. 
They also obviously need performance and consistent performance. They don't want to compromise performance around their virtual machines when they want to run multiple workloads. They need that consistency all the way through. And then probably one of the biggest ones, around the simplicity model, is the management layer, ease of management. To make it easier for their operations, yeah, we've got customers that have told us they've saved 50% of costs in their operations model on deploying HyperFlex. Also around the time savings, they make massive time savings which they can reinvest in their infrastructure and their operations teams in being able to innovate and go forward. And then I think probably one of the biggest pieces we've seen as people move away from three tier architecture is the deployment elements. And the ease of deployment gets easy with hyperconverged, especially with Edge. Edge is a major key use case for us. And, what I want, what our customers want to do is get the benefit of a data center at the edge, without A, the big investment. They don't want to compromise in performance, and they want that simplicity in both management and deployment. And, we've seen our analysts' recommendations around what their readers are telling them in terms of how management and deployment's key for our IT operations teams, and how much they're actually saving by deploying Edge and taking the burden away when they deploy hyperconvergence. And as I said, the savings element is the key bit, and again, not always, but obviously there are case studies around public cloud being quite expensive at times, over time, for the wrong workloads. So by bringing them back, people can make savings. And we again have customers that have made 50% savings over three years compared to their public cloud usage. So, I'd say that's the key things that customers are looking for. Yeah. >> Great, thank you for that Darren. Manish, we have some hard news, you've been working a lot on evolving the HyperFlex line. What's the big news that you've just announced? >> Yeah, thanks Dave. So there are several things that we are announcing today. The first one is a new offer called HyperFlex Express. This is, you know, Cisco Intersight led and Cisco Intersight managed eight HyperFlex configurations that we feel are the fastest path to hybrid cloud. The second is we are expanding our server portfolio by adding support for HX on AMD Rack, UCS AMD Rack. And the third is a new capability that we are introducing, that we are calling, local containerized witness. And let me take a minute to explain what this is. This is a pretty nifty capability to optimize for Edge environments. So, you know, this leverages the, Cisco's ubiquitous presence of the networking, you know, products that we have in the environments worldwide. So the smallest HyperFlex configuration that we have is a 2-node configuration, which is primarily used in Edge environments. Think of a, you know, a backroom in a departmental store or an oil rig, or it might even be a smaller data center somewhere around the globe. For these 2-node configurations, there is always a need for a third entity that, you know, the industry term for that is either a witness or an arbitrator. We had that for HyperFlex as well. And the problem that customers face is where you host this witness. It cannot be on the cluster, because the job of the witness is, when the infrastructure is going down, it basically breaks, sort of arbitrates which node gets to survive. 
So it needs to be outside of the cluster. But finding infrastructure to actually host this is a problem, especially in the Edge environments where these are resource-constrained environments. So what we've done is we've taken that witness, we've converted it into a container form factor, and then qualified a very large slew of Cisco networking products that we have, right from ISR, ASR, Nexus, Catalyst, industrial routers, even a Raspberry Pi, that can host this witness. Eliminating the need for you to find yet another piece of infrastructure, or doing any, you know, care and feeding of that infrastructure. You can host it on something that already exists in the environment. So those are the three things that we are announcing today. >> So I want to ask you about HyperFlex Express. You know, obviously the whole demand and supply chain is out of whack. Everybody's, you know, global supply chain issues are in the news, everybody's dealing with it. Can you expand on that a little bit more? Can HyperFlex Express help customers respond to some of these issues? >> Yeah indeed Dave. You know, the primary motivation for HyperFlex Express was indeed an idea that, you know, one of the folks on my team had, which was to build a set of HyperFlex configurations that, you know, would have a shorter lead time. But as we were brainstorming, we were actually able to tag on multiple other things and make sure that, you know, there is something in it for our customers, for sales, as well as our partners. So for example, you know, for our customers, we've been able to dramatically simplify the configuration and the install for HyperFlex Express. These are still HyperFlex configurations and you would, at the end of it, get a HyperFlex cluster. But the path to that cluster is much, much simplified. Second is that we've added in flexibility where you can now deploy these, these are data center configurations, but you can deploy these with or without fabric interconnects, meaning you can deploy with your existing top of rack. We've also, you know, added an attractive price point for these, and of course, you know, these will have better lead times because we've made sure that, you know, we are using components that we have clear line of sight on from our supply perspective. For partners and sales, this represents a high velocity sales motion, a faster turnaround time, and a frictionless sales motion for our distributors. This is actually a set of disty-friendly configurations, which they would find very easy to stock, and with a quick turnaround time, this would be very attractive for the distys as well. 
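As an aside, the witness, or arbitrator, concept Manish describes is easy to picture in code. The sketch below is purely illustrative and is not Cisco's implementation: each node of a 2-node cluster checks its peer, and if the peer is unreachable it asks an external witness (which could be a container running on a router or a Raspberry Pi) for a lock, and only the node that wins the lock keeps serving, which avoids a split-brain. The endpoint, ports, and node IDs are all hypothetical.

```python
import requests  # the hypothetical witness exposes a tiny HTTP lock API

WITNESS_URL = "http://witness.branch.local:8000"  # e.g. a container on a router or Raspberry Pi


def peer_reachable(peer_ip: str) -> bool:
    """Placeholder health check for the other cluster node."""
    try:
        return requests.get(f"http://{peer_ip}:9000/health", timeout=2).ok
    except requests.RequestException:
        return False


def should_keep_serving(my_id: str, peer_ip: str) -> bool:
    """If the peer is gone, ask the external witness for the cluster lock.
    Only the node that wins the lock survives, so both nodes can never
    keep writing independently."""
    if peer_reachable(peer_ip):
        return True  # normal operation, both nodes up
    try:
        resp = requests.post(f"{WITNESS_URL}/lock", json={"node": my_id}, timeout=2)
        return resp.json().get("granted", False)
    except requests.RequestException:
        return False  # can't reach the witness either: safest to stop serving


if __name__ == "__main__":
    print(should_keep_serving("node-a", "10.0.0.2"))
```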
And this, we are hoping that we'll, it'll basically unlock, you know, a, unprecedented level of performance and efficiency, but also unlock several new workloads that were previously locked out from the hyperconverged experience. >> Yeah, cool. So Darren, can you give us an idea as to how HyperFlex is doing in the field? >> Sure, absolutely. So, both me and Manish been involved right from the start even before it was called HyperFlex, and we've had a great journey. And it's very exciting to see where we are taking, where we've been with the technology. So we have over 5,000 customers worldwide, and we're currently growing faster year over year than the market. The majority of our customers are repeat buyers, which is always a good sign in terms of coming back when they've proved the technology and are comfortable with the technology. They, repeat buyer for expanded capacity, putting more workloads on. They're using different use cases on there. And from an Edge perspective, more numbers of science. So really good endorsement of the technology. We get used across all verticals, all segments, to house mission critical applications, as well as the traditional virtual server infrastructures. And we are the lifeblood of our customers around those, mission critical customers. I think one big example, and I apologize for the worldwide audience, but this resonates with the American audience is, the Super Bowl. So, the SoFi stadium that housed the Super Bowl, actually has Cisco HyperFlex running all the management services, through from the entire stadium for digital signage, 4k video distribution, and it's completely cashless. So, if that were to break during Super Bowl, that would've been a big news article. But it was run perfectly. We, in the design of the solution, we're able to collapse down nearly 200 servers into a few nodes, across a few racks, and have 120 virtual machines running the whole stadium, without missing a heartbeat. And that is mission critical for you to run Super Bowl, and not be on the front of the press afterwards for the wrong reasons, that's a win for us. So we really are, really happy with HyperFlex, where it's going, what it's doing, and some of the use cases we're getting involved in, very, very exciting. >> Hey, come on Darren, it's Super Bowl, NFL, that's international now. And-- >> Thing is, I follow NFL. >> The NFL's, it's invading London, of course, I see the, the picture, the real football over your shoulder. But, last question for Manish. Give us a little roadmap, what's the future hold for HyperFlex? >> Yeah. So, you know, as Darren said, both Darren and I have been involved with HyperFlex since the beginning. But, I think the best is yet to come. There are three main pillars for HyperFlex. One is, Intersight is central to our strategy. It provides a, you know, lot of customer benefit from a single pane of class management. But we are going to take this beyond the lifecycle management, which is for HyperFlex, which is integrated into Intersight today, and element management. We are going to take it beyond that and start delivering customer value on the dimensions of AI Ops, because Intersight really provides us a ideal platform to gather stats from all the clusters across the globe, do AI/ML and do some predictive analysis with that, and return back as, you know, customer valued, actionable insights. So that is one. The second is UCS expand the HyperFlex portfolio, go beyond UCS to third party server platforms, and newer UCS server platforms as well. 
But the highlight there is one that I'm really, really excited about and think that there is a lot of potential in terms of the number of customers we can help. Is HX on X-Series. X-Series is another thing that we are going to, you know, add, we're announcing a bunch of capabilities on in this particular launch. But HX on X-Series will have that by the end of this calendar year. And that should unlock with the flexibility of X-Series of hosting a multitude of workloads and the simplicity of HyperFlex. We're hoping that would bring a lot of benefits to new workloads that were locked out previously. And then the last thing is HyperFlex data platform. This is the heart of the offering today. And, you'll see the HyperFlex data platform itself it's a distributed architecture, a unique distributed architecture. Primarily where we get our, you know, record baring performance from. You'll see it can foster more scalable, more resilient, and we'll optimize it for you know, containerized workloads, meaning it'll get granular containerized, container granular management capabilities, and optimize for public cloud. So those are some things that we are, the team is busy working on, and we should see that come to fruition. I'm hoping that we'll be back at this forum in maybe before the end of the year, and talking about some of these newer capabilities. >> That's great. Thank you very much for that, okay guys, we got to leave it there. And you know, Manish was talking about the HX on X-Series that's huge, customers are going to love that and it's a great transition 'cause in a moment, I'll be back with Vikas Ratna and Jim Leach, and we're going to dig into X-Series. Some real serious engineering went into this platform, and we're going to explore what it all means. You're watching Simplifying Hybrid Cloud on theCUBE, your leader in enterprise tech coverage. >> The power is here, and here, but also here. And definitely here. Anywhere you need the full force and power of your infrastructure hyperconverged. It's like having thousands of data centers wherever you need them, powering applications anywhere they live, but manage from the cloud. So you can automate everything from here. (upbeat music) Cisco HyperFlex goes anywhere. Cisco, the bridge to possible. (upbeat music) >> Welcome back to theCUBE's special presentation, Simplifying Hybrid Cloud brought to you by Cisco. We're here with Vikas Ratna who's the director of product management for UCS at Cisco and James Leach, who is director of business development at Cisco. Gents, welcome back to theCUBE, good to see you again. >> Hey, thanks for having us. >> Okay, Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers, and as organizations as they hit the pavement for their hybrid cloud journeys, what are the most common challenges that they face? What are they telling you? How is Cisco, specifically UCS helping them deal with these problems? >> Well, you know, first I think that's a, you know, that's a great question. And you know, customer centric view is the way that we've taken, is kind of the approach we've taken from day one. Right? So I think that if you look at the challenges that we're solving for that our customers are facing, you could break them into just a few kind of broader buckets. The first would definitely be applications, right? That's the, that's where the rubber meets your proverbial road with the customer. 
And I would say that, you know, what we're seeing is, the challenges customers are facing within applications come from the the way that applications have evolved. So what we're seeing now is more data centric applications for example. Those require that we, you know, are able to move and process large data sets really in real time. And the other aspect of applications I think to give our customers kind of some, you know, pause some challenges, would be around the fact that they're changing so quickly. So the application that exists today or the day that they, you know, make a purchase of infrastructure to be able to support that application, that application is most likely changing so much more rapidly than the infrastructure can keep up with today. So, that creates some challenges around, you know, how do I build the infrastructure? How do I right size it without over provisioning, for example? But also, there's a need for some flexibility around life cycle and planning those purchase cycles based on the life cycle of the different hardware elements. And within the infrastructure, which I think is the second bucket of challenges, we see customers who are being forced to move away from the, like a modular or blade approach, which offers a lot of operational and consolidation benefits, and they have to move to something like a Rack server model for some applications because of these needs that these data centric applications have, and that creates a lot of you know, opportunity for siloing the infrastructure. And those silos in turn create multiple operating models within the, you know, a data center environment that, you know, again, drive a lot of complexity. So that, complexity is definitely the enemy here. And then finally, I think life cycles. We're seeing this democratization of processing if you will, right? So it's no longer just CPU focused, we have GPU, we have FPGA, we have, you know, things that are being done in storage and the fabrics that stitch them together that are all changing rapidly and have very different life cycles. So, when those life cycles don't align for a lot of our customers, they see a challenge in how they can manage this, you know, these different life cycles and still make a purchase without having to make too big of a compromise in one area or another because of the misalignment of life cycles. So, that is a, you know, kind of the other bucket. And then finally, I think management is huge, right? So management, you know, at its core is really right size for our customers and give them the most value when it meets the mark around scale and scope. You know, back in 2009, we weren't meeting that mark in the industry and UCS came about and took management outside the chassis, right? We put it at the top of the rack and that worked great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, right? So we're talking about a hybrid cloud world that has to manage across data centers, across clouds, and, you know, having to stitch things together for some of our customers poses a huge challenge. So there are tools for all of those operational pieces that touch the application, that touch the infrastructure, but they're not the same tool. They tend to be disparate tools that have to be put together. >> Right. >> So our customers, you know, don't really enjoy being in the business of, you know, building their own tools, so that creates a huge challenge. 
And one where I think that they really crave that full hybrid cloud stack that has that application visibility but also can reach down into the infrastructure. >> Right. You know Jim, I said in my open that you guys, Cisco sort of changed the server game with the original UCS, but the X-Series is the next generation, the generation for the next decade which is really important 'cause you touched on a lot of things, these data intensive workload, alternative processors to sort of meet those needs. The whole cloud operating model and hybrid cloud has really changed. So, how's it going with with the X-Series? You made a big splash last year, what's the reception been in the field? >> Actually, it's been great. You know, we're finding that customers can absolutely relate to our, you know, UCS X-Series story. I think that, you know, the main reason they relate to it is they helped create it, right? It was their feedback and their partnership that gave us really the, those problem areas, those areas that we could solve for the customer that actually add, you know, significant value. So, you know, since we brought UCS to market back in 2009, you know, we had this unique architectural paradigm that we created, and I think that created a product which was the fastest in Cisco history in terms of growth. What we're seeing now is X-Series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake. We're seeing all, you know, both in terms of, you know, the number of customers, but also more importantly, the number of workloads that our customers are using, and the types of workloads are growing, right? So we're growing this modular segment that exist, not just, you know, bringing customers onto a new product, but we're actually bring them into the product in the way that we had envisioned, which is one infrastructure that can run any application and do it seamlessly. So we're really excited to be growing this modular segment. I think the other piece, you know, that, you know, we judge ourselves is, you know, sort of not just within Cisco, but also within the industry. And I think right now is a, you know, a great example, you know, our competitors have taken kind of swings and misses over the past five years at this, at a, you know, kind of the new next architecture. And, we're seeing a tremendous amount of growth even faster than any of our competitors have seen when they announced something that was new to this space. So, I think that the ground up work that we did is really paying off. And I think that what we're also seeing is it's not really a leap frog game, as it may have been in the past. X-Series is out in front today, and, you know, we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers and, you know, we're pretty excited that we're seeing the results as well. So, as our competitors hit walls, I think we're, you know, we're executing on the plan that we laid out back in June when we launched X-Series to the world. And, you know, as we continue to do that, we're seeing, you know, again, tremendous uptake from our customers. >> So thank you for that Jim. So Vikas, I was just on Twitter just today actually talking about the gravitational pull, you've got the public clouds pulling CXOs one way and you know, on-prem folks pulling the other way and hybrid cloud. So, organizations are struggling with a lot of different systems and architectures and ways to do things. 
And I said that what they're trying to do is abstract all that complexity away and they need infrastructure to support that. And I think your stated aim is really to try to help with that confusion with the X series, right? I mean, so how so can you explain that? >> Sure. And, that's the right, the context that you built up right there Dave. If you walk into enterprise data center you'll see plethora of compute systems spread all across. Because, every application has its unique needs, and, hence you find drive node, drive-dense system, memory dense system, GPU dense system, core dense system, and variety of form factors, 1U, 2U, 4U, and, every one of them typically come with, you know, variety of adapters and cables and so forth. This creates the siloness of resources. Fabric is (indistinct), the adapter is (indistinct). The power and cooling implication. The Rack, you know, face challenges. And, above all, the multiple management plane that they come up with, which makes it very difficult for IT to have one common center policy, and enforce it all across, across the firmware and software and so forth. And then think about upgrade challenges of the siloness makes it even more complex as these go through the upgrade processes of their own. As a result, we observe quite a few of our customers, you know, really seeing an inter, slowness in that agility, and high burden in the cost of overall ownership. This is where with the X-Series powered by Intersight, we have one simple goal. We want to make sure our customers get out of that complexities. They become more agile, and drive lower TCOs. And we are delivering it by doing three things, three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on single infrastructure. An infrastructure which removes the siloness of form factor. An infrastructure which reduces the Rack footprint that is required. An infrastructure where power and cooling budgets are in the lower. Second, we want to simplify by delivering a cloud operating model, where they can and create the policy once across compute network storage and deploy it all across. And third, we want to take away the pain they have by simplifying the process of upgrade and any platform evolution that they're going to go through in the next two, three years. So that's where the focus is on just driving down the simplicity, lowering down their TCOs. >> Oh, that's key, less friction is always a good thing. Now, of course, Vikas we heard from the HyperFlex guys earlier, they had news not to be outdone. You have hard news as well. What innovations are you announcing around X-Series today? >> Absolutely. So we are following up on the exciting X-Series announcement that we made in June last year, Dave. And we are now introducing three innovation on X-Series with the goal of three things. First, expand the supported workload on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables that are needed. To that end, three new innovations are coming in. First, we are introducing the support for the GPU node using a cableless and very unique X-Fabric architecture. This is the most elegant design to add the GPUs to the compute node in the modular form factor. Thereby, our customers can now power in AI/ML workload, or any workload that need many more number of GPUs. 
Second, we are bringing in GPUs right onto the compute node, and thereby our customers can now fire up the accelerated VDI workload for example. And third, which is what you know, we are extremely proud about, is we are innovating again by introducing the fifth generation of our very popular unified fabric technology. With the increased bandwidth that it brings in, coupled with the local drive capacity and densities that we have on the compute node, our customers can now fire up the big data workload, the FCI workload, the SDS workload. All these workloads that have historically not lived in the modular form factor, can be run over there and benefit from the architectural benefits that we have. Second, with the announcement of fifth generation fabric, we've become the only vendor to now finally enable 100 gig end to end single port bandwidth, and there are multiple of those that are coming in there. And we are working very closely with our CI partners to deliver the benefit of these performance through our Cisco Validated Design to our CI franchise. And third, the innovations in the fifth gen fabric will again allow our customers to have fewer physical adapters made with ethernet adapter, made with power channel adapters, or made with, the other storage adapters. They've reduced it down and coupled with the reduction in the cable. So very, very excited about these three big announcements that we are making in this month's release. >> Great, a lot there, you guys have been busy, so thank you for that Vikas. So, Jim, you talked a little bit about the momentum that you have, customers are adopting, what problems are they telling you that X-Series addresses, and how do they align with where they want to go in the future? >> That's a great question. I think if you go back to, and think about some of the things that we mentioned before, in terms of the problems that we originally set out to solve, we're seeing a lot of traction. So what Vikas mentioned I think is really important, right? Those pieces that we just announced really enhance that story and really move again, to the, kind of, to the next level of taking advantage of some of these, you know, problem solving for our customers. You know, if you look at, you know, I think Vikas mentioned accelerated VDI. That's a great example. These are where customers, you know, they need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series right now. We're hitting the market in every single way, right? We have the highest compute config density that we can offer across the, you know, the very top end configurations of CPUs, and a lot of room to grow. We have the, you know, the premier cloud based management, you know, hybrid cloud suite in the industry, right? So check there. We have the flexible GPU accelerators that Vikas just talked about that we're announcing both on the system and also adding additional ones to the, through the use of the X-Fabric, which is really, really critical to this launch as well. And, you know, I think finally, the fifth generation of fabric interconnect and virtual interface card, and, intelligent fabric module go hand in hand in creating this 100 gig end to end bandwidth story, that we can move a lot of data. Again, you know, having all this performance is only as good as what we can get in and out of it, right? 
So giving customers the ability to manage it anywhere, to be able to get the bandwidth that they need, to be able to get the accelerators that are flexible that it fit exactly their needs, this is huge, right? This solves a lot of the problems we can tick off right away. With the infrastructure as I mentioned, X-Fabric is really critical here because it opens a lot of doors here, you know, we're talking about GPUs today, but in the future, there are other elements that we can disaggregate, like the GPUs that solve these life cycle mismanagement issues. They solve issues around the form factor limitations. It solves all these issues for like, it does for GPU we can do that with storage or memory in the future. So that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing, this is something that customers can really get value out of day one. And then finally, I think the, you know, the future readiness here, you know, we avoid saying future proof because we're kind of embracing the future here. We know that not only are the GPUs going to evolve, the CPUs are going to evolve, the drives, you know, the storage modules are going to evolve. All of these things are changing very rapidly. The fabric that stitches them together is critical, and we know that we're just on the edge of some of the development that are coming with CXL, with some of the PCI Express changes that are coming in the very near future, so we're ready to go. And the X-Fabric is exactly the vehicle that's going to be able to deliver those technologies to our customers, right? Our customers are out there saying that, you know, they want to buy into to something like X-Series that has all the operational benefits, but at the same time, they have to have the comfort in knowing that they're protected against being locked out of some technology that's coming in the future, right? We want our customers to take these disruptive technologies and not be disrupted, but use them to disrupt their competition as well. So, you know, we're really excited about the pieces today, and, I think it goes a long way towards continuing to tell the customer benefit story that X-Series brings, and, you know, again, you know, stay tuned because it's going to keep getting better as we go. >> Yeah, a lot of headroom for scale and the management piece is key there. Just have time for one more question Vikas. Give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely Dave. As we talked about, and as Jim also hinted, this is a future ready architecture. A lot of focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt very disruptive hardware technologies that are coming up, no refund replace. And, there we are looking into, enabling the customer's journey as they transition from PCI generation four, to five to six without driven replace, as they embrace CXL without driven replace. As they embrace the newer paradigm of computing through the disaggregated memory, disaggregated PCIe or NVMe based dense drives, and so forth. We are also looking forward to X-Fabric next generation, which will allow dynamic assignment of GPUs anywhere within the chassis and much more. So this is again, all about focusing on the innovation that will make the enterprise data center operations a lot more simpler, and drive down the TCO by keeping them not only covered for today, but also for future. 
So that's where some of the focus is on Dave. >> Okay. Thank you guys we'll leave it there, in a moment, I'll have some closing thoughts. (upbeat music) We're seeing a major evolution, perhaps even a bit of a revolution in the underlying infrastructure necessary to support hybrid work. Look, virtualizing compute and running general purpose workloads is something IT figured out a long time ago. But just when you have it nailed down in the technology business, things change, don't they? You can count on that. The cloud operating model has bled into on-premises locations. And is creating a new vision for the future, which we heard a lot about today. It's a vision that's turning into reality. And it supports much more diverse and data intensive workloads and alternative compute modes. It's one where flexibility is a watch word, enabling change, attacking complexity, and bringing a management capability that allows for a granular management of resources at massive scale. I hope you've enjoyed this special presentation. Remember, all these videos are available on demand at thecube.net. And if you want to learn more, please click on the information link. Thanks for watching Simplifying Hybrid Cloud brought to you by Cisco and theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, be well and we'll see you next time. (upbeat music)

Published Date : Mar 22 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
UCS | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Manish Agarwal | PERSON | 0.99+
2009 | DATE | 0.99+
80% | QUANTITY | 0.99+
Dave | PERSON | 0.99+
50% | QUANTITY | 0.99+
June | DATE | 0.99+
17 | QUANTITY | 0.99+
36% | QUANTITY | 0.99+
Darren | PERSON | 0.99+
James Leach | PERSON | 0.99+
three | QUANTITY | 0.99+
100 gig | QUANTITY | 0.99+
Darren Williams | PERSON | 0.99+
Enterprise Technology Research | ORGANIZATION | 0.99+
June last year | DATE | 0.99+
AMD | ORGANIZATION | 0.99+
First | QUANTITY | 0.99+
one sentence | QUANTITY | 0.99+
Turbonomic | ORGANIZATION | 0.99+
Super Bowl | EVENT | 0.99+
thecube.net | OTHER | 0.99+
more than 70% | QUANTITY | 0.99+
last year | DATE | 0.99+
Vikas | ORGANIZATION | 0.99+
third segment | QUANTITY | 0.99+
Vikas | PERSON | 0.99+
One | QUANTITY | 0.99+
fourth tool | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
third | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Vikas Ratna | PERSON | 0.99+
Intersight | ORGANIZATION | 0.99+
ETR | ORGANIZATION | 0.99+
Second | QUANTITY | 0.99+
HyperFlex | ORGANIZATION | 0.99+
mid 2000s | DATE | 0.99+
third tool | QUANTITY | 0.99+
Today | DATE | 0.99+
More than 70% | QUANTITY | 0.99+
X-Series | TITLE | 0.99+
10 years ago | DATE | 0.99+

DD Dasgupta, Cisco


 

>> Okay, let's start things off. DD Dasgupta is back on theCUBE to talk about how we're going to simplify hybrid cloud complexity. DD, welcome. Good to see you again. >> Hey Dave, thanks for having me. Good to see you again. >> Yeah, our pleasure here. Look, let's start with big picture. Talk about the trends you're seeing from your customers. >> Well, I think first off, every customer these days is a public cloud customer. They do have their on-premise data centers, but every customer is looking to move workloads, use services, cloud native services, from the public cloud. I think that's, that's one of the big things that we're seeing. While that is happening, we're also seeing a pretty dramatic evolution of the application landscape itself. You've got bare metal applications, you always have virtualized applications, and then most modern applications are, are containerized and, you know, managed by Kubernetes. So I think we're seeing a big change in, in the application landscape as well, and probably, you know, triggered by the first two things that I mentioned, the execution venue of the applications, and then the applications themselves. It's triggering the change in the IT organizations, in the development organizations, and sort of not only how they work within their organizations, but how they work across all of these different organizations. So I think those are some of the big things that, that I hear about when I talk to customers. >> Well, so it's interesting. I often say Cisco kind of changed the game in server and compute when it, when it developed the original UCS, and you remember there were organizational considerations back then, bringing together the server team and the networking team, and of course the, the storage team as well. And now you mentioned Kubernetes, that is a total game changer with regard to the whole application development process. So you have to think about a new strategy in that regard. So how have you evolved your strategy? What is your strategy to help customers simplify, accelerate their hybrid cloud journey in that context? >> No, I think you're right. Back at the origins of UCS, I mean, we, you know, why did the networking company build a server? Well, we just enabled it with the best networking technology, so too with compute. And now we're doing something similar on the software, actually the managing software for our hyperconvergence, for our compute. And you know, we've been on this journey for about four years. The software is called Intersight. And, you know, we started out with Intersight being just the element manager, the management software for Cisco's compute and hyperconverged devices. But then we've evolved it over the last few years, because we believe that the customer shouldn't have to manage a separate piece of software to manage the hardware, the underlying hardware, and then a separate tool to connect it to a public cloud, and then a third tool to do optimization, workload optimization or performance optimization or cost optimization, a fourth tool to now manage, you know, Kubernetes, and like, not just in one, one cluster, one cloud, but multi-cluster, multicloud. They should not have to have a fifth tool that goes into observability. Anyway, I can go on and on, but you get the idea. We wanted to bring everything onto that same platform that managed their infrastructure, but it's also the platform that enables the simplicity of hybrid cloud operations, automation. 
It's the same platform that you can use to manage the Kubernetes infrastructure, Kubernetes clusters, I mean, whether it's on-prem or in the cloud. So overall that's the strategy, bring it to a single platform, and a platform is a loaded word, but we'll get into that a little bit, you know, in this, in this conversation. But that's the overall strategy, simplify. >> Well, you know, you brought up platform. I, I like to say platform beats products, but you know, there was a day, and you could still point to some examples today in the IT industry, where, hey, another tool, we can monetize that, and another one to solve a different problem, we can monetize that. And so tell me more about how Intersight came about. You obviously sat back, you saw what your customers were going through, you said we can do better. So tell us the story there. >> Yeah, absolutely. So look, it started with, you know, three or four guys getting in a room and saying, look, we've had this, you know, management software, UCS Manager, UCS Director, and these are just Cisco's management, you know, software for our own platforms. Then every company has their, their own flavor. We said, we, we took on this bold goal of, like, when we rewrite this, or we improve on this, we're not going to just write another piece of software. We're going to create a cloud service, or we're going to create a SaaS offering. Because the infrastructure built by us, whether it's on networking or compute or the (indistinct) software, how do our customers use it? Well, they use it to write and run their applications, their SaaS services. Every customer, every customer, every company today is a software company. They live and die by how their assets work or don't. And so we were like, we want to eat our own dog food here, right? We want to deliver this as a SaaS offering. And so that's how it started. We've been on this journey for about four years, tens of thousands of customers. But it, it was a pretty big, bold ambition, 'cause you know, the big change with SaaS is, as you're familiar, Dave, the job of now managing this, this piece of software is not on the customer, it's on the vendor, right? This can never go down. We have a release every Thursday, new capabilities. And we've learned so much along the way, whether it's around scalability, reliability, working with our own company's security organizations on what can or cannot be in a SaaS service. So again, it's just been a wonderful journey, but I wanted to point out that we are in some ways eating our own dog food, 'cause we built a SaaS application that helps other companies deliver their SaaS applications. >> So Cisco, I look at Cisco's business model and I, I of course compare it to other companies in the infrastructure business, and obviously a very profitable company, a large company, you're growing faster than, than, than most of the traditional competitors. And so that means that you have more to invest. You, you, you can, you can afford things like stock buybacks, and you can invest in R&D. You don't have to make those hard trade-offs that a lot of your competitors have to make. So, it's never enough, right? Never enough. But, but, but in speaking of R&D and innovations that you're introducing, I'm specifically interested in, how are you dealing with innovations to help simplify hybrid cloud, the operations there, and improve flexibility, and things around cloud native initiatives as well? >> Absolutely. Absolutely. 
Well, look, I think one of the fundamentals where we're philosophically different from a lot of options that I see in the industry is we don't need to build everything ourselves. We don't. I just need to create a damn good platform with really good platform services, whether it's, you know, around searchability, whether it's around logging, whether it's around, you know, access control, multi-tenancy. I need to create a really good platform and make it open. I do not need to go on a shopping spree to buy 17 and a half companies and then figure out how to stitch it all together, because it's, it's almost impossible. And if it's impossible for us as a vendor, it's, it's three times more difficult for the customer who then has to consume it. So that was the philosophical difference in how we went about building Intersight. We've created a hardened platform that's, that's always on. Okay. And then you, then the magic starts happening. Then you get partners, whether it is, you know, infrastructure partners, like, you know, some of our storage partners like NetApp, or, you know, others who want their converged infrastructure to also be managed, or there are other SaaS offerings, software vendors who have now become partners. Like we did not, we did not write Terraform, you know, but we partnered with HashiCorp, and now, you know, Terraform services are available on the Intersight platform. We did not write all the algorithms for workload optimization between a public cloud and on-prem, we partnered with a company called Turbonomic, and so that's now an offering on the Intersight platform. So that's where we're philosophically different and sort of, you know, how we have gone about this. And it actually dovetails well into some of the new things that I want to talk about today that we're announcing on the Intersight platform, where we've actually been announcing the ability to attach and, and be able to manage Kubernetes clusters which are not on-prem. They're actually on AWS, on Azure, soon coming on, on GCP, on, on GKE as well. So it really doesn't matter. We're not telling a customer, if you're comfortable building your applications and running Kubernetes clusters, you know, in AWS or Azure, stay there. But in terms of monitoring, managing it, you can use Intersight. Since you're using it on-prem, you can use that same piece of software to manage Kubernetes clusters in a public cloud, or even manage VMs in, in an instance. >> So the fact that you could, you mentioned storage, Pure, NetApp. So Intersight can manage that infrastructure. I remember the HashiCorp deal, it caught my attention. And of course, a lot of companies want to partner with Cisco because you've got such a strong ecosystem, but I thought that was an interesting move, Turbonomic, you mentioned. And now you're saying Kubernetes in the public cloud, so a lot different than it was 10 years ago. So my last question is, how do you see this hybrid cloud evolving? I mean, you had private cloud and you had public cloud, and it was kind of a tug of war there. We see these, these, these two worlds coming together. How will that evolve over the next few years? >> Well, I think it's, it's the evolution of the model, and really, look, it's, you know, day 2 or day 3 depending on, you know, how you're keeping time. But I think one thing has become very clear. Again, we may be eating our own dog food, I mean, Intersight is a hybrid cloud SaaS application, so we've learned some of these lessons ourselves. 
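DD's point about monitoring and managing Kubernetes clusters wherever they live, on-prem or in AWS, Azure, or GKE, from one place can be sketched with the open source Kubernetes Python client and ordinary kubeconfig contexts. This is a generic illustration of the single-pane idea, not the Intersight API; the context names are hypothetical.

```python
from kubernetes import client, config  # pip install kubernetes

# Hypothetical kubeconfig contexts: one on-prem cluster plus managed clusters in public clouds.
CONTEXTS = ["onprem-hx", "aws-eks-prod", "azure-aks-dev", "gcp-gke-edge"]


def cluster_summary(context: str) -> dict:
    """Load the given kubeconfig context and return a tiny health summary."""
    config.load_kube_config(context=context)
    v1 = client.CoreV1Api()
    nodes = v1.list_node().items
    ready = sum(
        1
        for n in nodes
        for c in (n.status.conditions or [])
        if c.type == "Ready" and c.status == "True"
    )
    return {"context": context, "nodes": len(nodes), "ready": ready}


if __name__ == "__main__":
    # One loop, one view, regardless of which cloud each cluster runs in.
    for ctx in CONTEXTS:
        print(cluster_summary(ctx))
```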
One thing is for sure, customers are looking for a consistent model, whether it's on the edge, on the colo, public cloud, on-prem data center, doesn't matter. They're looking for a consistent model for operations, for governance, for upgrades, for reliability. They're looking for a consistent operating model. What that tells me is I think there's going to be the rise of more custom clouds. It's still going to be hybrid, so applications will want to reside wherever it makes most sense for them, which is with the data, 'cause moving data is, it's the most expensive thing. So it's going to be co-located with the data, that's on the edge, on the edge, colo, public cloud, doesn't matter. But you're basically going to see more custom clouds, more industry-specific clouds, you know, whether it's for finance or transportation or retail, industry specific. I think sovereignty is going to play a huge role. You know, today, if you look at the cloud providers, you know, it's a handful of American and Chinese companies that leave the rest of the world out when it comes to making good digital citizens of their, their people and, you know, control. And the distributed cloud, also called Edge, is, is gonna be the next frontier. And so that's where we are trying to line up our strategy. And if I had to sum it up in one sentence, it's really your cloud, your way. Every customer is on a different journey, they will have their choice of workloads, data, you know, uptime, reliability concerns. That's really what, what we are trying to enable for our customers. >> You know, I think I agree with you on that, custom clouds. And I think what you're seeing is, you said every company is a software company. Every company is also becoming a cloud company. They're building their own abstraction layers. They're connecting their on-prem to their, to their public cloud. They're doing that. They're, they're doing that across clouds. And they're looking for companies like Cisco to do the hard work, and give me an infrastructure layer that I can build value on top of, because I'm going to take my financial services business to my cloud model, or my healthcare business. I don't want to mess around with it. I'm not going to develop, you know, custom infrastructure like an Amazon does. I'm going to look to Cisco and your R&D to do that. Do you buy that? >> Absolutely. I think, again, it goes back to what I was talking about with platform. You got to give the world a solid, open, flexible platform. Flexible in terms of the technology, flexible in how they want to consume it. Some customers are fine with the SaaS software, but as I talk to, you know, my friends in the federal team, no, that does not work. And so, how they want to consume it, they want, you know, a hundred percent, you know, the sovereignty we, we talked about. So, you know, the job for an infrastructure vendor like ourselves is to give the world an open platform, give them the knobs, give them the right API toolkit. But the last thing I will mention is, you know, there's still a place for innovation in hardware. Some of my colleagues are going to get into some of those, you know, details, whether it's on our X-Series platform or HyperFlex, but it's really, it's going to, it's going to be software defined, it's a SaaS service, and then, you know, give the world an open, rock-solid platform. >> Got to run on something. All right. Thanks, DD. Always a pleasure to have you on theCUBE. Great to see you. >> You're welcome. 
In a moment, I'll be back to dig into hyperconverged, and where HyperFlex fits, and how it may even help with addressing some of the supply chain challenges that we're seeing in the market today.

Published Date : Mar 11 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Cisco | ORGANIZATION | 0.99+
$2 | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
three | QUANTITY | 0.99+
$3 | QUANTITY | 0.99+
Didi Dasgupta | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
one sentence | QUANTITY | 0.99+
third tool | QUANTITY | 0.99+
fourth tool | QUANTITY | 0.99+
four guys | QUANTITY | 0.99+
Mike | PERSON | 0.99+
Deedee | PERSON | 0.99+
10 years ago | DATE | 0.99+
Didi | PERSON | 0.99+
one | QUANTITY | 0.99+
DD Dasgupta | PERSON | 0.99+
SAS | TITLE | 0.99+
fifth tool | QUANTITY | 0.98+
today | DATE | 0.98+
17 and a half companies | QUANTITY | 0.98+
three times | QUANTITY | 0.98+
Tashi | ORGANIZATION | 0.98+
about four years | QUANTITY | 0.98+
one thing | QUANTITY | 0.98+
a day | QUANTITY | 0.98+
first two things | QUANTITY | 0.98+
UCS | ORGANIZATION | 0.98+
Terraform | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
two worlds | QUANTITY | 0.97+
urbanomics | ORGANIZATION | 0.97+
Turbonomic | ORGANIZATION | 0.96+
one cluster | QUANTITY | 0.96+
Kubernetes | ORGANIZATION | 0.95+
Chinese | OTHER | 0.94+
one cloud | QUANTITY | 0.94+
single platform | QUANTITY | 0.94+
Kubernetes | TITLE | 0.94+
One thing | QUANTITY | 0.94+
hundred percent | QUANTITY | 0.9+
SAS | ORGANIZATION | 0.87+
GC | ORGANIZATION | 0.86+
tens of thousands | QUANTITY | 0.86+
Thursday | DATE | 0.85+
Kubernetes | PERSON | 0.82+
years | DATE | 0.8+
polo | ORGANIZATION | 0.77+
customers | QUANTITY | 0.71+
Azure | TITLE | 0.71+
HyperFlex | TITLE | 0.7+
next | DATE | 0.66+
GKE | ORGANIZATION | 0.63+
NetApp | TITLE | 0.63+
X series | TITLE | 0.6+
last | DATE | 0.55+
American | LOCATION | 0.48+

Breaking Analysis: The Improbable Rise of Kubernetes


 

>> From theCUBE studios in Palo Alto, in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vollante. >> The rise of Kubernetes came about through a combination of forces that were, in hindsight, quite a long shot. Amazon's dominance created momentum for Cloud native application development, and the need for newer and simpler experiences, beyond just easily spinning up computer as a service. This wave crashed into innovations from a startup named Docker, and a reluctant competitor in Google, that needed a way to change the game on Amazon and the Cloud. Now, add in the effort of Red Hat, which needed a new path beyond Enterprise Linux, and oh, by the way, it was just about to commit to a path of a Kubernetes alternative for OpenShift and figure out a governance structure to hurt all the cats and the ecosystem and you get the remarkable ascendancy of Kubernetes. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis, we tapped the back stories of a new documentary that explains the improbable events that led to the creation of Kubernetes. We'll share some new survey data from ETR and commentary from the many early the innovators who came on theCUBE during the exciting period since the founding of Docker in 2013, which marked a new era in computing, because we're talking about Kubernetes and developers today, the hoodie is on. And there's a new two part documentary that I just referenced, it's out and it was produced by Honeypot on Kubernetes, part one and part two, tells a story of how Kubernetes came to prominence and many of the players that made it happen. Now, a lot of these players, including Tim Hawkin Kelsey Hightower, Craig McLuckie, Joe Beda, Brian Grant Solomon Hykes, Jerry Chen and others came on theCUBE during formative years of containers going mainstream and the rise of Kubernetes. John Furrier and Stu Miniman were at the many shows we covered back then and they unpacked what was happening at the time. We'll share the commentary from the guests that they interviewed and try to add some context. Now let's start with the concept of developer defined structure, DDI. Jerry Chen was at VMware and he could see the trends that were evolving. He left VMware to become a venture capitalist at Greylock. Docker was his first investment. And he saw the future this way. >> What happens is when you define infrastructure software you can program it. You make it portable. And that the beauty of this cloud wave what I call DDI's. Now, to your point is every piece of infrastructure from storage, networking, to compute has an API, right? And, and AWS there was an early trend where S3, EBS, EC2 had API. >> As building blocks too. >> As building blocks, exactly. >> Not monolithic. >> Monolithic building blocks every little building bone block has it own API and just like Docker really is the API for this unit of the cloud enables developers to define how they want to build their applications, how to network them know as Wills talked about, and how you want to secure them and how you want to store them. And so the beauty of this generation is now developers are determining how apps are built, not just at the, you know, end user, you know, iPhone app layer the data layer, the storage layer, the networking layer. So every single level is being disrupted by this concept of a DDI and where, how you build use and actually purchase IT has changed. 
And you're seeing the incumbent vendors like Oracle, VMware Microsoft try to react but you're seeing a whole new generation startup. >> Now what Jerry was explaining is that this new abstraction layer that was being built here's some ETR data that quantifies that and shows where we are today. The chart shows net score or spending momentum on the vertical axis and market share which represents the pervasiveness in the survey set. So as Jerry and the innovators who created Docker saw the cloud was becoming prominent and you can see it still has spending velocity that's elevated above that 40% red line which is kind of a magic mark of momentum. And of course, it's very prominent on the X axis as well. And you see the low level infrastructure virtualization and that even floats above servers and storage and networking right. Back in 2013 the conversation with VMware. And by the way, I remember having this conversation deeply at the time with Chad Sakac was we're going to make this low level infrastructure invisible, and we intend to make virtualization invisible, IE simplified. And so, you see above the two arrows there related to containers, container orchestration and container platforms, which are abstraction layers and services above the underlying VMs and hardware. And you can see the momentum that they have right there with the cloud and AI and RPA. So you had these forces that Jerry described that were taking shape, and this picture kind of summarizes how they came together to form Kubernetes. And the upper left, Of course you see AWS and we inserted a picture from a post we did, right after the first reinvent in 2012, it was obvious to us at the time that the cloud gorilla was AWS and had all this momentum. Now, Solomon Hykes, the founder of Docker, you see there in the upper right. He saw the need to simplify the packaging of applications for cloud developers. Here's how he described it. Back in 2014 in theCUBE with John Furrier >> Container is a unit of deployment, right? It's the format in which you package your application all the files, all the executables libraries all the dependencies in one thing that you can move to any server and deploy in a repeatable way. So it's similar to how you would run an iOS app on an iPhone, for example. >> A Docker at the time was a 30% company and it just changed its name from .cloud. And back to the diagram you have Google with a red question mark. So why would you need more than what Docker had created. Craig McLuckie, who was a product manager at Google back then explains the need for yet another abstraction. >> We created the strong separation between infrastructure operations and application operations. And so, Docker has created a portable framework to take it, basically a binary and run it anywhere which is an amazing capability, but that's not enough. You also need to be able to manage that with a framework that can run anywhere. And so, the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use Red Hat open stack deployment. You could run on another major cloud provider like rec. >> Now Google had this huge cloud infrastructure but no commercial cloud business compete with AWS. At least not one that was taken seriously at the time. So it needed a way to change the game. 
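Solomon Hykes' description above of the container as a unit of deployment can be made concrete with a small sketch using the Docker SDK for Python: package the application once as an image, then run it the same way on any host with a Docker engine. The application directory and image tag here are hypothetical.

```python
import docker  # pip install docker; talks to the local Docker engine


def build_and_run(app_dir: str = "./myapp", tag: str = "myapp:1.0") -> str:
    """Package the app (code, libraries, dependencies) into one image,
    then run it; the same image can be shipped to any server and run
    in a repeatable way."""
    engine = docker.from_env()
    engine.images.build(path=app_dir, tag=tag)        # package once
    output = engine.containers.run(tag, remove=True)  # run anywhere a Docker engine exists
    return output.decode() if isinstance(output, bytes) else str(output)


if __name__ == "__main__":
    print(build_and_run())
```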
And it had this thing called Google Borg, which is a container management system and scheduler and Google looked at what was happening with virtualization and said, you know, we obviously could do better Joe Beda, who was with Google at the time explains their mindset going back to the beginning. >> Craig and I started up Google compute engine VM as a service. And the odd thing to recognize is that, nobody who had been in Google for a long time thought that there was anything to this VM stuff, right? Cause Google had been on containers for so long. That was their mindset board was the way that stuff was actually deployed. So, you know, my boss at the time, who's now at Cloudera booted up a VM for the first time, and anybody in the outside world be like, Hey, that's really cool. And his response was like, well now what? Right. You're sitting at a prompt. Like that's not super interesting. How do I run my app? Right. Which is, that's what everybody's been struggling with, with cloud is not how do I get a VM up? How do I actually run my code? >> Okay. So Google never really did virtualization. They were looking at the market and said, okay what can we do to make Google relevant in cloud. Here's Eric Brewer from Google. Talking on theCUBE about Google's thought process at the time. >> One interest things about Google is it essentially makes no use of virtual machines internally. And that's because Google started in 1998 which is the same year that VMware started was kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic Unix processes and communication. And so scaling that up, you get a system that works a lot with just processes and containers. So kind of when I saw containers come along with Docker, we said, well, that's a good model for us. And we can take what we know internally which was called Borg a big scheduler. And we can turn that into Kubernetes and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. >> Now, Eric Brewer gave us the bumper sticker version of the story there. What he reveals in the documentary that I referenced earlier is that initially Google was like, why would we open source our secret sauce to help competitors? So folks like Tim Hockin and Brian Grant who were on the original Kubernetes team, went to management and pressed hard to convince them to bless open sourcing Kubernetes. Here's Hockin's explanation. >> When Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on we realized we know what this is going to we know once you embrace the Docker mindset that you very quickly need something to manage all of your Docker nodes, once you get beyond two or three of them, and we know how to build that, right? We got a ton of experience here. Like we went to our leadership and said, you know, please this is going to happen with us or without us. And I think it, the world would be better if we helped. >> So the open source strategy became more compelling as they studied the problem because it gave Google a way to neutralize AWS's advantage because with containers you could develop on AWS for example, and then run the application anywhere like Google's cloud. So it not only gave developers a path off of AWS. If Google could develop a strong service on GCP they could monetize that play. 
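Joe's "how do I actually run my code?" question is what Docker answered first. A small sketch with the Docker SDK for Python shows the idea Solomon described earlier — the same packaged artifact runs unchanged on any host with a daemon. The image tag is just an arbitrary public image, and a locally running Docker daemon is assumed.

```python
# Sketch: a container image as the portable unit of deployment.
# Requires the "docker" Python package and a running Docker daemon.
import docker

client = docker.from_env()

# Pull a public image once...
client.images.pull("python:3.11-slim")

# ...and run the exact same artifact anywhere a daemon exists:
# a laptop, an on-prem server, or a cloud VM.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('same bits, any host')"],
    remove=True,
)
print(output.decode())
```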
Now, focus your attention back to the diagram which shows this smiling, Alex Polvi from Core OS which was acquired by Red Hat in 2018. And he saw the need to bring Linux into the cloud. I mean, after all Linux was powering the internet it was the OS for enterprise apps. And he saw the need to extend its path into the cloud. Now here's how he described it at an OpenStack event in 2015. >> Similar to what happened with Linux. Like yes, there is still need for Linux and Windows and other OSs out there. But by and large on production, web infrastructure it's all Linux now. And you were able to get onto one stack. And how were you able to do that? It was, it was by having a truly open consistent API and a commitment into not breaking APIs and, so on. That allowed Linux to really become ubiquitous in the data center. Yes, there are other OSs, but Linux buy in large for production infrastructure, what is being used. And I think you'll see a similar phenomenon happen for this next level up cause we're treating the whole data center as a computer instead of trading one in visual instance is just the computer. And that's the stuff that Kubernetes to me and someone is doing. And I think there will be one that shakes out over time and we believe that'll be Kubernetes. >> So Alex saw the need for a dominant container orchestration platform. And you heard him, they made the right bet. It would be Kubernetes. Now Red Hat, Red Hat is been around since 1993. So it has a lot of on-prem. So it needed a future path to the cloud. So they rang up Google and said, hey. What do you guys have going on in this space? So Google, was kind of non-committal, but it did expose that they were thinking about doing something that was you know, pre Kubernetes. It was before it was called Kubernetes. But hey, we have this thing and we're thinking about open sourcing it, but Google's internal debates, and you know, some of the arm twisting from the engine engineers, it was taking too long. So Red Hat said, well, screw it. We got to move forward with OpenShift. So we'll do what Apple and Airbnb and Heroku are doing and we'll build on an alternative. And so they were ready to go with Mesos which was very much more sophisticated than Kubernetes at the time and much more mature, but then Google the last minute said, hey, let's do this. So Clayton Coleman with Red Hat, he was an architect. And he leaned in right away. He was one of the first outside committers outside of Google. But you still led these competing forces in the market. And internally there were debates. Do we go with simplicity or do we go with system scale? And Hen Goldberg from Google explains why they focus first on simplicity in getting that right. >> We had to defend of why we are only supporting 100 nodes in the first release of Kubernetes. And they explained that they know how to build for scale. They've done that. They know how to do it, but realistically most of users don't need large clusters. So why create this complexity? >> So Goldberg explains that rather than competing right away with say Mesos or Docker swarm, which were far more baked they made the bet to keep it simple and go for adoption and ubiquity, which obviously turned out to be the right choice. But the last piece of the puzzle was governance. Now Google promised to open source Kubernetes but when it started to open up to contributors outside of Google, the code was still controlled by Google and developers had to sign Google paper that said Google could still do whatever it wanted. 
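Underneath the simplicity-versus-scale debate is the core job an orchestrator such as Borg or Kubernetes performs: deciding which node runs which workload. The toy sketch below is my own illustration of that bin-packing decision, not Google's algorithm; real schedulers also weigh priorities, affinity, and failure domains.

```python
# Toy illustration of what a container scheduler does: place workloads
# onto nodes that have enough free CPU and memory. Real schedulers
# (Borg, Kubernetes) consider far more: priorities, affinity, failures.
nodes = {"node-a": {"cpu": 4.0, "mem": 8.0}, "node-b": {"cpu": 2.0, "mem": 4.0}}
pods = [
    {"name": "web", "cpu": 1.0, "mem": 2.0},
    {"name": "cache", "cpu": 2.0, "mem": 3.0},
    {"name": "batch", "cpu": 2.0, "mem": 2.0},
]

def schedule(pods, nodes):
    placements = {}
    for pod in pods:
        for name, free in sorted(nodes.items(), key=lambda kv: -kv[1]["cpu"]):
            if free["cpu"] >= pod["cpu"] and free["mem"] >= pod["mem"]:
                free["cpu"] -= pod["cpu"]
                free["mem"] -= pod["mem"]
                placements[pod["name"]] = name
                break
        else:
            placements[pod["name"]] = "unschedulable"
    return placements

print(schedule(pods, nodes))
```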
It could sub license, et cetera. So Google had to pass the Baton to an independent entity and that's how CNCF was started. Kubernetes was its first project. And let's listen to Chris Aniszczyk of the CNCF explain >> CNCF is all about providing a neutral home for cloud native technology. And, you know, it's been about almost two years since our first board meeting. And the idea was, you know there's a certain set of technology out there, you know that are essentially microservice based that like live in containers that are essentially orchestrated by some process, right? That's essentially what we mean when we say cloud native right. And CNCF was seated with Kubernetes as its first project. And you know, as, as we've seen over the last couple years Kubernetes has grown, you know, quite well they have a large community a diverse con you know, contributor base and have done, you know, kind of extremely well. They're one of actually the fastest, you know highest velocity, open source projects out there, maybe. >> Okay. So this is how we got to where we are today. This ETR data shows container orchestration offerings. It's the same X Y graph that we showed earlier. And you can see where Kubernetes lands not we're standing that Kubernetes not a company but respondents, you know, they doing Kubernetes. They maybe don't know, you know, whose platform and it's hard with the ETR taxon economy as a fuzzy and survey data because Kubernetes is increasingly becoming embedded into cloud platforms. And IT pros, they may not even know which one specifically. And so the reason we've linked these two platforms Kubernetes and Red Hat OpenShift is because OpenShift right now is a dominant revenue player in the space and is increasingly popular PaaS layer. Yeah. You could download Kubernetes and do what you want with it. But if you're really building enterprise apps you're going to need support. And that's where OpenShift comes in. And there's not much data on this but we did find this chart from AMDA which show was the container software market, whatever that really is. And Red Hat has got 50% of it. This is revenue. And, you know, we know the muscle of IBM is behind OpenShift. So there's really not hard to believe. Now we've got some other data points that show how Kubernetes is becoming less visible and more embedded under of the hood. If you will, as this chart shows this is data from CNCF's annual survey they had 1800 respondents here, and the data showed that 79% of respondents use certified Kubernetes hosted platforms. Amazon elastic container service for Kubernetes was the most prominent 39% followed by Azure Kubernetes service at 23% in Azure AKS engine at 17%. With Google's GKE, Google Kubernetes engine behind those three. Now. You have to ask, okay, Google. Google's management Initially they had concerns. You know, why are we open sourcing such a key technology? And the premise was, it would level the playing field. And for sure it has, but you have to ask has it driven the monetization Google was after? And I would've to say no, it probably didn't. But think about where Google would've been. If it hadn't open source Kubernetes how relevant would it be in the cloud discussion. Despite its distant third position behind AWS and Microsoft or even fourth, if you include Alibaba without Kubernetes Google probably would be much less prominent or possibly even irrelevant in cloud, enterprise cloud. Okay. 
Let's wrap up with some comments on the state of Kubernetes and maybe a thought or two about, you know, where we're headed. So look, no shocker Kubernetes for all its improbable beginning has gone mainstream in the past year or so. We're seeing much more maturity and support for state full workloads and big ecosystem support with respect to better security and continued simplification. But you know, it's still pretty complex. It's getting better, but it's not VMware level of maturity. For example, of course. Now adoption has always been strong for Kubernetes, for cloud native companies who start with containers on day one, but we're seeing many more. IT organizations adopting Kubernetes as it matures. It's interesting, you know, Docker set out to be the system of the cloud and Kubernetes has really kind of become that. Docker desktop is where Docker's action really is. That's where Docker is thriving. It sold off Docker swarm to Mirantis has made some tweaks. Docker has made some tweaks to its licensing model to be able to continue to evolve its its business. To hear more about that at DockerCon. And as we said, years ago we expected Kubernetes to become less visible Stu Miniman and I talked about this in one of our predictions post and really become more embedded into other platforms. And that's exactly what's happening here but it's still complicated. Remember, remember the... Go back to the early and mid cycle of VMware understanding things like application performance you needed folks in lab coats to really remediate problems and dig in and peel the onion and scale the system you know, and in some ways you're seeing that dynamic repeated with Kubernetes, security performance scale recovery, when something goes wrong all are made more difficult by the rapid pace at which the ecosystem is evolving Kubernetes. But it's definitely headed in the right direction. So what's next for Kubernetes we would expect further simplification and you're going to see more abstractions. We live in this world of almost perpetual abstractions. Now, as Kubernetes improves support from multi cluster it will be begin to treat those clusters as a unified group. So kind of abstracting multiple clusters and treating them as, as one to be managed together. And this is going to create a lot of ecosystem focus on scaling globally. Okay, once you do that, you're going to have to worry about latency and then you're going to have to keep pace with security as you expand the, the threat area. And then of course recovery what happens when something goes wrong, more complexity, the harder it is to recover and that's going to require new services to share resources across clusters. So look for that. You also should expect more automation. It's going to be driven by the host cloud providers as Kubernetes supports more state full applications and begins to extend its cluster management. Cloud providers will inject as much automation as possible into the system. Now and finally, as these capabilities mature we would expect to see better support for data intensive workloads like, AI and Machine learning and inference. Schedule with these workloads becomes harder because they're so resource intensive and performance management becomes more complex. So that's going to have to evolve. 
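As a rough illustration of where that multi-cluster abstraction starts, the sketch below loops over several kubeconfig contexts and builds one unified view. The context names are placeholders assumed to exist in a local kubeconfig, and a real fleet manager obviously does far more than count pods.

```python
# Rough sketch of treating several clusters as one group: loop over
# kubeconfig contexts (e.g. an EKS, an AKS and a GKE cluster) and build
# a single, unified view. Context names are placeholders.
from kubernetes import client, config

contexts = ["prod-eks", "prod-aks", "prod-gke"]  # assumed to exist in ~/.kube/config
fleet_view = {}

for ctx in contexts:
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=ctx))
    pods = api.list_pod_for_all_namespaces(watch=False)
    fleet_view[ctx] = len(pods.items)

print(fleet_view)  # e.g. {'prod-eks': 412, 'prod-aks': 187, 'prod-gke': 95}
```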
I mean, frankly, many of the things that the Kubernetes team back-burnered early on — capabilities you saw, for example, in Docker Swarm or Mesos — are going to start to enter the scene now, as Kubernetes begins to prioritize some of those more complex functions. Now, the last thing I'll ask you to think about is what's next beyond Kubernetes. This isn't the end state, right? With serverless, IoT at the edge, and new data-heavy workloads, there's something out there that's going to disrupt Kubernetes. And by the way, in that CNCF survey, nearly 40% of respondents were using serverless, and that's going to keep growing. So how is that going to change the development model? You know, Andy Jassy once famously said that if they had to start over with Amazon retail, they'd start with serverless. So let's keep an eye on the horizon to see what's coming next. All right, that's it for now. I want to thank my colleagues: Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team, who also manages the Breaking Analysis podcast. Kristin Martin and Cheryl Knight help get the word out on socials, so thanks to all of you. Remember, these episodes are all available as podcasts wherever you listen — just search "Breaking Analysis podcast." Don't forget to check out the ETR website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me — email me directly at david.vellante@siliconangle.com or DM me @dvellante — and you can comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, everybody. Thanks for watching. Stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : Feb 12 2022


Did HPE GreenLake Just Set a New Bar in the On-Prem Cloud Services Market?


 

>> Welcome back to The Cube's coverage of HPE's GreenLake announcements. My name is Dave Vellante and you're watching the Cube. I'm here with Holger Mueller, who is an analyst at Constellation Research. And Matt Maccaux is the global field CTO of Ezmeral software at HPE. We're going to talk data. Gents, great to see you. >> Holger: Great to be here. >> So, Holger, what do you see happening in the data market? Obviously data's hot, you know, digital, I call it the force marks to digital. Everybody realizes wow, digital business, that's a data business. We've got to get our data act together. What do you see in the market is the big trends, the big waves? >> We are all young enough or old enough to remember when people were saying data is the new oil, right? Nothing has changed, right? Data is the key ingredient, which matters to enterprise, which they have to store, which they have to enrich, which they have to use for their decision-making. It's the foundation of everything. If you want to go into machine learning or (indistinct) It's growing very fast, right? We have the capability now to look at all the data in enterprise, which weren't able 10 years ago to do that. So data is main center to everything. >> Yeah, it's even more valuable than oil, I think, right? 'Cause with oil, you can only use once. Data, you can, it's kind of polyglot. I can go in different directions and it's amazing, right? >> It's the beauty of digital products, right? They don't get consumed, right? They don't get fired up, right? And no carbon footprint, right? "Oh wait, wait, we have to think about carbon footprint." Different story, right? So to get to the data, you have to spend some energy. >> So it's that simple, right? I mean, it really is. Data is fundamental. It's got to be at the core. And so Matt, what are you guys announcing today, and how does that play into what Holger just said? >> What we're announcing today is that organizations no longer need to make a difficult choice. Prior to today, organizations were thinking if I'm going to do advanced machine learning and really exploit my data, I have to go to the cloud. But all my data's still on premises because of privacy rules, industry rules. And so what we're announcing today, through GreenLake Services, is a cloud services way to deliver that same cloud-based analytical capability. Machine learning, data engineering, through hybrid analytics. It's a unified platform to tie together everything from data engineering to advance data science. And we're also announcing the world's first Kubernetes native object store, that is hybrid cloud enabled. Which means you can keep your data connected across clouds in a data fabric, or Dave, as you say, mesh. >> Okay, can we dig into that a little bit? So, you're essentially saying that, so you're going to have data in both places, right? Public cloud, edge, on-prem, and you're saying, HPE is announcing a capability to connect them, I think you used the term fabric. I'm cool, by the way, with the term fabric, we can, we'll parse that out another time. >> I love for you to discuss textiles. Fabrics vs. mesh. For me, every fabric breaks down to mesh if you put it on a microscope. It's the same thing. >> Oh wow, now that's really, that's too detailed for my brain, right this moment. But, you're saying you can connect all those different estates because data by its very nature is everywhere. You're going to unify that, and what, that can manage that through sort of a single view? >> That's right. 
So, the management is centralized. We need to be able to know where our data is being provisioned. But again, we don't want organizations to feel like they have to make the trade off. If they want to use cloud surface A in Azure, and cloud surface B in GCP, why not connect them together? Why not allow the data to remain in sync or not, through a distributed fabric? Because we use that term fabric over and over again. But the idea is let the data be where it most naturally makes sense, and exploit it. Monetization is an old tool, but exploit it in a way that works best for your users and applications. >> In sync or not, that's interesting. So it's my choice? >> That's right. Because the back of an automobile could be a teeny tiny, small edge location. It's not always going to be in sync until it connects back up with a training facility. But we still need to be able to manage that. And maybe that data gets persisted to a core data center. Maybe it gets pushed to the cloud, but we still need to know where that data is, where it came from, its lineage, what quality it has, what security we're going to wrap around that, that all should be part of this fabric. >> Okay. So, you've got essentially a governance model, at least maybe you're working toward that, and maybe it's not all baked today, but that's the north star. Is this fabric connect, single management view, governed in a federated fashion? >> Right. And it's available through the most common API's that these applications are already written in. So, everybody today's talking S3. I've got to get all of my data, I need to put it into an object store, it needs to be S3 compatible. So, we are extending this capability to be S3 native. But it's optimized for performance. Today, when you put data in an object store, it's kind of one size fits all. Well, we know for those streaming analytical capabilities, those high performance workloads, it needs to be tuned for that. So, how about I give you a very small object on the very fastest disk in your data center and maybe that cheaper location somewhere else. And so we're giving you that balance as part of the overall management estate. >> Holger, what's your take on this? I mean, Frank Slootman says we'll never, we're not going halfway house. We're never going to do on-prem, we're only in the cloud. So that basically says, okay, he's ignoring a pretty large market by choice. You're not, Matt, you must love those words. But what do you see as the public cloud players, kind of the moves on-prem, particularly in this realm? >> Well, we've seen lots of cloud players who were only cloud coming back towards on-premise, right? We call it the next generation compute platform where I can move data and workloads between on-premise and ideally, multiple clouds, right? Because I don't want to be logged into public cloud vendors. And we see two trends, right? One trend is the traditional hardware supplier of on-premise has not scaled to cloud technology in terms of big data analytics. They just missed the boat for that in the past, this is changing. You guys are a traditional player and changing this, so congratulations. The other thing, is there's been no innovation for the on-premise tech stack, right? The only technology stack to run modern application has been invested for a long time in the cloud. So what we see since two, three years, right? With the first one being Google with Kubernetes, that are good at GKE on-premise, then onto us, right? 
Bringing their tech stack with compromises to on-premises, right? Acknowledging exactly what we're talking about, the data is everywhere, data is important. Data gravity is there, right? It's just the network's fault, where the networks are too slow, right? If you could just move everything anywhere we want like juggling two balls, then we'd be in different place. But that's the not enough investment for the traditional IT players for that stack, and the modern stack being there. And now every public cloud player has an on-premise offering with different flavors, different capabilities. >> I want to give you guys Dave's story of kind of history and you can kind of course correct, and tell me how this, Matt, maybe fits into what's happened with customers. So, you know, before Hadoop, obviously you had to buy a big Oracle database and you know, you running Unix, and you buy some big storage subsystem if you had any money left over, you know, you maybe, you know, do some actual analytics. But then Hadoop comes in, lowers the cost, and then S3 kneecaps the entire Hadoop market, right? >> I wouldn't say that, I wouldn't agree. Sorry to jump on your history. Because the fascinating thing, what Hadoop brought to the enterprise for the first time, you're absolutely right, affordable, right, to do that. But it's not only about affordability because S3 as the affordability. The big thing is you can store information without knowing how to analyze it, right? So, you mentioned Snowflake, right? Before, it was like an Oracle database. It was Starschema for data warehouse, and so on. You had to make decisions how to store that data because compute capabilities, storage capabilities, were too limited, right? That's what Hadoop blew away. >> I agree, no schema on, right. But then that created data lakes, which create a data swamps, and that whole mess, and then Spark comes in and help clean it out, okay, fine. So, we're cool with that. But the early days of Hadoop, you had, companies would have a Hadoop monolith, they probably had their data catalog in Excel or Google sheets, right? And so now, my question to you, Matt, is there's a lot of customers that are still in that world. What do they do? They got an option to go to the cloud. I'm hearing that you're giving them another option? >> That's right. So we know that data is going to move to the cloud, as I mentioned. So let's keep that data in sync, and governed, and secured, like you expect. But for the data that can't move, let's bring those cloud native services to your data center. And so that's a big part of this announcement is this unified analytics. So that you can continue to run the tools that you want to today while bringing those next generation tools based on Apache Spark, using libraries like Delta Lake so you can go anything from Tableaux through Presto sequel, to advance machine learning in your Jupiter notebooks on-premises where you know your data is secured. And if it happens to sit in existing Hadoop data lake, that's fine too. We don't want our customers to have to make that trade off as they go from one to the other. Let's give you the best of both worlds, or as they say, you can eat your cake and have it too. >> Okay, so. Now let's talk about sort of developers on-prem, right? They've been kind of... If they really wanted to go cloud native, they had to go to the cloud. Do you feel like this changes the game? Do on-prem developers, do they want that capability? Will they lean into that capability? 
Or will they say no, no, the cloud is cool. What's your take? >> I love developers, right? But it's about who makes the decision, who pays the developers, right? So the CXOs in the enterprises, they need exactly, this is why we call the next-gen computing platform, that you can move your code assets. It's very hard to build software, so it's very valuable to an enterprise. I don't want to have limited to one single location or certain computing infrastructure, right? Luckily, we have Kubernetes to be able to move that, but I want to be able to deploy it on-premise if I have to. I want to deploy it, would be able to deploy in the multiple clouds which are available. And that's the key part. And that makes developers happy too, because the code you write has got to run multiple places. So you can build more code, better code, instead of building the same thing multiple places, because a little compiler change here, a little compiler change there. Nobody wants to do portability testing and rewriting, recertified for certain platforms. >> The head of application development or application architecture and the business are ultimately going to dictate that, number one. Number two, you're saying that developers shouldn't care because it can write once, run anywhere. >> That is the promise, and that's the interesting thing which is available now, 'cause people know, thanks to Kubernetes as a container platform and the abstraction which containers provide, and that makes everybody's life easier. But it goes much more higher than the Head of Apps, right? This is the digital transformation strategy, the next generation application the company has to build as a response to a pandemic, as a pivot, as digital transformation, as digital disruption capability. >> I mean, I see a lot of organizations basically modernizing by building some kind of abstraction to their backend systems, modernizing it through cloud native, and then saying, hey, as you were saying Holger, run it anywhere you want, or connect to those cloud apps, or connect across clouds, connect to other on-prem apps, and eventually out to the edge. Is that what you see? >> It's so much easier said than done though. Organizations have struggled so much with this, especially as we start talking about those data intensive app and workloads. Kubernetes and Hadoop? Up until now, organizations haven't been able to deploy those services. So, what we're offering as part of these GreenLake unified analytics services, a Kubernetes runtime. It's not ours. It's top of branch open source. And open source operators like Apache Spark, bringing in Delta Lake libraries, so that if your developer does want to use cloud native tools to build those next generation advanced analytics applications, but prod is still on-premises, they should just be able to pick that code up, and because we are deploying 100% open-source frameworks, the code should run as is. >> So, it seems like the strategy is to basically build, now that's what GreenLake is, right? It's a cloud. It's like, hey, here's your options, use whatever you want. >> Well, and it's your cloud. That's, what's so important about GreenLake, is it's your cloud, in your data center or co-lo, with your data, your tools, and your code. 
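Because the unified analytics runtime Matt describes is built from stock open-source pieces — Apache Spark with Delta Lake libraries — code written against those APIs should carry over as-is. Here is a generic, minimal Delta sketch of my own, not HPE-specific; it assumes the delta-spark package is installed and that a local path is acceptable for the table.

```python
# Generic Delta Lake sketch: write a versioned table, read it back,
# then time-travel to an earlier version. Assumes delta-spark is
# installed and the Spark session is configured for Delta.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("delta-sketch")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.createDataFrame([(1, "pump"), (2, "valve")], ["id", "part"])
df.write.format("delta").mode("overwrite").save("/tmp/parts_delta")

# Read current data, then read version 0 explicitly (time travel).
spark.read.format("delta").load("/tmp/parts_delta").show()
spark.read.format("delta").option("versionAsOf", 0).load("/tmp/parts_delta").show()
```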
And again, we know that organizations are going to go to a multi or hybrid cloud location and through our management capabilities, we can reach out if you don't want us to control those, not necessarily, that's okay, but we should at least be able to monitor and audit the data that sits in those other locations, the applications that are running, maybe I register your GKE cluster. I don't manage it, but at least through a central pane of glass, I can tell the Head of Applications, what that person's utilization is across these environments. >> You know, and you said something, Matt, that struck, resonated with me, which is this is not trivial. I mean, not as simple to do. I mean what you see, you see a lot of customers or companies, what they're doing, vendors, they'll wrap their stack in Kubernetes, shove it in the cloud, it's essentially hosted stack, right? And, you're kind of taking a different approach. You're saying, hey, we're essentially building a cloud that's going to connect all these estates. And the key is you're going to have to keep, and you are, I think that's probably part of the reason why we're here, announcing stuff very quickly. A lot of innovation has to come out to satisfy that demand that you're essentially talking about. >> Because we've oversimplified things with containers, right? Because containers don't have what matters for data, and what matters for enterprise, which is persistence, right? I have to be able to turn my systems down, or I don't know when I'm going to use that data, but it has to stay there. And that's not solved in the container world by itself. And that's what's coming now, the heavy lifting is done by people like HPE, to provide that persistence of the data across the different deployment platforms. And then, there's just a need to modernize my on-premise platforms. Right? I can't run on a server which is two, three years old, right? It's no longer safe, it doesn't have trusted identity, all the good stuff that you need these days, right? It cannot be operated remotely, or whatever happens there, where there's two, three years, is long enough for a server to have run their course, right? >> Well you're a software guy, you hate hardware anyway, so just abstract that hardware complexity away from you. >> Hardware is the necessary evil, right? It's like TSA. I want to go somewhere, but I have to go through TSA. >> But that's a key point, let me buy a service, if I need compute, give it to me. And if I don't, I don't want to hear about it, right? And that's kind of the direction that you're headed. >> That's right. >> Holger: That's what you're offering. >> That's right, and specifically the services. So GreenLake's been offering infrastructure, virtual machines, IaaS, as a service. And we want to stop talking about that underlying capability because it's a dial tone now. What organizations and these developers want is the service. Give me a service or a function, like I get in the cloud, but I need to get going today. I need it within my security parameters, access to my data, my tools, so I can get going as quickly as possible. And then beyond that, we're going to give you that cloud billing practices. Because, just because you're deploying a cloud native service, if you're still still being deployed via CapEx, you're not solving a lot of problems. So we also need to have that cloud billing model. >> Great. Well Holger, we'll give you the last word, bring us home. 
>> It's very interesting to have the cloud qualities of subscription-based pricing maintained by HPE as the cloud vendor from somewhere else. And that gives you that flexibility. And that's very important because data is essential to enterprise processes. And there's three reasons why data doesn't go to the cloud, right? We know that. It's privacy residency requirement, there is no cloud infrastructure in the country. It's performance, because network latency plays a role, right? Especially for critical appraisal. And then there's not invented here, right? Remember Charles Phillips saying how old the CIO is? I know if they're going to go to the cloud or not, right? So, it was not invented here. These are the things which keep data on-premise. You know that load, and HP is coming on with a very interesting offering. >> It's physics, it's laws, it's politics, and sometimes it's cost, right? Sometimes it's too expensive to move and migrate. Guys, thanks so much. Great to see you both. >> Matt: Dave, it's always a pleasure. All right, and thank you for watching the Cubes continuous coverage of HPE's big GreenLake announcements. Keep it right there for more great content. (calm music begins)
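One closing illustration of the S3-native object store Matt described earlier in the conversation: because it speaks the S3 API, existing S3 tooling should only need a different endpoint. The endpoint URL, credentials, and bucket below are placeholder assumptions, not real GreenLake values.

```python
# Sketch: the same S3 client code, pointed at an on-prem S3-compatible
# endpoint instead of AWS. Endpoint, keys and bucket are placeholders.
import boto3

on_prem = boto3.client(
    "s3",
    endpoint_url="https://objects.datacenter.example.com",  # assumed endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

on_prem.put_object(Bucket="edge-telemetry",
                   Key="car-42/2021-09-28.parquet",
                   Body=b"raw edge telemetry bytes")

for obj in on_prem.list_objects_v2(Bucket="edge-telemetry").get("Contents", []):
    print(obj["Key"], obj["Size"])
```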

Published Date : Sep 28 2021


Ajay Singh, Pure Storage | CUBEconversation


 

(upbeat music) >> The Cloud essentially turned the data center into an API and ushered in the era of programmable infrastructure, no longer do we think about deploying infrastructure in rigid silos with a hardened, outer shell, rather infrastructure has to facilitate digital business strategies. And what this means is putting data at the core of your organization, irrespective of its physical location. It also means infrastructure generally and storage specifically must be accessed as sets of services that can be discovered, deployed, managed, secured, and governed in a DevOps model or OpsDev, if you prefer. Now, this has specific implications as to how vendor product strategies will evolve and how they'll meet modern data requirements. Welcome to this Cube conversation, everybody. This is Dave Vellante. And with me to discuss these sea changes is Ajay Singh, the Chief Product Officer of Pure Storage, Ajay welcome. >> Thank you, David, gald to be on. >> Yeah, great to have you, so let's talk about your role at Pure. I think you're the first CPO, what's the vision there? >> That's right, I just joined up Pure about eight months ago from VMware as the chief product officer and you're right, I'm the first our chief product officer at Pure. And at VMware I ran the Cloud management business unit, which was a lot about automation and infrastructure as code. And it's just great to join Pure, which has a phenomenal all flash product set. I kind of call it the iPhone or flash story super easy to use. And how do we take that same ease of use, which is a heart of a Cloud operating principle, and how do we actually take it up to really deliver a modern data experience, which includes infrastructure and storage as code, but then even more beyond that and how do you do modern operations and then modern data services. So super excited to be at Pure. And the vision, if you may, at the end of the day, is to provide, leveraging this moderate experience, a connected and effortless experience data experience, which allows customers to ultimately focus on what matters for them, their business, and by really leveraging and managing and winning with their data, because ultimately data is the new oil, if you may, and if you can mine it, get insights from it and really drive a competitive edge in the digital transformation in your head, and that's what be intended to help our customers to. >> So you joined earlier this year kind of, I guess, middle of the pandemic really I'm interested in kind of your first 100 days, what that was like, what key milestones you set and now you're into your second a 100 plus days. How's that all going? What can you share with us in and that's interesting timing because the effects of the pandemic you came in in a kind of post that, so you had experience from VMware and then you had to apply that to the product organization. So tell us about that sort of first a 100 days and the sort of mission now. >> Absolutely, so as we talked about the vision, around the modern data experience, kind of have three components to it, modernizing the infrastructure and really it's kudos to the team out of the work we've been doing, a ton of work in modernizing the infrastructure, I'll briefly talk to that, then modernizing the data, much more than modernizing the operations. I'll talk to that as well. And then of course, down the pike, modernizing data services. 
So if you think about it from modernizing the infrastructure, if you think about Pure for a minute, Pure is the first company that took flash to mainstream, essentially bringing what we call consumer simplicity to enterprise storage. The manual for the products with the front and back of a business card, that's it, you plug it in, boom, it's up and running, and then you get proactive AI driven support, right? So that was kind of the heart of Pure. Now you think about Pure again, what's unique about Pure has been a lot of our competition, has dealt with flash at the SSD level, hey, because guess what? All this software was built for hard drive. And so if I can treat NAND as a solid state drive SSD, then my software would easily work on it. But with Pure, because we started with flash, we released went straight to the NAND level, and as opposed to kind of the SSD layer, and what that does is it gives you greater efficiency, greater reliability and create a performance compared to an SSD, because you can optimize at the chip level as opposed to at the SSD module level. That's one big advantage that Pure has going for itself. And if you look at the physics, in the industry for a minute, there's recent data put out by Wikibon early this year, effectively showing that by the year 2026, flash on a dollar per terabyte basis, just the economics of the semiconductor versus the hard disk is going to be cheaper than hard disk. So this big inflection point is slowly but surely coming that's going to disrupt the hardest industry, already the high end has been taken over by flash, but hybrid is next and then even the long tail is coming up over there. And so to end to that extent our lead, if you may, the introduction of QLC NAND, QLC NAND powerful competition is barely introducing, we've been at it for a while. We just recently this year in my first a 100 days, we introduced the flasher AC, C40 and C60 drives, which really start to open up our ability to go after the hybrid story market in a big way. It opens up a big new market for us. So great work there by the team,. Also at the heart of it. If you think about it in the NAND side, we have our flash array, which is a scale-up latency centric architecture and FlashBlade which is a scale-out throughput architecture, all operating with NAND. And what that does is it allows us to cover both structured data, unstructured data, tier one apps and tier two apps. So pretty broad data coverage in that journey to the all flash data center, slowly but surely we're heading over there to the all flash data center based on demand economics that we just talked about, and we've done a bunch of releases. And then the team has done a bunch of things around introducing and NVME or fabric, the kind of thing that you expect them to do. A lot of recognition in the industry for the team or from the likes of TrustRadius, Gartner, named FlashRay, the Carton Peer Insights, the customer choice award and primary storage in the MQ. We were the leader. So a lot of kudos and recognition coming to the team as a result, Flash Blade just hit a billion dollars in cumulative revenue, kind of a leader by far in kind of the unstructured data, fast file an object marketplace. And then of course, all the work we're doing around what we say, ESG, environmental, social and governance, around reducing carbon footprint, reducing waste, our whole notion of evergreen and non-disruptive upgrades. 
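The Wikibon flash-versus-disk claim Ajay cites is an economics argument, so a tiny model shows the shape of it. Every number below is an invented placeholder purely to demonstrate the calculation; the real starting costs and annual decline rates would come from the analyst data, not from this sketch.

```python
# Toy cost-crossover model: when does flash $/TB drop below disk $/TB?
# All starting prices and decline rates are made-up placeholders.
flash_cost_per_tb, disk_cost_per_tb = 100.0, 25.0   # assumed 2021 $/TB
flash_decline, disk_decline = 0.30, 0.05            # assumed annual price drops

year = 2021
while flash_cost_per_tb > disk_cost_per_tb and year < 2040:
    flash_cost_per_tb *= (1 - flash_decline)
    disk_cost_per_tb *= (1 - disk_decline)
    year += 1
    print(year, round(flash_cost_per_tb, 2), round(disk_cost_per_tb, 2))

print("Crossover year under these assumptions:", year)
```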
We also kind of did a lot of work in that where we actually announced that over 2,700 customers have actually done non-disruptive upgrades over the technology. >> Yeah a lot to unpack there. And a lot of this sometimes you people say, oh, it's the plumbing, but the plumbing is actually very important too. 'Cause we're in a major inflection point, when we went from spinning disk to NAND. And it's all about volumes, you're seeing this all over the industry now, you see your old boss, Pat Gelsinger, is dealing with this at Intel. And it's all about consumer volumes in my view anyway, because thanks to Steve Jobs, NAND volumes are enormous and what two hard disk drive makers left in the planet. I don't know, maybe there's two and a half, but so those volumes drive costs down. And so you're on that curve and you can debate as to when it's going to happen, but it's not an if it's a when. Let me, shift gears a little bit. Because Cloud, as I was saying, it's ushered in this API economy, this as a service model, a lot of infrastructure companies have responded. How are you thinking at Pure about the as a service model for your customers? What's the strategy? How is it evolving and how does it differentiate from the competition? >> Absolutely, a great question. It's kind of segues into the second part of the moderate experience, which is how do you modernize the operations? And that's where automation as a service, because ultimately, the Cloud has validated and the address of this model, right? People are looking for outcomes. They care less about how you get there. They just want the outcome. And the as a service model actually delivers these outcomes. And this whole notion of infrastructure as code is kind of the start of it. Imagine if my infrastructure for a developer is just a line of code, in a Git repository in a program that goes through a CICD process and automatically kind of is configured and set up, fits in with the Terraform, the Ansibles, all that different automation frameworks. And so what we've done is we've gone down the path of really building out what I think is modern operations with this ability to have storage as code, disability, in addition modern operations is not just storage scored, but also we've got recently introduced some comprehensive ransomware protection, that's part of modern operations. There's all the threat you hear in the news or ransomware. We introduced what we call safe mode snapshots that allow you to recover in literally seconds. When you have a ransomware attack, we also have in the modern operations Pure one, which is maybe the leader in AI driven support to prevent downtime. We actually call you 80% of the time and fix the problems without you knowing about it. That's what modern operations is all about. And then also Martin operations says, okay, you've got flash on your on-prem side, but even maybe using flash in the public Cloud, how can I have seamless multi-Cloud experience in our Cloud block store we've introduced around Amazon, AWS and Azure allows one to do that. And then finally, for modern applications, if you think about it, this whole notion of infrastructure's code, as a service, software driven storage, the Kubernetes infrastructure enables one to really deliver a great automation framework that enables to reduce the labor required to manage the storage infrastructure and deliver it as code. 
And we have, kudos to Charlie and the Pure storage team before my time with the acquisition of Portworx, Portworx today is truly delivers true storage as code orchestrated entirely through Kubernetes and in a multi-Cloud hybrid situation. So it can run on EKS, GKE, OpenShift rancher, Tansu, recently announced as the leader by giggle home for enterprise Kubernetes storage. We were really proud about that asset. And then finally, the last piece are Pure as a service. That's also all outcome oriented, SLS. What matters is you sign up for SLS, and then you get those SLS, very different from our competition, right? Our competition tends to be a lot more around financial engineering, hey, you can buy it OPEX versus CapEx. And, but you get the same thing with a lot of professional services, we've really got, I'd say a couple of years and lead on, actually delivering and managing with SRE engineers for the SLA. So a lot of great work there. We recently also introduced Cisco FlashStack, again, flash stack as a service, again, as a service, a validation of that. And then finally, we also recently did a announcement with Aquaponics, with their bare metal as a service where we are a key part of their bare metal as a service offering, again, pushing the kind of the added service strategy. So yes, big for us, that's where the buck is skating, half the enterprises, even on prem, wanting to consume things in the Cloud operating model. And so that's where we're putting it lot. >> I see, so your contention is, it's not just this CapEx to OPEX, that's kind of the, during the economic downturn of 2007, 2008, the economic crisis, that was the big thing for CFOs. So that's kind of yesterday's news. What you're saying is you're creating a Cloud, like operating model, as I was saying upfront, irrespective of physical location. And I see that as your challenge, the industry's challenge, be, if I'm going to effect the digital transformation, I don't want to deal with the Cloud primitives. I want you to hide the underlying complexity of that Cloud. I want to deal with higher level problems, but so that brings me to digital transformation, which is kind of the now initiative, or I even sometimes call it the mandate. There's not a one size fits all for digital transformation, but I'm interested in your thoughts on the must take steps, universal steps that everybody needs to think about in a digital transformation journey. >> Yeah, so ultimately the digital transformation is all about how companies are gain a competitive edge in this new digital world or that the company are, and the competition are changing the game on, right? So you want to make sure that you can rapidly try new things, fail fast, innovate and invest, but speed is of the essence, agility and the Cloud operating model enables that agility. And so what we're also doing is not only are we driving agility in a multicloud kind of data, infrastructure, data operation fashion, but we also taking it a step further. We were also on the journey to deliver modern data services. Imagine on a Pure on-prem infrastructure, along with your different public Clouds that you're working on with the Kubernetes infrastructures, you could, with a few clicks run Kakfa as a service, TensorFlow as a service, Mongo as a service. So me as a technology team can truly become a service provider and not just an on-prem service provider, but a multi-Cloud service provider. 
Such that these services can be used to analyze the data that you have, not only your data, your partner data, third party public data, and how you can marry those different data sets, analyze it to deliver new insights that ultimately give you a competitive edge in the digital transformation. So you can see data plays a big role there. The data is what generates those insights. Your ability to match that data with partner data, public data, your data, the analysis on it services ready to go, as you get the digital, as you can do the insights. You can really start to separate yourself from your competition and get on the leaderboard a decade from now when this digital transformation settles down. >> All right, so bring us home, Ajay, summarize what does a modern data strategy look like and how does it fit into a digital business or a digital organization? >> So look, at the end of the day, data and analysis, both of them play a big role in the digital transformation. And it really comes down to how do I leverage this data, my data, partner data, public data, to really get that edge. And that links back to a vision. How do we provide that connected and effortless, modern data experience that allows our customers to focus on their business? How do I get the edge in the digital transformation? But easily leveraging, managing and winning with their data. And that's the heart of where Pure is headed. >> Ajay Singh, thanks so much for coming inside theCube and sharing your vision. >> Thank you, Dave, it was a real pleasure. >> And thank you for watching this Cube conversation. This is Dave Vellante and we'll see you next time. (upbeat music)
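Mechanically, the "marry your data with partner and public data" step Ajay described is a join. A trivial sketch with pandas follows; the file names and columns are invented for illustration only.

```python
# Trivial sketch of blending first-party and public data to get a new
# signal. File names and columns are invented placeholders.
import pandas as pd

own_sales = pd.read_csv("store_sales.csv")        # e.g. store_id, week, units
public_weather = pd.read_csv("noaa_weekly.csv")   # e.g. store_id, week, avg_temp

blended = own_sales.merge(public_weather, on=["store_id", "week"], how="left")
print(blended[["avg_temp", "units"]].corr())  # toy signal: demand vs. temperature
```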

Published Date : Aug 18 2021


Danny Allan, Veeam | VeeamON 2021


 

(upbeat music) >> 2020 was the most unpredictable year of our lives. A forced shutdown of global economies led everyone to the conclusion that tech industry spending would decline, and of course it did — but you'd hardly know it if you watched the stock market and the momentum of several well-positioned companies. Firms with products and services that catered to the pivot to work from home, SaaS-based solutions focused on business resiliency, and cloud all saw huge growth. The forced march to digital turned a buzzword into reality overnight, where if you weren't a digital business, you were out of business. And one of the companies participating in that growth trend was Veeam. VeeamON virtual is scheduled to take place on May 25th and 26th. It's one of our favorite physical events, and theCUBE will be there again as a virtual participant. One of our traditions prior to VeeamON has always been to bring an executive into the Cube and talk not only about what to expect at the show, but about what's happening in the market. And with me is many-time Cube alum Danny Allan, the chief technology officer at Veeam. Danny, welcome — it's always great to see you. >> I am delighted to be here again. Disappointed it's virtual, but excited to talk with you. >> Yeah, me too. You know, it's coming, we're getting jabbed, but you look at the surprises here. I mean, look at the chip shortage — everybody thought, oh, well, stop ordering chips. I mean furniture, cars, et cetera. And it's just kind of crazy. What was your expectation going into the pandemic, and what did you actually see looking back? >> Well, it's funny, you never know what's going to happen. And for the first few weeks I would say there was a lot of disruption, because all of a sudden you have people who've been going into an office for a long time working from home, and, you know, from an R&D perspective at Veeam those people weren't used to working from home. So there was a lot of uncertainty, I'll say, for the first three or four weeks, but what very quickly picked up was the opportunity to focus more specifically on delivering things for our customers. And one of the things that just exploded, obviously, was the use of digital technologies like Slack and Microsoft Teams. And as you say, Veeam was well positioned to help customers as they moved towards this new normal, as they say. >> So what were some of the growth vectors that you saw specifically that were helping your customers get through this time? >> Yeah, well, people always associate Veeam with data protection for the virtual environment, but two things really stood out last year as our emerging markets. One was Office365, and I think that's due to the uptake of Microsoft Teams. I mean, if you look at the Microsoft results, you can see that people are doing SaaS. And we were very well positioned to take advantage of that and help customers move towards collaborating online. So that was a huge growth vector for us. And the second one was cloud. We had more data moving to cloud than ever before in Veeam's history. And that continues on into 2021. >> You guys, well, yeah, let's talk more about that SaaS piece of your business. You were very early on in terms of SaaS data protection. You kind of had to educate the market. People were like, well, why do I need to back up my SaaS data — doesn't the cloud provider do that? So you had to educate; you were early, but it's really paid off.
Maybe talk about how that trend has benefited some of your customers. >> Yeah. So if you go back four years, we didn't even have data protection for Office365. And over the last four years, we've emerged into the market leader the largest in protection for Office365. And as you say, it was about education. Early on people knew that they needed to protect exchange when they ran it on premises. And when they first went to the cloud there was this expectation of, Hey Microsoft or my provider will do that for me. And very quickly they realized that's not the case and there's still the same threats. It might not be hardware failure, but certainly misconfiguration or deleted items or ransomware in 2021, sorry, 2020 was massive. And so we do data protection for Exchange, for SharePoint online, for one drive, and most recently for Microsoft teams. And so that data protection obviously helps organizations as they adopt Office365 and SaaS technologies. >> I sent him my last breaking analysis. When you look at the ETR data, Veeam has been really steady. You know, some competitors spike up and come down, others, you know, maybe aren't doing so well or the larger established players don't have as much momentum. It just seems like Veeam even though you cross the billion dollar revenue mark, you've been able to keep that spending momentum up. And I think it's, I would observe it's a function of your ability to identify that the waves and ride those waves and anticipate them. We just talked about SAS, talked about virtualization. You were there cloud, we'll talk about that more as well, plus your execution. It seems like since the acquisition by insight you guys have continued to execute. I wonder if you could help the audience understand how do you think about the phases that Veeam has gone through in its ascendancy and where you're headed? >> Yeah. And so I look at it as three things, it's having the right product, but it's not just enough to have the right product, the right product it's the right timing and it's the right execution. So if you think about where Veeam started, it was all in data protection for vSphere, for the hypervisor. And that was right at the time when VMware was taking off and the modernized data center was being virtualized. And so that helped us grow, I'll say into a $600 million company, but then about four years ago, we see the ascendancy of SaaS and specifically Office365. And so, you know, we weren't first to market but I would argue the timing with the best product, with the right execution has turned that into a massive a very significant contribution to our bottom line. And then actually the third wave through 2020 is the adoption of cloud. We moved last year, 242 petabytes into cloud storage and already in the first quarter of 2021, we've moved to 100 petabytes. So there's this massive adoption or migration of data into the cloud. And Veeam has been positioned with the right product, at the right time, with the right execution, to take advantage of that. >> So I wonder if you could help us quantify that IDC data you know, the IDC did a good job quantifying the market. Maybe you could share with us sort of your position there, maybe some of the growth that we're seeing. Can you add some color to that? >> Yeah. We have some very exciting results from the recent IDC report. So in the second half of 2020, we saw 17.9% year over year growth in our revenue. That was actually triple the closest competitor. And our sequential growth was over 21%. 
So massive growth, and all of that is in the second half of the year, 563 million in revenue. So over a billion dollar company. So these aren't just, you know, 20% growth on small numbers. This is on a very significant number. And we see that continuing forward. We'll be announcing some things, I'm sure, at VeeamON coming up in a few weeks here, but that trend continues. And again, it's the right product, right time, with the right execution. >> Cloud continues to roll on. You're seeing, you know, solid numbers. If you add them all up, the big four are at 30-plus percent growth, you're seeing Azure at even higher growth, you know, AWS is huge, Google growing, Google Cloud probably in the 60 to 70% range. So cloud is still hot, it's kind of gone mainstream, but it still feels like there's a long way to go there. What's happening in cloud? You guys, again, leaning in, riding that wave. What can you tell us? >> We are leaning in, and you're going to see some things coming up at Veeam related to that. But two things I would say. One is we're in the marketplace of all three of the major hyperscalers. So there's a Veeam Backup for AWS, Veeam Backup for Azure, and a Veeam Backup for GCP. And not only are there products that are purpose-built for those clouds in the marketplace, all three of them have integrations to the core Veeam platform. So these aren't just standalone products; while they are in the marketplace, they're integrated into the full strategy around modern data protection for organizations. And so I am thrilled about some of the things that we're going to be showing there, but we're leaning in very closely with those. We think we're in early days, like I say, maybe the first or second year, and it will be the next decade as they truly emerge into their dominant position. But even more than cloud, if you asked me what I get excited about looking forward, certainly cloud adoption is massive, but Kubernetes, that's what's enabling some of the models of both on-prem and cloud hosted. And we're clearly doing some things there as well. >> So I'm glad you brought that up, because I think the first time I ever sort of stumbled into a company that was actually doing data protection for containers was at a VeeamON event. It was one of your exhibitors, and I was like, hmm, that's an interesting name. And yeah, of course you ended up buying the company. But so, you know, it's funny, right? Because containers have been around forever, and then when you started to see Kubernetes come to the fore, containers were really ephemeral, they weren't persistent, they didn't have state, but that's changing. I wonder if you could give us your perspective as to how you're thinking about that whole space. >> I truly believe that this is the third big wave of technology transformation. The first was around physical systems and mainframes and things, and then we went into the virtualized era. I think that the third wave is not the cloud, I actually think it is containers. Now why containers? Because, as you mentioned, Dave, they're ephemeral, they're designed for the world of consumption. Everything else is designed for you to install it and then build to the high watermark. The whole thing about containers is that they're ephemeral and they're built for the consumption model. The other thing about containers is that they're highly portable. So you can run it on premises with OpenShift, but then you can move it to GKE or AKS or EKS or any of the big cloud platforms.
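Danny's portability point comes down to the fact that the same declarative manifest can be applied to any conformant Kubernetes cluster, wherever it runs. A minimal sketch, assuming the official Python kubernetes client is installed and that kubeconfig contexts named onprem-openshift and gke-prod exist (both context names are hypothetical):

# Sketch: apply one Deployment manifest to two different Kubernetes clusters.
# Assumes `pip install kubernetes` and kubeconfig contexts named
# "onprem-openshift" and "gke-prod" (hypothetical names).
from kubernetes import client, config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web-api",
                                   image="registry.example.com/web-api:1.0"),
            ]),
        ),
    ),
)

for ctx in ["onprem-openshift", "gke-prod"]:
    config.load_kube_config(context=ctx)   # point the default client at one cluster
    client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                    body=deployment)
    print("deployed to", ctx)

The manifest itself carries no cloud-specific detail, which is what makes the same application definition usable on premises and on any of the hosted services Danny lists.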
So it definitely aligns with organizations' desire to modernize and to choose the infrastructure of their choice. Now, at the same time, the reason why they haven't taken off as quickly as they could have, I would argue, is because they've been really complex. In the early days the complexity of containers was very difficult, but the platform and ecosystem are evolving and they are becoming more simple. And what is happening is IT operations teams are now considering the developer their customer, and they're building self-service models for the developers to be more productive. So I think of this as platform ops, and certainly backup and security is a part of this, but it is moving, and we're seeing traction actually faster than I would have predicted in early 2020. >> Yeah. We've been putting forth this vision of a layer that abstracts the complexity of the underlying clouds, whether it's on prem, across clouds, eventually the edge, and containers are the linchpin to enabling that. Let's talk about VeeamON 2021. Show us a little leg, give us a preview. >> So we always come with the excitement, and we always come showing a sneak peek of what's to come. So certainly we're going to celebrate some of the big successes. We brought version 11 to market earlier this year, which had security capabilities around ransomware, continuous data protection, and a whole lot of things. So we're going to celebrate some of the products that have already recently launched, but we're also going to give a sneak peek of what's coming over 2021. Now, if you ask me what that is, we talk an awful lot about cloud, so you should expect to see things around Veeam Backup for AWS and Azure and GCP. You should expect to see things around our Kubernetes data protection with our Kasten K10 product. You should expect to see an evolution of capabilities with our SaaS data protection for Office365. So we're going to give a sneak peek of lots of things to come. And as always, we bring lots of innovation to the market. It's not just another checkbox. Veeam has always said, how can we do it differently? How can we do it better? And then we're going to show that to our customers at VeeamON. >> Well, we're always super excited to participate in the Veeam community. We've always had a lot of fun. They're great events. Yes, it's virtual, but you guys always have an interesting spin on things and make it fun. It's May 25th and 26th. It starts at 9:00 AM Eastern time. You go to Veeam, V-E-E-A-M.com, and sign up, make sure you do that, and check out all the content. The Cube of course will be there. I will be interviewing executives, customers, partners. There's tons of content for practitioners. And, you know, as always you guys have got the great demos and always a few surprises. So Danny, really looking forward to that, and really appreciate your time on the Cube today. >> Thank you, Dave. >> All right. And thank you for watching. This is Dave Vellante for the Cube. Again, May 25th and 26th, 9:00 AM Eastern time, go to veeam.com and sign up. We'll see you there. (upbeat music)
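On the data protection side of that discussion, the state a Kubernetes backup product has to capture lives largely in persistent volume claims. A small sketch of the inventory step only, as an illustration rather than the actual API of Veeam's Kasten K10 product:

# Sketch: list the persistent volume claims a Kubernetes backup tool would need
# to protect. Illustration only, not the Kasten K10 API. Assumes `pip install kubernetes`.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for pvc in core.list_persistent_volume_claim_for_all_namespaces().items:
    requested = (pvc.spec.resources.requests or {}).get("storage", "unknown")
    print(f"{pvc.metadata.namespace}/{pvc.metadata.name}: "
          f"class={pvc.spec.storage_class_name}, requested={requested}")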

Published Date : May 18 2021

SUMMARY :

Ahead of VeeamON 2021 on May 25th and 26th, Veeam CTO Danny Allan joins Dave Vellante to discuss the pandemic-driven acceleration of SaaS and cloud adoption, Veeam's growth in Office365 data protection, IDC figures showing 17.9% year-over-year revenue growth in the second half of 2020, purpose-built backup products for AWS, Azure, and GCP, and why Kubernetes and the Kasten K10 product figure heavily in what's coming next.

Sathish Balakrishnan, Red Hat | Google Cloud Next OnAir '20


 

>> (upbeat music) >> production: From around the globe, it's the Cube covering Google cloud Next on-Air 20. (Upbeat music) >> Welcome back. I'm Stu Miniman and this is the CUBE coverage of Google cloud Next on Air 20. Of course, the nine week distributed all online program that Google cloud is doing and going to be talking about, of course, multi-cloud, Google of course had a big piece in multi-cloud. When they took what was originally Borg, They built Kubernetes. They made that open source and gave that to the CNCF and one of Google's partners and a leader in that space is of course, Red Hat. Happy to welcome to the program Sathish Balakrishnan, he is the Vice President of hosted platforms at Red Hat. Sathish, thanks so much for joining us. >> Thank you. It's great to be here with you on Google Cloud Native insights. >> Alright. So I, I tied it up, of course, you know, we talk about, you know, the hybrid multicloud and open, you know, two companies. I probably think of the most and that I've probably said the most about the open cloud are Google and Red Hat. So maybe if we could start just, uh, you hosted platforms, help us understand what that is. And, uh, what was the relationship between Red Hat and the Open Shift team and Google cloud? >> Absolutely. Great question. And I think Google has been an amazing partner for us. I think we have a lot of things going on with them upstream in the community. I think, you know, we've been with Google and the Kubernetes project since the beginning and you know, like the second biggest contributor to Kubernetes. So we have great relationships upstream. We also made Red Hat Enterprise Linux as well as Open Shift available on Google. So we have customers using both our offerings as well as our other offerings on Google cloud as well. And more recently with the hosted our offerings. You know, we actually manage Open Shift on multiple clouds. We relaunched our Open Shift dedicated offering on Google cloud back at Red Hat Summit. There's a lot of interest for the offering. We had back offered the offering in 2017 with Open Shift Three and we just relaunched this with Open Shift Four and we received considerable interest for the Google cloud Open Shift dedicated offering. >> Yeah, Sathish maybe it makes sense if we talk about kind of the maturation of open source solutions, managed services has seen really tremendous growth, something we've seen, especially if we were talking about in the cloud space. Maybe if you could just walk us through a little bit out that, you know, what are you hearing from customers? How does Red Hat think about managed solutions? >> Absolutely. Stu, I think it was a good question, right? I think, uh, as we say, the customers are looking at, you know, multiple infrastructure footprints, Be iteither the public cloud or on-prem. They'll start looking at, you know, if I go to the cloud, you know, there's this concept of, I want something to be managed. So what Open Shift is doing is in Open Shift, as you know it's Red Hat's hybrid cloud platform and with Open Shift, all the things that we strive to do is to enable the vision of the Open Hybrid Cloud. Uh, so, but Open Hybrid Cloud, it's all about choice, So we want to make sure the customers have both the managed as well as the self managed option. Uh, so if you really look at it, you know, Red Hat has multiple offerings from a managed standpoint. One as you know, we have Open Shift dedicated, which runs from AWS and Google. And, you know, we just have, as I mentioned earlier. 
We relaunched our Google service at Red Hat Summit back in May, so that's actually getting a lot of traction. We also have joint offerings with Azure that we announced a couple of years back, and there's a lot of interest for that offering, as well as the new offering that we announced post-summit, Amazon Red Hat Open Shift, which basically is another native offering that we have on Amazon. Having spoken about these offerings, if you really look at Red Hat's evolution as a managed service provider in the public cloud, we've been doing this since 2011. You know, that's kind of surprising for a lot of people, but we've been doing Open Shift Online, which is kind of a multi-tenant PaaS solution, since 2011. And we are one of the earliest providers of managed Kubernetes, you know, along with Google Kubernetes Engine GKE, with our Open Shift Dedicated offering back in 2015. So we've been doing managed Kubernetes since Open Shift 3.1. So we have a lot of experience with management of Kubernetes, and through the evolution of Open Shift we've now made it available on pretty much all the clouds, so that customers have that exact same experience that they can get on any one cloud across all clouds, as well as on-prem. Managed service customers now have a choice of a self-managed Open Shift or a completely managed Open Shift. >> Yeah. You mentioned the choice, and one of the challenges we have right now is there's really the paradox of choice. If you look in the Kubernetes space, you know, there are dozens of offerings. Of course, every cloud provider has their offerings. You know, Google's got GKE, they have Anthos, they have management tools around there. You talked a bit about the, you know, the experience and all the customers you have, and, you know, as the saying goes, there's no compression algorithm for experience. So, you know, what is Red Hat Open Shift? What really differentiates it in the marketplace from, you know, so many of the other offerings, either from the public cloud providers or some of the new startups, that we should know? >> Yeah. I think that's an interesting question, right? I think it all starts with: it's completely open source, and, you know, we are a complete open source company. So there is no proprietary software that we put into Open Shift. Open Shift, basically, even though it has, you know, the oc command, it basically is Kubernetes. So you can actually use native Kubernetes as you choose, on any Kubernetes offering that you have, be it GKE, EKS or any of the other things that are out there. So that's where I think we are with Kubernetes and the providers, and Red Hat does not believe in locking you into a provider. It completely believes in open source. Everything that we have is open source. From an IT standpoint, the value prop for Red Hat has always been the value of the subscription, and we actually make sure that, you know, Kubernetes is taken from the upstream project and is basically completely productized and available for the enterprise to consume. And then, when we have the managed offering, we provide a lot more benefits on top of it, right? We are actually customer zero for Open Shift. So what does that mean? We will not release Open Shift if we can't run Open Shift Dedicated or any of our (indistinct) managed Open Shift offerings on that version of Open Shift really, really well.
So you won't get a software version out there. The second thing is we actually run a lot of workloads, but then Red Hat that are dependent on our managed or open shift off. So for example, our billing systems, all of those internal things that are important for Red Hat run on managed Open Shift, for example, managed Open Shift. So those are the important services for Red Hat and we have to make sure that those things are running really, really well. So we provide that second layer of enterprise today. Then having put Open Shift online, out that in public. We have 4 million applications and a million developers that use them. So that means, I've been putting it out there in the internet and, you know, there's security hosts that are constantly being booked that are being plugged in. So that's another benefit that you get from having a product that's a managed service, but it also is something that enterprises can now use it. From an Open Shift standpoint, the real difference is we add a lot of other things on top of google network without compromising the google network safety. That basically helps customers not have to worry about how they're going to get the CIC pipeline or how they have to do a bunch of in Cobra Net as an outside as the inside. Then you have technologies like Store Street Metrics kind of really help customers not to obstruct the way the containerization led from that. So those are some of the benefits that we provide with Open Shift. >> Yeah. So, so, so Sathish, as it's said, there's lots of options when it comes to Kubernetes, even from a Red Hat offering, you've got different competing models there. If I look inside your portfolio, if it's something that I want to put on my infrastructure, if I haven't read the Open Shift container platform, is that significantly different from the managed platform. Maybe give us a little compare contrast, you know. What do I have to do as a customer? Is the code base the same? Can I do, you know, hybrid environments between them and you know, what does that mean? >> It's a smart questions. It's a really, really good question that you asked. So we actually, you know, as I've said, we add a lot of things on top of google network to make it really fast, but do you want to use the cast, you can use the desktop. So one of the things we've found, but you know, what we've done with our managed offering is we actually take Open Shift container platform and we manage that. So we make sure that you get like a completely managed source, you know. They'll be managed, the patching of the worker nodes and other things, which is, again, another difference that we have with the native Cobra Net of services. We actually give plush that admin functionality to customers that basically allows them to choose all the options that they need from an Open Shift container platform. So from a core base, it's exactly the same thing. The only thing is, it's a little bit opinionated. It to start off when we deploy the cluster for the customer and then the customer, if they want, they can choose how to customize it. So what this really does is it takes away any of the challenges the customer may have with like how to install and provision a cluster, which we've already simplified a lot of the open shift, but with the managed the Open Shift, it's actually just a click of it. >> Great. Sathish Well, I've got the trillion dollar question for you. One of the things we've been looking at for years of course, is, you know, what do I keep in my data center? 
What do I move to the cloud? How do I modernize it? We understand it's complex and nuanced, but you talk to a lot of customers. So, you know, here in 2020, what are the trends? What are some of the pieces where you're seeing change and movement that, you know, might not have been the case a year ago? >> I think, you know, this is an interesting question and it's an evolving question, right? And it's something where if you ask, like, 10 people you'll get 10 answers, but I'm trying to generalize what I've seen just from all the customer conversations I've been involved in. I think one thing is very clear, right? As much as anybody may want to say I'm going to go to a single cloud, or I'm going to just be on prem, it is inevitable that you're going to basically end up with multiple infrastructure footprints. It's either multicloud, or it's on prem versus a single cloud, or on prem versus multiple clouds. So the main thing we've been noticing is what customers are asking on the whole: how do I make sure that my developers are not confused by all these different environments? How do I give them a consistent way to develop and build their applications, and not really worry about what is the infrastructure, what is the footprint that they're actually servicing? So that's kind of really, really important. And in terms of, you know, things that we've seen with customers, I think you always start with compliance requirements and data regulations. First you've got to figure out, what compliance do I need, and does the infrastructure or the platform that I'm going to go to meet the compliance requirements that I have? And what are the data regulations? You know, where is the data going to be sitting? Is it going to meet the data sovereignty rules that my country or my geo has? I've got to make sure I worry about that. And then I've got to figure out, if I'm going to basically move to the cloud from the data center, or from one cloud to another cloud, am I just doing a lift and shift? Am I doing a transformation? What is it that I really need to worry about? In addition to the transformation, they've got to figure out, do I need to do that, do I not need to do that? And then, you know, you've got to figure out, where is your data going to sit? What is your database going to look like? And do you need to connect to some legacy system that you have on prem, and how do you do that? You have to figure all of that out, given all of these complexities. This is really, really common for any large enterprise that has, like, an enterprise IT that's multi-cloud, that's basically in multiple geographies, servicing millions of customers. So Red Hat has a lot of experience helping with all these things. We have Open Innovation Labs, which is a really, really awesome experience for customers, where they take a small project and figure out how to change things, not only how to change things from a technology standpoint, but also how to culturally change things, because a lot of this is not just moving from one infrastructure to another, but also learning how to do things differently. Then we have things like the container adoption program, which is, how do you take a big legacy monolith application? How do you containerize it? How do you make it microservices? How do you make sure that you're leveraging the real benefits that you're going to get out of moving to the cloud or moving to a container platform?
And then we have a bunch of other things like, how do you get started with Open Shift and all of that? So we've had a lot of experience with like our 2,400 plus customers doing this kind of really heavy workload migration and lifting. So the customers really get the benefits that they see out of Open Shift. >> Yeah. So Sathish, if I think about Google, specifically talking about Google cloud, one of the main reasons we hear customers using Google is to have access to the data services. They have the AI services they have. So how does that tie into what we were just talking about? If I, if I use Open Shift and you know. I'm living in Google cloud, can, can I access all of those cloud native services? Are there any nuances things I need to think about to be able to really unleash that innovation of the platform that I'm tying into? >> Yeah, absolutely not. Right. I think it's a great question. And I think customers are always wondering about. Hey, if I use Open Shift, am I going to be locked out of using the cloud services? And if anything run out as antilock. We want to make sure that you can use the best services that you need for your enterprise, like the strategy as well as for applications. So with that, right. And we've developed the operator framework, which I think Google has been a very early supporter of. They've built a lot of operators around their services. So you can develop those operators to monitor the life cycle of these services, right from Open Shift. So you can actually connect to an AI service if you want. That's absolutely fine. You can connect the database services as well. And you can leverage all of those things while your application runs on Open Shift from Google cloud. Also I think that done us right. We recognize that, when you're talking about the open hybrid cloud, you got to make sure that customers can actually leverage services that are the same across different clouds. So when you can actually leverage the Google services from On Prem as well, if you choose to have localized services. We have a large catalog of operators that we have in our operator hub, as well as in the Red Hat marketplace that you can actually go and leverage from third party, third party ISV, so that you're basically having the same consistent experience if you choose to. But based on the consistent experience, that's not tied to a cloud. You can do that as well. But we would like for customers to use any service that they want, right from Open Shift without any restrictions. >> Yeah. One of the other things we've heard a lot from Google over the last year or so has been, you know, just helping customers, especially for those mission, critical business, critical applications, things like SAP. You talked a bit about databases. What advice would you give customers these days? They're, they're looking at, you know, increasing or moving forward in their cloud journeys. >> I think it sounds as an interesting question because I think customers really have to look at, you know, what is the ID and technology strategy? What are the different initiatives to have? Is it digital transformation? Is it cloud native development? Is it just containerization or they have an overarching theme over? They've got to really figure that out and I'm sure they're looking at it. They know which one is the higher priority when all of them are interrelated and in some ways. They also got to figure out how they going to expand to new business. 
Because I think, as we said, right, IT is basically what is driving business, software is eating the world, and software services are enabling that. So you've got to figure out, what are your business needs? Do you need to be more agile? Do you need to enter new businesses? You know, those are kind of important things. For example, BMW is a great example. They use Open Shift Container Platform as well as Open Shift Dedicated, you know. They are like a hundred-plus year old car company, and guess what they're trying to do. They're actually now building connected car infrastructure. That's the main thing that they're trying to build, so that they can actually service the cars in any geo. So in one swoop, they went from a car manufacturing company to now focusing on being a SaaS, edge and IoT company. If you really look at it, the cars are like internet of things devices, edge computers, and what does that use case require? That use case cannot any more have just one data center in Munich; they have to basically build a global platform of data centers, or they can really easily go to the cloud. And then they need to make sure that when their application workloads are starting to run on multiple clouds and multiple geographies, they have the same abstraction layer so that they can actually deploy things fast, develop fast, and they don't have to worry about the infrastructure underneath. And that's basically why they started using Open Shift, and why they're big supporters of Open Shift, and I think it's the right solution for their use case. So I think it really depends on, you know, what the customer is looking for, but irrespective of what they're looking for, I think Open Shift nicely fits in, because what it does is it provides you that commonality across all infrastructure footprints. It gives you all the productivity gains, and it allows you to connect to any service that you want anywhere, because we are agnostic to that, as well as bringing a whole lot of services from the Red Hat Marketplace so you can actually leverage those too. >> Well, Sathish Balakrishnan, thank you so much for the updates. Great to hear about the progress you've got with your customers. And thank you for joining us on the Google Cloud Next OnAir event. >> Thank you Stu. It's been great talking to you and look forward to seeing you in person one day. >> Alright. I'm Stu Miniman. And thank you as always for watching the Cube. (upbeat music) (upbeat music)
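The Operator framework Sathish mentions earlier works by extending the Kubernetes API with custom resources that an operator watches and reconciles, so consuming a managed service from inside Open Shift looks like creating one more Kubernetes object. A hedged sketch using the Python kubernetes client; the group, version, kind, and spec fields below are invented for illustration and do not correspond to a real Google or Red Hat operator API:

# Sketch: ask an operator for a managed service by creating a custom resource.
# The group/version/kind and spec fields are hypothetical, purely for illustration.
from kubernetes import client, config

config.load_kube_config()

database = {
    "apiVersion": "services.example.io/v1alpha1",
    "kind": "ManagedDatabase",
    "metadata": {"name": "orders-db", "namespace": "shop"},
    "spec": {"tier": "small", "region": "europe-west1"},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="services.example.io", version="v1alpha1",
    namespace="shop", plural="manageddatabases", body=database,
)

Once the object exists, the operator's job is to notice it and drive the external service toward that declared state, which is what keeps the developer workflow the same across clouds.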

Published Date : Sep 10 2020

SUMMARY :

Sathish Balakrishnan, Vice President of Hosted Platforms at Red Hat, joins Stu Miniman for Google Cloud Next OnAir '20 to discuss OpenShift Dedicated on Google Cloud, Red Hat's history of managed offerings going back to OpenShift Online in 2011, the choice between self-managed and fully managed OpenShift, using the Operator framework to consume cloud services, and customer examples such as BMW's connected-car platform.

Alexandre McLean, Ubisoft | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020, virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and the ecosystem partners. >> Hi, I'm Stu Miniman. And this is theCUBE coverage of KubeCon CloudNativeCon 2020 in Europe, the virtual edition, and you've reached the final stage. This is our last interview, so hopefully you've learned a lot talking to the CNCF members. We've had a few great practitioners and, of course, some of the important vendors and startups in this space. And when we talk about what's happening in this cloud native space, one of the things that gets bandied about a lot is scale. What does that mean? You know, when it first rolled out, of course, there was only one Google out there, and only a handful of true hyperscalers. But there absolutely are some companies that really need scale and global performance, and so happy to bring in, he is the final boss, it is Alexandre McLean. He's a technical architect at Ubisoft. And yes, I do have a little bit of background in gaming, but here's someone that is helping enable one of the largest gaming companies in the globe. So Alexandre, thanks so much for joining us. >> Hey, thanks for the invitation, happy to be here. >> All right, so you're no novice to this ecosystem. I know you and I have both been at many of the DockerCons and the KubeCons over the years. So if you could just give our audience a little bit of your background, and what is your team responsible for at Ubisoft? >> Okay, sure, so I am part of one of the IT teams inside Ubisoft. We're mainly responsible for providing cloud computing resources and Kubernetes infrastructure for the whole company. And if you want to know a bit more about me, basically I've been leading the Kubernetes initiative for the past few years now. So we started the journey maybe in 2016. We were already pretty busy, you know, working on the growth of the cloud infrastructure inside Ubisoft, on the expansion into different data centers, and supporting the needs of the different development teams inside Ubisoft. And one thing we wanted to do back then was really to enable and accelerate the adoption of cloud native, the cloud native mindset and cloud native architectures. So what we did back then was a short analysis of the different technologies that were available at the time, and we decided to jump head first on Kubernetes and make it the foundation for the different container workloads, to drive adoption inside Ubisoft and boost the productivity of many teams. >> Alright, I'm really glad you brought up that cloud native mindset. If you could just up-level a little bit for, you know, the business leaders out there, they hear about, you know, Kubernetes and they might not know how to spell it. They hear something like a cloud native mindset, and they say, you know, I don't understand, what does this mean for our business? So what architecturally are you doing, and what does that mean for, you know, your games and ultimately your end users? >> Yeah, so I would say that basically, I mean, if you want to have a cloud native architecture, you really want to make your application, first of all, very portable, very easy to deploy and manage, and at the same time very resilient to failure.
So you want to make sure that your application, once it's deployed, is highly resilient to failure, that it was built for failure, and that you can manage the project and the service to meet the expectations of either the gamers or the service owners, basically. >> Yeah, you know, absolutely. I'm curious, here in 2020, we see the ripple effects of the global pandemic. I have to imagine that from a gaming standpoint, that has had an impact. So maybe if we use that as an example, if it's valid from your standpoint, I have to imagine more people are gaming. What did this mean for your infrastructure? How were you ready, from an IT perspective, to support that, you know, increased usage kind of rippling around the globe as more people are home all the time? >> Hmm, yeah, that's a good question, I guess. I mean, we really have, like, two kinds of audiences inside Ubisoft in the IT team that I serve. So we have the people who are building the software and the applications to help the developers, I mean the game developers in general, so we have different internal services and tooling that need to be hosted somewhere, and we need to enable these people and these teams to have a way to manage applications efficiently. And the other side we are looking at right now is the game servers. In the gaming industry, I think there's a shift right now in the way the industry prefers to manage game servers going forward. And I would say that back then, there was a lot of in-house tooling, things that were really, I mean, proprietary to each gaming company. But what we wanted to do in the past few years, we worked for instance on a solution called Agones. So we were involved in the beginning to design this kind of next gen game server, dedicated server hosting infrastructure that was all built around Kubernetes. So we had already started to work on that, and the next gen of games is going to be built on top of Kubernetes, which is going to enable a lot more efficiency of resource usage and, at the same time, I'd say manageability and portability of all these services. Because I think that one key thing about cloud native and Kubernetes is that, once you know Kubernetes, I mean, basically, it's very easy to onboard new people onto the team and the project, because they know what Kubernetes is and how to operate it. So it will be much more efficient in the future for all the workloads that we have internally, and for the next game server infrastructure as well, to be hosted on Kubernetes; it's going to be much easier to standardize and unify that whole stack. >> Well, the skill sets are so critically important. And it's great to hear you say that onboarding somebody in Kubernetes is easier than it might have been a couple of years ago. If you could bring us inside a little bit, you know, what does your stack look like? You know, can you say what cloud or clouds you use? When it comes to Kubernetes, you know, what are the key tools that you're using and partners that you have? >> Yeah, sure. So early on, I would say almost 10 years ago, we really started to focus on building an on-prem cloud infrastructure, and the technology that we chose back then was OpenStack.
So we have a large footprint of OpenStack clouds installed internally in different data centers all over the world, so different teams and anyone at Ubisoft can easily have compute resources available to them. And with Kubernetes, initially we wanted to make Kubernetes a commodity. We wanted people to be in a position to easily experiment with new things, new applications on top of Kubernetes. And for that we decided to go with Rancher. So Rancher is an open source solution made by Rancher Labs, and initially we had started to build our own in-house solution the first year, because back then the landscape was quite different and we thought it was the best choice for us. But we realized shortly after, I mean, when Rancher 2.2 came out, I think it was in something like April 2018, that we would benefit a lot from going with this kind of solution, which was open sourced, there was a lot of traction behind it, and it would enable us to accelerate the adoption of Kubernetes and cloud native in general much faster than the solutions that we had built at that time. So we went with Rancher, and right now we have, I would say, maybe 10 data centers with the cloud installed on top of them, with more data centers going to be added in the next couple of months and years, and we have over 200 clusters and 1000 nodes that are managed by Rancher, and people can just deploy their own Kubernetes cluster on demand and get started with it if they want to. >> Okay, so if I heard you right, it's Rancher on top of the OpenStack solution in your data centers. >> Yes. >> You talk about how many clusters you have, you know, what's the state of managing those environments? You said you're using Rancher, and that's one of the things we've seen a lot of discussion on over the last couple of years, you know, we went from managing containers, to managing a pod or a cluster, to now multi-cluster across multi-site, you know, what's the maturity today? Anything that you're looking for that would make your life easier to manage such a broad environment? >> Yeah, well, I would say that's one of the drawbacks, I mean, when we enabled that solution with Rancher, what we didn't foresee, with the ease of launching and provisioning new clusters, is that right now we have a lot of clusters, maybe too many. So the next logical step for us is to try to consolidate the workloads as much as possible, and see if there's really a need for people to have their own dedicated cluster. And initially there was a lot of demand for that, because people basically came to us and said, you know, we want to use Kubernetes, and what we want is to have our own cluster which we have full access to, we want to be able to do whatever we want with it, upgrade it at our own pace, and I don't want to have any neighbor on it, I want to be completely isolated in terms of compute resources. So we said, all right, we're going to make a solution that is going to provision new clusters on demand for everyone. And that initial approach worked very well. But now, after a while, some people, and we especially as an IT provider and operator, realized that, you know, maybe people don't have to be completely alone on a cluster, maybe we should try to consolidate that a little bit.
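At the scale Alexandre describes, a couple hundred clusters and a thousand nodes, even a basic inventory means looping over every cluster you can reach. A minimal sketch, assuming the Python kubernetes client and that each downstream cluster shows up as a context in the local kubeconfig (an assumption about the setup, not necessarily how Ubisoft wires it up through Rancher):

# Sketch: count nodes per cluster across every context in the local kubeconfig.
# Assumes `pip install kubernetes`; the context-per-cluster layout is an assumption.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    api_client = config.new_client_from_config(context=name)
    nodes = client.CoreV1Api(api_client).list_node().items
    ready = sum(
        1 for n in nodes
        if any(c.type == "Ready" and c.status == "True" for c in n.status.conditions)
    )
    print(f"{name}: {len(nodes)} nodes, {ready} ready")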
So we're trying to migrate workloads from certain services and tooling and say, maybe instead of running your own cluster, you can use this one that is going to be shared, and there will be a team dedicated to supporting and operating this cluster for you, because in the end we want to offload the burden of the infrastructure and Kubernetes; although it brings a lot of abstraction and simplicity, you still have to manage your cluster in the end. So we'd rather have people focus on the application side than on the Kubernetes infrastructure side. So we've started down a path of trying to consolidate different workloads and see if we can reduce the number of clusters that we have, and also to unify the way that people are using the different providers, because although we have a huge OpenStack cloud offering internally on prem, there are still people who need to use GKE or EKS and a couple of other external cloud providers. Some of those people are not really using Rancher, although it's possible with Rancher to use those providers directly. But what we want to do is try to unify the way that you're going to get access to these clusters, try to make a central governance model where people pass through a central team to get access and provision the cluster. So they will be standardized, we will be able to add more security policies and compliance rules and everything, so the clusters will be created in certain ways and not be as fragmented as they are today. >> Yeah, that's ultimately what I was trying to understand, is most customers I talk to, they have hybrid environments, they're using multiple clouds, and if you're using Kubernetes, you know, how do you get your arms around that. So I'd love to get your viewpoint, just 'cause you've been involved since kind of the early Kubernetes days, you know, what's better now than it was a few years ago? You know, I heard you say that you looked at possibly, you know, creating a solution yourself, so a company like Rancher helps simplify things. So when you look at the maturity, you know, how happy are you with what you have now? And are there any things that you say, boy, I'd love my team to not have to worry about this, you know, maybe the industry as a whole would be able to, you know, standardize or make things simpler? >> Well, you know, when we started to use Rancher there were a couple of things that we wanted to simplify for the users, because what Rancher does, essentially, is expose a lot of configuration options. It's very flexible because it supports many providers. So the first few things that we did were to try to simplify the user experience, to the extent that we modified Rancher in some ways to make it simpler to consume. And the experience is much simpler than it was, let's say, two years ago when we started, but we still want to simplify it even further; we want to ideally provide a fully managed experience, so people don't even have to worry about the control plane components that are currently being deployed with their Kubernetes clusters. We want to take that away from them so that they can, once again, fully focus on the application side of development. And I think one other aspect that we need to maybe improve in the future is that, when you want to deploy your application and make it resilient and geographically distributed, then you need to manage multiple clusters, and you need to deploy your applications on each of those clusters.
So the whole multi-cluster aspect of things: how do I deploy my application and version it? How do I make it consistent between the different clusters where it needs to be deployed? How do I make service discovery possible? Or do I mesh all the applications together to make sure that it's easy to operate, it's easy for the developers, and that it's resilient in the end? So we will start to look at the multi-cluster, multi-region aspect of Kubernetes, because that's a big challenge for us. >> All right, well, Alexandre, I want to shift for a second, let's talk about the conference, KubeCon, CloudNativeCon. Obviously it's virtual this year, so there is a little bit of a shift, but you know, you've attended many of these in the past. Are there projects that you're interested in learning more about, or are there, you know, peers of yours that you're looking to collaborate with? What have you seen in the past that you're hoping you still get from a virtual event like we have this year? >> Well, you know, I think that it has become so big, it's hard to keep up with everything that's happening at the same time nowadays, but things that we're looking at really are maybe, I think, in terms of service mesh, there are a lot of technologies and I think it's maturing slowly. So we'll always try to have a look at what is the best fit for us and the use cases that we have. And some people are using Kubernetes, some other people are using, you know, more traditional stacks, so we try to bridge that together and see what's possible to migrate from the existing workloads on traditional cloud VMs and core applications toward Kubernetes, so maybe try to see if it's possible to bridge that path and migrate gradually for the users that we have. And other things in general, I think it will be very interesting to see the whole governance setup, I mean, evolving in Rancher, and see how we can try to add conformance and compliance rules to the different clusters that we have to manage, to make sure that it's no longer just a matter of I want to create a cluster, I get access to it. We need to centralize the governance, we need to centralize the rules of how everything is going to be managed in the end, and make sure that security is a big aspect of it, make sure that there are no vulnerabilities and everything is being audited. And especially for the game studios that's going to be a big factor for us. So we're definitely interested in all the security discussion that's happening right now. >> All right, no shortage of lots of information, Alexandre, and by the way, there's no way that anybody can keep up on everything that's happening in this very robust community. But thank you so much for sharing your journey. It's always great to hear from the practitioner. Thanks so much for joining us. >> Thanks for having me, awesome. >> All right, and thank you for joining us for all the coverage. Be sure to go to theCUBE.net, you can see not only all the interviews from this show, you can go search and find previous shows, as well as see what events we will be at, of course right now all virtual. So I'm Stu Miniman, and thank you as always for watching theCUBE. (upbeat music)
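The Agones project Alexandre refers to models a dedicated game server as a Kubernetes custom resource, which is what lets the same request run on premises or on a hosted cluster. A hedged sketch of creating one with the Python kubernetes client; the spec follows the public Agones quickstart as best as recalled here, and the image and port values are placeholders:

# Sketch: request a dedicated game server from an Agones-enabled cluster by
# creating a GameServer custom resource. Field names follow the public Agones
# quickstart; the image and port values are placeholders.
from kubernetes import client, config

config.load_kube_config()

game_server = {
    "apiVersion": "agones.dev/v1",
    "kind": "GameServer",
    "metadata": {"generateName": "match-", "namespace": "default"},
    "spec": {
        "ports": [{"name": "default", "portPolicy": "Dynamic", "containerPort": 7654}],
        "template": {"spec": {"containers": [{
            "name": "game",
            "image": "registry.example.com/game-server:1.0",  # placeholder image
        }]}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="agones.dev", version="v1", namespace="default",
    plural="gameservers", body=game_server,
)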

Published Date : Aug 18 2020

SUMMARY :

Alexandre McLean, technical architect at Ubisoft, tells Stu Miniman how his team provides Kubernetes as a commodity on top of an on-prem OpenStack cloud, using Rancher to manage more than 200 clusters and 1,000 nodes across roughly 10 data centers, how the industry is shifting toward Kubernetes-based dedicated game servers, and why the next steps are consolidating clusters, centralizing governance and security policy, and tackling multi-cluster application management.

Sheng Liang, Rancher Labs | CUBE Conversation, July 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman coming to you from our Boston area studio and this is a special CUBE Conversation. We always love talking to startups around the industry, understanding how they're creating innovation, doing new things out there, and oftentimes one of the exits for those companies is they do get acquired. So happy to welcome back to the program one of our CUBE alumni, Sheng Liang, he is the cofounder and CEO of Rancher. Today there was an announcement of a definitive agreement to be acquired by SUSE, who our audience will know well, we were at SUSECON. So Sheng, first of all, thank you for joining us, and congratulations to you and the team on joining SUSE here in the near future. >> Thank you, Stu, I'm glad to be here. >> All right, so Sheng, why don't you give our audience a little bit of context. So I've known Rancher since the very early days, I knew Rancher before most people had heard the word Kubernetes, it was about containerization, it was about helping customers. There was that cattle versus pets idea, so the Rancher analogy was, hey, we're going to be your rancher and help you deal with that sprawl and all of those pieces out there, where you don't want to know them by name and the like. So help us understand how what was announced today fits along the journey that you set out on with Rancher. >> Absolutely, so SUSE is the largest independent opensource software company in the world, and they're a leader in enterprise Linux. Today they announced they have signed a definitive agreement to acquire Rancher. So we started Rancher about six years ago, as Stu said, to really build the next generation enterprise compute platform. And in the beginning, we thought we were going to just base our technology on Docker containers, but pretty soon Kubernetes was just clearly becoming an industry standard, so Rancher actually became the most widely used enterprise Kubernetes platform. So really, with the combination of Rancher and SUSE going forward, we're going to be able to supply the enterprise container platform of choice for lots and lots of customers out there.
So many of you, I'm sure, have heard about Kubernetes, Kubernetes is this container orchestration platform that basically works everywhere, and you can deploy all kinds of applications, and run these applications through Kubernetes, it doesn't really matter, fundamentally, what infrastructure you use anymore, so the great thing about Kubernetes is whether you deploy your apps on AWS or on Azure, or on on-premise bare metal, or vSphere clusters, or out there in IoT gateways and 5G base stations and surveillance cameras, literally everywhere, Kubernetes will run, so it's, in our world I like to think about Kubernetes as the standard for compute. If you kind of make the analogy, what's the standard of networking, that's TCPIP, so networking used to be very different, decades ago, there used to be different kinds of networking and at best you had a local area network for a small number of computers to talk to each other, but today with TCPIP as a standard, we have internet, we have Cisco, we have Google, we have Amazon, so I really think as successful as cloud computing has been, and how much impact it has had to actually push digital transformation and app modernization forward, a lot of organizations are kind of stuck between their desire to take advantage of a cloud provider, one specific cloud provider, all the bells and whistles, versus any cloud provider, not a single cloud provider can actually supply infrastructure for everything that a large enterprise would need. You may be in a country, you may be in some remote locations, you may be in your own private data center, so the market really really demands a standard form of compute infrastructure, and that turned out to be Kubernetes, that is the true, Kubernetes started as a way Google internally ran their containers, but what it really hit the stride was a couple years ago, people started realizing for once, compute could be standardized, and that's where Rancher came in, Rancher is a Kubernetes management platform. We help organizations tie together all of their Kubernetes clusters, regardless where they are, and you can see this is a very natural evolution of organizations who embark on this Kubernetes journey, and by definition Rancher has to be open, because who, this is such a strategic piece of software, who would want their single point of control for all compute to be actually closed and proprietary? Rancher is 100% opensource, and not only that, Rancher works with everyone, it really doesn't matter who implements Kubernetes for you, I mean Rancher could implement Kubernetes for you, we have a Kubernetes distro as well, we actually have, we're particularly well-known for Kubernetes distro design for resource constrained deployments on the edge, called K3S, some of you might have heard about it, but really, we don't care, I mean we work with upstream Kubernetes distro, any CNCF-compliant Kubernetes distro, or one of many many other popular cloud hosted Kubernetes services like EKS, GKE, AKS, and with Rancher, enterprise can start to treat all of these Kubernetes clusters as fungible resources, as catalysts, so that is basically our vision, and they can focus on modernizing their application, running their application reliably, and that's really what Rancher's about. >> Okay, so Sheng, being acquired by SUSE, I'd love to hear a little bit, what does this mean for the product, what does it mean for your customers, what does it mean for you personally? 
According to Crunchbase, you'd raised 95 million dollars, as you said, over the six years. It's reported by CNBC that the acquisition's in the ballpark of 600 to 700 million, so that would be about a 6X multiple on what was invested. Not sure if you can comment on the finances, and would love to hear what this means going forward for Rancher and its ecosystem. >> Yeah, actually, I know there's tons of rumors going around, but SUSE's decided not to disclose the acquisition price, so I'm not going to comment on that. Rancher's been a very cash-efficient business, there's been no shortage of funding, but even with the 95 million dollars that we raised, we really haven't spent the majority of it, we probably spent just about a third of the money we raised. In fact our last fundraise was just three, four months ago, it was a 40 million dollar series D, and we didn't even need that, I mean we could've just continued with the series C money that we raised a couple years ago, which we had barely started spending either. So the great thing about Rancher's business is because we're such a product-driven company, with opensource software, you develop a unique product that actually solves a real problem, and then there's just no barrier to adoption, so this stuff just spreads organically, people download and install, and then they put it in mission-critical production. Then they seek us out for a commercial subscription, and the main value they're getting out of the commercial subscription is really the confidence that they can actually rely on the software to power their mission-critical workloads, so once they really start using Rancher, they recognize the value that Rancher as an organization provides, so this business model's worked out really well for us. The vast majority of our deals are based on inbound leads, and that's why we've been so efficient, and that's I think one of the things that really attracted SUSE as well. It's just, these days you don't want a business where you have to do heavyweight, heavy-duty, old-fashioned enterprise (indistinct), because that's really expensive, and when so much of that value is built through some kind of bundling or lock-in, sooner or later customers know better, right? They want to get away. So we really wanted to provide an opensource, and open, platform, and more important than opensource is actually being open. A lot of people don't realize there is actually a lot of opensource software in the market that is not really quite open. That might seem like a contradiction, but you can have opensource software which you eventually package in a way where you don't even make the source code easily available, and you don't make it easy to rebuild the stuff. So Rancher is truly open and opensource: people just download the opensource software and run it; the day they need it, our enterprise subscription will support them, and the day they don't need it, they will actually continue to run the same piece of software, and we'd be happy to continue to provide them with patches and security fixes, so as an organization we really have to provide that continuous value, and it's worked out really well, because this is such an important piece of software. 
SUSE has this model that I saw on their website, and it really appeals to us, it's called the power of many, so SUSE, turns out they not only completely understand and buy into our commitment to open and opensource, but they're completely open in terms of supporting the whole ecosystem, the software stack, that not only they produce, but their partners produce, in many cases even their competitors produce, so that kind of mentality really resonated with us. >> Yeah, so Sheng, you wrote in the article announcing the acquisition that when the deal closes, you'll be running engineering and innovation inside of SUSE, if I remember right, Thomas Di Giacomo has a similar title to that right now in SUSE, course Melissa Di Donato is the CEO of SUSE. Of course the comparison that everyone will have is you are now the OpenShift to SUSE. You're no stranger to OpenShift, Rancher competes against RedHat OpenShift out on the market. I wonder if you could share a little bit, what do you see in your customer base for people out there that says "Hey, how should I think of Rancher "compared to what RedHat's been doing with OpenShift?" >> Yeah, I mean I think RedHat did a lot of good things for opensource, for Linux, for Kubernetes, and for the community, OpenShift being primarily a Kubernetes distro and on top of that, RedHat built a number of enhanced capabilities, but at the end of the day, we don't believe OpenShift by itself actually solves the kind of problem we're seeing with customers today, and that's why as much investment has gone into OpenShift, we just see no slowdown, in fact an acceleration of demand of Rancher, so we don't, Rancher always thrived by being different, and the nice thing about SUSE being a independent company, as opposed to a part of a much larger organization like RedHat, is where we're going to be as an organization 100% focused on bringing the best experience to customers, and solve customers' business problems, as they transform their legacy application suite into cloud-native infrastructure. So I think the opportunity is so large, and there's going to be enough market there for multiple players, but we measure our success by how many people, how much adoption we're actually getting out of our software, and I said in the beginning, Rancher is the most widely used enterprise Kubernetes platform, and out of that, what real value we're delivering to our customers, and I think we solve those problems, we'll be able to build a fantastic business with SUSE. >> Excellent. Sheng, I'm wondering if we could just look back a little bit, you're no stranger to acquisitions, remember back when Cloud.com was acquired by Citrix, back when we had the stack wars between CloudStack and OpenStack and the like, I'm curious what lessons you learned having gone through that, that you took away, and prepared you for what you're doing here, and how you might do things a little bit differently, with the SUSE acquisition. >> Yeah, my experience with Cloud.com acquired by Citrix was very good, in fact, and a lot of times, you really got to figure out a way to adapt to actually make sure that Rancher as a standalone business, or back then, Cloud.com was a standalone business, how are they actually fitting to the acquirer's business as a whole? 
So when Cloud.com was acquired, it was pretty clear, as attractive as the CloudStack business was, really the bigger prize for Citrix was to actually modernize and cloudify their desktop business, which absolutely was like a two billion dollar business, growing to three billion dollars back then, I think it's even bigger now, with now everyone working remote. So we at Citrix, we not only continued to grow the CloudStack business, but more importantly, one of the things I'm the most proud of is we really played up a crucial role in modernizing and cloudifying the Citrix mainline business. So this time around, I think the alignment between what Rancher does and what SUSE does is even more apparent, obviously, until the deal actually closes, we're not really allowed to actually plan or execute on some of the integration synergies, but at a higher level, I don't see any difficulty for SUSE to be able to effectively market, and service their global base of customers, using the Rancher technology, so it's just the synergy between Kubernetes and Linux is just so much stronger, and in some sense, I think I've used this term before, Kubernetes is almost like the new Linux, so it just seems like a very natural place for SUSE to evolve into anyway, so I'm very very bullish about the potential synergy with the acquisition, I just can't wait to roll up my hands and get going as soon as the deal closes. >> All right, well Sheng, thank you so much for joining us, absolutely from our standpoint, we look at it, it's a natural fit of what Rancher does into SUSE, as you stated. The opensource vision, the community, and customer-focused absolutely align, so best of luck with the integration, looking forward to seeing you when you have your new role and hearing more about Rancher's journey, now part of SUSE. Thanks for joining us. >> Thank you Stu, it's always great talking to you. >> All right, and be sure, we'll definitely catch up with Rancher's team at the KubeCon + CloudNativeCon European show, which is of course virtual, as well as many other events down the road. I'm Stu Miniman, and thank you for watching theCUBE.
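As a rough, hedged illustration of the point Sheng makes above about treating Kubernetes clusters on EKS, GKE, or AKS as fungible, the short Python sketch below pushes one Deployment manifest to several clusters by switching kubeconfig contexts. It assumes kubectl is installed and that contexts named eks-prod, gke-prod, and aks-prod already exist; those context names, and the nginx image, are placeholders rather than anything taken from the interview.

```python
# Minimal sketch: apply the same Kubernetes Deployment to several clusters.
# Assumes kubectl is on PATH and the listed kubeconfig contexts exist
# (the context names are hypothetical examples).
import json
import subprocess

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-web", "labels": {"app": "hello-web"}},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "hello-web"}},
        "template": {
            "metadata": {"labels": {"app": "hello-web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.25",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}

# One manifest, many clusters: only the kubeconfig context changes.
for ctx in ["eks-prod", "gke-prod", "aks-prod"]:
    # kubectl accepts JSON as well as YAML on stdin via `-f -`.
    subprocess.run(
        ["kubectl", "--context", ctx, "apply", "-f", "-"],
        input=json.dumps(deployment).encode(),
        check=True,
    )
    print(f"applied hello-web to context {ctx}")
```

A management layer like Rancher adds cluster registration, access control, and visibility on top of this same idea; the manifest itself does not need to change per cloud.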

Published Date : Jul 8 2020

CloudLive Great Cloud Debate with Corey Quinn and Stu Miniman


 

(upbeat music) >> Hello, and welcome to The Great Cloud Debate. I'm your moderator Rachel Dines. I'm joined by two debaters today Corey Quinn, Cloud Economist at the Duckbill Group and Stu Miniman, Senior Analyst and Host of theCube. Welcome Corey and Stu, this when you can say hello. >> Hey Rachel, great to talk to you. >> And it's better to talk to me. It's always a pleasure to talk to the fine folks over at CloudHealth at by VMware and less of the pleasure to talk to Stu. >> Smack talk is scheduled for later in the agenda gentlemen, so please keep it to a minimum now to keep us on schedule. So here's how today is going to work. I'm going to introduce a debate topic and assign Corey and Stu each to a side. Remember, their assignments are what I decide and they might not actually match their true feelings about a topic, and it definitely does not represent the feelings of their employer or my employer, importantly. Each debater is going to have two minutes to state their opening arguments, then we'll have rebuttals. And each round you the audience gets to vote of who you think is winning. And at the end of the debate, I'll announce the winner. The prize is bragging rights of course, but then also we're having each debater play to win lunch for their local hospital, which is really exciting. So Stu, which hospital are you playing for? >> Yeah, so Rachel, I'm choosing Brigham Women's Hospital. I get a little bit of a home vote for the Boston audience here and was actually my wife's first job out of school. >> Great hospital. Very, very good. All right, Corey, what about you? >> My neighbor winds up being as specialist in infectious diseases as a doctor, and that was always one of those weird things you learn over a cocktail party until this year became incredibly relevant. So I will absolutely be sending the lunch to his department. >> Wonderful! All right. Well, is everyone ready? Any last words? This is your moment for smack talk. >> I think I'll say that for once we can apply it to a specific technology area. Otherwise, it was insulting his appearance and that's too easy. >> All right, let's get going. The first topic is multicloud. Corey, you'll be arguing that companies are better off standardizing on a single cloud. While Stu, you're going to argue the companies are better off with a multicloud strategy. Corey, you're up first, two minutes on the clock and go. >> All right. As a general rule, picking a single provider and going all in leads to the better outcome. Otherwise, you're trying to build every workload to run seamlessly on other providers on a moment's notice. You don't ever actually do it and all you're giving up in return is the ability to leverage whatever your primary cloud provider is letting you build. Now you're suddenly trying to make two differently behaving load balancers work together in the same way, you're using terraform or as I like to call it multicloud formation in the worst of all possible ways. Because now you're having to only really build on one provider, but all the work you're putting in to make that scale to other providers, you might theoretically want to go to at some point, it slows you down, you're never going to be able to move as quickly trying to build for everyone as you are for one particular provider. And I don't care which provider you pick, you probably care which one you pick, I don't care which one. The point is, you've got to pick what's right for your business. And in almost every case, that means start on a single platform. 
And if you need to migrate down the road years from now, great, that means A you've survived that long, and B you now have the longevity as a business to understand what migrating looks like. Otherwise you're not able to take care of any of the higher level offerings these providers offer that are even slightly differentiated from each other. And even managed database services behave differently. You've got to become a master of all the different ways these things can fail and unfortunate and displeasing ways. It just leaves you in a position where you're not able to specialize, and of course, makes hiring that much harder. Stu, fight me! >> Tough words there. All right, Stu, your turn. Why are companies better off if they go with a multicloud strategy? Got two minutes? >> Yeah, well first of all Corey, I'm really glad that I didn't have to whip out the AWS guidelines, you were not sticking strictly to it and saying that you could not use the words multicloud, cross-cloud, any cloud or every cloud so thank you for saving me that argument. But I want you to kind of come into the real world a little bit. We want access to innovation, we want flexibility, and well, we used to say I would have loved to have a single provider, in the real world we understand that people end up using multiple solutions. If you look at the AI world today, there's not a provider that is a clear leader in every environment that I have. So there's a reason why I might want to use a lot of clouds. Most companies I talked to, Corey, they still have some of their own servers. They're working in a data center, we've seen huge explosion in the service provider world connecting to multiple clouds. So well, a couple of years ago, multicloud was a complete mess. Now, it's only a little bit of a mess, Corey. So absolutely, there's work that we need to do as an industry to make these solutions better. I've been pining for a couple years to say that multicloud needs to be stronger than the sum of its pieces. And we might not yet be there but limiting yourself to a single cloud is reducing your access to innovation, it's reducing your flexibility. And when you start looking at things like edge computing and AI, I'm going to need to access services from multiple providers. So single cloud is a lovely ideal, but in the real world, we understand that teams come with certain skill sets. We end up in many industries, we have mergers and acquisitions. And it's not as easy to just rip out all of your cloud, like you would have 20 years ago, if you said, "Oh, well, they have a phone system or a router "that didn't match what our corporate guidelines is." Cloud is what we're doing. There's lots of solutions out there. And therefore, multicloud is the reality today, and will be the reality going forward for many years to come. >> Strong words from you, Stu. Corey, you've got 60 seconds for rebuttal. I mostly agree with what you just said. I think that having different workloads in different clouds makes an awful lot of sense. Data gravity becomes a bit of a bear. But if you acquire a company that's running on a different cloud than the one that you've picked, you'd be ridiculous to view migrating as anything approaching a strategic priority. Now, this also gets into the question of what is cloud? Our G Suite stuff counts as cloud, but no one really views it in that way. Similarly, when you have an AI specific workload, that's great. As long as it isn't you seriously expensive to move data between providers. 
That workload doesn't need to live in the same place as your marketing website does. I think that the idea of having a specific cloud provider that you go all in on for every use case, well, at some point that leads to ridiculous things like pretending that Amazon WorkDocs has customers, it does not. But for things that matter to your business and looking at specific workloads, I think that you're going to find a primary provider with secondary workloads here and they're scattered elsewhere to be the strategy that people are getting at when they use the word multicloud badly. >> Time's up for you Corey, Stu we've got time for rebuttal and remember, for those of you in the audience, you can vote at any time and who you think is winning this round. Stu, 60 seconds for a rebuttal. >> Yeah, absolutely Corey. Look, you just gave the Andy Jassy of what multicloud should be 70 to 80% goes to a single provider. And it does make sense we know nobody ever said multicloud equals the same amount in multiple environments but you made a clear case as to why multicloud leveraging multi providers is likely what most companies are going to do. So thank you so much for making a clear case as to why multicloud not equal cloud, across multiple providers is the way to go. So thank you for conceding the victory. >> Last Words, Corey. >> If that's what you took from it Stu, I can't get any closer to it than you have. >> All right, let's move on to the next topic then. The next topic is serverless versus containers which technology is going to be used in, let's say, five to 10 years time? And as a reminder, I'm going to assign each of the debaters these topics, their assignments may or may not match their true feelings about this topic, and they definitely don't represent the topics of my employer, CloudHealth by VMware. Stu, you're going to argue for containers. Corey you're going to argue for start serverless. Stu, you're up first. Two minutes on the clock and go. >> All right, so with all respect to my friends in the serverless community, We need to have a reality check as to how things work. We all know that serverless is a ridiculous name because underneath we do need to worry about all of the infrastructure underneath. So containers today are the de facto building block for cloud native architectures, just as the VM defined the ecosystem for an entire generation of solutions. Containers are the way we build things today. It is the way Google has architected their entire solution and underneath it is often something that's used with serverless. So yes, if you're, building an Alexa service, serverless make what's good for you. But for the vast majority of solutions, I need to have flexibility, I need to understand how things work underneath it. We know in IT that it's great when things work, but we need to understand how to fix them when they break. So containerization gets us to that atomic level, really close to having the same thing as the application. And therefore, we saw the millions of users that deploy Docker, we saw the huge wave of container orchestration led by Kubernetes. And the entire ecosystem and millions of customers are now on board with this way of designing and architecting and breaking down the silos between the infrastructure world and the application developer world. So containers, here to stay growing fast. >> All right, Corey, what do you think? Why is serverless the future? 
>> I think that you're right in that containers are the way you get from where you were to something that runs effectively in a cloud environment. That is why Google is so strongly behind Kubernetes; it helps get the entire industry to write code the way that Google might write code. And that's great. But if you're looking at effectively rewriting something from scratch, or building something that's new, the idea of not having to think about infrastructure in the traditional sense, of being able to just say, here, take this code and run it in a given provider that takes whatever it is that you need to do and ties all these other services together, saves an awful lot of time. As that continues to move up the stack towards the idea of no code or low code, suddenly you're now able to build these applications in ways that require just a little bit of code that ties together everything else. We're closer than ever to that old trope of the only code you write is business logic. Serverless gives a much clearer shot at getting there, if you can divorce yourself from the past of legacy workloads. Legacy, of course, meaning older than 18 months and making money. >> Stu, do you have a rebuttal, 60 seconds? >> Yeah. So Corey, we've been talking about this Nirvana in many ways. It's the discussion that we've had about PaaS for over a decade now. I want to be able to write my code once, not worry about where it lives, and do all this. But sometimes there's a reason why we keep trying the same thing over and over again but never reaching it. So serverless is great for some applications. If you're talking about, okay, you're some brand new webby thing there and you don't want to have to build out a team, that's awesome. I've talked to some wonderful people that don't know anything about coding that have built some cool stuff with serverless. But cool stuff isn't what most businesses run on, and therefore containerization is, as you said, a bridge to where I need to go, it lives in these cloud environments, and it is the present and it is the future. >> Corey, your response. >> I agree that it's the present, I doubt that it's the future in quite the same way. Right now Kubernetes is really scratching a major itch, which is how all of these companies who are moving to public cloud can still have their infrastructure teams cosplay as cloud providers themselves. And over time, that becomes simpler, and I think on some level you might even see a convergence where container workloads begin to look a lot more like serverless workloads. Remember, we're aiming at something that is five years away in the context of this question. I think that the serverless and container landscape will look very different. The serverless landscape will be bright and exciting and new, whereas unfortunately the container landscape is going to be represented by people like you, Stu. >> Harsh words from Corey. Stu, any last words or rebuttals? >> Yeah, and look, Corey, absolutely, just like we don't really think about the underlying server or VM, we won't think about the containers, you won't think about Kubernetes in the future, but the question is which technology will be used in five to 10 years, and it'll still be there. Containerization will be the fabric of our lives underneath it all. So that is what we were talking about. Serverless I think will be useful in pockets of places, but will not be the predominant technology five years from now. >> All right, tough to say who won that one? 
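To make the "only code you write is business logic" trope from the exchange above concrete, here is a minimal, hypothetical serverless handler in Python using the standard AWS Lambda entry-point signature; the event fields assume an API Gateway-style proxy event, and the greeting logic is illustrative, not anything specified in the debate.

```python
# Minimal sketch of a serverless function: the handler is essentially the
# whole deployable unit, and the provider supplies servers, scaling, and
# patching around it.
import json


def handler(event, context):
    """Return a greeting based on the 'name' query parameter, if present."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # API Gateway proxy-style response shape (status, headers, body).
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The containerized equivalent of the same logic would wrap it in a web framework, a Dockerfile, and an orchestration manifest, which is roughly the trade-off the two sides are arguing about.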
I'm glad I don't have to decide. I hope everyone out there is voting, last chance to vote on this question before we move on to the next. Next topic is cloud wars. I'm going to give a statement and then I'm going to assign each of you a pro or a con: Google will never be an actual contender in the cloud wars, always a far third. We're going to have Corey arguing that Google is never going to be an actual contender. And Stu, you're going to argue that Google is eventually going to overtake the top two, AWS and Azure. As a constant reminder, I'm assigning these topics, it's my decision, and also they don't match the opinions of me, my employer, or likely Stu or Corey. This is all just for fun and games. But I really want to hear what everyone has to say. So Corey, you're up first, two minutes. Why is Google never going to be an actual contender? And go. >> The biggest problem Google has in the time of cloud is their ability to forecast longer term on anything that isn't their advertising business, and their ability to talk to human beings long enough to meet people where they are. Replacing their entire culture is what it's going to take to succeed in the time of cloud, and with respect, Thomas Kurian is a spectacular leader internally, but look at where he's come from. He spent 22 years at Oracle and now has been transplanted into Google. If we take a look at Satya Nadella's cloud transformation at Microsoft, he was able to pull that off as an insider, after having known intimately every aspect of that company, and he grew organically with it and was perfectly positioned to make that change. You can't instill that kind of culture change by dropping someone externally on top of an organization and expecting this magic where one day they wake up and everything's going to work out super well. Google has a tremendous number of strengths, and I don't see that providing common denominator cloud computing services to a number of workloads that, from a Google perspective, are horrifying is necessarily in their wheelhouse. It feels like their entire focus on this is, well, there's money over there. We should go get some of that too. It comes down to the traditional Google lack of focus. >> Stu, rebuttal? Why do you think Google has a shot? >> Yeah, so first of all, Corey, I think we'd agree Google is a powerhouse in the world today. My background is networking, and when they first came out with Google Cloud, I said Google has the best network, second to none in the world. They are ubiquitous today. If you talk about the impact they have on the world: Android phones, you mentioned Kubernetes, everybody uses G Suite, Maps, YouTube, and the like. That does not mean that they are necessarily going to become the clear leader in cloud, but, Corey, they've got really, really smart people. If you're not familiar with that, talk to them. They'll tell you how smart they are. And they have built phenomenal solutions. Who's going to be able to solve the everyday challenge of true distributed systems, a global database that can handle clocks down to the atomic level? Google's the one that does that, and we've all read the white papers on that. They've set the tone for Hadoop and various solutions that are all over the place, and their secret weapon is not the advertising, of course, that is a big concern for them, but it is that, if you talk about consumer adoption, everyone uses Google. My kids have all had Chromebooks growing up. 
It isn't their favorite thing, but they get, indoctrinated with Google technology. And as they go out and leverage technologies in the world, Google is one that is known. Google has the strength of technology and a lot of positioning and partnerships to move them forward. Everybody wants a strong ecosystem in cloud, we don't want a single provider. We already discussed this before, but just from a competitive nature standpoint, if there is a clear counterbalance to AWS, I would say that it is Google, not Microsoft, that is positioned to be that clear and opportune. >> Interesting, very interesting Stu. So your argument is the Gen Zers will of ultimately when they come of age become the big Google proponents. Some strong words that as well but they're the better foil to AWS, Corey rebuttal? >> I think that Stu is one t-shirt change away from a pitch perfect reenactment of Charlie Brown. In this case with Google playing the part of Lucy yanking the football away every time. We've seen it with inbox, Google Reader, Google Maps, API pricing, GKE's pricing for control plane. And when your argument comes down to a suddenly Google is going to change their entire nature and become something that it is as proven as constitutionally incapable of being, namely supporting something that its customers want that it doesn't itself enjoy working on. And to the exclusion of being able to get distracted and focused on other things. Even their own conferences called Next because Google is more interested in what they're shipping than what they're building, than what they're currently shipping. I think that it is a fantasy to pretend that that is somehow going to change without a complete cultural transformation, which again, I don't see the seeds being planted for. >> Some sick burns in there Stu, rebuttal? >> Yeah. So the final word that I'll give you on this is, one of the most important pieces of what we need today. And we need to tomorrow is our data. Now, there are some concerns when we talk about Google and data, but Google also has strong strength in data, understanding data, helping customers leverage data. So while I agree to your points about the cultural shift, they have the opportunity to take the services that they have, and enable customers to be able to take their data to move forward to the wonderful world of AI, cloud, edge computing, and all of those pieces and solve the solution with data. >> Strong words there. All right, that's a tough one. Again, I hope you're all out there voting for who you think won that round. Let's move on to the last round before we start hitting the lightning questions. I put a call out on several channels and social media for people to have questions that they want you to debate. And this one comes from Og-AWS Slack member, Angelo. Angelo asks, "What about IBM Cloud?" Stu you're pro, Corey you're con. Let's have Stu you're up first. The question is, what about IBM Cloud? >> All right, so great question, Angelo. I think when you look at the cloud providers, first of all, you have to understand that they're not all playing the same game. We talked about AWS and they are the elephant in the room that moves nimbly as a cheetah. Every other provider plays a little bit of a different game. Google has strength in data. Microsoft, of course, has their, business productivity applications. IBM has a strong legacy. Now, Corey is going to say that they are just legacy and you need to think about them but IBM has strong innovation. 
They are a player in really what we call chapter two of the cloud. So when we start talking about multicloud, when we start talking about living in many environments, IBM was the first one to partner with VMware for VMware cloud. Before the mega VMware AWS announcement, there was IBM up on stage, and if I remember right, they actually have more VMware customers on IBM Cloud than they do in the AWS cloud. So over my shoulder here, there's of course the $34 billion Red Hat bet on that multicloud solution. So as we talk about containerization and Kubernetes, Red Hat is strongly positioned in open-source and flexibility. So you really need a company that understands both the infrastructure side and the application side. IBM has databases, IBM has infrastructure, IBM has long been the leader in middleware, and therefore IBM has a real chance to be a strong player in this next generation of platforms. Doesn't mean that they're necessarily going to go attack Amazon, they're partnering across the board. So I think you will see a kinder, gentler IBM, and they are leveraging open source and Red Hat, and I think we've let the dogs out on the IBM solution. >> Indeed. >> So before Corey goes, I feel the need to remind everyone that the views expressed here are not the views of my employer nor myself, nor necessarily of Corey or Stu. I have Corey. >> I haven't even said anything yet. And you're disclaiming what I'm about to say. >> I'm just warning the audience, 'cause I can't wait to hear what you're going to say next. >> Sounds like I have to go for the high score. All right. IBM's best days are behind it. And that is pretty clear. They like to get angry when people make jokes about a homogeneous-looking group of guys in blue suits being all IBM has to offer. They say that hasn't been true since the '80s. But that was the last time people cared about IBM in any meaningful sense, and no one has bothered to update the relevance since then. Now, credit where due, I am seeing an awful lot of promoted tweets from IBM in my timeline, all talking about how amazing their IBM blockchain technology is. And yes, that is absolutely the phrasing of someone who's about to turn it all around and win the game. I don't see it happening. >> Stu, rebuttal? >> Look, Corey, IBM was the company that brought us the UPC code. They understand mass manufacturing, and blockchain actually shows strong presence in supply chain management. So maybe you're not quite aware of some of the industries that IBM is an expert in. So that is one of the big strengths of IBM, they really understand verticals quite well. And at the IBM Think show, I saw a lot in the healthcare world, very large customers that were leveraging those solutions. So while you might dismiss it when they say, oh, well, one of the largest telecom providers in India is leveraging OpenStack, and you kind of shrug it off, well, they've got 300 million customers, and they're thrilled with the solution that they're doing with IBM. So it is easy to scoff at them, but IBM is a reliable, trusted provider out there and still very strong financially, and by the way, I'm really excited about the new leadership in place there: Arvind Krishna knows product, Jim Whitehurst came from the Red Hat side. So don't be sleeping on IBM. >> Corey, any last words? >> I think that they're subject to massive disruption as soon as they release the AWS 400 mainframe in the cloud. 
And I think that before we, it's easy to forget this, but before Google was turning off Reader, IBM stopped making the model M buckling spring keyboards. Those things were masterpieces and that was one of the original disappointments that we learned that we can't fall in love with companies, because companies in turn will not love us back. IBM has demonstrated that. Lastly, I think I'm thrilled to be working with IBM is exactly the kind of statement one makes only at gunpoint. >> Hey, Corey, by the way, I think you're spending too much time looking at all titles of AWS services, 'cause you don't know the difference between your mainframe Z series and the AS/400 which of course is heavily pending. >> Also the i series. Oh yes. >> The i series. So you're conflating your system, which still do billions of dollars a year, by the way. >> Oh, absolutely. But that's not we're not seeing new banks launching and then building on top of IBM mainframe technology. I'm not disputing that mainframes were phenomenal. They were, I just don't see them as the future and I don't see a cloud story. >> Only a cloud live your mainframe related smack talk. That's the important thing that we're getting to here. All right, we move-- >> I'm hoping there's an announcement from CloudHealth by VMware that they also will now support mainframe analytics as well as traditional cloud. >> I'll look into that. >> Excellent. >> We're moving on to the lightning rounds. Each debater in this round is only going to get 60 seconds for their opening argument and then 30 seconds for a rebuttal. We're going to hit some really, really big important questions here like this first one, which is who deserves to sit on the Iron Throne at the end of "Game of Thrones?" I've been told that Corey has never seen this TV show so I'm very interested to hear him argue for Sansa. But let's Sansa Stark, let's hear Stu go first with his argument for Jon Snow. Stu one minute on the clock, go. >> All right audience let's hear it from the king of the north first of all. Nothing better than Jon Snow. He made the ultimate sacrifice. He killed his love to save Westeros from clear destruction because Khaleesi had gone mad. So Corey is going to say something like it's time for the women to do this but it was a woman she went mad. She started burning the place down and Jon Snow saved it so it only makes sense that he should have done it. Everyone knows it was a travesty that he was sent back to the Wall, and to just wander the wild. So absolutely Jon Snow vote for King of the North. >> Compelling arguments. Corey, why should Sansa Stark sit on the throne? Never having seen the show I've just heard bits and pieces about it and all involves things like bloody slaughters, for example, the AWS partner Expo right before the keynote is best known as AWS red wedding. We take a look at that across the board and not having seen it, I don't know the answer to this question, but how many of the folks who are in positions of power we're in fact mediocre white dudes and here we have Stu advocating for yet another one. Sure, this is a lightning round of a fun event but yes, we should continue to wind up selecting this mediocre white person has many parallels in terms of power, et cetera, politics, current tech industry as a whole. I think she's right we absolutely should give someone with a look like this a potential opportunity to see what they can do instead. >> Ouch, Stu 30 seconds rebuttal. 
>> Look, I would just give a call out to the women in the audience and say, don't you want Jon Snow to be king? >> I also think it's quite bold of Corey to say that he looks like Kit Harington. Corey, any last words? >> I think that it sad you think Stu was running for office at this point because he's become everyone's least favorite animal, a panda bear. >> Fire. All right, so on to the next question. This one also very important near and dear to my heart personally, is a hot dog a sandwich. Corey you'll be arguing no, Stu will be arguing yes. I must also add this important disclaimer that these assignments are made by me and might not reflect the actual views of the debaters here so Corey, you're up first. Why is a hot dog not a sandwich? >> Because you'll get punched in the face if you go to a deli of any renown and order a hot dog. That is not what they serve there. They wind up having these famous delicatessen in New York they have different sandwiches named after different celebrities. I shudder to think of the deadly insult that naming a hot dog after a celebrity would be to that not only celebrity in some cases also the hot dog too. If you take a look and you want to get sandwiches for lunch? Sure. What are we having catered for this event? Sandwiches. You show up and you see a hot dog, you're looking around the hot dog to find the rest of the sandwich. Now while it may check all of the boxes for a technical definition of what a sandwich is, as I'm sure Stu will boringly get into, it's not what people expect, there's a matter of checking the actual boxes, and then delivering what customers actually want. It's why you can let your product roadmap be guided by cart by customers or by Gartner but rarely both. >> Wow, that one hurts. Stu, why is the hot dog a sandwich? >> Yeah so like Corey, I'm sorry that you must not have done some decent traveling 'cause I'm glad you brought up the definition because I'm not going to bore you with yes, there's bread and there's meat and there's toppings and everything else like that but there are some phenomenal hot dogs out there. I traveled to Iceland a few years ago, and there's a little hot dog stand out there that's been there for over 40 or 50 years. And it's one of the top 10 culinary experience I put in. And I've been to Michelin star restaurants. You go to Chicago and any local will be absolutely have to try our creation. There are regional hot dogs. There are lots of solutions there and so yeah, of course you don't go to a deli. Of course if you're going to the deli for takeout and you're buying meats, they do sell hot dogs, Corey, it's just not the first thing that you're going to order on the menu. So I think you're underselling the hot dog. Whether you are a child and grew up and like eating nothing more than the mustard or ketchup, wherever you ate on it, or if you're a world traveler, and have tried some of the worst options out there. There are a lot of options for hot dogs so hot dog, sandwich, culinary delight. >> Stu, don't think we didn't hear that pun. I'm not sure if that counts for or against you, but Corey 30 seconds rebuttal. >> In the last question, you were agitating for putting a white guy back in power. Now you're sitting here arguing that, "Oh some of my best friend slash meals or hot dogs." Yeah, I think we see what you're putting down Stu and it's not pretty, it's really not pretty and I think people are just going to start having to ask some very pointed, delicate questions. >> Tough words to hear Stu. 
Close this out or rebuttal. >> I'm going to take the high road, Rachel, and leave that where it stands. >> I think that is smart. All right, next question. Tabs versus spaces. Stu, you're going to argue for tabs, Corey, you're going to argue for spaces, just to make this fun. Stu, 60 seconds on the clock, you're up first. Why are tabs the correct approach? >> First of all, my competitor here really isn't into pop culture. So he's probably not familiar with the epic Silicon Valley argument over this discussion. So, Corey, if you could explain the middle-out algorithm, we will be quite impressed, but since you don't, we'll just have to go with some of the technology first. Look, developers, we want to make things simple on you. Tabs, they're faster to type and they take up less memory. Yes, they aren't quite as particular as using spaces, but absolutely, they get the job done, and it is important to just focus on productivity. I believe, as always, the less code you can write, the better, and therefore, if you don't have to focus on exactly how many spaces and you can just simplify with the tabs, you're gonna get close enough for most of the job. And it is easier to move forward and focus on the real work rather than some pedantic discussion as to whether one thing is slightly more efficient than the other. >> Great points Stu. Corey, why is your pedantic approach better? >> No one is suggesting you sit there and whack the spacebar four times or eight times; you hit the Tab key, but your editor should be reasonably intelligent enough to expand that into spaces. At that point, you have now set up a precedent where, in other parts of your codebase, you're using spaces, because everyone always does. And that in turn winds up causing a weird dissonance: you'll see a bunch of linters throwing issues if you use tabs as a direct result. Now the wrong answer is, of course, both in the same line, and I think Stu will agree with me. No one is ever in favor of that. But I also want to argue with Stu over his argument that "Oh, it saves a little bit of space" is the reason one should go with tabs instead. Sorry, that argument said bye bye a long time ago, and that time was the introduction of JavaScript, where it takes many hundreds of megs of data to wind up building hello world. Yeah, at that point, optimization around small character changes is completely irrelevant. >> Stu, rebuttal? >> Yeah, I noticed that Corey did not try to pretend that he had any idea what Silicon Valley was, or catch any of the references in there. So Rachel, we might have to avoid any other pop culture references. We know Corey just looks at very specific cloud services and can't have fun with some of the broader themes there. >> You're right, my mistake, Stu. Corey, any last words? >> It's been suggested that that whole middle-out scene on the whiteboard came from a number of conversations I used to have with my co-workers, as in, people who were sitting in the room with me watching that episode said, oh my God, I've been in the room while you had this debate with your friend, who I will not name here because they at least still strive to remain employable. Yeah, I understand the value in picking these fights; we could have gone just as easily with vi versus Emacs, AWS versus Azure, or anything else that you really care to pick a fight over. But yeah, this is exactly the kind of pedantic fight that everyone loves to get involved with, which is why I walked a different path and picked other ridiculous arguments. 
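For readers wondering why the tabs-versus-spaces fight is more than taste, the small Python sketch below shows the failure mode behind Corey's point about linters: Python 3 refuses to guess when tabs and spaces are mixed ambiguously in one block and raises TabError at compile time. The snippet is contrived purely for illustration.

```python
# Minimal sketch: mixing tabs and spaces in one block is a hard error in
# Python 3, not just a linter warning.
SOURCE = (
    "def add_one(x):\n"
    "\ty = x + 1\n"       # this line is indented with a tab
    "        return y\n"  # this line is indented with eight spaces
)

try:
    compile(SOURCE, "<example>", "exec")
except TabError as err:
    # Python cannot tell whether the two lines sit at the same depth,
    # because the answer depends on how wide a tab is assumed to be.
    print(f"TabError: {err}")
```

Editors that expand the Tab key into spaces, as Corey describes, sidestep the problem entirely.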
>> Speaking of those ridiculous arguments that brings us to our last debate topic of the day, Corey you are probably best known for your strong feelings about the pronunciation of the acronym for Amazon Machine Image. I will not be saying how I think it is pronounced. We're going to have you argue each. Stu, you're going to argue that the acronym Amazon Machine Image should be pronounced to rhyme with butterfly. Corey, you'll be arguing that it rhymes with mommy. Stu, rhymes with butterfly. Let's hear it, 60 seconds on the clock. >> All right, well, Rachel, first of all, I wish I could go to the videotape because I have clear video evidence from a certain Corey Quinn many times arguing why AMI is the proper way to pronounce this, but it is one of these pedantic arguments, is it GIF or GIF? Sometimes you go back and you say, Okay, well, there's the way that the community did it. And the way that oh wait, the founder said it was a certain way. So the only argument against AMI, Jeff Barr, when he wrote about the history of all of the blogging that he's done from AWS said, I wish when I had launched the service that I pointed out the correct pronunciation, which I won't even deem to talk it because the community has agreed by and large that AMI is the proper way to pronounce it. And boy, the tech industry is rific on this kind of thing. Is it SQL and no SQL and you there's various ways that we butcher these constantly. So AMI, almost everyone agrees and the lead champion for this argument, of course is none other than Corey Quinn. >> Well, unfortunately today Corey needs to argue the opposite. So Corey, why does Amazon Machine Image when pronounce as an acronym rhyme with mommy? >> Because the people who built it at Amazon say that it is and an appeal to authorities generally correct when the folks built this. AWS has said repeatedly that they're willing to be misunderstood for long periods of time. And this is one of those areas in which they have been misunderstood by virtually the entire industry, but they are sticking to their guns and continuing to wind up advocating for AMI as the correct pronunciation. But I'll take it a step further. Let's take a look at the ecosystem companies. Whenever Erica Brescia, who is now the COO and GitHub, but before she wound up there, she was the founder of Bitnami. And whenever I call it Bitn AMI she looks like she is barely successfully restraining herself from punching me right in the mouth for that pronunciation of the company. Clearly, it's Bitnami named after the original source AMI, which is what the proper term pronunciation of the three letter acronym becomes. Fight me Stu. >> Interesting. Interesting argument, Stu 30 seconds, rebuttal. >> Oh, the only thing he can come up with is that, you take the word Bitnami and because it has that we know that things sound very different if you put a prefix or a suffix, if you talk to the Kubernetes founders, Kubernetes should be coop con but the people that run the conference, say it cube con so there are lots of debates between the people that create it and the community. I in general, I'm going to vote with the community most of the time. Corey, last words on this topic 'cause I know you have very strong feelings about it. >> I'm sorry, did Stu just say Kubernetes and its community as bastions of truth when it comes to pronouncing anything correctly? 
Half of that entire conference is correcting people's pronunciation of Kubernetes, Kubernetes, Kubernetes, Kubernetes and 15 other mispronunciations that they will of course yell at you for but somehow they're right on this one. All right. >> All right, everyone, I hope you've been voting all along for who you think is winning each round, 'cause this has been a tough call. But I would like to say that's a wrap for today. big thank you to our debaters. You've been very good sports, even when I've made you argue for against things that clearly are hurting you deep down inside, we're going to take a quick break and tally all the votes. And we're going to announce a winner up on the Zoom Q and A. So go to the top of your screen, Click on Zoom Q and A to join us and hear the winner announced and also get a couple minutes to chat live with Corey and Stu. Thanks again for attending this session. And thank you again, Corey and Stu. It's been The Great Cloud Debate. All right, so each round I will announce the winner and then we're going to announce the overall winner. Remember that Corey and Stu are playing not just for bragging rights and ownership of all of the internet for the next 24 hours, but also for lunch to be donated to their local hospital. Corey is having lunch donated to the California Pacific Medical Centre. And Stu is having lunch donated to Boston Medical Centre. All right, first up round one multicloud versus monocloud. Stu, you were arguing for multicloud, Corey, you were arguing for one cloud. Stu won that one by 64% of the vote. >> The vendor fix was in. >> Yeah, well, look, CloudHealth started all in AWS by supporting customers across those environments. So and Corey you basically conceded it because we said multicloud does not mean we evenly split things up. So you got to work on those two skills, buddy, 'cause, absolutely you just handed the victory my way. So thank you so much and thank you to the audience for understanding multicloud is where we are today, and unfortunately, it's where we're gonnao be in the future. So as a whole, we're going to try to make it better 'cause it is, as Corey and I both agree, a bit of a mess right now. >> Don't get too cocky. >> One of those days the world is going to catch up with me and realize that ad hominem is not a logical fallacy so much as it is an excellent debating skill. >> Well, yeah, I was going to say, Stu, don't get too cocky because round two serverless versus containers. Stu you argued for containers, Corey you argued for serverless. Corey you won that one with 65, 66 or most percent of the vote. >> You can't fight the future. >> Yeah, and as you know Rachel I'm a big fan of serverless. I've been to the serverless comp, I actually just published an excellent interview with Liberty Mutual and what they're doing with serverless. So love the future, it's got a lot of maturity to deliver on the promise that it has today but containers isn't going anyway or either so. >> So, you're not sad that you lost that one. Got it, good concession speech. Next one up was cloud wars specifically Google. is Google a real contender in the clouds? Stu, you were arguing yes they are. Corey, you were arguing no they aren't. Corey also won this round was 72% of the votes. >> Yeah, it's one of those things where at some point, it's sort of embarrassing if you miss a six inch pot. So it's nice that that didn't happen in this case. >> Yeah, so Corey, is this the last week that we have any competitors to AWS? Is that what we're saying? 
And we all accept our new overlords. Thank you so much, Corey. >> Well I hope not, my God, I don't know what to be an Amazonian monoculture anymore than I do anyone else. Competition makes all of us better. But again, we're seeing a lot of anti competitive behaviour. For example, took until this year for Microsoft to finally make calculator uninstallable and I trust concerned took a long time to work its way of course. >> Yeah, and Corey, I think everyone is listening to what you've been saying about what Google's doing with Google Meet and forcing that us when we make our pieces there. So definitely there's some things that Google culture, we'd love them to clean up. And that's one of the things that's really held back Google's enterprise budget is that advertised advertising driven culture. So we will see. We are working hand-- >> That was already opted out of Hangouts, how do we fix it? We call it something else that they haven't opted out of yet. >> Hey, but Corey, I know you're looking forward to at least two months of weekly Google live stuff starting this summer. So we'll have a lot of time to talk about google. >> Let's not kid ourselves they're going to cancel it halfway through. (Stu laughs) >> Boys, I thought we didn't have any more smack talk left in you but clearly you do. So, all right, moving on. Next slide. This is the last question that we did in the main part of the debate. IBM Cloud. What about IBM Cloud was the question, Stu, you were pro, Corey you were con. Corey, you won this one again with 62% of the vote and for the main. >> It wasn't just me, IBM Cloud also won. The problem is that competition was oxymoron of the day. >> I don't know Rachel, I thought this one had a real shot as to putting where IBM fits. I thought we had a good discussion there. It seemed like some of the early voting was going my way but it just went otherwise. >> It did. We had some last minute swings in these polls. They were going one direction they rapidly swung another it's a fickle crowd today. So right now we've got Corey with three points Stu with one but really the lightning round anyone's game. They got very close here. The next question, lightning round question one, was "Game of Thrones" who deserves to sit on the Iron Throne? Stu was arguing for Jon Snow, Corey was arguing for Sansa Stark also Corey has never seen Game of Thrones. This was shockingly close with Stu at 51.5% of the vote took the crown on this King of the North Stu. >> Well, I'm thrilled and excited that King of the North pulled things out because it would have been just a complete embarrassment if I lost to Corey on this question. >> It would. >> It was the right answer, and as you said, he had no idea what he's talking about, which, unfortunately is how he is on most of the rest of it. You just don't realize that he doesn't know what he's talking about. 'Cause he uses all those fast words and discussion points. >> Well, thank you for saying the quiet part out loud. Now, I am completely crestfallen as to the results of this question about a thing I've never seen and could not possibly care less about not going in my favor. I will someday managed to get over this. >> I'm glad you can really pull yourself together and keep on going with life, Corey it's inspiring. All right, next question. Was the lightning round question two is a hot dog a sandwich? Stu, you were arguing yes. Corey, you were arguing no. Corey landslide, you won this 75% of the vote. >> It all comes down to customer expectations. >> Yeah. 
>> Just disappointment. Disappointment. >> All right, next question, tabs versus spaces. Another very close one. Stu, what were you arguing for? >> I was voting tabs. >> Tabs, yeah. And Corey, you were arguing spaces. This did not turn out the way I expected. So Stu, you lost this by a slim margin; Corey, with 53% of the vote, you won with spaces. >> Yep. And I use spaces in my day to day life. So that's a position I can actually believe in. >> See, I thought I was giving you the opposite point of view there. I mistook you for the correct answer, in my opinion, which is tabs. >> Well, it is funnier to stalk me on Twitter and look at what I have to say there than on GitHub, where I just completely commit different kinds of atrocities. So I don't blame you. >> Caught that pun there. All right, the last round. Speaking of atrocities, AMI, Amazon Machine Image, is it pronounced AMI or AMI? >> I better not have won this one. >> So Stu, you were arguing that this is pronounced AMI, rhymes with butterfly. Corey, you were arguing that it's pronounced AMI, like mommy. Any guesses on who won this? >> It better be Stu. >> It was a 50, 50 split, a complete tie. So no points to anyone. >> You all completely and utterly failed on this, because I should have won in a landslide. My entire argument was based on every discussion you've had on this. So, Corey, I think they're just voting for you. So I'm really surprised-- >> I think at this point it shows I'm such a skilled debater that I could have also probably brought you to a standstill taking the position that gravity doesn't exist. >> You're a master of few things, Corey. Usually it's when you're dressed up nicely, and I think they like the t-shirt. It's a nice t-shirt, but you're not usually hiding behind the attire. >> Truly. >> Well. >> Clothes don't always make the man. >> Gentlemen, I would like to say overall our winner today with five points is Corey. Congratulations, Corey. >> Thank you very much. It's always a pleasure to mop the floor with you, Stu. >> Actually I was going to ask Stu to give the acceptance speech for you, Corey, and, Corey, if you could give a few words of concession. >> Oh, that's a different direction. Stu, we'll start with you, I suppose. >> Yeah, well, thank you to the audience. Obviously, you voted for me without really understanding that I don't know what I'm talking about. I'm a loudmouth on Twitter. I just create a bunch of arguments out there. I'm influential for reasons I don't really understand. But once again, thank you for your votes so much. >> Yeah, it's always unfortunate to wind up losing a discussion with someone, and you wouldn't consider it losing, 'cause most of the time my entire shtick is that I sit around and talk to people who know what they're talking about, and I look smart just by osmosis sitting next to them. Video has been rough on me. So I was sort of hoping that I'd be able to parlay that into something approaching a victory. But sadly, that hasn't worked out quite so well. This is just yet another production brought to you by theCube, which shut down my original idea of calling it a bunch of squares. (Rachael laughs) >> All right, well, on that note, I would like to say thank you both, Stu and Corey. I think we can officially close out the debate, but we can all stick around for a couple more minutes in case any fans have questions for either of them or want to get them-- >> Find us in real life? Yeah. >> Yeah, have a quick Zoom fight. So thanks, everyone, for attending. And thank you Stu, thank you Corey. 
This has been The Great Cloud Debate.

Published Date : Jun 18 2020



Anurag Goel, Render & Steve Herrod, General Catalyst | CUBE Conversation, June 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to this CUBE Conversation, from our Boston area studio, I'm Stu Miniman, happy to welcome to the program, first of all we have a first time guest, always love when we have a founder on the program, Anurag Goel is the founder and CEO of Render, and we've brought along a longtime friend of the program, Dr. Steve Herrod, he is a managing director at General Catalyst, a investor in Render. Anurag and Steve, thanks so much for joining us. >> Thank you for having me. >> Yeah, thanks, Stu. >> All right, so Anurag, Render, your company, the tagline is the easiest cloud for developers and startups. It's a rather bold statement, most people feel that the first generation of cloud has happened and there were certain clear winners there. The hearts and minds of developers absolutely has been a key thing for many many companies, and one of those drivers in the software world. Why don't you give us a little bit of your background, and as the founder of the company, what was it, the opportunity that you saw, that had you create Render? >> Yeah, so I was the fifth engineer at Stripe, and helped launch the company and grow it to five billion dollars in revenue. And throughout that period, I saw just how much money we were spending on just hiring DevOps engineers, AWS was a huge huge management headache, really, there's no other way to describe it. And even after I left Stripe, I was thinking hard about what I wanted to do next, and a lot of those ideas required some form of development and deployment, and putting things in production, and every single time I had to do the same thing over and over and over again, as a developer, so despite all the advancements in the cloud, it was always repetitive work, that wasn't just for my projects, I think a lot of my friends felt the same way. And so, I decided that we needed to automate some of these new things that have come about, as part of the regular application deployment process, and how it evolves, and that's how Render was born. >> All right, so Steve, remember in the early days, cloud was supposed to be easy and inexpensive, I've been saying on theCUBE it's like well, I guess it hasn't quite turned out that way. Love your viewpoint a little bit, because you've invested here, to really be competitive in the cloud, tens of billions of dollars a year, that need to go into this, right? >> Yeah, I had the fortunate chance to meet Anurag early on, General Catalyst was an investor in Stripe, and so seeing what they did sort of spurred us to think about this, but I think we've talked about this before, also, on theCUBE, even back, long ago in the VMware days, we looked very seriously at buying Heroku, one of the early players, and still around, obviously, at Salesforce in this PaaS space, and every single infrastructure conversation I've had from the start, I have to come back to myself and come back to everyone else and just say, don't forget, the only reason any infrastructure even exists is to run applications. And as we talked about, the first generation of cloud, it was about, let's make the infrastructure disappear, and make it programmatic, but I think even that, we're realizing from developers, that is just still way too low of an abstraction level. 
You want to write code, you want to have it in GitHub, and you want to just press go, and it should automatically deploy, automatically scale, automatically secure itself, and just let the developer focus purely on the app, and that's a idea that people have been talking about for 20 years, and should continue to talk about, but I really think with Render, we found a way to make it just super easy to deploy and run, and certainly it is big players out there, but it really starts with developers loving the platform, and that's been Anurag's obsession since I met him. >> Yeah, it's interesting, when I first was reading I'm like "Wait," reminds me a lot of somebody like DigitalOcean, cloud for developers who are, Steve, we walked through, the PaaS discussion has gone through so many iterations, what would containerization do for things, or serverless was from its name, I don't need to think about that underlying layer. Anurag, give us a little bit as to how should we think of Render, you are a cloud, but you're not so much, you're not an infrastructure layer, you're not trying to compete against the laundry list of features that AWS, Azure, or Google have, you're a little bit different than some of the previous PaaS players, and you're not serverless, so, what is Render? >> Yeah, it is actually a new category that has come about because of the advent of containers, and because of container orchestration tools, and all of the surrounding technologies, that make it possible for companies like Render to innovate on top of those things, and provide experiences to developers that are essentially serverless, so by serverless you could mean one of two things, or many things really, but the way in which Render is serverless is you just don't have to think about servers, all you need to do is connect your code to GitHub, and give Render a quick start command for your server and a build command if needed, and we suggest a lot of those values ourselves, and then every push to your GitHub repo deploys a new version of your service. And then if you wanted to check out pull requests, which is a way developers test out code before actually pushing it to deployment, every pull request ends up creating a new instance of your service, and you can do everything from a single static site, to building complex clusters of several microservices, as well as managed Postgres, things like clustered Kafka and Elasticsearch, and really one way to think about Render, is it is the platform that every company ends up building internally, and spends a lot of time and money to build, and we're just doing it once for everyone and doing it right, and this is what we specialize in, so you don't have to. >> Yeah, just to add to that if I could, Stu, what's I think interesting is that we've had and talked about a lot of startups doing a lot of different things, and there's a huge amount of complexity to enable all of this to work at scale, and to make it work with all the things you look for, whether it's storage or CDNs, or metrics and alerting and monitoring, all of these little startups that we've gone through and big companies alike, if you could just hide that entirely from the developer and just make it super easy to use and deploy, that's been the mission that Anurag's been on to start, and as you hear it from some of the early customers, and how they're increasing the usage, it's just that love of making it simple that is key in this space. 
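
As a rough illustration of the push-to-deploy flow Anurag describes above (connect a GitHub repo, give the platform a build command and a start command, and every push becomes a deploy), here is a minimal sketch of the kind of service you might point Render at. The framework choice, the PORT environment variable, and the exact commands in the comments are assumptions for the example, not details taken from the interview.

```python
# app.py -- a minimal web service of the kind you would point a push-to-deploy
# platform at. Everything named here is illustrative: assume a build command
# such as "pip install -r requirements.txt" and a start command such as
# "python app.py" typed into the service settings.
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Each push to the connected GitHub branch would redeploy this handler;
    # a pull request would get its own preview instance of the same code.
    return "Hello from a push-to-deploy web service\n"

if __name__ == "__main__":
    # Bind to all interfaces on the port the platform injects (assumed here to
    # arrive as a PORT environment variable; 5000 is just a local default).
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "5000")))
```

Running `python app.py` locally starts the same code the platform would start, which keeps the development and deployment paths identical.
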
>> All right, yeah, Anurag, maybe it would really help illustrate things if you could talk a little bit about some of your early customers, their use case, and give us what stats you can about how your company's growing. >> Certainly. So, one of our more prominent customers was the Pete Buttigieg campaign, which ran through most of 2019, and through the first couple of months of 2020. And they moved to us from Google Cloud, because they just could not or did not want to deal with the complexity in today's standard infrastructure providers, where you get a VM and then you have to figure out how to work with it, or even Managed Kubernetes, actually, they were trying to run on Managed Kubernetes on GKE, and that was too complex or too much to manage for the team. And so they moved all of their infrastructure over to Render, and they were able to service billions of requests over the next few months, just on our platform, and every time Pete Buttigieg went on stage during a debate and said "Oh, go to PeteForAmerica.com," there's a huge spike in traffic on our platform, and it scaled with every debate. And so that's just one example of where really high quality engineering teams are saying "No, this stuff is too complex, it doesn't need to be," and there is a simpler alternative, and Render is filling in that gap. We also have customers all over, from single indie hackers who are just building out their new project ideas, to late stage companies like Stripe, where we are making sure that we scale with our users, and we give them the things that they would need without them having to "mature" into AWS, or grow into AWS. I think Render is built for the entire lifecycle of a company, which is you start off really easily, and then you grow with us, and that is what we're seeing with Render where a lot of customers are starting out simple and then continuing to grow their usage and their traffic with us. >> Yeah, I was doing some research getting ready for this, Anurag, I saw, not necessarily you're saying that you're cheaper, but there are some times that price can help, performance can be better, if I was a Heroku customer, or an AWS customer, I guess what might be some of the reasons that I'd be considering Render? >> So, for Heroku, I think the comparison of course, there's a big difference in price, because we think Heroku is significantly overpriced, because they have a perpetual free tier, and so their paid customers end up footing the bill for that. We don't have a perpetual free tier that way, we make sure that our paid customers pay what's fair, but more importantly, we have features that just haven't been available in any platform as a service up until now, for example, you cannot spin up persistent storage, block storage, in Heroku, you cannot set up private networking in Heroku as a developer, unless you pay for some crazy enterprise tier which is 1500, 3000 dollars a month. 
And Render just builds all of that into the platform out of the box, and when it comes to AWS, again, there's no comparison in terms of ease of use, we'll never be cheaper than AWS, that's not our goal either, it's our goal to make sure that you never have to deal with the complexity of AWS while still giving you all of the functionality that you would need from AWS, and when you think about applications as applications and services as opposed to applications that are running on servers, that's where Render makes it much easier for developers and development teams to say "Look, we don't actually need "to hire hundreds of DevOps people," we can significantly reduce our DevOps team and the existing DevOps team that we have can focus on application-level concerns, like performance. >> All right, so Steve, I guess, a couple questions for you, number one is, we haven't talked about security yet, which I know is a topic near and dear to your heart, was one of the early concerns about cloud, but now often is a driver to move to cloud, give us the security angle for this space. >> Yeah, I mean the key thing in all of the space is to get rid of the complexity, and complexity and human error is often, as we've talked about, that is the number one security problem. So by taking this fresh approach that's all about just the application, and a very simple GitOps-based workflow for it, you're not going to have the human error that typically has misconfigured things and coming into there, I think more broadly, the overall notion of the serverless world has also been a very nice move forward for security. If you're only bringing up and taking down the pieces of the application as needed, they're not there to be hacked or attacked. So I think for those two reasons, this is really a more modern way of looking at it, and again, I think we've talked about many times, security is the bane of DevOps, it's the slowest part of any deployment, and the more we get rid of that, the more the extra value proposition comes safer and also faster to deploy. >> The question I'd like to hear both of you is, the role of the developer has changed an awful lot. Five years ago, if I talked to companies, and they were trying to bring DevOps to the enterprise, or anything like that, it seemed like they were doomed, but things have matured, we all understand how important the developer is, and it feels like that line between the infrastructure team and the developer team is starting to move, or at least have tools and communication happening between them, I'd love, maybe Steve if you can give us a little bit your macroview of it, and Anurag, where that plays for Render too. >> Yeah, and Anurag especially would be able to go into our existing customers. What I love about Render, this is a completely clean sheet approach to thinking about, get rid of infrastructure, just make it all go away, and have it be purely there for the developers. Certainly the infrastructure people need to audit and make sure that you're passing the certifications and make sure that it has acceptable security, and data retention and all those other pieces, but that becomes Anurag's problem, not the developer problem. And so that's really how you look at it. The second thing I've seen across all these startups, you don't typically have, especially, you're not talking about startups, but mid-sized companies and above, they don't convert all the way to DevOps. 
You typically have people peeling off individual projects, and trying to move faster, and use some new approach for those, and then as those hopefully go successful, more and more of the existing projects will begin to move over there, and so what Render's been doing, and what we've been hoping from the start, is let's attract some of the key developers and key new projects, and then word will spread within the companies from there, but so the answer, and a lot of these companies make developers love you, and make the infrastructure team at least support you. >> Yeah, and that was a really good point about developers and infrastructure, DevOps people, the line between them sort of thinning, and becoming more of a gray area, I think that's absolutely right, I think the developers want to continue to think about code, but then, in today's environment, outside of Render when we see things like AWS, and things like DigitalOcean, you still see developers struggling. And in some ways, Render is making it easy for smaller companies and developers and startups to use the same best practices that a fully fledged DevOps team would give them, and then for larger companies, again, it makes it much easier for them to focus their efforts on business development and making sure they're building features for their users, and making their apps more secure outside of the infrastructure realm, and not spending as much time just herding servers, and making those servers more secure. To give you an example, Render's machines aren't even accessible from the public internet, where our workloads run, so there's no firewall to configure, really, for your app, there's no DMZ, there's no VPN. And then when you want to make sure that you're just, you want a private network, that's just built into Render along with service discovery. All your services are visible to each other, but not to anyone else. And just setting those things up, on something like AWS, and then managing it on an ongoing basis, is a huge, huge, huge cost in terms of resources, and people. >> All right, so Anurag, you just opened your first region, in Europe, Frankfurt if I remember right. Give us a little bit as to what growth we should expect, what you're seeing, and how you're going to be expanding your services. >> Yeah, so the expansion to Europe was by far our most requested feature, we had a lot of European users using Render, even though our servers were, until now, based in the US. In fact, one of, or perhaps the largest recipe-sharing site in Italy was using Render, even though the servers were in the US, and all their users were in Italy, and when we moved to Europe, that was like, it was Christmas come early for them, and they just started moving over things to our European region. But that's just the start, we have to make sure that we make compute as accessible to everyone, not just in the US or Europe but also in other places, so we're looking forward to expanding in Asia, to expanding in South America, and even Africa. And our goal is to make sure that your applications can run in a way that is completely transparent to where they're running, and you can even say "Look, I just want my application to run "in these four regions across the globe, "you figure out how to do it," and we will. 
And that's really the sort of dream that a lot of platforms as service have been selling, but haven't been able to deliver yet, and I think, again, Render is sort of this, at this point in time, where we can work on those crazy crazy dreams that we've been selling all along, and actually make them happen for companies that have been burned by platforms as a service before. >> Yeah, I guess it brings up a question, you talk about platforms, and one of the original ideas of PaaS and one of the promises of containerization was, I should be able to focus on my code and not think about where it lives, but part of that was, if I need to be able to run it somewhere else, or want to be able to move it somewhere else, that I can. So that whole discussion of portability, in the Kubernetes space, it definitely is something that gets talked quite a bit about. And can I move my code, so where does multicloud fit into your customers' environments, Anurag, and is it once they come onto Render, they're happy and it's easy and they're just doing it, or are there things that they develop on Render and then run somewhere else also, maybe for a region that you don't have, how does multicloud fit into your customers' world? >> That's a great question, and I think that multicloud is a reality that will continue to exist, and just grow over time, because not every cloud provider can give you every possible service you can think of, obviously, and so we have customers who are using, say, Redshift, on AWS, but they still want to run their compute workloads on Render. And as a result, they connect to AWS from their services running on Render. The other thing to point out here, is that Render does not force you into a specific paradigm of programming. So you can take your existing apps that have been containerized, or not, and just run them as-is on Render, and then if you don't like Render for whatever reason, you can take them away without really changing anything in your app, and run them somewhere else. Now obviously, you'll have to build out all the other things that Render gives you out of the box, but we don't lock you in by forcing you to program in a way that, for example, AWS Lambda does. And when it comes to the future, multicloud, I think Render will continue to run in all the major clouds, as well as our own data centers, and make sure that our customers can run the appropriate workloads wherever they are, as well as connect to them from the Render services with ease. >> Excellent. >> And maybe I'll make one more point if I could, Stu, which is one thing I've been excited to watch is the, in any of these platform as a services, you can't do everything yourself, so you want the opensource package vendors and other folks to really buy into this platform too, and one exciting thing we've seen at Render is a lot of the big opensource packages are saying "Boy, it'd be easier for our customers to use our opensource "if it were running on Render." And so this ecosystem and this set of packages that you can use will just be easier and easier over time, and I think that's going to lead to, at the end of the day people would like to be able to move their applications and have it run anywhere, and I think by having those services here, ultimately they're going to deploy to AWS or Google or somewhere else, but it is really the right abstraction layer for letting people build the app they want, that's going to be future-proof. 
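
As a small, hedged sketch of the multicloud point above, where compute runs on one platform while a data service such as Redshift stays on AWS, the snippet below shows one way a service might reach across. The driver choice (psycopg2 works because Redshift speaks the Postgres wire protocol), the environment variable names, and the table are all assumptions for illustration.

```python
# report.py -- sketch of a service hosted on one cloud reading from Amazon
# Redshift running on AWS. Connection details come from environment variables
# whose names are invented for this example.
import os

import psycopg2  # Redshift is reachable with a standard Postgres client

def count_orders() -> int:
    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=int(os.environ.get("REDSHIFT_PORT", "5439")),  # Redshift's default port
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM orders;")  # 'orders' is a made-up table
            return cur.fetchone()[0]
    finally:
        conn.close()

if __name__ == "__main__":
    print(f"orders so far: {count_orders()}")
```
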
>> Excellent, well Steve and Anurag, thank you so much for the update, great to hear about Render, look forward to hearing more updates in the future. >> Thank you, Stu. >> Thanks, Stu, good to talk to you. >> All right, and stay tuned, lots more coverage, if you go to theCUBE.net you can see all of the events that we're doing with remote coverage, as well as the back catalog of what we've done. I'm Stu Miniman, thank you for watching theCUBE. (calm music)
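
Picking up the private networking and service discovery Anurag described a moment ago, here is a hedged sketch of one service calling a sibling that is not exposed to the public internet. The bare service-name hostname, the port, and the endpoint are invented for the example; the real naming scheme is whatever the platform publishes on its private network.

```python
# checkout.py -- sketch of one service calling a sibling over a private
# network with built-in service discovery. The hostname "inventory" and the
# port are assumptions for illustration only.
import os

import requests

INVENTORY_URL = os.environ.get("INVENTORY_URL", "http://inventory:10000")

def reserve(sku: str, quantity: int) -> bool:
    # Only sibling services can reach this endpoint: the instances are not
    # exposed publicly, so there is no firewall, DMZ, or VPN to manage.
    resp = requests.post(f"{INVENTORY_URL}/reserve",
                         json={"sku": sku, "quantity": quantity},
                         timeout=5)
    return resp.ok

if __name__ == "__main__":
    print("reserved" if reserve("sku-123", 2) else "out of stock")
```
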

Published Date : Jun 8 2020



Mark Hinkle & Sebastien Goasguen, TriggerMesh | CUBE Conversation, May 2020


 

>> Announcer: From theCUBE studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman and welcome to a special CUBE conversation. I'm coming from the Boston area studio. We were supposed to have a KubeCon Europe in Amsterdam. First in the spring, they pushed it off to the summer, and, of course, the decision due to the global pandemic is it's making it virtual. But happy to welcome to the program two guests that I was planning to have on in person, but couldn't wait for our virtual coverage of the event, though. Happy to welcome the co-founders of TriggerMesh. Sitting in the middle is Mark Hinkle, who is the CEO of the company, and to the other side is Sebastien Goasguen, who is also the co-founder and the Chief Product Officer. Gentlemen, thanks so much for joining us. >> Thanks for having us, Stu. >> Thanks, Stu. >> All right, so, it's interesting, we've been covering the cloud native space for a number of years, and especially at KubeCon, there's always some of those discussions of does cloud kill on-premises, does this new thing kill that old thing. And in some of the early days of KubeCon, it was like, well, containers are really interesting, and there was all the buzz for years about Docker but, hey, the next thing is going to be serverless. And serverless, we don't need to think about any of that stuff, it's the nirvana of what developers wanted. So therefore, let's not worry about containers, but you sit in that space really helping to connect between some of the various pieces. So, I guess, Sebastien, maybe if I could start with you, 'cause you've built some of these various projects, when you go through and look at your background, you've been involved in the co-business space, uBLAS, and now for TriggerMesh but, you know, give us some of that background as to how, from a technological under pinnings, the community's been thinking about how these worlds fit together. >> Yeah sure, it's very interesting because first, the container rejuvenation started with Docker obviously and then Kubernetes appeared, and the entire community started building this. And this was really an evolution from the virtual machine orchestration, right. People needing a better way to package applications, deploy them, and they said, "You know what, "virtual machines are not that great for this. "Can't we have a better vehicle to do this" and that's where, really, containers took over. And it made total sense and so we saw this switch from, craziness about open stack and even cloud stack that Mark and I worked on, and putting all the focus on containers. And then comes AWS always innovating, always in the lead, and AWS saying, "Hey, you know what? "Actually, we need to go serverless. "We need to forget about the infrastructure. "What people want is really deploy applications "without worrying about the infrastructure. "They want things that are going to auto scale. "They want to pay very little, even pay per function call "and not pay when your VM is up." So AWS really pushed this mindset of serverless, but then what was the meaning in that realm of containers, and that's when I started Kubeless and I said, "You know what, if you would need to build function "as a service, you should build it on Kubernetes, "and use Kubernetes as a platform." And from there we started started seeing this fight, a little bit, between people, saying "Hey, forget containers, go serverless." So in TriggerMesh, we're not really taking that stance. 
We really see on-premises has, it's always going to be here, we have worked clouds on-premises, we have our own data centers but definitely there is more and more cloud usage, and when you start using the cloud you don't want to care about the infrastructure in the cloud, right. So, you want as much serverless as possible in the cloud, but you know you have to deal with your on-premises, data bases and some work loads and so on. So you have to be a pragmatic and you have to pick the best of both worlds and keep moving to modernize your stack and your IT in general. >> Excellent, alright so Mark, at the CNCF I'd seen the Knative project come out and it was talking about how we can connect containers and serverless, and one of the questions I'd been asking is "Well look, there are a lot "of open source projects for serverless." But when I talk to the community, when I talk to users, you say serverless, I think AWS. Sebastien was just talking about, so, I was sitting at the KubeCon shows and talking to the vendors and a lot of really big vendors were working on Knative, Oracle, IBM, RedHat and others and I said if this doesn't connect with AWS first and Azure second, I don't understand what we're doing. Yes, there's probably a place for on-premises but that was when, I think you and I had a conversation, we'd been looking at this space, so how did the ideas that Sebastien talked about turn into an initiative and a company of TriggerMesh. >> Well, early on we latched onto the Knative announcement that Google made. Google had given Sebastien some insight into where they were going with serverless, and the Knative project before it launched. And then they actually quoted him in the release which started interest in our company which was the only company in name at that point. But we really didn't know where Knative and Kubernetes together were going and the serverless movement, but we thought at first that there would need to be management capabilities to do lifecycle management around serverless functions, but what we realized, or Sebastien realized, early on was that it's not so much the management of serverless, because the whole idea of serverless is to abstract away all of the severs and architecture so that all you're really dealing with is the run time. So the problem that we saw early on was not managing but actually integrating applications across serverless framework, so the name TriggerMesh, that came from the idea that you trigger serverless functions and that you would mesh architectures whether they be legacy applications or they be file services or other serverless clouds across the fabric of the internet. So that's Triggermesh and that's really where we're going and we see that there's a couple of proof points in our industry for that already and people having the desire to do that. >> All right excellent, so that integration that you're talking about. Help Sebastien explain, there's some news I believe its the EveryBridge Cloud Native Integration Platform that's just announced. Help us understand what that is and what should we be kind of comparing it to other solutions in the industry today. >> Yeah so, you know we are very happy about the EveryBridge announcement and it's really, we're getting beta, we are doing a beta release of EveryBridge available in our SaaS cloud, the Triggermesh.io and really to first piggy back on what Mark was saying, is that a lot of people still believe serverless is just functions, right. And for us serverless is much more than this. 
Serverless is about building event driven applications. We see it with AWS, with things like they are doing with EventBridge, for example, but we really believe in this mindset. What we are trying to do is to help people build applications, build cloud native applications, that fundamentally are event driven and they are linking cloud services in the public cloud providers and also on-premises work load, right. So EveryBridge allows people to do this, to build those cloud native apps as basic event flows that connect event sources wherever they are, could be events that are on-prem from an eCommerce application, ERP application, could be events that are circulating through a Kafka infrastructure on-prem, and people can connect those event sources with what we call targets. So those targets could be on-premises, they would be OpenShift work loads for example or they could be in the cloud at AWS lambda functions, Google cloud run, or even dedicated SaaS like Twilio, SendGrid, and so on, so that's when we saw really over the last 18 to almost two years now, is that serverless is more of an integration problem, more like traditional IPaaS that we've seen, right. So basically we are building a new IPaaS solution at the frontier of serverless offerings from the public clouds, traditional messaging systems like Kafka, Remittent, Q and so on, plus the, I would say, the old IPaaS solution and we're doing all of this backed by Kubernetes and Knative. >> Excellent, so Mark I heard Sebastien talking about, he mentioned OpenShift, talked about Google, speak a little bit to really the ecosystems, the market places that TriggerMesh fits into. What are the use cases that you are seeing customers using. >> Yeah, I think a couple of the, to the dive into the on-prem triggers we have capabilities to trigger oracle database changes that could actually pick off cloud based ETL transactions. We're seeing that users are going through digital transformation and really to be more specific given the global climate right now, it's remote work, and the idea of lifting and shifting all of your infrastructure into the cloud is pretty daunting and long ask, but if you can front end those systems with new cloud native architecture and you have a way to create those event flows to tie in your existing systems to new portals for your employees to get their work done, automate workflows to provision new systems, like Zoom for example, and other conferencing systems, you can use the serverless front ends and work flows that actually integrate with all of your existing infrastructure and give you a way to extend your life of your applications and modernize them. >> Yeah, the long pole on attending modernization is that application. Sebastien maybe I'd come to you on this is, I think about iPaaS, when you look at that space they talk about all the integration that they need to work on, usually there are certifications involved, you mentioned Oracle databases, these are things that we need to go in there with a engineering effort and make sure that it is tested and certified by the ISV out there. Does containerization, Kubernetes, and serverless, does this change it at all, does this make it easier to move along these environments? I guess the question is for the enterprise, normally this change is rather slow. Mark was just alluding to the fact that we need to do some of these things faster, to try to react from what's happening in the world. >> Yeah, I think that's the entire premise of containers. 
It's speeding up the software life cycle and the speed at which we can deliver new features, for all our applications and so on. So, a big part of the job, when Docker started and then Kubernetes has been, if you adopt that type of infrastructure and that type of artifact, containers, you're going to speed up your software management and software delivery. So now what happens is that you have slow moving pieces, maybe pieces that you've had in your data center for 10, 20 years, for quite a while, and then you have these extremely fast moving environment, which is containerized and running Kubernetes plus the cloud. That's even, we could say even faster moving, and you can, that's definitely the challenge, that's where we see the value and that's where we see the struggle, is that you have all those big companies that have those slow moving pieces Oracle DB, IBM MQ, and so on and they need to make those pieces relevant in a fast moving containerized world and in a cloud native world, right. So how do you bridge that gap? Well that's what we do, we provide bridges. We provide integration bridges with every bridge, there you go. So we connect the event sources from Oracle DB and MQ and we bring that to a more fast moving cloud native environment, whether it's managed Kubernetes on Google GKE or whether its still on-prem in OpenShift. >> Mark, want to get your view point, just being a start up in today's global environment, obviously, you look at the cloud data space, many of the companies are distributed. We're talking to Sebastien from over in Europe, you're down in North Carolina, but give us your view point as a startup. How is the current economic environment impacting you, impacting your partners, impacting your customers. >> So, our partners and customers are probably moving more slowly than we do as a startup because they had physical brick and mortar offices and now they are coming into our world. We're 100% virtual, we're in 3 continents across over 12 time zones. That kind of work versus where they're at, I think everybody is consciously moving ahead, the one thing that I will say is that their interest in being more like the startups that are virtual, don't have brick and mortar, are really good at online collaboration. They look at us for sort of inspiration on how they are going to do business going forward or at least for the foreseeable future. So, overall I think that, not only are we teaching them about cloud native technologies but we're just teaching them about distributed work forces in a quarantined world. >> Absolutely, and I think those are some of the key learnings that you look at that are diversity consistent in the cloud native space. Want to give you both a final word and-- >> And Stu if I just add something. Mark and I have been working from home for quite a while, eight to 10 years, and definitely right now this is not the normal working from home, right, we all have, most of us have kids at home 24/7. The cognitive load in the news is huge, this is not the normal environment. So we are extremely careful, we help each other definitely internally in the team, you know, India, Vietnam, Germany, Spain, U.S. We have to be extremely careful that everybody is not falling down and putting too much on the nerves and their spirits right, so not a normal environment and even though we know how to do it we have to be careful. >> Yeah Sebastien, I'm so glad you brought that up 'cause this is not just a, how do we move to a distributed system. 
There is the rest of the impact on that. All right so lets give you both final words. Hopefully, we absolutely will be gathering together even if we are remote for the KubeCon event for Europe, other event later on this year, but Sebastien let's start with you, final take aways. >> Yeah, so we are very excited to build a startup. It's fast moving, its an exciting industry and really seeing the beta release of EveryBridge for us. We are trying to bring the future of event driven application to everybody, event sources to targets for everyone, not just on AWS and taking all of the strength of Kubernetes with us. It's going to be a familiar system for all Kubernetes lovers. >> Great, and Mark. >> Well as we talked about today, we are very excited about the EveryBridge announcement, and if you are interested in a cloud native, serverless, digital transformation we think we have great tools for you. But on a more personal and global note, I think Sebastien hit something that's really important, it's that even though we are not all together it's really important to check in. Even these virtual sessions have been, it's nice to interact with your colleagues and friends in the industry but be kind to each other and don't just take it for granted. that everything is good at the other end of the wire so reach out to each other and we'll all get through this together. >> Well Mark and Sebastien, thank you so much for joining us. Absolutely the personal pieces as well as TriggerMesh. You're helping to pull some of those technology communities together so congratulations on the progress and definitely look forward to tracking where you go from here. >> Thanks Stu. >> Thanks a lot. >> We appreciate it. >> All right be sure to check out theCUBE.net, we will be covering KubeCon and CloudNativeCon Europe as it goes virtual as well as lots of others in the cloud developer space. I'm Stu Miniman and thank you for watching theCUBE. (upbeat music)
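
To ground the source-to-target flow Sebastien describes, an event source such as an on-prem Kafka topic bridged to a serverless target, here is a rough sketch using the upstream Knative eventing primitives that TriggerMesh says it builds on. The KafkaSource CRD version, the broker address, the topic, and the sink name are assumptions for illustration rather than TriggerMesh's own objects; the client calls are the stock Kubernetes Python CustomObjectsApi.

```python
# bridge.py -- rough sketch: wire a Kafka topic (the event source) to a
# Knative Service (the target) using the community KafkaSource CRD.
# The API version and every name below are illustrative assumptions.
from kubernetes import client, config

kafka_source = {
    "apiVersion": "sources.knative.dev/v1beta1",  # version assumed; check the installed CRD
    "kind": "KafkaSource",
    "metadata": {"name": "orders-bridge", "namespace": "default"},
    "spec": {
        "bootstrapServers": ["kafka.on-prem.example.com:9092"],  # made-up broker
        "topics": ["orders"],                                    # made-up topic
        "sink": {
            "ref": {
                "apiVersion": "serving.knative.dev/v1",
                "kind": "Service",
                "name": "order-processor",                       # made-up target
            }
        },
    },
}

if __name__ == "__main__":
    # Assumes a kubeconfig pointing at a cluster with Knative eventing installed.
    config.load_kube_config()
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="sources.knative.dev",
        version="v1beta1",
        namespace="default",
        plural="kafkasources",
        body=kafka_source,
    )
    print("KafkaSource created; events on 'orders' now flow to the sink service")
```
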

Published Date : May 19 2020



Ashesh Badani, Red Hat | Red Hat Summit 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit, happening digitally, interviewing practitioners, executives, and thought leaders from around the world. Happy to welcome back to our program, one of our CUBE alumni, Ashesh Badani, who's the Senior Vice President of Cloud Platforms with Red Hat. Ashesh, thank you so much for joining us, and great to see you. >> Yeah, likewise, thanks for having me on, Stu. Good to see you again. >> All right, so, Ashesh, since the last time we had you on theCUBE a few things have changed. One of them is that IBM has now finished the acquisition of Red Hat, and I've heard from you from a really long time, you know, OpenShift, it's anywhere and it's everywhere, but with the acquisition of Red Hat, it just means this only runs on IBM mainframes and IBM Cloud, and all things blue, correct? >> Well, that's true for sure, right? So, Stu, you and I have been talking for many, many times. As you know, we've been committed to hybrid multi-cloud from the very get-go, right? So, OpenShift supported to run on bare metal, on virtualization platforms, whether they come from us, or VMware, or Microsoft Hyper-V, on private clouds like OpenStack, as well as AWS, Google Cloud, as well as on Azure. Now, with the completion of the IBM acquisition of Red Hat, we obviously always partnered with IBM before, but given, if you will, a little bit of a closer relationship here, you know, IBM's been very keen to make sure that they promote OpenShift in all their platforms. So as you can probably see, OpenShift on IBM Cloud, as well as OpenShift on Z on mainframe, so regardless of how you like OpenShift, wherever you like OpenShift, you will get it. >> Yeah, so great clarification. It's not only on IBM, but of course, all of the IBM environments are supported, as you said, as well as AWS, Google, Azure, and the like. Yeah, I remember years ago, before IBM created their single, condensed conference of THINK, I attended the conference that would do Z, and Power, and Storage, and people would be like, you know, "What are they doing with that mainframe?" I'm like, "Well, you do know that it can run Linux." "Wait, it can run Linux?" I'm like, "Oh my god, Z's been able to run Linux "for a really long time." So you want your latest Container, Docker, OpenShift stuff on there? Yeah, that can sit on a mainframe. I've talked to some very large, global companies that that is absolutely a part of their overall story. So, OpenShift-- >> Interesting you say that, because we already have customers who've been procuring OpenShift on mainframe, so if you made the invest mainframe, it's running machine learning applications for you, looking to modernize some of the applications and services that run on top in OpenShift on mainframe now is an available option, which customers are already taking advantage of. So exactly right to your point, we're seeing that in the market today. >> Yeah, and Ashesh, maybe it's good to kind of, you know, you've got a great viewpoint as to customers deploying across all sorts of environments, so you mentioned VMware environments, the public cloud environment. It was our premise a few years ago on theCUBE that Kubernetes get staked into all the platforms, and absolutely, it's going to just be a layer underneath. 
I actually think we won't be talking a lot about Kubernetes if you fast-forward a couple of years, just because it's in there. I'm using it in all of my environments. So what are you seeing from your customers? Where are we in that general adoption, and any specifics you can give us about, you know, kind of the breadth and the depth of what you're seeing from your customer base? >> Yeah, so, you're exactly right. We're seeing that adoption continue on the path it's been on. So we've got now, over 1700 customers for OpenShift, running in all of these environments that you mentioned, so public, private, a combination of the two, running on traditional virtualization environments, as well as ensuring that they run in public cloud at scale. In some cases managed by customers, in other cases managed by us on their behalf in a public cloud. So, we're seeing all permutation, if you will, of that in play today. We're also seeing a huge variety of workloads, and to me, that's actually really interesting and fascinating. So, earliest days, as you'd expect, people trying to play with micro-services, so trying to build new market services and run it, so cloud native, what have you. Then as we're ensuring that we're supporting stateful application, right. Now you're starting to see if your legacy applications move on, ensuring that we can run them, support them at scale, within the platform 'cause we're looking to modernize applications. We'll talk maybe in a few minutes also about lift-and-shift that we got to play as well. But now also we're starting to see new workloads come on. So just most recently we announced some of the work that we're doing with a series of partners, from NVIDIA to emerging AI ML, AI, artificial intelligence machine learning, frameworks or ISVs, looking to bring those to market. Been ensuring that those are supported and can run with OpenShift. Right, our partnership with NVIDIA, ensuring OpenShift be supported on GPU based environment for specific workloads, whether it be performance sensitive or specific workloads that take advantage of underlying hardware. So starting now to see a wide variety if you will, of application types is also something that we're starting, right, so numbers of customers increasing, types of workloads, you know, coming on increasing, and then the diversity of underlying deployment environments. Where they're running all services. >> Ashesh, such an important piece and I'm so glad you talked about it there. 'Cause you know my background's infrastructure and we tend to look at things as to "Oh well, I moved from VM to a container, "to cloud or all these other things," but the only reason infrastructure exists is to run my application, is my data and my application that are the most important things out there. So Ashesh, let me get in some of the news that you got here, your team work on a lot of things, I believe one of them talks about some of those, those new ways that customers are building applications and how OpenShift fits into those environments. >> Yeah, absolutely. So look, we've been on this journey as you know for several years now. You know recently we announced the GA of OpenShift Service Mesh in support of Istio, increasing an interest as for turning microservices will take advantage of close capabilities that are coming in. At this event we're now also announcing the GA of OpenShift Serverless. 
We're starting to see obviously a lot of interest, right, we've seen the likes of AWS spawn that in the first instance, but more and more customers are interested in making sure that they can get a portable way to run serverless in any Kubernetes environment, to take advantage of open source projects as building blocks, if you will, so primitives in, within Kubernetes to allow for serverless capabilities, allow for scale down to zero, supporting serving and eventing by having portable functions run across those environments. So that's something that is important to us and we're starting to see support of in the marketplace. >> Yeah, so I'd love just, obviously I'm sure you've got lots of break outs in the OpenShift Serverless, but I've been talking to your team for a number of years, and people, it's like "Oh, well, just as cloud killed everything before it, "serverless obviates the need for everything else "that we were going to use before." Underlying OpenShift Serverless, my understanding, Knative either is the solution, or a piece of the solution. Help us understand what serverless environment this ties into, what this means for both your infrastructure team as well as your app dev team. >> Yeah, great, great question, so Knative is the basis of our serverless solution that we're introducing on OpenShift to the marketplace. The best way for me to talk about this is there's no one size fits all, so you're going to have specific applications or service that will take advantage of serverless capabilities, there will be some others that will take advantage of running within OpenShift, there'll be yet others, we talked about the AI ML frameworks, that will run with different characteristics, also within the platform. So now the platform is being built to help support a diversity, a multitude of different ways of interacting with it, so I think maybe Stu, you're starting to allude to this a little bit, right, so now we're starting to focus on, we've got a great set of building blocks, on the right compute network storage, a set of primitives that Kubernetes laid out, thinking of the notions of clustering and being able to scale, and we'll talk a little bit about management as well of those clusters. And then it changes to a, "What are the capabilities now, "that I need to build to make sure "that I'm most effective, most efficient, "regard to these workloads that I bring on?" You're probably hearing me say workloads now, several times, because we're increasingly focused on adoption, adoption, adoption, how can we ensure that when these 1700 plus, hopefully, hundreds if not thousands more customers come on, how they can get the most variety of applications onto this platform, so it can be a true abstraction over all the underlying physical resources that they have, across every deployment that they put out. >> All right, well Ashesh, I wish we could spend another hour talking about the serverless piece, I definitely am going to make sure I check out some of the breakouts that cover the piece that we talked to you, but, I know there's a lot more that the OpenShift update adds, so what other announcements, news, do you have to cover for us? >> Yeah, so a couple other things I want to make sure I highlight here, one is a capability called ACM, advanced cluster management, that we're introducing. 
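
As a rough sketch of what sits underneath the OpenShift Serverless discussion above: the unit a developer deploys is a Knative Service, which scales down to zero when idle and back up on traffic. The manifest below uses the upstream Knative serving schema with a well-known Knative sample image; the names, and the assumption that your kubeconfig points at a cluster with the serverless components installed, are illustrative rather than taken from the interview.

```python
# knative_service.py -- rough sketch: create a Knative Service (the object a
# serverless platform built on Knative manages) with the stock Kubernetes
# Python client. All names are illustrative.
from kubernetes import client, config

hello_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "default"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        # Well-known Knative sample image; swap in your own build.
                        "image": "gcr.io/knative-samples/helloworld-go",
                        "env": [{"name": "TARGET", "value": "OpenShift Serverless"}],
                    }
                ]
            }
        }
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=hello_service,
    )
    # With no traffic the revision scales to zero; the next request scales it back up.
    print("Knative Service 'hello' created")
```
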
So it was an experimental work that was happening with the IBM team, working on cluster management capabilities, we'd been doing some of that work ourselves, within Red Hat, as part of IBM and Red Hat coming together. We've had several folks from IBM actually join Red Hat, and so we're now open sourcing and providing this cluster management capability, so this is the notion of being able to run and manage these different clusters from OpenShift, at scale, across multiple environments, be able to check on cluster health, be able to apply policy consistently, provide governance, ensure that appropriate applications are running in appropriate clusters, and so on, a series of capabilities, to really allow for multiple clusters to be run at scale and managed effectively, so that's one set of, go ahead, Stu. >> Yeah, if I could, when I hear about multicluster management, I think of some of the solutions that I've heard talked about in the industry, so Azure Arc from Microsoft, Tanzu from VMware, when they talk about multicluster management, it is not only the Kubernetes solutions that they're offering, but also, how do I at least monitor, if not even allow a little bit of control across these environments? So when you talk about cluster management, is that all the OpenShift pieces, or things like AKS, EKS, other options out there, how do those fit into the overall management story? >> Yeah, that's absolutely our goal, right, so we've got to get started somewhere, right? So we obviously want to make sure that we bring into effect the solution to manage OpenShift clusters at scale, and then of course as we would expect, multiple other clusters exist, from Kubernetes, like the ones you mentioned, from the cloud providers as well as others from third parties and we want the solution to manage that as well. But obviously we're going to sort of take steps to get to the endpoint of this journey, so yes, we will get there, we've got to get started somewhere. >> Yeah, and Ashesh, any guides, when you look at people, some of the solutions I mentioned out there, when they start out it's "Here's the vision." So what guidance would you give to customers about where we are, how fast they can expect these things to mature, and I know anything that Red Hat does is going to be fully open source and everything, what's your guidance out there as to what customers should be looking for? >> Yeah, so we're at an interesting point, I think, in this Kubernetes journey right now, and so when we, if you will, started off, and Stu you and I have been talking about this for at least five years if not longer, was this notion that we want to provide a platform that can be portable and successfully run in multiple deployment environments. And we've done that over these years. But all the while when we were doing that, we're always thinking about, what are the capabilities that are needed that are perhaps not developed upstream, but will be over time, but we can ensure that we can look ahead and bring that into the platform. And for a really long time, and I think we still do, right, we at Red Hat take a lot of stick for saying "Hey look, you form the platform." Our outcome back to that has always been, "Look, we're trying to help solve problems "that we believe enterprise customers have, "we want to ensure that they're available open source, "and we want to upstream those capabilities always, "back into the community." 
But, let's say making available a platform without RBAC, role-based access control, well it's going to be hard then for enterprises to adopt that, we've got to make sure we introduce that capability, and then make sure that it's supported upstream as well. And there's a series of capabilities and features like that that we work through. We've always provided an abstraction within OpenShift to make it more productive for developers and administrators to use it. And we always also support working with kubectl or the command line interface from kube as well. And then we always hear back from folks saying "Well, you've got your own abstraction, "that might make that seem impossible." Nope, you can use both kubectl or OC commands, whichever one is better for you, have at it, we're just trying to be more productive. And now increasingly what we're seeing in the marketplace is this notion that we've got to make sure we work our way up from not just laying out a Kubernetes distribution, but thinking about the additional capability, additional services that you can provide, that would be more valuable to customers, and I think Stu, you were making the point earlier, increasingly, the more popular and the more successful Kubernetes becomes, the less you will see and hear of it, which by the way is exactly the way it should be, because that becomes then the basis of your underlying infrastructure, you are confident that you've got a rock solid bottom, and now you as a customer, you as a user, are focusing all of your energy and time on building the productive application and services on top. >> Yeah, great great points there Ashesh, the vision people always talked about is "If I'm leveraging cloud services, "I shouldn't have to worry "about what version they're running." Well, when it comes to Kubernetes, ultimately we should be able to get there, but I know there's always a little bit of a delta between the latest and newest version of Kubernetes that comes out, and what the managed services, and not only managed services, what customers are doing in their own environment. Even my understanding, even Google, which is where Kubernetes came out of, if you're looking at GKE, GKE is not on the latest, what are we on, 1.19, from the community, Ashesh, so what's Red Hat's position on this, what version are you up to, how do you think customers should think about managing across those environments, because boy, I've got too many scars from interoperability history, go back 10 or 15 years and everything, "Oh, my server BIOS doesn't work on that latest "kernel.org version of what we're doing for Linux." Red Hat is probably better prepared than any company in the industry, to deal with that massive change happening from a code-base standpoint, I've heard you give presentations on the history of Linux and Kubernetes, and what's going forward, so when it comes to the release of Kubernetes, where are you with OpenShift, and how should people be thinking about upgrading from versions? >> Yeah, another excellent point, Stu, you've clearly been following this pretty closely over the years, so where we came at this, was we actually learned quite a bit from our experience in the company with OpenStack. And so what would happen with OpenStack is, you would have customers that are on a certain version of OpenStack, and then they kept saying "Hey look, we want to consume close to trunk, "we want new features, we want to go faster."
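To ground the RBAC point above, this is what plain upstream Kubernetes role-based access control looks like. The namespace, role, and group names are illustrative only, and the same manifest can be applied with either kubectl apply -f or oc apply -f, which is the kubectl/OC interchangeability being described.

```yaml
# Plain upstream Kubernetes RBAC: a namespaced Role granting read-only access
# to pods, bound to a group of developers. All names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers           # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```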
And we'd obviously spent some time, from the release in community to actually shipping our distribution into customers' hands, there's going to be some amount of time for testing and QE to happen, and some integration points that need to be certified, before we make it available. We often found that customers lagged, so there'd be let's say a small subset if you will within every customer or several customers who want to be consuming close to trunk, a majority actually want stability. Especially as time wore on, they were more interested in stability. And you can understand that, because now if you've got mission critical applications running on it you don't necessarily want to go and put that at risk. So the challenge that we addressed when we actually started shipping OpenShift 4 last summer, so about a year ago, was to say, "How can we provide you basically a way "to help upgrade your clusters, "essentially remotely, so you can upgrade, "if you will, your clusters, or at least "be able to consume them at different speeds." So what we introduced with OpenShift 4 was this ability to give you over the air updates, so the best way to think about it is with regard to a phone. So you have your phone, your new OS upgrades show up, you get a notification, you turn it on, and you say "Hey, pull it down," or you say at a certain point of time, or you can go off and delay it, do it at a different point in time. That same notion now exists within OpenShift. Which is to say, we provide you three channels, so there's a stable channel where you say "Hey look, maybe this cluster is in production, "no rush here, I'll stay at or even a little behind," there's a fast channel for "Hey, I want to be up latest and greatest," or there's a third channel which allows for essentially features that are being developed, or are still in an early stage of development, to be pushed out to you. So now you can start consuming these upgrades based on "Hey, I've got a dev team, "on day one I get these quicker," "I've got these applications that are stable in production, "no rush here." And then you can start managing that better yourself. So now if you will, those are capabilities that we're introducing into a Kubernetes platform, a standard Kubernetes platform, but adding additional value, to be able to have that be managed in a much, much better fashion that serves the different needs of different parts of an organization, allows for them to move at different speeds, but at the same time, gives you that same consistent platform regardless of where you are. >> All right, so Ashesh, we started out the conversation talking about OpenShift anywhere and everywhere, so in the cloud, you talked about sitting on top of VMware, VM farms are very prevalent in the data centers, or bare metal. I believe, if I saw right, one of the updates for OpenShift is how Red Hat Virtualization is working with OpenShift there, and a lot of people out there are kind of staring at what VMware did with vSphere 7, so maybe you can set it up with a little bit of a compare and contrast as to how Red Hat's doing this rollout, versus what you're seeing your partner VMware doing, or how Kubernetes fits into the virtualization environment.
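An aside on the over-the-air update channels described above: on an OpenShift 4 cluster the chosen channel is recorded on the cluster-scoped ClusterVersion object, roughly as sketched below. This is a partial excerpt, other required fields are omitted, and the exact channel names are tied to the minor release, so treat them as assumptions.

```yaml
# Hedged excerpt of the ClusterVersion object that the over-the-air update
# machinery reads; switching "channel" moves the cluster between the
# candidate, fast, and stable streams described above.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.4                 # e.g. fast-4.4 or candidate-4.4 for the other streams
```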
>> Yeah, I feel like we're both approaching it from a different perspective and lens that we come at it with, so if I can, the VMware perspective is likely "Hey look, there's all these installations of vSphere "in the marketplace, how can we make sure "that we help bring containers there," and they've come up with a solution that you can argue is quite complicated in the way they're achieving it. Our approach is a different one, right, so we always looked at this problem from the get-go with regard to containers as a new paradigm shift, it's not necessarily a revolution, because most companies that we're looking at are working with existing application services, but it's an evolution in the way you're thinking about the world, but this is definitely the long term future. And so how can we then think about introducing this environment, this application platform into the environment, and then be able to build a new application in it, but also bring in existing applications to the fore? And so with this release of OpenShift, what we're introducing is something that we're calling OpenShift Virtualization, which is: if you've got existing applications that sit in VMs, how can we ensure that we bring those VMs into the platform? They've been certified, there are security boundaries around them, or certain constraints or requirements have been put by your internal organization around them, and we can keep all of those, but then still encapsulate that VM as a container, have that be run natively within an environment orchestrated by OpenShift, Kubernetes as the primary orchestrator of those VMs, just like it does with everything else that's cloud-native, or is running directly as containers as well. We think that's extremely powerful, for us to really bring now the promise of Kubernetes into a much wider market, so I talked about 1700 customers, you can argue that that 1700 is the early majority, or if you will, almost just scratching the surface of the numbers that we believe will adopt this platform. To get, if you will, to the next set of, whatever, five, 10, 20,000 customers, we'll have to make sure we meet them where they are. And so introducing this notion of saying "We can help migrate," with a series of tools that we're also providing, these VM-based applications, and then have them run within Kubernetes in a consistent fashion, is going to be extremely powerful, and we're really excited about those capabilities, and about bringing them to our customers. >> Well Ashesh, I think that puts a great exclamation point as to how we go from these early days to the vast majority of environments, Ashesh, one thing, congratulations to you and the team on the growth, the momentum, all the customer stories, I'd love the opportunity to talk to many of the Red Hat customers about their digital transformation and how your cloud platforms have been a piece of it, so once again, always a pleasure to catch up with you. >> Likewise, thanks a lot, Stuart, good chatting with you, and hope to see you in person soon sometime. >> Absolutely, we at theCUBE of course hope to see you at events later in 2020, for the time being, we are of course fully digital, always online, check out theCUBE.net for all of the archives as well as the events including all the digital ones that we are doing, I'm Stu Miniman, and as always, thanks for watching theCUBE. (calm music)
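One last aside on the OpenShift Virtualization capability discussed in that closing exchange: it is built on the upstream KubeVirt project, where a VM is declared as a Kubernetes object along the lines of the hedged sketch below. The API version, disk layout, and image reference are illustrative assumptions and vary by release.

```yaml
# Hedged sketch of a KubeVirt-style VirtualMachine, the upstream project behind
# OpenShift Virtualization: the VM definition lives in the cluster and is
# scheduled and managed by Kubernetes alongside container workloads.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: legacy-app-vm                 # illustrative name for a migrated VM
spec:
  running: true                       # start the VM when the object is created
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/legacy-app-disk:latest   # placeholder disk image
```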

Published Date : Apr 1 2020

SUMMARY :

brought to you by Red Hat. and great to see you. Good to see you again. we had you on theCUBE a few things have changed. So as you can probably see, OpenShift on IBM Cloud, and Power, and Storage, and people would be like, you know, so if you made the invest mainframe, and any specifics you can give us about, you know, So, we're seeing all permutation, if you will, So Ashesh, let me get in some of the news that you got here, spawn that in the first instance, but I've been talking to your team Yeah, great, great question, so Knative is the basis so this is the notion of being able to run from Kubernetes, like the ones you mentioned, So what guidance would you give to customers and so when we, if you will, started off, GKE is not on the latest, what are we on, 1.19, Which is to say, we provide you three channels, so in the cloud, you talked about sitting on top of VMware, is the early majority, or if you will, to you and the team on the growth, the momentum, and hope to see you in person soon sometime. Absolutely, we at theCUBE of course hope to see you

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Ashesh Badani | PERSON | 0.99+
five | QUANTITY | 0.99+
Ashesh | PERSON | 0.99+
Stuart | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
hundreds | QUANTITY | 0.99+
two | QUANTITY | 0.99+
first instance | QUANTITY | 0.99+
Stu | PERSON | 0.99+
VMware | ORGANIZATION | 0.99+
Linux | TITLE | 0.99+
Google | ORGANIZATION | 0.99+
Kubernetes | TITLE | 0.99+
OpenShift | TITLE | 0.99+
CUBE | ORGANIZATION | 0.99+
over 1700 customers | QUANTITY | 0.99+
One | QUANTITY | 0.98+
10 | QUANTITY | 0.98+
Red Hat Summit | EVENT | 0.98+
Red Hat Summit 2020 | EVENT | 0.98+
three channels | QUANTITY | 0.98+
15 years | QUANTITY | 0.98+
OpenShift Serverless | TITLE | 0.98+
both | QUANTITY | 0.97+
Knative | ORGANIZATION | 0.96+
today | DATE | 0.96+
GKE | ORGANIZATION | 0.96+
Azure Arc | TITLE | 0.96+
thousands more customers | QUANTITY | 0.96+
Red Hat | TITLE | 0.96+
third channel | QUANTITY | 0.96+
last summer | DATE | 0.96+
RBAC | TITLE | 0.95+
zero | QUANTITY | 0.93+


Simon Taylor, HYCU | CUBE Conversation, March 2020


 

>> From the SiliconANGLE Media office in Boston massachusetts, it's theCUBE. (techno music) Now, here's your host Stu Miniman. >> Hi, and welcome to a special CUBE conversation here in our Boston area studio. One of the biggest topics we've been digging into as we head through 2020, has really been multi-cloud and as the customers as they're really going through their own transformations understanding what they're doing in their data center to modernize what's happening between all of the public clouds they use, and all the services that fit amongst them. Happy to bring back one of our CUBE alumni to dig into a specific topic. Simon Taylor, who's the CEO of HYCU. Of course data protection, a big piece. A big buzz in the industry for a number of years, in one of those areas, in multi-cloud, that's definitely of big importance. Simon, great to see you, thanks so much for joining us. >> Thank you so much for having me back on, it's exciting to be here. >> All right, so, Simon, first, give us the update. >> Sure. >> It's 2020. We've seen you at many of the conferences we go to. You're based in Boston, so not to far for you to come out to our Boston area studio here. You know a 40 minute drive without traffic so, >> Not bad at all. >> give us the latest on HYCU. >> Certainly well and Stu, thanks again for having me into your studio, it's gorgeous, everything looks great. It's a lot easier than traveling over to Europe to see you. So this is very very convenient actually. But since we last spoke, which I think was about six months ago now, HYCU has been growing fast and furiously, you know we started out with the world's first purpose built backup and recovery product for Nutanix Of course, we added VMware we added Google Cloud, we wrapped all the data together into multi-cloud data protection as a service, and we called that HYCU Protege. Well I am so thrilled to announce that in just the three months since we've launched Protege, we have seen hundreds of customers flocking to it. And what we're finding is that customers are calling us and they're saying things like, "let me get this straight, "I'm already backing up my data on-prem with you, "I can now migrate to the cloud, "bring it back again for disaster recovery as a service, "and it's all part of HYCU?" and we say yes, you know, and they say, "and this is all offered as a service?" Yes, "and it's natively integrated "into all the platforms that I'm using?" Yes. And I think so customers today, are more and more in need of the kind of expertise that HYCUs providing because they're looking now much more strategically than ever before, at what workloads to leave on-prem and which workloads to migrate to the cloud, and they want to make sure that, that entire data pathway is protected from beginning to end. >> Yeah, it's really interesting stuff, I think back to early in my career that you know that data protection layer was like, "well, this is what I'm running "and don't change it." Think about like when you've rolled out like virtual tape as a technology it was, you know, "I don't want to have to change my backup "because that is just something that runs "and I don't do it." For last five years or so it feels like customers. There's so much change in their environment that they are looking for things that are more flexible, you talked about some of the flexible adoption models for payment and the like that they're looking for. 
So, you know, what do you think customers are just more embracing of that change, is it just that changes their daily business and therefore data protection needs to come along with that. Well it's funny you asked because just a few years ago I was on theCUBE with you and you said to me, "you guys have a perpetual license model, "what are you doing about that?" and I said, "don't worry, it is shifting to as a service it's going subscription," which was super important for the market is, I've had conversations with folks who are selling cooking gear and they're trying to sell that as a service, I saw yesterday, somebody, I think Panera Bread, is offering a coffee as a service. You know, I think what we've started to realize is that the convenience of the as a service model, the flexibility, which I would argue was probably driven by cloud technology and cloud technology adoption, is something the market has truly embraced and I think anybody who's not moved in that direction at this point is probably very much being left behind. >> Okay, another technology that often goes hand in hand in discussion with data protection is security. Of course ransomware is a hot topic conversation the last few years, how does that fit into your conversations with customers, what are you saying? >> That's a great question. So you know one of our advisory board members, his name is Kevin Powers, and he runs the Boston College cyber security program. I had the privilege and the honor of attending the FBI Boston College cyber program recently at a large scale event at Boston College, and FBI Director Ray was actually on hand to talk about this problem, and it was incredible you know he said, "cyber crime as a service "is becoming a major issue," you're talking about the commoditization of hard to build malware, that's now just skyrocketing off the charts, the amount of cyber exploitation that's going on across the world. This is creating massive massive issues for the FBI because they've got so many thousands of cases, they've got to deal with. And while they're doing a fantastic job. We believe prevention is certainly the key. So one of the things that has been really really wonderful as a CEO to watch has been the way that some of our customers have actually been able to crack the code in terms of not having to give in to these bad actors. We've had actual customers who have had ransomware attacks had millions of dollars in data, literally stolen from them, and they've been told, "you've got to deposit, "$5 million on this Bitcoin account by midnight, "or we're deleting the data." Right? Because HYCU is Linux based because HYCU is not Windows Server based because HYCU is natively integrated into all the platforms that we support. We were able to help those customers get their data back without paying a penny. So I think that that's one of those moments where you really sort of say to yourself, "God I'm glad I'm in this business here," we've built a product that doesn't just do what we say it's going to do, it does a heck of a lot more. 
And I think it's it's absolutely a massive problem and data protection is really a key part of the answer, >> You know it's great to hear their success stories there, you know I think back to earlier days where it'd be like well you know what if I set up for disasters and data protection and things like that, well maybe I haven't thought about it or maybe I kind of implemented it but I've never really tested it, but there's more and more reasons why I might actually need to leverage these technologies that I've deployed, and it's nice to know that they're there. You know it's not just an insurance thing that I've never used. >> Oh absolutely. Yeah, absolutely. >> All right. So I started off our discussion time in talking about multi-cloud So you talked about earlier we first first met it was at the Nutanix shows in their environments, and some of that you've gone along with Nutanix as they've gone through hybrid and multi-cloud what they call enterprise Cloud Messaging. >> Sure. >> And play with those environments so bring us up to speed. What have your big customers doing with cloud where does HYCU fit in and what are the updates on your product. >> Yeah, sure. And I'll start off by saying that at this point about a third of all AHV customers are using a HYCU for backup AND recovery. >> And just for our audience that doesn't know, AHV of course is Nutanix's >> Yes. >> Acropolis Hypervisor >> Absolutely. >> That comes baked into their solution as an alternative to people like VMware. >> Perfectly said as always sir, yes very much, and you know we've been thrilled as the rise of AHV and Nutanix has sort of taken the market by storm. And when we started out, you know we use to came on the show with zero customers and a new product and said, "we believe in AHV and we think it's going to be great "and we're going to back it up." And that's really paid off in spades for us, which was wonderful, but we also recognize that customers needed that VMware backups. We built a VADP integration and then we started going after the public cloud. So we started with Google Cloud, and we said we're going to build the world's first purpose built backup and recovery as a service for GCP. We launched that last year and it was tremendous you know some of the world's largest companies and organizations and governments are actually now running HYCU specifically for Google Cloud. So we've been thrilled about that. I think the management team at GCP has done a terrific job of making sure that Google can be really competitive in the cloud wars, and we're thrilled to support them. >> Yeah, and I'm glad you've got some customer stories on Google because you know the industry watchers out there it's like, "well you know Google they're number three," and you know we know that Google has some really strong data products Where they're very well known but I'm curious when you're talking to your customers. Is there anything that's kind of commonalities to why customers are using Google and you know what feedback you're hearing from your customers out there. >> Sure I mean I'll start off by saying this, we've polled our customers and we've now got over 1,300 customers in 56 countries. So we polled all of them and we just said, "how many data silos do you have, "how many platforms, how many clouds?" The average was five. 
Right, so the first thing to say is that I think almost all of these large enterprise customers in public sector and private sector are really using all of them, the extent to which they may be using AWS versus Azure versus GCP, versus Nutanix versus VMware on-prem. we can argue and debate but I think all customers at this point of any size and scale are trying them all out. I think what Google's done really well is they've started to build a really strong partner program. I think where they were a little bit sort of late to the party in terms of AWS and Azure being there sort of first. But I think what Thomas Kurian did when he came in is he sort of tripled down on sort of building out that ecosystem and saying, "what's really important "to make cloud customers comfortable "that their data is going to be as safe on Google Cloud, "as it was on-prem," and I'm thrilled that they've elected to make data protection sort of one of the key pillars of that strategy, not just because we're a data protection company, but because I do think that that was one of the encumbrances in terms of that evolution to cloud. >> Yeah, absolutely, seen a huge growth in the ecosystem around Google. The other big cloud provider that has a very strong partner ecosystem is the one when I went to the show last year, their CEO Satya Nadella talked about trust, so of course talking about Microsoft and Azure, very large ecosystem there, trying to emphasize, maybe against others and by the way you saw this as much of a shot against Google >> Sure. >> you know, how do I trust Google with my data and information from the consumer side as AWS is I might be concerned that they might be competing against them. So, how about the Microsoft relationship? >> It's a great question. So again, so when we started on-prem, with our initial purpose built backup recovery products. We added Google Cloud. You know I'm now thrilled to announce that we're also going to be launching Azure backup and recovery. It's also native, it is purpose built into the Azure Marketplace. All the things you've come to expect from HYCU backup. The simplicity, the fact that it's SLO based. The fact that you can actually go in and decide how many times a day you want a different recovery point et cetera. All of those levels of configuration are now baked in to HYCUs own purpose built backup and recovery as a service for Azure. But I think the important thing to remember about this wonderful wonderful new addition to our portfolio. Is that, it is a critical component of HYCU Protege. So getting back to your question from before about multi-cloud data protection and what we're seeing, we call this the year of migration, because for all of these cloud platforms, what are they really trying to do they need to move massive amounts of data in a safe and resilient manner, to the cloud. So remember after we built out these purpose built backup recovery services, Azure is now one of those. We then pulled all that data together under a single pane of glass we called it HYCU Protege. We then said to customers, we're going to enable you to automatically migrate with the touch of a button an entire workload to the cloud, and then bring it back again for disaster recovery, and we will protect the data on-prem in the cloud and back again. 
>> Yeah, it's interesting 'cause when we kind of look at what's happening in the marketplace, for many years it was a discussion of what's moving from the data center to the public cloud, some things are moving back from the environment edge, of course, pulls things even further. Often it's, I say it's not even migration anymore it's just mobility, because we are going to be moving things and spinning things up and building things in many more places, and it's going to change. As we started out that conversation, there's so much change going on that so you're giving customers some optionality there, so that this isn't just a one way, you know, let's stick it on a truck put it on this thing and get it to that environment but I need to be able to enable some of that optionality and know what I'm doing today but also knowing that you know six months a year from now, we know things are going to be different >> Yes, yes! >> And in each of these some of those environments. >> Absolutely. We call it the three Ds data assurance, data mobility, and disaster recovery. So I think the ability to not only protect your data, whether it's on-prem as it journeys to the cloud or whether it's in the cloud, the ability to actually assist the customer in the migration. And what I hear time and time again is, "oh but Azure has a tool," or "Google has a tool for migration." Of course they have tools for migration, but I think the challenge for customers is, how do I affect that data resiliency, how do I ensure that I can move the data as a complete workload. Moving an entire SAP HANA instance, for example, to the cloud. And it protected the entire time as it journeys up there, and then bring it back for the disaster recovery without professional services. Because again, you know HYCU it's about simplicity, we want to make sure that these customers can get the same level of readiness, the same ease of deployment that they get from their cloud vendor, when they're thinking about the data protection and the migration. >> All right, I want to click down one layer >> Please. >> in here. We're talking about multi-cloud, you talk about simplicity. >> Sure. >> Well, Kubernetes might not be the simplest thing out there but it absolutely is a fundamental piece of the infrastructure in a multi-cloud environment so you know your partners, Google with GKE, Azure with AKS and >> And Carbon. >> Carbon with a K from Nutanix everyone now, I say it's not about distributions it's really every platform that you're going to use is going to have Kubernetes built into it so what does that mean from a data protection standpoint? Do you just plug into all of these environments you've tested it got customers using it? >> It's a great question it comes up, as you can imagine, all the time. I think it's something that is becoming more and more ready for prime time. A lot of the major vendors are moving to it, making heavy investments in Kubernetes, we ourselves have over 100 customers that are actively using Kubernetes in one form or another and backing the data up using HYCU so there's no question in my mind that HYCU is Kubernetes ready. I think what's really exciting for us is some of the native integrations we're working on with Google and with Nutanix so whether it's Carbon whether it's GKE, we want to make sure that when we work with these platforms that we mimic, how the platform is supporting Kubernetes, so that our customers can get the same experience from HYCU that they're getting from the platform provider itself. 
>> All right, Simon want to give you the final word. Bring us inside your customers what they're doing with multi-cloud and where HYCU fits there, here in 2020. Sure, we talked about prime time. Cloud for many years has been something that I think large enterprises have talked a big game about, but have been really dipping their toe in the water with. What we've seen the last two years, is a massive massive at scale migration to the largest three public clouds, whether that's GCP, whether that's Azure or the other one. (laughing) We're thrilled to support GCP and Azure because GCP and Azure, we believe do provide the most value to our customers. But I think the name of the game here is not just supporting a customer in the cloud, it's understanding that every customer today is to is on a journey, whether they're on-prem, whether their journeying to cloud or they're in cloud those three Ds, data assurance, which is our backup, data mobility, which is the automated migration, or disaster recovery readiness. That's the name of the game and that's how HYCU wants to help. >> All right, Simon Taylor. Always a pleasure to catch up with you thank you so much for the HYCU updates, >> Stu thanks so much for having us on. >> All right, be sure to check out www.thecube.net for all of our inventory of the shows that we've been at the videos we've done, you can even search on keywords in companies, I'm Stu Miniman and thank you for watching theCUBE. (Techno Music)

Published Date : Mar 5 2020

SUMMARY :

From the SiliconANGLE Media office in Boston, Stu Miniman catches up with Simon Taylor of HYCU on multi-cloud data protection. Taylor frames HYCU's approach around the three Ds: data assurance (backup), data mobility (automated migration of complete workloads, such as an SAP HANA instance, without professional services), and disaster recovery readiness. He also discusses Kubernetes support across GKE, AKS, and Nutanix Karbon, with over 100 customers already backing up Kubernetes workloads with HYCU, and the company's focus on GCP and Azure as large enterprises migrate to the public cloud at scale.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Google | ORGANIZATION | 0.99+
FBI | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
Simon Taylor | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Kevin Powers | PERSON | 0.99+
Simon | PERSON | 0.99+
Europe | LOCATION | 0.99+
five | QUANTITY | 0.99+
Satya Nadella | PERSON | 0.99+
$5 million | QUANTITY | 0.99+
Nutanix | ORGANIZATION | 0.99+
HYCU | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Thomas Kurian | PERSON | 0.99+
2020 | DATE | 0.99+
March 2020 | DATE | 0.99+
GCP | ORGANIZATION | 0.99+
last year | DATE | 0.99+
40 minute | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
Boston College | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
Ray | PERSON | 0.99+
HYCUs | ORGANIZATION | 0.99+
Panera Bread | ORGANIZATION | 0.99+
56 countries | QUANTITY | 0.99+
millions of dollars | QUANTITY | 0.98+
over 100 customers | QUANTITY | 0.98+
today | DATE | 0.98+
www.thecube.net | OTHER | 0.98+
Kubernetes | TITLE | 0.98+
first | QUANTITY | 0.97+
each | QUANTITY | 0.97+
CUBE | ORGANIZATION | 0.97+
AKS | ORGANIZATION | 0.97+
SAP HANA | TITLE | 0.97+
one | QUANTITY | 0.97+
over 1,300 customers | QUANTITY | 0.97+
zero customers | QUANTITY | 0.96+
AHV | ORGANIZATION | 0.96+
GKE | ORGANIZATION | 0.96+
one layer | QUANTITY | 0.96+
one form | QUANTITY | 0.96+
hundreds of customers | QUANTITY | 0.95+

Nigel Poulton, MSB.com | KubeCon + CloudNativeCon NA 2019


 

>> Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back. We're at the end of three days of wall-to-wall coverage here at KubeCon CloudNativeCon 2019 in San Diego. I am Stu Miniman and my co-host for this week has been John Troyer, and we figured no better way to cap our coverage than bring on a CUBE alum who has likely educated more people about containers and Kubernetes, you know, maybe second only to the CNCF. So, Nigel Poulton, now the head of content at msb.com. Nigel, pleasure to see you and thanks for coming back on the program. >> Honestly gents, the pleasure is all mine, as always. >> All right, so Nigel, first of all I'd love to get just your gestalt of the week. You know, the takeaway, what's the energy, you know, how was this community doing? >> Yeah, so it's the end of the week and my brain is a mixture of fried and about to explode, okay. Which I think is a good thing. That's what you want at the end of a conference, right. But I think if we can dial it back to the first day at that opening keynote, something that really grabbed me at the time and has been sort of a theme for me throughout the conference, is when they asked, can you raise your hand if this is your first KubeCon, and it's a room of 8,000 people, and I don't have the data at hand right, but I'm sat there, I've got my brother on this side, it's his first ever KubeCon, and he kind of goes like this, and then he realizes that nearly everybody around us has got their hands up, so he's kind of like, whoa, yeah, I feel like I'm in the in-crowd now. And I think from the people that I've spoken to it seems to be that the community is maturing, the conference or the event itself is maturing, and that starts to bring in kind of a different crowd, and a new crowd. People that are not necessarily building Kubernetes or building projects in the Kubernetes ecosystem, but looking to bring it into their organizations to run their own applications. >> Yeah, no absolutely. You know, the rough number I heard was somewhere two-thirds to three-quarters of that room were new. >> Nigel: I can believe that. >> 12,000 here in attendance, right. There were 8,000 here last year. >> Nigel: Yeah. >> You think about the, you know, somebody, oh I sent somebody this year, I sent somebody different the next year, and all the new people. So, you know, Nigel, luckily that keeps you busy, because there is something I've said for a long, long time, is there is always a need for that introductory, and then how do I get started and how do I get into here, and luckily the ecosystem and all the projects and everything, somebody could pick that up in five or 10 minutes if they'd just put their mind to it, right. >> So I say this a lot of the time, that I feel like we live in the Golden Age of being able to take hold of your own career and learn a technology and make the best of what's available for you. We don't live in the day anymore where, you know, to learn something new you would have to buy infrastructure. I mean even to learn Windows back in the day, or NetWare or Linux, you'd need a couple of dusty old PCs in the corner of your office or your bedroom or something, and it was hard.
Whereas now with cloud, with video training, with all the hands-on labs and stuff that are out there, with all of the sessions that you get at events like this, if you're interested in pushing your career forward, not only have you not got an excuse not to do it anymore, but the opportunities are just amazing, right. I feel like we're living in an exciting time for tech. >> Well Nigel, you do books, you've done training courses, you have your platform, like a lab platform, msb.com. And one of the challenges in this space is that it is moving so fast, right. Yes, you have, anything's at your fingertips, but. >> Nigel: Yeah. >> Kubernetes changes every quarter. Here at the show, both the scale of people's deployments, but also, probably, the scale of the number of projects, and everything has a different name. >> Nigel: Yeah. >> So, how are you, what should people be looking for? How are you changing your curriculum? What are you adding to it, what are you replicating? >> Yeah, so that's super interesting. I think, right, as well, so it's a Golden Age for learning, right, but if you're in the technology industry in the sort of areas that we are, right, if you don't love it and if you're not passionate about it, I almost feel like you're in the wrong industry, because you need that passion, and it's sort of my hobby as well as my job, just to keep up. Like I feel like I spend an unhealthy amount of time in the Cloud Native ecosystem and just trying to keep track of everything that's going on. And all that time that I spend in, I still feel like I'm playing catch-up all the time. So I think you have to adjust your mentality. Like if you thought that you could learn something, a technology or whatever, and be comfortable for five years in your role, then you really need to adjust that. Like, just an example, right. So I write, I offer a book as well, and I would love nothing better than to write that book, stick it on a shelf on Amazon and what-have-you and let it be valid for five years. I would love that, because it's hard work, but I can't, so I do a six-monthly update, but that applies to way more than that. So for your career, you know, if you want to, it sounds cheesy, if you want to rock it in your career, you have got to keep yourself up to date. And it's a race, but I do think that the kind of things we're doing with tech now, they're fun things, right. >> Yeah, a little scary, because while we're at this show I hope you kept up with all the Amazon announcements, the Google announcements. >> Nigel: Yeah. >> And everything going, because it is non-stop. >> Nigel: It is. >> Out there. Nigel, we last had you on theCUBE two years ago at this show, and at every show for a bunch of shows it seemed like there was a project or a category du jour. >> Nigel: Yeah. >> I don't know that I quite got that this year. There were some really cool things in edge computing. There was observability, something we spent a bunch of time talking about. But we'd love to just kind of throw it out there as to what you're seeing in the ecosystem, the landscape, some of the areas that are interesting. >> Nigel: Yeah. >> Important, and what's growing, what's not. >> Okay, so if I can take the event first off, right, so KubeCon itself. Loads of new people, okay, and when I talk to them I'm getting three answers from them. Like number one, they're like, some people like, I just love it, you know, which is great, and I've loved it and it's an amazing event.
Other people are kind of overawed by it, the size. So I don't know, maybe we should send them to re:Invent and then come back here and then they'll be like, oh yeah, it's not so bad. But the second thing is that some of the sessions are going over the first-timers' heads. So I'm hoping, and I'm sure it will, that going forward in Amsterdam and Boston next year we'll start to be able to pitch parts of the conference to that new user base. So that was kind of a theme from speaking to people at the event, for me. But a couple of things from the ecosystem: like, we talked about service mesh, right, two years ago, and it felt like it was a bit of a buzzword, but everyone was talking about it and it was a real theme, and I don't get that at this conference, but what I do feel from the community in general is that uptake and adoption is actually starting to happen now, and thanks a lot to, well look, Linkerd is pretty easy these days, Istio is making great strides towards being easier to deploy, but I also think that the cloud providers, those hosted cloud providers, are really stepping up to the plate, like they did with hosted Kubernetes, you know, when it was hard to get Kubernetes for your environment. We're seeing a similar thing with the service mesh. You can spin up a Kubernetes cluster in GKE, click the box, and I'll have a service mesh, thank you very much. >> Well, it's funny. I think back to Austin, when I talked to the average customer on the show floor and said, "What are you doing?" they were rolling their own. Picking all of the pieces and doing it. When I talk to the average customer here, it's, I'm using managed services. >> Nigel: Yeah. >> Seems to have matured a lot. Of course, some of the managed public cloud services were brand new or only a couple of months old there. Is that a general direction you see things going? >> So, yes, but I almost wonder if it will be like cloud in general, right, where there was a big move to the cloud. And I understand why people will want to do hosted Kubernetes and things, 'cause it's easy and, you know, it gets you, and I'm careful when I use the term production grade, because I know it means different things to different people, but you get something that we can at least loosely term production grade. >> Yeah, and actually just to be clear, we had a lot of discussions about on-premises, so I guess it's more the managed service rather than the, I'm going to roll all the pieces myself. >> Yeah, but I wonder whether we'll start, because of price and maybe the ability to tweak the cluster towards your needs and things, whether we might see people taking their first steps on a managed service or a hosted Kubernetes, and then as they scale up they start to say, well, tell you what, we'll start rolling our own, because we're better at doing this now, and then, you know, you still have your hosted stuff, but you have some stuff on premises as well, and then we move towards something that's a bit more hybrid. I don't know, but I just wonder if that will become a trend. >> Well Nigel, I mean it's been a busy week. You started off with workshops. I don't know, what did you miss? What's the first thing, when you go home, back to England, and you pop open your browser and start looking at all the session videos and stuff, I don't know, what didn't you get a chance to do here this week?
>> So I was kind of, for me it's been the busiest KubeCon I've had and it's robbed me of a lot of sessions, right, and I remember when I looked at the catalog at the beginning, it was like, you know, it's one of those conferences where in almost every slot there's three things that I want to go to, which is a sign of a good conference. I'm quite interested at the moment in K3s. I actually haven't touched it for a long time, but outside of KubeCon I have had a lot of people talk to me about that, so I will go home and I will hunt down, right, what are the K3s sessions, to try and get myself back up to speed, 'cause I know there are other projects that are similar, right, but I find it quite fascinating in that it's one of those projects where it started out with like this goal of, we'll be for the edge, right, or for IoT or something, and the community are like, we really like it, and actually I want to use it for loads of other things. You have no idea whether it will go on to be like a roaring success, but, I don't know, so often you have it where a project isn't planned to be something. >> Announcer: Good afternoon attendees. Breakout sessions will begin in 10 minutes. >> But it naturally, in the community. >> Announcer: Session locations are listed. >> Take it on and say. >> Announcer: On the noted schedule. >> We're going to do something with it. >> Announcer: On digital signage throughout the venue. >> That wasn't originally planned, yeah. So I'll be looking up K3s as my first thing when I go home, but it is the first thing on a long list, right. >> All right. Nigel, tell us a little bit about, you know, the latest things you're doing, msb.com. I know you had your book signing for your book here, had huge lines here. >> Yeah. >> Great to see. So, tell us about what you're doing overall. >> Thank you, yeah. So, I've got a couple of books and I've got a bunch of video training courses out there, and I'm super fortunate that I've reached a lot of people, but a real common theme when I talk to people is, like, look, I love your book, I love your video courses, whatever, how do I take that next step, and the answer was always, look, get your hands on as much as possible, okay. And I would send people to like Minikube and to play with Docker or play with Kubernetes and various other solutions, but none of them really seemed to be like a real something that looked and smelled and tasted like production. So I'm working with a start-up at the moment, msb.com, where we have curated learning content. Everybody gets their own fully functioning private three-node Kubernetes cluster. Ingress will work, internet-facing load balancers will all work on it, and the idea is that instead of having like a single-node development environment on your laptop, which is fine, but you know, you can't really play with scheduling and things like that, msb.com takes that sort of learning journey to the next level because it's a real working cluster, plus we've got this amazing visual dashboard so that when you're deploying stuff and scaling and rolling updates you see it all happening in the browser.
And for me as an educator, right, it's sometimes hard for people to connect the dots when you're reading a book, and I spend hours on, like, PowerPoint animations and stuff, whereas now, in this browser, to augment reading a book and to augment taking a training video, you can go and get your hands on and have this amazing sort of rich visual experience that really helps you, like, sort of, oh, I get it now, yeah. >> All right, so Nigel, final question I have for you. I've known you back when we were just a couple of infrastructure guys. You've done phenomenal things. >> Nigel: The glory days. >> With kind of the wave of containers, you're a Docker Captain. You know, really well known in the Kubernetes community. When you reflect back on something, on kind of this journey we've been on, you look at 12,000 people here, you know Docker has some recent news here, so give us a reflection back on this journey the whole industry's on. >> Yeah, so I had breakfast with a guy this morning who I wrote my first ever public blog with. He had a blog site and he loaned me some space on his blog site 'cause I didn't even know how to build a blog at the time, and it was a storage blog, yeah, we're talking about EMC and HDS and all that kind of stuff, and I'm having breakfast with him, 14 years later I think, in San Diego at KubeCon. And I think, and I don't know if this really answers your question, but I feel like Kubernetes is almost so, if ubiquitous is the right word, or it's so pervasive, and it's so all-encompassing almost, that it is bringing in almost the entire community. I don't want to get too carried away with saying this, right, but it is bringing people from all different areas to like a common platform, for want of a better term, right. I mean we were infrastructure guys, yourself as well, John, and here we are at an event that as a community and as a technology I think it's just, it's changing the world, but it's also bringing things almost under one hood. So I would say anybody, like whatever you're doing, do all roads lead to Kubernetes at the moment, I don't know. >> Yeah, well we know software can actually be a unifying factor. Best term I've heard is Kubernetes is looking to be that universal backplane. >> Nigel: Yeah. >> And therefore, both, you know, southbound to the infrastructure, northbound to the application. Nigel Poulton, congratulations on the progress. Definitely, everybody make sure to check out his training online, and thank you for helping us to wrap up our three days of coverage here. For John Troyer, I am Stu Miniman. TheCUBE will be at KubeCon 2020 in both Amsterdam and Boston. We will be at lots of other shows. Be sure to check out thecube.net. Please reach out if you have any questions. We are looking for more people to help support our growing coverage in the cloud native space, so thank you so much to the community, thank you to all of our guests, thank you to the CNCF and our sponsors that make this coverage possible, and thank you to you, our audience, for watching theCUBE. (upbeat music)
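As a hedged illustration of the kind of exercise Nigel says a multi-node lab cluster makes possible (watching the scheduler place pods and rolling out scale changes), here is a minimal sketch using the official Kubernetes Python client. It is not msb.com's platform code; the image, names, and replica counts are arbitrary examples, and it assumes the lab cluster is the current kubeconfig context.

```python
# Minimal sketch: create a small Deployment, then scale it up so the scheduler
# has to spread the extra replicas across the cluster's nodes.
from kubernetes import client, config

config.load_kube_config()  # assumes the lab cluster is the current context
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scale out and let the scheduler place the additional replicas.
apps.patch_namespaced_deployment_scale(
    name="hello-web", namespace="default", body={"spec": {"replicas": 5}}
)
```

The same code runs on a single-node Minikube, but on a multi-node cluster the five replicas will typically land on different nodes, which is what makes scheduling behavior visible in the kind of dashboard described above.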

Published Date : Nov 22 2019

SUMMARY :

Stu Miniman and John Troyer wrap up three days of KubeCon + CloudNativeCon 2019 coverage in San Diego with Nigel Poulton, head of content at msb.com. Poulton reflects on how many of the 12,000 attendees were at their first KubeCon, argues that keeping skills current is now a continuous effort (he refreshes his own book every six months), and sees service mesh adoption finally picking up as Linkerd, Istio, and hosted offerings like GKE make it easier. The conversation also covers the shift from rolling your own Kubernetes to managed services, his interest in K3s, and msb.com's hosted lab clusters for hands-on learning.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nigel | PERSON | 0.99+
Nigel Poulton | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Amsterdam | LOCATION | 0.99+
England | LOCATION | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
8,000 | QUANTITY | 0.99+
John | PERSON | 0.99+
San Diego | LOCATION | 0.99+
five years | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
next year | DATE | 0.99+
five | QUANTITY | 0.99+
last year | DATE | 0.99+
Boston | LOCATION | 0.99+
12,000 people | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+
San Diego California | LOCATION | 0.99+
10 minutes | QUANTITY | 0.99+
first | QUANTITY | 0.99+
PowerPoint | TITLE | 0.99+
8,000 people | QUANTITY | 0.99+
this year | DATE | 0.99+
two years ago | DATE | 0.99+
Amazon | ORGANIZATION | 0.99+
Windows | TITLE | 0.99+
second thing | QUANTITY | 0.99+
12,000 | QUANTITY | 0.99+
KubeCon 2020 | EVENT | 0.98+
both | QUANTITY | 0.98+
thecube.net | OTHER | 0.98+
EMC | ORGANIZATION | 0.98+
first day | QUANTITY | 0.98+
three days | QUANTITY | 0.98+
one | QUANTITY | 0.98+
CloudNativeCon | EVENT | 0.97+
first steps | QUANTITY | 0.97+
Google | ORGANIZATION | 0.97+
CUBE | ORGANIZATION | 0.97+
three-quarters | QUANTITY | 0.97+
three things | QUANTITY | 0.97+
Linux | TITLE | 0.97+
this week | DATE | 0.97+
single | QUANTITY | 0.96+
three answers | QUANTITY | 0.96+
CNCF | ORGANIZATION | 0.96+
HDS | ORGANIZATION | 0.95+
first thing | QUANTITY | 0.94+
Kubernetes | TITLE | 0.94+
STO | ORGANIZATION | 0.93+
two-thirds | QUANTITY | 0.92+
msb.com | ORGANIZATION | 0.92+
msb.com | OTHER | 0.91+