
Search Results for ISTIO:

Jack Greenfield, Walmart | A Dive into Walmart's Retail Supercloud


 

>> Welcome back to SuperCloud2. This is Dave Vellante, and we're here with Jack Greenfield. He's the Vice President of Enterprise Architecture and the Chief Architect for the global technology platform at Walmart. Jack, I want to thank you for coming on the program. Really appreciate your time. >> Glad to be here, Dave. Thanks for inviting me and appreciate the opportunity to chat with you. >> Yeah, it's our pleasure. Now we call what you've built a SuperCloud. That's our term, not yours, but how would you describe the Walmart Cloud Native Platform? >> So WCNP, as the acronym goes, is essentially an implementation of Kubernetes for the Walmart ecosystem. And what that means is that we've taken Kubernetes off the shelf as open source, and we have integrated it with a number of foundational services that provide other aspects of our computational environment. So Kubernetes off the shelf doesn't do everything. It does a lot. In particular the orchestration of containers, but it delegates through API a lot of key functions. So for example, secret management, traffic management, there's a need for telemetry and observability at a scale beyond what you get from raw Kubernetes. That is to say, harvesting the metrics that are coming out of Kubernetes and processing them, storing them in time series databases, dashboarding them, and so on. There's also an angle to Kubernetes that gets a lot of attention in the daily DevOps routine, that's not really part of the open source deliverable itself, and that is the DevOps sort of CICD pipeline-oriented lifecycle. And that is something else that we've added and integrated nicely. And then one more piece of this picture is that within a Kubernetes cluster, there's a function that is critical to allowing services to discover each other and integrate with each other securely and with proper configuration provided by the concept of a service mesh. So Istio, Linkerd, these are examples of service mesh technologies. And we have gone ahead and integrated actually those two. There's more than those two, but we've integrated those two with Kubernetes. So the net effect is that when a developer within Walmart is going to build an application, they don't have to think about all those other capabilities where they come from or how they're provided. Those are already present, and the way the CICD pipelines are set up, it's already sort of in the picture, and there are configuration points that they can take advantage of in the primary YAML and a couple of other pieces of config that we supply where they can tune it. But at the end of the day, it offloads an awful lot of work for them, having to stand up and operate those services, fail them over properly, and make them robust. All of that's provided for. >> Yeah, you know, developers often complain they spend too much time wrangling and doing things that aren't productive. So I wonder if you could talk about the high level business goals of the initiative in terms of the hardcore benefits. Was the real impetus to tap into best of breed cloud services? Were you trying to cut costs? Maybe gain negotiating leverage with the cloud guys? Resiliency, you know, I know was a major theme. Maybe you could give us a sense of kind of the anatomy of the decision making process that went in. >> Sure, and in the course of answering your question, I think I'm going to introduce the concept of our triplet architecture which we haven't yet touched on in the interview here. 
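To make the developer experience described above more concrete, here is a minimal sketch of the kind of manifest an application team might own when the platform supplies the mesh and its supporting services. This is a generic Kubernetes and Istio illustration, not Walmart's actual WCNP configuration; the namespace, labels, and image are hypothetical.

```yaml
# Hypothetical namespace managed by the platform team; the label asks Istio to
# inject its sidecar proxy into every pod scheduled here, so traffic management,
# mTLS, and telemetry arrive without any application changes.
apiVersion: v1
kind: Namespace
metadata:
  name: team-checkout
  labels:
    istio-injection: enabled
---
# The application team's "primary YAML": a plain Deployment with no mesh,
# secret-management, or observability plumbing of its own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
  namespace: team-checkout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout-api
  template:
    metadata:
      labels:
        app: checkout-api
    spec:
      containers:
      - name: checkout-api
        image: registry.example.com/checkout-api:1.4.2  # hypothetical image
        ports:
        - containerPort: 8080
```

The point of the abstraction Jack describes is that everything beneath this manifest (sidecars, certificates, metrics pipelines, CI/CD) is standardized and operated by the platform rather than by each application team.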
First off, just to sort of wrap up the motivation for WCNP itself which is kind of orthogonal to the triplet architecture. It can exist with or without it. Currently does exist with it, which is key, and I'll get to that in a moment. The key drivers, business drivers for WCNP were developer productivity by offloading the kinds of concerns that we've just discussed. Number two, improving resiliency, that is to say reducing opportunity for human error. One of the challenges you tend to run into in a large enterprise is what we call snowflakes, lots of gratuitously different workloads, projects, configurations to the extent that by developing and using WCNP and continuing to evolve it as we have, we end up with cookie cutter like consistency across our workloads which is super valuable when it comes to building tools or building services to automate operations that would otherwise be manual. When everything is pretty much done the same way, that becomes much simpler. Another key motivation for WCNP was the ability to abstract from the underlying cloud provider. And this is going to lead to a discussion of our triplet architecture. At the end of the day, when one works directly with an underlying cloud provider, one ends up taking a lot of dependencies on that particular cloud provider. Those dependencies can be valuable. For example, there are best of breed services like say Cloud Spanner offered by Google or say Cosmos DB offered by Microsoft that one wants to use and one is willing to take the dependency on the cloud provider to get that functionality because it's unique and valuable. On the other hand, one doesn't want to take dependencies on a cloud provider that don't add a lot of value. And with Kubernetes, we have the opportunity, and this is a large part of how Kubernetes was designed and why it is the way it is, we have the opportunity to sort of abstract from the underlying cloud provider for stateless workloads on compute. And so what this lets us do is build container-based applications that can run without change on different cloud provider infrastructure. So the same applications can run on WCNP over Azure, WCNP over GCP, or WCNP over the Walmart private cloud. And we have a private cloud. Our private cloud is OpenStack based and it gives us some significant cost advantages as well as control advantages. So to your point, in terms of business motivation, there's a key cost driver here, which is that we can use our own private cloud when it's advantageous and then use the public cloud provider capabilities when we need to. A key place with this comes into play is with elasticity. So while the private cloud is much more cost effective for us to run and use, it isn't as elastic as what the cloud providers offer, right? We don't have essentially unlimited scale. We have large scale, but the public cloud providers are elastic in the extreme which is a very powerful capability. So what we're able to do is burst, and we use this term bursting workloads into the public cloud from the private cloud to take advantage of the elasticity they offer and then fall back into the private cloud when the traffic load diminishes to the point where we don't need that elastic capability, elastic capacity at low cost. And this is a very important paradigm that I think is going to be very commonplace ultimately as the industry evolves. Private cloud is easier to operate and less expensive, and yet the public cloud provider capabilities are difficult to match. 
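The bursting pattern Jack describes maps naturally onto weighted routing in a service mesh. The sketch below is a generic Istio example, not Walmart's actual configuration; the hostnames, service names, and weights are hypothetical, and in practice the weights would be adjusted by an operator or, eventually, an automated control plane.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders-burst
spec:
  hosts:
  - orders.retail.internal                      # hypothetical internal hostname
  http:
  - route:
    - destination:
        host: orders.private.svc.cluster.local  # private (OpenStack) cluster
      weight: 80
    - destination:
        host: orders.public-east.retail.internal  # public cloud cluster in the same region
      weight: 20
```

Raising the second weight shifts load toward the elastic public cloud capacity; dropping it back toward zero returns the workload to the lower-cost private cloud once traffic subsides.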
>> And the triplet, the tri is your on-prem private cloud and the two public clouds that you mentioned, is that right? >> That is correct. And we actually have an architecture in which we operate all three of those cloud platforms in close proximity with one another in three different major regions in the US. So we have east, west, and central. And in each of those regions, we have all three cloud providers. And the way it's configured, those data centers are within 10 milliseconds of each other, meaning that it's of negligible cost to interact between them. And this allows us to be fairly agnostic to where a particular workload is running. >> Does a human make that decision, Jack or is there some intelligence in the system that determines that? >> That's a really great question, Dave. And it's a great question because we're at the cusp of that transition. So currently humans make that decision. Humans choose to deploy workloads into a particular region and a particular provider within that region. That said, we're actively developing patterns and practices that will allow us to automate the placement of the workloads for a variety of criteria. For example, if in a particular region, a particular provider is heavily overloaded and is unable to provide the level of service that's expected through our SLAs, we could choose to fail workloads over from that cloud provider to a different one within the same region. But that's manual today. We do that, but people do it. Okay, we'd like to get to where that happens automatically. In the same way, we'd like to be able to automate the failovers, both for high availability and sort of the heavier disaster recovery model between, within a region between providers and even within a provider between the availability zones that are there, but also between regions for the sort of heavier disaster recovery or maintenance driven realignment of workload placement. Today, that's all manual. So we have people moving workloads from region A to region B or data center A to data center B. It's clean because of the abstraction. The workloads don't have to know or care, but there are latency considerations that come into play, and the humans have to be cognizant of those. And automating that can help ensure that we get the best performance and the best reliability. >> But you're developing the dataset to actually, I would imagine, be able to make those decisions in an automated fashion over time anyway. Is that a fair assumption? >> It is, and that's what we're actively developing right now. So if you were to look at us today, we have these nice abstractions and APIs in place, but people run that machine, if you will, moving toward a world where that machine is fully automated. >> What exactly are you abstracting? Is it sort of the deployment model or, you know, are you able to abstract, I'm just making this up like Azure functions and GCP functions so that you can sort of run them, you know, with a consistent experience. What exactly are you abstracting and how difficult was it to achieve that objective technically? >> that's a good question. What we're abstracting is the Kubernetes node construct. That is to say a cluster of Kubernetes nodes which are typically VMs, although they can run bare metal in certain contexts, is something that typically to stand up requires knowledge of the underlying cloud provider. So for example, with GCP, you would use GKE to set up a Kubernetes cluster, and in Azure, you'd use AKS. 
We are actually abstracting that aspect of things so that the developers standing up applications don't have to know what the underlying cluster management provider is. They don't have to know if it's GCP, AKS or our own Walmart private cloud. Now, in terms of functions like Azure functions that you've mentioned there, we haven't done that yet. That's another piece that we have sort of on our radar screen that, we'd like to get to is serverless approach, and the Knative work from Google and the Azure functions, those are things that we see good opportunity to use for a whole variety of use cases. But right now we're not doing much with that. We're strictly container based right now, and we do have some VMs that are running in sort of more of a traditional model. So our stateful workloads are primarily VM based, but for serverless, that's an opportunity for us to take some of these stateless workloads and turn them into cloud functions. >> Well, and that's another cost lever that you can pull down the road that's going to drop right to the bottom line. Do you see a day or maybe you're doing it today, but I'd be surprised, but where you build applications that actually span multiple clouds or is there, in your view, always going to be a direct one-to-one mapping between where an application runs and the specific cloud platform? >> That's a really great question. Well, yes and no. So today, application development teams choose a cloud provider to deploy to and a location to deploy to, and they have to get involved in moving an application like we talked about today. That said, the bursting capability that I mentioned previously is something that is a step in the direction of automatic migration. That is to say we're migrating workload to different locations automatically. Currently, the prototypes we've been developing and that we think are going to eventually make their way into production are leveraging Istio to assess the load incoming on a particular cluster and start shedding that load into a different location. Right now, the configuration of that is still manual, but there's another opportunity for automation there. And I think a key piece of this is that down the road, well, that's a, sort of a small step in the direction of an application being multi provider. We expect to see really an abstraction of the fact that there is a triplet even. So the workloads are moving around according to whatever the control plane decides is necessary based on a whole variety of inputs. And at that point, you will have true multi-cloud applications, applications that are distributed across the different providers and in a way that application developers don't have to think about. >> So Walmart's been a leader, Jack, in using data for competitive advantages for decades. It's kind of been a poster child for that. You've got a mountain of IP in the form of data, tools, applications best practices that until the cloud came out was all On Prem. But I'm really interested in this idea of building a Walmart ecosystem, which obviously you have. Do you see a day or maybe you're even doing it today where you take what we call the Walmart SuperCloud, WCNP in your words, and point or turn that toward an external world or your ecosystem, you know, supporting those partners or customers that could drive new revenue streams, you know directly from the platform? >> Great questions, Dave. So there's really two things to say here. The first is that with respect to data, our data workloads are primarily VM basis. 
I've mentioned before some VMware, some straight open stack. But the key here is that WCNP and Kubernetes are very powerful for stateless workloads, but for stateful workloads tend to be still climbing a bit of a growth curve in the industry. So our data workloads are not primarily based on WCNP. They're VM based. Now that said, there is opportunity to make some progress there, and we are looking at ways to move things into containers that are currently running in VMs which are stateful. The other question you asked is related to how we expose data to third parties and also functionality. Right now we do have in-house, for our own use, a very robust data architecture, and we have followed the sort of domain-oriented data architecture guidance from Martin Fowler. And we have data lakes in which we collect data from all the transactional systems and which we can then use and do use to build models which are then used in our applications. But right now we're not exposing the data directly to customers as a product. That's an interesting direction that's been talked about and may happen at some point, but right now that's internal. What we are exposing to customers is applications. So we're offering our global integrated fulfillment capabilities, our order picking and curbside pickup capabilities, and our cloud powered checkout capabilities to third parties. And this means we're standing up our own internal applications as externally facing SaaS applications which can serve our partners' customers. >> Yeah, of course, Martin Fowler really first introduced to the world Zhamak Dehghani's data mesh concept and this whole idea of data products and domain oriented thinking. Zhamak Dehghani, by the way, is a speaker at our event as well. Last question I had is edge, and how you think about the edge? You know, the stores are an edge. Are you putting resources there that sort of mirror this this triplet model? Or is it better to consolidate things in the cloud? I know there are trade-offs in terms of latency. How are you thinking about that? >> All really good questions. It's a challenging area as you can imagine because edges are subject to disconnection, right? Or reduced connection. So we do place the same architecture at the edge. So WCNP runs at the edge, and an application that's designed to run at WCNP can run at the edge. That said, there are a number of very specific considerations that come up when running at the edge, such as the possibility of disconnection or degraded connectivity. And so one of the challenges we have faced and have grappled with and done a good job of I think is dealing with the fact that applications go offline and come back online and have to reconnect and resynchronize, the sort of online offline capability is something that can be quite challenging. And we have a couple of application architectures that sort of form the two core sets of patterns that we use. One is an offline/online synchronization architecture where we discover that we've come back online, and we understand the differences between the online dataset and the offline dataset and how they have to be reconciled. The other is a message-based architecture. And here in our health and wellness domain, we've developed applications that are queue based. So they're essentially business processes that consist of multiple steps where each step has its own queue. 
And what that allows us to do is devote whatever bandwidth we do have to those pieces of the process that are most latency sensitive and allow the queue lengths to increase in parts of the process that are not latency sensitive, knowing that they will eventually catch up when the bandwidth is restored. And to put that in a little bit of context, we have fiber lengths to all of our locations, and we have I'll just use a round number, 10-ish thousand locations. It's larger than that, but that's the ballpark, and we have fiber to all of them, but when the fiber is disconnected, and it does get disconnected on a regular basis. In fact, I forget the exact number, but some several dozen locations get disconnected daily just by virtue of the fact that there's construction going on and things are happening in the real world. When the disconnection happens, we're able to fall back to 5G and to Starlink. Starlink is preferred. It's a higher bandwidth. 5G if that fails. But in each of those cases, the bandwidth drops significantly. And so the applications have to be intelligent about throttling back the traffic that isn't essential, so that it can push the essential traffic in those lower bandwidth scenarios. >> So much technology to support this amazing business which started in the early 1960s. Jack, unfortunately, we're out of time. I would love to have you back or some members of your team and drill into how you're using open source, but really thank you so much for explaining the approach that you've taken and participating in SuperCloud2. >> You're very welcome, Dave, and we're happy to come back and talk about other aspects of what we do. For example, we could talk more about the data lakes and the data mesh that we have in place. We could talk more about the directions we might go with serverless. So please look us up again. Happy to chat. >> I'm going to take you up on that, Jack. All right. This is Dave Vellante for John Furrier and the Cube community. Keep it right there for more action from SuperCloud2. (upbeat music)
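For the serverless direction mentioned above (Knative on the Google side, Functions on Azure), a stateless container can typically be promoted to a scale-to-zero service with very little additional YAML. A minimal Knative sketch, with a hypothetical service name and image, might look like this:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: receipt-renderer                        # hypothetical stateless workload
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/receipt-renderer:0.3.0  # hypothetical image
        env:
        - name: OUTPUT_FORMAT
          value: "pdf"
```

Knative scales the service down to zero replicas when idle and back up on demand, which is the kind of cost lever the conversation alludes to for stateless workloads.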

Published Date : Feb 17 2023

SUMMARY :

Jack Greenfield, VP of Enterprise Architecture and Chief Architect at Walmart, describes the Walmart Cloud Native Platform (WCNP): Kubernetes integrated with secret management, traffic management, observability, CI/CD, and the Istio and Linkerd service meshes. He explains Walmart's triplet architecture, which runs workloads on Azure, GCP, and an OpenStack-based private cloud in three US regions within 10 milliseconds of each other, bursting into the public cloud for elasticity. He also covers plans to automate workload placement and failover, serverless directions, exposing fulfillment and checkout capabilities to partners as SaaS, and edge architectures built to tolerate disconnection.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Jack Greenfield | PERSON | 0.99+
Dave | PERSON | 0.99+
Jack | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Martin Fowler | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
Zhamak Dehghani | PERSON | 0.99+
Today | DATE | 0.99+
each | QUANTITY | 0.99+
One | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
today | DATE | 0.99+
two things | QUANTITY | 0.99+
three | QUANTITY | 0.99+
first | QUANTITY | 0.99+
each step | QUANTITY | 0.99+
First | QUANTITY | 0.99+
early 1960s | DATE | 0.99+
Starlink | ORGANIZATION | 0.99+
one | QUANTITY | 0.98+
a day | QUANTITY | 0.97+
GCP | TITLE | 0.97+
Azure | TITLE | 0.96+
WCNP | TITLE | 0.96+
10 milliseconds | QUANTITY | 0.96+
both | QUANTITY | 0.96+
Kubernetes | TITLE | 0.94+
Cloud Spanner | TITLE | 0.94+
Linkerd | ORGANIZATION | 0.93+
triplet | QUANTITY | 0.92+
three cloud providers | QUANTITY | 0.91+
Cube | ORGANIZATION | 0.9+
SuperCloud2 | ORGANIZATION | 0.89+
two core sets | QUANTITY | 0.88+
John Furrier | PERSON | 0.88+
one more piece | QUANTITY | 0.86+
two public clouds | QUANTITY | 0.86+
thousand locations | QUANTITY | 0.83+
Vice President | PERSON | 0.8+
10-ish | QUANTITY | 0.79+
WCNP | ORGANIZATION | 0.75+
decades | QUANTITY | 0.75+
three different major regions | QUANTITY | 0.74+


Brian Gracely & Idit Levine, Solo.io | KubeCon CloudNativeCon NA 2022


 

(bright upbeat music) >> Welcome back to Detroit guys and girls. Lisa Martin here with John Furrier. We've been on the floor at KubeCon + CloudNativeCon North America for about two days now. We've been breaking news, we would have a great conversations, John. We love talking with CUBE alumni whose companies are just taking off. And we get to do that next again. >> Well, this next segment's awesome. We have former CUBE host, Brian Gracely, here who's an executive in this company. And then the entrepreneur who we're going to talk with. She was on theCUBE when it just started now they're extremely successful. It's going to be a great conversation. >> It is, Idit Levine is here, the founder and CEO of solo.io. And as John mentioned, Brian Gracely. You know Brian. He's the VP of Product Marketing and Product Strategy now at solo.io. Guys, welcome to theCUBE, great to have you here. >> Thanks for having us. >> Idit: Thank so much for having us. >> Talk about what's going on. This is a rocket ship that you're riding. I was looking at your webpage, you have some amazing customers. T-Mobile, BMW, Amex, for a marketing guy it must be like, this is just- >> Brian: Yeah, you can't beat it. >> Kid in a candy store. >> Brian: Can't beat it. >> You can't beat it. >> For giant companies like that, giant brands, global, to trust a company of our size it's trust, it's great engineering, it's trust, it's fantastic. >> Idit, talk about the fast trajectory of this company and how you've been able to garner trust with such mass organizations in such a short time period. >> Yes, I think that mainly is just being the best. Honestly, that's the best approach I can say. The team that we build, honestly, and this is a great example of one of them, right? And we're basically getting the best people in the industry. So that's helpful a lot. We are very, very active on the open source community. So basically it building it, anyway, and by doing this they see us everywhere. They see our success. You're starting with a few customers, they're extremely successful and then you're just creating this amazing partnership with them. So we have a very, very unique way we're working with them. >> So hard work, good code. >> Yes. >> Smart people, experience. >> That's all you need. >> It's simple, why doesn't everyone do it? >> It's really easy. (all laughing) >> All good, congratulations. It's been fun to watch you guys grow. Brian, great to see you kicking butt in this great company. I got to ask about the landscape because I love the ServiceMeshCon you guys had on a co-located event on day zero here as part of that program, pretty packed house. >> Brian: Yep. >> A lot of great feedback. This whole ServiceMesh and where it fits in. You got Kubernetes. What's the update? Because everything's kind of coming together- >> Brian: Right. >> It's like jello in the refrigerator it kind of comes together at the same time. Where are we? >> I think the easiest way to think about it is, and it kind of mirrors this event perfectly. So the last four or five years, all about Kubernetes, built Kubernetes. So every one of our customers are the ones who have said, look, for the last two or three years, we've been building Kubernetes, we've had a certain amount of success with it, they're building applications faster, they're deploying and then that success leads to new challenges, right? So we sort of call that first Kubernetes part sort of CloudNative 1.0, this and this show is really CloudNative 2.0. What happens after Kubernetes service mesh? 
Is that what happens after Kubernetes? And for us, Istio now being part of the CNCF, huge, standardized, people are excited about it. And then we think we are the best at doing Istio from a service mesh perspective. So it's kind of perfect, perfect equation. >> Well, I'll turn it on, listen to your great Cloud cast podcast, plug there for you. You always say what is it and what isn't it? >> Brian: Yeah. >> What is your product and what isn't it? >> Yeah, so our product is, from a purely product perspective it's service mesh and API gateway. We integrate them in a way that nobody else does. So we make it easier to deploy, easier to manage, easier to secure. I mean, those two things ultimately are, if it's an internal API or it's an external API, we secure it, we route it, we can observe it. So if anybody's, you're building modern applications, you need this stuff in order to be able to go to market, deploy at scale all those sort of things. >> Idit, talk about some of your customer conversations. What are the big barriers that they've had, or the challenges, that solo.io comes in and just wipes off the table? >> Yeah, so I think that a lot of them, as Brian described it, very, rarely they had a success with Kubernetes, maybe a few clusters, but then they basically started to on-ramp more application on those clusters. They need more cluster maybe they want multi-class, multi-cloud. And they mainly wanted to enable the team, right? This is why we all here, right? What we wanted to eventually is to take a piece of the infrastructure and delegate it to our customers which is basically the application team. So I think that that's where they started to see the problem because it's one thing to take some open source project and deploy it very little bit but the scale, it's all about the scale. How do you enable all those millions of developers basically working on your platform? How do you scale multi-cloud? What's going on if one of them is down, how do you fill over? So that's exactly the problem that they have >> Lisa: Which is critical for- >> As bad as COVID was as a global thing, it was an amazing enabler for us because so many companies had to say... If you're a retail company, your front door was closed, but you still wanted to do business. So you had to figure out, how do I do mobile? How do I be agile? If you were a company that was dealing with like used cars your number of hits were through the roof because regular cars weren't available. So we have all these examples of companies who literally overnight, COVID was their digital transformation enabler. >> Lisa: Yes. Yes. >> And the scale that they had to deal with, the agility they had to deal with, and we sort of fit perfectly in that. They re-looked at what's our infrastructure look like? What's our security look like? We just happened to be right place in the right time. >> And they had skillset issues- >> Skillsets. >> Yeah. >> And the remote work- >> Right, right. >> Combined with- >> Exactly. >> Modern upgrade gun-to-the-head, almost, kind of mentality. >> And we're really an interesting company. Most of the interactions we do with customers is through Slack, obviously it was remote. We would probably be a great Slack case study in terms of how to do business because our customers engage with us, with engineers all over the world, they look like one team. But we can get them up and running in a POC, in a demo, get them through their things really, really fast. 
It's almost like going to the public cloud, but at whatever complexity they want. >> John: Nice workflow. >> So a lot of momentum for you guys silver linings during COVID, which is awesome we do hear a lot of those stories of positive things, the acceleration of digital transformation, and how much, as consumers, we've all benefited from that. Do you have one example, Brian, as the VP of product marketing, of a customer that you really think in the last two years just is solo.io's value proposition on a platter? >> I'll give you one that I think everybody can understand. So most people, at least in the United States, you've heard of Chick-fil-A, retail, everybody likes the chicken. 2,600 stores in the US, they all shut down and their business model, it's good food but great personal customer experience. That customer experience went away literally overnight. So they went from barely anybody using the mobile application, and hence APIs in the backend, half their business now goes through that to the point where, A, they shifted their business, they shifted their customer experience, and they physically rebuilt 2,600 stores. They have two drive-throughs now that instead of one, because now they have an entire one dedicated to that mobile experience. So something like that happening overnight, you could never do the ROI for it, but it's changed who they are. >> Lisa: Absolutely transformative. >> So, things like that, that's an example I think everybody can kind of relate to. Stuff like that happened. >> Yeah. >> And I think that's also what's special is, honestly, you're probably using a product every day. You just don't know that, right? When you're swiping your credit card or when you are ordering food, or when you using your phone, honestly the amount of customer they were having, the space, it's like so, every industry- >> John: How many customers do you have? >> I think close to 200 right now. >> Brian: Yeah. >> Yeah. >> How many employees, can you gimme some stats? Funding, employees? What's the latest statistics? >> We recently found a year ago $135 million for a billion dollar valuation. >> Nice. >> So we are a unicorn. I think when you took it we were around like 50 ish people. Right now we probably around 180, and we are growing, we probably be 200 really, really quick. And I think that what's really, really special as I said the interaction that we're doing with our customers, we're basically extending their team. So for each customer is basically a Slack channel. And then there is a lot of people, we are totally global. So we have people in APAC, in Australia, New Zealand, in Singapore we have in AMEA, in UK and in Spain and Paris, and other places, and of course all over US. >> So your use case on how to run a startup, scale up, during the pandemic, complete clean sheet of paper. >> Idit: We had to. >> And what happens, you got Slack channels as your customer service collaboration slash productivity. What else did you guys do differently that you could point to that's, I would call, a modern technique for an entrepreneurial scale? >> So I think that there's a few things that we are doing different. So first of all, in Solo, honestly, there is a few things that differentiated from, in my opinion, most of the companies here. Number one is look, you see this, this is a lot, a lot of new technology and one of the things that the customer is nervous the most is choosing the wrong one because we saw what happened, right? I don't know the orchestration world, right? 
>> John: So choosing and also integrating multiple things at the same time. >> Idit: Exactly. >> It's hard. >> And this is, I think, where Solo's expertise comes into play. So I mean we have one team that is dedicated to open source contribution and working with all the open source community, and I think we're really good at picking the right product and basically we're usually right, which is great. So if you're looking at Kubernetes, we went there from the beginning. If you're looking at something like service mesh, Istio, we were all in on Envoy proxy and out of process. So I think that by choosing these things, and now Cilium is something that we're also focusing on. I think that by using the right technology, first of all you know that it's very expensive to migrate from one to the other if you get it wrong. So I think that's one thing that we're always really good at. But then once we actually get behind those projects, we're very good at going in and leading those communities. So we are basically bringing the customers to the community itself. So we are leading this by being TOC members, right? The Technical Oversight Committee. And we are leading by actually contributing a lot. So if the customer needs something immediately, we will patch it for them and work upstream. So that's kind of like the second thing. And the third one is innovation. And that's really important to us. So we're pushing the boundaries. Ambient, that we announced a month ago with Google- >> And Istio, the book that's out. >> Yes, the Ambient, it's basically a modern Istio which is the future of Istio. We worked on it with Google under NDA and we released it last month. This is exactly an example of us basically saying we can do it better. We learn from our customers, which is huge. And now we know that we can do better. So this is the third thing, and the last one is the partnership. I mean honestly we are the extension team of the customer. We are there on Slack if they need something. Honestly, there is a reason why our renewal rate is 98.9 and our net expansion is 135%. I mean customers are very, very happy. >> You deploy it, you make it right. >> Idit: Exactly, exactly. >> The other thing we did, and again this was during COVID, we didn't want to be a shelfware company. We didn't want to drop stuff off and you didn't know what to do with it. We trained nearly 10,000 people. We have something called Solo Academy, which is free, online workshops, they run all the time, people can come and get hands-on training. So we're building an army of people that are those specialists that have that skill set. So we don't have to walk into shops and go like, well okay, I hope six months from now you guys can figure this stuff out. They're like, they've been doing that. >> And if their friend sees their friend, sees their friend. >> The other thing, and I got to figure out as a marketing person how to do this, we have more than a few handfuls of people that have got promoted, they got promoted, they got promoted. We keep seeing people who deploy our technologies, who, because of this stuff they're doing- >> John: That's a good sign. They're doing it at scale. >> John: That promoter score. >> They keep getting promoted. >> Yeah, that's amazing. >> That's a powerful sort of side benefit. >> Absolutely, that's a great thing to have for marketing. Last question before we run out of time. You and I, Idit, were talking before we went live, your sessions here are overflowing.
What's your overall sentiment of KubeCon 2022 and what feedback have you gotten from all the customers bursting at the seams to come talk to you guys? >> I think first of all, there was the pre-event which we had, and it was a lot of fun. We talked to a lot of customers, most of them Fortune 500, global, successful companies. So I think that people definitely... I will say that much. We definitely have the market feedback, people interested in this. Brian described very well what we see here, which is people trying to figure out CloudNative 2.0. So that's number one. The second thing is that there is a consolidation, which I like. I mean, Istio becoming right now a CNCF project, I think it's a huge, huge thing for all the community. I mean, we're talking about all the big three clouds, we partner with them. I mean I think this is a big sign that we agree, which I think is extremely important in this community. >> Congratulations on all your success. >> Thank you so much. >> And where can customers go to get their hands on this, solo.io? >> Solo.io? Yeah, absolutely. >> Awesome guys, this has been great. Congratulations on the momentum. >> Thank you. >> The rocket ship that you're riding. We know you've got to get to the airport, so we're going to let you go. But we appreciate your insights and your time so much, thank you. >> Thank you so much. >> Thanks guys, we appreciate it. >> A pleasure. >> Thanks. >> For our guests and John Furrier, this is Lisa Martin, live in Detroit, had to think about that for a second, at KubeCon + CloudNativeCon 2022. We'll be right back with our final guests of the day and then the show wraps, so stick around. (gentle music)

Published Date : Oct 27 2022



Matt Klein, Lyft | KubeCon + CloudNativeCon NA 2022


 

>>Good morning and welcome back to Detroit, Michigan. My name is Savannah Peterson and I'm here on set of the cube, my co-host John Farer. How you doing this morning, John? >>Doing great. Feeling fresh. Day two of three days of coverage, feeling >>Fresh. That is that for being in the heat of the conference. I love that attitude. It's gonna >>Be a great day today. We'll see you at the end of the day. Yeah, >>Well, we'll hold him to it. All right, everyone hold 'em accountable. Very excited to start the day off with an internet, a legend as well as a cube og. We are joined this morning by Matt Klein. Matt, welcome to the show. >>Thanks for having me. Good to see you. Yep. >>It's so, what's the vibe? Day two, Everyone's buzzing. What's got you excited at the show? You've been here before, but it's been three years you >>Mentioned. I, I was saying it's been three years since I've been to a conference, so it's been interesting for me to see what is, what is the same and what is different pre and post covid. But just really great to see everyone here again and nice to not be sitting in my home by myself. >>You know, Savannah said you're an OG and we were referring before we came on camera that you were your first came on the Cub in 2017, second Cuban event. But you were, I think, on the first wave of what I call the contributor momentum, where CNCF really got the traction. Yeah. You were at Lift, Envoy was contributed and that was really hyped up and I remember that vividly. It was day zero they called it back then. Yeah. And you got so much traction. People are totally into it. Yeah. Now we've got a lot of that going on now. Right. A lot of, lot of day Zero events. They call 'em co, co-located events. You got web assembly, a lot of other hype out there. What do you see out there that you like? How would you look at some of these other Sure. Communities that are developing, What's the landscape look like as you look out? Because Envoy set the table, what is now a standard >>Practice. Yeah. What's been so interesting for me just to come here to the conference is, you know, we open source Envoy in 2016. We donated in 2017. And as you mentioned at that time, Envoy was, you know, everyone wanted to talk about Envoy. And you know, much to my amazement, Envoy is now pervasive. I mean, it's used everywhere around the world. It's like, never in my wildest dreams would I have imagined that it would be so widely used. And it's almost gotten to the point where it's become boring. You know, It's just assumed that Envoy is, is everywhere. And now we're hearing a lot about Eeb p f and Web assembly and GI ops and you know, AI and a bunch of other things. So it's, it's actually great. It's made me very happy that it's become so pervasive, but it's also fun. Yeah. We mention to, to look around all other stuff >>Like congratulate. It's just a huge accomplishment really. I think it's gonna be historic, historical moment for the industry too. But I like how it progressed. I mean, I don't mind hype cycles as long as it's some vetting. Sure. Of course. You know, use cases that are clearly defined, but you gotta get that momentum in the community, but then you start gotta get down to, to business. Yep. So, so to speak and get it deployed, get traction. Yep. What should projects look like? And, and give us the update on Envoy. Cause you guys have a, a great use case of how you got traction. Right. Take us through some of the early days of what made Envoy successful in your opinion. Great question. >>Yeah. 
You know, I, I think Envoy is fairly unique around this conference in the sense that Envoy was developed by Lyft, which is an end user company. And many of the projects in this ecosystem, you know, no judgment, for better or worse, they are vendor backed. And I think that's a different delivery mechanism when it's coming from an end user where you're solving a, a particular business case. So Envoy was really developed for Lyft in a, you know, very early scaling days and just, you know, trying to help Lyft solve its business problems. So I think when Envoy was developed, we were, you know, scaling, we were falling over and actually many other companies were having similar problems. So I think Envoy became very widely deployed because many companies were having similar issues. So Envoy just became pervasive among lift peer companies. And then we saw a lot of vendor uptake in the service mesh space in the API gateway space among large internet providers. So, I I I, I think it's just, it's an interesting case because I think when you're solving real problems on the ground, in some ways it's easier to actually get adoption than if you're trying to develop it from a commercial backing. >>And that's the class, I mean, almost, It's almost like open source product market fit. It is in its own way. Cause you have a problem. Absolutely. Other people have the same problem finding >>Too. I mean, it's, it's designed thinking from >>A different, When, when I talk to people about open source, I like to tell people that I do not think it's any different than starting a company. I actually think it's all the same problems finding pro product, market fit, hiring, like finding contributors and maintainers, like doing PR and marketing. Yeah. Getting team together, traction, getting, getting funding. I mean, you have to have money to do all these things. Yeah. So I think a lot of people think of open source as I, I don't know, you know, this fantastic collaborative effort and, and it is that, but there's a lot more to it. Yeah. And it is much more akin to starting a >>Company. Let's, let's just look at that for a second. Cause I think that's a good point. And I was having a conversation in the hallway two nights ago on this exact point. If the power dynamics of a startup in the open source, as you point out, is just different, it's community based. So there are things you just gotta be mindful of. It's not top down. >>Exactly. It's not like, >>Right. You know, go take that hill. It's really consensus based, but it is a startup. All those elements are in place. Absolutely. You need leadership, you gotta have debates, alignment, commit, You gotta commit to a vision. Yep. You gotta make adjustments. Build the trajectory. So based on that, I mean, do you see more end user traction? Cause I was, we were talking also about Intuit, they donated some of their tow code R goes out there. Yep. R go see the CDR goes a service. Where's the end user contributions to these days? Do you feel like it's good, still healthy? >>I, I mean, I, I'm, I'm biased. I would like to see more. I think backstage outta Spotify is absolutely fantastic. That's an area just in terms of developer portals and developer efficiency that I think has been very underserved. So seeing Backstage come outta Spotify where they've used it for years, and I think we've already seen they had a huge date, you know, day one event. 
And I, I think we're gonna see a lot more out of that >>Coming from, I'm an end user, pretend I'm an end user, so pretend I have some code. I want to, Oh man, I'm scared. I don't am I'm gonna lose my competitive edge. What's the, how do you talk to the enterprise out there that might be thinking about putting their project out there for whether it's the benefit of the community, developing talent, developing the product? >>Sure. Yeah. I would say that I, I would ask everyone to think through all of the pros and cons of doing that because it's not for free. I mean, doing open source is costly. It takes developer time, you know, it takes management time, it takes budgeting dollars. But the benefits if successful can be huge, right? I mean, it can be just in terms of, you know, getting people into your company, getting users, getting more features, all of that. So I would always encourage everyone to take a very pragmatic and realistic view of, of what is required to make that happen. >>What was that decision like at Lyft >>When you I I'm gonna be honest, it was very naive. I I think we've, of that we think we need to know. No, just didn't know. Yeah. I think a lot of us, myself included, had very minimal open source experience. And had we known, or had I known what would've happened, I, I still would've done it. But I, I'm gonna be honest, the last seven years have aged me what I feel like is like 70 or a hundred. It's been a >>But you say you look out in the landscape, you gotta take pride, look at what's happened. Oh, it's, I mean, it's like you said, it >>Matured fantastic. I would not trade it for anything, but it has, it has been a journey. What >>Was the biggest surprise? What was the most eye opening thing about the journey for you? >>I, I think actually just the recognition of all of the non-technical things that go into making these things a success. I think at a conference like this, people think a lot about technology. It is a technology conference, but open source is business. It really is. I mean, it, it takes money to keep it going. It takes people to keep >>It going. You gotta sell people on the concepts. >>It takes leadership to keep it going. It takes internal, it takes marketing. Yeah. So for me, what was most eyeopening is over the last five to seven years, I feel like I actually have not developed very many, if any technical skills. But my general leadership skills, you know, that would be applicable again, to running a business have applied so well to, to >>Growing off, Hey, you put it out there, you hear driving the ship. It's good to do that. They need that. It really needs it. And the results speak for itself and congratulations. Yeah. Thank you. What's the update on the project? Give us an update because you're seeing, seeing a lot of infrastructure people having the same problem. Sure. But it's also, the environments are a little bit different. Some people have different architectures. Absolutely different, more cloud, less cloud edges exploding. Yeah. Where does Envoy fit into the landscape they've seen and what's the updates? You've got some new things going on. Give the updates on what's going on with the project Sure. And then how it sits in the ecosystem vis-a-vis what people may use it for. >>Yeah. So I'm, from a core project perspective, honestly, things have matured. Things have stabilized a bit. So a lot of what we focus on now are less Big bang features, but more table stakes. We spend a lot of time on security. 
We spend a lot of time on software supply chain, a topic that you're probably hearing a lot about at this conference. We have a lot of software supply chain issues. We have shipped QUIC and HTTP/3 over the last year. That's generally available. That's a new internet protocol. There's still work happening on WebAssembly, and we're doing a lot of work on our build and release pipeline. Again, you would think that's boring. Yeah. But a lot of people want, you know, packages for their Fedora or their Ubuntu or their Docker images. And that takes a lot of effort. So a lot of what we're doing now is more table stakes, just realizing that the project is used around the world very widely. >> Yeah. The thing that I'm most interested in is, we announced in the last six months a project called Envoy Gateway, which is layered on top of Envoy. And the goal of Envoy Gateway is to make it easier for people to run Envoy within Kubernetes, so essentially as an, as an ingress controller. And Envoy as a project, historically, it is a very sophisticated piece of software, very complicated piece of software. It's not for everyone. And we want to provide Envoy Gateway as a way of onboarding more users into the Envoy ecosystem and making Envoy the, the default API gateway or edge proxy within Kubernetes. But in terms of use cases, we see Envoy pervasively with service mesh, API gateway, other types of load balancing cases. I mean, honestly, it's, it's all over the place at >> This point. I'm curious because you mentioned it's expanded beyond your wildest dreams. Yeah. And how could you have even imagined what Envoy was gonna do? Is there a use case or an application that really surprised you? >> You know, I've been asked that before and I, it's hard for me to answer that. It's, it's more that, I mean, for example, Envoy is used by basically every major internet company in China. I mean, like, wow. Everyone in China uses Envoy, like TikTok, like Alibaba. I mean like everyone, all- >> The large scale. >> Everyone. You know, and it's used, it's used in the, I'm just, it's not just even the US. So I, I think the thing that has surprised me more than individual use cases is just the, the worldwide adoption. You know, that something could be everywhere. And that I think, you know, when I open my phone and I'm opening all of these apps on my phone, 80 or 90% of them are going through Envoy in some form. Yeah. You know, it's, it's just that pervasive; it blows your mind a little bit sometimes. >> That does, that's why you say plumber on your Twitter handle as your title. Cause you're working on all these things that are like really important substrate issues, right? For scale, stability, growth. >> And, you know, I, I guess the only thing that I would add is, my goal for Envoy has always been that it is that boring, transparent piece of technology. Kind of similar to Linux. Linux is everywhere, right? But no one really knows that they're using Linux. It's, it's just like Intel Inside, we're not paying attention. It's just there, there's >> A core group working on it; they have pride, they understand the mission, the importance of it, and their job is to make it invisible. >> Right. Exactly. >> And that's really ease of use. What are some of the ease of use things and, and simplicity that you're working on, if you can talk about that? Because to be boring, you gotta be simpler and easier. Complex is not boring. Complex is stressful. No,
One of them is that because we view Envoy as a, as a base technology in the ecosystem, we're starting to see, you know, not only vendors, but other open source projects that are being built on top of Envoy. So things like API Gateway, sorry, Envoy Gateway or you know, projects like Istio or all the other projects that are out there. They use Envoy as a component, but in some sense Envoy is a, as a transparent piece of that system. Yeah. So I'm a big believer in the ecosystem that we need to continue to make cloud native easier for, for end users. I still think it's too complicated. And so I think we're there, we're, we're pushing up the stack a bit. >>Yeah. And that brings up a good point. When you start seeing people building on top of things, right? That's enabling. So as you look at the enablement of Envoy, what are some of the things you see out on the horizon if you got the 20 mile stare out as you check these boring boxes, make it more plumbing, Right? Stable. You'll have a disruptive enabling platform. Yeah. What do you see out there? >>I am, you know, I, again, I'm not a big buzzword person, but, so some people call it serverless functions as a service, whatever. I'm a big believer in platforms in the sense that I really believe in the next 10 to 15 years, developers, they want to provide code. You know, they want to call APIs, they want to use pub subsystems, they want to use cas and databases. And honestly, they don't care about container scheduling or networking or load balancing or any of >>These things. It's handled in the os >>They just want it to be part of the operating system. Yeah, exactly. So I, I really believe that whether it's an open source or in cloud provider, you know, package solutions, that we're going to be just moving increasingly towards systems likes Lambda and Fargate and Google Cloud Run and Azure functions and all those kinds of things. And I think that when you do that much of the functionality that has historically powered this conference like Kubernetes and Onvoy, these become critical but transparent components that people don't, they're not really aware of >>At that point. Yeah. And I think that's a great call out because one of the things we're seeing is the market forces of, of this evolution, what you just said is what has to happen Yep. For digital transformation to, to get to its conclusion. Yep. Which means that everything doesn't have to serve the business, it is the business. Right. You know it in the old days. Yep. Engineers, they serve the business. Like what does that even mean? Yep. Now, right. Developers are the business, so they need that coding environment. So for your statement to happen, that simplicity in visibility calling is invisible os has to happen. So it brings up the question in open source, the trend is things always work itself out on the wash, as we say. So when you start having these debates and the alignment has to come at some point, you can't get to those that stay without some sort of defacto or consensus. Yep. And even standards, I'm not a big be around hardcore standards, but we can all agree and have consensus Sure. That will align behind, say Kubernetes, It's Kubernetes a standard. It's not like an i e you know, but this next, what, what's your reaction to this? Because this alignment has to come after debate. So all the process contending for I am the this of that. >>Yeah. I'm a look, I mean, I totally see the value in like i e e standards and, and there's a place for that. 
At the same time, for me personally as a technologist, as an engineer, I prefer to let the, the market, as it were, sort out what are the de facto standards. So for example, at least with Envoy, Envoy has an API that we call xDS. xDS is now used beyond Envoy. It's used by gRPC, it's used by proprietary systems. And I'm a big believer that actually Envoy in its current form is probably gonna go away before xDS goes away. So in some ways xDS has become a de facto standard. It's not an IEEE standard. Yeah. We, we, we have been asked about whether we should do that. Yeah. But I just, I I think the- >> It becomes a component. >> It becomes a component. Yeah. And then I think people gravitate towards these things that become de facto standards. And I guess I would rather let the people on the show floor decide what are the standards than have, you know, 10 people sitting in a room figure it out. >> The community defines standards, versus organizational, institutional defined standards. >> And they both have places- >> A hundred percent. Yeah, sure. And, and there's social proof in both of them. Yep. >> Frankly- >> And we were saying on the cube that we believe that the developers will decide the standard. Sure. Because that's what you're basically saying. They're deciding what they do with their code. Right. And over time, people realize the trade-off: hey, if everyone's coding this, right, it makes my life easier to get to that state of nirvana and enlightenment, as we would say. Yeah. Yeah. >> Starting strong this morning. John, I I love this. I'm curious, you mentioned Backstage by Spotify, wonderful example. Do you think that this is a trend we're gonna see with more end users
I'm building something new right now and you know, we're using cloud native technologies and all this stuff and it's still, >>What are you building? >>Even as a I'm, I'm gonna keep that, I'm gonna keep that secret. I know I'm, but >>We'll find out on Twitter. We're gonna find out now that we know it. Okay. Keep on mystery. You open that door. We're going down see in a couple weeks. >>Front >>Page is still an angle. >>But I, I was just gonna say that, you know, and I consider myself, you know, you're building something, I'm, I see myself an expert in the cloud native space. It's still difficult, It's difficult to, to pull together these technologies and I think that we will continue to make it easier for people. >>What's the biggest difficulties? Can you give us some examples? >>Well, just, I mean, we still live in a big mess of yammel, right? Is a, there's a, there's a lot of yaml out there. And I think just wrangling all of that in these systems, there's still a lot of cobbling together where I think that there can be unified platforms that make it easier for us to focus on our application logic. >>Yeah. I gotta ask you a question cuz I've talked to college kids all the time. My son's a junior in CS and he's, you know, he's coding away. What would you, how does a student or someone who's learning figure out where, who they are? Because there's now, you know, you're either into the infrastructure under the hood Yeah. Or you're, cuz that's coding there option now coding the way your infrastructure people are working on say the boring stuff so everyone else can have ease of use. And then what is just, I wanna just code, there's two types of personas. How does someone know who they are? >>My, when I give people career advice, my biggest piece of advice to them is in the first five to seven to 10 years of their career, I encourage people to do different things like every say one to two to three years. And that doesn't mean like quitting companies and changing companies, it could mean, you know, within a company that they join doing different teams, you know, working on front end versus back end. Because honestly I think people don't know. I think it's actually very, Yeah. Our industry is so broad. Yeah. That I think it's almost impossible to >>Know. You gotta get your hands dirty to jump >>In order to know what you like. And for me, in my career, you know, I've dabbled in different areas, but I've always come back to infrastructure, you know, that that's what I enjoy >>The most. Okay. You gotta, you gotta taste everything. See what you, what >>You like. Exactly. >>Right. Last question for you, Matt. It's been three years since you were here. Yep. What do you hope that we're able to say next year? That we can't say this year? Hmm. Beyond the secrets of your project, which hopefully we will definitely be discussing then. >>You know, I I, I don't have anything in particular. I would just say that I would like to see more movement towards projects that are synthesizing and making it easier to use a lot of the existing projects that we have today. So for example, I'm, I'm very bullish on backstage. Like I, I've, I've always said that we need better developer UIs that are not CLIs. Like I know it's a general perception among many people. Totally agree with you. Frankly, you're not a real systems engineer unless you type on the command line. I, I think better user interfaces are better for humans. Yep. 
So just for a project like Backstage to be more integrated with the rest of the projects, whether that be Envoy or Kubernetes or Argo or Flagger. I, I just, I think there's tremendous potential for further integration of some- >> Of these projects. It's just composability. That makes total sense. Yep. Yep. You're, you're operating and composing. >> Yep. And there's no reason that user experience can't be better. And then more people can create and build. So I think it's awesome. Matt, thank you so much. Thank you. Yeah, this has been fantastic. Be sure and check out Matt on Twitter to find out what that next secret project is. John, thank you for joining me this morning. My name is Savannah Peterson and we'll be here all day live from the cube. We hope you'll be joining us throughout the evening until happy hour today. Thanks for coming. Thanks for coming. Thanks for watching.
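To make the Envoy Gateway discussion above a little more concrete: Envoy Gateway is built around the Kubernetes Gateway API, so exposing a service through an Envoy-based edge proxy comes down to creating Gateway and HTTPRoute objects and letting the controller generate the Envoy configuration. The sketch below is illustrative only, not something from the interview; it assumes the Gateway API CRDs are installed, that Envoy Gateway reconciles a GatewayClass named "eg" (an assumed name), and that a Service called "backend" exists on port 8080.

```python
# Illustrative sketch: exposing a Service through an Envoy-based gateway by
# creating Kubernetes Gateway API objects with the official Python client.
# Assumptions: Gateway API CRDs installed, GatewayClass "eg" handled by
# Envoy Gateway, Service "backend" listening on port 8080 in "default".
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

GROUP, VERSION, NS = "gateway.networking.k8s.io", "v1beta1", "default"

gateway = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "Gateway",
    "metadata": {"name": "web-gateway", "namespace": NS},
    "spec": {
        "gatewayClassName": "eg",  # assumed Envoy Gateway class name
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}

route = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "HTTPRoute",
    "metadata": {"name": "backend-route", "namespace": NS},
    "spec": {
        "parentRefs": [{"name": "web-gateway"}],
        "rules": [{"backendRefs": [{"name": "backend", "port": 8080}]}],
    },
}

# Create the Gateway first, then attach the route to it.
api.create_namespaced_custom_object(GROUP, VERSION, NS, "gateways", gateway)
api.create_namespaced_custom_object(GROUP, VERSION, NS, "httproutes", route)
```

The point of the project, as described in the interview, is that the sophisticated Envoy configuration behind these two small objects is generated and managed for you.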

Published Date : Oct 27 2022



KubeCon + CloudNativeCon 2022 Preview w/ @Stu


 

>>Keon Cloud Native Con kicks off in Detroit on October 24th, and we're pleased to have Stewart Miniman, who's the director of Market Insights, hi, at, for hybrid platforms at Red Hat back in the studio to help us understand the key trends to look for at the events. Do welcome back, like old, old, old >>Home. Thank you, David. It's great to, great to see you and always love doing these previews, even though Dave, come on. How many years have I told you Cloud native con, It's a hoodie crowd. They're gonna totally call you out for where in a tie and things like that. I, I know you want to be an ESPN sportscaster, but you know, I I, I, I still don't think even after, you know, this show's been around for so many years that there's gonna be too many ties into Troy. I >>Know I left the hoodie in my off, I'm sorry folks, but hey, we'll just have to go for it. Okay. Containers generally, and Kubernetes specifically continue to show very strong spending momentum in the ETR survey data. So let's bring up this slide that shows the ETR sectors, all the sectors in the tax taxonomy with net score or spending velocity in the vertical axis and pervasiveness on the horizontal axis. Now, that red dotted line that you see, that marks the elevated 40% mark, anything above that is considered highly elevated in terms of momentum. Now, for years, the big four areas of momentum that shine above all the rest have been cloud containers, rpa, and ML slash ai for the first time in 10 quarters, ML and AI and RPA have dropped below the 40% line, leaving only cloud and containers in rarefied air. Now, Stu, I'm sure this data doesn't surprise you, but what do you make of this? >>Yeah, well, well, Dave, I, I did an interview with at Deepak who owns all the container and open source activity at Amazon earlier this year, and his comment was, the default deployment mechanism in Amazon is containers. So when I look at your data and I see containers and cloud going in sync, yeah, that, that's, that's how we see things. We're helping lots of customers in their overall adoption. And this cloud native ecosystem is still, you know, we're still in that Cambridge explosion of new projects, new opportunities, AI's a great workload for these type type of technologies. So it's really becoming pervasive in the marketplace. >>And, and I feel like the cloud and containers go hand in hand, so it's not surprising to see those two above >>The 40%. You know, there, there's nothing to say that, Look, can I run my containers in my data center and not do the public cloud? Sure. But in the public cloud, the default is the container. And one of the hot discussions we've been having in this ecosystem for a number of years is edge computing. And of course, you know, I want something that that's small and lightweight and can do things really fast. A lot of times it's an AI workload out there, and containers is a great fit at the edge too. So wherever it goes, containers is a good fit, which has been keeping my group at Red Hat pretty busy. >>So let's talk about some of those high level stats that we put together and preview for the event. So it's really around the adoption of open source software and Kubernetes. Here's, you know, a few fun facts. So according to the state of enterprise open source report, which was published by Red Hat, although it was based on a blind survey, nobody knew that that Red Hat was, you know, initiating it. 80% of IT execs expect to increase their use of enterprise open source software. 
Now, the CNCF community has currently more than 120,000 developers. That's insane when you think about that developer resource. 73% of organizations in the most recent CNCF annual survey are using Kubernetes. Now, despite the momentum, according to that same Red Hat survey, adoption barriers remain for some organizations. Stu, I'd love you to talk about this specifically around skill sets, and then we've highlighted some of the other trends that we expect to see at the event around Stu. I'd love to, again, your, get your thoughts on the preview. You've done a number of these events, automation, security, governance, governance at scale, edge deployments, which you just mentioned among others. Now Kubernetes is eight years old, and I always hear people talking about there's something coming beyond Kubernetes, but it looks like we're just getting started. Yeah, >>Dave, It, it is still relatively early days. The CMC F survey, I think said, you know, 96% of companies when they, when CMC F surveyed them last year, were either deploying Kubernetes or had plans to deploy it. But when I talked to enterprises, nobody has said like, Hey, we've got every group on board and all of our applications are on. It is a multi-year journey for most companies and plenty of them. If you, you look at the general adoption of technology, we're still working through kind of that early majority. We, you know, passed the, the chasm a couple of years ago. But to a point, you and I we're talking about this ecosystem, there are plenty of people in this ecosystem that could care less about containers and Kubernetes. Lots of conversations at this show won't even talk about Kubernetes. You've got, you know, big security group that's in there. >>You've got, you know, certain workloads like we talked about, you know, AI and ml and that are in there. And automation absolutely is playing a, a good role in what's going on here. So in some ways, Kubernetes kind of takes a, a backseat because it is table stakes at this point. So lots of people involved in it, lots of activities still going on. I mean, we're still at a cadence of three times a year now. We slowed it down from four times a year as an industry, but there's, there's still lots of innovation happening, lots of adoption, and oh my gosh, Dave, I mean, there's just no shortage of new projects and new people getting involved. And what's phenomenal about it is there's, you know, end user practitioners that aren't just contributing. But many of the projects were spawned out of work by the likes of Intuit and Spotify and, and many others that created some of the projects that sit alongside or above the, the, you know, the container orchestration itself. >>So before we talked about some of that, it's, it's kind of interesting. It's like Kubernetes is the big dog, right? And it's, it's kind of maturing after, you know, eight years, but it's still important. I wanna share another data point that underscores the traction that containers generally are getting in Kubernetes specifically have, So this is data from the latest ETR survey and shows the spending breakdown for Kubernetes in the ETR data set for it's cut for respondents with 50 or more citations in, in by the IT practitioners that lime green is new adoptions, the forest green is spending 6% or more relative to last year. The gray is flat spending year on year, and those little pink bars, that's 6% or down spending, and the bright red is retirements. So they're leaving the platform. 
And the blue dots are net score, which is derived by subtracting the reds from the greens. And the yellow dots are pervasiveness in the survey relative to the sector. So the big takeaway here is that there is virtually no red, essentially zero churn across all sectors, large companies, public companies, private firms, telcos, finance, insurance, et cetera. So again, sometimes I hear this things beyond Kubernetes, you've mentioned several, but it feels like Kubernetes is still a driving force, but a lot of other projects around Kubernetes, which we're gonna hear about at the show. >>Yeah. So, so, so Dave, right? First of all, there was for a number of years, like, oh wait, you know, don't waste your time on, on containers because serverless is gonna rule the world. Well, serverless is now a little bit of a broader term. Can I do a serverless viewpoint for my developers that they don't need to think about the infrastructure but still have containers underneath it? Absolutely. So our friends at Amazon have a solution called Fargate, their proprietary offering to kind of hide that piece of it. And in the open source world, there's a project called Can Native, I think it's the second or third can Native Con's gonna happen at the cncf. And even if you use this, I can still call things over on Lambda and use some of those functions. So we know Dave, it is additive and nothing ever dominates the entire world and nothing ever dies. >>So we have, we have a long runway of activities still to go on in containers and Kubernetes. We're always looking for what that next thing is. And what's great about this ecosystem is most of it tends to be additive and plug into the pieces there, there's certain tools that, you know, span beyond what can happen in the container world and aren't limited to it. And there's others that are specific for it. And to talk about the industries, Dave, you know, I love, we we have, we have a community event that we run that's gonna happen at Cubans called OpenShift Commons. And when you look at like, who's speaking there? Oh, we've got, you know, for Lockheed Martin, University of Michigan and I g Bank all speaking there. So you look and it's like, okay, cool, I've got automotive, I've got, you know, public sector, I've got, you know, university education and I've got finance. So all of you know, there is not an industry that is not touched by this. And the general wave of software adoption is the reason why, you know, not just adoption, but the creation of new software is one of the differentiators for companies. And that is what, that's the reason why I do containers, isn't because it's some cool technology and Kubernetes is great to put on my resume, but that it can actually accelerate my developers and help me create technology that makes me respond to my business and my ultimate end users. Well, >>And you know, as you know, we've been talking about the Supercloud a lot and the Kubernetes is clearly enabler to, to Supercloud, but I wanted to go back, you and John Furrier have done so many of, you know, the, the cube cons, but but go back to Docker con before Kubernetes was even a thing. And so you sort of saw this, you know, grow. I think there's what, how many projects are in CNCF now? I mean, hundreds. Hundreds, okay. And so you're, Will we hear things in Detroit, things like, you know, new projects like, you know, Argo and capabilities around SI store and things like that? Well, you're gonna hear a lot about that. Or is it just too much to cover? 
>>So I, I mean the, the good news, Dave, is that the CNCF really is, is a good steward for this community and new things got in get in. So there's so much going on with the existing projects that some of the new ones sometimes have a little bit of a harder time making a little bit of buzz. One of the more interesting ones is a project that's been around for a while that I think back to the first couple of Cube Cuban that John and I did service Mesh and Istio, which was created by Google, but lived under basically a, I guess you would say a Google dominated governance for a number of years is now finally under the CNCF Foundation. So I talked to a number of companies over the years and definitely many of the contributors over the years that didn't love that it was a Google Run thing, and now it is finally part. >>So just like Kubernetes is, we have SEO and also can Native that I mentioned before also came outta Google and those are all in the cncf. So will there be new projects? Yes. The CNCF is sometimes they, they do matchmaking. So in some of the observability space, there were a couple of projects that they said, Hey, maybe you can go merge down the road. And they ended up doing that. So there's still you, you look at all these projects and if I was an end user saying, Oh my God, there is so much change and so many projects, you know, I can't spend the time in the effort to learn about all of these. And that's one of the challenges and something obviously at Red Hat, we spend a lot of time figuring out, you know, not to make winners, but which are the things that customers need, Where can we help make them run in production for our, our customers and, and help bring some stability and a little bit of security for the overall ecosystem. >>Well, speaking of security, security and, and skill sets, we've talked about those two things and they sort of go hand in hand when I go to security events. I mean, we're at reinforced last summer, we were just recently at the CrowdStrike event. A lot of the discussion is sort of best practice because it's so complicated. And, and, and will you, I presume you're gonna hear a lot of that here because security securing containers now, you know, the whole shift left thing and shield right is, is a complicated matter, especially when you saw with the earlier data from the Red Hat survey, the the gaps are around skill sets. People don't have the skill. So should we expect to hear a lot about that, A lot of sort of how to, how to take advantage of some of these new capabilities? >>Yeah, Dave, absolutely. So, you know, one of the conversations going on in the community right now is, you know, has DevOps maybe played out as we expect to see it? There's a newer term called platform engineering, and how much do I need to do there? Something that I, I know your, your team's written a lot about Dave, is how much do you need to know versus what can you shift to just a platform or a service that I can consume? I've talked a number of times with you since I've been at Red Hat about the cloud services that we offer. So you want to use our offering in the public cloud. Our first recommendation is, hey, we've got cloud services, how much Kubernetes do you really want to learn versus you want to do what you can build on top of it, modernize the pieces and have less running the plumbing and electric and more, you know, taking advantage of the, the technologies there. 
So that's a big thing we've seen, you know, we've got a big SRE team that can manage that for use so that you have to spend less time worrying about what really is un differentiated heavy lifting and spend more time on what's important to your business and your >>Customers. So, and that's, and that's through a managed service. >>Yeah, absolutely. >>That whole space is just taken off. All right, Stu I'll give you the final word. You know, what are you excited about for, for, for this upcoming event and Detroit? Interesting choice of venue? Yeah, >>Look, first of off, easy flight. I've, I've never been to Detroit, so I'm, I'm willing to give it a shot and hopefully, you know, that awesome airport. There's some, some, some good things there to learn. The show itself is really a choose your own adventure because there's so much going on. The main show of QAN and cloud Native Con is Wednesday through Friday, but a lot of a really interesting stuff happens on Monday and Tuesday. So we talked about things like OpenShift Commons in the security space. There's cloud Native Security Day, which is actually two days and a SIG store event. There, there's a get up show, there's, you know, k native day. There's so many things that if you want to go deep on a topic, you can go spend like a workshop in some of those you can get hands on to. And then at the show itself, there's so much, and again, you can learn from your peers. >>So it was good to see we had, during the pandemic, it tilted a little bit more vendor heavy because I think most practitioners were pretty busy focused on what they could work on and less, okay, hey, I'm gonna put together a presentation and maybe I'm restricted at going to a show. Yeah, not, we definitely saw that last year when I went to LA I was disappointed how few customer sessions there were. It, it's back when I go look through the schedule now there's way more end users sharing their stories and it, it's phenomenal to see that. And the hallway track, Dave, I didn't go to Valencia, but I hear it was really hopping felt way more like it was pre pandemic. And while there's a few people that probably won't come because Detroit, we think there's, what we've heard and what I've heard from the CNCF team is they are expecting a sizable group up there. I know a lot of the hotels right near the, where it's being held are all sold out. So it should be, should be a lot of fun. Good thing I'm speaking on an edge panel. First time I get to be a speaker at the show, Dave, it's kind of interesting to be a little bit of a different role at the show. >>So yeah, Detroit's super convenient, as I said. Awesome. Airports too. Good luck at the show. So it's a full week. The cube will be there for three days, Tuesday, Wednesday, Thursday. Thanks for coming. >>Wednesday, Thursday, Friday, sorry, >>Wednesday, Thursday, Friday is the cube, right? So thank you for that. >>And, and no ties from the host, >>No ties, only hoodies. All right Stu, thanks. Appreciate you coming in. Awesome. And thank you for watching this preview of CubeCon plus cloud Native Con with at Stu, which again starts the 24th of October, three days of broadcasting. Go to the cube.net and you can see all the action. We'll see you there.

Published Date : Oct 4 2022



Breaking Analysis: We Have the Data…What Private Tech Companies Don’t Tell you About Their Business


 

>> From The Cube Studios in Palo Alto and Boston, bringing you data driven insights from The Cube at ETR. This is "Breaking Analysis" with Dave Vellante. >> The reverse momentum in tech stocks caused by rising interest rates, less attractive discounted cash flow models, and more tepid forward guidance, can be easily measured by public market valuations. And while there's lots of discussion about the impact on private companies and cash runway and 409A valuations, measuring the performance of non-public companies isn't as easy. IPOs have dried up and public statements by private companies, of course, they accentuate the good and they kind of hide the bad. Real data, unless you're an insider, is hard to find. Hello and welcome to this week's "Wikibon Cube Insights" powered by ETR. In this "Breaking Analysis", we unlock some of the secrets that non-public, emerging tech companies may or may not be sharing. And we do this by introducing you to a capability from ETR that we've not exposed you to over the past couple of years, it's called the Emerging Technologies Survey, and it is packed with sentiment data and performance data based on surveys of more than a thousand CIOs and IT buyers covering more than 400 companies. And we've invited back our colleague, Erik Bradley of ETR to help explain the survey and the data that we're going to cover today. Erik, this survey is something that I've not personally spent much time on, but I'm blown away at the data. It's really unique and detailed. First of all, welcome. Good to see you again. >> Great to see you too, Dave, and I'm really happy to be talking about the ETS or the Emerging Technology Survey. Even our own clients of constituents probably don't spend as much time in here as they should. >> Yeah, because there's so much in the mainstream, but let's pull up a slide to bring out the survey composition. Tell us about the study. How often do you run it? What's the background and the methodology? >> Yeah, you were just spot on the way you were talking about the private tech companies out there. So what we did is we decided to take all the vendors that we track that are not yet public and move 'em over to the ETS. And there isn't a lot of information out there. If you're not in Silicon (indistinct), you're not going to get this stuff. So PitchBook and Tech Crunch are two out there that gives some data on these guys. But what we really wanted to do was go out to our community. We have 6,000, ITDMs in our community. We wanted to ask them, "Are you aware of these companies? And if so, are you allocating any resources to them? Are you planning to evaluate them," and really just kind of figure out what we can do. So this particular survey, as you can see, 1000 plus responses, over 450 vendors that we track. And essentially what we're trying to do here is talk about your evaluation and awareness of these companies and also your utilization. And also if you're not utilizing 'em, then we can also figure out your sales conversion or churn. So this is interesting, not only for the ITDMs themselves to figure out what their peers are evaluating and what they should put in POCs against the big guys when contracts come up. But it's also really interesting for the tech vendors themselves to see how they're performing. >> And you can see 2/3 of the respondents are director level of above. You got 28% is C-suite. There is of course a North America bias, 70, 75% is North America. But these smaller companies, you know, that's when they start doing business. So, okay. 
We're going to do a couple of things here today. First, we're going to give you the big picture across the sectors that ETR covers within the ETS survey. And then we're going to look at the high and low sentiment for the larger private companies. And then we're going to do the same for the smaller private companies, the ones that don't have as much mindshare. And then I'm going to put those two groups together and we're going to look at two dimensions, actually three dimensions, which companies are being evaluated the most. Second, companies are getting the most usage and adoption of their offerings. And then third, which companies are seeing the highest churn rates, which of course is a silent killer of companies. And then finally, we're going to look at the sentiment and mindshare for two key areas that we like to cover often here on "Breaking Analysis", security and data. And data comprises database, including data warehousing, and then big data analytics is the second part of data. And then machine learning and AI is the third section within data that we're going to look at. Now, one other thing before we get into it, ETR very often will include open source offerings in the mix, even though they're not companies like TensorFlow or Kubernetes, for example. And we'll call that out during this discussion. The reason this is done is for context, because everyone is using open source. It is the heart of innovation and many business models are super glued to an open source offering, like take MariaDB, for example. There's the foundation and then there's with the open source code and then there, of course, the company that sells services around the offering. Okay, so let's first look at the highest and lowest sentiment among these private firms, the ones that have the highest mindshare. So they're naturally going to be somewhat larger. And we do this on two dimensions, sentiment on the vertical axis and mindshare on the horizontal axis and note the open source tool, see Kubernetes, Postgres, Kafka, TensorFlow, Jenkins, Grafana, et cetera. So Erik, please explain what we're looking at here, how it's derived and what the data tells us. >> Certainly, so there is a lot here, so we're going to break it down first of all by explaining just what mindshare and net sentiment is. You explain the axis. We have so many evaluation metrics, but we need to aggregate them into one so that way we can rank against each other. Net sentiment is really the aggregation of all the positive and subtracting out the negative. So the net sentiment is a very quick way of looking at where these companies stand versus their peers in their sectors and sub sectors. Mindshare is basically the awareness of them, which is good for very early stage companies. And you'll see some names on here that are obviously been around for a very long time. And they're clearly be the bigger on the axis on the outside. Kubernetes, for instance, as you mentioned, is open source. This de facto standard for all container orchestration, and it should be that far up into the right, because that's what everyone's using. In fact, the open source leaders are so prevalent in the emerging technology survey that we break them out later in our analysis, 'cause it's really not fair to include them and compare them to the actual companies that are providing the support and the security around that open source technology. But no survey, no analysis, no research would be complete without including these open source tech. 
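To make the two axes concrete before digging into the charts: net sentiment aggregates the positive responses and subtracts out the negative ones, and mindshare measures how many respondents are even aware of a vendor. The TypeScript sketch below only illustrates that arithmetic; ETR's actual response schema, answer options, and weighting are not public, so the field names and the equal weighting used here are assumptions.

```typescript
// Hypothetical response shape; ETR's real schema and answer options are not public,
// so the field names and the "positive minus negative" weighting here are assumptions.
type Verdict = "adopting" | "evaluating" | "flat" | "replacing";

interface SurveyResponse {
  vendor: string;
  aware: boolean;      // did the respondent recognize the vendor at all?
  verdict?: Verdict;   // only present when the respondent is aware
}

// Net sentiment: aggregate the positives and subtract out the negatives.
function netSentiment(responses: SurveyResponse[], vendor: string): number {
  const cited = responses.filter(r => r.vendor === vendor && r.aware && r.verdict !== undefined);
  if (cited.length === 0) return 0;
  const positive = cited.filter(r => r.verdict === "adopting" || r.verdict === "evaluating").length;
  const negative = cited.filter(r => r.verdict === "replacing").length;
  return (positive - negative) / cited.length;
}

// Mindshare: the share of respondents who are aware of the vendor at all.
function mindshare(responses: SurveyResponse[], vendor: string): number {
  const asked = responses.filter(r => r.vendor === vendor);
  return asked.length === 0 ? 0 : asked.filter(r => r.aware).length / asked.length;
}
```

Ranking vendors on these two numbers is what places a dot on the horizontal (mindshare) and vertical (net sentiment) axes discussed throughout this episode.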
So what we're looking at here, if I can just get away from the open source names, we see other things like Databricks and OneTrust . They're repeating as top net sentiment performers here. And then also the design vendors. People don't spend a lot of time on 'em, but Miro and Figma. This is their third survey in a row where they're just dominating that sentiment overall. And Adobe should probably take note of that because they're really coming after them. But Databricks, we all know probably would've been a public company by now if the market hadn't turned, but you can see just how dominant they are in a survey of nothing but private companies. And we'll see that again when we talk about the database later. >> And I'll just add, so you see automation anywhere on there, the big UiPath competitor company that was not able to get to the public markets. They've been trying. Snyk, Peter McKay's company, they've raised a bunch of money, big security player. They're doing some really interesting things in developer security, helping developers secure the data flow, H2O.ai, Dataiku AI company. We saw them at the Snowflake Summit. Redis Labs, Netskope and security. So a lot of names that we know that ultimately we think are probably going to be hitting the public market. Okay, here's the same view for private companies with less mindshare, Erik. Take us through this one. >> On the previous slide too real quickly, I wanted to pull that security scorecard and we'll get back into it. But this is a newcomer, that I couldn't believe how strong their data was, but we'll bring that up in a second. Now, when we go to the ones of lower mindshare, it's interesting to talk about open source, right? Kubernetes was all the way on the top right. Everyone uses containers. Here we see Istio up there. Not everyone is using service mesh as much. And that's why Istio is in the smaller breakout. But still when you talk about net sentiment, it's about the leader, it's the highest one there is. So really interesting to point out. Then we see other names like Collibra in the data side really performing well. And again, as always security, very well represented here. We have Aqua, Wiz, Armis, which is a standout in this survey this time around. They do IoT security. I hadn't even heard of them until I started digging into the data here. And I couldn't believe how well they were doing. And then of course you have AnyScale, which is doing a second best in this and the best name in the survey Hugging Face, which is a machine learning AI tool. Also doing really well on a net sentiment, but they're not as far along on that access of mindshare just yet. So these are again, emerging companies that might not be as well represented in the enterprise as they will be in a couple of years. >> Hugging Face sounds like something you do with your two year old. Like you said, you see high performers, AnyScale do machine learning and you mentioned them. They came out of Berkeley. Collibra Governance, InfluxData is on there. InfluxDB's a time series database. And yeah, of course, Alex, if you bring that back up, you get a big group of red dots, right? That's the bad zone, I guess, which Sisense does vis, Yellowbrick Data is a NPP database. How should we interpret the red dots, Erik? I mean, is it necessarily a bad thing? Could it be misinterpreted? What's your take on that? >> Sure, well, let me just explain the definition of it first from a data science perspective, right? We're a data company first. 
So the gray dots that you're seeing that aren't named, that's the mean that's the average. So in order for you to be on this chart, you have to be at least one standard deviation above or below that average. So that gray is where we're saying, "Hey, this is where the lump of average comes in. This is where everyone normally stands." So you either have to be an outperformer or an underperformer to even show up in this analysis. So by definition, yes, the red dots are bad. You're at least one standard deviation below the average of your peers. It's not where you want to be. And if you're on the lower left, not only are you not performing well from a utilization or an actual usage rate, but people don't even know who you are. So that's a problem, obviously. And the VCs and the PEs out there that are backing these companies, they're the ones who mostly are interested in this data. >> Yeah. Oh, that's great explanation. Thank you for that. No, nice benchmarking there and yeah, you don't want to be in the red. All right, let's get into the next segment here. Here going to look at evaluation rates, adoption and the all important churn. First new evaluations. Let's bring up that slide. And Erik, take us through this. >> So essentially I just want to explain what evaluation means is that people will cite that they either plan to evaluate the company or they're currently evaluating. So that means we're aware of 'em and we are choosing to do a POC of them. And then we'll see later how that turns into utilization, which is what a company wants to see, awareness, evaluation, and then actually utilizing them. That's sort of the life cycle for these emerging companies. So what we're seeing here, again, with very high evaluation rates. H2O, we mentioned. SecurityScorecard jumped up again. Chargebee, Snyk, Salt Security, Armis. A lot of security names are up here, Aqua, Netskope, which God has been around forever. I still can't believe it's in an Emerging Technology Survey But so many of these names fall in data and security again, which is why we decided to pick those out Dave. And on the lower side, Vena, Acton, those unfortunately took the dubious award of the lowest evaluations in our survey, but I prefer to focus on the positive. So SecurityScorecard, again, real standout in this one, they're in a security assessment space, basically. They'll come in and assess for you how your security hygiene is. And it's an area of a real interest right now amongst our ITDM community. >> Yeah, I mean, I think those, and then Arctic Wolf is up there too. They're doing managed services. You had mentioned Netskope. Yeah, okay. All right, let's look at now adoption. These are the companies whose offerings are being used the most and are above that standard deviation in the green. Take us through this, Erik. >> Sure, yet again, what we're looking at is, okay, we went from awareness, we went to evaluation. Now it's about utilization, which means a survey respondent's going to state "Yes, we evaluated and we plan to utilize it" or "It's already in our enterprise and we're actually allocating further resources to it." Not surprising, again, a lot of open source, the reason why, it's free. So it's really easy to grow your utilization on something that's free. But as you and I both know, as Red Hat proved, there's a lot of money to be made once the open source is adopted, right? You need the governance, you need the security, you need the support wrapped around it. 
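The charting rule Erik describes at the top of this exchange, where a vendor only gets a named green or red dot if it sits at least one standard deviation above or below the mean, can be sketched in a few lines. This is an illustration of the rule as stated, not ETR's actual pipeline; the scores fed in would be the per-vendor net sentiment values.

```typescript
// A sketch of the outlier rule: only vendors at least one standard deviation above or
// below the mean net sentiment get a colored dot; everything else stays in the gray
// "lump of average". The score values are placeholders for the per-vendor numbers.
interface VendorScore { vendor: string; netSentiment: number; }

function flagOutliers(scores: VendorScore[]): { green: string[]; red: string[] } {
  if (scores.length === 0) return { green: [], red: [] };
  const values = scores.map(s => s.netSentiment);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const sd = Math.sqrt(values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length);
  return {
    green: scores.filter(s => s.netSentiment >= mean + sd).map(s => s.vendor), // outperformers
    red:   scores.filter(s => s.netSentiment <= mean - sd).map(s => s.vendor), // underperformers
  };
}
```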
So here we're seeing Kubernetes, Postgres, Apache Kafka, Jenkins, Grafana. These are all open source based names. But if we're looking at names that are non open source, we're going to see Databricks, Automation Anywhere, Rubrik all have the highest mindshare. So these are the names, not surprisingly, all names that probably should have been public by now. Everyone's expecting an IPO imminently. These are the names that have the highest mindshare. If we talk about the highest utilization rates, again, Miro and Figma pop up, and I know they're not household names, but they are just dominant in this survey. These are applications that are meant for design software and, again, they're going after an Autodesk or a CAD or Adobe type of thing. It is just dominant how high the utilization rates are here, which again is something Adobe should be paying attention to. And then you'll see a little bit lower, but also interesting, we see Collibra again, we see Hugging Face again. And these are names that are obviously in the data governance, ML, AI side. So we're seeing a ton of data, a ton of security and Rubrik was interesting in this one, too, high utilization and high mindshare. We know how pervasive they are in the enterprise already. >> Erik, Alex, keep that up for a second, if you would. So yeah, you mentioned Rubrik. Cohesity's not on there. They're sort of the big one. We're going to talk about them in a moment. Puppet is interesting to me because you remember the early days of that sort of space, you had Puppet and Chef and then you had Ansible. Red Hat bought Ansible and then Ansible really took off. So it's interesting to see Puppet on there as well. Okay. So now let's look at the churn because this one is where you don't want to be. It's, of course, all red 'cause churn is bad. Take us through this, Erik. >> Yeah, definitely don't want to be here and I don't love to dwell on the negative. So we won't spend as much time. But to your point, there's one thing I want to point out that think it's important. So you see Rubrik in the same spot, but Rubrik has so many citations in our survey that it actually would make sense that they're both being high utilization and churn just because they're so well represented. They have such a high overall representation in our survey. And the reason I call that out is Cohesity. Cohesity has an extremely high churn rate here about 17% and unlike Rubrik, they were not on the utilization side. So Rubrik is seeing both, Cohesity is not. It's not being utilized, but it's seeing a high churn. So that's the way you can look at this data and say, "Hm." Same thing with Puppet. You noticed that it was on the other slide. It's also on this one. So basically what it means is a lot of people are giving Puppet a shot, but it's starting to churn, which means it's not as sticky as we would like. One that was surprising on here for me was Tanium. It's kind of jumbled in there. It's hard to see in the middle, but Tanium, I was very surprised to see as high of a churn because what I do hear from our end user community is that people that use it, like it. It really kind of spreads into not only vulnerability management, but also that endpoint detection and response side. So I was surprised by that one, mostly to see Tanium in here. Mural, again, was another one of those application design softwares that's seeing a very high churn as well. >> So you're saying if you're in both... Alex, bring that back up if you would. 
So if you're in both like MariaDB is for example, I think, yeah, they're in both. They're both green in the previous one and red here, that's not as bad. You mentioned Rubrik is going to be in both. Cohesity is a bit of a concern. Cohesity just brought on Sanjay Poonen. So this could be a go to market issue, right? I mean, 'cause Cohesity has got a great product and they got really happy customers. So they're just maybe having to figure out, okay, what's the right ideal customer profile and Sanjay Poonen, I guarantee, is going to have that company cranking. I mean they had been doing very well on the surveys and had fallen off of a bit. The other interesting things wondering the previous survey I saw Cvent, which is an event platform. My only reason I pay attention to that is 'cause we actually have an event platform. We don't sell it separately. We bundle it as part of our offerings. And you see Hopin on here. Hopin raised a billion dollars during the pandemic. And we were like, "Wow, that's going to blow up." And so you see Hopin on the churn and you didn't see 'em in the previous chart, but that's sort of interesting. Like you said, let's not kind of dwell on the negative, but you really don't. You know, churn is a real big concern. Okay, now we're going to drill down into two sectors, security and data. Where data comprises three areas, database and data warehousing, machine learning and AI and big data analytics. So first let's take a look at the security sector. Now this is interesting because not only is it a sector drill down, but also gives an indicator of how much money the firm has raised, which is the size of that bubble. And to tell us if a company is punching above its weight and efficiently using its venture capital. Erik, take us through this slide. Explain the dots, the size of the dots. Set this up please. >> Yeah. So again, the axis is still the same, net sentiment and mindshare, but what we've done this time is we've taken publicly available information on how much capital company is raised and that'll be the size of the circle you see around the name. And then whether it's green or red is basically saying relative to the amount of money they've raised, how are they doing in our data? So when you see a Netskope, which has been around forever, raised a lot of money, that's why you're going to see them more leading towards red, 'cause it's just been around forever and kind of would expect it. Versus a name like SecurityScorecard, which is only raised a little bit of money and it's actually performing just as well, if not better than a name, like a Netskope. OneTrust doing absolutely incredible right now. BeyondTrust. We've seen the issues with Okta, right. So those are two names that play in that space that obviously are probably getting some looks about what's going on right now. Wiz, we've all heard about right? So raised a ton of money. It's doing well on net sentiment, but the mindshare isn't as well as you'd want, which is why you're going to see a little bit of that red versus a name like Aqua, which is doing container and application security. And hasn't raised as much money, but is really neck and neck with a name like Wiz. So that is why on a relative basis, you'll see that more green. As we all know, information security is never going away. But as we'll get to later in the program, Dave, I'm not sure in this current market environment, if people are as willing to do POCs and switch away from their security provider, right. 
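One way to read the funding-sized bubbles is as a capital-efficiency comparison: is the vendor's standing in the survey ahead of or behind what its fundraising would suggest? ETR does not disclose how it normalizes this, so the percentile-rank comparison below is just one plausible, hedged way to express the "punching above its weight" idea.

```typescript
// "Punching above its weight": compare a vendor's percentile rank on net sentiment with
// its percentile rank on capital raised. A positive score leans green (outperforming its
// funding); a negative score leans red. ETR's real normalization is not disclosed, so
// treat this purely as an illustration.
interface FundedVendor { vendor: string; netSentiment: number; capitalRaisedUSD: number; }

function percentileRank(values: number[], v: number): number {
  return values.filter(x => x <= v).length / values.length;
}

function capitalEfficiency(vendors: FundedVendor[]): Map<string, number> {
  const sentiments = vendors.map(v => v.netSentiment);
  const raises = vendors.map(v => v.capitalRaisedUSD);
  const scores = new Map<string, number>();
  for (const v of vendors) {
    scores.set(v.vendor, percentileRank(sentiments, v.netSentiment) - percentileRank(raises, v.capitalRaisedUSD));
  }
  return scores;
}
```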
There's a little bit of tepidness out there, a little trepidation. So right now we're seeing overall a slight pause, a slight cooling in overall evaluations on the security side versus historical levels a year ago. >> Now let's stay on here for a second. So a couple things I want to point out. So it's interesting. Now Snyk has raised over, I think $800 million but you can see them, they're high on the vertical and the horizontal, but now compare that to Lacework. It's hard to see, but they're kind of buried in the middle there. That's the biggest dot in this whole thing. I think I'm interpreting this correctly. They've raised over a billion dollars. It's a Mike Speiser company. He was the founding investor in Snowflake. So people watch that very closely, but that's an example of where they're not punching above their weight. They recently had a layoff and they got to fine tune things, but I'm still confident they they're going to do well. 'Cause they're approaching security as a data problem, which is probably people having trouble getting their arms around that. And then again, I see Arctic Wolf. They're not red, they're not green, but they've raised fair amount of money, but it's showing up to the right and decent level there. And a couple of the other ones that you mentioned, Netskope. Yeah, they've raised a lot of money, but they're actually performing where you want. What you don't want is where Lacework is, right. They've got some work to do to really take advantage of the money that they raised last November and prior to that. >> Yeah, if you're seeing that more neutral color, like you're calling out with an Arctic Wolf, like that means relative to their peers, this is where they should be. It's when you're seeing that red on a Lacework where we all know, wow, you raised a ton of money and your mindshare isn't where it should be. Your net sentiment is not where it should be comparatively. And then you see these great standouts, like Salt Security and SecurityScorecard and Abnormal. You know they haven't raised that much money yet, but their net sentiment's higher and their mindshare's doing well. So those basically in a nutshell, if you're a PE or a VC and you see a small green circle, then you're doing well, then it means you made a good investment. >> Some of these guys, I don't know, but you see these small green circles. Those are the ones you want to start digging into and maybe help them catch a wave. Okay, let's get into the data discussion. And again, three areas, database slash data warehousing, big data analytics and ML AI. First, we're going to look at the database sector. So Alex, thank you for bringing that up. Alright, take us through this, Erik. Actually, let me just say Postgres SQL. I got to ask you about this. It shows some funding, but that actually could be a mix of EDB, the company that commercializes Postgres and Postgres the open source database, which is a transaction system and kind of an open source Oracle. You see MariaDB is a database, but open source database. But the companies they've raised over $200 million and they filed an S-4. So Erik looks like this might be a little bit of mashup of companies and open source products. Help us understand this. >> Yeah, it's tough when you start dealing with the open source side and I'll be honest with you, there is a little bit of a mashup here. There are certain names here that are a hundred percent for profit companies. 
And then there are others that are obviously open source based. Redis is open source, but Redis Labs is the one trying to monetize the support around it. So you're a hundred percent accurate on this slide. I think one of the things that's important to note here, though, is just how important open source is to data. If you're going to be going into any of these areas, it's going to be open source based to begin with. And Neo4j is one I want to call out here. It's not one everyone's familiar with, but it's basically a graph database, and it's a name that we're seeing on the net sentiment side actually really, really high. When you think about it, it's the third overall net sentiment for a niche database play. It's not as big on mindshare 'cause its use cases aren't as common, but it's the third biggest play on net sentiment. I found that really interesting on this slide. >> And again, so MariaDB, as I said, they filed an S-4, I think $50 million in revenue, that might even be ARR. So they're not huge, but they're getting there. And by the way, MariaDB, if you don't know, was the company that was formed the day that Oracle bought Sun, in which they got MySQL, and MariaDB has done a really good job of replacing a lot of MySQL instances. Oracle has responded with MySQL HeatWave, which was kind of the Oracle version of MySQL. So there's some interesting battles going on there. If you think about the LAMP stack, the M in the LAMP stack was MySQL. And so now it's all MariaDB replacing that MySQL for a large part. And then you see again, the red, you know, you got to have some concerns there. Aerospike's been around for a long time. SingleStore changed their name a couple years ago, last year. Yellowbrick Data, Firebolt was kind of going after Snowflake for a while, but yeah, you want to get out of that red zone. So they got some work to do. >> And Dave, real quick for the people that aren't aware, I just want to let them know that we can cut this data with the public company data as well. So we can cross over this with that because some of these names are competing with the larger public company names as well. So we can go ahead and cross reference like a MariaDB with a Mongo, for instance, or something of that nature. So it's not in this slide, but at another point we can certainly explain on a relative basis how these private names are doing compared to the other ones as well. >> All right, let's take a quick look at analytics. Alex, bring that up if you would. Go ahead, Erik. >> Yeah, I mean, essentially here, I can't see it on my screen, my apologies. I just kind of went to blank on that. So gimme one second to catch up. >> So I could set it up while you're doing that. You got Grafana up and to the right. I mean, this is huge, right? >> Got it, thank you. I lost my screen there for a second. Yep. Again, open source name Grafana, absolutely up and to the right. But as we know, Grafana Labs is actually picking up a lot of speed based on Grafana, of course. And I think we might actually hear some noise from them coming this year. The names that are actually a little bit more disappointing that I want to call out are names like ThoughtSpot. It's been around forever. Their mindshare of course is second best here, but based on the amount of time they've been around and the amount of money they've raised, it's not actually outperforming the way it should be. We're seeing Moogsoft obviously make some waves. That's very high net sentiment for that company.
It's, you know, what, third, fourth position overall in this entire area. Other names like Fivetran and Matillion are doing well. Fivetran, even though it's got a high net sentiment, again, it's raised so much money that we would've expected a little bit more at this point. I know you know this space extremely well, but basically what we're looking at here, and to the bottom left, you're going to see some names with a lot of red, large circles that really just aren't performing that well. InfluxData, however, second highest net sentiment. And it's really pretty early on in this stage, and the feedback we're getting on this name is the use cases are great, the efficacy's great. And I think it's one to watch out for. >> InfluxData, time series database. The other interesting thing I just noticed here, you got Tamr on here, which is that little small green dot. Those are the ones we were saying before, look for those guys. They might be some of the interesting companies out there. And then Observe, Jeremy Burton's company. They do observability on top of Snowflake, not green, but kind of in that gray. So that's kind of cool. Monte Carlo is another one, they're sort of slightly green. They are doing some really interesting things in data and data mesh. So yeah, okay. So I can spend all day on this stuff, Erik, phenomenal data. I got to get back and really dig in. Let's end with machine learning and AI. Now this chart is similar in its dimensions, of course, except for the money raised. We're not showing that size of the bubble, but AI is so hot, we wanted to cover that here. Erik, explain this please. Why TensorFlow is highlighted and walk us through this chart.
All this stuff came out about how the executives were taking money off the table and didn't allow the employees to participate in that money raising deal. So that's pissed a lot of people off. And so they're now going through some kind of uncomfortable things, which is unfortunate because DataRobot, I noticed, we haven't covered them that much in "Breaking Analysis", but I've noticed them oftentimes, Erik, in the surveys doing really well. So you would think that company has a lot of potential. But yeah, it's an important space that we're going to continue to watch. Let me ask you Erik, can you contextualize this from a time series standpoint? I mean, how is this changed over time? >> Yeah, again, not show here, but in the data. I'm sorry, go ahead. >> No, I'm sorry. What I meant, I should have interjected. In other words, you would think in a downturn that these emerging companies would be less interesting to buyers 'cause they're more risky. What have you seen? >> Yeah, and it was interesting before we went live, you and I were having this conversation about "Is the downturn stopping people from evaluating these private companies or not," right. In a larger sense, that's really what we're doing here. How are these private companies doing when it comes down to the actual practitioners? The people with the budget, the people with the decision making. And so what I did is, we have historical data as you know, I went back to the Emerging Technology Survey we did in November of 21, right at the crest right before the market started to really fall and everything kind of started to fall apart there. And what I noticed is on the security side, very much so, we're seeing less evaluations than we were in November 21. So I broke it down. On cloud security, net sentiment went from 21% to 16% from November '21. That's a pretty big drop. And again, that sentiment is our one aggregate metric for overall positivity, meaning utilization and actual evaluation of the name. Again in database, we saw it drop a little bit from 19% to 13%. However, in analytics we actually saw it stay steady. So it's pretty interesting that yes, cloud security and security in general is always going to be important. But right now we're seeing less overall net sentiment in that space. But within analytics, we're seeing steady with growing mindshare. And also to your point earlier in machine learning, AI, we're seeing steady net sentiment and mindshare has grown a whopping 25% to 30%. So despite the downturn, we're seeing more awareness of these companies in analytics and machine learning and a steady, actual utilization of them. I can't say the same in security and database. They're actually shrinking a little bit since the end of last year. >> You know it's interesting, we were on a round table, Erik does these round tables with CISOs and CIOs, and I remember one time you had asked the question, "How do you think about some of these emerging tech companies?" And one of the executives said, "I always include somebody in the bottom left of the Gartner Magic Quadrant in my RFPs. I think he said, "That's how I found," I don't know, it was Zscaler or something like that years before anybody ever knew of them "Because they're going to help me get to the next level." So it's interesting to see Erik in these sectors, how they're holding up in many cases. >> Yeah. It's a very important part for the actual IT practitioners themselves. There's always contracts coming up and you always have to worry about your next round of negotiations. 
And that's one of the roles these guys play. You have to do a POC when contracts come up, but it's also their job to stay on top of the new technology. You can't fall behind. Like everyone's a software company. Now everyone's a tech company, no matter what you're doing. So these guys have to stay in on top of it. And that's what this ETS can do. You can go in here and look and say, "All right, I'm going to evaluate their technology," and it could be twofold. It might be that you're ready to upgrade your technology and they're actually pushing the envelope or it simply might be I'm using them as a negotiation ploy. So when I go back to the big guy who I have full intentions of writing that contract to, at least I have some negotiation leverage. >> Erik, we got to leave it there. I could spend all day. I'm going to definitely dig into this on my own time. Thank you for introducing this, really appreciate your time today. >> I always enjoy it, Dave and I hope everyone out there has a great holiday weekend. Enjoy the rest of the summer. And, you know, I love to talk data. So anytime you want, just point the camera on me and I'll start talking data. >> You got it. I also want to thank the team at ETR, not only Erik, but Darren Bramen who's a data scientist, really helped prepare this data, the entire team over at ETR. I cannot tell you how much additional data there is. We are just scratching the surface in this "Breaking Analysis". So great job guys. I want to thank Alex Myerson. Who's on production and he manages the podcast. Ken Shifman as well, who's just coming back from VMware Explore. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE. Does some great editing for us. Thank you. All of you guys. Remember these episodes, they're all available as podcast, wherever you listen. All you got to do is just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me to get in touch david.vellante@siliconangle.com. You can DM me at dvellante or comment on my LinkedIn posts and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for Erik Bradley and The Cube Insights powered by ETR. Thanks for watching. Be well. And we'll see you next time on "Breaking Analysis". (upbeat music)

Published Date : Sep 7 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Erik | PERSON | 0.99+
Alex Myerson | PERSON | 0.99+
Ken Shifman | PERSON | 0.99+
Sanjay Poonen | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
Erik Bradley | PERSON | 0.99+
November 21 | DATE | 0.99+
Darren Bramen | PERSON | 0.99+
Alex | PERSON | 0.99+
Cheryl Knight | PERSON | 0.99+
Postgres | ORGANIZATION | 0.99+
Databricks | ORGANIZATION | 0.99+
Netskope | ORGANIZATION | 0.99+
Adobe | ORGANIZATION | 0.99+
Rob Hof | PERSON | 0.99+
Fivetran | ORGANIZATION | 0.99+
$50 million | QUANTITY | 0.99+
21% | QUANTITY | 0.99+
Chris Lynch | PERSON | 0.99+
19% | QUANTITY | 0.99+
Jeremy Burton | PERSON | 0.99+
$800 million | QUANTITY | 0.99+
6,000 | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
Redis Labs | ORGANIZATION | 0.99+
November '21 | DATE | 0.99+
ETR | ORGANIZATION | 0.99+
First | QUANTITY | 0.99+
25% | QUANTITY | 0.99+
last year | DATE | 0.99+
OneTrust | ORGANIZATION | 0.99+
two dimensions | QUANTITY | 0.99+
two groups | QUANTITY | 0.99+
November of 21 | DATE | 0.99+
both | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
more than 400 companies | QUANTITY | 0.99+
Kristen Martin | PERSON | 0.99+
MySQL | TITLE | 0.99+
Moogsoft | ORGANIZATION | 0.99+
The Cube | ORGANIZATION | 0.99+
third | QUANTITY | 0.99+
Grafana | ORGANIZATION | 0.99+
H2O | ORGANIZATION | 0.99+
Mike Speiser | PERSON | 0.99+
david.vellante@siliconangle.com | OTHER | 0.99+
second | QUANTITY | 0.99+
two | QUANTITY | 0.99+
first | QUANTITY | 0.99+
28% | QUANTITY | 0.99+
16% | QUANTITY | 0.99+
Second | QUANTITY | 0.99+

Show Wrap | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: The cube presents, the Kubecon and Cloudnativecon Europe, 2022 brought to you by Red Hat, the cloud native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain in Kubecon and Cloudnativecon Europe, 2022. I'm your host Keith Townsend. It's been a amazing day, three days of coverage 7,500 people, 170 sponsors, a good mix of end user organizations, vendors, just people with open source at large. I've loved the conversations. We're not going to stop that coverage just because this is the last session of the conference. Colin Murphy, senior software engineer, Adobe, >> Adobe. >> Oh, wow. This is going to be fun. And then Liam Randall, the chair of CNCF Cloud Native WebAssembly Day. >> That's correct. >> And CNCF & CEO of Cosmonic. >> That's right. >> All right. First off, let's talk about the show. How has this been different than other, if at all of other Kubecons? >> Well, first I think we all have to do a tremendous round of applause, not only for the vendors, but the CNC staff and all the attendees for coming out. And you have to say, Kubecon is back. The online experiences have been awesome but this was the first one, where Hallwaycon was in full effect. And you had the opportunity to sit down and meet with so many intelligent and inspiring peers and really have a chance to learn about all the exciting innovations that have happened over the last year. >> Colin. >> Yeah, it's been my most enjoyable Kubecon I've ever been to. And I've been to a bunch of them over the last few years. Just the quality of people. The problems that we're solving right now, everywhere from this newer stuff that we're talking about today with WebAssembly but then all these big enterprises trying to getting involved in Kubernetes >> Colin, to your point about the problems that we're solving, in many ways the pandemic has dramatically accelerated the pace of innovation, especially inside the CNCF, which is by far the most critical repository of open source projects that enterprises, governments and individuals rely on around the world, in order to deliver new experiences and to have coped and scaled out within the pandemic over the last few years. >> Yeah, I'm getting this feel, this vibe of the overall show that feels like we're on the cuff for something. There's other shows throughout the year, that's more vendor focused that talk about cloud native. But I think this is going to be the industry conference where we're just getting together and talking about it and it's going to probably be, in the next couple of years, the biggest conference of the year, that's just my personal opinion. >> I actually really strongly agree with you. And I think that the reason for that is the diversity that we get from the open source focus of Kubecon Kubecon has started where the industry really started which was in shared community projects. And I was the executive at Capital One that led the donation of cloud custodian into the CNCF. And I've started and put many projects here. And one of the reasons that you do that is so that you can build real scalable communities, Vendors that oftentimes even have competing interest but it gives us a place where we can truly collaborate where we can set aside our personal agendas and our company's agendas. And we can focus on the problems at hand. And how do we really raise the bar for technology for everybody. 
>> Now you two are representing a project that, you know, as we look at kind of how the web has evolved the past few decades, there's standards, there's things that we know that work, there's things that we know that don't work, and we're beyond cloud native, we're kind of resistant to change. Funny enough. >> That's right. >> So WebAssembly, talk to me about what problem is WebAssembly solving that needs solving? >> I think it's fitting that here on the last day of Kubecon, we're starting with the newest standard for the web. And for background, there's only four languages that make up what we think of as the modern web. There's JavaScript, there's HTML, there's CSS, and now there's a new idea that's WebAssembly. And it's maybe not a new idea, but it's certainly a new standard that's got massive adoption and acceleration. WebAssembly is best thought of as almost like a portable little virtual machine. And like a lot of great ideas, like JavaScript, it was originally designed to bring new experiences to browsers everywhere. And as organizations looked at the portability and security value props that come from this tiny little virtual machine, it's made a wonderful addition to backend servers and a platform for portability to bring solutions all the way out to the edge. >> So what are some of the business cases for WebAssembly? Like what problem, what business problem are we solving? >> So it, you know, we would not have been able to bring Photoshop to the web without WASM. >> Wow. >> And just to be clear, I had nothing to do with that effort. So I want to make sure everybody understands, but if you have a lot of C++ or C code and you want to bring that experience to the web browser, which is a great cost savings 'cause it's running on the client's machines, really low latency, high performance experiences in the browser, WASM, really the only way to go. >> So I'm getting hints of fruit berry, Java. >> Liam: Yeah, absolutely. >> Colin: Definitely. >> You know, look, WebAssembly sounds similar to promises you've heard before, write once, run anywhere. The difference is that WebAssembly is not driven by any one particular vendor. So there's no one vendor that's trying to bring a plug in to every single device. WebAssembly was a recognition, much like Kubecon, the point that we started with around the diversity of thought, ideas, and representation of shared interest, of how do we have a platform that's polyglot? Many people can bring languages to it, and solutions that we can share and then build from there. And it is unlocking some of the most amazing and innovative experiences, both on the web, backend servers, and all the way to the edge. Because WebAssembly is a tiny little virtual machine that runs everywhere. Adobe's leadership is absolutely incredible with the things that they're doing with WebAssembly. They did this awesome blog post with the Google Chrome team that talked about other performance improvements that were brought into Chrome and other browsers, in order to enable that kind of experience. >> So I get the general concept of WebAssembly, and it's one of those things that I have to ask the question, and I appreciate that Adobe uses it, but without the community, I mean, I've dedicated some of my team's resources over the years to some really cool projects and products that just died on the vine 'cause there was no community around. >> Yeah. >> Who else uses WebAssembly? >> Yeah, I think so.
We actually, inside the CNCF now, have an entire day devoted just to WebAssembly, and as the co-chair of the CNCF Cloud Native WebAssembly Day, we really focus on bringing those case studies to the forefront. So some of the more interesting talks that we had here and at some of the precursor weekend conferences were from BMW, for example. They talked about how they were excited about not only WebAssembly, but a framework that they use on WebAssembly called WASM cloud, that lets them flexibly scale machine learning models from their own edge, in their own vehicles, through to their developers' workstations, and even take that data onto their regular cloud Kubernetes and scale analysis and analytics. They invested and they just released a machine learning framework for one of the many great WebAssembly projects called WASM cloud, which is a CNCF project, a member project here in the CNCF. >> So how does that fit in the overall landscape? >> So think of WebAssembly like you think of HTML. It's a technology that gives you a lot of concepts, and to accelerate your journey on those technologies, people create frameworks. For example, if you were going to write a UI, you would very likely not start with an empty document; you'd start with React or Vue. And in a similar vein, if you were going to start a new microservice or backend application project for WebAssembly, you might use WASM cloud, or you might use Atmo, or you might use Spin. Those are three different types of projects. They all have their own different value props and their own different opinions that they bring to them. But the point is that this is a quickly evolving space and it's going to dramatically change the type of experiences that we bring, not only to web browsers but to servers and edges everywhere. >> So Colin, you mentioned C++ >> Colin: Yeah. >> And other coding. Well, talk to me about the ramp up. >> Oh, well, so, yeah, so, C++, there was a lot of work done in scripting at Adobe, taking our C++ code and bringing it into the browser. A lot of new instructions, SIMD, were brought in to make a really powerful experience, but what's new now is the server side aspect of things. So, just what kind of, what Liam was talking about. Now we can run this stuff in the data center. It's not just for people's browsers anymore. And then we can also bring it out to the edge too, which is a new space that we can take advantage of really almost only through WebAssembly and some JavaScript. >> So wait, let me get this kind of under hook. Before, if I wanted a rich experience, I have to run a heavy VDI instance on the back end so that I'm basically getting remote desktop calls from a light thin client back to my backend server. That's heavy. >> That is heavy. >> WebAssembly is an alternative to that? >> Yes, absolutely. Think of WebAssembly as a tiny little CPU that is a shim, that we can take to places that don't even traditionally have a concept of a processor. So inside the browser, for example. Traditionally, cloud native development on the backend has been dominated by things like Docker, and Docker is a wonderful technology and containers are a wonderful technology that really drove the last 10 years of cloud native with the great lift and shift, if you will. Take our existing applications, package them up in this virtual desktop and then deliver them. But to deliver the next 10 years of experiences, we need solutions that let us have portability first and a security model that's portable across the entire landscape.
So this isn't just browsers and servers on the back end. WebAssembly creates a layer of equality from truly edge to edge. It can transcend different CPUs, different operating systems. So where containers have this lower bound of you need to be running Linux and you need to be in a place where you're going to bring Kubernetes, WebAssembly is so small and portable, it transcends that lower bound. It can go to places like iOS. It can go to places like web browsers. It can even go to teeny tiny CPUs that don't even traditionally have a full-on operating system inside them. >> Colin: Right, places where you can't run Docker. >> So as I think about that, and I'm a developer and I'm running my back end and I'm running whatever web stack that I want, how does this work? Like, how do I get started with it? >> Well, there's some great stuff Liam already mentioned with WASM cloud and Fermyon Spin. Microsoft is heavily involved now in providing cloud products that can take advantage of WebAssembly. So we've got a lot of languages, new languages coming in: .NET and Ruby, Rust is a big one, TinyGo, really just a lot of places to get involved. A lot of places to get started. >> At the highest level, Finton Ryan, when he was at Gartner, he's a really well known analyst, wrote something profound a few years ago. He said, WebAssembly is the one technology you don't need a strategy to adopt. >> Mm. >> Because frankly you're already using it, because there's so many wonderful experiences and products that are out there, like what Adobe's doing. This virtual CPU is not just a platform to run on cloud native and to build applications towards the edge. You can embed this virtual CPU inside of applications. So cases where you would want to allow your users to customize an application or to extend functionality. Give you an example: Shopify is a big believer in WebAssembly because while their platform covers two standard deviations or 80% of the use cases, they have a wonderful marketplace of extensions that folks can use in order to customize the checkout process or apply specialized discounts or integrate into a partner ecosystem. So when you think about the requirements for those scenarios, they line up to the same requirements that we have in browsers and servers. I want real security. I want portability. I want reusability. And ultimately I want to save money and go faster. So organizations everywhere should take a few minutes and do a heads up and think about one, where WebAssembly is already in their environment, inside of places like Envoy and Istio, some of the most popular projects in the cloud native ecosystem outside of Kubernetes. And they should perhaps consider studying how WebAssembly can help them to transform the experiences that they're delivering for their customers. This may be the last day of Kubecon, but this is certainly not the last time we're going to be talking about WebAssembly, I'll tell you that. >> So, last question. We've talked a lot about how to get started. How about day two, when I'm thinking about performance troubleshooting and ensuring clients have a great experience, what's day two operation like? >> That's a really good question. So there's, I know that each language kind of brings its own tool chain, and you know, we saw some great stuff on WASM day. You can look it up around the .NET experience for debugging. They really tried to make it as seamless and the same as it was for native code. So, yeah, I think that's a great question.
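For a sense of what "getting started" looks like on the consuming side, here is a minimal, hedged sketch of loading a compiled WebAssembly module from TypeScript in a browser. The `add.wasm` file and its exported `add` function are hypothetical; in practice the module would be produced by one of the toolchains mentioned above (C or C++ via Emscripten, Rust, TinyGo, .NET), and real modules usually require a non-empty import object.

```typescript
// Minimal browser-side consumption of a WebAssembly module. "add.wasm" and its exported
// "add" function are hypothetical stand-ins for a module built with any of the toolchains
// discussed above.
async function run(): Promise<void> {
  const bytes = await fetch("add.wasm").then(r => r.arrayBuffer());
  const { instance } = await WebAssembly.instantiate(bytes, {}); // no imports for this toy module
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // 5, computed inside the WebAssembly sandbox
}

run().catch(console.error);
```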
I mean, right now it's still trying to figure out server side, It's still, as Liam said, a shifting landscape. But we've got some great stuff out here already >> You know, I'd make an even bigger call than that. When I think about the last 20 years as computing has evolved, we've continued to move through these epics of tech that were dominated by a key abstraction. Think about the rise of virtualization with VMware and the transition to the cloud. The rise of containerization, we virtualized to OS. The rise of Kubernetes and CNCF itself, where we virtualize cloud APIs. I firmly believe that WebAssembly represents the next epic of tech. So I think that day two WebAssembly continues to become one of the dominant themes, not only across cloud native but across the entire technical computing landscape. And it represents a fundamentally gigantic opportunity for organizations such as Adobe, that are always market leading and at the cutting edge of tech, to bring new experiences to their customers and for vendors to bring new platforms and tools to companies that want to execute on that opportunity. >> Colin Murphy, Liam Randall, I want to thank you for joining the Cube at Kubecon Cloudnativecon 2022. I'm now having a JavaScript based app that I want to re-look at, and maybe re-platforming that to WebAssembly. It's some lot of good stuff there. We want to thank you for tuning in to our coverage of Kubecon Cloudnativecon. And we want to thank the organization for hosting us, here from Valencia, Spain. I'm Keith Townsend, and you're watching the Cube, the leader in high tech coverage. (bright music)

Published Date : May 20 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townsend | PERSON | 0.99+
Liam Randall | PERSON | 0.99+
Colin | PERSON | 0.99+
Colin Murphy | PERSON | 0.99+
Liam | PERSON | 0.99+
Adobe | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
BMW | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
170 sponsors | QUANTITY | 0.99+
Cosmonic | ORGANIZATION | 0.99+
Gartner | ORGANIZATION | 0.99+
iOS | TITLE | 0.99+
Finton Ryan | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
C++ | TITLE | 0.99+
two | QUANTITY | 0.99+
Valencia, Spain | LOCATION | 0.99+
two standard deviations | QUANTITY | 0.99+
Photoshop | TITLE | 0.99+
7,500 people | QUANTITY | 0.99+
Linux | TITLE | 0.99+
CNCF | ORGANIZATION | 0.99+
Shopify | ORGANIZATION | 0.99+
WebAssembly | TITLE | 0.99+
Chrome | TITLE | 0.99+
JavaScript | TITLE | 0.99+
Ruby | TITLE | 0.99+
Rust | TITLE | 0.99+
Capital One | ORGANIZATION | 0.98+
First | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
three days | QUANTITY | 0.98+
Google | ORGANIZATION | 0.98+
WASM cloud | TITLE | 0.98+
today | DATE | 0.97+
each language | QUANTITY | 0.97+
pandemic | EVENT | 0.97+
WASM | TITLE | 0.97+
first | QUANTITY | 0.97+
C+ | TITLE | 0.97+
Kubecon | ORGANIZATION | 0.97+
last year | DATE | 0.97+
Cimdi | PERSON | 0.96+
day two | QUANTITY | 0.96+
Kubecon Cloudnativecon | TITLE | 0.96+
four languages | QUANTITY | 0.96+
Kubernetes | TITLE | 0.95+
next couple of years | DATE | 0.95+
both | QUANTITY | 0.94+
2022 | DATE | 0.94+
HTML | TITLE | 0.93+
C | TITLE | 0.93+
Java | TITLE | 0.93+
ATMO | TITLE | 0.92+
years | DATE | 0.9+
Kubecon Kubecon | ORGANIZATION | 0.87+

Varun Talwar, Tetrate | Kubecon + Cloudnativecon Europe 2022


 

(upbeat music) >> Narrator: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, in KubeCon, CloudNativeCon Europe 2022. It's near the end of the day, that's okay. We have plenty of energy because we're bringing it. I'm Keith Townsend, along with my cohost, Paul Gillon. Paul, this has been an amazing day. Thus far we've talked to some incredible folks. You got a chance to walk the show floor. >> Yeah. >> So I'm really excited to hear what's the vibe of the show floor, 7,500 people in Europe, following the protocols, but getting stuff done. >> Well, at first I have to say that I haven't traveled for two years. So getting out to a show by itself is an amazing experience. But a show like this with all the energy and the crowd too, enormously crowded at lunchtime today. It's hard to believe how many people have made it all the way here. Out on the floor the booth are crowded, the demonstrations are what you would expect at a show like this. Lots of code, lots of block diagrams, lots of architecture. I think the audience is eating it up. They're on their laptops, they're coding on their laptops. And this is very much symbolic of the crowd that comes to a KubeCon. And it's just a delight to see them out here having so much fun. >> So speaking of lots of code, we have Varun Talwar, co-founder of Tetrate. But, I just saw I didn't realize this, Istio becoming part of CNCF. What's the latest on Istio? >> Yeah, Istio is, it was always one of those service mesh projects which was very widely adopted. And it's great to see it going into the Cloud Native Computing Foundation. And, I think what happened with Kubernetes like just became the de-facto container orchestrator. I think similar thing is happening with Istio and service mesh. >> So. >> I'm sorry, go ahead Keith. What's the process like of becoming adopted by and incubated by the CNCF? >> Yeah, I mean, it's pretty simple. It's an application process into the foundation where you say, what the project is about, how diverse is your contributor base, how many people are using it. And it goes through a review of, with TOC, it goes through a review of like all the users and contributors, and if you see a good base of deployments in production, if you see a diverse community of contributors, then you can basically be part of the CNCF. And as you know, CNCF is very flexible on governance. Basically it's like bring your own governance. Then the projects can basically seamlessly go in and get into incubation and gradually graduate. >> Another project close and dear to you, Envoy. >> Yes. >> Now I've always considered Envoy just as what it is. It's a, I've always used it as a low balancer type thing. So, I've always considered it some wannabe gateway of proxy. But Envoy gateway was announced last week. >> Yes. So Envoy is, basically won the data plane war of in cloud native workloads, right? And, but, and this was over the last five years. Envoy was announced even way before Istio, and it is used in various deployment models. You can use it as a front load balancer, you can use it as an ingress in Kubernetes, you can use it as a side car in a service mesh like Istio. And it's lightweight, dynamically programmable, very open with the right community. But, what we looked at when we looked at the Envoy base was, it still wasn't very approachable for application developers. 
The nouns that it uses, clusters and so on, are not what an application developer is used to. So Envoy Gateway is really an effort to make Envoy even stronger out of the box for an application developer to use it as an API gateway, right? Because if you think about it, ultimately developers start deploying workloads onto their Kubernetes clusters, and they need some functionality like an API gateway to expose their services, and you want to make that really, really easy and simple, right? I often say, what NGINX was to static websites, Envoy Gateway will be to APIs. And it's really the community coming together, we are a big part, but also VMware, as well as end users, in this case Fidelity, who is investing heavily into Envoy and API gateway use cases, joining forces and saying, let's do this in upstream Envoy. >> I'd like to go back to Istio, because this is a major step in Istio's development. Where do you see Istio coming into the picture? Kubernetes is already broadly accepted; is Istio generally adopted as an after step to Kubernetes, or are they increasingly being adopted together? >> Yeah. So, usually it's adopted as a follow-on step. And the reason is primarily the learning curve, right? It takes a while for people to get used to Kubernetes, understand the concepts, and get applications going, and then Istio was made to basically solve three big problems there, right? Which are around observability, traffic management, and security, right? So as people deploy more services they figure out, okay, how do I connect them? How do I secure all the connections? And how do I do more fine-grained routing? I'm doing more frequent deployments with Kubernetes, but I would like to do canary releases to make safer rollouts, right? And those are the problems that Istio solves. And yes, it's good to know all the node-level and CPU-level metrics, but really what I want to know is, how are my services performing? Where is the latency, right? Where is the error rate? And those are the things that Istio gives out of the box. So that's a very natural next step for people using Kubernetes. And Tetrate was really formed as a company to enable enterprises to adopt Istio, Envoy, and service mesh in their environment, right? So we do everything from running an academy for courses and certifications on Envoy and Istio, to a distribution, which is compliant with various rules and tooling, as well as a whole platform on top of Istio, to make it usable in deployment in a large enterprise. >> So paint the end to end for me for Istio and Envoy. I know they can be used in similar fashions, as sidecars, but how do they work together to deliver value? >> Yeah. So if you step back from technology a little bit, right, and look at what customers are doing and facing, really it is about this: they have some new workloads going into Kubernetes and cloud native, they have a lot of legacy workloads, a lot of workloads in VMs, and with different teams in different clouds, or due to acquisitions, they're very heterogeneous, right? Now, Tetrate's mission is to power the world's application traffic. But really the business value that we are going after is consistency of application operations, right? And I'll tell you how powerful that is.
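As a rough sketch of the canary release Varun mentions, the snippet below uses the Python Kubernetes client to apply an Istio VirtualService that sends 10% of traffic to a new version. The namespace, service name, and v1/v2 subsets are hypothetical (the subsets would be defined in a matching DestinationRule), so treat this as an illustration of the traffic management idea rather than a complete setup.

```python
# Minimal canary sketch: weighted routing via an Istio VirtualService.
# Assumes a mesh-enabled cluster and a "checkout" service with v1/v2
# subsets already declared in a DestinationRule (hypothetical names).
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig

canary_virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "checkout", "namespace": "shop"},
    "spec": {
        "hosts": ["checkout"],
        "http": [{
            "route": [
                {"destination": {"host": "checkout", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "checkout", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="shop",
    plural="virtualservices",
    body=canary_virtual_service,
)
```

Raising the v2 weight over successive deployments is the "safer rollout" being described: the application itself never changes, only the mesh routing.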
Because the more places you can deploy Envoy into, the more places you can deploy Istio into, the more consistency you can get for the value pillars of observability, traffic management, and security, right? And really, if you think about what the journey is for an enterprise to migrate from VM workloads into Kubernetes, or from data centers into cloud, the challenges are around security and connectivity, right? Because if it's a Kubernetes fabric, the same Kubernetes app in the data center can be deployed exactly as it is in the cloud, right? >> Keith: Right. >> So why is it hard to migrate to cloud, right? The challenges come in the security and networking layer, right? >> So let's talk about that with some granularity, and you can maybe give me some concrete examples. >> Right. >> Because as I think about the hybrid infrastructure, where I have VMs on-premises, cloud native stuff running in the public cloud, or even cloud native next to VMs. >> Varun: Right. >> I do security differently when I'm in the VM world. I say, you know what? This IP address can't talk to this Oracle database server. >> Right. >> Keith: That's not how cloud native works. >> Right. >> I can't say that; if I have a cloud native app talking to an Oracle database, there's no IP address. >> Yeah. >> Keith: But how do I secure the communication between the two? >> Exactly. So I think you hit it straight on the head. With things like Kubernetes, IP is no longer really a valid noun, because things will auto scale, either from Kubernetes or from the cloud autoscalers. So really the noun now is the service. And I could have many instances of it; they will scale up and down. But what I'm saying is, this service, some app server, some application, can talk to the Oracle service. >> Keith: Hmm. >> And what we have done with the Tetrate Service Bridge, which is why we call our platform a service bridge, because it's all about bridging all the services, is that whatever you're running on the VM can be onboarded onto the mesh as if it were a Kubernetes service, right? And then my policy, around which service can talk to which service, is the same in Kubernetes, the same for Kubernetes talking to a VM, the same for VM to VM, in terms of access control. In terms of encryption, because the Envoy proxy goes everywhere and the traffic is going through it, we actually take care of distributing certs and encrypting everything, and that is what leads to consistent application operations. And that's where the value is. >> We're seeing a lot of activity around observability right now, a lot of different tools, both open source and proprietary. Istio is certainly part of the OpenTelemetry project, and I believe you're part of that project? >> Yes. >> But the customers are still piecing together a lot of tools on their own. >> Right. >> Do you see a more coherent framework forming around observability? >> I think very much so. And there are layers of observability, right? So the thing is, if we tell you there is latency between these two services at the L7 layer, the first question is, is it the service? Is it the Envoy? Or is it the network? It sounds like a very simple question. It's actually not that easy to answer. And that is one of the questions we answer in platforms like ours, right? But even that is not the end. If it's neither of those three, it could be the node, it could be the hardware underneath, right?
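A minimal sketch of the service-level access control Varun describes a moment earlier, where the identity in the policy is the calling service rather than an IP address. The namespace, workload labels, and service account names are hypothetical; the point is that the same policy shape applies whether the workloads run in Kubernetes or have been onboarded from VMs onto the mesh.

```python
# Sketch of an Istio AuthorizationPolicy: only the "app-server" service
# account may call the workload labeled app=oracle-db. Names are made up
# for illustration; the policy is applied like any other custom resource.
from kubernetes import client, config

config.load_kube_config()

allow_app_to_db = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "oracle-db-allow-app", "namespace": "shop"},
    "spec": {
        "selector": {"matchLabels": {"app": "oracle-db"}},
        "action": "ALLOW",
        "rules": [{
            "from": [{"source": {
                "principals": ["cluster.local/ns/shop/sa/app-server"]
            }}]
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="shop",
    plural="authorizationpolicies",
    body=allow_app_to_db,
)
```

Because the principal is a service identity carried in the mTLS certificates the mesh distributes, the rule keeps working as instances scale up and down or move between environments.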
And you realize those are different observability tools that work on each layer. So I think there's a lot of work to be done to enable end users to go from top to bottom and reduce what is called MTTR, the mean time to resolution of an issue, to find where the problem is. But I think with tools like what is being built now, it is becoming easier, right? One of the things we have to realize is that with things like Kubernetes we made the development of microservices easier, right? And that's great, but as a result more things are getting broken down, so there is more network in between. So it gets harder to troubleshoot, harder to secure everything, harder to get visibility from everywhere, right? So I often say, if you're embarking on a microservices journey, you had better have a platform like this. Otherwise, you're taking on operational cost. >> Wow, Jevons paradox: the more accessible we make something, the more it gets used, and the more complex it becomes. That's been a theme here at KubeCon, CloudNativeCon Europe 2022, from Valencia, Spain. I'm Keith Townsend, along with my cohost Paul Gillon. And you're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : May 19 2022


Brian Schwarz, Google Cloud | VeeamON 2022


 

(soft intro music) >> Welcome back to theCUBE's coverage of VeeamON 2022. Dave Vellante with David Nicholson. Brian Schwarz is here. We're going to stay on cloud. He's the director of product management at Google Cloud. The world's biggest cloud, I contend. Brian, thanks for coming on theCUBE. >> Thanks for having me. Super excited to be here. >> Long time infrastructure as a service background, worked at Pure, worked at Cisco, Silicon Valley guy, techie. So we're going to get into it here. >> I love it. >> I was saying before, off camera. We used to go to Google Cloud Next every year. It was an awesome show. Guys built a big set for us. You joined, right as the pandemic hit. So we've been out of touch a little bit. It's hard to... You know, you got one eye on the virtual event, but give us the update on Google Cloud. What's happening generally and specifically within storage? >> Yeah. So obviously the Cloud got a big boost during the pandemic because a lot of work went online. You know, more things kind of being digitally transformed as people keep trying to innovate. So obviously the growth of Google Cloud, has got a big tailwind to it. So business has been really good, lots of R&D investment. We obviously have an incredible set of technology already but still huge investments in new technologies that we've been bringing out over the past couple of years. It's great to get back out to events to talk to people about 'em. Been a little hard the last couple of years to give people some of the insights. When I think about storage, huge investments, one of the things that some people know but I think it's probably underappreciated is we use the same infrastructure for Google Cloud that is used for Google consumer products. So Search and Photos and all the public kind of things that most people are familiar with, Maps, et cetera. Same infrastructure at the same time is also used for Google Cloud. So we just have this tremendous capability of infrastructure. Google's got nine products that have a billion users most of which many people know. So we're pretty good at storage pretty good at compute, pretty good at networking. Obviously a lot of that kind of shines through on Google Cloud for enterprises to bring their applications, lift and shift and/or modernize, build new stuff in the Cloud with containers and things like that. >> Yeah, hence my contention that Google has the biggest cloud in the world, like I said before. Doesn't have the most IS revenue 'cause that's a different business. You can't comment, but I've got Google Cloud running at $12 billion a year run rate. So a lot of times people go, "Oh yeah, Google they're third place going for the bronze." But that is a huge business. There aren't a lot of 10, $12 billion infrastructure companies. >> In a rapidly growing market. >> And if you do some back of napkin math, whatever, give me 10, 15, let's call it 15% of that, to storage. You've got a big storage business. I know you can't tell us how big, but it's big. And if you add in all the stuff that's not in GCP, you do a lot of storage. So you know storage, you understand the technology. So what is the state of technology? You have a background in Cisco, nearly a networking company, they used to do some storage stuff sort of on the side. We used to say they're going to buy NetApp, of course that never happened. That would've made no sense. Pure Storage, obviously knows storage, but they were a disk array company essentially. Cloud storage, what's different about it? 
What's different in the technology? How does Google think about it? >> You know, I always like to tell people there are some things that are the same and familiar to you, and there are some things that are different. If I start with some of the differences, object storage in the Cloud is just fundamentally different. Object storage on-prem has been around for a while, often used as kind of a third tier of storage, maybe a backup target, compliance, something like that. In the cloud, object storage is Tier one storage. A public reference for us is Spotify, okay, which uses object storage for all the songs out there. And increasingly we see a lot of growth in-- >> Well, how are you defining Tier one storage in that regard? Again, are you thinking streaming service? Okay. Fine. Transactional? >> Spotify goes down and I'm pissed. >> Yeah. This is true. (Dave laughing) >> Not just you, maybe a few million other people too. One is importance, business importance. Tier one applications are critical to the business, like business-down type stuff. But even if you look at it for performance, for capabilities, object storage in the cloud is a different thing than it was. >> Because of the architecture that you're deploying? >> Yeah. And the applications that we see running on it. Obviously, a huge growth in our business in AI and analytics. Obviously, Google's pretty well known in both spaces, BigQuery, obviously on the analytics side, big massive data warehouses and obviously-- >> Gets very high marks from customers. >> Yeah, very well regarded, super successful, super popular with our customers in Google Cloud. And then obviously AI as well. A lot of AI is about getting structure from unstructured data. Autonomous vehicles getting pictures and videos around the world. Speech recognition, audio is a fundamentally analog signal. You're trying to train computers to basically deal with analog things, and it's all stored in object storage, with machine learning on top of it creating all the insights, and frankly things that computers can deal with. Getting structure out of the unstructured data. So you just see the performance, the capabilities, the importance; it's really Tier one storage, much like file and block have kind of always been. >> Depending on, right, the importance. Because I mean, it's a fair question, right? Because we're used to thinking, "Oh, you're running your Oracle transaction database on block storage." That's Tier one. But Spotify's a pretty important business. And again, on BigQuery, it is a cloud-native, born-in-the-cloud database; a lot of the cloud databases aren't, right? And that's one of the reasons why BigQuery is-- >> Google's really had a lot of success taking technologies that were built for some of the consumer services that we build and turning them into cloud-native Google Cloud services. Like HDFS, which we were talking about; those open source technologies came originally from the Google File System. Now we have a new version of it that we run internally called Colossus, incredible cloud scale technologies that you can use to build things like Google Cloud Storage. >> I remember one of the early Hadoop Worlds, I was talking to a Google engineer and saying, "Well, wow, that's so cool that Hadoop came along. You guys were the mainspring of that." He goes, "Oh, we're way past Hadoop now." And this was the early days of Hadoop. (laughs) >> It's funny, whenever Google says consumer services, usually consumer indicates just for me.
But no, a consumer service for Google is at a scale that almost no business needs at a point in time. So you're not taking something and scaling it up-- >> Yeah. They're Tier one services-- for sure. >> Exactly. You're more often pairing it down so that a fortune 10 company can (laughs) leverage it. >> So let's dig into data protection in the Cloud, disaster recovery in the Cloud, Ransomware protection and then let's get into why Google. Maybe you could give us the trends that you're seeing, how you guys approach it, and why Google. >> Yeah. One of the things I always tell people, there's certain best practices and principles from on-prem that are just still applicable in the Cloud. And one of 'em is just fundamentals around recovery point objective and recovery time objective. You should know, for your apps, what you need, you should tier your apps, get best practice around them and think about those in the Cloud as well. The concept of RPO and RTO don't just magically go away just 'cause you're running in the Cloud. You should think about these things. And it's one of the reasons we're here at the VeeamON event. It's important, obviously, they have a tremendous skill in technology, but helping customers implement the right RPO and RTO for their different applications. And they also help do that in Google Cloud. So we have a great partnership with them, two main offerings that they offer in Google. One is integration for their on-prem things to use, basically Google as a backup target or DR target and then cloud-native backups they have some technologies, Veeam backup for Google. And obviously they also bought Kasten a while ago. 'Cause they also got excited about the container trend and obviously great technologies for those customers to use those in Google Cloud as well. >> So RPO and RTO is kind of IT terms, right? But we think of them as sort of the business requirement. Here's the business language. How much data are you willing to lose? And the business person says, "What? I don't want to lose any data." Oh, how big's your budget, right? Oh, okay. That's RPO. RTO is how fast you want to get it back? "How fast do you want to get it back if there's an outage?" "Instantly." "How much money do you want to spend on that?" "Oh." Okay. And then your application value will determine that. Okay. So that's what RPO and RTO is for those who you may not know that. Sometimes we get into the acronym too much. Okay. Why Google Cloud? >> Yeah. When I think about some of the infrastructure Google has and like why does it matter to a customer of Google Cloud? The first couple things I usually talk about is networking and storage. Compute's awesome, we can talk about containers and Kubernetes in a little bit, but if you just think about core infrastructure, networking, Google's got one of the biggest networks in the world, obviously to service all these consumer applications. Two things that I often tell people about the Google network, one, just tremendous backbone bandwidth across the regions. One of the things to think about with data protection, it's a large data set. When you're going to do recoveries, you're pushing lots of terabytes often and big pipes matter. Like it helps you hit the right recovery time objective 'cause you, "I want to do a restore across the country." You need good networks. And obviously Google has a tremendous network. I think we have like 20 subsea cables that we've built underneath the the world's oceans to connect the world on the internet. >> Awesome. 
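A toy example of the RPO arithmetic Dave spells out above: the data you stand to lose is bounded by the gap between the failure and the newest recoverable copy. The timestamps and the four-hour target below are made up for illustration.

```python
# Check whether the most recent backup satisfies a business RPO target.
from datetime import datetime, timedelta

rpo_target = timedelta(hours=4)               # business says: lose at most 4 hours of data
last_good_backup = datetime(2022, 5, 17, 20, 0)
failure_time = datetime(2022, 5, 18, 1, 30)

data_at_risk = failure_time - last_good_backup
print(f"Data at risk: {data_at_risk}, RPO target: {rpo_target}")
print("RPO met" if data_at_risk <= rpo_target
      else "RPO missed: shorten the backup interval or replicate continuously")
```

RTO is the other half of the pair: how long the restore itself takes, which is where network bandwidth and where the recovery copy lives come into play.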
>> The other thing that I think is really underappreciated about the Google network is how quickly you get into it. One of the reasons all the consumer apps have such good response time is there's a local access point to get into the Google network somewhere close to you almost anywhere in the world. I'm sure you can find some obscure place where we don't have an access point, but look Search and Photos and Maps and Workspace, they all work so well because you get in the Google network fast, local access points and then we can control the quality of service. And that underlying substrate is the same substrate we have in Google Cloud. So the network is number one. Second one in storage, we have some really incredible capabilities in cloud storage, particularly around our dual region and multi-region buckets. The multi-region bucket, the way I describe it to people, it's a continent sized bucket. Single bucket name, strongly consistent that basically spans a continent. It's in some senses a little bit of the Nirvana of storage. No more DR failover, right? In a lot of places, traditionally on-prem but even other clouds, two buckets, failover, right? Orchestration, set up. Whenever you do orchestration, the DR is a lot more complicated. You got to do more fire drills, make sure it works. We have this capability to have a single name space that spans regions and it has strong read after write consistency, everything you drop into it you can read back immediately. >> Say I'm on the west coast and I have a little bit of an on-premises data center still and I'm using Veeam to back something up and I'm using storage within GCP. Trace out exactly what you mean by that in terms of a continent sized bucket. Updates going to the recovery volume, for lack of a better term, in GCP. Where is that physically? If I'm on the west coast, what does that look like? >> Two main options. It depends again on what your business goals are. First option is you pick a regional bucket, multiple zones in a Google Cloud region are going to store your data. It's resilient 'cause there's three zones in the region but it's all in one region. And then your second option is this multi-region bucket, where we're basically taking a set of the Google Cloud regions from around North America and storing your data basically in the continent, multiple copies of your data. And that's great because if you want to protect yourself from a regional outage, right? Earthquake, natural disaster of some sort, this multi-region, it basically gives you this DR protection for free and it's... Well, it's not free 'cause you have to pay for it of course, but it's a free from a failover perspective. Single name space, your app doesn't need to know. You restart the app on the east coast, same bucket name. >> Right. That's good. >> Read and write instantly out of the bucket. >> Cool. What are you doing with Veeam? >> So we have this great partnership, obviously for data protection and DR. And I really often segment the conversation into two pieces. One is for traditional on-prem customers who essentially want to use the Cloud as either a backup or a DR target. Traditional Veeam backup and replication supports Google Cloud targets. You can write to cloud storage. Some of these advantages I mentioned. Our archive storage, really cheap. We just actually lowered the price for archive storage quite significantly, roughly a third of what you find in some of the other competitive clouds if you look at the capabilities. 
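A hedged sketch of that multi-region "continent sized bucket" using the google-cloud-storage Python client. The project and bucket names are hypothetical (bucket names must be globally unique), but the idea is one bucket name and one strongly consistent namespace, readable from either coast without any failover orchestration.

```python
# Create a multi-region US bucket and write a backup object into it.
# Project and bucket names are placeholders for this example.
from google.cloud import storage

client = storage.Client(project="my-dr-project")

bucket = storage.Bucket(client, name="acme-veeam-repo")
bucket.location = "US"              # multi-region: data is stored across US regions
bucket.storage_class = "STANDARD"
client.create_bucket(bucket)

# The same bucket name and the same strongly consistent view apply whether
# the restore job runs on the west coast or the east coast.
blob = bucket.blob("backups/vm-001/2022-05-17.vbk")
blob.upload_from_filename("/var/backups/vm-001/2022-05-17.vbk")
```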
Our archive class storage, fast recovery time, right? Fast latency, no hours to kind of rehydrate. >> Good. Storage in the cloud is overpriced. >> Yeah. >> It is. It is historically overpriced despite all the rhetoric. Good. I didn't know that. I'm glad to hear. >> Yeah. So the archive class store, so you essentially read and write into this bucket and restore. So it's often one of the things I joke with people about. I live in Silicon Valley, I still see the tape truck driving around. I really think people can really modernize these environments and use the cloud as a backup target. You get a copy of your data off-prem. >> Don't you guys use tape? >> Well, we don't talk a lot about-- >> No comment. Just checking. >> And just to be clear, when he says cloud storage is overpriced, he thinks that a postage stamp is overpriced, right? >> No. >> If I give you 50 cents, are you going to deliver a letter cross country? No. Cloud storage, it's not overpriced. >> Okay. (David laughing) We're going to have that conversation. I think it's historically overpriced. I think it could be more attractive, relative to the cost of the underlying technology. So good for you guys pushing prices. >> Yeah. So this archive class storage, is one great area. The second area we really work with Veeam is protecting cloud-native workloads. So increasingly customers are running workloads in the Cloud, they run VMware in the Cloud, they run normal VMs, they run containers. Veeam has two offerings in Google that essentially help customers protect that data, hit their RPO, RTO objectives. Another thing that is not different in the Cloud is the need to meet your compliance regulations, right? So having a product like Veeam that is easy to show back to your auditor, to your regulator to make sure that you have copies of your data, that you can hit an appropriate recovery time objective if you're in finance or healthcare, energy. So there's some really good Veeam technologies that work in Google Cloud to protect applications that actually run in Google Cloud all in. >> To your point about the tape truck I was kind of tongue in cheek, but I know you guys use tape. But the point is you shouldn't have to call the tape truck, right, you should go to Google and say, "Okay. I need my data back." Now having said that sometimes the highest bandwidth in the world is putting all this stuff on the truck. Is there an option for that? >> Again, it gets back to this networking capability that I mentioned. Yes. People do like to joke about, okay, trucks and trains and things can have a lot of bandwidth, big networks can push a lot of data around, obviously. >> And you got a big network. >> We got a huge network. So if you want to push... I've seen statistics. You can do terabits a second to a single Google Cloud storage bucket, super computing type performance inside Google Cloud, which from a scale perspective, whether it be network compute, these are things scale. If there's one thing that Google's really, really good at, it's really high scale. >> If your's companies can't afford to. >> Yeah, if you're that sensitive, avoid moving the data altogether. If you're that sensitive, have your recovery capability be in GCP. >> Yeah. Well, and again-- >> So that when you're recovering you're not having to move data. >> It's approximate to, yeah. That's the point. >> Recovering GCV, fail over your VMware cluster. >> Exactly. >> And use the cloud as a DR target. 
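One way the archive-class backup target described here might look with the google-cloud-storage client: backups land in STANDARD storage and a lifecycle rule tiers older restore points down to ARCHIVE, which stays immediately readable rather than needing hours of rehydration. The ages, retention window, and bucket name are illustrative assumptions, not a recommendation.

```python
# Add lifecycle rules to an existing backup bucket: tier restore points to
# ARCHIVE after 30 days, delete them after a one-year retention window.
from google.cloud import storage

client = storage.Client(project="my-dr-project")
bucket = client.get_bucket("acme-veeam-repo")

bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=30)   # tier down after 30 days
bucket.add_lifecycle_delete_rule(age=365)                        # expire after retention
bucket.patch()
```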
>> We got very little time but can you just give us a rundown of your portfolio in storage? >> Yeah. So storage, cloud storage for object storage got a bunch of regional options and classes of storage, like I mentioned, archive storage. Our first party offerings in the file area, our file store, basic enterprise and high scale, which is really for highly concurrent paralyzed applications. Persistent disk is our block storage offering. We also have a very high performance cash block storage offering and local SSDs. So that's the main kind of food groups of storage, block file object, increasingly doing a lot of work in data protection and in transfer and distributed cloud environments where the edge of the cloud is pushing outside the cloud regions themselves. But those are our products. Also, we spend a lot of time with our partners 'cause Google's really good at building and open sourcing and partnering at the same time hence with Veeam, obviously with file. We partner with NetApp and Dell and a bunch of folks. So there's a lot of partnerships we have that are important to us as well. >> Yeah. You know, we didn't get into Kubernetes, a great example of open source, Istio, Anthos, we didn't talk about the on-prem stuff. So Brian we'll have to have you back and chat about those things. >> I look forward to it. >> To quote my friend Matt baker, it's not a zero sum game out there and it's great to see Google pushing the technology. Thanks so much for coming on. All right. And thank you for watching. Keep it right there. Our next guest will be up shortly. This is Dave Vellante for Dave Nicholson. We're live at VeeamON 2022 and we'll be right back. (soft beats music)

Published Date : May 18 2022


Breaking Analysis: Enterprise Technology Predictions 2022


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> The pandemic has changed the way we think about and predict the future. As we enter the third year of a global pandemic, we see the significant impact that it's had on technology strategy, spending patterns, and company fortunes. Much has changed. And while many of these changes were forced reactions to a new abnormal, the trends that we've seen over the past 24 months have become more entrenched, and point the way to what's coming ahead in the technology business. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we welcome our partner and colleague and business friend, Erik Porter Bradley, as we deliver what's becoming an annual tradition for Erik and me, our predictions for Enterprise Technology in 2022 and beyond. Erik, welcome. Thanks for taking some time out. >> Thank you, Dave. Luckily we did pretty well last year, so we were able to do this again. So hopefully we can keep that momentum going. >> Yeah, you know, I want to mention that, you know, we get a lot of inbound predictions from companies and PR firms that help shape our thinking. But one of the main objectives that we have is we try to make predictions that can be measured. That's why we use a lot of data. Now not all will necessarily fit that parameter, but if you've seen the grading of our 2021 predictions that Erik and I did, you'll see we do a pretty good job of trying to put forth prognostications that can be declared correct or not, you know, as black and white as possible. Now let's get right into it. Our first prediction, we're going to go right into spending, something that ETR surveys for quarterly. And we've reported extensively on this. We're calling for tech spending to increase somewhere around 8% in 2022, as we can see there on the slide. Erik, we predicted spending last year would increase by 4%. IDC's last check came in at five and a half percent. Gartner was somewhat higher, but in general, you know, not too bad. But looking ahead, we're seeing an acceleration from the ETR September surveys, as you can see in the yellow versus the blue bar in this chart. Many of the SMBs that were hard hit by the pandemic are picking up spending again. And the ETR data is showing acceleration above the mean for industries like energy, utilities, retail, and services, and also, notably, in the Forbes largest 225 private companies. These are companies like Mars or Koch Industries. They're predicting well above average spending for 2022. So Erik, please weigh in here. >> Yeah, a lot to bring up on this one, I'm going to be quick. So 1,200 respondents on this, over a third of which were at the C-suite level. So really good data that we brought in. The usual bucket of, you know, Fortune 500, Global 2000 make up the meat of that median, but it's 8.3% and rising with momentum as we see. What's really interesting right now is energy and utilities. This is usually, you know, an orphan stock dividend type of play. You don't see them at the highest point of tech spending. And the reason why right now is really because the state of tech infrastructure in our energy infrastructure needs help. And it's obvious; remember the Florida municipality breach last year? When they took over the water systems, or they had the ability to?
And this is a real issue, you know, there's bad nation state actors out there, and I'm no alarmist, but the energy and utility has to spend this money to keep up. It's really important. And then you also hit on the retail consumer. Obviously what's happened, the work from home shift created a shop from home shift, and the trends that are happening right now in retail. If you don't spend and keep up, you're not going to be around much longer. So I think the really two interesting things here to call out are energy utilities, usually a laggard in IT spend and it's leading, and also retail consumer, a lot of changes happening. >> Yeah. Great stuff. I mean, I recall when we entered the pandemic, really ETR was the first to emphasize the impact that work from home was going to have, so I really put a lot of weight on this data. Okay. Our next prediction is we're going to get into security, it's one of our favorite topics. And that is that the number one priority that needs to be addressed by organizations in 2022 is security and you can see, in this slide, the degree to which security is top of mind, relative to some other pretty important areas like cloud, productivity, data, and automation, and some others. Now people may say, "Oh, this is obvious." But I'm going to add some context here, Erik, and then bring you in. First, organizations, they don't have unlimited budgets. And there are a lot of competing priorities for dollars, especially with the digital transformation mandate. And depending on the size of the company, this data will vary. For example, while security is still number one at the largest public companies, and those are of course of the biggest spenders, it's not nearly as pronounced as it is on average, or in, for example, mid-sized companies and government agencies. And this is because midsized companies or smaller companies, they don't have the resources that larger companies do. Larger companies have done a better job of securing their infrastructure. So these mid-size firms are playing catch up and the data suggests cyber is even a bigger priority there, gaps that they have to fill, you know, going forward. And that's why we think there's going to be more demand for MSSPs, managed security service providers. And we may even see some IPO action there. And then of course, Erik, you and I have talked about events like the SolarWinds Hack, there's more ransomware attacks, other vulnerabilities. Just recently, like Log4j in December. All of this has heightened concerns. Now I want to talk a little bit more about how we measure this, you know, relatively, okay, it's an obvious prediction, but let's stick our necks out a little bit. And so in addition to the rise of managed security services, we're calling for M&A and/or IPOs, we've specified some names here on this chart, and we're also pointing to the digital supply chain as an area of emphasis. Again, Log4j really shone that under a light. And this is going to help the likes of Auth0, which is now Okta, SailPoint, which is called out on this chart, and some others. We're calling some winners in end point security. Erik, you're going to talk about sort of that lifecycle, that transformation that we're seeing, that migration to new endpoint technologies that are going to benefit from this reset refresh cycle. So Erik, weigh in here, let's talk about some of the elements of this prediction and some of the names on that chart. >> Yeah, certainly. I'm going to start right with Log4j top of mind. 
And the reason why is because we're seeing a real paradigm shift here where things are no longer being attacked at the network layer, they're being attacked at the application layer, and in the application stack itself. And that is a huge shift left. And that's taking in DevSecOps now as a real priority in 2022. That's a real paradigm shift over the last 20 years. That's not where attacks used to come from. And this is going to have a lot of changes. You called out a bunch of names in there that are, they're either going to work. I would add to that list Wiz. I would add Orca Security. Two names in our emerging technology study, in addition to the ones you added that are involved in cloud security and container security. These names are either going to get gobbled up. So the traditional legacy names are going to have to start writing checks and, you know, legacy is not fair, but they're in the data center, right? They're, on-prem, they're not cloud native. So these are the names that money is going to be flowing to. So they're either going to get gobbled up, or we're going to see some IPO's. And on the other thing I want to talk about too, is what you mentioned. We have CrowdStrike on that list, We have SentinalOne on the list. Everyone knows them. Our data was so strong on Tanium that we actually went positive for the first time just today, just this morning, where that was released. The trifecta of these are so important because of what you mentioned, under resourcing. We can't have security just tell us when something happens, it has to automate, and it has to respond. So in this next generation of EDR and XDR, an automated response has to happen because people are under-resourced, salaries are really high, there's a skill shortage out there. Security has to become responsive. It can't just monitor anymore. >> Yeah. Great. And we should call out too. So we named some names, Snyk, Aqua, Arctic Wolf, Lacework, Netskope, Illumio. These are all sort of IPO, or possibly even M&A candidates. All right. Our next prediction goes right to the way we work. Again, something that ETR has been on for awhile. We're calling for a major rethink in remote work for 2022. We had predicted last year that by the end of 2021, there'd be a larger return to the office with the norm being around a third of workers permanently remote. And of course the variants changed that equation and, you know, gave more time for people to think about this idea of hybrid work and that's really come in to focus. So we're predicting that is going to overtake fully remote as the dominant work model with only about a third of the workers back in the office full-time. And Erik, we expect a somewhat lower percentage to be fully remote. It's now sort of dipped under 30%, at around 29%, but it's still significantly higher than the historical average of around 15 to 16%. So still a major change, but this idea of hybrid and getting hybrid right, has really come into focus. Hasn't it? >> Yeah. It's here to stay. There's no doubt about it. We started this in March of 2020, as soon as the virus hit. This is the 10th iteration of the survey. No one, no one ever thought we'd see a number where only 34% of people were going to be in office permanently. That's a permanent number. They're expecting only a third of the workers to ever come back fully in office. And against that, there's 63% that are saying their permanent workforce is going to be either fully remote or hybrid. 
And this, I can't really explain how big of a paradigm shift this is. Since the start of the industrial revolution, people leave their house and go to work. Now they're saying that's not going to happen. The economic impact here is so broad, on so many different areas And, you know, the reason is like, why not? Right? The productivity increase is real. We're seeing the productivity increase. Enterprises are spending on collaboration tools, productivity tools, We're seeing an increased perception in productivity of their workforce. And the CFOs can cut down an expense item. I just don't see a reason why this would end, you know, I think it's going to continue. And I also want to point out these results, as high as they are, were before the Omicron wave hit us. I can only imagine what these results would have been if we had sent the survey out just two or three weeks later. >> Yeah. That's a great point. Okay. Next prediction, we're going to look at the supply chain, specifically in how it's affecting some of the hardware spending and cloud strategies in the future. So in this chart, ETRS buyers, have you experienced problems procuring hardware as a result of supply chain issues? And, you know, despite the fact that some companies are, you know, I would call out Dell, for example, doing really well in terms of delivering, you can see that in the numbers, it's pretty clear, there's been an impact. And that's not not an across the board, you know, thing where vendors are able to deliver, especially acute in PCs, but also pronounced in networking, also in firewall servers and storage. And what's interesting is how companies are responding and reacting. So first, you know, I'm going to call the laptop and PC demand staying well above pre-COVID norms. It had peaked in 2012. Pre-pandemic it kept dropping and dropping and dropping, in terms of, you know, unit volume, where the market was contracting. And we think can continue to grow this year in double digits in 2022. But what's interesting, Erik, is when you survey customers, is despite the difficulty they're having in procuring network hardware, there's as much of a migration away from existing networks to the cloud. You could probably comment on that. Their networks are more fossilized, but when it comes to firewalls and servers and storage, there's a much higher propensity to move to the cloud. 30% of customers that ETR surveyed will replace security appliances with cloud services and 41% and 34% respectively will move to cloud compute and storage in 2022. So cloud's relentless march on traditional on-prem models continues. Erik, what do you make of this data? Please weigh in on this prediction. >> As if we needed another reason to go to the cloud. Right here, here it is yet again. So this was added to the survey by client demand. They were asking about the procurement difficulties, the supply chain issues, and how it was impacting our community. So this is the first time we ran it. And it really was interesting to see, you know, the move there. And storage particularly I found interesting because it correlated with a huge jump that we saw on one of our vendor names, which was Rubrik, had the highest net score that it's ever had. So clearly we're seeing some correlation with some of these names that are there, you know, really well positioned to take storage, to take data into the cloud. 
So again, you didn't need another reason to, you know, hasten this digital transformation, but here we are, we have it yet again, and I don't see it slowing down anytime soon. >> You know, that's a really good point. I mean, it's not necessarily bad news for the... I mean, obviously you wish that it had no change, would be great, but things, you know, always going to change. So we'll talk about this a little bit later when we get into the Supercloud conversation, but this is an opportunity for people who embrace the cloud. So we'll come back to that. And I want to hang on cloud a bit and share some recent projections that we've made. The next prediction is the big four cloud players are going to surpass 167 billion, an IaaS and PaaS revenue in 2022. We track this. Observers of this program know that we try to create an apples to apples comparison between AWS, Azure, GCP and Alibaba in IaaS and PaaS. So we're calling for 38% revenue growth in 2022, which is astounding for such a massive market. You know, AWS is probably not going to hit a hundred billion dollar run rate, but they're going to be close this year. And we're going to get there by 2023, you know they're going to surpass that. Azure continues to close the gap. Now they're about two thirds of the size of AWS and Google, we think is going to surpass Alibaba and take the number three spot. Erik, anything you'd like to add here? >> Yeah, first of all, just on a sector level, we saw our sector, new survey net score on cloud jumped another 10%. It was already really high at 48. Went up to 53. This train is not slowing down anytime soon. And we even added an edge compute type of player, like CloudFlare into our cloud bucket this year. And it debuted with a net score of almost 60. So this is really an area that's expanding, not just the big three, but everywhere. We even saw Oracle and IBM jump up. So even they're having success, taking some of their on-prem customers and then selling them to their cloud services. This is a massive opportunity and it's not changing anytime soon, it's going to continue. >> And I think the operative word there is opportunity. So, you know, the next prediction is something that we've been having fun with and that's this Supercloud becomes a thing. Now, the reason I say we've been having fun is we put this concept of Supercloud out and it's become a bit of a controversy. First, you know, what the heck's the Supercloud right? It's sort of a buzz-wordy term, but there really is, we believe, a thing here. We think there needs to be a rethinking or at least an evolution of the term multi-cloud. And what we mean is that in our view, you know, multicloud from a vendor perspective was really cloud compatibility. It wasn't marketed that way, but that's what it was. Either a vendor would containerize its legacy stack, shove it into the cloud, or a company, you know, they'd do the work, they'd build a cloud native service on one of the big clouds and they did do it for AWS, and then Azure, and then Google. But there really wasn't much, if any, leverage across clouds. Now from a buyer perspective, we've always said multicloud was a symptom of multi-vendor, meaning I got different workloads, running in different clouds, or I bought a company and they run on Azure, and I do a lot of work on AWS, but generally it wasn't necessarily a prescribed strategy to build value on top of hyperscale infrastructure. There certainly was somewhat of a, you know, reducing lock-in and hedging the risk. 
But we're talking about something more here. We're talking about building value on top of the hyperscale gift of hundreds of billions of dollars in CapEx. So in addition, we're not just talking about transforming IT, which is what the last 10 years of cloud have been like, you know, doing work in the cloud because it's cheaper or simpler or more agile, all of those things. So that's beginning to change. And this chart shows some of the technology vendors that are leaning toward this Supercloud vision, in our view, building on top of the hyperscalers that are highlighted in red. Now, Jerry Chen at Greylock wrote a piece called Castles in the Cloud. It got our thinking going, and he and the team at Greylock are building out a database of all the cloud services and all the sub-markets in cloud. And that got us thinking that there's a higher level of abstraction coalescing in the market, where there's tight integration of services across clouds, but the underlying complexity is hidden, and there's an identical experience across clouds, and even, in my dreams, on-prem for some platforms. So what's new or new-ish and evolving are things like location independence, you've got to include the edge on that, metadata services to optimize locality of reference and data source awareness, governance, privacy, you know, application independent and dependent, actually, recovery across clouds. So we're seeing this evolve. And in our view, the two biggest things that are new are the technology, which is evolving, where you're seeing services truly integrate cross-cloud, and the other big change is digital transformation, where there's this new innovation curve developing, and it's not just about making your IT better. It's about SaaS-ifying and automating your entire company workflows. So Supercloud, it's not just a vendor thing to us. It's the evolution of, you know, the Marc Andreessen quote, "Every company will be a SaaS company." Every company will deliver capabilities that can be consumed as cloud services. So Erik, the chart shows spending momentum on the y-axis and net score, or presence in the ETR data set, or market share, on the x-axis. We've talked about Snowflake as the poster child for this concept, where the vision is you're in their cloud and sharing data in that safe place. Maybe you could make some comments, you know, what do you think of this Supercloud concept and this change that we're sensing in the market? >> Well, I think you did a great job describing the concept. So maybe I'll support it a little bit on the vendor level and then kind of give examples of the ones that are doing it. You stole the lead there with Snowflake, right? There is no better example than what we've seen with what Snowflake can do. Cross-portability in the cloud, the ability to be, you know, completely agnostic, but then build those services on top. They're better than anything they could offer. And it's not just there. I mean, you mentioned edge compute, that's a whole other layer where this is coming in. And Cloudflare, the momentum there is out of control. I mean, this is a company that started off just doing CDN and trying to compete with Akamai. And now they're giving you a full soup-to-nuts offering with security and an actual edge compute layer, and it's a fantastic company. What they're doing is another great example of what you're seeing here. I'm going to call out HashiCorp as well.
They're more of an infrastructure services, a little bit more of an open-source freemium model, but what they're doing as well is completely cloud agnostic. It's dynamic. It doesn't care if you're in a container, it doesn't matter where you are. They recently IPO'd and they're down 25%, but their data looks so good across both of our emerging technology and TISA survey. It's certainly another name that's playing on this. And another one that we mentioned as well is Rubrik. If you need storage, compute, and in the cloud layer and you need to be agnostic to it, they're another one that's really playing in this space. So I think it's a great concept you're bringing up. I think it's one that's here to stay and there's certainly a lot of vendors that fit into what you're describing. >> Excellent. Thank you. All right, let's shift to data. The next prediction, it might be a little tough to measure. Before I said we're trying to be a little black and white here, but it relates to Data Mesh, which is, the ideas behind that term were created by Zhamak Dehghani of ThoughtWorks. And we see Data Mesh is really gaining momentum in 2022, but it's largely going to be, we think, confined to a more narrow scope. Now, the impetus for change in data architecture in many companies really stems from the fact that their Hadoop infrastructure really didn't solve their data problems and they struggle to get more value out of their data investments. Data Mesh prescribes a shift to a decentralized architecture in domain ownership of data and a shift to data product thinking, beyond data for analytics, but data products and services that can be monetized. Now this a very powerful in our view, but they're difficult for organizations to get their heads around and further decentralization creates the need for a self-service platform and federated data governance that can be automated. And not a lot of standards around this. So it's going to take some time. At our power panel a couple of weeks ago on data management, Tony Baer predicted a backlash on Data Mesh. And I don't think it's going to be so much of a backlash, but rather the adoption will be more limited. Most implementations we think are going to use a starting point of AWS and they'll enable domains to access and control their own data lakes. And while that is a very small slice of the Data Mesh vision, I think it's going to be a starting point. And the last thing I'll say is, this is going to take a decade to evolve, but I think it's the right direction. And whether it's a data lake or a data warehouse or a data hub or an S3 bucket, these are really, the concept is, they'll eventually just become nodes on the data mesh that are discoverable and access is governed. And so the idea is that the stranglehold that the data pipeline and process and hyper-specialized roles that they have on data agility is going to evolve. And decentralized architectures and the democratization of data will eventually become a norm for a lot of different use cases. And Erik, I wonder if you'd add anything to this. >> Yeah. There's a lot to add there. The first thing that jumped out to me was that that mention of the word backlash you said, and you said it's not really a backlash, but what it could be is these are new words trying to solve an old problem. And I do think sometimes the industry will notice that right away and maybe that'll be a little pushback. And the problems are what you already mentioned, right? 
We're trying to get to an area where we can have more assets in our data estate, more deliverable, and more usable and relevant to the business. And you mentioned that as self-service with governance laid on top. And that's really what we're trying to get to. Now, there's a lot of ways you can get there. Data fabric is really the technical aspect and data mesh is really more about the people, the process, and the governance, but the two of those need to meet in order to make that happen. And as far as tools, you know, there's even cataloging names like Informatica that play in this, right? Istio plays in this, Snowflake plays in this. So there's a lot of different tools that will support it. But I think you're right in calling out AWS, right? They have AWS Lake Formation, they have AWS Glue. They have so much that's trying to drive this. But I think the really important thing to keep here is what you said. It's going to be a decade-long journey. And by the way, we're on the shoulders of the giants of a decade ago that have gotten us to this point to talk about these new words, because this has been an ongoing type of issue, but ultimately, no matter which vendors you use, this is going to come down to your data governance plan and the data literacy in your business. This is really about workflows and people as much as it is tools. So, you know, the new term of data mesh is wonderful, but you still have to have the people and the governance and the processes in place to get there. >> Great, thank you for that, Erik. Some great points. All right, for the next prediction, we're going to shine the spotlight on two of our favorite topics, Snowflake and Databricks, and the prediction here is that, of course, Databricks is going to IPO this year, as expected. Everybody sort of expects that. But the prediction really is that while these two companies are facing off already in the market, they're also going to compete with each other for M&A, especially as Databricks, you know, after the IPO, is going to have more prominence and a war chest. So first, these companies, they're both looking pretty good on the same XY graph, with spending velocity on the vertical axis and presence, or market share, on the horizontal axis. And both Snowflake and Databricks are well above that magic 40% red dotted line, the elevated line, to us. And for context, we've included a few other firms. So you can see kind of what a good position these two companies are really in, especially, I mean, Snowflake, wow, it just keeps moving to the right on this horizontal picture, but maintaining that high Net Score on the Y axis. Amazing. So, but here's the thing, Databricks is using the term Lakehouse, implying that it has the best of data lakes and data warehouses. And Snowflake has the vision of the data cloud and data sharing. And Snowflake, they've nailed analytics, and now they're moving into data science, in the domain of Databricks. Databricks, on the other hand, has nailed data science and is moving into the domain of Snowflake, in the data warehouse and analytics space. But to really make this seamless, there has to be a semantic layer between these two worlds, and they're either going to build it or buy it or both. And there are other areas like data clean rooms and privacy and data prep and governance and machine learning tooling and AI, all that stuff. So the prediction is they'll not only compete in the market, but they'll also step up their competition for M&A, especially after the Databricks IPO.
We've listed some target names here, like Atscale, you know, Iguazio, Infosum, Habu, Immuta, and I'm sure there are many, many others. Erik, you care to comment? >> Yeah. I remember a year ago when we were talking about Snowflake when they first came out, and you and I said, "I'm shocked if they don't use this war chest of money and start going after more, because we know Slootman, we have so much respect for him. We've seen his playbook." And I'm actually a little bit surprised that here we are, 12 months later, and he hasn't spent that money yet. So I think this prediction's just spot on. To talk a little bit about the data side, Snowflake is in rarefied air. It's all by itself. It is the number one net score in our entire TSIS universe. It is absolutely incredible. There's almost no negative intentions. Global 2000 organizations are increasing their spend on it. We maintain our positive outlook. It really just, you know, stands alone. Databricks, however, also has one of the highest overall net sentiments in the entire universe, not just its area. And this is the first time we're coming up positive on this name as well. It looks like it's not slowing down. Really interesting comment you made, though, that we normally hear from our end-user commentary in our panels and our interviews. Databricks is really more used for the data science side. The ML and AI side is where it's best positioned in our survey. So it might still have some catching up to do to really have that caliber of usability that, you know, Snowflake is seeing right now. There's Snowflake having its own marketplace. There's just a lot more to Snowflake right now than there is to Databricks. But I do think you're right. These two massive vendors are sort of heading towards a collision course, and it'll be very interesting to see how they deploy their cash. I think Snowflake, with their incredible management and leadership, probably will make the first move. >> Well, I think you're right on that. And by the way, I'll just add, you know, Databricks has basically said, hey, it's going to be easier for us to come from data lakes into data warehouse. I'm not sure I buy that. I think, again, that semantic layer is a missing ingredient. So it's going to be really interesting to see how this plays out. And to your point, you know, Snowflake's got the war chest, they got the momentum, they've got the public presence now since November, 2020. And so, you know, they're probably going to start making some aggressive moves. Anyway, next prediction is something, Erik, that you and I have talked about many, many times, and that is observability. I know it's one of your favorite topics. And we see this world screaming for more consolidation as it goes all in on cloud native. These legacy stacks, they're fighting to stay relevant, but the direction is pretty clear. And the same XY graph lays out the players in the field, with some of the new entrants that we've also highlighted, like Observe and Honeycomb and ChaosSearch that we've talked about. Erik, we put a big red target around Splunk because everyone wants their gold. So please give us your thoughts. >> Oh man, I feel like I've been saying negative things about Splunk for too long. I've got a bad rap on this name. The Splunk shareholders come after me all the time. Listen, it really comes down to this. They're a fantastic company that was designed to do logging and monitoring and had some great tool sets around what you could do with it. But they were designed for the data center.
They were designed for on-prem. The world we're in now is so dynamic. Everything I hear from our end user community is that all net new workloads will be going to cloud native players. It's that simple. So Splunk is entrenched. It's going to continue doing what it's doing and it does it really, really well. But if you're doing something new, the new workloads are going to be in a dynamic environment and that's going to go to the cloud native players. And in our data, it is extremely clear that that means Datadog and Elastic. They are by far number one and two in net score, increase rates, adoption rates. It's not even close. Even New Relic actually is starting to, you know, entrench itself really well. We saw New Relic's adoption going up, which is super important because they went to that freemium model, you know, to try to get a little bit of an entrenched customer base, and that's working as well. And then you made a great list here of all the new entrants, but it goes beyond this. There's so many more. In our emerging technology survey, we're seeing Century, Catchpoint, Securonix, Lucidworks. There are so many options in this space. And let's not forget, the biggest data that we're seeing is with Grafana. And Grafana Labs has yet to turn on their enterprise. Elastic did it, why can't Grafana Labs do it? They have an enterprise stack. So when you look at how crowded this space is, there has to be consolidation. I recently hosted a panel and every single guy on that panel said, "Please give me a consolidation." Because they're the end users trying to actually deploy these, and it's getting a little bit confusing. >> Great. Thank you for that. Okay. Last prediction. Erik, might be a little out of your wheelhouse, but you know, you might have some thoughts on it. And that's that hybrid events become the new digital model and a new category in 2022. You got these pure play digital or virtual events. They're going to take a back seat to in-person hybrids. The virtual experience will eventually give way to metaverse experiences, and that's going to take some time, but the physical hybrid is going to drive it. And metaverse is ultimately going to define the virtual experience, because the virtual experience today is not great. Nobody likes virtual. And hybrid is going to become the business model. Today's pure virtual experience has to evolve, you know, theCUBE first delivered hybrid mid last decade, but nobody really wanted it. We did Mobile World Congress last summer in Barcelona in an amazing hybrid model, which we're showing in some of the pictures here. Alex, if you don't mind bringing that back up. And every physical event that we're doing now has a hybrid and virtual component, including the pre-records. You can see in our studios, you see the green screen. I don't know. Erik, what do you think about, you know, the Zoom fatigue and all this? I know you host regular events with your round tables, but what are your thoughts? >> Well, first of all, I think you and your company here have just done an amazing job on this. So that's really your expertise. I spent 20 years of my career hosting intimate Wall Street idea dinners. So I'm better at navigating a wine list than I am navigating a conference floor. But I will say that, you know, the trend just goes along with what we saw. If 35% are going to be fully remote and 70% are going to be hybrid, then our events are going to be as well. I used to host round table dinners on, you know, one or two nights a week.
Now those have gone virtual. They're now panels. They're now one-on-one interviews. You know, we do chats. We do submitted questions. We do what we can, but there's no reason that this is going to change anytime soon. I think you're spot on here. >> Yeah. Great. All right. So there you have it from Erik and me. Listen, we always love the feedback. Love to know what you think. Thank you, Erik, for your partnership, your collaboration, and love doing these predictions with you. >> Yeah. I always enjoy them too. And I'm actually happy. Last year you made us do a baker's dozen, so thanks for keeping it to 10 this year. >> (laughs) We've got a lot to say. I know, you know, we cut out. We didn't do much on crypto. We didn't really talk about SaaS. I mean, I got some thoughts there. We didn't really do much on containers and AI. >> You want to keep going? I've got another 10 for you. >> RPA... All right, we'll have you back and then let's do that. All right. All right. Don't forget, these episodes are all available as podcasts, wherever you listen, all you have to do is search Breaking Analysis podcast. Check out ETR's website at etr.plus, they've got a new website out. It's the best data in the industry, and we publish a full report every week on wikibon.com and siliconangle.com. You can always reach out on email, David.Vellante@siliconangle.com. I'm @DVellante on Twitter. Comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (mellow music)
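The Data Mesh prediction above — domains controlling their own data lakes, with lakes and S3 buckets becoming nodes that are discoverable and whose access is governed — can be pictured with a small sketch. The snippet below uses boto3's put_bucket_policy; the bucket name, account ID, and role ARN are hypothetical, and this is only an illustration of the governed-access idea, not a reference implementation.

```python
# Illustrative sketch: a producing domain "owns" its S3-based data product and
# publishes a governed, read-only grant to a consuming domain's IAM role.
# Bucket name, account ID, and role ARN below are hypothetical.
import json
import boto3

s3 = boto3.client("s3")

producer_bucket = "orders-domain-data-products"  # hypothetical
consumer_role = "arn:aws:iam::111122223333:role/marketing-domain-readers"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GovernedReadForMarketingDomain",
            "Effect": "Allow",
            "Principal": {"AWS": consumer_role},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{producer_bucket}",
                f"arn:aws:s3:::{producer_bucket}/published/*",  # only the published data product
            ],
        }
    ],
}

# The producing domain applies the policy itself (decentralized ownership),
# while the grant stays auditable and revocable (federated governance).
s3.put_bucket_policy(Bucket=producer_bucket, Policy=json.dumps(policy))
```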

Published Date : Jan 22 2022


Nick Durkin, Harness.io | KubeCon + CloudNative Con NA 2021


 

>> Oh, welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2021. I'm John Furrier, host of theCUBE, with David Nicholson, our cloud host analyst, and it's exciting to be back in person at an event. So we're back. It's been two years with KubeCon and the Linux Foundation; last time it was a hybrid event, and we have a great guest here, CUBE alumni Nick Durkin, field CTO of Harness and harness.io. The URL — love the .io. Good to see you. >> Thank you guys for having me on. I genuinely appreciate it. >> Thanks for coming on. You were a part of our AWS startup showcase, where you guys were featured as a fast growing mature company, uh, as cloud scales, you guys have been doing extremely well. So congratulations. But we're in reality now, right? So, okay. Cloud native has kind of like, okay, we don't have to sell it anymore. People are buying into it. Um, and now they're operationalizing it with cloud operations, which means you're running stuff, applications and infrastructure as code, and it costs money. Yeah. Martin Casado at Andreessen Horowitz, you know, repatriation from the cloud. So there's a lot of, there's some cost conversations starting to happen. This is what you guys are in the middle of. >> Yeah, absolutely. What's interesting is when you think about it today, we want to shift left. We want to empower all the engineers, we want to empower people. We're not giving them the data they need, right. They get a call from the CFO 30 days later, as opposed to actually being able to look at what change I did and how it actually affected things. And this is what we're bringing in. Allowing people to have that is now really empowering. So throughout the whole software delivery life cycle, from CI, continuous integration, to continuous delivery, feature flagging, and even bringing in cost modeling and cloud cost management. And even then being able to shut down the services that you're not using — how much of that is waste? We talk about it at every single cloud conference, how much is waste. And so being able to actually turn those on, use those accordingly and then take advantage of even the cheapest instances when you should. That's really what... >> It's so funny. People almost trip over dollars to pick up pennies in the cloud business because they're so focused on innovation that they think, okay, we've got to just innovate at all costs, but at some point you can make it productive for the developers, in process, in the pipeline, to actually manage that. >> That's exactly it. I mean, if you think about it, to me, in order to reach true continuous delivery, we have to automate everything. Right. But that doesn't mean stop at just delivering, you know, to production. That means to the customer, which means we've got to make them happy, but then ultimately all of those resources in dev and QA and staging and UAT, we've got to consider those as well. And if we're not being mindful of it, the costs are astronomical, right? And we've seen it time and time again with every company — you've seen every article about how they've blown through all their budgets. So bring it to the people that can affect change. That's really the difference, making it visible, looking at it.
In-depth, not just at the cloud level and all the spend there, but also even at the, uh, thinking about it, the Kubernetes level, down to the containers, the pods, and understanding where the resources are even inside of the clusters, and bringing that as an aggregate, not just for visibility and giving recommendations, but now, more importantly, because it's part of a pipeline, to start taking action. That's where it's interesting. It's not just about being able to see it and understand it and hope, right? Hope is not a strategy; acting upon it is what makes it valuable. And that's part of the automate everything. >> Yeah, well, at the dawn of the age of DevOps, uh, there was a huge incentive for a developer just to get their job done, to seize control of infrastructure, and the idea of infrastructure as code, you know, when it was being born, was fantastic. I've always wondered though, you know, be careful what you wish for. Do you really want all of that responsibility? So we've got responsibility from a compliance and security perspective and of course cost. So where do we go from here, I guess is the question. >> Yeah. So when we look at building this all together, I think when we think about software delivery, everybody wants to go fast. We start with velocity, right? Everybody says, that's where I want to go. And to your point, with governance and compliance, the next roadblock you hit is wait. In order to go fast, I have to do it appropriately. I've got governing bodies that tell me how this has to work. And that becomes a challenge. >> It slows it down too, doesn't it? I mean, basically people are getting pissed off, right? The general sentiment is that developers are moving fast with their code. And then they have to stop. Compliance has to give the green light, sometimes days, correct? Uh, it used to be weeks, now it's days, and it's still unacceptable. So there's always been that tension where the security groups, or say IT, or finance were like, slow down, and they actually want to go faster. So that has to be policy-based something. Yep. This is the future. What is your take on that? >> My take on this is pretty simple. When everybody talks about people, process and technology, it's kind of bogus, right? It's all about confidence. If you're confident that your developers can deploy appropriately and they're not going to do something wrong, you'll let them deploy all the time. Well, that requires process. But if you have tooling that literally guarantees your governance, makes sure that at no point in time can any of your developers actually do something wrong, now you have... >> That's the key. That's the key because you're giving them policy-based guardrails to execute their programs in. >> And that's it. So now you can free up all those pieces. So all those bottlenecks, all that waiting, all that time — and this is how all of our customers, they move away from, you know, change advisory boards that approve deployments. >> Can you give us some, uh, customer anecdotal examples of this in action, and kind of the love letters you get, or take us through a customer use case of how it all works. >> So this is one of my favorites. So NCR, National Cash Register. If you slide a credit card at like a Chick-fil-A or a Safeway, right? Um, traditional technology.
But what was interesting is they went from doing a PCI audit, which would take seven days, to doing a PCI audit right now with Harness, because... >> And by the way, when you get to the seventh or sixth day, the things that you did on day one have changed. >> Exactly, exactly. And so now, because of using Harness, everything's audited, and all the changes are controlled to make sure that developers, again, can only do what they're allowed. They only get to promote to production if they've met all their security requirements, all their compliance requirements, all their quality checks. Now, because of that, they literally gave a read-only view of Harness to their auditor. And in three hours it was over. And it's because now we're that evidence file from code commit through to production. Yeah. It's there for point of sale compliance. >> So what are the benefits to them? What's the result — saves them time, saves them money? What's the good — they free up more time, I see, it chops it down. That's the key. >> Yeah. It's actually something we didn't build into our ROI calculators, which was, we talked to their engineers and we gave them their nights and their weekends back, which I thought was amazing. But Thursday night, when we're doing that deploy, they don't have to be up. Harness is actually managing and understanding, using machine learning to understand what normal looks like. So they don't have to sit and look at the NOC, or sit in the war room and eat the free pizza. Yeah. Right. And then when those things break, same concept — the rates aren't as good. >> So I got to ask you, I got you here. You know, the software development delivery lifecycle is radically being overhauled right now, which people generally agree is the case; the old models are different. How do you see your vision around AI and automation playing into this? Because you could say, okay, we're going to have different kinds of coding styles. This batch has got an AI block here. It's very Lego block-like. Yep. Okay. Services and higher level services in the cloud. What's your reaction to how this impacts automation? >> Sure. So throughout our entire platform, we've designed our AI to take care of the worst parts of anyone's job. Ask any DevOps person if they love babysitting deployments — they don't, Harness handles that for them. Ask your engineers if they love sitting there waiting for their tests to run. Every time they build, they go get coffee, right? Because we're waiting for all of our tests to run. Yeah. Right. The reality... >> ...is sometimes they have to wait days. >> And that's it. But like, if I change the gas cap on, uh, on your car, would you expect me to check every light switch and every electronic piece? No. Well, why do we do that with code? And so our AI, our ML, is designed to remove all the things that people hate. It's not to remove people's jobs. It's actually to make their jobs much better. >> How do you guys feed the data? What's the training algorithm for that? How does that work? >> Yeah, actually, it's interesting. A lot of people think it's going to take a ton of time to figure this out. The good news is we start seeing this on the second deployment. On the second build, we have to have a baseline of what good looks like, and that's where it starts. And it goes from there. And by the way, this isn't — a lot of people say AI and ML, and I teach a class on this, because ML is not standard deviation. It's not just some checks.
So we use a massive amount of machine learning, but we have neural networks to think about things like engineers do. Like, if we looked at a log and I saw the same log with two different user IDs, you and I would know, well, it's the same thing, it's just different users, but machine learning models don't. So we've got to build neural networks to actually think like humans. So that... >> So that's the whole expectation maximization kind of concept people talk about. >> Well, and that's it, because at the end of the day, like I said, I'm not trying to take people's jobs. I want to... >> Yeah, you want to move the crap work out of the way, and all the redundant, heavy lifting that they have to do every single time — the same way we've... >> Built mechanical muscle in the early 1900s, right? And it made everyone's jobs easier, allowed them to do more with their time. That's exactly what we're doing here. >> I mean, we've seen the big old guys in the industry trying to evolve. You got the hot startups coming out. So you got, you know, adapt or die, the classic thing we've been saying for many years, David, on theCUBE, you know that. So it's like, this is a moment of truth. We're going to see who comes out the other side. How do you, Nick — what would be your kind of guess of when that other side is, when are we gonna know the winners and the losers truly, in the sense of where we are now? >> So I think what I've found is that in this space specifically, there's a constant shift, and this is something with software. And the problem is that we see them come in ebbs and flows, right? And very few times are there businesses that actually carry the model. And what you find is that when they focus on one specific problem, it solves it. Now, if I was working on VMs a few years ago, great, but now we're here at KubeCon, right? And that's because it's eaten, uh, that side of the world. And so I think it's the companies that can actually stand the test of time and continue to expand to where the problems are. Right. And that's one of the things that I traditionally think about with Harness, and we've done it. We cover our customers where they are — the old mainframes, if you had to, where they were, where they are with their traditional, their VMs.
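Durkin's example above — the same log line with two different user IDs should be treated as one event, which off-the-shelf models won't do on their own — can be made concrete with a simple normalization step. The patterns and sample messages below are hypothetical and are not Harness's implementation; the sketch only shows how variable tokens get masked so structurally identical events collapse into a baseline that a deployment-verification step could compare against.

```python
# Minimal sketch of log "templating": strip the variable parts of a message so
# structurally identical events cluster together. Patterns and sample lines are
# illustrative only.
import re
from collections import Counter

PATTERNS = [
    (re.compile(r"\buser[_-]?id=\S+", re.I), "user_id=<ID>"),
    (re.compile(r"\b\d+\.\d+\.\d+\.\d+\b"), "<IP>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def template(line: str) -> str:
    for pattern, token in PATTERNS:
        line = pattern.sub(token, line)
    return line

logs = [
    "payment failed user_id=1842 amount=120 from 10.0.3.7",
    "payment failed user_id=9911 amount=75 from 10.0.8.2",
    "cache miss key=session-1842",
]

baseline = Counter(template(l) for l in logs)
print(baseline.most_common())
# The two payment failures collapse into one template; a verification step can
# then compare post-deploy template frequencies against the pre-deploy baseline.
```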
Because if you can actually parameterize that and make it so the engineers don't have to care, they can do what they're best at. Hey, I'm great at turning code into artifacts — let them do that and have tooling take care of the rest. This is where our goal is. Again, allow people... >> To do what they love. And this is kind of the new roles that are changing. What SRE has done — everyone talks about the SRE, and in some cases it's just the dev ops guy — but it's not just that, there's also, uh, different roles emerging. It's an architectural game at this point, we would say. >> I'd say a hundred percent. And this is where the decisions that you make architecturally matter. If you don't know how to then roll them out, this is what we've seen time and time again: you go to these large companies, they've got these great architectures they're planning, and four years later we haven't reached it, because to that point, process... >> The process killed them. >> Four different new tools throughout the process. Well, yeah. >> So when do we hit peak Kubernetes? >> Peak Kubernetes? I think we have a bit to go, and I'm excited about the networking space and really what we're doing there, and bringing that holistic portion of the network. Like when Istio was originally released, I thought that was one of the most amazing things, uh, to truly come to it. And I think there's a vast space in networking. Um, and so I think in the next few years, we're going to see this, you know, turn into that a hundred percent utilized across the board. This will be where everyone's workloads continue to exist, um, somewhat like VMs were. >> And no fear of developers as code in the very near future. You're talking about automating the mundane, correct? Uh, there have been stories recently about the three-day workweek, you know; as a fan of utopian science fiction myself, as opposed to dystopian, I think that, you know, technology does have the opportunity to lift all boats, and it's nothing to be afraid of. You know, the fact that I put my dishes in the dishwasher and they run by themselves for three hours — it's a good thing. It's a great thing. >> I don't need to deal with that. Yeah, I agree. No, I think that's — and that's what I said in the beginning, right? That's really where we can start empowering people. So allow them to do what they're good at and do what they're best at. And if you look at why people quit, we don't have to look too hard to find out why: because they're relegated to babysitting and implementing, and they're told everywhere they go, they're not going to have to... >> That's the line. All right, we've got to break, but it's great insight to have you on theCUBE. One final question for you. Um, I got to ask about the whole data as code, something that I've been riffing on for a bunch of years now. Infrastructure as code, we get that, but data is now the resource everyone needs, and everyone's trying to, okay, I have the control plane for this and that, but ultimately data cannot be siloed. This is a critical architectural element. How does that get resolved in the land of the competitive advantage and lock-in and whatnot? What's your take on that? >> So data's an interesting one, because it has gravity, and this is the problem. And as we move — as I think you guys know — as you move to the edge, as you move it places, there are insights to be taken at the edge and insights to be taken as it moves through.
And I think what you'll see honestly, going forward, is you'll see compute done differently, to your point. It needs to be aggregated, it needs to be able to be used together, but I think you'll see people computing it on its way through. So now, even in transport, you'll start seeing insights gained in real time before you can have the larger insights. And I see that happening more and more. Um, and I think ultimately we just want to empower that. >> Nick, great to have you on, field CTO of Harness, and harness.io is the URL. Check it out. Thanks for the insight. >> Thank you so much. Great comments. Appreciate it. >> A natural CUBE analyst right here, Nick. Of course, we've got our analyst right here, David Nicholson — you're good on your own. I'm John Furrier, your host. Thanks for watching. Stay with us, two more days of coverage. We'll be back after this short break.
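The recurring theme of this conversation — policy-based guardrails evaluated by the pipeline itself, so teams can move away from change advisory boards while keeping governance and cost in view — might look something like the sketch below. The checks, thresholds, and field names are hypothetical and are not Harness's API; this is only a generic illustration of a pre-deployment policy gate.

```python
# Minimal sketch of a pre-deployment policy gate: the pipeline, not a change
# advisory board, decides whether a release may proceed. Checks and thresholds
# are hypothetical.
from dataclasses import dataclass

@dataclass
class ReleaseCandidate:
    image: str
    critical_cves: int
    tests_passed: bool
    approved_base_image: bool
    monthly_cost_estimate: float  # from a cloud-cost step, in dollars

def policy_gate(rc: ReleaseCandidate, cost_budget: float = 5000.0) -> list:
    """Return the list of violated policies; an empty list means deploy is allowed."""
    violations = []
    if not rc.tests_passed:
        violations.append("test suite must pass")
    if rc.critical_cves > 0:
        violations.append(f"{rc.critical_cves} critical CVEs must be fixed")
    if not rc.approved_base_image:
        violations.append("image must be built from an approved base image")
    if rc.monthly_cost_estimate > cost_budget:
        violations.append("projected cost exceeds the team's budget")
    return violations

rc = ReleaseCandidate("registry.example.com/checkout:1.4.2", 0, True, True, 3100.0)
problems = policy_gate(rc)
print("deploy" if not problems else f"blocked: {problems}")
```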

Published Date : Oct 13 2021


Ali Golshan, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Announcer: From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hello, and welcome back to theCUBE's coverage of KubeCon and CloudNativeCon 2021 virtual. I'm John Furrier, host of theCUBE, here with a great guest I'm excited to talk to. His company, where he was founding CTO, was bought by Red Hat. Ali Golshan, Senior Director of Global Software Engineering at Red Hat, formerly CTO of StackRox. Ali, thanks for coming on, I appreciate it. Thanks for joining us. >> Thanks for having me, excited to be here. >> So big acquisition in January, we covered it on SiliconANGLE. You guys, a security company, venture backed — Amplify, Sequoia and on and on. Big part of the Red Hat story in their security, as developers want to shift left, as they say, and as more and more modern applications are being developed. So congratulations. So real quick, just a quick highlight of what you guys do as a company and inside Red Hat. >> Sure, so the company's premise was built around how do you bring security to the entire application life cycle. So StackRox focuses on sort of three big areas that we talk about. One is, how do you secure the supply chain? The second part of it is, how do you secure infrastructure and do posture management, and then the third part is, how do you protect the workloads that run on top of that infrastructure. So this is the part that aligned really well with Red Hat, which is, Red Hat had wanted to take a lot of what we do around infrastructure, posture management, configuration management and developer tools and integrate it into a lot of the things they do, and obviously the workload protection part was a very seamless part of integrating us into the OpenShift part, because we were built around cloud native constructs, and obviously Red Hat having some of the foremost experts around cloud native sort of created a really great asset. >> Yeah, you guys got a great story. Obviously cloud native applications are rocking and rolling. You guys were in early — serverless emerges, Kubernetes, and then security in what I call the real-time developer workflow. Ones that are building really fast, pushing code. Now it's called day two operations. So cloud native day two operations kind of encapsulates this new environment. You guys were right in the sweet spot of that. So this became quite the big deal, Red Hat saw an opportunity to bring you in. What was the motivation when you guys did the deal? Was it like, "wow, this is a good fit"? How did you react? What was the vibe at StackRox when this was all going down? >> Yeah, so I think there's really three areas you look for anytime a company comes up and sort of starts knocking on your door. One is really, is the team going to be the right fit? Is the culture going to be the right environment for the people? For us, that was a big part of what we were taking into consideration. We found Red Hat's general culture, how they approach people, and sort of the overall approach to the community was very much aligned with what we were trying to do. The second part of it was really the product fit.
So we had, from very early on, started to focus purely on the Kubernetes components and doing everything we could — we call it sort of our product approach, built in versus bolted on — and this is sort of a philosophy that Red Hat had adopted for a long time, and it's a part of a lot of their developer tools, part of their shift left story, as well as part of OpenShift. And then the third part of it was really the larger strategy of how do you go to market. So we were hitting that point where we were in triple digit customers and we were thinking about scalability and how to scale the company. And that was the part that also fit really well, which was, obviously, Red Hat more and more hearing from their customers about the importance and the criticality of security. So that last part happened to be one part we ended up spending a lot of time on, and it ended up being sort of three out of three matches that made this acquisition happen. >> Well congratulations, always great to see startups in the right position. Good hustle, great product, great market. You guys did a great job, congratulations. >> Thank you. >> Now, the big news here at KubeCon, as a Linux Foundation open-source event, is that you guys are announcing that you're open-sourcing StackRox. This is huge news, obviously, you now work for an open-source company and so that was probably a part of it. Take us through the news, this is the top story here for this segment — take us through the open-source news. >> Yeah, so traditionally StackRox was a proprietary tool. We do have open-source tooling, but the entire platform in itself was a proprietary tool. This has been a number of discussions that we've had with the Red Hat team from the very beginning. And it sort of aligns around a couple of core philosophies. One is obviously Red Hat at its core being an open-source company and being very much plugged into the community and working with users and developers and engineers to be able to sort of get feedback and build better products. But I think the other part of it is that a lot of us, from a historic standpoint, have viewed security to be a proprietary thing, as we've always viewed the sort of magic algorithms or black boxes or some magic under the hood that really moved the needle. And that happens not to be the case anymore, also because StackRox's philosophy was really built around Kubernetes and built-in. We feel like one of the really great messages around why you open-source a security product is to build that trust with the community, being able to expose: here's how the product works, here's how it integrates, here are the actions it takes, here are the ramifications or repercussions of some of the decisions you may make in the product. Those all, I feel, make for very good stories of how you build connection, trust and communication with the community and actually get feedback on it. And obviously at its core, the company being very much focused on Kubernetes, developer tools, service mesh — these are all open-source toolings, obviously. So, for us it was very important to sort of talk the talk and walk the walk, and this was sort of an easy decision at the end of the day for us, to take the platform open-source. And we're excited about it, because I think most still want a productized, supported, commercial product. So while it's great to have some of the tip-of-the-spear customers look at it and adopt the open-source and be able to drive it themselves.
We're still hearing from a lot of the customers that what they do want is really that support and that continuous management, maintenance and improvement around the product. So we're actually pretty excited. We think it's only going to increase our velocity and momentum into the community. >> Well, I got some questions on how it's going to work, but I do want to get your comment because I think this is a pretty big deal. I had a conversation about 10 years ago with Doug Cutting, who was the founder of Hadoop, and he was telling me a story about a company he worked for — you know, all this coding, they went under, and the IP was gone, the software was gone — and it was a story to highlight that proprietary software sometimes can never see the light of day and it doesn't continue. Here, you guys are going to continue the story, continue the code. How does that feel? What's your expectations? How's that going to work? I'm assuming you're going to open it up, which means that anyone can download the code. Is that right? Take us through how it'll work. First of all, do you agree that this is going to stay alive, and how's it going to work? >> Yeah, I mean, I think as a founder one of the most fulfilling things to have is something you build that becomes sustainable and stands the test of time. And I think, especially in today's world, open-source is a tool that is in demand, and one in a market that's growing, so it's really a great way to do that. Especially if you have a sort of an established user base and customer base. And then to back that with the thousands of customers and users that come with Red Hat in itself gives us a lot of confidence that that's going to continue and only grow further. So the decision wasn't a difficult one, although, transparently, I feel like even if we had pushed back I think Red Hat was pretty determined about open-source and we'd have gotten there anyway, but it's to say that we actually were in agreement to be able to go down that path. I do think that there's a lot of details to be worked out, because obviously there's sort of a lot of nuances in how you build product and manage it and maintain it, and then how you introduce community feedback and community collaboration as part of open-source projects is another big part of it. I think the part we're really excited about is that it's very important to have really good community engagement, maintenance and response. And for us, even though we actually discussed this particular strategy during StackRox, one of the hindering aspects of that was really the resources required to be able to manage and maintain such a massive open-source project. So having Red Hat behind us and having a lot of this experience was very relevant. I think, as a startup, to start proprietary and suddenly open it and try to change your entire business model, or go-to-market strategy and commercialization, and change the entire culture of the company, can sometimes create a lot of headwind. And as a startup, like, sort of, I feel like every year you're just trying not to die until you create that escape velocity. So those were, I think, some of the risk items that Red Hat was able to remove for us, and as a result it made the decision that much easier.
And now you got the community behind you, 'cause you're going to have the CNCF and KubeCon. I mean, it's a pretty great community, the support is amazing. I think the only thing the engineers might want to worry about is going back into the code base and cleaning things up a bit, as people start to see the code — I'm like, wait a minute, their names are on it. So, it's always a fun time. All seriousness now, this is a big story on the DevSecOps side. And I want to get your thoughts on this, because Kubernetes is still emerging, and DevOps is awesome — we've been covering that for all of the life of theCUBE, for the 11 years now, and the greatness of DevOps — but now DevSecOps is critical, and Kubernetes native security is what people are looking at. When you look at that trend only continuing, what's your focus? What do you see? Now that you're in Red Hat, former CTO of StackRox and now part of Red Hat, it's going to get bigger and stronger — Kubernetes native and shifting left and DevSecOps. What's your focus? >> Yeah, so I would say our focus is really around two big buckets. One is Kubernetes native — sort of a different way to think about it, as we think about our roadmap planning and go-to-market strategy, is that it's mutually exclusive with being infrastructure native, that's how we think about it — and as a startup we really had to focus on an area, and Kubernetes was a great place for us to focus on because it was becoming the dominant orchestration engine. Now that we have the resources and the power of Red Hat behind us, the way we're thinking about this is infrastructure native. So, thinking about cloud native infrastructure, where you're using composable, reusable constructs and objects, how do you build potential offerings or features or security components that don't rely on third party tools or components anymore? How do you leverage the existing infrastructure itself to be able to conduct some of these traditional use cases? And one example we use for this particular scenario is networking. Networking — the way firewalling and segmentation was typically done was, people would tweak IP tables, or they would install, for example, a proxy or a container that would terminate mTLS or become inline, and it would create all sorts of operational and risk overhead for users and for customers. And one of the things we're really proud of, as sort of the company that pioneered this notion of cloud native security, is if you just leverage network policies in Kubernetes, you don't have to be inline, you don't have to have additional privileges, you don't have to create additional risks or operational overhead for users. So we're taking those sort of core philosophies and extending them. The same way we did with Kubernetes, all the way through service mesh, we're doing the same sorts of things with Istio: a lot of the things people are traditionally doing through, for example, proxies at layer six and seven, we want to do through Istio. And then, the same way, for example, we introduced a product called GoDBledger, which was an open-source tool, which would basically look at YAML and Helm charts and give you best-practices responses. And it's something we want, for example, in your Git repositories. We want to take those sorts of principles — enabling developers, giving them feedback, allowing them not to break their existing workflows — and leveraging components in existing infrastructure to be able to sort of push security into cloud native.
And really the two pillars we look at are ensuring we can get users and customers up and running as quickly as possible, and reducing as much as possible the operational overhead for them over time. So we feel these two are really at the core of open-sourcing and building into the infrastructure, which has sort of given us momentum over the last six years, and we feel pretty confident that with Red Hat's help we can even expand that further. >> Yeah, I mean, you bring up a good point, and certainly as you get more scale with Red Hat and the customer base, not only in dealing with the threat detection around containers and cloud native applications, you've got to kind of build into the life cycle, and you've got to figure out, okay, it's not just Kubernetes anymore, it's something else. And you've got Advanced Cluster Security with Red Hat, they've got the OpenShift cloud platform, you're going to have managed services, so this means you're going to have scale, right? So, how do you view that? Because now you're going to have you guys at the center of the advanced cluster security paradigm for Red Hat. That's a big deal for them, and they've got a lot of R&D and a lot of — I wouldn't say R&D, but they've got emerging technologies developing around that. We covered that in depth. So when you start to get into advanced cluster, it's compliance too, it's not just threat detection. You've got insights, telemetry, data acquisition, so you have to kind of be part of that now. How do you guys feel about that? Are you up for the task? >> Yeah, I hope so. It's early days, but we feel pretty confident about it, we have a very good team. So as part of advanced cluster security, we also work very closely with the advanced cluster management team in Red Hat, because it's not just about security, it's about how do you operationalize it, how do you manage it and maintain it, and, to your point, sort of run it long term at scale. The compliance part of it is a very important part. I still feel like that's in its infancy, and these are a lot of conversations we're having internally at Red Hat, which is, we all feel that compliance is going to sort of morph from the standard benchmarks you have from CIS, or particular compliance requirements like PCI or NIST, into how do you create more flexible and composable policies, through a unified language, that allow you to create more custom or more useful things specific to your business? So this is actually an area where we're doing a lot of collaboration with the advanced cluster management team, which is, how do you sort of bring to light a really easy way for customers to be able to describe and sort of abstract policies, and then at the same time be able to actually enforce them. So we think that's really the next key point of what we have to accomplish, to be able to sort of not only gain scale, but to be able to take this notion of not only detection and response, but be able to actually build in what we call declarative security into your infrastructure. And what that means is to be able to really dictate how you want your applications, your services, your infrastructure to be configured and run, and then anything that is sort of conflicting with that is auto-responded to, and I think that's really the larger vision that, with Red Hat, we're trying to accomplish.
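Golshan's earlier segmentation example — using native Kubernetes NetworkPolicy instead of an inline proxy, so there are no extra privileges or operational overhead — is also a concrete instance of the declarative security he describes here: you state how traffic should be configured, and anything that conflicts is a violation. A minimal sketch using the official Kubernetes Python client follows; the namespace, labels, and port are hypothetical.

```python
# Sketch of "built in, not bolted on" segmentation: a native Kubernetes
# NetworkPolicy instead of an inline proxy. Namespace, labels and port are
# hypothetical; the manifest is applied with the official Python client.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "payments-allow-checkout-only", "namespace": "payments"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {   # only checkout pods may reach payments, and only on its service port
                "from": [{"podSelector": {"matchLabels": {"app": "checkout"}}}],
                "ports": [{"protocol": "TCP", "port": 8443}],
            }
        ],
    },
}

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="payments", body=policy
)
```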
You got insights coming in from Red Hat. So all these things are kind of evolving. It's still early days and I think it was a nice move by Red Hat, so congratulations. Final question for you is, as you prepare to go to the next generation KubeCon is also seeing a lot more end user participation, people, you know, cloud native is going mainstream, when I say mainstream, seeing beyond the hyperscalers in the early adopters, Kubernetes and other infrastructure control planes are coming in you start to see the platforms emerge. Nobody wants another security tool, they want platforms that enable applications handle tools. As it gets more complicated, what's going to be the easy button in security cloud native? What's the approach? What's your vision on what's next? >> Yeah so, I don't know if there is an easy button in security and I think part of it is that there's just such a fragmentation and use cases and sort of designs and infrastructure that doesn't exist, especially if you're dealing with such a complex stack. And not only just a complex stack but a potentially use cases that not only span runtime but they deal with you deployment annual development life cycle. So the way we think about it is more sort of this notion that has been around for a long time which is the shared responsibility model. Security is not security's job anymore. Especially, because security teams probably cannot really keep up with the learning curve. Like they have to understand containers then they have to understand Kubernetes and Istio and Envoy and cloud platforms and APIs. and there's just too much happening. So the way we think about it is if you deal with security a in a declarative version and if you can state things in a way where how infrastructure is ran is properly configured. So it's more about safety than security. Then what you can do is push a lot of these best practices back as part of your gift process. Involve developers, engineers, the right product security team that are responsible for day-to-day managing and maintaining this. And the example we think about is, is like CVEs. There are plenty of, for example, vulnerability tools but the CVEs are still an unsolved problem because, where are they, what is the impact? Are they actually running? Are they being exploited in the wild? And all these things have different ramifications as you span it across the life cycle. So for us, it's understanding context, understanding assets ensuring how the infrastructure has to handle that asset and then ensuring that the route for that response is sent to the right team, so they can address it properly. And I think that's really our larger vision is how can you automate this entire life cycle? So, the information is routed to the right teams, the right teams are appending it to the application and in the future, our goal is not to just pardon the workload or the compute environment, but use this information to action pardon application themselves and that creates that additional agility and scalability. >> Yeah it's in the lifecycle of that built in right from the beginning, more productivity, more security and then, letting everything take over on the automation side. Ali congratulations on the acquisition deal with Red Hat, buyout that was great for them and for you guys. Take a minute to just quickly answer final final question for the folks watching here. The big news is you're open-sourcing StackRox, so that's a big news here at KubeCon. What can people do to get involved? 
Well, just share a quick commercial for what people can do to get involved. What are you guys looking for? Take a pledge to the community? >> Yeah, I mean, what we're looking for is more involvement and direct feedback from our community, from our users, from our customers. So there are a number, obviously the StackRox platform itself being open-source, and we have other open-source tools like KubeLinter. What we're looking for is feedback from users as to what are the pain points that they're trying to solve for. And then give us feedback as to how we're not addressing those, or how can we better design our systems? I mean, this is the sort of feedback we're looking for, and naturally with more resources, we can be a lot faster in response. So send us feedback, good or bad. We would love to hear it from our users and our customers and get a better sense of what they're looking for. >> Innovation out in the open, love it, got to love open-source going next gen. Ali Golshan, Senior Director of Global Software Engineering, the new title at Red Hat, former CTO and founder of StackRox, which Red Hat had acquired in January 2021. Ali, thanks for coming on, congratulations. >> Thanks for having me. >> Okay, that's theCUBE's coverage of KubeCon + CloudNativeCon 2021. I'm John Furrier, your host. Thanks for watching. (soft music)

Published Date : May 5 2021

SUMMARY :

Ali Golshan, former founder and CTO of StackRox and now Senior Director of Global Software Engineering at Red Hat, discusses the January 2021 acquisition and the announcement at KubeCon + CloudNativeCon 2021 that the StackRox platform is being open-sourced. He covers Kubernetes-native and declarative security, shift-left and the shared responsibility model, advanced cluster security and compliance, and context-aware vulnerability management across the application life cycle.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ali Golshan | PERSON | 0.99+
January, 2021 | DATE | 0.99+
John Furrier | PERSON | 0.99+
Doug Cutting | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
January | DATE | 0.99+
John Furrie | PERSON | 0.99+
StackRox | ORGANIZATION | 0.99+
Ali | PERSON | 0.99+
11 years | QUANTITY | 0.99+
one part | QUANTITY | 0.99+
three | QUANTITY | 0.99+
KubeCon | ORGANIZATION | 0.99+
third part | QUANTITY | 0.99+
second part | QUANTITY | 0.99+
Global Software Engineering | ORGANIZATION | 0.99+
three matches | QUANTITY | 0.98+
One | QUANTITY | 0.98+
Kubernetes | TITLE | 0.98+
today | DATE | 0.98+
KubeCon | EVENT | 0.98+
two operations | QUANTITY | 0.98+
two | QUANTITY | 0.98+
two pillars | QUANTITY | 0.97+
DevSecOps | TITLE | 0.97+
one example | QUANTITY | 0.97+
one | QUANTITY | 0.96+
Hadoop | ORGANIZATION | 0.96+
three areas | QUANTITY | 0.95+
StackRox | TITLE | 0.95+
Red Hat | TITLE | 0.93+
GoDBledger | TITLE | 0.93+
three big areas | QUANTITY | 0.92+
Sequoya | ORGANIZATION | 0.92+
Istio | TITLE | 0.91+
RedHat | ORGANIZATION | 0.91+
OpenShift | TITLE | 0.9+
Kube Con cloud native Con 2021 | EVENT | 0.88+
DevOps | TITLE | 0.88+
Istio | ORGANIZATION | 0.87+
thousands of customers | QUANTITY | 0.86+
Cloud Native Con 2021 | EVENT | 0.85+
theCUBE | ORGANIZATION | 0.84+
last six years | DATE | 0.83+
Cloud Native Con Europe 2021 | EVENT | 0.82+
KubeLinter | TITLE | 0.82+
10 years ago | DATE | 0.81+
Kubecon | ORGANIZATION | 0.81+
two big buckets | QUANTITY | 0.8+
CloudNativeCon Europe 2021 | EVENT | 0.8+
Envoy | TITLE | 0.79+
Linux | ORGANIZATION | 0.79+

Steve Gordon, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2021-Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hey, welcome back everyone to theCUBE's coverage of KubeCon and CloudNativeCon 2021-Virtual. I'm John Furrier, your host here on theCUBE. We've got Steve Gordon, Director of Product Management, Cloud Platforms at Red Hat. Steve, welcome to theCUBE, good to see you, thanks for coming on. >> Hey John, thanks for having me on, it's great to be back. >> So soon we'll be in real life, I think North America show, this is for the Europe Virtual, I think the North American one might be in person. It's not yet official. We'll hear, but we'll find out, but looking good so far. But thanks for all your collaboration. You guys have been a big part of the CNCF we've been covering on theCUBE, as you know, since the beginning. But, I wanted to get into the Edge conversation that's been going on. And first I want to just get this out there. You guys are sponsoring Edge Day here at KubeCon. I want you to bring that together for us, because this is a big part of what Red Hat's talking about and frankly customers. The Edge is the most explosive growth area. It's got the most complexity, it's crazy. It's got data, it's got everything at the Edge. Everything's happening. How important is Kubernetes to Edge Computing? >> Yeah, it's certainly interesting to be here talking about it now, and having kind of a dedicated Kubernetes Edge Day. I was thinking back earlier, I think it was one of the last in-person KubeCon events I think, if not the last, the San Diego event where there was already kind of a cresting of interest in Edge and kind of topics on the agenda around Edge. And it's just great to see that momentum has continued up to where we are today. And really more and more people not only talking about using Kubernetes for Edge, but actually getting in there and doing it. And I think, when we look at why people are doing that, they're really leaning into some of the things that they saw as strengths of Kubernetes in general, that they're now able to apply to edge computing use cases in terms of what they can actually do in terms of having a common interface to this very powerful platform that you can take to a growing multitude of footprints, be they your public cloud providers, where a lot of people may have started their Kubernetes journey or their own data center, to these edge locations where they're increasingly trying to do processing closer to where they're collecting data, basically. >> You know, when you think about Edge and all the evolution with Cloud Native, what's interesting is Kubernetes is enabling a lot of value. I'd like to get your thoughts. What are you hearing from customers around use cases? I mean, you are doing product management, you've got to document all the features, the wishlist. You have the keys to the kingdom on what's going on over at Red Hat. You know, we're seeing just the amazing connectivity between businesses with hybrid cloud. It's a game changer. Haven't seen this kind of change at this level since the late '80s, early '90s in terms of inflection point impact. This is huge. What are you hearing? >> I think it's really interesting that you use the word connectivity there because one of the first edge computing use cases that I've really been closely involved with and working a lot on, which then grows into the others, is around telecommunications and 5G networking. 
And the reason we're working with service providers on that adoption of Kubernetes as they build 5G basically as a cloud native platform from the ground up, is they're really leveraging what they've seen with Kubernetes elsewhere and taking that to deliver this connectivity, which is going to be crucial for other use cases. If you think about people, whether they're trying to do automotive edge use cases, where they're increasingly putting more sensors on the car to make smarter decisions, but also things around the infotainment system using more and more data there as well, or if you think about factory edge, all of these use cases build on connectivity as one of the core fundamental things they need. So that's why we've been really zoomed in there with the service providers and our partners, trying to deliver 5G networking capabilities as fast as we can, and the throughput and latency benefits that come with that. >> If you don't mind me asking, I've got to just go one step deeper if you don't mind. You mentioned some of these use cases, the connectivity. You know, IoT was the big buzz word, okay, IoT. It's at the Edge, it's Operational Technology, or it's a dumb endpoint or a node on the network that has connectivity. It's got power. It's a purpose-built device. It's operating, it's getting surveillance data, whatever the hell it's doing, right. It's got Edge. Now you're bringing in more intelligence, which is an IT kind of thing: state, databases, caching. Is the database too slow? Is it too fast? So again, it brings up more complexity. Can you just talk about how you view that? Because this is what I'm hearing, what do you think? >> Yeah, I agree. I think there's a real spectrum when we talk about edge computing, both in terms of the footprints and the locations, and the various constraints that each of those imply. And sometimes those constraints can be, as you're talking about, a specially designed board which has a very specific chip on it, has very specific memory and storage constraints, or it can be a literal physical constraint in terms of, I only have this much space in this location to actually put something, or that space is subject to excess heat or other environmental considerations. And I think what we're trying to provide, not just with Kubernetes but also with Linux, is a variety of solutions that can help people no matter where they are along that spectrum, from the smallest devices, where maybe Red Hat Enterprise Linux, or RHEL for Edge, is suitable, to those use cases where maybe there's a little more flexibility in terms of, what are the workloads I might want to run on that in the future? Or how do I want to grow that environment potentially in the future as well? If I want to add nodes, then all of a sudden the capability that Kubernetes brings can be a more flexible building base for them to start with. >> So with all of these use cases and the changing dynamics and the power dynamics between Operational Technology and IT, which we're kind of riffing on, what should developers take away from that when they're considering their development, whether they just want an app, be app developers programming the infrastructure, or they're tinkering with the underlying, some database work, or if they're under the hood, kind of full DevOps? What should developers take into consideration for all these new use cases?
Now of course, with an edge computing use case where you may be designing your application specifically for that board or device, then that's a more challenging proposition. But there's also the case increasingly where that intelligence already exists in the application somewhere, whether it's in the data center or in the cloud, and they're just trying to move it closer to that endpoint, where the actual data is collected. And that's where I think there's a really powerful story in terms of being able to use Kubernetes and OpenShift as that interface that the application developer interacts with but can use that same interface, whether they're running in the cloud maybe for development purposes, but also when they take it to production and it's running somewhere else. >> I got to ask you the AI impact because every conversation I have or everyone I interview that's an expert as a practitioner is usually something along the lines of chief architect of cloud and AI. You're seeing a lot of cloud, SRE, cloud-scale architects meeting and also running the AI piece, especially in industries. So AI as a certain component seems to be resonating from a functional persona standpoint. People who are doing these transformations tend to have cloud and AI responsibility. Is that a fluke or is that just the pattern that's real? >> No, I think that's very real. And I think when you look at AI and machine learning and how it works, it's very data centric in terms of what is the data I'm collecting, sending back to the mothership, maybe in terms of actually training my model. But when I actually go to processing something, I want to make that as close as I can to the actual data collection, so that I can minimize what I'm trying to send back. Particularly, people may not be as cognizant of it, but even today, many times we're talking about sites where that connectivity is actually fairly limited in some of these edge use cases still today. So what you're actually putting over the pipe is something you're still trying to minimize, while trying to advance your business and improve your agility, by making these decisions closer to the edge. >> What's the advantage for Red Hat? Talk about the benefits. What are you guys bringing to the table? Obviously, hybrid cloud is the new shift. Everyone's agreed to that. I mean, pretty much the consensus is public clouds, great, been there, done that. It's out there pumping out as a resource, but now enterprise is goading us to keep stuff on premises, especially when you talk about factories or whatever, on premises, things that they might need, stuff on premise. So it's clear hybrid is happening. Everyone's in agreement. What does Red Hat bring to the table? What's in it for the customer? >> Yeah, I think I would say hybrid is really an evolving at the moment in terms of, I think, Hybrid has kind of gone through this transition where, first of all, it was maybe moving from my data center to public cloud and I'm managing most of those through that transition, and maybe I'm (indistinct) public clouds. And now we're seeing this transition where it's almost that some of that processing is moving back out again closer to the use case of the data. And that's where we really see as an extension of our existing hybrid cloud story, which is simply to say that we're trying to provide a consistent experience and interface for any footprint, any location, basically. And that's where OpenShift is a really powerful platform for doing this. 
But also, it's got Kubernetes at the heart of it, and it's also worth considering, when we look at Kubernetes, that there's this entire Cloud Native ecosystem around it. And that's an increasingly crucial part of why people are making these decisions as well. It's not just Kubernetes itself, but all of those other projects, both directly in the CNCF ecosystem itself, but also in that broader CNCF landscape of projects which people can leverage, and even if they don't leverage them today, they know they have options out there for when they need to change in the future if they have a new need for their application. >> Yeah, Steve, I totally agree with you. And I want to just get your thoughts on this, because I was kind of riffing with Brian Gracely, who works at Red Hat on your team. And he was saying that, you know, we were talking about KubeCon + CloudNativeCon as the name of the conference, and that it's a little bit more CloudNativeCon this year than KubeCon, inferring, implying, and saying, okay, so what about Kubernetes, Kubernetes, Kubernetes? Now it's like, whoa, CloudNative is starting to come to the table, which shows the enablement of Kubernetes. That was our point. The point was, okay, if Kubernetes does its job as creating a lever, some leverage to create value, and that's being rendered in CloudNative, then enterprises, not the hardcore hyperscalers and/or the early adopters, what I call classic enterprise, are coming in. They're contributing to open source as participants, and they're harvesting the value in creating CloudNative. What's your reaction to that? And can you share your perspective on whether there's more CloudNative going on than ever before? >> Yeah, I certainly think, you know, we've always thought from the beginning of OpenShift that it was about more than just Linux and Kubernetes and even the container technologies that came before them, from the point of view that, to really build a fully operational and useful platform, you need more than just those pieces. That's something that's been core to what we've been trying to build from the beginning. But it's also what you see in the community, people making those decisions as well, as in, what are these pieces I need, whether it's fairly fundamental infrastructure concerns like logging and monitoring, or whether it's things like trying to enable different applications on top using projects like KubeVirt for virtualization, Istio for service mesh and so on. You know, those are all considerations that people have been making gradually. I think what you're seeing now is there's a growing consensus in some of these areas within that broad CNCF landscape in terms of, okay, what is the right option for each of these things that I need to build the platform? And certainly, we see our role as guiding customers to those solutions, but it's also great to see that consensus emerging in the communities that we care about, like the CNCF. >> Great stuff. Steve, I've got to ask you a final question here. As you guys innovate in the open, I know your roadmaps are all out there in the open. And I've got to ask you, product managing is about making decisions about what you work on. I know there's a lot of debates. Red Hat has a culture of innovation and engineering, so there's heated arguments, but you guys align at the end of the day. That's kind of the culture. What's top of mind, if someone asks you, "Hey, Steve, bottom line, I'm a Red Hat customer. I'm going full throttle as a hybrid. We're investing.
You guys have the cloud platforms, what's in it for me? What's the bottom line?" What do you say? >> Yeah, I think the big thing for us is, you know, I talked about how this is extending the hybrid cloud to the edge. And we're certainly very conscious that we've done a great job at addressing a number of footprints that are core to the way people have done computing today, and now as we move to the edge, there's a real challenge to go and address more of those footprints. And that's whether it's delivering OpenShift on a single node by itself, but also working with cloud providers on their edge solutions as they move further out from the cloud as well. So I think that's really core to the mission, continuing to enable those footprints so that we can be true to that mission of delivering a platform that is consistent across any footprint at any location. And certainly that's core to me. I think the other big trend that we're tracking and really continuing to work on, you know, you talked about AI and machine learning, the other space we really see kind of continuing to develop, and certainly relevant in the work with the telecommunications companies I do, but also increasingly in the accelerator space, where there's really a lot of new and very interesting things happening with hardware and silicon, whether it be kind of FPGAs, ASICs, and even the data processing units. Lots of things happening in that space that I think are very interesting and going to be key to the next three to five years. >> Yeah, and software needs to run on hardware. Love your tagline there. It sounds like a nice marketing slogan. Any workload, any footprint, any location. (laughs) Hey, DevSecOps, you got to scale it up. So good job. Thank you very much for coming on. Steve Gordon, Director of Product Management, Cloud Platforms, Red Hat. Steve, thanks for coming on. >> Thanks, John, really appreciate it. >> Okay, this is theCUBE's coverage of KubeCon and CloudNativeCon 2021 Europe Virtual. I'm John Furrier, your host from theCUBE. Thanks for watching. (serene music)

Published Date : May 4 2021

SUMMARY :

Steve Gordon, Director of Product Management, Cloud Platforms at Red Hat, discusses edge computing at KubeCon + CloudNativeCon Europe 2021 Virtual: why Kubernetes is being adopted at the edge, 5G and telco use cases, running AI and machine learning close to where data is collected, and Red Hat's goal of extending the hybrid cloud with a consistent platform across any workload, any footprint, any location.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Steve | PERSON | 0.99+
Brian Gracely | PERSON | 0.99+
Steve Gordon | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
KubeCon | EVENT | 0.99+
today | DATE | 0.99+
San Diego | LOCATION | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Ecosystem Partners | ORGANIZATION | 0.98+
Linux | TITLE | 0.98+
late '80s | DATE | 0.98+
Edge Day | EVENT | 0.98+
CloudNativeCon 2021-Virtual | EVENT | 0.98+
early '90s | DATE | 0.98+
each | QUANTITY | 0.97+
CloudNativeCon Europe 2021-Virtual | EVENT | 0.97+
CloudNativeCon | EVENT | 0.97+
single | QUANTITY | 0.97+
CloudNative | TITLE | 0.97+
theCUBE | ORGANIZATION | 0.96+
CNCF | ORGANIZATION | 0.95+
first | QUANTITY | 0.95+
this year | DATE | 0.95+
Kubernetes | TITLE | 0.94+
North America | LOCATION | 0.94+
Europe Virtual | EVENT | 0.94+
CloudNativeCon 2021 Europe Virtual | EVENT | 0.93+
Red Hat Enterprise Linux | TITLE | 0.93+
OpenShift | TITLE | 0.92+
five years | QUANTITY | 0.91+
Clout Platforms | ORGANIZATION | 0.89+
Kubernetes Edge Day | EVENT | 0.84+
REL for Edge | TITLE | 0.84+
Edge | EVENT | 0.8+
CloudNativeCon Europe 2021 - Virtual | EVENT | 0.77+
Edge | ORGANIZATION | 0.7+

ON DEMAND API GATEWAYS INGRESS SERVICE MESH


 

>> Thank you, everyone for joining. I'm here today to talk about ingress controllers, API gateways, and service mesh on Kubernetes, three very hot topics that are also frequently confusing. So I'm Richard Li, founder/CEO of Ambassador Labs, formerly known as Datawire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including Telepresence and Ambassador, which is a Kubernetes native API gateway. And most of what I'm going to talk about today is related to our work around Ambassador. So I want to start by talking about application architecture and workflow on Kubernetes and how applications that are being built on Kubernetes really differ from how they used to be built. Before Kubernetes, the traditional architecture was the very famous monolith. And the monolith is a central piece of software. It's one giant thing that you build, deploy, run. And the value of a monolith is it's really simple. And if you think about the monolithic development process, more importantly, that architecture is really reflected in the workflow. So with a monolith, you have a very centralized development process. You tend not to release too frequently because you have all these different development teams that are working on different features, and then you decide in advance when you're going to release that particular piece of software and everyone works towards that release train. And you have specialized teams. You have a development team, which has all your developers. You have a QA team, you have a release team, you have an operations team. So that's your typical development organization and workflow with a monolithic application. As organizations shift to microservices, they adopt a very different development paradigm. It's a decentralized development paradigm where you have lots of different independent teams that are simultaneously working on different parts of this application, and those application components are really shipped as independent services. And so you really have a continuous release cycle because instead of synchronizing all your teams around one particular vehicle, you have so many different release vehicles that each team is able to ship as soon as they're ready. And so we call this full cycle development because that team is really responsible not just for the coding of that microservice, but also the testing and the release and operations of that service. So this is a huge change, particularly with workflow, and there are a lot of implications for this. So I have a diagram here that just tries to visualize a little bit more the difference in organization. With the monolith, you have everyone who works on this monolith. With microservices, you have the yellow folks work on the yellow microservice and the purple folks work on the purple microservice and maybe just one person works on the orange microservice and so forth. So there's a lot more diversity around your teams and your microservices, and it lets you really adjust the granularity of your development to your specific business needs. So how do users actually access your microservices? Well, with a monolith, it's pretty straightforward. You have one big thing, so you just tell the internet, well, I have this one big thing on the internet. Make sure you send all your traffic to the big thing. But when you have microservices and you have a bunch of different microservices, how do users actually access these microservices?
So the solution is an API gateway. So the API gateway consolidates all access to your microservices. So requests come from the internet. They go to your API gateway. The API gateway looks at these requests, and based on the nature of these requests, it routes them to the appropriate microservice. And because the API gateway is centralizing access to all of the microservices, it also really helps you simplify authentication, observability, routing, all these different cross-cutting concerns, because instead of implementing authentication in each of your microservices, which would be a maintenance nightmare and a security nightmare, you've put all of your authentication in your API gateway. So if you look at this world of microservices, API gateways are a really important and really necessary part of your infrastructure, and pre-microservices, pre-Kubernetes, an API gateway, while valuable, was much more optional. So that's one of the really big things to recognize: with a microservices architecture, you really need to start thinking much more about an API gateway. The other consideration with an API gateway is around your management workflow, because as I mentioned, each team is actually responsible for their own microservice, which also means each team needs to be able to independently manage the gateway. So Team A working on that microservice needs to be able to tell the API gateway, this is how I want you to route requests to my microservice, and the purple team needs to be able to say something different for how purple requests get routed to the purple microservice. So that's also a really important consideration as you think about API gateways and how they fit into your architecture, because it's not just about your architecture, it's also about your workflow. So let me talk about API gateways on Kubernetes. I'm going to start by talking about ingress. So ingress is the process of getting traffic from the internet to services inside the cluster. Kubernetes, from an architectural perspective, actually has a requirement that all the different pods in a Kubernetes cluster need to communicate with each other. And as a consequence, what Kubernetes does is it creates its own private network space for all these pods, and each pod gets its own IP address. So this makes things very, very simple for interpod communication. Kubernetes, on the other hand, does not say very much around how traffic should actually get into the cluster. So there's a lot of detail around how traffic, once it's in the cluster, gets routed around the cluster, and Kubernetes is very opinionated about how that works, but for getting traffic into the cluster there are a lot of different options and multiple strategies. There's Pod IP, there's Ingress, there's LoadBalancer resources, there's NodePort. I'm not going to go into exhaustive detail on all these different options, and I'm going to just talk about the most common approach that most organizations take today. So the most common strategy for routing is coupling an external load balancer with an ingress controller. And so an external load balancer can be a hardware load balancer. It can be a virtual machine. It can be a cloud load balancer.
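Before going further into ingress details, here is a toy Python sketch of the gateway role described at the top of this section: one routing table and one authentication check sitting in front of many microservices. The service names and the token check are purely illustrative assumptions; a production gateway such as Ambassador or NGINX does far more, and this logic lives in gateway configuration rather than application code.

```python
# Toy illustration of what an API gateway centralizes: a single routing table
# and a single authentication check in front of many microservices.

ROUTES = {
    "/orders":   "http://orders-service:8080",
    "/payments": "http://payments-service:8080",
    "/catalog":  "http://catalog-service:8080",
}


def authenticate(headers: dict) -> bool:
    # Centralized auth: every request passes through this one check, so the
    # individual microservices never have to re-implement it themselves.
    return headers.get("Authorization", "").startswith("Bearer ")


def route(path: str, headers: dict) -> str:
    if not authenticate(headers):
        return "401 Unauthorized"
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return f"forward to {upstream}{path}"
    return "404 no matching route"


print(route("/orders/42", {"Authorization": "Bearer abc123"}))  # routed to orders
print(route("/payments/7", {}))                                 # rejected centrally
```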
But the key requirement for an external load balancer is to be able to attach a stable IP address so that you can actually map a domain name and DNS to that particular external load balancer, and that external load balancer usually, but not always, will then route traffic and pass that traffic straight through to your ingress controller. And then your ingress controller takes that traffic and then routes it internally inside Kubernetes to the various pods that are running your microservices. There are other approaches, but this is the most common approach. And the reason for this is that the alternative approaches really require each of your microservices to be exposed outside of the cluster, which causes a lot of challenges around management and deployment and maintenance that you generally want to avoid. So I've been talking about an ingress controller. What exactly is an ingress controller? So an ingress controller is an application that can process rules according to the Kubernetes ingress specification. Strangely, Kubernetes is not actually shipped with a built-in ingress controller. I say strangely because you think, well, getting traffic into a cluster is probably a pretty common requirement, and it is. It turns out that this is complex enough that there's no one size fits all ingress controller. And so there is a set of ingress rules that are part of the Kubernetes ingress specification that specify how traffic gets routed into the cluster, and then you need a proxy that can actually route this traffic to these different pods. And so an ingress controller really translates between the Kubernetes configuration and the proxy configuration, and common proxies for ingress controllers include HAProxy, Envoy Proxy, or NGINX. So let me talk a little bit more about these common proxies. So all these proxies, and there are many other proxies. I'm just highlighting what I consider to be probably the three most well-established proxies, HAProxy, NGINX, and Envoy Proxy. So HAProxy is managed by HAProxy Technologies. Started in 2001. The HAProxy organization actually creates an ingress controller. And before they created an ingress controller, there was an open source project called Voyager which built an ingress controller on HAProxy. NGINX, managed by NGINX, Inc., subsequently acquired by F5. Also open source. Started a little bit later, the proxy, in 2004. And there's the Nginx-ingress, which is a community project. That's the most popular. As well as the Nginx, Inc. kubernetes-ingress project, which is maintained by the company. This is a common source of confusion because sometimes people will think that they're using the NGINX ingress controller, and it's not clear if they're using this commercially supported version or this open source version. And they actually, although they have very similar names, they actually have different functionality. Finally, Envoy Proxy, the newest entrant to the proxy market, originally developed by engineers at Lyft, the ride sharing company. They subsequently donated it to the Cloud Native Computing Foundation. Envoy has become probably the most popular cloud native proxy. It's used by Ambassador, the API gateway. It's used in the Istio service mesh. It's used in the VMware Contour. It's been used by Amazon in App Mesh. It's probably the most common proxy in the cloud native world. So as I mentioned, there's a lot of different options for ingress controllers. 
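As a rough sketch of the translation step an ingress controller performs, the snippet below (using the official kubernetes Python client) reads the cluster's networking.k8s.io/v1 Ingress resources and prints the host/path to service mapping a proxy would be configured with. A real controller also watches for changes and renders complete proxy configuration for HAProxy, NGINX, or Envoy; this only prints the routing table.

```python
# Sketch: turn Kubernetes Ingress rules into a simple host/path -> service table,
# which is the core of what an ingress controller feeds its proxy.

from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

for ingress in networking.list_ingress_for_all_namespaces().items:
    for rule in ingress.spec.rules or []:
        host = rule.host or "*"
        for path in (rule.http.paths if rule.http else []):
            svc = path.backend.service
            if svc is None:
                continue  # non-Service (resource) backends are skipped in this sketch
            port = svc.port.number or svc.port.name
            print(f"{host}{path.path or '/'} -> "
                  f"{ingress.metadata.namespace}/{svc.name}:{port}")
```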
The most common is the NGINX ingress controller, not the one maintained by NGINX, Inc., but the one that's part of the Kubernetes project. Ambassador is the most popular Envoy-based option. Another common option is the Istio Gateway, which is directly integrated with the Istio mesh, and that's actually part of Docker Enterprise. So with all these choices around ingress controller, how do you actually decide? Well, the reality is the ingress specification's very limited. And the reason for this is that getting traffic into a cluster, there's a lot of nuance into how you want to do that, and it turns out it's very challenging to create a generic one size fits all specification because of the vast diversity of implementations and choices that are available to end users. And so you don't see ingress specifying anything around resilience. So if you want to specify a timeout or rate-limiting, it's not possible. Ingress is really limited to support for HTTP. So if you're using gRPC or web sockets, you can't use the ingress specification. Different ways of routing, authentication. The list goes on and on. And so what happens is that different ingress controllers extend the core ingress specification to support these use cases in different ways. So NGINX ingress, they actually use a combination of config maps and the ingress resources plus custom annotations that extend the ingress to really let you configure a lot of the additional extensions that are exposed in the NGINX ingress. With Ambassador, we actually use custom resource definitions, different CRDs that extend Kubernetes itself to configure Ambassador. And one of the benefits of the CRD approach is that we can create a standard schema that's actually validated by Kubernetes. So when you do a kubectl apply of an Ambassador CRD, kubectl can immediately validate and tell you if you're actually applying a valid schema and format for your Ambassador configuration. And as I previously mentioned, Ambassador's built on Envoy Proxy. Istio Gateway also uses CRDs; they can be used as an extension of the service mesh CRDs as opposed to dedicated gateway CRDs. And again, Istio Gateway is built on Envoy Proxy. So I've been talking a lot about ingress controllers, but the title of my talk was really about API gateways and ingress controllers and service mesh. So what's the difference between an ingress controller and an API gateway? So to recap, an ingress controller processes Kubernetes ingress routing rules. An API gateway is a central point for managing all your traffic to Kubernetes services. It typically has additional functionality such as authentication, observability, a developer portal, and so forth. So what you find is that not all API gateways are ingress controllers, because some API gateways don't support Kubernetes at all, so they can't be ingress controllers. And not all ingress controllers support the functionality such as authentication, observability, developer portal, that you would typically associate with an API gateway. So generally speaking, API gateways that run on Kubernetes should be considered a superset of an ingress controller. But if the API gateway doesn't run on Kubernetes, then it's an API gateway and not an ingress controller. So what's the difference between a service mesh and an API gateway? So an API gateway is really focused on traffic into and out of a cluster. So the colloquial term for this is North/South traffic. A service mesh is focused on traffic between services in a cluster, East/West traffic.
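For a hedged illustration of the CRD-based configuration style just described, the sketch below creates an Ambassador Mapping custom resource through the Kubernetes API. The group is getambassador.io, but the exact API version and field set depend on the Ambassador/Emissary release installed in your cluster, so treat the spec here as an assumption and check it against your installed CRDs.

```python
# Sketch: configure a route as a custom resource rather than an annotation, so the
# API server can schema-validate it on apply. Version and spec fields are assumed;
# verify against the Ambassador/Emissary CRDs installed in your cluster.

from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

mapping = {
    "apiVersion": "getambassador.io/v2",   # assumed version; may differ per release
    "kind": "Mapping",
    "metadata": {"name": "orders-mapping", "namespace": "default"},
    "spec": {
        "prefix": "/orders/",                  # route requests under this prefix...
        "service": "orders-service:8080",      # ...to this Kubernetes service
    },
}

custom.create_namespaced_custom_object(
    group="getambassador.io", version="v2",
    namespace="default", plural="mappings", body=mapping)
```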
All service meshes need an API gateway. Istio includes a basic ingress or API gateway called the Istio Gateway, because a service mesh needs traffic from the internet to be routed into the mesh before it can actually do anything. Envoy Proxy, as I mentioned, is the most common proxy for both meshes and gateways. Docker Enterprise provides an Envoy-based solution out of the box, Istio Gateway. The reason Docker does this is because, as I mentioned, Kubernetes doesn't come packaged with an ingress controller. It makes sense for Docker Enterprise to provide something that's easy to get going, no extra steps required, because with Docker Enterprise you can deploy it, get going, and get it exposed on the internet without any additional software. Docker Enterprise can also be easily upgraded to Ambassador because they're both built on Envoy, which ensures consistent routing semantics. And also with Ambassador, you get greater security, such as single sign-on; there's a lot of security by default that's configured directly into Ambassador, better control over TLS, things like that. And then finally, there's commercial support that's actually available for Ambassador. Istio is an open source project that has a very broad community, but no commercial support options. So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack, so make sure that you choose something that works well for you. And I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. Considerations around how to choose that API gateway include functionality, such as how does it handle traffic management and observability, and does it support the protocols that you need? Also nonfunctional requirements, such as does it integrate with your workflow, and can you get commercial support for it? An API gateway is focused on North/South traffic, so traffic into and out of your Kubernetes cluster. A service mesh is focused on East/West traffic, so traffic between different services inside the same cluster. Docker Enterprise includes Istio Gateway out of the box: easy to use, but it can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. Hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about that on your Kubernetes deployment.
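As an appended illustration of the North/South point made above, here is a minimal sketch of how traffic typically enters an Istio mesh; the hostname, service, and ports are invented, and field details can differ slightly between Istio releases. A Gateway binds the mesh's Envoy-based edge proxy to a host and port, and a VirtualService then routes that incoming North/South traffic to an in-cluster service, after which everything is East/West inside the mesh.

```yaml
# Gateway: accepts traffic from outside the cluster on the mesh's edge proxy.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: shop-gateway
spec:
  selector:
    istio: ingressgateway          # Istio's Envoy-based ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"
---
# VirtualService: routes the traffic that arrived via the Gateway to a
# service inside the mesh.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: shop
spec:
  hosts:
  - "shop.example.com"
  gateways:
  - shop-gateway
  http:
  - route:
    - destination:
        host: web                  # in-cluster Service; East/West from here on
        port:
          number: 8080
```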

Published Date : Sep 14 2020


Clayton Coleman, Red Hat | Google Cloud Next OnAir '20


 

>>From around the globe, covering Google Cloud Next. >>Hi, I'm Stu Miniman and this is theCUBE's coverage of Google Cloud Next. Happy to welcome back to the program one of our CUBE alumni, Clayton Coleman, he's the architect for Kubernetes and OpenShift with Red Hat. Clayton, thanks for joining us again. Great to see you. Good to see you. All right. So of course, one of the challenges in 2020 is we'd love to be able to get the community together, and while we can't do it physically, we do get to do it through all of the virtual events and online forums. Of course, you know, we had theCUBE at Red Hat Summit and KubeCon for the European show, and now Google Cloud. So give us kind of your state of the state, 2020, for Kubernetes. Of course it was Google taking the technology from Borg, a few people working on it, and this project has just had a massive impact. So, you know, where are we with the community in Kubernetes today? >>So, you know, 2020 has been a crazy year for a lot of folks. A lot of what I've been spending my time on is taking feedback from people in this time of change and concern and worry and huge shift to the cloud, and working with them to make sure that we have a really good foundation in Kubernetes and that the ecosystem is healthy and things are moving forward there. So there's a ton of exciting projects. I will say the pandemic's had an impact on the community, and so in many places we've reacted by slowing down our schedules or focusing more on the things that people are really worried about, like quality and bugs and making sure that the stuff just works. I will say this year has been a really interesting one in open source. >>There's been much more focus, I think, on how we start to tie this stuff together, and new use cases and new challenges coming in. The original Kubernetes was very focused on helping you bring stuff together, bring your applications together and giving you common abstractions for working with them. We went through a phase where we made it easy to extend Kubernetes, which brought a whole bunch of new abstractions. And I think now we're starting to see the challenges and the needs of organizations and companies and individuals that are going out, not just in Kubernetes, but across multiple locations and placements; edge has been huge in the last few years. And so the projects in and around Kubernetes are kind of reacting to that. They're starting to bridge many of these disparate locations, different clouds, multicloud, hybrid cloud, connecting enterprises to data centers or connecting data centers to the cloud, helping workloads be a little bit more portable in and of themselves, but also helping workloads move. >>And then I think we're really starting to ask those next big questions about what comes next for making applications really come alive in the cloud, where you're not as focused on the hardware, you're not focused on the details, where you're focused on abstractions like reliability and availability, not just in one cluster, but in multiple. So that's been a really exciting transition in many of the projects that I've been following.
You know, certainly projects like Istio I've been dealing with, spanning clusters and connecting existing workloads in, and each step along the way I see people sort of broaden their scope about what they want open source to help them with. >>Yeah, it's been fascinating to watch just the breadth of the projects that can tie in and leverage Kubernetes. You brought up edge computing and I want to get into some of the future pieces, but before we do, let's look at Kubernetes itself. 1.19 is kind of where we're at, and I already see some talk about 1.20. Can you just talk about the base project itself, contributions to it, how the upstream works, and how should customers think about their Kubernetes environment? Obviously, you know, Red Hat with OpenShift has a very strong position, you've got thousands of customers now using it, all of the cloud providers have their Kubernetes flavor, but also you partner with them. So walk us through a little bit about the open source, the project, and those dynamics. >>The project is really healthy. I think we've gotten through a couple of big transitions over the last few years. We've moved on from the original governance model; you know, I was on the bootstrap steering committee trying to help with the governance model, and the full bootstrap committee has handed off responsibility to new participants. There's been a lot of growth in the project governance and community governance. I think there's huge credit to the folks on the steering committee today, folks who are part of contributor experience, and standardizing and formalizing Kubernetes as its own thing. I think we've really moved into being a community-managed project. We've developed a lot of maturity around that, and Kubernetes and the folks involved in helping Kubernetes be successful have actually been able to help others within the CNCF ecosystem, and other open source projects outside of CNCF, be successful. So that angle is going phenomenally well. >>Contribution is up. I think one of the tension points that we've talked about is that Kubernetes is maturing; 1.19 spent a lot of time on stability. And while there's definitely lots of interesting new things in a few areas like storage, and we have v2 of Ingress coming up on the horizon, and dual-stack support's been hotly anticipated by a lot of on-premises folks looking to make the transition to IPv6, I think we've been a little bit less focused on chasing features and more focused on just making sure that Kubernetes is maturing responsibly, now that we have a really successful ecosystem of integrators and vendors. And on unification, the conformance efforts in Kubernetes, there's been some great work. I happen to be involved in the architecture conformance definition group, and there's been some amazing participation from that group of people, who've made real strides in growing the testing efforts so that not only can you look at two different Kubernetes vendors, but you can compare them in meaningful ways.
>>That's actually helped us with our test coverage and Kubernetes, there's been a lot of focus on, um, really spending time on making sure that upgrades work well, that we've reduced the flakiness of our test suites and that when a contributor comes into Kubernetes, they're not presented with a confusing, massive instructions, but they have a really clear path to make their first contribution and their next contribution. And then the one after that. So from a project maturity standpoint, I think 2020 has been a great great year for the project. And I want to see that continue. >>Yeah. One of the things we talked quite a bit about, uh, at both red hat summit, as well as, uh, the CubeCon cloud native con Europe, uh, was operators. And, you know, maybe I believe there was some updates also about how operators can work with Google cloud. So can you give us that update? >>Sure. There's been a lot of, um, there's been a lot of growth in both the client tooling and the libraries and the frameworks that make it easy to integrate with Kubernetes. Um, and those integrations are about patterns that, um, make operations teams more productive, but it takes time to develop the domain expertise in, uh, operationalizing large groups of software. So over the last year, um, know the controller runtime project, uh, which is an outgrowth of the Kubernetes Siggy lb machinery. So it's kind of a, an outshoot that's intended to standardize and make it easier to write integrations to Kubernetes that next step of, um, you know, going then pass that red hat's worked, uh, with, um, others in the community around, um, the operator SDK, uh, which unifying that project and trying to get it aligned with others in the ecosystem. Um, almost all of the cloud providers, um, have written operators. >>Google has been an early adopter of the controller and operator pattern, uh, and have continued to put time and effort into helping make the community be successful. And, um, we're really appreciative of everyone who's come together to take some of those ideas from Kubernetes to extend them into, um, whether it's running databases and service on top of Kubernetes or whether it's integrating directly with cloud. Um, most of that work or almost all of that work benefits everybody in the ecosystem. Um, I think there's some future work that we'd like to see around, um, you know, uh, folks, uh, from, um, a number of places have gone even further and tried to boil Kubernetes down into simpler mechanisms, um, that you can integrate with. So a little bit more of a, a beginner's approach or a simplification, a domain specific, uh, operator kind of idea that, um, actually really does accelerate people getting up to speed with, um, you know, building these sorts of integrations, but at the end of the day, um, one of the things that I really see is the increasing integration between the public clouds and their Kubernetes on top of those clouds through capabilities that make everybody better off. >>So whether you're using a managed service, um, you know, on a particular cloud or whether you're running, um, the elements of that managed open source software using an open source operator on top of Kubernetes, um, there's a lot of abstractions that are really productive for admins. You might use the managed service for your production instances, but you want to use, um, throw away, um, database instances for developers. Um, and there's a lot of experimentation going on. 
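The CRD-plus-controller mechanism that operators like these build on can be sketched as follows; the PostgresCluster group, kind, and fields are hypothetical, invented only to illustrate how an operator extends the Kubernetes API with a schema the API server can validate. Real operators, whether built with the operator SDK, controller-runtime, or something else, define their own types.

```yaml
# A hypothetical CRD an operator might install (group/kind invented for illustration).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresclusters.example.dev
spec:
  group: example.dev
  scope: Namespaced
  names:
    kind: PostgresCluster
    plural: postgresclusters
    singular: postgrescluster
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:             # validated by the API server on apply
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                minimum: 1
              version:
                type: string
---
# What an application team would actually write; the operator's controller
# watches these objects and reconciles StatefulSets, Services, Secrets, etc.
apiVersion: example.dev/v1alpha1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3
  version: "12"
```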
So it's almost, it's almost really difficult to say what the most interesting part is. Um, operators is really more of an enabling technology. I'm really excited to see that increasing glue that makes automation and makes, um, you know, dev ops teams, um, more productive just because they can rely increasingly on open source or managed services offerings from, you know, the large cloud providers to work well together. >>Yeah. You had mentioned that we're seeing all the other projects that are tying into Coobernetti's, we're seeing Kubernetes going into broader use cases, things like edge computing, what, from an architectural standpoint, you know, needs to be done to make sure that, uh, Kubernetes can be used, you know, meets the performance, the simplicity, um, in these various use cases. >>That's a, that's a good question. There's a lot of complexity in some areas of what you might do in a large application deployment that don't make sense in edge deployments, but you get advantages from having a reasonably consistent environment. I think one of the challenges everybody is going through is what is that reasonable consistency? What are the tools? You know, one of the challenges obviously is as we have more and more clusters, a lot of the approaches around edge involve, you know, whether it's a single cluster on a single machine and, um, you know, in a fairly beefy, but, uh, remote, uh, computer, uh, that you still need to keep in sync with your application deployment. Um, you might have a different life cycle for, uh, the types of hardware that you're rolling out, you know, whether it's regional or whether it's tied to, whether someone can go out to that particular site that you've been update the software. Sometimes it's connected, sometimes it isn't. So I think a need that is becoming really clear is there's a lot of abstractions missing above Coopernetties. Uh, and everyone's approaching this differently. We've got a get ops and centralized config management. Um, we have, uh, architectures where, you know, you, you boot up and you go check some remote cloud location for what you should be running. Um, I think there's some, some productive obstructions that are >>That, or haven't been, um, >>It haven't been explored sufficiently yet that over the next couple of years, how do you treat a whole bunch of clusters as a pool of compute where you're not really focused on the details of where a cluster is, or how can you define applications that can easily move from your data center out to the edge or back up to the cloud, but get those benefits of Kubernetes, all those places. And >>That >>This is for so early, that what I see in open source and what I see with people deploying this is everyone is approaching this subtly differently, but you can start to see some of those patterns emerge where, um, you need reproducible bundles of applications, things that help can do REL, or you can do with just very simply with Kubernetes. Um, not every edge location needs, um, uh, an ingress controller or a way to move traffic onto that cluster because their job is to generate traffic and send it somewhere else. But then that puts more pressure on, well, you need those where you're feeding that data to your API APIs, whether that's a cloud or something within your something within a private data center, you need, um, enough of commonalities across those clusters and across your applications that you could reason about what's going on. So >>There's a huge amount >>Out of a space here. 
And I don't think it's just going to be Kubernetes. In fact, I, I want to say, I think we're starting to move to that phase where Kubernetes is just part of the platform that people are building or need to build. And what can we do to build those tools that help you stitch together computer across a lot of footprints, um, parts of applications across a lot of footprints. And there's, there's a bunch of open source projects that are trying to drive to that today. Um, projects like I guess the O and K natives, um, with the work being done with the venting in K native, and obviously the venting is a hugely, um, you know, we talk about edge, we'd almost be remiss, not talk about moving data. And you talk about moving data. Well, you want streams of data and you want to be reacted to data with compute and K native and Istio are both great examples of technologies within the QB ecosystem that are starting to broaden, um, you know, outside of the, well, this is just about one cube cluster to, um, we really need to stitch together a mindset of development, even if we have a reasonably consistent Kubernetes across all those footprints. >>Yeah. Well, Clayton so important. There's so many technologies out there it's becoming about that technology. And it's just a given, it's an underlying piece of it. You know, we don't talk about the internet. We don't talk, you know, as much about Linux anymore. Cause it's just in the fabric of everything we do. And it sounds like we're saying that's where we're getting with Kubernetes. Uh, I'd love to pull on that thread. You mentioned that you're hearing some patterns starting to emerge out there. So when you're talking to enterprises, especially if you're talking 2020, uh, lots of companies, all of a sudden have to really accelerate, uh, you know, those transformational projects that they were doing so that they can move faster and keep up with the pace of change. Uh, so, you know, what should enterprise be, be working on? What feedback are you hearing from customers, but what are some of those themes that you can share and w what, what should everybody else be getting ready for that? >>The most common pattern I think, is that many people still find a need to build, uh, platforms or, um, standardization of how they do application development across fairly large footprints. Um, I think what they're missing, and this is what everyone's kind of building on their own today, that, um, is a real opportunity within the community is, uh, abstract abstractions around a location, not really about clusters or machines, but something broader than that, whether it's, um, folks who need to be resilient across clouds, and whether it's folks who are looking to bring together disparate footprints to accelerate their boot to the cloud, or to modernize their on premise stack. They're looking for abstractions that are, um, productive to say, I don't really want to worry too much about the details of clusters or machines or applications, but I'm talking about services and where they run and that I need to stitch those into. >>Um, I need to stitch those deeply into some environments, but not others. So that pattern, um, has been something that we've been exploring for a long time within the community. So the open service broker project, um, you know, has been a long running effort of trying to genericize one type of interface operators and some of the obstructions and Kubernetes for extending Kubernetes and new dimensions is another. 
What I'm seeing is that people are building layers on top through continuous deployment, continuous integration, building their own API is building their own services that really hide these details. I think there's a really rich opportunity within open to observe what's going on and to offer some supporting technologies that bridge clouds, bridge locations, what you deal with computed a little bit more of an abstract level, um, and really doubled down on making services run. Well, I think we're kind of ready to make the transition to say officially, it's not just about applications, which is what we've been saying for a long time. >>You know, I've got these applications and I'm moving them, but to flip it around and say, we want to be service focused and services, have a couple of characteristics, the details of where they run are more about the guarantees that you're providing for your customers. Um, we lack a lot of open source tools that make it easier to build and run services, not just to consume as dependencies or run open source software, but what are the things that make our applications more resilient in and of themselves? I think Kubernetes was a good start. Um, I really see organizations struggling with that today. You're going to have multiple locations. You're going to have, um, the need to dramatically move workloads. What are the tools that the whole ecosystem, the open source ecosystem, um, can collaborate on and help accelerate that transition? >>Well, Clayton, you teed up on my last thing. I want to ask you, you know, we're, we're here at the Google cloud show and when you talk about ecosystem, you talk about community, you know, Google and red hat, both very active participants in this community. So, you know, you, you peer you collaborate with a lot of people from Google I'm sure. So give our audience a little bit of insight as to, you know, Google's participation. What, what you've been seeing from them the last couple of years at Google has been a great partner, >>Crazy ecosystem for red hat. Um, we worked really closely with them on Istio and K native and a number of other projects. Um, I, you know, as always, um, I'm continually impressed by the ability of the folks that I've worked with from Google to really take a community focus and to concentrate on actually solving use cases. I think the, you know, there's always the desire to create drama around technology or strategy or business and open source. You know, we're all coming together to work on common goals. I really want to, um, you know, thank the folks that I've worked with at Google over the years. Who've been key participants. They've believed very strongly in enabling users. Um, you know, regardless of, um, you know, business or technology, it's about making sure that we're improving software for everyone. And one of the beauties of working on an open source project like Kubernetes is everyone can get some benefit out of it. And those are really, um, you know, the sum of all of the individual contributions is much larger than what the simple math would apply. And I think that's, um, you know, Kubernetes has been a huge success. I want to see more successes like that. Um, you know, working with Google and others in the open source ecosystem around infrastructure as a service and, you know, this broadening >>Domain of places where we can collaborate to make it easier for developers and operations teams and dev ops and sec ops to just get their jobs done. 
Um, you know, there's a lot more to do and I think open source is the best way to do that. All right. Well, Clayton Coleman, thank you so much for the update. It's really great to catch up. It was a pleasure. All right. Stay tuned for lots more coverage. The Google cloud next 2020 virtually I'm Stu Miniman. Thank you for watching the cube.

Published Date : Aug 25 2020


Vijoy Pandey, Cisco | KubeCon + CloudNativeCon Europe 2020 - Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual brought to you by Red Hat, the CloudNative Computing Foundation, and Ecosystem Partners. >> Hi and welcome back to theCUBE's coverage of KubeCon, CloudNativeCon 2020 in Europe, of course the virtual edition. I'm Stu Miniman and happy to welcome back to the program one of the keynote speakers, he's also a board member of the CNCF, Vijoy Pandey who is the vice president and chief technology officer for Cloud at Cisco. Vijoy, nice to see you and thanks so much for joining us. >> Thank you Stu, and nice to see you again. It's a strange setting to be in but as long as we are both health, everything is good. >> Yeah, it's still a, we still get to be together a little bit even though while we're apart, we love the engagement and interaction that we normally get through the community but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Network, Please Evolve", been watching that journey. But why don't we start it first, you know, you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so that, yeah let's start there. So I've taken on a new responsibility. It's VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads that and she reports into Chuck. The role, the charter for this team, this new team, is to incubate the next bets for Cisco. And, if you can imagine, it's natural for Cisco to start with bets which are closer to its core business, but the charter for this group is to mover further and further out from Cisco's core business and takes this core into newer markets, into newer products, and newer businesses. I am running the engineering and research for that group. And, again, the whole deal behind this is to be a little bit nimble, to be a little startupy in nature, where you bring ideas, you incubate them, you iterate pretty fast and you throw out 80% of those and concentrate on the 20% that make sense to take forward as a venture. >> Interesting. So it reminds me a little bit, but different, I remember John Chambers a number of years back talking about various adjacencies, trying to grow those next, you know, multi-billion dollar businesses inside Cisco. In some ways, Vijoy, it reminds me a little bit of your previous company, very well known for, you know, driving innovation, giving engineering 20% of their time to work on things. Give us a little bit of insight. What's kind of an example of a bet that you might be looking at in the space? Bring us inside a little bit. >> Well that's actually a good question and I think a little bit of that comparison is, are those conversations that taking place within Cisco as well as to how far out from Cisco's core business do we want to get when we're incubating these bets. And, yes, my previous employer, I mean Google X actually goes pretty far out when it comes to incubations. The core business being primarily around ads, now Google Cloud as well, but you have things like Verily and Calico and others which are pretty far out from where Google started. And the way we are looking at these things within Cisco is, it's a new muscle for Cisco so we want to prove ourselves first. 
So the first few bets that we are betting on are pretty close to Cisco's core, but still not fitting into Cisco's BUs when it comes to go-to-market alignment or business alignment. The first bets that we are taking on are around the API being the queen when it comes to the future of infrastructure, so to speak. So it's not just making our infrastructure consumable as infrastructure as code, but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, whatever they happen to be, of whatever type they happen to be. And one of the bets, actually two of the bets, that we are going after are around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking, or app networking, is one big bet that we're going after. Our other big bet is around API security, and that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, into API security, which sits in the developer pipeline and higher up in the stack. So those are two big bets that we're going after, and as you can see, they're pretty close to Cisco's core business but also very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away, from Cisco's core as it exists today. >> All right, well Vijoy, I mentioned you're also on the board for the CNCF, maybe let's talk a little bit about open source. How does that play into what you're looking at for emerging technologies and these bets? You know, for so many companies that's an integral piece, and we've watched, really, the maturation of Cisco's journey, participating in these open source environments. So help us tie in where Cisco is when it comes to open source. >> So, yeah, I think we've been pretty deeply involved in open source in our past. We've been deeply involved in Linux Foundation networking; we've actually chartered FD.io as a project there, and we still are. We've been involved in OpenStack. We are big supporters of OpenStack. We have a couple of products that are on the OpenStack offering. And as you all know, we've been involved in CNCF right from the get-go as a foundational member. We brought NSM in as a project. It's sandbox currently; we're hoping to move it forward. But even beyond that, I mean, we are big users of open source. You know, a lot of the SaaS offerings that we have from Cisco, and you would not know this if you're not inside of Cisco, but Webex, for example, is a big, big user of Linkerd, right from the get-go, from version 1.0. But we don't talk about it, which is sad. I think, for example, we use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we are pretty deep users internally in all our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source quite a bit moving forward, as part of ET&I, Emerging Tech and Incubation, as well.
So you will see more of us in open source forums, not just the CNCF, but very recently we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we actually are big believers in handling data in ethical and privacy-preserving ways. So that's actually something that enticed us to join the Linux Foundation for Public Health, and we will be working very closely with Dan and the foundational companies there to not just bring open source, but also evangelize and use what comes out of that forum. >> All right. Well, Vijoy, I think it's time for us to dig into your keynote. We've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving on, and a big focus you talked about was SD-WAN. Of course anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those just critical foundational pieces of companies enabling Multicloud, so help us, help explain to our audience a little bit, what do you mean when you talk about things like CloudNative SD-WAN, and how does that help people really enable their applications in the modern environment? >> Yeah, so, well, we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time, where prior to SD-WAN existing, you had to stitch all of these MPLS labels and actual data connectivity across to your enterprise or branch, and SD-WAN came in and changed the game there. But I think SD-WAN as it exists today is application-unaware. And that's one of the big things that I talk about in my keynote. Also, we've talked about how NSM, the other side of the spectrum, how NSM, or Network Service Mesh, has actually helped us simplify operational complexities, simplify the ticketing and process hell that any developer needs to go through just to get a multicloud, multicluster app up and running. So the keynote actually talked about bringing those two things together, where we've talked about using NSM in the past, in chapter one and chapter two — ah, chapter two, no, this is chapter three, and at some point I would like to stop the chapters. I don't want this to be like an encyclopedia of networking (mumbling). But we are at chapter three, and we are talking about how you can take the same consumption models that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extending that notion of multicloud, multicluster wires within the components of our application, but extending it all the way down to the user in an enterprise. And as you saw in an example, Gavin Russom is trying to give a keynote holographically, and he's suffering from SD-WAN being application-unaware. And using this construct of a simple annotation, we can actually make SD-WAN CloudNative. We can make it application-aware, and we can guarantee the SLOs that Gavin is looking for in terms of 3D video, in terms of file access or audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. You know, Vijoy, the modernization journey that customers are on is a never-ending story. I understand the chapters need to end on the current volume that you're working on. But we'd love to get your viewpoint. You talk about things like service mesh.
It's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the kind of real challenges, but also opportunities, that they see in today's CloudNative space? >> In general, service meshes are here to stay. In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening, where we're seeing different service meshes coming into the picture through various open source mechanisms. You've got Istio there, you've got Linkerd, you've got various proprietary notions around control planes like App Mesh from Amazon. There's Consul, which is an open source project but not part of (mumbles) today. So there's a whole bunch of service meshes in terms of control planes coming in, and Envoy's becoming a de facto sidecar data plane, whatever you would like to call it, a de facto standard there, which is good for the community, I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment. And that's here to stay. In fact, we are seeing a whole bunch of things that we would use different tools for, like API gateways in the past, and those functions are actually rolling into service meshes. And so I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together, and that's something that we are trying to focus in on as well, because that's something that our customers are asking for. >> Yeah, actually you connected for me something I wanted to get your viewpoint on. Dial back, you know, 10, 15 years ago and everybody would say, "Ah, I really want to have a single pane of glass to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, not that long before this event, Google had their Google Cloud show talking about the partnership that you have with Google. They have Anthos. You look at Azure has Arc. You know, VMware has Tanzu. Everybody's talking about, really, kind of this multicluster management type of solution out there. And I just want to get your viewpoint on this, Vijoy: how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, but I think this is where we need to be careful as an industry, as a community, and make things simpler for our customers, because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all of these things together? And I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes. As you saw in the SD-WAN discussion, and also going back to the Google conference that we just talked about, it's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser there that you saw earlier in the keynote, where we are taking those constructs that we talked about in the Google conference and bringing it all the way to a CloudNative environment in the keynote.
But I think the bigger problem here is how do we manage this complexity of disparate stacks, whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments, how do we manage that complexity? And, single pane of glass is over loaded as a term because it brings in these notions of big, monolithic panes of glass. And I think that's not the way we should be solving it. We should be solving it towards using API simplicity and API interoperability. I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, you know, the API economy should be able to help on these, you know, multi, the service architecture should allow things to be more flexible and give me the visibility I need without trying to have to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco and congratulations on the new role. >> Thank you Stu. It was a pleasure to be here. >> All right, and stay tuned for much more coverage of theCUBE at KubeCon, CloudNativeCon. I'm Stu Miniman and thanks for watching. (light digital music)
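One concrete expression of the unified consumption model Vijoy mentions (the SMI work) is the TrafficSplit resource. The sketch below is illustrative only — the service names and weights are invented, and the exact apiVersion and the set of meshes that honor it vary — but it shows how a single vendor-neutral object can describe a canary split that different service mesh control planes then implement.

```yaml
# Hypothetical canary split expressed with the Service Mesh Interface (SMI).
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout          # root service that clients address
  backends:
  - service: checkout-v1     # current version keeps most of the traffic
    weight: 90
  - service: checkout-v2     # canary receives a small share
    weight: 10
```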

Published Date : Aug 18 2020


Keynote Analysis | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> From around the globe, it's theCUBE! With coverage of KubeCon and CloudNativeCon Europe 2020, virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hi, I'm Stu Miniman and welcome to theCUBE's coverage of KubeCon CloudNativeCon 2020 in Europe. Of course the event this year was supposed to be in the Netherlands, I know I was very much looking forward to going to Amsterdam. This year of course it's going to be virtual, I'm really excited theCUBE's coverage, we've got some great members of the CNCF, we've got a bunch of end users, we've got some good thought leaders, and I'm also bringing a little bit of the Netherlands to help me bring in and start this keynote analysis, happy to welcome back to the program my cohost for the show, Joep Piscaer, who is an industry analyst with TLA. Thank you, Joep, so much for joining us, and we wish we could be with you in person, and check out your beautiful country. >> Absolutely, thanks for having me Stu, and I'm still a little disappointed we cannot eat the (indistinct foreign term) rijsttafel together this year. >> Oh, yeah, can we just have a segment to explain to people the wonder that is the fusion of Indonesian food and the display that you get only in the Netherlands? Rijsttafel, I seriously had checked all over the US and Canada, when I was younger, to find an equivalent, but one of my favorite culinary delights in the world, but we'll have to put a pin in that. You've had some warm weather in the Netherlands recently, and so many of the Europeans take quite a lot of time off in July and August, but we're going to talk about some hardcore tech, KubeCon, a show we love doing, the European show brings good diversity of experiences and customers from across the globe. So, let's start, the keynote, Priyanka Sharma, the new general manager of the CNCF, of course, just some really smart people that come out and talk about a lot of things. And since it's a foundation show, there's some news in there, but it's more about how they're helping corral all of these projects, of course, a theme we've talked about for a while is KubeCon was the big discussion for many years about Kubernetes, still important, and we'll talk about that, but so many different projects and everything from the sandbox, their incubation, through when they become fully, generally available, so, I guess I'll let you start and step back and say when you look at this broad ecosystem, you work with vendors, you've been from the customer side, what's top of mind for you, what's catching your attention? >> So, I guess from a cloud-native perspective, looking at the CNCF, I think you hit the nail on the head. This is not about any individual technology, isn't about just Kubernetes or just Prometheus, or just service mesh. I think the added value of the CNCF, and the way I look at it at least, looking back at my customer perspective, I would've loved to have a organization curate the technology world around me, for me. To help me out with the decisions on a technology perspective that I needed to make to kind of move forward with my IT stack, and with the requirements my customer had, or my organization had, to kind of move that into the next phase. 
That is where I see the CNCF come in and do their job really well, to help organizations, both on the vendor side as well as on the customer side, take that next step, see around the corner, what's new, what's coming, and also make sure that between different, maybe even competing standards, the right ones surface up and become the de facto standard for organizations to use. >> Yeah, a lot of good thoughts there, Joep, I want to walk through that stack a little bit, but before we do, big statement that Priyanka made, I thought it was a nice umbrella for her keynote, it's a foundation of doers powering end user driven open-source, so as I mentioned, you worked at a service provider, you've done strategies for some other large organizations, what's your thought on the role of how the end users engage with and contribute to open-source? One of the great findings I saw a couple years ago, as you said, it went from open-source being something that people did on the weekend to the sides, to many end users, and of course lots of vendors, have full-time people that their jobs are to contribute and participate in the open-source communities. >> Yeah, I guess that kind of signals a maturity in the market to me, where organizations are investing in open-source because they know they're going to get something out of it. So back in the day, it was not necessarily certain that if you put a lot of effort into an open-source project, for your own gain, for your own purposes, that that would work out, and that with the backing of the CNCF, as well as so many member organizations and end user organizations, I think participating in open-source becomes easier, because there's more of a guarantee that what you put in will kind of circulate, and come out and have value for you, in a different way. Because if you're working on a service mesh, some other organization might be working on Prometheus, or Kubernetes, or another project, and some organizations are now kind of helping each other with the CNCF as the gatekeeper, to move all of those technology stacks forward, instead of everyone doing it for themselves. Maybe even being forced to reinvent the wheel for some of those technology components. >> So let's walk through the stack a little bit, and the layers that are out there, so let's start with Kubernetes, the discussion has been Kubernetes won the container orchestration battles, but whose Kubernetes am I going to use? For a while it was would it be distributions, we've seen every platform basically has at least one Kubernetes option built into it, so doesn't mean you're necessarily using this, before AWS had their own flavor of Kubernetes, there was at least 15 different ways that you could run Kubernetes on top of it, but now they have ECS, they have EKS, even things like Fargate now work with EKS, so interesting innovation and adoption there. But VMware baked Kubernetes into vSphere 7. Red Hat of course, with OpenShift, has thousands of customers and has great momentum, we saw SUSE buy Rancher to help them move along and make sure that they get embedded there. 
One of the startups you've worked with, Spectro Cloud, helps play into the mix there, so there is no shortage of options, and then from a management standpoint, companies like Microsoft, Google, VMware, Red Hat, all, how do I manage across clusters, because it's not going to just be one Kubernetes that you're going to use, we're expecting that you're going to have multiple options out there, so it sure doesn't sound boring to me yet, or reached full maturity, Joep. What's your take, what advice do you give to people out there when they say "Hey, okay, I'm going to use Kubernetes," I've got hybrid cloud, or I probably have a couple things, how should they be approaching that and thinking about how they engage with Kubernetes? >> So that's a difficult one, because it can go so many different ways, just because, like you said, the market is maturing. Which means, we're kind of back at where we left off virtualization a couple years ago, where we had managers of managers, managing across different data centers, doing the multicloud thing before it was a cloud thing. We have automation doing day two operations, I saw one of the announcements for this week will be a vendor coming out with day two operations automation, to kind of help simplify that stack of Kubernetes in production. And so the best advice I think I have is, don't try to do it all yourself, right, so Kubernetes is still maturing, it is still fairly open, in a sense that you can change everything, which makes it fairly complex to use and configure. So don't try and do that part yourself, necessarily, either use a managed service, which there are a bunch of, Spectro Cloud, for example, as well as Platform9, even the bigger players are now having those platforms. Because in the end, Kubernetes is kind of the foundation of what you're going to do on top of it. Kubernetes itself doesn't have business value in that sense, so spending a lot of time, especially at the beginning of a project, figuring that part out, I don't think makes sense, especially if the risk and the impact of making mistakes is fairly large. Like, make a mistake in a monitoring product, and you'll be able to fix that problem more easily. But make a mistake in a Kubernetes platform, and that's much more difficult, especially because I see organizations build one cluster to rule them all, instead of leveraging what the cloud offers, which is just spin up another cluster. Even spin it up somewhere else, because we can now do the multicloud thing, we can now manage applications across Kubernetes clusters, we can manage many different clusters from a single pane of glass, so there's really no reason anymore to see that Kubernetes thing as something really difficult that you have to do yourself, hence just do it once. Instead, my recommendation would be to look at your processes and figure out, how can I figure out how to have a Kubernetes cluster for everything I do, maybe that's per team, maybe that's per application or per environment, per cloud, and they kind of work from that, because, again, Kubernetes is not the holy grail, it's not the end state, it is a means to an end, to get where we're going with applications, with developing new functionality for customers. >> Well, I think you hit on a really important point, if you look out in the social discussion, sometimes Kubernetes and multicloud get attacked, because when I talk to customers, they shouldn't have a Kubernetes strategy. 
They have their business strategy, and there are certain things that they're trying to, "How do I make sure everything's secure," and I'm looking at DevSecOps, I need to really have an edge computing strategy because that's going to help my business objectives, and when I look at some of the tools that are going to help and get me there, well, Kubernetes, the service meshes, some of the other tools in the CNCF are going to help me get there, and as you said, I've got managed services, cloud providers, integrators are going to help me build those solutions without me having to spend years to understand how to do that. So yeah, I'd love to hear any interesting projects you're hearing about, edge computing, the security space has gone from super important to even more important if that's possible in 2020. What are you hearing? >> Yeah, so the most interesting part for me is definitely the DevSecOps movement, where we're basically not even allowed to call it DevOps anymore. Security has finally gained a foothold, they're finally able to shift lift the security practices into the realm of developers, simplifying it in a way, and automating it in a way that, it's no longer a trivial task to integrate security. And there's a lot of companies supporting that, even from a Kubernetes perspective, integrating with Kubernetes or integrating with networking products on top of Kubernetes. And I think we finally have reached a moment in time where security is no longer something that we really need to think about. Again, because CNCF is kind of helping us select the right projects, helping us in the right direction, so that making choices in the security realm becomes easier, and becomes a no-brainer for teams, special security teams, as well as the application development teams, to integrate security. >> Well, Joep, I'm glad to hear we've solved security, we can all go home now. That's awesome. But no, in all seriousness, such an important piece, lots of companies spending time on there, and it does feel that we are starting to get the process and organization around, so that we can attack these challenges a little bit more head-on. How 'about service mesh, it's one of those things that's been a little bit contentious the last couple of years, of course ahead of the show, Google is not donating Istio to the foundation, instead, the trademark's open. I'm going to have an interview with Liz Rice to dig into that piece, in the chess moves, Microsoft is now putting out a service mesh, so as Corey Quinn says, the plural of service mesh must be service meeshes, so, it feels like Mr. Meeseeks, for any Rick and Morty fans, we just keep pressing the button and more of them appear, which may cause us more trouble, but, what's your take, do you have a service mesh coming out, Kelsey Hightower had a fun little thing on Twitter about it, what's the state of the state? >> Yeah, so I won't be publishing a service mesh, maybe I'll try and rickroll someone, but we'll see what happens. But service meshes are, they're still a hot topic, it's still one of the spaces where most discussion is kind of geared towards. There is yet to form a single standard, there is yet a single block of companies creating a front to solve that service mesh issue, and I think that's because in the end, service meshes are, from a complexity perspective, they're not mature enough to be able to commoditize into a standard. I think we still need a little while, and maybe ask me this question next year again, and we'll see what happens. 
But we'll still need a little while to kind of let this market shift and let this market innovate, because I don't think we've reached the end state with service meshes. Also kind of gauging from customer interest and actual production implementations, I don't think this has trickled down from the largest companies that have the most requirements into the smaller companies, the smaller markets, which is something that we do usually see, now Kubernetes is definitely doing that. So in terms of service meshes, I don't think the innovation has reached that endpoint yet, and I think we'll still need a little while, which will mean for the upcoming period, that we'll kind of see this head to head from different companies, trying to gain a foothold, trying to lead a market, introduce their own products. And I think that's okay, and I think the CNCF will continue to kind of curate that experience, up to a point where maybe somewhere in the future we will have a noncompeting standard to finally have something that's commoditized and easy to implement. >> Yeah, it's an interesting piece, one of the things I've always enjoyed when I go to the show is just wander, and the things you bump into are like "Oh my gosh, wow, look at all of these cool little projects." I don't think we are going to stop that Cambrian explosion of innovation and ideas. When you go walk around there's usually over 200 vendors there, and a lot of them are opensource projects. I would say many of them, when you have a discussion with them, I'm not sure that there's necessarily a business behind that project, and that's where you also see maturity in spaces. A year or so ago, in the observability space, open tracing helped pull together a couple of pieces. Storage is starting to mature. Doesn't mean we're going to get down to one standard, there's still a couple of storage engines out there, I have some really good discussions this week to go into that, but it goes from, "Boy, storage is a mess," to "Oh, okay, we have a couple of uses," and just like storage in the data center, there's not a box or a protocol to do anything, it's what's your use case, what performance, what clouds, what environments are you living on, and therefore you can do that. So it's good to see lots of new things added, but then they mature out and they consolidate, and as you said, the CNCF is help giving those roadmaps, those maps, the landscapes, which boy, if you go online, they have some really good tools. Go to CNCF, the website, and you can look through, Cheryl Hung put one, I'm trying to remember which, it's basically a bullseye of the ones that, here's the one that's fully baked, and here's the ones that are making its way through, and the customer feedback, and they're going to do more of those to help give guidance, because no one solution is going to fit everybody's needs, and you have these spectrums of offerings. Wild card for you, are there any interesting projects out there, new things that you're hearing about, what areas should people be poking around that might not be the top level big things? >> So, I guess for me, that's really personal because I'm still kind of an infrastructure geek in that sense. So one of the things that really surprised me was a more traditional vendor, Zerto in this case, with a fantastic solution, finally, they're doing data protection for Kubernetes. 
And my recommendation would be to look at companies like Zerto in the data protection space, finally making that move into containers, because even though we've completed the discussion, stateful versus stateless, there's still a lot to be said for thinking about data protection, if you're going to go all-in into containers and into Kubernetes, so that was one that really provoked my thoughts, I really was interested in seeing, "Okay, what's Zerto doing in this list of CNCF members?" And for that matter, I think other vendors like VMware, like Red Hat, like other companies that are moving into this space, with a regained trust in their solutions, is something that I think is really interesting, and absolutely worth exploring during the event, to see what those more traditional companies, to use the term, are doing to innovate with their solutions, and kind of helping the CNCF and the cloud data world, become more enterprise-ready, and that's kind of the point I'm trying to make, where for the longest time, we've had this cloud-native versus traditional, but I always thought of it like cloud-native versus enterprise-ready, or proven technology. This is kind of for the developers doing a new thing, this is for the IT operations teams, and we're kind of seeing those two groups, at least from a technology perspective, being fused into one new blood group, making their way forward and innovating with those technologies. So, I think it's interesting to look at the existing vendors and the CNCF members to see where they're innovating. >> Well, Joep, you connected a dotted line between the cloud-native insights program that I've been doing, you were actually my first guest on that. We've got a couple of months worth of episodes out there, and it is closing that gap between what the developers are doing and what the enterprise was, so absolutely, there's architectural pieces, Joep, like you, I'm an infrastructure geek, so I come from those pieces, and there was that gap between, I'm going to use VMs, and now I'm using containers, and I'm looking at things like serverless too, how do we built applications, and is it that bottom-up versus top-down, and what a company's needs, they need to be able to react fast, they need to be able to change along the way, they need to be able to take advantage of the innovation that ecosystems like this have, so, I love the emphasis CNCF has, making sure that the end users are going to have a strong voice, because as you said, the big companies have come in, not just VMware and Red Hat, but, IBM and Dell are behind those two companies, and HPE, Cisco, many others out there that the behemoths out there, not to mention of course the big hyperscale clouds that helped start this, we wouldn't have a lot of this without Google kicking off with Kubernetes, AWS front and center, and an active participant here, and if you talk to the customers, they're all leveraging it, and of course Microsoft, so it is a robust, big ecosystem, Joep, thank you so much for helping us dig into it, definitely hope we can have events back in the Netherlands in the near future, and great to see you as always. >> Thanks for having me. >> All right, stay tuned, we have, as I said, full spectrum of interviews from theCUBE, they'll be broadcasting during the three days, and of course go to theCUBE.net to catch all of what we've done this year at the show, as well as all the back history. Feel free to reach out to me, I'm @Stu on Twitter, and thank you, as always, for watching theCUBE. (calm music)
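Joep's advice in this conversation, spin up a cluster per team, per application, or per environment instead of building one cluster to rule them all, reduces to a small piece of configuration once a managed service is in play. A minimal sketch, assuming AWS EKS via eksctl purely as an illustration; the team names, region, and node sizes below are invented:

```python
# A sketch, not production code: render one eksctl ClusterConfig per team or
# environment instead of maintaining a single shared cluster. Assumes the
# eksctl CLI and the PyYAML package are available; all names, the region,
# and the node sizes are illustrative.
import subprocess
import yaml

ENVIRONMENTS = ["payments-dev", "payments-prod", "search-dev"]

def cluster_config(name: str) -> dict:
    # Minimal eksctl ClusterConfig; real setups add VPC, IAM, and addon settings.
    return {
        "apiVersion": "eksctl.io/v1alpha5",
        "kind": "ClusterConfig",
        "metadata": {"name": name, "region": "eu-west-1"},
        "managedNodeGroups": [
            {"name": f"{name}-workers", "instanceType": "m5.large", "desiredCapacity": 2},
        ],
    }

for env in ENVIRONMENTS:
    path = f"{env}.yaml"
    with open(path, "w") as f:
        yaml.safe_dump(cluster_config(env), f)
    # Print the command rather than running it blindly; uncomment to create the clusters.
    print("would run: eksctl create cluster -f", path)
    # subprocess.run(["eksctl", "create", "cluster", "-f", path], check=True)
```

The same pattern works with any managed Kubernetes offering; the point is that the cluster boundary becomes cheap enough to line up with how teams and environments are already organized, rather than something to hand-tend once and protect forever.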

Published Date : Aug 18 2020


Vijoy Pandey, Cisco | KubeCon + CloudNativeCon Europe 2020


 

(upbeat music) >> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation, and the ecosystem partners. >> Hi, and welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2020 in Europe, of course, the virtual edition. I'm Stu Miniman, and happy to welcome you back to the program. One of the keynote speakers is also a board member of the CNCF, Vijoy Pandey, who is the Vice President and Chief Technology Officer for Cloud at Cisco. Vijoy, nice to see you, thanks so much for joining us. >> Hi there, Stu, so nice to see you again. It's a strange setting to be in, but as long as we are both healthy, everything's good. >> Yeah, we still get to be together a little bit even though while we're apart. We love the the engagement and interaction that we normally get to the community, but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Networking, Please Evolve". I've been watching that journey. But why don't we start at first, you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so let's start there. So I've taken on a new responsibility. It's VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads that and she reports on to Chuck. The charter for the team, this new team, is to incubate the next bets for Cisco. And if you can imagine, it's natural for Cisco to start with bets which are closer to its core business. But the charter for this group is to move further and further out from Cisco's core business and take Cisco into newer markets, into newer products, and newer businesses. I'm running the engineering and resource for that group. And again, the whole deal behind this is to be a little bit nimble, to be a little bit, to startupy in nature, where you bring ideas, you incubate them, you iterate pretty fast, and you throw out 80% of those, and concentrate on the 20% that makes sense to take forward as a venture. >> Interesting. So it reminds me a little bit but different, I remember John Chambers, a number of years back, talking about various adjacencies trying to grow those next multi-billion dollar businesses inside Cisco. In some ways, Vijoy, it reminds me a little bit of your previous company, very well known for driving innovation, giving engineers 20% of their time to work on things, maybe give us a little bit insight, what's kind of an example of a bet that you might be looking at in this space, bring us in tight a little bit. >> Well, that's actually a good question. And I think a little bit of that comparison is all those conversations are taking place within Cisco as well as to how far out from Cisco's core business do we want to get when we're incubating these bets? And yes, my previous employer, I mean, Google X actually goes pretty far out when it comes to incubations, the core business being primarily around ads, now Google Cloud as well. But you have things like Verily and Calico, and others, which are pretty far out from where Google started. And the way we're looking at the these things within Cisco is, it's a new muscle for Cisco, so we want to prove ourselves first. 
So the first few bets that we are betting upon are pretty close to Cisco's core but still not fitting into Cisco's BU when it comes to, go to market alignment or business alignment. So one of the first bets that we're taking into account is around API being the queen when it comes to the future of infrastructure, so to speak. So it's not just making our infrastructure consumable as infrastructure as code but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I've talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, wherever they happen to be, of whatever type they happen to be. And one of the bets that we are, actually two of the bets, that we're going after is around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking or app networking is one big bet that we're going after. Another big bet is around API security. And that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, but moving into API security, which is earlier in the development pipeline, and higher up in the stack. So those are two big bets that we're going after. And as you can see, they're pretty close to Cisco's core business, but also are very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away from Cisco's core. >> All right, Vijoy, why don't you give us the update about how Cisco is leveraging and participating in open source? >> So I think we've been pretty, deeply involved in open source in our past. We've been deeply involved in Linux Foundation Networking. We've actually chartered FD.io as a project there and we still are. We've been involved in OpenStack, we have been supporters of OpenStack. We have a couple of products that are around the OpenStack offering. And as you all know, we've been involved in CNCF, right from the get-go, as a foundation member. We brought NSM as a project. I had Sandbox currently, but we're hoping to move it forward. But even beyond that, I mean, we are big users of open source, a lot of those has offerings that we have from Cisco, and you will not know this if you're not inside of Cisco. But Webex, for example, is a big, big user of Linkerd, right from the get-go, from version 1.0, but we don't talk about it, which is sad. I think, for example, we use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we're pretty good, pretty deep users internally in our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source, quite a bit moving forward as part of ET&I, Emerging Tech and Incubation, as well. So you will see more of us in open source forums, not just CNCF, but very recently, we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we actually are big believers in handling data in ethical and privacy-preserving ways. 
So that's actually something that enticed us to join Linux Foundation for Public Health, and we will be working very closely with Dan and foundational companies that do not just bring open source but also evangelize and use what comes out of that forum. >> All right, well, Vijoy, I think it's time for us to dig into your keynote. We've we've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving on. And big focus you talked about was SD-WAN. Of course, anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those just critical foundational pieces of companies enabling multi-cloud. So help explain to our audience a little bit, what do you mean when you talk about things like Cloud Native SD-WAN and how that helps people really enable their applications in the modern environment? >> Yes, well, I mean, we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time where prior to SD-WAN existing, you had to stitch all of these MPLS labels and actually get your connectivity across to your enterprise or branch. And SD-WAN came in and changed the game there, but I think SD-WAN, as it exists today, is application-unaware. And that's one of the big things that I talk about in my keynote. Also, we've talked about how NSM, the other side of the spectrum, is how NSM or Network Service Mesh has actually helped us simplify operational complexities, simplify the ticketing and process health that any developer needs to go through just to get a multi-cloud, multi-cluster app up and running. So the keynote actually talked about bringing those two things together, where we've talked about using NSM in the past in chapter one and chapter two. And I know this is chapter three, and at some point, I would like to stop the chapters. I don't want this like an encyclopedia of "Networking, Please Evolve". But we are at chapter three, and we are talking about how you can take the same consumption models that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extending that notion of multi-cloud, multi-cluster wires within the components of our application, but extending it all the way down to the user in an enterprise. And as we saw an example, Gavin Belson is trying to give a keynote holographically and he's suffering from SD-WAN being application-unaware. And using this construct of a simple annotation, we can actually make SD-WAN cloud native, we can make it application-aware, and we can guarantee the SLOs, that Gavin is looking for, in terms of 3D video, in terms of file access for audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. Vijoy, the modernization journey that customers are on is a never-ending story. I understand the chapters need to end on the current volume that you're working on, but we'd love to get your viewpoint. You talk about things like service mesh, it's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the kind of real challenges but opportunities that they see in today's cloud native space? >> In general, service meshes are here to stay. 
In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening, where not only are we seeing different service meshes coming into the picture through various open source mechanisms. You've got Istio there, you've Linkerd, you've got various proprietary notions around control planes like App Mesh, from Amazon, there's Consul, which is an open source project, but not part of CNCF today. So there's a whole bunch of service meshes in terms of control planes coming in. Envoy is becoming a de facto sidecar data plane, whatever you would like to call it, de facto standard there, which is good for the community, I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment, and that's here to stay. In fact, we are seeing a whole bunch of things that we would use different tools for, like API gateways in the past, and those functions actually rolling into service meshes. And so I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together. And that's something that we are trying to focus in on as well. Because that's something that our customers are asking for. >> Yeah, actually, you connected for me something I wanted to get your viewpoint on, go dial back, 10, 15 years ago, and everybody would say, "Oh, I really want to have a single pane of glass "to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, not that long before this event, Google had their Google Cloud Show, talking about the partnership that you have with, Cisco with Google. They have Anthos, you look at Azure has Arc, VMware has Tanzu. Everybody's talking about really the kind of this multi-cluster management type of solution out there, and just want to get your viewpoint on this Vijoy as to how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, I think this is where I think we need to be careful as an industry, as a community and make things simpler for our customers. Because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all these things together? I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes as you saw in the SD-WAN announcement back in the Google discussion that we just, Google conference that you just referred. It's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser there that you saw earlier in the keynote where we are taking those constructs that we talked about in the Google conference and bringing it all the way to a cloud native environment in the keynote. But I think the bigger problem here is how do we manage this complexity of this pallet stacks? Whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments, how do we manage that complexity? And single pane of glass is overloaded as a term, because it brings in these notions of big monolithic panes of glass. And I think that's not the way we should be solving it. 
We should be solving it towards using API simplicity and API interoperability. And I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, the API economy should be able to help on these, the service architecture should allow things to be more flexible and give me the visibility I need without trying to have to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco, and congratulations on the new role. >> Thank you, Stu. It was a pleasure to be here. >> All right, and stay tuned for lots more coverage of theCUBE at KubeCon + CloudNativeCon. I'm Stu Miniman. Thanks for watching. (upbeat music)
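Vijoy's point about SMI unifying the consumption model is concrete in the spec's TrafficSplit resource: one object expresses weighted routing regardless of which SMI-compatible mesh sits underneath. A minimal sketch using the Kubernetes Python client; the namespace, service names, weights, and the exact SMI API version (v1alpha2 here) are assumptions to check against whatever your mesh actually serves:

```python
# A sketch: create an SMI TrafficSplit sending 90% of traffic to the current
# version of a service and 10% to a canary. Assumes a cluster whose mesh
# implements the SMI split API and the `kubernetes` Python client; names,
# namespace, weights, and the API version are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha2",
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-canary", "namespace": "shop"},
    "spec": {
        "service": "checkout",  # the root service that clients call
        "backends": [
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},
        ],
    },
}

api.create_namespaced_custom_object(
    group="split.smi-spec.io",
    version="v1alpha2",
    namespace="shop",
    plural="trafficsplits",
    body=traffic_split,
)
print("created TrafficSplit checkout-canary in namespace shop")
```

Because the application team writes against the SMI object rather than a mesh-specific CRD, the mesh underneath can change without rippling into delivery pipelines, which is the interoperability Vijoy is arguing for.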

Published Date : Jul 28 2020


Sanjay Poonen, VMware | AWS Summit Online 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello, welcome back to theCUBE's coverage, CUBE Virtual's coverage, CUBE digital coverage, of AWS Summit, virtual online, Amazon Summit's normally in face-to-face all around the world, it's happening now online, follow the sun. Of course, we want to bring theCUBE coverage like we do at the events digitally, and we've got a great guest that usually comes on face-to-face, he's coming on virtual, Sanjay Poonen, the chief operating officer of VMware. Sanjay great to see you, thanks for coming in virtually, you look great. >> Hey, John thank you very much. Always a pleasure to talk to you. This is the new reality. We both happen to live very close to each other, me in Los Altos, you in Palo Alto, but here we are in this new mode of communication. But the good news is I think you guys at theCUBE were pioneering a lot of digital innovation, the AI platform, so hopefully it's not much of an adjustment for you guys to move digital. >> It's not really a pivot, just move the boat, put the sails up and sail into the next generation, which brings up really the conversation that we're seeing, which is this digital challenge, the virtual world, it's virtualization, Sanjay, it sounds like VMware. Virtualization spawned so much opportunity, it created Amazon, some say, I'd say. Virtualizing our world, life is now integrated, we're immersed into each other, physical and digital, you got edge computing, you got cloud native, this is now a clear path to customers that recognize with the pandemic challenges of at-scale, that they have to operate their business, reset, reinvent, and grow coming out of this pandemic. This has been a big story that we've been talking about and a lot of smart managers looking at projects saying, I'm doubling down on that, and I'm going to move the resources from this, the people and budget, to this new reality. This is a tailwind for the folks who were prepared, the ones that have the experience, the ones that did the work. theCUBE, thanks for the props, but VMware as well. Your thoughts and reaction to this new reality, because it has to be cloud native, otherwise it doesn't work, your thoughts. >> Yeah, I think, John, you're right on. We were very fortunate as a company to invent the term virtualization for an x86 architecture and the category 20 years ago when Diane founded this great company. And I would say you're right, the public cloud is the instantiation of virtualization at its sort of scale format and we're excited about this Amazon partnership, we'll talk more about that. This new world of doing everything virtual has taken the same concepts to whole new levels. We are partnering very closely with companies like Zoom, because a good part of this is being able to deliver video experiences in there, we'll talk about that if needed. Cloud native security, we announced an acquisition today in container security that's very important because we're making big moves in security, security's become very important. I would just say, John, the first thing that was very important to us as we began to shelter in place was the health of our employees. Ironically, if I go back to, in January I was in Davos, in fact some of your other folks who were on the show earlier, Matt Garman, Andy, we were all there in January. The crisis already started in China, but it wasn't on the world scene as much of a topic of discussion. 
Little did we know, three, four weeks later, fast forward to February things were moving so quickly. I remember a Friday late in February where we were just about to go the next week to Las Vegas for our in-person sales kickoffs. Thousands of people, we were going to do, I think, five or 6,000 people in Las Vegas and then another 3,000 in Barcelona, and then finally in Singapore. And it had not yet been categorized a pandemic. It was still under this early form of some worriable virus. We decided for the health and safety of our employees to turn the entire event that was going to happen on Monday to something virtual, and I was so proud of the VMware team to just basically pivot just over the weekend. To change our entire event, we'd been thinking about video snippets. We have to become in this sort of virtual, digital age a little bit like TV producers like yourself, turn something that's going to be one day sitting in front of an audience to something that's a lot shorter, quicker snippets, so we began that, and the next thing we began doing over the next several weeks while the shelter in place order started, was systematically, first off, tell our employees, listen, focus on your health, but if you're healthy, turn your attention to serving your customers. And we began to see, which we'll talk about hopefully in the context of the discussion, parts of our portfolio experience a tremendous amount of interest for a COVID-centered world. Our digital workplace solutions, endpoint security, SD-WAN, and that trifecta began to be something that we began to see story after story of customers, hospitals, schools, governments, retailers, pharmacies telling us, thank you, VMware, for helping us when we needed those solutions to better enable our people on the front lines. And all VMware's role, John, was to be a digital first responder to the first responder, and that gave tremendous amount of motivation to all of our employees into it. >> Yeah, and I think that's a great point. One of the things we've been talking about, and you guys have been aligned with this, you mentioned some of those points, is that as we work at home, it points out that digital and technology is now part of lifestyle. So we used to talk about consumerization of IT, or immersion with augmented reality and virtual reality, and then talk about the edge of the network as an endpoint, we are at the edge of the network, we're at home, so this highlights some of the things that are in demand, workspaces, VPN provisioning, these new tools, that some cases we've been hearing people that no one ever thought of having a forecast of 100% VPN penetration. Okay, you did the AirWatch deal way back when you first started, these are now fruits of those labors. So I got to ask you, as managers of your customer base are out there thinking, okay, I got to double down on the right growth strategy for this post-pandemic world, the smart managers are going to look at the technologies enabled for business outcome, so I have to ask you, innovation strategies are one thing, saying it, putting it place, but now more than ever, putting them in action is the mandate that we're hearing from customers. Okay I need an innovation strategy, and I got to put it into action fast. What do you say to those customers? What is VMware doing with AWS, with cloud, to make those innovation strategies not only plausible but actionable? >> That's a great question, John. 
We focused our energy, before even COVID started, as we prepared for this year, going into sales kickoffs and our fiscal year, around five priorities. Number one was enabling the world to be multicloud, private cloud and public cloud, and clearly our partnership here with Amazon is the best example of that and they are our preferred cloud partner. Secondly, building modern apps with microservices and cloud native, what we call app modernization. Thirdly, which is a key part to the multicloud, is building out the entire network stack, data center networking, the firewalls, the load bouncing in SD-WAN, so I'd call that cloud network. Number four, the modernization of workplace with an additional workspace solution, Workspace ONE. And five, intrinsic security from all aspects of security, network, endpoint, and cloud. So those five priorities were what we began to think through, organize our portfolio, we call them solution pillars, and for any of your viewers who're interested, there's a five-minute version of the VMware story around those five pillars that you can watch on YouTube that I did, you just search for Sanjay Poonen and five-minute story. But then COVID hit us, and we said, okay we got to take these strategies now and make them more actionable. Exactly your question, right? So a subset of that portfolio of five began to become more actionable, because it's pointless going and talking about stuff and it's like, hey, listen, guys, I'm a house on fire, I don't care about the curtains and all the wonderful art. You got to help me through this crisis. So a subset of that portfolio became kind of what was those, think about now your laptop at home, or your endpoint at home. People wanted, on top of their Zoom call, or surrounding their Zoom call, a virtual desktop managed easily, so we began to see Workspace ONE getting a lot of interest from our customers, especially the VDI part of that portfolio. Secondly, that laptop at home needed to be secured. Traditional, old, legacy AV solutions that've worked, enter Carbon Black, so Workspace ONE plus Carbon Black, one and two. Third, that laptop at home needs network acceleration, because we're dialoguing and, John, we don't want any latency. Enter SD-WAN. So the trifecta of Workspace ONE, Carbon Black and VeloCloud, that began to see even more interest and we began to hone in our portfolio around those three. So that's an example of where you have a general strategy, but then you apply it to take action in the midst of a crisis, and then I say, listen, that trifecta, let's just go and present what we can do, we call that the business continuity or business resilience part of our portfolio. We began to start talking to customers, and saying, here's our business continuity solution, here's what we could do to help you, and we targeted hospitals, schools, governments, pharmacies, retailers, the ones who're on the front line of this and said again, that line I said earlier, we want to be a digital first responder to you, you are the real first responder. Right before this call I got off a CIO call with the CIO of a major hospital in the northeast area. What gives me great joy, John, is the fact that we are serving them. Their beds are busting at the seam, in serving patients-- >> And ransomware's a huge problem you guys-- >> We're serving them. 
>> And great stuff there, Sanjay, I was just on a call this morning with a bunch of folks in the security industry, thought leaders, was in DC, some generals were there, some real thought leaders, trying to figure out security policy around biosecurity, COVID-19, and this invisible disruption, and they were equating it to like the World Wars. Big inflection point, and one of the generals said, in those times of crisis you need alliances. So I got to ask you, COVID-19 is impactful, it's going to have serious impact on the critical nature of it, like you said, the house is on fire, don't worry about the curtains. Alliances matter more than ever when you need to come together. You guys have an ecosystem, Amazon's got an ecosystem, this is going to be a really important test to the alliances out there. How do you view that as you look forward? You need the alliances to be successful, to compete and win in the new world as this invisible enemy, if you will, or disruptor happens, what's your thoughts? >> Yeah, I'll answer in a second, just for your viewers, I sneezed, okay? I've been on your show dozens of time, John, but in your live show, if I sneezed, you'd hear the loud noise. The good news in digital is I can mute myself when a sneeze is about to happen, and we're able to continue the conversation, so these are some side benefits of the digital part of it. But coming to your question on alliance, super important. Ecosystems are how the world run around, united we stand, divided we fall. We have made ecosystems, I've always used this phrase internally at VMware, sort of like Isaac Newton, we see clearly because we stand on the shoulders of giants. So VMware is always able to be bigger of a company if we stand on the shoulders of bigger giants. Who were those companies 20 years ago when Diane started the company? It was the hardware economy of Intel and then HP and Dell, at the time IBM, now Lenovo, Cisco, NetApp, DMC. Today, the new hardware companies Amazon, Azure, Google, whoever have you, we were very, I think, prescient, if you would, to think about that and build a strategic partnership with Amazon three or four years ago. I've mentioned on your show before, Andy's a close friend, he was a classmate over at Harvard Business School, Pat, myself, Ragoo, really got close to Andy and Matt Garman and Mike Clayville and several members of their teams, Teresa Carlson, and began to build a partnership that I think is one of the most incredible success stories of a partnership. And Dell's kind of been a really strong partner with us on private cloud, having now Amazon with public cloud has been seminal, we do regular meetings and build deep integration of, VMware Cloud and AWS is not some announcement two or three years ago. It's deep engineering between, Bask's now in a different role, but in his previous role, that and people like Mark Lohmeyer in our team. And that deep engineering allows us to know and tell customers this simple statement, which both VMware and Amazon reps tell their customers today, if you have a workload running on vSphere, and you want to move that to Amazon, the best place, the preferred place for that is VMware Cloud and Amazon. If you try to refactor that onto a native VC 2, it's a waste of time and money. So to have the entire army of VMware and Amazon telling customers that statement is a huge step, because it tells customers, we have 70 million virtual machines running on-prem. 
If customers are looking to move those workloads to Amazon, the best place for that VMware Cloud and AWS, and we have some credible customer case studies. Freddie Mac was at VMworld last year. IHS Markit was at VMworld last year talking about it. Those are two examples and many more started it, so we would like to have every VMware and Amazon customer that's thinking about VMware to look at this partnership as one of the best in the industry and say very similar to what Andy I think said on stage at the time of this announcement, it doesn't have to be now a trade-off between public and private cloud, you can get the best of both worlds. That's what we're trying to do here-- >> That's a great point, I want to get your thoughts on leadership, as you look at COVID-19, one of our tracks we're going to be promoting heavily on theCUBE.net and our sites, around how to manage through this crisis. Andy Jassy was quoted on the fireside chat, which is coming up here in North America, but I saw it yesterday in New Zealand time as I time shifted over there, it's a two-sided door versus a one-sided door. That was kind of his theme is you got to be able to go both ways. And I want to get your thoughts, because you might know what you're doing in certain contexts, but if you don't know where you're going, you got to adjust your tactics and strategies to match that, and there's and old expression, if you don't know where you're going, every road will take you there, okay? And so a lot of enterprise CXOs or CEOs have to start thinking about where they want to go with their business, this is the growth strategy. Then you got to understand which roads to take. Your thoughts on this? Obviously we've been thinking it's cloud native, but if I'm a decision maker, I want to make sure I have an architecture that's going to carry me forward to the future. I need to make sure that I know where I'm going, so I know what road I'm on. Versus not knowing where I'm going, and every road looks good. So your thoughts on leadership and what people should be thinking around knowing what their destination is, and then the roads to take? >> John, I think it's the most important question in this time. Great leaders are born through crisis, whether it's Winston Churchill, Charles de Gaulle, Roosevelt, any of the leaders since then, in any country, Mahatma Gandhi in India, the country I grew up, Nelson Mandela, MLK, all of these folks were born through crisis, sometimes severe crisis, they had to go to jail, they were born through wars. I would say, listen, similar to the people you talked about, yeah, there's elements of this crisis that similar to a World War, I was talking to my 80 year old father, he's doing well. I asked him, "When was the world like this?" He said, "Second World War." I don't think this crisis is going to last six years. It might be six or 12 months, but I really don't think it'll be six years. Even the health care professionals aren't. So what do we learn through this crisis? It's a test of our leadership, and leaders are made or broken during this time. I would just give a few guides to leaders, this is something tha, Andy's a great leader, Pat, myself, we all are thinking through ways by which we can exercise this. Think of Sully Sullenberger who landed that plane on the Hudson. Did he know when he flew that airbus, US Airways airbus, that few flock of birds were going to get in his engine, and that he was going to have to land this plane in the Hudson? 
No, but he was making decisions quickly, and what did he exude to his co-pilot and to the rest of staff, calmness and confidence and appropriate communication. And I think it's really important as leaders, first off, that we communicate, communicate, communicate, communicate to our employees. First, our obligation is first to our employees, our family first, and then of course to our company employees, all 30,000 at VMware, and I'm sure similarly Andy does it to his, whatever, 60, 70,000 at AWS. And then you want to be able to communicate to them authentically and with clarity. People are going to be reading between the lines of everything you say, so one of the things I've sought to do with my team, all the front office functions report to me, is do half an hour Zoom video conferences, in the time zone that's convenient to them, so Japan, China, India, Europe, in their time zone, so it's 10 o'clock my time because it's convenient to Japan, and it's just 10 minutes of me speaking of what I'm seeing in the world, empathizing with them but listening to them for 20 minutes. That is communication. Authentically and with clarity, and then turn your attention to your employees, because we're going stir crazy sitting at home, I get it. And we've got to abide by the ordinances with whatever country we're in, turn your attention to your customers. I've gotten to be actually more productive during this time in having more customer conference calls, video conference calls on Zoom or whatever platform with them, and I'm looking at this now as an opportunity to engage in a new way. I have to be better prepared, like I said, these are shorter conversations, they're not as long. Good news I don't have to all over the place, that's better for my family, better for the carbon emission of the world, and also probably for my life long term. And then the third thing I would say is pick one area that you can learn and improve. For me, the last few years, two, three years, it's been security. I wanted to get the company into security, as you saw today we've announced mobile, so I helped architect the acquisition of Carbon Black, very similar to kind of the moves I've made six years ago around AirWatch, very key part to all of our focus to getting more into security, and I made it a personal goal that this year, at the start of the year, before COVID, I was going to meet 1,000 CISOs, in the Fortune 1000 Global 2000. Okay, guess what, COVID happens, and quite frankly that goal's gotten a little easier, because it's much easier for me to meet a lot more people on Zoom video conferences. I could probably do five, 10 per day, and if there's 200 working days in a day, I can easily get there, if I average about five per day, and sometimes I'm meeting them in groups of 10, 20. >> So maybe we can get you on theCUBE more often too, 'cause you have access to a video camera. >> That is my growth mindset for this year. So pick a growth mindset area. Satya Nadella puts this pretty well, "Move from being a know-it-all to a learn-it-all." And that's the mindset, great company. Andy has that same philosophy for Amazon, I think the great leaders right now who are running these cloud companies have that growth mindset. Pick an area that you can grow in this time, and you will find ways to do it. You'll be able to learn online and then be able to teach in some fashion. 
So I think communicate effectively, authentically, turn your attention to serving your customers, and then pick some growth area that you can learn yourself, and then we will come out of this crisis collectively, individuals and as partners, like VMware and Amazon, and then collectively as a society, I believe we'll come out stronger. >> Awesome great stuff, great insight there, Sanjay. Really appreciate you sharing that leadership. Back to the more of technical questions around leadership is cloud native. It's clear that there's going to be a line in the sand, if you will, there's going to be a right side of history, people are going to have to be on the right side of history, and I believe it's cloud native. You're starting to see this emersion. You guys have some news, you just announced today, you acquired a Kubernetes security startup, around Kubernetes, obviously Kubernetes needs security, it's one of those key new enablers, disruptive enablers out there. Cloud native is a path that is a destination opportunity for people to think about, why that acquisition? Why that company? Why is VMware making this move? >> Yeah, we felt as we talked about our plans in security, backing up to things I talked about in my last few appearances on your show at VMworld, when we announced Carbon Black, was we felt the security industry was broken because there was too many point benders, and we figured there'd be three to five control points, network, endpoint, cloud, where we could play a much more pronounced role at moving a lot of these point benders, I describe this as not having to force our customers to go to a doctor and say I've got to eat 5,000 tablets to get healthy, you make it part of your diet, you make it part of the infrastructure. So how do we do that? With network security, we're off to the races, we're doing a lot more data center networking, firewall, load bouncing, SD-WAN. Really, reality is we can eat into a lot of the point benders there that I've just been, and quite frankly what's happened to us very gratifying in the network security area, you've seen the last few months, some firewall vendors are buying SD-WAN players, kind of following our strategy. That's a tremendous validation of the fact that the network security space is being disrupted. Okay, move to endpoint security, part of the reason we acquired Carbon Black was to unify the client side, Workspace ONE and Carbon Black should come together, and we're well under way in doing that, make Carbon Black agentless on the server side with vSphere, we're well on the way to that, you'll see that very soon. By the way both those things are something that the traditional endpoint players can't do. And then bring out new forms of workload. Servers that are virtualized by VMware is just one form of work. What are other workloads? AWS, the public clouds, and containers. Container's just another workload. And we've been looking at container security for a long time. What we didn't want to do was buy another static analysis player, another platform and replatform it. We felt that we could get great technology, we have incredible grandeur on container cell. It's sort of Red Hat and us, they're the only two companies who are doing Kubernetes scales. It's not any of these endpoint players who understand containers. So Kubernetes, VMware's got an incredible brand and relevance and knowledge there. The networking part of it, service mesh, which is kind of a key component also to this. 
We've been working with Google and others like Istio in service mesh, we got a lot of IP there that the traditional endpoint players, Symantec, McAfee, Trend, CrowdStrike, don't know either Kubernetes or service mesh well. We add now container security into this, we really distinguish ourselves further from the traditional endpoint players with bringing together, not just the endpoint platform that can do containers, but also Kubernetes service mesh. So why is that important? As people think about their future in containers, they'll want to do this at the runtime level, not at the static level. They'll want to do it at build time And they'll want to have it integrated with some of their networking capabilities like service mesh. Who better to think about that IP and that evolution than VMware, and now we bring, I think it's 12 to 14 people we're bringing in from this acquisition. Several of them in Israel, some of them here in Palo Alto, and they will build that platform into the tech that VMware has onto the Carbon Black cloud and we will deliver that this year. It's not going to be years from now. >> Did you guys talk about the-- >> Our capability, and then we can bring the best of Carbon Black, with Tanzu, service mesh, and even future innovation, like, for example, there's a big movement going around, this thing call open policy agent OPA, which is an open source effort around policy management. You should expect us to embrace that, there could be aspects of OPA that also play into the future of this container security movement, so I think this is a really great move for Patrick and his team, I'm very excited. Patrick is the CEO of Carbon Black and the leader of that security business unit, and he came to me and said, "Listen, one of the areas "we need to move in is container security "because it's the number one request I'm hearing "from our CESOs and customers." I said, "Go ahead Patrick. "Find out who are the best player you could acquire, "but you have to triangulate that strategy "with the Tanzu team and the NSX team, "and when you have a unified strategy what we should go, "we'll go an make the right acquisition." And I'm proud of what he was able to announce today. >> And I noticed you guys on the release didn't talk about the acquisition amount. Was it not material, was it a small amount? >> No, we don't disclose small, it's a tuck-in acquisition. You should think of this as really bringing us some tech and some talent, and being able to build that into the core of the platform of Carbon Black. Carbon Black was the real big move we made. Usually what we do, you saw this with AirWatch, right, anchor on a fairly big move. We paid I think 2.1 billion for Carbon Black, and then build and build and build on top of that, partner very heavily, we didn't talk about that. If there's time we could talk about it. We announced today a security alliance with top SIEM players, in what's called a sock alliance. Who's announced in there? Splunk, IBM QRadar, Google Chronicle, Sumo Logic, and Exabeam, five of the biggest SIEM players are embracing VMware in endpoint security, saying, Carbon Black is who we want to work with. Nobody else has that type of partnership, so build, partner, and then buy. But buy is always very carefully thought through, we're not one of these companies like CA of the past that just bought every company and then it becomes a graveyard of dead acquisition. Our view is we're very disciplined about how we think about acquisition. 
Acquisitions for us are often the last resort, because we'd prefer to build and partner. But sometimes for time-to-market reasons, we acquire, and when we acquire, it's thoughtful, it's well-organized within VMware, and we take care of our people, 'cause we want, I mean listen, why do acquisitions fail? Because the good people leave. So we're excited about this team, the team in Israel, and the team in Palo Alto, they come from Octarine. We're going to integrate them rapidly into the platform, and this is a good evidence of VMware investing more in security, and our Q3 earnings pulled, John, I said, sorry, we said that the security business was a billion dollar business at VMware already, primarily from network, but some from endpoint. This is evidence of us putting more fuel behind that fire. It's only been six, seven months and Patrick's made his first acquisition inside Carbon Black, so you're going to see us investing more in security, it's an important priority for the company, and I expect us to be a very prominent player in these three pillars, network security, endpoint security, endpoint is both client and the workload, and cloud. Network, endpoint, cloud, they are the three areas where we think there's lots of room for innovation in security. >> Well, we'll be watching, we'll be reporting and analyzing the moves. Great playbook, by the way. Love that organic partnering and then key acquisitions which you build around, it's a great playbook, I think it's very relevant for this time. The most important question I have to ask you, Sanjay, and this is a personal question, because you're the leader of VMware, I noticed that, we all know you're into music, you've been putting music online, kind of a virtual band. You've also hired a CUBE alumni, Victoria Verango from McAfee who also puts up music, you've got some musicians, but you kind of know how to do the digital moves there, so the question is, will the music at VMworld this year be virtual? >> Oh, man. Victoria is actually an even better musician than me. I'm excited about his marketing gifts, but I'm also excited to watch him. But yeah, you've heard him sing, he's got a voice that's somewhat similar to Sting, so we, just for fun, in our Diwali, which is an Indian celebration last year, Tom Corn, myself, and a wonderful lady named Divya, who's got a beautiful voice, had sung a song, which was off the soundtrack of the Bollywood movie, "Secret Superstar," and we just for fun decided to record that in our three separate homes, and put that out on YouTube. You can listen, it's just a two or three-minute run, and it kind of went a little bit viral. And I was thinking to myself, hey, if this is one way by which we can let the VMware community know that, hey, you know what, art conquers COVID-19, you can do music even socially distant, and bring out the spirit of VMware, which is community. So we might build on that idea, Victoria and I were talking about that last night and saying, hey, maybe we do a virtual music kind of concert of maybe 10 or 15 or 20 voices in the various different countries. Record piece of a song and music and put it out there. I think these are just ways by which we're having fun in a virtual setting where people get to see a different side of VMware where, and the intent here, we're all amateurs, John, we're not like great. There are going to be mistakes in this music. If you listen to that audio, it sounds a little tinny, 'cause we're recording it off our iPhone and our iPad microphone. 
But we'll do the best we can. The point is just to show the human spirit and to show that we care. And at the end of the day, see, the COVID-19 virus has no prejudice on color of skin, or nationality, or ethnicity. It's affecting the whole world. We all went into the tunnel at different times; we will come out of this tunnel together, and we will be a stronger human fabric when we're done with this. We shall absolutely overcome. >> Sanjay, give us a quick update to end the segment on your thoughts around VMworld. It's one of the biggest events, and we look forward to it. It's the only event left standing that theCUBE's been to every year of theCUBE's existence, and we're looking forward to being part of theCUBE virtual. It's been announced it's virtual. What is some of the thinking going on at the highest levels within the VMware community around how you're going to handle VMworld this year? >> Listen, when we began to think about it, we had to obviously give our customers and folks enough notice, so we didn't want to just spring that on them sometime this summer. So we decided to think through it carefully. I asked Robin, our CMO, to talk to many of the other CMOs in the industry. The good news is all of these are friends of ours: Amazon, Microsoft, Google, Salesforce, Adobe, and even some smaller companies. If their events were in the first half of the year, they had to go virtual because we're sheltered in place; IBM did theirs, Okta did theirs, and we began to watch how they were doing this. We're kind of in the second half, because we were August, September, and we just sensed a lot of hesitancy from customers about getting on a plane to come here. Even if we got just 500, 1,000, a few thousand, it wasn't going to be the same, and there would always be some worry even if we were getting back to that. So we figured we'd do something that might be semi-digital, and we may have some people that roam, but the bulk of it is going to be digital, and we changed the dates to be a little later. I think it's September 20th to 29th; it's all public now, we announced that, and we're going to make it a great program. In some senses it's like we're becoming a TV producer. I told our team we've got to be like Disney or ESPN or YouTube, or whoever makes your favorite show, and produce a really good several-hour program that has a different way in which digital content is provided: smaller snippets, very interesting speakers, great brand names, and content that's clear, crisp, and compelling. And if we do that, this will be, I don't know, maybe the new norm for some period of time, or it might be forever, I don't know. >> John: We're all learning. >> In the past we had huge conferences that were busting 50, 70, 100,000 people, and then after the dot-com era those all shrunk into smaller conferences, and now with the advent of companies like Amazon and Salesforce we have huge events again, like VMworld. We may move to an environment that's a lot more digital. I don't know what the future of in-person physical conferences is, but we, like others, are working with AWS in terms of their future with re:Invent, what Microsoft's doing with Ignite, what Google's doing with Next, what Salesforce is going to do with Dreamforce; all four of those companies are good partners of ours.
We'll study theirs, we'll work together as a community, the CMOs of all those companies, and we'll come together with something that's a very good digital experience for our customers; that's really what counts. Today I did a webinar with a partner. Typically when we did a briefing in our briefing center, 20 people came. There were 100 people attending this one. I got a lot more participation in this QBR that I did with this SI partner, one of the top SIs in the world, in an online session with them, than I would have gotten if they'd all come to Palo Alto. That's goodness. Should we take the best of that world and add some physical presence? Maybe in the future; we'll see how it goes. >> Content quality. You know content. Content quality drives everything online, and good engagement creates community; that's a nice flywheel. I think you guys will figure it out, you've got a lot of great minds there, and of course, theCUBE virtual will be helping out as we can, and we're rethinking things too-- >> We count on that, John-- >> We're going to be open-minded to new ideas, and, hey, whatever's the best content we can deliver, whether it's CUBE, or with you guys, or whoever, we're looking forward to it. Sanjay, thanks for spending the time on this CUBE Keynote coverage of AWS Summit. Since it's digital we can do longer programs and more diverse content. We've got great customer practitioners coming up, talking about their journeys and their innovation strategies. Sanjay Poonen, COO of VMware, thank you for taking your precious time out of your day today. >> Thank you, John, always a pleasure. >> Thank you. Okay, more CUBE, virtual CUBE, digital coverage of AWS Summit 2020. theCUBE.net is where we're streaming, and of course there are tons of videos on innovation, DevOps, and more: scaling cloud, scaling on-premise, hybrid cloud, and more. We've got great interviews coming up; stay with us for our all-day coverage. I'm John Furrier, thanks for watching. (upbeat music)
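Sanjay's passing reference to Open Policy Agent (OPA) above is easier to picture with a small example. The sketch below is a hypothetical Python client asking a locally running OPA server for an allow/deny decision on a container image; the policy package name, the input fields, and the registry allowlist are illustrative assumptions, not anything from VMware's or Carbon Black's products.

```python
# Minimal sketch: query a local Open Policy Agent server for a decision.
# Assumes OPA is running at localhost:8181 and that a Rego policy has been
# loaded under the (hypothetical) package path "imagepolicy".
import json
import urllib.request

def image_allowed(image: str, registry_allowlist: list) -> bool:
    # OPA's Data API: POST /v1/data/<package path> with an "input" document.
    payload = json.dumps({
        "input": {"image": image, "allowed_registries": registry_allowlist}
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:8181/v1/data/imagepolicy/allow",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OPA returns {"result": <decision>}; treat an undefined result as deny.
    return bool(body.get("result", False))

if __name__ == "__main__":
    ok = image_allowed("registry.example.com/team/app:1.2.3",
                       ["registry.example.com"])
    print("image allowed" if ok else "image denied")
```

The same request shape is what an admission webhook or a CI job would send; only the policy behind the package path changes.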

Published Date : May 13 2020

Hillery Hunter, IBM Cloud | IBM Think 2020


 

>> From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> Welcome back to our coverage of IBM Think 2020, the digital version of IBM Think. My name is Dave Vellante and you're watching theCUBE. Hillery Hunter is here. She's the Vice President and CTO of IBM Cloud and also an IBM Fellow. Hillery, thanks for coming on. Good to see you. >> Thanks so much for having me today. >> All right, let's get into it. We want to focus on security and compliance; it's obviously a key aspect and consideration for customers. But I have to start by asking you: there's this sort of age-old conflict between being secure and having the flexibility, agility, and speed that business people need. How does IBM Cloud square that circle? >> Yeah, it's really interesting, because cloud itself is designed to deliver agility and speed. That's everything from the release cadence to being able to consume things as APIs. And so when we say cloud and security, it's about the things that we implement as a cloud provider and the services that we stand up. All of that is API driven; all of it is intended to enable data protection through APIs, and to enable security monitoring through APIs and dashboards and other things like that. So actually, when delivered as cloud services, security functions can go even more quickly and can facilitate that speed and agility in and of themselves. It's really interesting that the means of delivering cloud capabilities can actually facilitate agility in the security area. >> Yeah, I think especially in these times with COVID-19, we're hearing companies say, hey, we're really going harder for the cloud, because the downturns have actually been pretty good for the cloud; I presume you're seeing the same thing. But think about the cost of a breach, which is millions of dollars on average, and think about the time it takes for an organization to identify when there's been an infiltration. Small companies like ours feel good that we can tap into cloud infrastructure. What are your thoughts on that whole notion of cloud essentially having better security, however you define better? >> You know, I actually agree with those statements, and I think it's played out in many of our client engagements. Because when you are talking about cloud and security, we have the opportunity to present a proactive approach, where we're saying: okay, leverage this type of technology to do your key management or data encryption. It is stood up by us already, fully as a service, and you consume it API driven. And so we are able to say that this will enable you to have end-to-end data encryption according to some standard, or key management where the keys remain in your hands, or use these other security services, so that there doesn't have to be as detailed a conversation as you often have to have in your own IT. You can say: okay, what's the objective we're trying to get to, what is the net security and compliance posture?
And we as a cloud provider can be proactive in telling you: hey, then use this combination of services, and use them in the following way, and that will enable you to reach those outcomes. So it's about moving past being fully self-service, where you have to configure hundreds and hundreds of things yourself. To me, being more prescriptive, proactive, goal-oriented, and outcome-oriented is an opportunity that we have in cloud, where we're the ones standing up the capabilities. So we really try to talk to clients about: what are you trying to accomplish? Are you concerned about control over your IT? Are you concerned about documentation for a particular regulatory compliance regime? What's the point? And then how does that relate to a conversation about data, compute, networking, et cetera, and what does that mean in terms of how you should use certain cloud capabilities? >> I want to follow up on that, Hillery, because I want to see if I can discern whether there's some difference in the way IBM approaches this. I've often said on theCUBE that bad user behavior trumps good security every time. And of course you've got multiple layers: you've got IBM securing its infrastructure and its cloud, you've got IT in whatever role is there, and you've got the end user. Somebody phishes the end user or an admin; okay, there are things you can do there, fine. But there's also that IT layer in the middle. You mentioned managed services as IBM's approach, somewhat different from other cloud suppliers. Maybe you could elaborate on that. >> Yeah. So we really look to protect the services that we're standing up, whether it's infrastructure services, networking, a container service, or other services that we're providing. We're looking to protect those down to the core of what each service is, how it works, and how it provides security, and then the technologies the service integrates into. So services seamlessly integrate into bring-your-own-key and our FIPS 140-2 Level 4 keep-your-own-key capability, et cetera. We take on those things for our clients, and in doing so we enable the client to understand, end to end, both what the status of the service itself is and how they use it, in order to take other security considerations into account. And I think it is a fundamentally different approach than the one you take for your own IT, where you're responsible end to end for everything. In this case we secure what we're doing, and then we enable, through things like our Security Advisor, configurations that govern developer behavior and ensure that, overall, between us and the client, the posture, even of what the developers are doing, is understood, can be monitored, and can be kept secure and compliant. >> Okay, so I just want to take an example of that. You are responsible for, let's say, securing the object store, but at the same time the client's IT organization sets policies that map to the edicts of their organization. So they've got flexibility; it's sort of a partnership. Am I understanding that correctly? >> Yeah, absolutely.
And the question is, for that IT organization that has set its policies, how do they translate their risk and security postures into concrete tools? We enable our clients to use everything from things that can be integrated into the DevSecOps pipeline, from Red Hat and from initiatives going on at the CNCF, NIST, and other places like that, through to the tools and dashboards that we deliver, like Security Advisor, so that they can most effectively implement the entirety of what constitutes security in a public cloud environment, with confidence. >> So security and compliance, or privacy, are sort of two sides of the same coin. I want to understand how IBM Cloud is approaching compliance. There's obviously GDPR, which kicked in in 2018 in terms of the fines, and the California Consumer Privacy Act; everybody sort of has their own little GDPR now, states and regions and countries, et cetera. How is IBM supporting clients in regard to compliance with such initiatives? >> You know, this is an area where, again, we are working to make it as easy as possible for our clients not only to see our status on certain compliance areas, which is visible through our compliance website, but also to achieve compliance where there is some joint or shared responsibility. So for example, in Europe with the European Banking Authority, we have an industry-unique position in enabling clients to achieve what is needed, and we provide proactive guidance on the European Banking Authority, PCI DSS, and other things like that. So we really are trying to take a very proactive approach to providing the guidance that clients need and meeting them in that journey overall. In addition, we have a specific program for financial services, where back in November we announced our partnership with Bank of America for financial services, with a very significant control set for compliance. It is not just a bundle of little existing things; it really is a tailored control set for the financial services industry, one that acknowledges the fact that achieving compliance in that space can be particularly challenging. So we are taking a very proactive approach to helping our clients across different sectors deal with those changing postures. And internally, as a cloud organization, we are also advised by IBM Promontory, which has extensive background across over 70 jurisdictions globally; they consistently and continuously monitor changes in all these postures, compliance rules, and the like, and help us design the right cloud moving forward. Because compliance, as you said, is very much a dynamic and changing landscape. >> You know, when you talk to chief information security officers and ask them what their biggest challenge is, they'll tell you it's the lack of skills, and so they're looking to automation to really help close that gap. And clearly cloud is sort of all about automation. So I wonder if you could talk a little bit about what you're seeing with regard to automation generally, but specifically how it's helping close that skills gap.
>> Yeah, the topic of automation is so interesting when it intersects security, because I really view this transition to cloud, and the use of cloud native and containers and such, as an opportunity yet again to improve security and compliance posture. Cloud, DevOps, and CI/CD pipelines, and all of that in a cloud-native, containerized build, give you the opportunity both to prevent a whole set of behaviors and to collect information that may become useful later on. I think cloud modernization, because of the automation it brings, is a really hot topic for both CISOs and risk officers right now, because it can not only improve the agility that was your original motivation for going to cloud, it can also improve visibility into what's going on with all your workloads. To know that a developer used a particular library, and then, when you see that there may be a concern about that library, to instantly know where across the entirety of your IT it has been deployed: that's a tremendous amount of knowledge. And you can either take immediate action on it or push out changes through automation, things like that. Internally, as a cloud provider, we use the best of SRE and automation practices to keep our estate patched, and that can also translate into people's own workloads, which I think is a really exciting opportunity of cloud. >> You know, we're out of time, but I want to close by asking what we should look out for. We had a great conversation earlier with Jamie Thomas about quantum and the ideas there. What should we look forward to in the coming months, and even years, in IBM Cloud? >> Yeah, we're really excited about the agility that cloud provides for us as a company. Like you said with quantum, it is the place where we can bring out the latest and greatest things for our clients to use and experiment with, and adapt their algorithms and such to. So you're going to continue to see us take a very aggressive posture in turning the latest in open source and other technologies into cloud-delivered, fully managed services: everything from what we've done already with Istio as a service, Knative as a service, quantum as a service, et cetera. You'll continue to see us take the approach that we want to be a fresh and vital environment for developers to consume the latest and greatest that's out there. But as an enterprise-focused company, very much focused on security and compliance, you'll also continue to see us back those things with our own efforts to secure our environment and then enable security on it. >> Well, Hillery, thanks so much for coming on theCUBE. It's always great to have experts like yourself share with our community. Appreciate it. >> Great, thank you so much for having me. >> And so we're seeing cloud acceleration as a result of COVID-19, but it's been a real wave for the last 10 years; we're just seeing it accelerate even faster. This is Dave Vellante for theCUBE. You're watching theCUBE's continuous coverage of IBM Think 2020, the digital Think. Keep it right there; we'll be right back after this short break.
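Hillery's description of key management delivered as a service, with the keys remaining in the customer's hands, rests on the envelope-encryption pattern. The sketch below illustrates that pattern in plain Python with the cryptography package; it is not IBM Key Protect's or Hyper Protect Crypto Services' actual API, and in a real service the root key would live inside a FIPS 140-2 certified HSM rather than in process memory.

```python
# Minimal sketch of the envelope-encryption pattern behind "bring your own key"
# style services: a root key wraps per-object data keys. Illustrative only.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In a managed service the root key lives in an HSM; here it is just local.
root_key = Fernet.generate_key()
root = Fernet(root_key)

def encrypt_object(plaintext: bytes) -> dict:
    data_key = Fernet.generate_key()           # per-object data key
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = root.encrypt(data_key)       # wrap the data key with the root key
    # Only the wrapped key and ciphertext are stored; the plain data key is discarded.
    return {"ciphertext": ciphertext, "wrapped_key": wrapped_key}

def decrypt_object(obj: dict) -> bytes:
    data_key = root.decrypt(obj["wrapped_key"])    # unwrap with the root key
    return Fernet(data_key).decrypt(obj["ciphertext"])

if __name__ == "__main__":
    stored = encrypt_object(b"customer record")
    print(decrypt_object(stored))   # b'customer record'
```

Revoking or rotating the root key is what gives the customer control: without it, the stored wrapped keys, and therefore the data, cannot be recovered.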

Published Date : May 5 2020

Ashesh Badani, Red Hat | Red Hat Summit 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit, happening digitally, interviewing practitioners, executives, and thought leaders from around the world. Happy to welcome back to our program one of our CUBE alumni, Ashesh Badani, who's the Senior Vice President of Cloud Platforms with Red Hat. Ashesh, thank you so much for joining us, and great to see you. >> Yeah, likewise, thanks for having me on, Stu. Good to see you again. >> All right, so, Ashesh, since the last time we had you on theCUBE a few things have changed. One of them is that IBM has now finished the acquisition of Red Hat, and I've heard from you for a really long time that OpenShift is anywhere and everywhere, but with the acquisition of Red Hat, that just means it only runs on IBM mainframes and IBM Cloud, and all things blue, correct? >> Well, that's true for sure, right? So, Stu, you and I have talked many, many times. As you know, we've been committed to hybrid multi-cloud from the very get-go. So OpenShift is supported to run on bare metal, on virtualization platforms, whether they come from us, or VMware, or Microsoft Hyper-V, on private clouds like OpenStack, as well as on AWS, Google Cloud, and Azure. Now, with the completion of the IBM acquisition of Red Hat, we obviously always partnered with IBM before, but given, if you will, a little bit of a closer relationship here, IBM's been very keen to make sure they promote OpenShift on all their platforms. So, as you can probably see, there's OpenShift on IBM Cloud, as well as OpenShift on Z on the mainframe, so regardless of how you like OpenShift, wherever you like OpenShift, you will get it. >> Yeah, so great clarification. It's not only on IBM, but of course, all of the IBM environments are supported, as you said, as well as AWS, Google, Azure, and the like. Yeah, I remember years ago, before IBM created their single, condensed conference of THINK, I attended the conference that would do Z, and Power, and Storage, and people would be like, you know, "What are they doing with that mainframe?" I'm like, "Well, you do know that it can run Linux." "Wait, it can run Linux?" I'm like, "Oh my god, Z's been able to run Linux for a really long time." So you want your latest container, Docker, OpenShift stuff on there? Yeah, that can sit on a mainframe. I've talked to some very large, global companies for which that is absolutely a part of their overall story. So, OpenShift-- >> Interesting you say that, because we already have customers who've been procuring OpenShift on the mainframe. So if you've made the investment in a mainframe and it's running machine learning applications for you, and you're looking to modernize some of the applications and services that run on top, OpenShift on the mainframe is now an available option, which customers are already taking advantage of. So, exactly to your point, we're seeing that in the market today. >> Yeah, and Ashesh, maybe it's good to dig in there, because you've got a great viewpoint as to customers deploying across all sorts of environments; you mentioned VMware environments and the public cloud environments. It was our premise a few years ago on theCUBE that Kubernetes gets baked into all the platforms, and absolutely, it's going to just be a layer underneath.
I actually think we won't be talking a lot about Kubernetes if you fast-forward a couple of years, just because it's in there; I'm using it in all of my environments. So what are you seeing from your customers? Where are we in that general adoption, and are there any specifics you can give us about kind of the breadth and the depth of what you're seeing from your customer base? >> Yeah, so you're exactly right. We're seeing that adoption continue on the path it's been on. We've now got over 1700 customers for OpenShift, running in all of the environments that you mentioned: public, private, a combination of the two, on traditional virtualization environments, as well as in public cloud at scale. In some cases that's managed by customers, in other cases it's managed by us on their behalf in a public cloud. So we're seeing every permutation, if you will, of that in play today. We're also seeing a huge variety of workloads, and to me that's actually really interesting and fascinating. In the earliest days, as you'd expect, people were playing with microservices, trying to build net-new services and run them, cloud native, what have you. Then we made sure we were supporting stateful applications. Now you're starting to see legacy applications move on, and we're ensuring that we can run them and support them at scale within the platform, because customers are looking to modernize applications. We can maybe talk in a few minutes also about the lift-and-shift work we see in play as well. But now we're also starting to see new workloads come on. Just most recently we announced some of the work that we're doing with a series of partners, from NVIDIA to emerging AI/ML, artificial intelligence and machine learning, frameworks and ISVs, looking to bring those to market and ensuring that they are supported and can run with OpenShift. Our partnership with NVIDIA, for example, ensures OpenShift is supported on GPU-based environments for specific workloads, whether they're performance sensitive or take advantage of the underlying hardware. So we're now starting to see a wide variety, if you will, of application types as well: the number of customers is increasing, the types of workloads coming on are increasing, and so is the diversity of underlying deployment environments where they're running all these services. >> Ashesh, that's such an important piece, and I'm so glad you talked about it there. Because, you know, my background is infrastructure, and we tend to look at things as "Oh well, I moved from a VM to a container, to cloud, or all these other things," but the only reason infrastructure exists is to run my application; it's my data and my application that are the most important things out there. So Ashesh, let me get into some of the news that you've got here. Your team works on a lot of things, and I believe one of them talks about some of those new ways that customers are building applications and how OpenShift fits into those environments. >> Yeah, absolutely. So look, we've been on this journey, as you know, for several years now. Recently we announced the GA of OpenShift Service Mesh, in support of Istio, as we see increasing interest from customers turning to microservices that will take advantage of those capabilities. At this event we're now also announcing the GA of OpenShift Serverless.
We're starting to see obviously a lot of interest there; we've seen the likes of AWS spawn that in the first instance, but more and more customers are interested in making sure they have a portable way to run serverless in any Kubernetes environment, taking advantage of open source projects as building blocks: primitives within Kubernetes that allow for serverless capabilities, allow for scale down to zero, and support serving and eventing, so that portable functions can run across those environments. So that's something that is important to us, and we're starting to see support for it in the marketplace. >> Yeah, I'd love to dig in; obviously I'm sure you've got lots of breakouts on OpenShift Serverless, but I've been talking to your team for a number of years, and with people it's like, "Oh, well, just as cloud killed everything before it, serverless obviates the need for everything else that we were going to use before." Underlying OpenShift Serverless, my understanding is Knative either is the solution, or a piece of the solution. Help us understand what serverless environments this ties into, and what it means for both your infrastructure team as well as your app dev team. >> Yeah, great question. So Knative is the basis of the serverless solution that we're introducing on OpenShift to the marketplace. The best way for me to talk about this is that there's no one size fits all: you're going to have specific applications or services that will take advantage of serverless capabilities, there will be others that will take advantage of running within OpenShift directly, and there'll be yet others, like the AI/ML frameworks we talked about, that will run with different characteristics, also within the platform. So now the platform is being built to support a diversity, a multitude, of different ways of interacting with it. I think maybe, Stu, you were starting to allude to this a little bit: we've got a great set of building blocks around compute, network, storage, a set of primitives that Kubernetes laid out, including the notions of clustering and being able to scale, and we'll talk a little bit about management of those clusters as well. And then it changes to, "What are the capabilities I now need to build to make sure that I'm most effective and most efficient with regard to the workloads I bring on?" You're probably hearing me say "workloads" several times now, because we're increasingly focused on adoption, adoption, adoption: how can we ensure that when these 1700-plus, and hopefully hundreds if not thousands more, customers come on, they can get the widest variety of applications onto this platform, so it can be a true abstraction over all the underlying physical resources they have, across every deployment they put out. >> All right, well Ashesh, I wish we could spend another hour talking about the serverless piece; I'm definitely going to make sure I check out some of the breakouts that cover the pieces we just talked about. But I know there's a lot more that the OpenShift update adds, so what other announcements and news do you have to cover for us? >> Yeah, so a couple of other things I want to make sure I highlight here. One is a capability called ACM, advanced cluster management, that we're introducing.
So this was experimental work that was happening with the IBM team, working on cluster management capabilities, and we'd been doing some of that work ourselves within Red Hat; as part of IBM and Red Hat coming together, we've had several folks from IBM actually join Red Hat. So we're now open sourcing and providing this cluster management capability. This is the notion of being able to run and manage these different clusters from OpenShift, at scale, across multiple environments: being able to check on cluster health, apply policy consistently, provide governance, ensure that appropriate applications are running in appropriate clusters, and so on; a series of capabilities to really allow multiple clusters to be run at scale and managed effectively. So that's one set of, go ahead, Stu. >> Yeah, if I could: when I hear about multicluster management, I think of some of the solutions I've heard talked about in the industry, like Azure Arc from Microsoft and Tanzu from VMware. When they talk about multicluster management, it is not only the Kubernetes solutions that they're offering, but also: how do I at least monitor, if not even have a little bit of control across, these environments? So when you talk about cluster management, is that all the OpenShift pieces, or do things like AKS, EKS, and other options out there fit into the overall management story? >> Yeah, that's absolutely our goal, but we've got to get started somewhere, right? So we obviously want to make sure we first bring to market the solution to manage OpenShift clusters at scale, and then of course, as you'd expect, multiple other Kubernetes clusters exist, like the ones you mentioned, from the cloud providers as well as from third parties, and we want the solution to manage those as well. But obviously we're going to take steps to get to the endpoint of this journey: so yes, we will get there, we've just got to get started somewhere. >> Yeah, and Ashesh, any guidance? When you look at some of the solutions I mentioned out there, when they start out it's "Here's the vision." So what guidance would you give to customers about where we are and how fast they can expect these things to mature? And I know anything that Red Hat does is going to be fully open source and everything, so what's your guidance as to what customers should be looking for? >> Yeah, so we're at an interesting point, I think, in this Kubernetes journey right now. When we started off, and Stu, you and I have been talking about this for at least five years if not longer, the notion was that we want to provide a platform that can be portable and run successfully in multiple deployment environments, and we've done that over these years. But all the while we were doing that, we were always thinking about which capabilities are needed that are perhaps not developed upstream yet, but will be over time, and how we can look ahead and bring those into the platform. And for a really long time, and I think we still do, we at Red Hat take a lot of stick for that, people saying, "Hey look, you fork the platform." Our answer back to that has always been: look, we're trying to help solve problems that we believe enterprise customers have, we want to ensure that the solutions are available open source, and we always want to upstream those capabilities back into the community.
But, let's say, making a platform available without RBAC, role-based access control: well, it's going to be hard for enterprises to adopt that, so we've got to make sure we introduce that capability and then make sure it's supported upstream as well. And there's a series of capabilities and features like that that we work through. We've always provided an abstraction within OpenShift to make it more productive for developers and administrators to use, and we also always support working with kubectl, the command line interface from Kubernetes, as well. We always hear back from folks saying, "Well, you've got your own abstraction; doesn't that make it incompatible?" Nope: you can use both kubectl or oc commands, whichever one is better for you, have at it; we're just trying to make you more productive. And increasingly what we're seeing in the marketplace is this notion that we've got to work our way up from not just laying out a Kubernetes distribution, but thinking about the additional capabilities and services we can provide that are more valuable to customers. And I think, Stu, you were making the point earlier: increasingly, the more popular and the more successful Kubernetes becomes, the less you will see and hear of it, which by the way is exactly the way it should be, because it then becomes the basis of your underlying infrastructure. You are confident that you've got a rock-solid bottom, and now you as a customer, you as a user, are focusing all of your energy and time on building productive applications and services on top. >> Yeah, great points there, Ashesh. The vision people always talked about is, "If I'm leveraging cloud services, I shouldn't have to worry about what version they're running." Well, when it comes to Kubernetes, ultimately we should be able to get there, but I know there's always a little bit of a delta between the latest and newest version of Kubernetes that comes out and what the managed services, and not only managed services but what customers are doing in their own environments, are running. My understanding is that even Google, which is where Kubernetes came out of, if you're looking at GKE, GKE is not on the latest, what are we on, 1.19, from the community. So Ashesh, what's Red Hat's position on this? What version are you up to, and how do you think customers should think about managing across those environments? Because, boy, I've got too many scars from interoperability history; go back 10 or 15 years and it was, "Oh, my server BIOS doesn't work on that latest kernel.org version of what we're doing for Linux." Red Hat is probably better prepared than any company in the industry to deal with that massive change happening from a code-base standpoint; I've heard you give presentations on the history of Linux and Kubernetes and what's going forward. So when it comes to the releases of Kubernetes, where are you with OpenShift, and how should people be thinking about upgrading between versions? >> Yeah, another excellent point, Stu; you've clearly been following us pretty closely over the years. Where we came at this from is that we actually learned quite a bit from our experience as a company with OpenStack. What would happen with OpenStack is that you'd have customers on a certain version of OpenStack, and they kept saying, "Hey look, we want to consume close to trunk, we want new features, we want to go faster."
And we'd obviously spend some time going from the release in the community to actually shipping our distribution into customers' hands; there's some amount of time for testing and QE to happen, and some integration points that need to be certified, before we make it available. We often found that customers lagged: there'd be, let's say, a small subset within every customer, or a few customers, who want to be consuming close to trunk, but the majority actually want stability. Especially as time wore on, they were more interested in stability. And you can understand that, because once you've got mission-critical applications running on it, you don't necessarily want to go and put those at risk. So the challenge we addressed when we started shipping OpenShift 4 last summer, about a year ago, was: how can we give you a way to help upgrade your clusters essentially remotely, so you can upgrade your clusters, or at least consume updates, at different speeds? What we introduced with OpenShift 4 is the ability to give you over-the-air updates. The best way to think about it is with regard to a phone. You have your phone, a new OS upgrade shows up, you get a notification, and you turn it on and say "Hey, pull it down," or you schedule it for a certain time, or you delay it and do it at a different point in time. That same notion now exists within OpenShift. Which is to say, we provide you three channels: there's a stable channel where you say, "Hey look, maybe for this cluster in production there's no rush; I'll stay at this version or even a little behind"; there's a fast channel for "Hey, I want the latest and greatest"; and there's a third channel which allows features that are still being developed, or are in an early stage of development, to be pushed out to you. So now you can start consuming these upgrades based on, "Hey, I've got a dev team; on day one I take these quickly," or "I've got these applications that are stable in production; no rush here," and then you can start managing that better yourself. So those are capabilities we're introducing into a standard Kubernetes platform, but adding additional value, so that it can be managed in a much better fashion that serves the different needs of different parts of an organization and allows them to move at different speeds, but at the same time gives you that same consistent platform regardless of where you are. >> All right, so Ashesh, we started out the conversation talking about OpenShift anywhere and everywhere: in the cloud, sitting on top of VMware, where VM farms are very prevalent in the data centers, or on bare metal. I believe, from what I saw, one of the updates for OpenShift is how Red Hat Virtualization is working with OpenShift there, and a lot of people out there are kind of staring at what VMware did with vSphere 7. So maybe you can set it up with a little bit of a compare and contrast as to how Red Hat is doing this rollout versus what you're seeing your partner VMware doing, or how Kubernetes fits into the virtualization environment.
>> Yeah, I feel like we're both approaching it from different perspectives and with a different lens. If I can characterize it, the VMware perspective is likely, "Hey look, there are all these installations of vSphere in the marketplace; how can we make sure we help bring containers there?" And they've come up with a solution that, you can argue, is quite complicated in the way they're achieving it. Our approach is a different one. We always looked at this problem, from the get-go, with containers as a new paradigm shift. It's not necessarily a revolution, because most companies we're looking at are working with existing applications and services, but it's an evolution in the way you're thinking about the world, and this is definitely the long-term future. And so, how can we introduce this environment, this application platform, into the existing environment, and then be able to build new applications in it, but also bring existing applications into the fold? So with this release of OpenShift, what we're introducing is something we're calling OpenShift Virtualization, which is: if you have existing applications, certain VMs, how can we ensure we bring those VMs into the platform? They've been certified, there are data security boundaries around them, or certain constraints or requirements have been put around them by your internal organization, and we can keep all of those, but still encapsulate that VM as a container and have it run natively within an environment orchestrated by OpenShift, with Kubernetes as the primary orchestrator of those VMs, just like it is for everything else that's cloud native or running directly as containers. We think that's extremely powerful, for us to really bring the promise of Kubernetes into a much wider market. I talked about 1700 customers; you can argue that 1700 is the early majority, or almost just scratching the surface, of the numbers we believe will adopt this platform. To get to, if you will, the next set of, whatever, five, ten, twenty thousand customers, we'll have to make sure we meet them where they are. So introducing this notion of saying we can help migrate these VM-based applications, with a series of tools we're providing, and then have them run within Kubernetes in a consistent fashion, is going to be extremely powerful, and we're really excited about those capabilities and about bringing them to our customers. >> Well, Ashesh, I think that puts a great exclamation point on how we go from these early days off to the vast majority of environments. Ashesh, one more thing: congratulations to you and the team on the growth, the momentum, and all the customer stories. I'd love the opportunity to talk to many of the Red Hat customers about their digital transformations and how your cloud platforms have been a piece of them. So once again, always a pleasure to catch up with you. >> Likewise, thanks a lot, Stu, good chatting with you, and I hope to see you in person again soon. >> Absolutely, we at theCUBE of course hope to see you at events later in 2020. For the time being, we are of course fully digital, always online; check out theCUBE.net for all of the archives as well as the events, including all the digital ones that we are doing. I'm Stu Miniman, and as always, thanks for watching theCUBE. (calm music)
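Ashesh's stable, fast, and early-development channels map onto a concrete object in OpenShift 4: the cluster-scoped ClusterVersion resource, whose spec.channel field records which update stream a cluster follows. The sketch below reads and switches that field with the Python Kubernetes client; the channel name "fast-4.4" is an illustrative assumption (names vary by release), and in practice admins would normally use the oc CLI or the web console rather than a script like this.

```python
# Minimal sketch: inspect and switch the update channel on an OpenShift 4
# cluster by editing the ClusterVersion resource named "version".
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()                      # assumes a working kubeconfig
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL, NAME = "config.openshift.io", "v1", "clusterversions", "version"

cv = api.get_cluster_custom_object(GROUP, VERSION, PLURAL, NAME)
print("current channel:", cv["spec"].get("channel"))
print("current version:", cv["status"]["desired"]["version"])

# Move this cluster from its current channel to the fast channel (hypothetical target).
patch = {"spec": {"channel": "fast-4.4"}}
api.patch_cluster_custom_object(GROUP, VERSION, PLURAL, NAME, patch)
print("channel updated; the cluster-version operator will surface new updates")
```

Switching the channel does not itself upgrade anything; it only changes which updates the cluster-version operator offers, which is what lets dev and production clusters move at different speeds.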

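The cluster-health side of Advanced Cluster Management that Ashesh describes can also be pictured with a much simpler, hand-rolled sketch: loop over every context in a kubeconfig and report node readiness per cluster. This only illustrates the kind of fleet-wide visibility ACM automates; it does not use ACM's own APIs.

```python
# Minimal sketch of a fleet-level health check across every kubeconfig context.
# Requires: pip install kubernetes
from kubernetes import client, config

contexts, _ = config.list_kube_config_contexts()

for ctx in contexts:
    name = ctx["name"]
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
    try:
        nodes = api.list_node().items
    except Exception as exc:                 # cluster unreachable, expired creds, ...
        print(f"{name}: UNREACHABLE ({exc})")
        continue
    ready = sum(
        1 for n in nodes
        for c in n.status.conditions
        if c.type == "Ready" and c.status == "True"
    )
    print(f"{name}: {ready}/{len(nodes)} nodes Ready")
```

ACM layers policy, governance, and placement on top of this kind of raw visibility, which a script like the one above cannot provide.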
Published Date : Apr 1 2020

Nicholas Klick, GitLab | GitLab Commit 2020


 

>> Presenter: From San Francisco, it's theCUBE. Covering GitLab Commit 2020. Brought to you by GitLab. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of GitLab Commit 2020 here in San Francisco. You might notice some of our guests have jackets on; it is a little cooler than normal here in San Francisco, but the community and knowledge are keeping us all warm. Joining us for the first time on the program is Nicholas Klick, who is an engineering manager at GitLab. Thanks so much for joining us. >> Thanks for inviting me. >> Alright, so you had an interesting topic: "The State of Serverless in 2020" was the session that you gave. It's definitely a topic we love covering on theCUBE, something I personally have been digging into and trying to understand, and definitely something that the developers, and especially the app devs, that I speak with are very bullish on. So what is the state of serverless in 2020? >> That's actually a good question. My talk was broken into two parts. The first was that I wanted to help provide a clear definition of what serverless is. In my opinion, serverless is more than just functions. There are a lot of other technologies, like backend as a service, API gateways, and service integration proxies, that you can stitch together to create dynamic applications. So I laid out a more expanded definition of what serverless is from my perspective. The other part was to talk about three things that I'm finding exciting right now in the serverless space. The first was Knative, and the fact that Knative is likely going to go GA pretty soon, so it'll be production ready and we can finally build production workloads on it. The second is that running serverless at the edge is, I find, an exciting topic. And then finally, talking more in depth about the service integrations: how you can actually create applications that don't include functions at all, so functionless serverless. >> Yeah, there are a lot of things I definitely want to tease out of that, but Nicholas, I guess maybe we should step back a second-- >> Nicholas: Okay. >> And was there survey work, or was there something done, or is this something related to your job that you put together as just an important topic? >> Yeah, no, this is just me speaking as someone that works in the space and sees how the technology is evolving; just my opinions, I guess. >> Okay, when I talk to the practitioners, when they say, "Oh, we're interested in it," chances are they're doing stuff on Amazon; that tends to be the first piece of it. There are lots of open source projects out there, but it's still kind of dominated by Amazon. Azure has some pieces, of course, and Google has things they're doing. I liked how you teased out that serverless definitely isn't just one thing, and the definition, and even the term itself, gets people all riled up, so I hate getting into the ontological arguments. But the promise of it is that I can build applications in a different way, and I shouldn't have to think about some of the underlying components, hence the name serverless, kind of-- >> Right. >> does that, but it definitely is a change in mindset as to how I build and consume environments. >> Right. And another point that I made in the talk, and that I believe pretty strongly, is that serverless is not something that's going to replace monoliths and microservices.
I believe it's another tool in the tool belt of the developer and the operator to solve problems, and we should look at it like that. It's not the next progression in application architecture. >> Yeah, I've met some companies that are 100% serverless, that have built everything on it, but that's like saying I've met plenty of companies that are all in on the cloud. It depends on what you do and what your business is. >> Nicholas: Right. >> When we look at the enterprise, it is a broad spectrum, and making changes along that path is something that typically takes a decade or more; they have hundreds, if not thousands, of applications, and therefore we understand: I've got my stuff running on my mainframe all the way through my latest microservice architecture, and everything in between. >> Right, and I mean, I'm speaking as an employee of GitLab, and we have a very well known monolith that we deploy, so in my opinion, I don't believe monoliths are going to die any time soon. >> Alright, I'd love you to tease out some of those pieces you talked about, the three items, starting with Knative. You know, Knative is interesting. The thing I've poked at when I go to KubeCon and CloudNativeCon is that, as I mentioned, when I think about customers, most of them are using Amazon. The second choice is they're probably doing Azure, and today Knative doesn't directly work with EKS, AKS, or the like. I know there's a solution like TriggerMesh that actually will interact-- >> Right. >> between Amazon and there, but don't you need the buy-in of Amazon and Microsoft for Knative to be taken seriously? And the other thing is, Google still hasn't opened up the-- >> Right. >> the Google controls, the governance of both Istio and Knative, and there are some concerns in the ecosystem about that, so what makes you so bullish on Knative? >> Yeah, so I'm definitely aware of some of the discussions around Knative. From my perspective, if someone is already operating a lot of Kubernetes infrastructure, if they already have that infrastructure running, then deploying Knative onto it is not that much more work; it doesn't require a lot of additional resources and expense. So it could be, again, that it depends on the use case; when I think about serverless, I try to remain pragmatic. If I'm already using Kubernetes and I want a simple serverless runtime, Knative would be a great option in that situation. And if I want to be able to work cross-cloud, that's another opportunity Knative provides: the ability to deploy to any Kubernetes cluster anywhere, so there's not a vendor lock-in issue with Knative. >> Yeah, and absolutely there was initially some concern that serverless could actually be the ultimate lock-in: >> Right. >> I'm going to go deep on one provider and not have a way out. There are open source groups like the CNCF trying to help along those lines-- >> Sure. >> and Knative is absolutely along those lines, looking at that environment. From a GitLab customer's standpoint, GitLab's not tied to whether you're doing containers or serverless or VMs, or whatever the environment. What does it mean for GitLab customers? If I want to look at serverless, how does that fit into my overall workflow? >> Yeah, so initially at GitLab we focused on providing the ability to deploy to Knative.
That was, we were very early in the Knative space, and I think that as it's matured, as those APIs have matured, then our product has kind of developed, and so right now we enable you to be able to create Kubernetes clusters through our interface and then deploy your function runtimes directly from your GitLab repo. We're also kind of growing our examples and documentation of how to integrate GitLab CI/CD with Lambda. That's another big area that we're moving into as well. >> Great. As you look forward to 2020, we've got a whole new decade in front of us, what do you think people should be watching in the maturity of this space? >> Yeah, so I think that the point that I touched on earlier of the service integrations, I think that that is something you're going to see more and more of. Of the providers themselves linking together their different services and enabling you to create these dynamic applications without a lot of glue that you have to manually create in between. I think that we're going to see, you know, more open source frameworks, like, for example, the Serverless Framework or Terraform. I mean, I know that a lot of people use, for example, AWS SAM. People want easier ways, and faster ways, to be able to deploy their serverless, so you have the bootstrapping of serverless. I guess, another thing that I expect is that the serverless, the serverless development life cycle will mature, whether it's going from bootstrapping to testing, deployment, monitoring, security, I believe you're going to see companies that will start to really fill in that entire space, the same way that they do for monoliths and microservices. >> Yeah, absolutely. Thank you so much, Nicholas. Definitely something we've been tracking over the last year or so. You start to see many in the tool chain of cloud native environments digging into serverless, helping to mature those solutions, and definitely an area to watch closely. >> Great. >> Alright. Lots more coverage. Check out theCUBE.net for all the events that we will be at through 2020 as well. You can go back and see we've actually done Serverlessconf a couple of years, many of the other cloud and cloud native shows. Search in our index. I'm Stu Miniman, and thank you for watching theCUBE. (energetic electronic music)
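Klick's description of deploying function runtimes to Knative straight from a GitLab repo comes down, under the hood, to creating a Knative Service object in the cluster. A minimal sketch using the official Kubernetes Python client is below; it assumes a cluster that already has Knative Serving installed and a local kubeconfig, and the service name, namespace, and sample image are placeholders rather than anything specific to GitLab's integration.

from kubernetes import client, config

# Assumes Knative Serving is installed in the target cluster and a kubeconfig
# is available locally. The name, namespace, and image are illustrative only.
config.load_kube_config()
api = client.CustomObjectsApi()

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello-fn"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"image": "gcr.io/knative-samples/helloworld-go"}
                ]
            }
        }
    },
}

# Knative Services are custom resources, so they go through CustomObjectsApi
# rather than the core Deployment or Service APIs.
api.create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)

A CI job that builds an image and then applies an object like this on the developer's behalf is essentially the flow being described: push code, and the pipeline produces the running Knative Service.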

Published Date : Jan 14 2020


Rob Esker & Matt Baldwin, NetApp | KubeCon + CloudNativeCon NA 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE! Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back, this is theCUBE's fourth year of coverage at KubeCon CloudNativeCon, we're here in San Diego, it's 2019, I'm Stu Miniman, my cohost for this afternoon is Justin Warren, and happy to welcome two guests from the newly minted platinum member of the CNCF, NetApp, sitting to my right is Matt Baldwin, who is the director of cloud native and Kubernetes engineering, and sitting to his right is Rob Esker, who does product and strategy for Kubernetes, and is also a board member on the CNCF, thank you both for joining us. >> Thank you. >> Thanks for having us. >> All right, so Matt, maybe start with you, NetApp, a company we know, I've got plenty of history with NetApp there, what I've been hearing from NetApp for the last few years is, the core of NetApp has always been software, and it is a multicloud world. I've been hearing this message since before the cloud native and Kubernetes piece was going. Of course there's been some acquisitions, and NetApp continuing to go through its transformations, if you will. So help us understand NetApp's positioning in this ecosystem. >> In Kubernetes? >> Yes. >> Okay, so, what we're doing is, we're building a product that allows you to manage cloud-native workloads on top of Kubernetes, so we've solved the infrastructure problem, and that's kind of the old problem, we're bored to death talking about that problem, but what we try to do is try to provide a single pane of glass to manage on-premise workloads and off-premise workloads, and so that's what we're trying to do, we're trying to say, it's now more about the app taxonomy in Kubernetes, and then what type of tooling do you build to manage that application in Kubernetes, and so that's what we're building right now, that's where we're headed with the hybrid multicloud. >> There's a piece of it, though, that does draw from the historical strengths of NetApp, of course. So we're building, we are essentially already in market a capability that allows you to deploy Kubernetes, in an agnostic way, using pure open unmodified Kubernetes, on all of the major public clouds, but also on-prem. But over time, and some of this is already evident, you'll see it married to the storage and data management capabilities that we draw from the historical NetApp, and that we're starting to deploy into those public clouds. >> With the idea that you should be able to take a project, so a project being in a namespace, namespace having an application in it, so you have multiple deployments, I should be able to protect that namespace, or that project, I should be able to move that, and that data goes with it, so that we're very data-aware, that's what we're trying to do with our software is, make it very data-aware and have that align with apps inside of Kubernetes. >> Yeah, so Rob, maybe step back for a second, one of the things we've heard a few times at this show before, and it was talked about in the keynote this morning, is that it is project over company when it comes to the CNCF.
NetApp moving up to a platinum sponsor level, participated here, NetApp's got lots of history in participating and driving standards, helping move where the industry's going, where does NetApp see its position in participating in the foundation and participating in this ecosystem? >> Yeah, so great question, and actually, I love it, it's one of my favorite topics, so, I think the way we look at it is, oftentimes projects, to the extent they become ubiquitous, define a standard, a de facto standard, so not necessarily ratified by some standards body, and so we're very interested in making sure that in the scenario where you want to employ this standard, from a technology integration perspective, our capabilities can operate as an implementation behind the standard. So you get the distinguishing qualities of our capabilities, our products and our services, vis-a-vis, or in the context of the standard, but we're not trying to take you down a walled garden path in a proprietary journey, if you will. We would rather compel you to work with us on the basis of the value, not necessarily operating off a proprietary set of interfaces. So Kubernetes, we broadly perceive it as a de facto standard at this point, there's still some work to be done on rounding out the edges, a lot of it underway this week, it's definitely the case that there's an appeal to making this more offerable by, pardon the expression, mere mortals, and we think we can offer some help in that respect as well. >> Yeah, where is its usability? I mean, that's the reason I started StackPointCloud, was that there was a usability problem with Kubernetes. I had a usability problem with Kubernetes. That's what we're trying, that's how I'm looking at the landscape, and I look at all the projects inside of the CNCF, and I look at my role is, our role is to, how do we tie these together, how do we make these so they're very very usable to the users, and how we're engaging with the community is to try to align this, basically pure upstream projects, and create a usability layer on top of that. But we're not going to, we don't want to ever say we're going to fork any of these projects, but we're going to contribute back into these projects. >> So that's one concern that I have heard from some customers, and speaking of which, I talked to some of them yesterday, one of the concerns they had was that, when you add that manageability onto the base Kubernetes layer, that often, various vendors become rather opinionated about which way we think this is a good way to do that, and when you're trying to maintain that compatibility across the ecosystem, so some customers say, "Well, I actually don't want to have to be too closely welded to any one vendor, 'cause part of the benefit of Kubernetes is I can move my workloads around." So how do you navigate what is the right level of opinion to have, and which part should actually just be part of a common standard? >> I think it needs to be along the lines of best practices, is how we do it. So, let's take network policy, for example, applying a sane, default network policy to every namespace. Defining a sane, default pod security policy, building a cluster in a best practices fashion, with security turned on, hardening done, where you would've done this already as a user, so we're not locking you in in any way there.
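To make the "sane default network policy for every namespace" idea concrete, here is a minimal sketch using the official Kubernetes Python client: a default deny-all-ingress policy that a platform layer could stamp into each new namespace. The namespace and policy names are placeholders, and this illustrates the general pattern, not NetApp's actual implementation.

from kubernetes import client, config

# Default-deny ingress policy of the kind a platform layer might apply to
# every new namespace. "team-a" and the policy name are placeholders.
config.load_kube_config()
net = client.NetworkingV1Api()

default_deny = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector matches every pod in the namespace
        policy_types=["Ingress"],               # no ingress rules listed, so all inbound traffic is denied
    ),
)

net.create_namespaced_network_policy(namespace="team-a", body=default_deny)

Teams then open up only the traffic they need with additional, more specific policies, which is the guardrails-without-lock-in balance being described.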
So that's, we're not trying, I'm not trying to curate any type of opinion of the product, what we're trying to do is harmonize your experience across all this ecosystem, so that you don't ever have to think about, "I'm building a cluster on top of Amazon, so I got to worry about how do I manage this on Amazon." I don't want you to have to think about those providers anymore. And then on top of those, on top of that infrastructure, I want to have a way that you're thinking about managing the applications on those environments in the exact same way, so I'm scaling, or I'm protecting an application on-premise, in the identical way I'm doing it in the cloud. >> So if it's the same everywhere, what's the value that you're providing that means that I should choose your option rather than something else? >> So, we do have, this is where we have controllers that live inside of the clusters, that manage this stuff for the users. So, you could rebuild what we're doing, but you would have to roll it all by hand. But you could, we don't stand in the way of your operations either, so if we go down, you don't go down, type of idea. But we do have controllers, we're using CRDs, and so our app management technology, our controllers are just watching for a workload to come into the environment, and then we show that in the interface, but you can just walk away as well, if you wanted to. >> There's also a constellation of other services that we're building around this experience, that do draw, again, from some of the storage and data management capabilities, so StatefulSets, your traditional workloads that want to interact with or transact data against a block or a shared file system. We're providing capabilities for sophisticated qualities of persistence that can exist in all of those same public clouds, but moreover, over time, we're going to be, and on-premise as well, we're going to be able to actually move, migrate, place, cache, per policy, your persistent data, with your workloads, as you move, migrate, scale, burst, whatever the model is, as you move across and between clouds. >> How far down that pathway do you think we are, 'cause one criticism of Kubernetes is that a lot of the tooling that we're used to from more traditional ways of operating this kind of infrastructure, isn't really there yet, hence the question about, we actually need to make this easier to use. How far down that pathway are we? >> I'd argue that the tooling that I've built has already solved some of those problems. So I think we're pretty far down the path. Now, what we haven't done is open sourced all of my tooling, right, to make it easier on everybody else. >> Rob, NetApp's got strong partnerships across the cloud platforms, I had a chance to interview George at the Google Cloud event, I know you were partner of the year, I believe, on some of this stuff, help us understand how some of the things Matt and the team are building interact with the public clouds, you look at Anthos, and Azure Arc, and of course Amazon has many different ways you can do your container and management piece there. Talk a little bit about that relationship and how, both with those partners and then across those partners, work. >> Yeah, it's, how much time do we have, so there's certainly a lot of facets to that, but drawing from the Google experience, we just announced the general availability of Cloud Volumes ONTAP, so the ability to stand up and manage your own ONTAP instance in Google's cloud.
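Earlier in that answer, Baldwin mentions controllers that simply watch for workloads arriving in the cluster and surface them in the interface. The watch pattern itself is standard Kubernetes machinery; a purely illustrative sketch with the official Python client looks like the following (a real controller would reconcile desired state rather than print):

from kubernetes import client, config, watch

# Watch Deployments across all namespaces and react as they appear or change.
config.load_kube_config()
apps = client.AppsV1Api()

w = watch.Watch()
for event in w.stream(apps.list_deployment_for_all_namespaces):
    deployment = event["object"]
    print(event["type"], deployment.metadata.namespace, deployment.metadata.name)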
Likewise, we announced the general availability of the Cloud Volumes Service, which gives you the managed, push-button, as-a-service experience of a shared file system on demand, at Google, I believe it was either today or yesterday, in London, I guess maybe I'll blame that on the time zone conversion, not knowing what day it was, but the point is, that's now generally available. Some of those capabilities are going to be able to be connected to our ability, from NKS, to deploy an on-demand Kubernetes cluster, and deploy applications from a marketplace experience, in a common way, not just with Google but Azure, with Amazon, and so frankly the story does differ a little bit from one cloud to the next, but the endeavor is to provide common capabilities across all of them. It's also the case that we do have people that are very opinionated about, I want to live only in the Google or the Microsoft or the Amazon ecosystem, we're trying to deliver a rich experience for those folks as well, even if you don't value the agnostic multicloud experience. >> Yeah, and Matt, I'm sure you have a viewpoint on this, but it's that skillset that's really challenging. I was at the Microsoft show, and you've got people, it's not just about .NET, they're embracing and open to all of these environments, but people tend to have the environments that they're used to, and for multicloud to be a reality, it needs to be a little bit easier for me to go between them, but it's still, we're making progress but there's work to do. >> Matt: Yeah, what's the question? >> Yeah, so, I know you're building tools and everything, but what more do we need to do, where are some of the areas that you're hopeful for, but where are the areas that we need to go further? >> So for me it's coming down to the data side. I need to be able to say that, when I turn on data services, inside of Kubernetes, I need to be able to have that workload go anywhere, because as a developer, I'm running in production, I'm running in Amazon, but maybe I'm doing tests locally on my bare metal environments, right, I want to be able to maybe sync down some of my data that I'm working with in production down to my test environment. That stuff's missing, there's no one doing that right now, and that's where we're headed, that's the path, that's where we're headed. >> Yeah, I'm glad you brought that up, actually, 'cause one of the things that I feel like I heard a little bit last year but it is highlighted more this year, is we're talking a little bit more to the application developers because, Kubernetes is a piece of the infrastructure, but it's about-- >> It's the kernel. >> Yeah, it's the kernel there, so, how do we make sure we're spanning between what the app developer needs and still making sure that infrastructure is taken care of, because storage and networking are still hard. >> It is, yeah, I mean I'm approaching, I'm thinking more along the lines of, I'm trying to think more about app developers, personally, than infrastructure at this point. For me, so I can give you a cluster in three minutes, right, so I don't really have to worry about that problem. We also put Istio on top of the clusters, so it's like we're trying to create this whole narrative that you can manage that environment on day one, day two type operations.
But, and that's for an IT manager, right, so inside of our product, how I'm addressing this is you have personas, and so you have this concept, you have an IT manager, they can do these things, they can set limits, but for the developer, who's building the applications or the services and pushing those up into the environment, they need to have a sense of freedom, and so on that side of the house, I'm trying not to break them out of their tooling, so part of our product ties into Git, so we have CD, so you just do a git push, git commit to a branch, and we can target multiple clusters. But at no point did the developer actually draft YAML, or anything, we basically create the container for you, create the deployment, bring it online, and I feel like there's these lines, and the IT guys need to be able to say, "I need to create the guardrails for the devs, but I don't want to make it seem like I'm creating guardrails for the devs, 'cause the devs don't like that." So that's how I'm balancing it. >> Okay, 'cause that has always been the tension, in that there's a lot of talk about DevOps, but you go and talk to application developers, and they don't want to have anything to do with infrastructure, they just want to program to an API and get things done, they would like this infrastructure to be seamless. >> Yeah, and what we do, also what I'm giving them is service dashboards, because as a developer, you know, because now you're in charge of your QA, you're writing your tests, you're pushing it through CI, it's going to CD. You own your service in production, right? And so we're delivering dashboards as well for services that the developers are running, so they can dig in and say, "Oh, here's an issue," or "Here's where the issue's probably going to be at, I'm going to go fix this." And we're trying to create that type of scenario for a developer, and for an IT manager. >> Slightly different angle on it, if I'm understanding the question correctly, part of the complexity of infrastructure is something we're also trying to provide a deterministic sort of easy button capability for, perhaps you're familiar with NetApp's HCI product, which we kind of expand as hybrid cloud infrastructure. If the intention is to make it a simple, private cloud capability, and indeed, our NetApp Kubernetes service operates directly off of it, it's a big part of actually how we deliver cloud services from it. So the point is that, if you're that application developer, if you want, effectively, NKS on-prem, the endeavor with our NetApp HCI product is to give you that sort of easy button experience, because you didn't really want to be a storage admin or a network admin, you didn't want to get into the, be mired in the details of infra, so that's obviously work in progress, but we think we're definitely headed down the right direction. >> It does seem that a lot of enterprises want to have the cloudlike experience, but they want to be able to bring it home, we're seeing that a lot more. >> Yeah, so this turnkey on-premise, turnkey cloud on-premise, and with NKS we can do the same auto-scaling, so take the dynamic nature of Kubernetes, so I have a base cluster size of say four worker nodes, right, but my workload's going to maybe need to have more nodes, so my auto-scaler's going to increase the size of my cluster and decrease the size, right? Pretty much everybody can only do that in the public cloud. I can do that in public cloud and on-premise, now.
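The "git push and we create the deployment for you" flow Baldwin describes earlier in this exchange amounts to generating the Kubernetes objects on the developer's behalf. As a hedged sketch of that idea, not GitLab's or NetApp's actual tooling, a pipeline step might call something like the helper below; the function name, registry, and defaults are invented for illustration.

from kubernetes import client, config

def deploy_from_image(name: str, image: str, namespace: str = "default") -> None:
    """Create a minimal Deployment so the developer never hand-writes YAML."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    labels = {"app": name}
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)

# A CI job would call this with the image it just built, typically tagged with
# the commit SHA; the registry and tag here are placeholders.
deploy_from_image("hello-svc", "registry.example.com/hello-svc:abc1234")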
And so that's what we're trying to deliver, and that's pretty cool stuff, I think. >> Well there's a lot of advantages to enterprises operating in that way, because people out here, I can go and buy them or hire them, and say "Hey, we need you to operate this gear," and you've already done it elsewhere, you can do it in cloud, you can do it on-site, I can now run my operations the same across, no matter where my applications live, which saves me a lot of money on training costs, on development costs, and generally it makes for a much more smooth and seamless experience. >> So Rob, if you could, just love your takeaway on NetApp's participation here at the event, and what you want people to take away from the show this year. >> So it's certainly the case that we're doing a lot of great work, we like people to become aware of it. NetApp of course is not, I think we talked about this in perhaps other contexts, not strictly a storage and data management company only. We do draw from the strengths of that as we're providing full stack capabilities, in a way that are interconnected with public cloud, things like our NetApp Kubernetes service as really the foundational glue in many ways, to how we deliver the application runtime, but over time we'll build a constellation of data-centric capabilities around that as well. >> Matt, I would just love to get your viewpoint as someone that built a company in this ecosystem, there's so many startups here, give us kind of that founder viewpoint of being in this sort of ecosystem. >> Of the ecosystem... So this is, I came into the ecosystem at the beginning. I would have to say that it does feel different at this point, I'm going to speak as Matt, not as NetApp. And so my thinking has always been it feels a lot like, you're a big fan of that rock band, right, and you go to a local club, and we all get to know each other at that local club, and there's maybe 500 of us or 1000 of us, and then that band gets signed to Warner Brothers, and goes to the top, and now there's 20,000 people or 12,000 people. That's how it feels to me right now. I think, but what I like about it is that, it just shows the power of the community is now at a point where it's drawing in cities now, not just a small collection of a tribe of people. And I think that's a very powerful thing with this community, and like all the, what are they called, the Kubernetes Summits that they're doing, we didn't have any of those back when we first got going, I mean it was tough to fill the room, and now we can fill the room, and it's amazing, and what I like seeing is people moving past the problem of Kubernetes itself, and moving into what other problems can I solve on top of Kubernetes, so you're starting to see all these really exciting startups doing really neat things, and I really like, like this vendor hall I really like, 'cause you get to see all the new guys, but there's a lot of neat stuff going on, and I'm excited to see where the community goes in the next five years, but it's, we've gone from zero to 60 insanely fast, 'cause you guys were at the original KubeCon, I think, as well. >> It's our fourth year doing theCUBE at this show, but absolutely, we've watched it since the early days. 
I'm not supposed to mention OpenStack at this show, but we remember talking to JJ and some of the early people there, and we interviewed Craig McLuckie back in his Google days, and the like, so we've been fortunate to be on here since really day zero here, and definitely great energy, congrats so much on the progress, I really appreciate the updates on everything going, as you said, we've reached a certain state, and adding more value on top of this whole environment. >> Yeah, we're in junior high now, right, and we were in grade school for a few years. >> All right, well Matt and Rob, thank you so much for the update, hopefully not an awkward dance tonight for the junior people. For Justin Warren, I'm Stu Miniman, back with more coverage here from KubeCon CloudNativeCon 2019 in San Diego. Thank you for watching theCUBE. (techno music)

Published Date : Nov 21 2019


Kelsey Hightower, Google Cloud | KubeCon + CloudNativeCon NA 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE, covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to theCUBE here at KubeCon CloudNativeCon, 2019. Second day of three days, wall to wall coverage. I am Stu Miniman, John Troyer is my cohost for the three days, and we've had a great schedule, but this one will be super dope, of course, 'cause it is the one, the only >> That's the right phrase to use >> Kelsey Hightower >> to bring me out. >> who is now a principal developer advocate at Google Cloud. Kelsey, thanks so much for joining us. >> Well, thanks for having me. >> All right, let's start. You did a keynote yesterday and I actually heard, not only did it rain in San Diego, people were talking about allergies. They were grabbing their tissues, eyes seemed to be tearing. You had stepped back for a little bit. When I first came into this show, we've been doing it for four years, it was, you know, Kelsey Hightower and Kubernetes almost seemed to get top billing at the show. You specifically stepped back for a little bit, and you're here this week. So, talk a little bit about that piece. >> Yeah, so I stepped back to do some serverless stuff, right? So I worked on some cloud function stuff at Google, launching the Go support for Cloud Functions, and really trying to understand the serverless space by being in it, and that means stepping back from Kubernetes quite a bit. So the keynote, I wanted people to have emotion. So no live demos, no slides, no speaker notes, and then just telling stories from the last six years of being a part of the Kubernetes community, and making people feel something. And I think it resonated with folks, and, of course, people got a little teary-eyed. I gave people a cover, so we just kept saying the allergies are starting to flare up in the room, and we really connected with people. >> Awesome. So you came back, which means serverless not completely taking over and obviating what we've been doing here for years. >> Yeah, I think serverless is just another tool in the toolbox, and I didn't want to miss it. So before I put it in its category, I wanted to make sure that I got super deep with it, used it myself, gave it a fair shot, and it definitely deserves a place. But I think the idea of serverless is the thing that's going to stick. This idea of eliminating as much infrastructure as possible and then putting that everywhere we can. >> I want to bring that idea of a tool in the toolbox to what we're talking about at this show. >> Kelsey: Okay. >> So, you know, Kubernetes is one of the hottest topics at the show. The CNCF, now I mean, there's dozens and dozens of projects here. Dan Kohn, when he kicked it off, talked about Minecraft. And it's like there's that board there with all the tools, and, oh boy, which one do I pick, and how do I use it? >> How do you look at where Kubernetes fits in the overall landscape? Obviously, 12,000 people, it's really exciting. Why is there so much excitement around something that I think is really, it becomes another tool in the tool shed and baked into the platform? >> I think Kubernetes represents a problem that most people have. If you went down the Linux and then virtualization path, then you ended up with a bunch of virtual machines that you need to glue together somehow. So if you look inside of what Kubernetes has, like the scheduler, how it takes in the pain of running a workload.
If you're running VMs in Linux, this is a problem you already have, so Kubernetes just resonates with almost everyone that is using virtualization. This is why it's so popular. So it fits. Now every tool in the landscape may not resonate the same way because everyone doesn't have the same set of problems around the edges, but Kubernetes is a very obvious thing to anyone that's managing more than a handful of machines. >> Well, I think that brings up an interesting question of, as companies and people assemble the stacks, right, assemble the engines out of the components, do you have any thoughts on, well, I guess we could take it from a couple of different ways. But maybe as a person coming here for the first time, representing their team, getting started, maybe not involved online with upstream Kubernetes but trying to make sense of the landscape here and all the different, the zoo of different projects. >> Lots of new people here. You talk to people, I think, what, 50% or more of the people are brand-new. People have been ignoring, rightfully so, Kubernetes for four or five years. "Maybe I don't need it, I'm good where I am." But we're at a point now where you can't ignore it. VMware's offering Kubernetes, every conference you go to, whether it's KubeCon or not, this is the thing they're talking about. It's just like Linux was years prior, right? It's just the thing that people are doing. So now, you're coming to see for yourself first-hand. You're coming to ask people how's it going, now that we're five years in? There's a sense of maturity, things are slowing down, the ecosystem's getting a lot more mature around it. So you almost have no choice but to be here because now it's in your world. >> All right, so, there's some people that I've been seeing online that are still looking at this a little bit skeptically, and said, "You know, we've been down this path before." You know, "Oh, everybody's involved in Kubernetes," you know, "There's my Kubernetes versus some of the other environments." How should we think about that? 'Cause as you said, it's going to be baked into VMware when they do Project Pacific, and they've got a couple of ways to get you to Kubernetes. Yeah, Microsoft just announced an update. Is it an interoperability issue? Is this the universal backplane? Do you have a good analogy as to how we should be thinking about where we are today and where we need to go so that we don't repeat the sins of the past when it was the multi-vendor mess that really didn't solve the customer's problems? >> You're going to always have multi-vendors because there's too many customers for one vendor to satisfy. That's always going to be the case, there's no way around that. But the way I look at Kubernetes now is like, take the web. Click around, webpages, link them together. And out of that, we extracted REST. People can build APIs, we build tooling on top, cloud providers built APIs to manage infrastructure. So the REST component comes out of the larger picture of the web. And when we take the larger components of Kubernetes, and we extract out that Kubernetes API, you get Istio, you get these network control planes, you get people building 5G infrastructure using that Kubernetes model. You get all the cloud providers saying, "Now, if the world's going to have this set of APIs that are based on Kubernetes, then I can actually build a global control plane, because I can assume that Kubernetes API is everywhere." Not just for containers, also for networking, authorization, management systems.
So it's only natural that people start moving up the stack, and I expect even more panes, ever more fragmentation, if you will, because now it's so much easier to explore a new idea, even if it's only for a smaller subset of the market. So I expect it to explode. >> Yeah, one of the things we've been looking at this year is really the simplicity of the offering. You had done Kubernetes The Hard Way a couple of years back. We've been looking at things like lightweight Kubernetes, the K3s. How are we with that simplicity of the overall solution and making sure that Kubernetes can reach its potential to get to all of those use cases and end points that you were talking about? >> Kubernetes' job is to manage the complexity. If you need to run in multiple regions across the globe, that is a set of complexity, Kubernetes has one way of addressing it by sitting on top of all those VMs globally, and then providing a set of APIs. That Kubernetes setup and cluster is going to be way more complex than a MicroK8s, where you have a single virtual machine where you install the components on one machine, you don't deal with networking, you're not dealing with multiple nodes. That flow is super-easy. I think I did a tweet for the Canonical folks. They have a tool called MicroK8s, you just run one command, you have a Kubernetes cluster, and off you go. And that's great for a developer, but as the underlying infrastructure gets more complex, I think the overall cluster, and the components that you need in that cluster, matches the complexity. So I think Kubernetes has proven to scale up, and now you can see it's scaling down. So I think it's one of these things that's adapted to complexity, versus having to jump off of the platform because it can't meet either range. >> Now, Kelsey, we've talked a little bit about both Kubernetes as this universal API, but also being embedded, right, and being below a lot of application layer and other management-layer things, I mean, did you think about talking to our fellow technologists, right? There are some people who are going to be, we've also used the metaphor, mechanics, right? There's some people who are going to be the mechanics, but, like, everybody drives. So, as we get to this level of maturity here now at KubeCon 2019, any advice on how people should pick? Do I need to, and also online we hear a lot about, "Oh, I don't need, I don't know if I need Kubernetes. I don't know if my particular use case right now, boy, I don't know if I want to go there." So, I mean, how should people be looking at it? And also on upskilling, should every IT person and technologist and developer be working towards Kubernetes? >> Absolutely not. >> Thank you. >> If you're managing a bunch of machines, you got two choices. You could build a lot of custom tooling and build something that looks like Kubernetes, most people don't have the time to do that. So what we want to do is say, look, a lot of people are collaborating on that obvious thing that you should build to manage that. Now if I give you 80% of your time back, you should go and fill in that gap between what Kubernetes brings to the table and what your developers want to actually do. And at the end of the day, it's always been the same thing. You check in code, it should adopt the company's best practice, and I should be able to get an endpoint and some debugging tools. That has always been the north star, even when there was virtualization, early days of cloud. Kubernetes is no different.
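On the MicroK8s point Hightower makes above, the single-command experience really is roughly one install command plus a readiness check. A small sketch follows, assuming an Ubuntu host with snap available and sudo rights; these are the documented MicroK8s commands, wrapped in Python purely for illustration.

import subprocess

# Assumes an Ubuntu host with snap available and sudo rights.
def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["sudo", "snap", "install", "microk8s", "--classic"])   # install the single-node cluster
run(["sudo", "microk8s", "status", "--wait-ready"])         # block until the cluster is up
run(["sudo", "microk8s", "kubectl", "get", "nodes"])        # confirm the node is Ready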
The thing that Kubernetes represents, though, is that you don't have to build as much glue between either your own VMware or your pre-early cloud. Kubernetes has built all that stuff way up to this line, so maybe you actually finish that CI/CD part you were supposed to do anyway. >> All right, so, Kelsey, every year we try to figure out and distill down the theme of the event. A couple of years ago, the service mesh, really, extensions were going at it. Here, there's so many different pieces, it's a little tough to kind of pin down. We talked about some of the edge and simplicity use cases, security has, of course, been a discussion for a couple of years. Anything that you've distilled so far or the things that you are finding most interesting and new, kind of at the edges of this whole ecosystem? >> This whole thing is a Swiss army knife, so it depends on who's holding it. Whatever problem they have, that's the piece of the tool that they're going to make front and center. So that's what this is. And right now I think there's a lot of confusion on, do I even need all the other components in this Swiss army knife? Some people are just like, "Well, this tool looks interesting. I don't have a problem that this tool is for." And some people are actively creating a problem so they can use the other tools in the Swiss army knife. I think the biggest thing that I've seen in the last two years is, make the new thing work the old way. So you're getting the more traditional vendors showing up and adding their Kubernetes integrations, and they're making the new thing more familiar to the people who have the existing tool. And when I look around, that's the thing that I see arise. "Hey, that firewall you were using? We now have Kubernetes support. That security tool you were using? We now have Kubernetes support." The security tool works fundamentally the same, it's just now easier to adopt and maybe make Kubernetes things that are deployed in it, leverage those things.
So, this all or nothing approach never worked. You know it doesn't work. So I think when you have those two fundamental things, then you see a lot of success. And it's not about the age of the enterprise, either. There are hundred-year-old companies making it work because they have the leadership component, and they're very skeptical, so they approach the problem with pragmatism, so they actually get to production. Sometimes faster than the startups that are trying 7,000 things in more of a reckless fashion, the whole thing catches fire. So, those are the positive outcomes that, there's so many tools now. You have your traditional vendors now with skin in the game, giving you documentation. I think right now, if you've got those two components, you're on your path to success. >> Yeah, I guess last thing, I want to get your thoughts just on this community these days. A couple of the keynote speakers today really talked about project over company, and definitely the open-source ethos is front and center at our show here. Give us your viewpoint on how the community's doing and any highlight you want to share. >> So I have one more thing on top of that hierarchy, is people over projects always. And then that means that the people should be able to say, "Hey, I am not wedded to this project forever. There's going to be a time when we have to jump off, there's going to be a time when we have to learn from the other communities." And if you do that, then we can actually be on the straight path. If we put the projects too much front and center, I think we start to miss the boat. Kubernetes, Kubernetes, and the rest of the world is moving on. And then we look up, we've missed it, and we actually didn't even get to contribute to the new thing. So I think the biggest part about this community is that hopefully we keep the thing going where we keep reminding people, it's people over these projects. And I think in my keynote, I was trying to address the idea that we're just kind of pacesetters. You come in, you contribute, all contributions are welcome, documentation, code, or leadership, and then sometimes you got to jump back out and allow someone else to come in and set the pace and let the ecosystem become the marathon and let it keep running. >> All right well, Kelsey, thank you so much for sharing with our community. I tell ya, I've had countless stories of people over the years that have talked about how they've reached out to you, you've helped them along the way, and I know everybody in this ecosystem really appreciates everything that you've done to help move this to where we are today. >> Awesome, thanks for having me. >> All right, for John Troyer, I'm Stu Miniman. Super dope coverage of KubeCon CloudNativeCon continues. We'll be right back, thanks for watching theCUBE. (electronic beats)

Published Date : Nov 20 2019
