
Search Results for Linkerd:

Jack Greenfield, Walmart | A Dive into Walmart's Retail Supercloud


 

>> Welcome back to SuperCloud2. This is Dave Vellante, and we're here with Jack Greenfield. He's the Vice President of Enterprise Architecture and the Chief Architect for the global technology platform at Walmart. Jack, I want to thank you for coming on the program. Really appreciate your time. >> Glad to be here, Dave. Thanks for inviting me and appreciate the opportunity to chat with you. >> Yeah, it's our pleasure. Now we call what you've built a SuperCloud. That's our term, not yours, but how would you describe the Walmart Cloud Native Platform? >> So WCNP, as the acronym goes, is essentially an implementation of Kubernetes for the Walmart ecosystem. And what that means is that we've taken Kubernetes off the shelf as open source, and we have integrated it with a number of foundational services that provide other aspects of our computational environment. So Kubernetes off the shelf doesn't do everything. It does a lot. In particular the orchestration of containers, but it delegates through API a lot of key functions. So for example, secret management, traffic management, there's a need for telemetry and observability at a scale beyond what you get from raw Kubernetes. That is to say, harvesting the metrics that are coming out of Kubernetes and processing them, storing them in time series databases, dashboarding them, and so on. There's also an angle to Kubernetes that gets a lot of attention in the daily DevOps routine, that's not really part of the open source deliverable itself, and that is the DevOps sort of CICD pipeline-oriented lifecycle. And that is something else that we've added and integrated nicely. And then one more piece of this picture is that within a Kubernetes cluster, there's a function that is critical to allowing services to discover each other and integrate with each other securely and with proper configuration provided by the concept of a service mesh. So Istio, Linkerd, these are examples of service mesh technologies. And we have gone ahead and integrated actually those two. There's more than those two, but we've integrated those two with Kubernetes. So the net effect is that when a developer within Walmart is going to build an application, they don't have to think about all those other capabilities where they come from or how they're provided. Those are already present, and the way the CICD pipelines are set up, it's already sort of in the picture, and there are configuration points that they can take advantage of in the primary YAML and a couple of other pieces of config that we supply where they can tune it. But at the end of the day, it offloads an awful lot of work for them, having to stand up and operate those services, fail them over properly, and make them robust. All of that's provided for. >> Yeah, you know, developers often complain they spend too much time wrangling and doing things that aren't productive. So I wonder if you could talk about the high level business goals of the initiative in terms of the hardcore benefits. Was the real impetus to tap into best of breed cloud services? Were you trying to cut costs? Maybe gain negotiating leverage with the cloud guys? Resiliency, you know, I know was a major theme. Maybe you could give us a sense of kind of the anatomy of the decision making process that went in. >> Sure, and in the course of answering your question, I think I'm going to introduce the concept of our triplet architecture which we haven't yet touched on in the interview here. 
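To make the idea of "configuration points in the primary YAML" a bit more concrete, here is a minimal sketch of how a platform can let developers opt a workload into a service mesh such as Linkerd declaratively. This is illustrative only, not Walmart's actual manifest; the workload name, namespace, and image are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service          # hypothetical workload name
  namespace: team-a              # hypothetical namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
      annotations:
        linkerd.io/inject: enabled   # asks Linkerd to inject its sidecar proxy at admission time
    spec:
      containers:
        - name: app
          image: registry.example.com/team-a/example-service:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```

The point is that the developer expresses intent in a few lines of configuration, while the platform team operates the mesh, telemetry, and CI/CD machinery behind it.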
First off, just to sort of wrap up the motivation for WCNP itself which is kind of orthogonal to the triplet architecture. It can exist with or without it. Currently does exist with it, which is key, and I'll get to that in a moment. The key drivers, business drivers for WCNP were developer productivity by offloading the kinds of concerns that we've just discussed. Number two, improving resiliency, that is to say reducing opportunity for human error. One of the challenges you tend to run into in a large enterprise is what we call snowflakes, lots of gratuitously different workloads, projects, configurations to the extent that by developing and using WCNP and continuing to evolve it as we have, we end up with cookie cutter like consistency across our workloads which is super valuable when it comes to building tools or building services to automate operations that would otherwise be manual. When everything is pretty much done the same way, that becomes much simpler. Another key motivation for WCNP was the ability to abstract from the underlying cloud provider. And this is going to lead to a discussion of our triplet architecture. At the end of the day, when one works directly with an underlying cloud provider, one ends up taking a lot of dependencies on that particular cloud provider. Those dependencies can be valuable. For example, there are best of breed services like say Cloud Spanner offered by Google or say Cosmos DB offered by Microsoft that one wants to use and one is willing to take the dependency on the cloud provider to get that functionality because it's unique and valuable. On the other hand, one doesn't want to take dependencies on a cloud provider that don't add a lot of value. And with Kubernetes, we have the opportunity, and this is a large part of how Kubernetes was designed and why it is the way it is, we have the opportunity to sort of abstract from the underlying cloud provider for stateless workloads on compute. And so what this lets us do is build container-based applications that can run without change on different cloud provider infrastructure. So the same applications can run on WCNP over Azure, WCNP over GCP, or WCNP over the Walmart private cloud. And we have a private cloud. Our private cloud is OpenStack based and it gives us some significant cost advantages as well as control advantages. So to your point, in terms of business motivation, there's a key cost driver here, which is that we can use our own private cloud when it's advantageous and then use the public cloud provider capabilities when we need to. A key place with this comes into play is with elasticity. So while the private cloud is much more cost effective for us to run and use, it isn't as elastic as what the cloud providers offer, right? We don't have essentially unlimited scale. We have large scale, but the public cloud providers are elastic in the extreme which is a very powerful capability. So what we're able to do is burst, and we use this term bursting workloads into the public cloud from the private cloud to take advantage of the elasticity they offer and then fall back into the private cloud when the traffic load diminishes to the point where we don't need that elastic capability, elastic capacity at low cost. And this is a very important paradigm that I think is going to be very commonplace ultimately as the industry evolves. Private cloud is easier to operate and less expensive, and yet the public cloud provider capabilities are difficult to match. 
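As a side note on the elasticity point: within a single Kubernetes cluster, scaling stateless workloads with demand is typically expressed declaratively. The sketch below is a generic, hypothetical HorizontalPodAutoscaler; it illustrates only the per-cluster elasticity piece, not the cross-cloud bursting or placement tooling described above, which sits at a layer above this.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service          # hypothetical name, matching the Deployment it scales
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 3                 # steady-state footprint, e.g. on the private cloud
  maxReplicas: 50                # headroom for traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70%
```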
>> And the triplet, the tri is your on-prem private cloud and the two public clouds that you mentioned, is that right? >> That is correct. And we actually have an architecture in which we operate all three of those cloud platforms in close proximity with one another in three different major regions in the US. So we have east, west, and central. And in each of those regions, we have all three cloud providers. And the way it's configured, those data centers are within 10 milliseconds of each other, meaning that it's of negligible cost to interact between them. And this allows us to be fairly agnostic to where a particular workload is running. >> Does a human make that decision, Jack or is there some intelligence in the system that determines that? >> That's a really great question, Dave. And it's a great question because we're at the cusp of that transition. So currently humans make that decision. Humans choose to deploy workloads into a particular region and a particular provider within that region. That said, we're actively developing patterns and practices that will allow us to automate the placement of the workloads for a variety of criteria. For example, if in a particular region, a particular provider is heavily overloaded and is unable to provide the level of service that's expected through our SLAs, we could choose to fail workloads over from that cloud provider to a different one within the same region. But that's manual today. We do that, but people do it. Okay, we'd like to get to where that happens automatically. In the same way, we'd like to be able to automate the failovers, both for high availability and sort of the heavier disaster recovery model between, within a region between providers and even within a provider between the availability zones that are there, but also between regions for the sort of heavier disaster recovery or maintenance driven realignment of workload placement. Today, that's all manual. So we have people moving workloads from region A to region B or data center A to data center B. It's clean because of the abstraction. The workloads don't have to know or care, but there are latency considerations that come into play, and the humans have to be cognizant of those. And automating that can help ensure that we get the best performance and the best reliability. >> But you're developing the dataset to actually, I would imagine, be able to make those decisions in an automated fashion over time anyway. Is that a fair assumption? >> It is, and that's what we're actively developing right now. So if you were to look at us today, we have these nice abstractions and APIs in place, but people run that machine, if you will, moving toward a world where that machine is fully automated. >> What exactly are you abstracting? Is it sort of the deployment model or, you know, are you able to abstract, I'm just making this up like Azure functions and GCP functions so that you can sort of run them, you know, with a consistent experience. What exactly are you abstracting and how difficult was it to achieve that objective technically? >> that's a good question. What we're abstracting is the Kubernetes node construct. That is to say a cluster of Kubernetes nodes which are typically VMs, although they can run bare metal in certain contexts, is something that typically to stand up requires knowledge of the underlying cloud provider. So for example, with GCP, you would use GKE to set up a Kubernetes cluster, and in Azure, you'd use AKS. 
We are actually abstracting that aspect of things so that the developers standing up applications don't have to know what the underlying cluster management provider is. They don't have to know if it's GCP, AKS or our own Walmart private cloud. Now, in terms of functions like Azure functions that you've mentioned there, we haven't done that yet. That's another piece that we have sort of on our radar screen that, we'd like to get to is serverless approach, and the Knative work from Google and the Azure functions, those are things that we see good opportunity to use for a whole variety of use cases. But right now we're not doing much with that. We're strictly container based right now, and we do have some VMs that are running in sort of more of a traditional model. So our stateful workloads are primarily VM based, but for serverless, that's an opportunity for us to take some of these stateless workloads and turn them into cloud functions. >> Well, and that's another cost lever that you can pull down the road that's going to drop right to the bottom line. Do you see a day or maybe you're doing it today, but I'd be surprised, but where you build applications that actually span multiple clouds or is there, in your view, always going to be a direct one-to-one mapping between where an application runs and the specific cloud platform? >> That's a really great question. Well, yes and no. So today, application development teams choose a cloud provider to deploy to and a location to deploy to, and they have to get involved in moving an application like we talked about today. That said, the bursting capability that I mentioned previously is something that is a step in the direction of automatic migration. That is to say we're migrating workload to different locations automatically. Currently, the prototypes we've been developing and that we think are going to eventually make their way into production are leveraging Istio to assess the load incoming on a particular cluster and start shedding that load into a different location. Right now, the configuration of that is still manual, but there's another opportunity for automation there. And I think a key piece of this is that down the road, well, that's a, sort of a small step in the direction of an application being multi provider. We expect to see really an abstraction of the fact that there is a triplet even. So the workloads are moving around according to whatever the control plane decides is necessary based on a whole variety of inputs. And at that point, you will have true multi-cloud applications, applications that are distributed across the different providers and in a way that application developers don't have to think about. >> So Walmart's been a leader, Jack, in using data for competitive advantages for decades. It's kind of been a poster child for that. You've got a mountain of IP in the form of data, tools, applications best practices that until the cloud came out was all On Prem. But I'm really interested in this idea of building a Walmart ecosystem, which obviously you have. Do you see a day or maybe you're even doing it today where you take what we call the Walmart SuperCloud, WCNP in your words, and point or turn that toward an external world or your ecosystem, you know, supporting those partners or customers that could drive new revenue streams, you know directly from the platform? >> Great questions, Dave. So there's really two things to say here. The first is that with respect to data, our data workloads are primarily VM basis. 
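On the Istio-based load shedding mentioned above: one common building block for shifting a share of traffic elsewhere is weighted routing in a VirtualService. The sketch below is a generic illustration with hypothetical host names; the multi-cluster plumbing (gateways, service entries) and the logic that decides the weights are omitted.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout                  # hypothetical service name
spec:
  hosts:
    - checkout.example.internal   # hypothetical client-facing host
  http:
    - route:
        - destination:
            host: checkout.primary.example.internal   # hypothetical in-region destination
          weight: 80
        - destination:
            host: checkout.burst.example.internal     # hypothetical destination in another location
          weight: 20              # share of traffic shed to the other location
```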
I've mentioned before some VMware, some straight open stack. But the key here is that WCNP and Kubernetes are very powerful for stateless workloads, but for stateful workloads tend to be still climbing a bit of a growth curve in the industry. So our data workloads are not primarily based on WCNP. They're VM based. Now that said, there is opportunity to make some progress there, and we are looking at ways to move things into containers that are currently running in VMs which are stateful. The other question you asked is related to how we expose data to third parties and also functionality. Right now we do have in-house, for our own use, a very robust data architecture, and we have followed the sort of domain-oriented data architecture guidance from Martin Fowler. And we have data lakes in which we collect data from all the transactional systems and which we can then use and do use to build models which are then used in our applications. But right now we're not exposing the data directly to customers as a product. That's an interesting direction that's been talked about and may happen at some point, but right now that's internal. What we are exposing to customers is applications. So we're offering our global integrated fulfillment capabilities, our order picking and curbside pickup capabilities, and our cloud powered checkout capabilities to third parties. And this means we're standing up our own internal applications as externally facing SaaS applications which can serve our partners' customers. >> Yeah, of course, Martin Fowler really first introduced to the world Zhamak Dehghani's data mesh concept and this whole idea of data products and domain oriented thinking. Zhamak Dehghani, by the way, is a speaker at our event as well. Last question I had is edge, and how you think about the edge? You know, the stores are an edge. Are you putting resources there that sort of mirror this this triplet model? Or is it better to consolidate things in the cloud? I know there are trade-offs in terms of latency. How are you thinking about that? >> All really good questions. It's a challenging area as you can imagine because edges are subject to disconnection, right? Or reduced connection. So we do place the same architecture at the edge. So WCNP runs at the edge, and an application that's designed to run at WCNP can run at the edge. That said, there are a number of very specific considerations that come up when running at the edge, such as the possibility of disconnection or degraded connectivity. And so one of the challenges we have faced and have grappled with and done a good job of I think is dealing with the fact that applications go offline and come back online and have to reconnect and resynchronize, the sort of online offline capability is something that can be quite challenging. And we have a couple of application architectures that sort of form the two core sets of patterns that we use. One is an offline/online synchronization architecture where we discover that we've come back online, and we understand the differences between the online dataset and the offline dataset and how they have to be reconciled. The other is a message-based architecture. And here in our health and wellness domain, we've developed applications that are queue based. So they're essentially business processes that consist of multiple steps where each step has its own queue. 
And what that allows us to do is devote whatever bandwidth we do have to those pieces of the process that are most latency sensitive and allow the queue lengths to increase in parts of the process that are not latency sensitive, knowing that they will eventually catch up when the bandwidth is restored. And to put that in a little bit of context, we have fiber links to all of our locations, and we have, I'll just use a round number, 10-ish thousand locations. It's larger than that, but that's the ballpark, and we have fiber to all of them. But the fiber does get disconnected on a regular basis. In fact, I forget the exact number, but several dozen locations get disconnected daily, just by virtue of the fact that there's construction going on and things are happening in the real world. When the disconnection happens, we're able to fall back to 5G and to Starlink. Starlink is preferred; it's higher bandwidth. 5G if that fails. But in each of those cases, the bandwidth drops significantly. And so the applications have to be intelligent about throttling back the traffic that isn't essential, so that they can push the essential traffic in those lower bandwidth scenarios. >> So much technology to support this amazing business, which started in the early 1960s. Jack, unfortunately, we're out of time. I would love to have you back, or some members of your team, and drill into how you're using open source, but really thank you so much for explaining the approach that you've taken and participating in SuperCloud2. >> You're very welcome, Dave, and we're happy to come back and talk about other aspects of what we do. For example, we could talk more about the data lakes and the data mesh that we have in place. We could talk more about the directions we might go with serverless. So please look us up again. Happy to chat. >> I'm going to take you up on that, Jack. All right. This is Dave Vellante for John Furrier and the Cube community. Keep it right there for more action from SuperCloud2. (upbeat music)

Published Date : Feb 17 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Jack Greenfield | PERSON | 0.99+
Dave | PERSON | 0.99+
Jack | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Martin Fowler | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
Zhamak Dehghani | PERSON | 0.99+
Today | DATE | 0.99+
each | QUANTITY | 0.99+
One | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
today | DATE | 0.99+
two things | QUANTITY | 0.99+
three | QUANTITY | 0.99+
first | QUANTITY | 0.99+
each step | QUANTITY | 0.99+
First | QUANTITY | 0.99+
early 1960s | DATE | 0.99+
Starlink | ORGANIZATION | 0.99+
one | QUANTITY | 0.98+
a day | QUANTITY | 0.97+
GCP | TITLE | 0.97+
Azure | TITLE | 0.96+
WCNP | TITLE | 0.96+
10 milliseconds | QUANTITY | 0.96+
both | QUANTITY | 0.96+
Kubernetes | TITLE | 0.94+
Cloud Spanner | TITLE | 0.94+
Linkerd | ORGANIZATION | 0.93+
triplet | QUANTITY | 0.92+
three cloud providers | QUANTITY | 0.91+
Cube | ORGANIZATION | 0.9+
SuperCloud2 | ORGANIZATION | 0.89+
two core sets | QUANTITY | 0.88+
John Furrier | PERSON | 0.88+
one more piece | QUANTITY | 0.86+
two public clouds | QUANTITY | 0.86+
thousand locations | QUANTITY | 0.83+
Vice President | PERSON | 0.8+
10-ish | QUANTITY | 0.79+
WCNP | ORGANIZATION | 0.75+
decades | QUANTITY | 0.75+
three different major regions | QUANTITY | 0.74+


KubeCon Preview with Madhura Maskasky


 

(upbeat music) >> Hello, everyone. Welcome to theCUBE here, in Palo Alto, California for a Cube Conversation. I'm John Furrier, host of theCUBE. This is a KubeCon preview conversation. We got a great guest here, in studio, Madhura Maskasky, Co-Founder and VP of Product, Head of Product at Platform9. Madhura, great to see you. Thank you for coming in and sharing this conversation about, this cube conversation about KubeCon, a Kubecon conversation. >> Thanks for having me. >> A light nice play on words there, a little word play, but the fun thing about theCUBE is, we were there at the beginning when OpenStack was kind of on its transition, Kubernetes was just starting. I remember talking to Lou Tucker back in, I think Seattle or some event and Craig McLuckie was still working at Google at the time. And Google was debating on putting the paper out and so much has happened. Being present at creation, you guys have been there too with Platform9. Present at creation of the Kubernetes wave was not obvious only a few insiders kind of got the big picture. We were one of 'em. We saw this as a big wave. Docker containers at that time was a unicorn funded company. Now they've went back to their roots a few years ago. I think four years ago, they went back and recapped and now they're all pure open source. Since then Docker containers and containers have really powered the Kubernetes wave. Combined with the amazing work of the CNCF and KubeCon which we've been covering every year. You saw the maturation, you saw the wave, the early days, end user projects being contributed. Like Envoy's been a huge success. And then the white spaces filling in on the map, you got observability, you've got run time, you got all the things, still some white spaces in there but it's really been great to watch this growth. So I have to ask you, what do you expect this year? You guys have some cutting edge technology. You got Arlo announced and a lot's going on Kubernetes this year. It's going mainstream. You're starting to see the traditional enterprises embrace and some are scaling faster than others, manage services, plethora of choices. What do you expect this year at KubeCon North America in Detroit? >> Yeah, so I think you summarize kind of that life cycle or lifeline of Kubernetes pretty well. I think I remember the times when, just at the very beginning of Kubernetes, after it was released we were sitting I think with box, box dot com and they were describing to us why they are early adopters of Kubernetes. And we were just sitting down taking notes trying to understand this new project and what value it adds, right? And then flash forward to today where there are Dilbert strips written about Kubernetes. That's how popular it has become. So, I think as that has happened, I think one of the things that's also happened is the enterprises that adopted it relatively early are running it at a massive scale or looking to run it at massive scale. And so I think at scale cloud-native is going to be the most important theme. At scale governance, at scale manageability are going to be top of the mind. And the third factor, I think that's going to be top of the mind is cost control at scale. >> Yeah, and one of the things that we've seen is that the incubated projects a lot more being incubated now and you got the combination of end user and company contributed open source. You guys are contributing RLO >> RLO. >> and open source. >> Yeah. >> That's been part of your game plan there. So you guys are no stranger open source. 
How do you see this year's momentum? Is it more white space being filled? What's new coming out of the block? What do you think is going to come out of this year? What's rising in terms of traction? What do you see emerging as more notable that might not have been there last year? >> Yeah, so I think it's all about filling that white space, some level of consolidation, et cetera. That's usually the trend in the cloud-native space. And I think it's going to continue on that theme, and it's going to be tooling that lets users simplify their lives now that Kubernetes is part of your day to day. And so observability, et cetera, has always been top of the mind, but I think starting this year it's going to be at the next level. Which is to say, gone are the times of just running your Prometheus at the individual cluster level, just to take that as an example. Now you need a solution- >> Yep. >> that operates at this massive scale across different distributions and your edge locations. So, it's taking those same problems but taking them to that next order of management. >> I'm looking at my notes here and I see orchestration and service mesh, which Envoy does. And you're seeing other solutions come out as well, like Linkerd and whatnot. Some are more popular than others. What areas do you see are most needed? If you could go in there and be program chair for a day, and you've got a day job as VP of Product at Platform9, so you kind of have to have that future view of the roadmap and looking back at where you've come, what would you want to prioritize if you could bring your VP of Product skills to the open source and say, hey, can I point out some needs here? What would you say? >> Yeah, I think just more tooling that lets people make sense of, and reduce, some of the chaos that this sprawling cloud-native ecosystem creates. Which is to say, adding more tooling that covers white space is great, but introducing abilities that let you better manage what you have today is probably absolutely top of the mind. And I think that's really not covered today in terms of the tools that are around. >> You know, I've been watching the top five incubated projects in CNCF, and Argo cracked the top five. I think they got close to 12,000 GitHub stars. They have a conference now, ArgoCon, here in California. What is that about? >> Yeah. >> Why is that so popular? I mean, I know it's kind of about obviously workflows and dealing with pipelines, but why is that so popular right now? >> I think it's very interesting, Argo's journey, and how it's just climbed up in terms of its GitHub stars, for example. And I think it's because of these scale factors that we talk about, on one end the number of nodes and clusters growing, and on the other end the number of sites you're managing growing. I think that CD, or continuous deployment of applications, used to kind of be something that you want to get to, that north star, but most enterprises wouldn't quite be there. They would either think that they're not ready, or that it's not needed enough, to get there. But now, when you're operating at that level of scale and need to still maintain consistency without skyrocketing your costs in terms of ops people, CD almost becomes a necessity. You need some kind of manageable, predictable way of deploying apps, without having to go out with new releases every six months or so; you need to do that on a daily basis, even hourly basis. And that's why. >> Scale's the theme again, >> Yep. >> back to scale. >> Yep. 
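To ground the continuous-deployment point: the declarative, Git-driven style described here is what tools like Argo CD provide. Below is a minimal, hypothetical Application manifest; the repository URL, paths, and names are placeholders, not anything from Platform9 or the interview.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app               # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-configs.git   # hypothetical Git repository
    targetRevision: main
    path: apps/example-app        # directory holding the manifests for this app
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true                 # remove resources that disappear from Git
      selfHeal: true              # revert drift so the cluster keeps matching Git
```

With a definition like this per application and per cluster, deployments happen whenever Git changes, daily or even hourly, rather than as occasional releases.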
>> All right, final question. We'll wrap up this preview for KubeCon in Detroit. Whereas we start getting the lay of the land and the focus. If you had to kind of predict the psychology of the developer that's going to be attending in person and they're going to have a hybrid event. So, they will be not as good as being in person. Us, it's going to be the first time kind of post pandemic when I think everyone's going to be together in LA it was a weird time in the calendar and Valencia was the kind of the first international one but this is the first time in North America. So, we're expecting a big audience. >> Mhm. >> If you could predict or what's your view on the psychology of the attendee this year? Obviously pumped to be back. But what do you think they're going to be thinking about? what's on their mind? What are they going to be peaked on? What's the focus? Where will be the psychology? Where will be the mindset? What are people going to be looking for this year? If you had to make a prediction on what the attendees are going to be thinking about what would you say? >> Yeah. So there's always a curiosity in terms of what's new, what new cool tools that are coming out that's going to help address some of the gaps. What can I try out? That's always as I go back to my development roots, first in mind, but then very quickly it comes down to what's going to help me do my job easier, better, faster, at lower cost. And I think again, I keep going back to that theme of automation, declarative automation, automation at scale, governance at scale, these are going to be top of the mind for both developers and ops teams. >> We'll be there covering it like a blanket like we always do from day one, present at creation at KubeCon we are going to be covering again for the consecutive year in a row. We love the CNCF. We love what they do. We thank the developers this year, again continue going mainstream closer and closer to the front lines as the company is the application. As we say, here on theCUBE we'll be there bringing you all the signal. Thanks for coming in and sharing your thoughts on KubeCon 2022. >> Thank you for having me. >> Okay. I'm John Furrier here in theCUBE in Palo Alto, California. Thanks for watching. (upbeat music)

Published Date : Sep 7 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Madhura Maskasky | PERSON | 0.99+
Craig McLuckie | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Lou Tucker | PERSON | 0.99+
California | LOCATION | 0.99+
LA | LOCATION | 0.99+
North America | LOCATION | 0.99+
Madhura | PERSON | 0.99+
Detroit | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
last year | DATE | 0.99+
first time | QUANTITY | 0.99+
Argo | ORGANIZATION | 0.99+
Palo Alto, California | LOCATION | 0.99+
four years ago | DATE | 0.99+
KubeCon | EVENT | 0.99+
Seattle | LOCATION | 0.99+
one | QUANTITY | 0.98+
this year | DATE | 0.98+
both | QUANTITY | 0.98+
third factor | QUANTITY | 0.98+
today | DATE | 0.98+
Platform9 | ORGANIZATION | 0.97+
12,000 | QUANTITY | 0.97+
a day | QUANTITY | 0.97+
Kubernetes | TITLE | 0.97+
Prometheus | TITLE | 0.97+
Linkerd | ORGANIZATION | 0.95+
CNCF | ORGANIZATION | 0.95+
first | QUANTITY | 0.94+
KubeCon 2022 | EVENT | 0.93+
Dilbert | PERSON | 0.93+
theCUBE | ORGANIZATION | 0.92+
wave | EVENT | 0.91+
day one | QUANTITY | 0.91+
few years ago | DATE | 0.9+
Valencia | LOCATION | 0.9+
ArgoCon | EVENT | 0.9+
five incubated projects | QUANTITY | 0.87+
KubeCon | ORGANIZATION | 0.86+
Envoy | ORGANIZATION | 0.85+
first international | QUANTITY | 0.84+
six months | QUANTITY | 0.81+
Kubecon | ORGANIZATION | 0.74+
top five | QUANTITY | 0.71+
Docker | ORGANIZATION | 0.69+
north star | LOCATION | 0.63+
Arlo | TITLE | 0.54+
pandemic | EVENT | 0.52+
Github | ORGANIZATION | 0.5+
OpenStack | TITLE | 0.49+
Kubernetes | PERSON | 0.47+
GitHub | TITLE | 0.47+
Cube | EVENT | 0.3+

Alex Ellis, OpenFaaS | KubeCon + CloudNativeCon Europe 2022


 

(upbeat music) >> Announcer: TheCUBE presents KubeCon and CloudNativeCon Europe, 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, a KubeCon, CloudNativeCon Europe, 2022. I'm your host, Keith Townsend alongside Paul Gillon, Senior Editor, Enterprise Architecture for SiliconANGLE. We are, I think at the half point way point this to be fair we've talked to a lot of folks in open source in general. What's the difference between open source communities and these closed source communities that we attend so so much? >> Well open source is just it's that it's open it's anybody can contribute. There are a set of rules that manage how your contributions are reflected in the code base. What has to be shared, what you can keep to yourself but the it's an entirely different vibe. You know, you go to a conventional conference where there's a lot of proprietary being sold and it's all about cash. It's all about money changing hands. It's all about doing the deal. And open source conferences I think are more, they're more transparent and yeah money changes hands, but it seems like the objective of the interaction is not to consummate a deal to the degree that it is at a more conventional computer conference. >> And I think that can create an uneven side effect. And we're going to talk about that a little bit with, honestly a friend of mine Alex Ellis, founder of OpenFaaS. Alex welcome back to the program. >> Thank you, good to see Keith. >> So how long you've been doing OpenFaaS? >> Well, I first had this idea that serverless and function should be run on your own hardware back in 2016. >> Wow and I remember seeing you at DockerCon EU, was that in 2017? >> Yeah, I think that's when we first met and Simon Foskett took us out to dinner and we got chatting. And I just remember you went back to your hotel room after the presentation. You just had your iPhone out and your headphones you were talking about how you tried to OpenWhisk and really struggled with it and OpenFaaS sort of got you where you needed to be to sort of get some value out of the solution. >> And I think that's the magic of these open source communities in open source conferences that you can try stuff, you can struggle with it, come to a conference either get some advice or go in another direction and try something like a OpenFaaS. But we're going to talk about the business perspective. >> Yeah. >> Give us some, like give us some hero numbers from the project. What types of organizations are using OpenFaaS and what are like the download and stars all those, the ways you guys measure project success. >> So there's a few ways that you hear this talked about at KubeCon specifically. And one of the metrics that you hear the most often is GitHub stars. Now a GitHub star means that somebody with their laptop like yourself has heard of a project or seen it on their phone and clicked a button that's it. There's not really an indication of adoption but of interest. And that might be fleeting and a blog post you might publish you might bump that up by 2000. And so OpenFaaS quite quickly got a lot of stars which encouraged me to go on and do more with it. And it's now just crossed 30,000 across the whole organization of about 40 different open source repositories. >> Wow that is a number. >> Now you are in ecosystem where Knative is also taken off. And can you distinguish your approach to serverless or FaaS to Knatives? >> Yes so, Knative isn't an approach to FaaS. 
That's simply put. And if you listen to Ville Aikas from the Knative project, he was working inside Google and wished that Kubernetes would do a little bit more than what it did. And so he started an initiative with some others to start bringing more abstractions like auto scaling, revision management, so you can have two versions of code and shift traffic around. And that's really what they're trying to do: add onto Kubernetes and make it do some of the things that a platform might do. Now OpenFaaS started from a different angle and, frankly, two years earlier. >> There was no Kubernetes when you started it. >> It kind of led in the space and built out that ecosystem. So the idea was, I was working with Lambda and AWS Alexa skills. I wanted to run them on my own hardware and I couldn't. And so OpenFaaS from the beginning started from that developer experience of: here's my code, run it for me. Knative is a set of extensions that may be a building block, but you're still pretty much working with Kubernetes. We get calls that come through. And actually recently, I can't tell you who they are, but there's a very large telecommunications provider in the US that was using OpenFaaS, like yourself heard of Knative, and in the hype they switched. And then they switched back again recently to OpenFaaS, and they've come to us for quite a large commercial deal. >> So did they find Knative to be more restrictive? >> No, it's the opposite. It's a lot less opinionated. It's more like building blocks, and you are dealing with a lot more detail. It's a much bigger system to manage, but don't get me wrong. I mean, the guys are very friendly. They have their sort of use cases that they pursue. Google's now donated the project to CNCF. And so they're running it that way. Now it doesn't mean that there aren't FaaS built on top of it. Red Hat have a serverless product, VMware have one. But OpenFaaS, because it owns the whole stack, can get you something that's always been very lean, simple to use, to the point that Keith in his hotel room installed it and was productive with it in an evening without having to be a Kubernetes expert. >> And that is, if you remember back, when I was very anti-Kubernetes. >> Yes. >> It was not a platform I was sold on. And for some of the very same reasons, I didn't think it was very user friendly. You know, I tried OpenWhisk thinking, what enterprise is going to try this thing, especially without the handholding and the support needed to do that. And you know, something pretty interesting happened, as I shared this with you on Twitter. I was having a briefing by a big microprocessor company, one of the big two. And they were showing me some of the work they were doing in Cloud-native, and the way that they stress test the system to show me auto scaling is that they brought up an OpenFaaS, what is it? The text one that just does a bunch of, >> The cows maybe. >> Yeah, the cows. That just does a bunch of text. And I'm like, one, I was amazed that it's a super simple app. And the second one was, the reason why they discovered it was because of that simplicity; it's just a thing that's in your store that you can just download and test. And it was OpenFaaS. And it was this big company that you had no idea was using >> No >> OpenFaaS. >> No. >> How prevalent is that? That you're always running into these surprises of who's using the solution. >> There are a lot of top tier companies, billion dollar companies, that use software that I've worked on. And it's quite common.
The main issue you have with open source is you don't have, like the commercial software you talked about, the relationships. They don't tell you they're using it until it breaks. And then they may come in incognito with a personal email address asking for things. What they don't want to do often is lend their brand or support you. And so it is a big challenge. However, early on, when I met you, BT, LivePerson, the University of Washington, and a bunch of other companies had told us they were using it. We were having discussions with them, took them to Kubecon and did talks with them. You can go and look at them in the video player. However, when I left my job in 2019 to work on this full time, I went to them and I said, you know, you use it in production, it's useful for you. We've done a talk, we really understand the business value of how it saves you time. I haven't got a way to fund it and it won't exist unless you help. They were like, sucks to be you. >> Wow, that's brutal. So, okay, let me get this right. I remember the story, 2019, you leave your job. You say, I'm going to do OpenFaaS and support this project 100% of your time. If there's no one contributing to the project from a financial perspective, how do you make money? I've always pitched open source because you're the first person that I've met that ran an open source project. And I always pitched them people like you, who work on it in their side time. But they're not the Knatives of the world, the Istios; they have full time developers, sponsored by Google and Microsoft, etc. If you're not sponsored, how do you make money off of open source? >> Well, this is the million dollar question, really. How do you make money from something that is completely free? Where all of the value has already been captured by a company and they have no incentive to support you, build a relationship, or send you money in any way. >> And no one has really figured it out. Arguably Red Hat is the only one that's pulled it off. >> Well, people do refer to Red Hat and they say the Red Hat model, but I think that was a one off. And we can kind of agree about that in the business. However, I eventually accepted the fact that companies don't pay for something they can get for free. It took me a very long time to get around that because, you know, as open source enthusiasts we built a huge community around this project; almost 400 people have contributed code to it over the years. And we have had full-time people working on it on and off. And there's some people who really support it in their working hours or at home on the weekends. But no, I had to really think, right, what am I going to offer? And to begin with it would be support, but existing customers weren't interested. They're not really customers because they're consuming it as a project. So I needed to create a product, because we understand we buy products. Initially I just couldn't find the right customers. And so many times I thought about giving up, leaving it behind; my family would've supported me with that as well. And they would've known exactly why, even you would've done. And so what I started to do was offer my insights as a community leader, as a maintainer, to companies like we've got here. So Casting, one of my customers, CSIG, one of my customers, Rancher, DigitalOcean, a lot of the vendors you see here.
And I was able to get a significant amount of money by lending my expertise and writing content, and that gave me enough buffer to give the adopters time to realize that maybe they do need support and go a bit further into production. And over the last 12 months, we've been signing six figure deals with existing users and new users alike in the enterprise. >> For support? >> For support, for licensing of new features that are closed source, and for consulting. >> So you have proprietary extensions, also, that are sort of enterprise class. Right, and then also the consulting business, the support business, which is a proven business model that has worked. >> It is a proven business model. What is not a proven business model is: if you work hard enough, you deserve to be rewarded. >> Mmh. >> You have to go with the system. Winter comes after autumn. Summer comes after spring, and there's no point saying, why is it like that? That's the way it is. And if you go with it, you can benefit from it. And that's the realization I had, as much as I didn't want to do it. >> So you know this community well, and you know there's other project founders out here thinking about making the leap. If you're giving advice to a project founder and they're thinking about making this leap, you know, quitting their job and becoming the next Alex. And I think there's a misperception out there. >> Yes. >> You're, you're well known. There's a difference between being well known and well compensated. >> Yeah. >> What advice would you give those founders >> To be. >> Before they make the leap to say, you know what, I'm going to do my project full time. I'm going to lean on the generosity of the community. So there are some generous people in the community. You've done some really interesting things for individual contributions, etc., but that's not enough. >> So look, I mean, really you have to go back to the MBA mindset. What problem are you trying to solve? Who is your target customer? What do they care about? What do they eat and drink? When do they go to sleep? You really need to know who this is for. And then customize a journey for them so that they can come to you. And you need some way initially of funneling those people in and qualifying them, because not everybody that comes to you is a customer; a student or somebody doing a PhD is not your customer. >> Right, right. >> You need to understand sales. You need to understand a lot about business, but you can work it out on your way. You know, I'm testament to that. And once you have people, you then need something to sell them that might meet their needs, and be prepared to tell them that what you've got isn't right for them, 'cause sometimes that's the one thing that will build integrity. >> That's very hard for community leaders. It's very hard for community leaders to say no. >> Absolutely, so how do you help them over that hump? I think of what you've done. >> So you have to set some boundaries, because as an open source developer and maintainer you want to help everybody that's there regardless. And I think for me it was taking some of the open source features that companies used, not releasing them anymore in the open source edition, putting them into the paid edition, developing new features based on what feedback we'd had, offering support as well, but also understanding what is support. What do you need to offer? You may think you need a one hour SLA for a fix; it probably turns out that you could sell a three day response time or one day response time.
And some people would want that and see value in it. But you're not going to know until you talk to your customers. >> I want to ask you, because this has been a particular interest of mine. It seems like managed services have been kind of the lifeline for pure open source companies, enabling these companies to maintain their open source roots but still have a revenue stream of delivering as a service. Is that a business model option you've looked at? >> There are perhaps three business models that are prevalent. One is OpenCore, which is roughly what I'm following. >> Right. >> Then there is SaaS, which is what you understand, and then there's support on pure open source. So that's more like what Rancher does. Now if you think of a company like Buoyant that produces Linkerd, they do a bit of both. So they don't have any closed source pieces yet, but they can host it for you, or you can host it and they'll support you. And so I think if there's a way that you can put your product into a SaaS that makes it easier for them to run, then, you know, go for it. However, with OpenFaaS, remember what the core problem is that we're solving: portability. So why lock into my cloud? >> Take that option off the table, go ahead. >> It's been a long journey and I've been a fan since your start. I've seen the bumps and bruises and the scars get made. If you're an open source leader and you're thinking about becoming as famous as Alex, hey, you can do that, you can put in all the work and become famous, but if you want to make a living, solve a problem, understand what people are willing to pay for that problem, and go out and sell it. Valuable lessons here on theCUBE. From Valencia, Spain, I'm Keith Townsend along with Paul Gillon, and you're watching theCUBE, the leader in high-tech coverage. (Upbeat music)
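For readers who want to see what the "here's my code, run it for me" experience Alex describes actually looks like, the sketch below shows the shape of a minimal OpenFaaS function. It is illustrative only: the function name, the scaffolding command, and the gateway URL mentioned afterward are editorial assumptions in the style of the python3 template, not details from the interview.

```python
# handler.py - a minimal OpenFaaS function in the style of the python3 template.
# This is a hedged sketch: the function is assumed to have been scaffolded with
# something like `faas-cli new shout --lang python3`, and the name "shout" is a
# placeholder, not taken from the conversation above.

def handle(req):
    """Receive the raw request body as a string and return the response body.

    Templates of this style call handle() once per invocation; the platform
    takes care of packaging, serving, and scaling the function.
    """
    text = (req or "").strip()
    if not text:
        return "Send some text in the request body.\n"
    return text.upper() + "\n"
```

Deploying it is typically a matter of running `faas-cli up` against the generated stack file, after which the function can be invoked through the gateway (by default something like http://127.0.0.1:8080/function/shout); the exact commands and ports depend on how the gateway is exposed in your cluster.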

Published Date : May 19 2022


Christopher Voss, Microsoft | Kubecon + Cloudnativecon Europe 2022


 

>> theCUBE presents KubeCon and CloudNativeCon, Europe, 2022. Brought to you by Red Hat, the cloud-native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain in KubeCon, CloudNativeCon, Europe, 2022. I'm Keith Townsend with my cohosts, Enrico Signoretti, Senior IT Analyst at GigaOm. >> Exactly. >> 7,500 people I'm told, Enrico. What's the flavor of the show so far? >> It's a fantastic mood, I mean, I found a lot of people wanting to track, talk about what they're doing with Kubernetes, sharing their you know, stories, some war stories that bit tough. And you know, this is where you learn actually. Because we had a lot of Zoom calls, webinar and stuff. But it is when you talk a video, "Oh, I did it this way, and it didn't work out very well." So, and, you start a conversation like this that is really different from learning from Zoom, when, you know, everybody talks about things that work it well, they did it right. No, it's here that you learn from other experiences. >> So we're talking to amazing people the whole week, talking about those experiences here on theCUBE. Fresh on the theCUBE for the first time, Chris Voss, senior software engineer at Microsoft Xbox. Chris, welcome to the theCUBE. >> Thank you so much for having me. >> So first off, give us a high level picture of the environment that you're running at Microsoft. >> Yeah. So, you know, we've got 20 well probably close to 30 clusters at this point around the globe, you know 700 to 1,000 pods per cluster, roughly. So about 22,000 pods total. So yeah, it's pretty, pretty sizable footprint and yeah. So we've been running on Kubernetes since 2018 and well actually might be 2017, but anyways, so yeah, that's kind of our footprint. Yeah. >> So all of that, let's talk about the basics which is security across multiple I'm assuming containers, microservices, etcetera. Why did you and the team settle on Linkerd? >> Yeah, so previously we had our own kind of solution for managing TLS certs and things like that. And we found it to be pretty painful, pretty quickly. And so we knew, you know we wanted something that was a little bit more abstracted away from the developers and things like that, that allowed us to move quickly. And so we began investigating, you know, solutions to that. And a few of our colleagues went to Kubecon in San Diego in 2019, Cloudnativecon as well. And basically they just, you know, sponged it all up. And actually funny enough, my old manager was one of the people who was there and he went to the Linkerd booth and they had a thing going that was like, "Hey, get set up with MTLS in five minutes." And he was like, "This is something we want to do, why not check this out?" And he was able to do it. And so that put it on our radar. And so yeah, we investigated several others and Linkerd just perfectly fit exactly what we needed. >> So, in general we are talking about, you know, security at scale. So how you manage security scale and also flexibility. Right? So, but you know, what is the... You told us about the five minutes to start using there but you know, again, we are talking about war stories. We're talking about, you know, all these. So what kind of challenges you found at the beginning when you started adopting this technology? >> So the biggest ones were around getting up and running with like a new service, especially in the beginning, right, we were, you know, adding a new service almost every day. It felt like. 
And so, you know, basically it took someone going through a whole bunch of different repos, getting approvals from everyone to get the certs minted, all that fun stuff getting them put into the right environments and in the right clusters, to make sure that, you know, everybody is talking appropriately. And just the amount of work that that took alone was just a huge headache and a huge barrier to entry for us to, quickly move up the number of services we have. >> So, I'm trying to wrap my head around the scale of the challenge. When I think about certification or certificate management, I have to do it on a small scale. And every now and again, when a certificate expires it is just a troubleshooting pain. >> Yes. >> So as I think about that, it costs it's not just certificates across 22,000 pods, or it's certificates across 22,000 pods in multiple applications. How were you doing that before Linkerd? Like, what was the... And what were the pain points? Like what happens when a certificate either fails? Or expired up? Not updated? >> So, I mean, to be completely honest, the biggest thing is we're just unable to make the calls, you know, out or in, based on yeah, what is failing basically. But, you know, we saw essentially an uptick in failures around a certain service and pretty quickly, pretty quickly, we got used to the fact that it was like, oh, it's probably a cert expiration issue. And so we tried, you know, a few things in order to make that a little bit more automated and things like that. But we never came to a solution that like didn't require every engineer on the team to know essentially quite a bit about this, just to get into it, which was a huge issue. >> So talk about day two, after you've deployed Linkerd, how did this alleviate software engineers? And what was like the benefits of now having this automated way of managing certs? >> So the biggest thing is like, there is no touch from developers, everyone on our team... Well, I mean, there are a lot of people who are familiar with security and certs and all of that stuff. But no one has to know it. Like it's not a requirement. Like for instance, I knew nothing about it when I joined the team. And even when I was setting up our newer clusters, I knew very little about it. And I was still able to really quickly set up Linkerd, which was really nice. And it's been, you know, essentially we've been able to just kind of set it, and not think about it too much. Obviously, you know, there're parts of it that you have to think about, we monitor it and all that fun stuff, but yeah, it's been pretty painless almost day one. It took a long time to trust it for developers. You know, anytime there was a failure, it's like, "Oh, could this be Linkerd?" you know. But after a while, like now we don't have that immediate assumption because people have built up that trust, but. >> Also you have this massive infrastructure I mean, 30 clusters. So, I guess, that it's quite different to manage a single cluster in 30. So what are the, you know, consideration that you have to do to install this software on, you know, 30 different cluster, manage different, you know versions probably, et cetera, et cetera, et cetera. >> So, I mean, you know, as far as like... I guess, just to clarify, are you asking specifically with Linkerd? Or are you just asking in more in general? >> Well, I mean, you can take that the question in two ways. >> Okay. >> Sure, yeah, so Linkerd in particular but the 30 cluster also quite interesting. >> Yeah. 
So, I mean, you know, more generally, you know, how we manage our clusters and things like that. We have, you know, a CLI tool that we use in order to, like, change context very quickly, and switch and communicate with whatever cluster we're trying to connect to, and, you know, are we debugging or getting logs, whatever. And then, you know, with Linkerd it's nice because, again, you know, we aren't having to worry about, like, oh, how is this cert being inserted in the right node? Or not the right node, but in the right cluster or things like that. Whereas with Linkerd, we don't really have that concern. When we spin up our clusters, essentially we get the root certificate and everything like that packaged up, passed along to Linkerd on installation. And then essentially, there's not much we have to do after that. >> So talk to me about your upcoming session here at Kubecon. What are the high level talking points? Like, what will attendees learn? >> Yeah. So it's a journey. Those are the sorts of talks that I find useful. Having not been, you know, I'm not a deep Kubernetes expert from, you know, decades or whatever of experience, but-- >> I think nobody is. >> (indistinct). >> True, yes. >> That's also true. >> That's another story. >> That's a job posting, decades of requirements for-- >> Of course, yeah. But so, you know, it's a journey. It's really just like, hey, what made us decide on a service mesh in the first place? What made us choose Linkerd? And then what are the ways in which, you know, we use Linkerd? So what are those, you know, we use some of the extra plugins and things like that. And then finally, a little bit more about what we're going to do in the future. >> Let's talk about not just necessarily the future as in two or three days from now, or two or three years from now. Well, the future after you immediately solve the low level problems with Linkerd, what were some of the surprises? Because Linkerd and service mesh in general have side benefits. Do you experience any of those side benefits as well? >> Yeah, it's funny, you know, writing the blog post, you know, I hadn't really looked at a lot of the data in years on, you know, when we did our investigations and things like that. And we had seen that we, like, had very low latency and low CPU utilization and things like that. And looking at some of that, I found that we were actually saving time off of requests. And I couldn't really think of why that was, and I was talking with someone else and, the biggest, unfortunately all that data's gone now, like the source data. So I can't go back and verify this, but it makes sense, you know, there's the availability zone routing that Linkerd supports. And so I think that's actually doing it where, you know, essentially, if a node is closer to another node, it's essentially, you know, routing to those ones. So when one service is talking to another service and maybe they're on the same node, you know, it short circuits that and allows us to gain some time there. It's not huge, but it adds up after, you know, 10, 20 calls down the line. >> Right. In general, so you are saying that it's smooth operations, at the very least, you know, simplifying your life. >> And again, we didn't have to really do anything for that. It handled that for us. >> It was there? >> Yep. Yeah, exactly. >> So we know one thing: when I do it on my laptop it works fine. When I do it across 22,000 pods, that's a different experience. What were some of the lessons learned coming out of Kubecon 2018 in San Diego? I was there.
I wish I would've ran into the Microsoft folks, but what were some of the hard lessons learned scaling Linkerd across the 22,000 nodes? >> So, you know, the first one and this seems pretty obvious, but was just not something I knew about was the high availability mode of Linkerd. So obviously makes sense. You would want that in, you know a large scale environment. So like, that's one of the big lessons that like, we didn't ride away. No. Like one of the mistakes we made in one of our pre-production clusters was not turning that on. And we were kind of surprised. We were like, whoa, like all of these pods are spinning up but they're having issues, like actually getting injected and things like that. And we found, oh, okay. Yeah, you need to actually give it some more resources. But it's still very lightweight considering, you know, they have high availability mode but it's just a few instances still. >> So from, even from, you know, binary perspective and running Linkerd how much overhead is it? >> That is a great question. So I don't remember off the top of my head, the numbers but it's very lightweight. We evaluated a few different service missions and it was the lightest weight that we encountered at that point. >> And then from a resource perspective, is it a team of Linkerd people? Is it a couple of people? Like how? >> To be completely honest for a long time, it was one person Abraham, who actually is the person who proposed this talk. He couldn't make it to Valencia, but he essentially did probably 95% of the work to get into production. And then this was before, we even had a team dedicated to our infrastructure. And so we have, now we have a team dedicated, we're all kind of Linkerd folks, if not Linkerd experts, we at least can troubleshoot basically. And things like that. So it's, I think a group of six people on our team and then, you know various people who've had experience with it on other teams. >> But others, dedicated just to that. >> No one is dedicated just to it. No, it's pretty like pretty light touch once it's up and running. It took a very long time for us to really understand it and to, you know, get like not getting started, but like getting to where we really felt comfortable letting it go in production. But once it was there, like, it is very, very light touch. >> Well, I really appreciate you stopping by Chris. It's been an amazing conversation to hear how Microsoft is using a open source project. >> Exactly. >> At scale, it's just a few years ago when you would've heard the concept of Microsoft and open source together and like OS, just, you know-- >> They have changed a lot in the last few years. Now, there are huge contributors. And, you know, if you go to Azure, it's full of open source stuff, everywhere so. >> Yeah. >> Wow. The Kubecon 2022, how the world has changed in so many ways. From Valencia Spain, I'm Keith Townsend, along with Enrico Signoretti. You're watching theCUBE, the leader in high tech coverage. (upbeat music)
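The "no touch from developers" flow Chris describes, where the root certificate is supplied to the control plane at install time and the sidecar proxy shows up automatically, usually comes down to a single annotation that Linkerd's proxy injector watches for. The sketch below is an illustrative example, not the Xbox team's tooling: the namespace name and the assumption that the local kubeconfig is allowed to patch namespaces are placeholders.

```python
# enable_injection.py - hedged sketch of opting a namespace into Linkerd's
# automatic proxy injection. The namespace "storefront" and the use of the
# current kubeconfig context are assumptions for illustration only.
from kubernetes import client, config

def enable_linkerd_injection(namespace: str) -> None:
    config.load_kube_config()  # use the current kubeconfig context
    core = client.CoreV1Api()
    # Linkerd's proxy injector watches for this annotation and adds the
    # linkerd-proxy sidecar to pods created in the namespace.
    patch = {"metadata": {"annotations": {"linkerd.io/inject": "enabled"}}}
    core.patch_namespace(namespace, patch)
    print(f"Linkerd proxy injection enabled for namespace {namespace!r}")

if __name__ == "__main__":
    enable_linkerd_injection("storefront")
```

Existing workloads pick up the proxy on their next rollout; because the trust root is handed to the control plane once, at install time, application teams never handle certificates directly, which is the "set it and not think about it too much" experience described above.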

Published Date : May 19 2022


Day 1 Wrap | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: theCUBE presents KubeCon and Cloud NativeCon Europe, 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, and coverage of KubeCon, Cloud NativeCon Europe, 2022. I'm Keith Townsend, your host of theCUBE, along with Paul Gillum, Senior Editor, Enterprise Architecture for SiliconANGLE, and Enrico, Senior IT Analyst for GigaOm. This has been a full day, 7,500 attendees. I might have seen them run out of food; this is just unexpected. I mean, it escalated from what I understand, it went from capping it at 4,000, then 5,000, and it ended up finally at 7,500 people. I'm super excited for... Today's been a great day of coverage. I'm super excited for tomorrow's coverage from theCUBE, but first off, we'll let the new person on stage take the first question of the wrap up of the day of coverage. Enrico, what's different about this year versus other KubeCons or Cloud Native conversations? >> I think in general, it's the maturity. So we talk a lot about day two operations, observability, monitoring, going deeper and deeper in the security aspects of the application. So this means that for many enterprises, Kubernetes is becoming really critical. They want to get more control of it. And of course you have the discussion around FinOps, around cost control, because we are deploying Kubernetes everywhere. And if you don't have everything optimized, controlled, monitored, costs go through the roof. And think about deploying in the Public Cloud: if your application is not optimized, you're paying more. But also on-premises, if you are not optimized, you don't have any clear idea what is going to happen. So capacity planning becomes the nightmare that we know from the past. So there is a lot going on around these topics, really exciting actually, less infrastructure, more application. That is what Kubernetes is about here. >> Paul, help me separate some of the signal from the noise. There is a lot going on, a lot of overlap. What are some of the big themes or takeaways for day one that Enterprise Architects, Executives, need to take home and really chew on? >> Well, Kubernetes was a turning point. Docker was introduced nine years ago, and for the first three or four years it was an interesting technology that was not very widely adopted. Kubernetes came along and gave developers a reason to use containers. What strikes me about this conference is that this is a developer event; ordinarily you go to conferences and it's geared toward IT Managers, towards CIOs. This is very much geared toward developers. When you have the hearts and minds of developers, the rest of the industry is sort of pulled along with it. So this is ground zero for the hottest area of the entire computing industry right now, which is building distributed services, microservices based, Cloud Native applications. And it's the developers who are leading the way. I think that's a significant shift. I don't see the Managers here, the CIOs here. These are the people who are pulling this industry into the next generation. >> One of the interesting things that I've seen, when we've always said Kubernetes is for the developers, but we talked with an icon from MoneyGram, who's an end user, he's an enterprise architect, and he brought Kubernetes to his front end developers, and they rejected it. They said, what is this? I just want to develop code.
So when we say Kubernetes is for developers or the developers are here, how do we reconcile that mismatch of experience? We have Enterprise Architect here. I hear constantly that the Kubernetes is for developers, but is it a certain kind of developer that Kubernetes is for? >> Well, yes and no. I mean, so the paradigm is changing. Okay. So, and maybe a few years back, it was tough to understand how make your application different. So microservices, everything was new for everybody, but actually, everything has changed to a point and now the developer understands, is neural. So, going through the application, APIs, automation, because the complexity of this application is huge, and you have, 724 kind of development sort of deployment. So you have to stay always on, et cetera, et cetera. And actually, to the point of developers bringing this new generation of decision makers in there. So they are actually decision, they are adopting technology. Maybe it's a sort of shadow IT at the very beginning. So they're adopting it, they're using it. And they're starting to use a lot of open source stuff. And then somebody upper in the stack, the Executive, says what are... They discover that the technology is already in place is a critical component, and then it's transformed in something enterprise, meaning paying enterprise services on top of it to be sure support contract and so on. So it's a real journey. And these guys are the real decision makers, or they are at the base of the decision making process, at least >> Cloud Native is something we're going to learn to take for granted. When you remember back, remember the Fail Whale in the early days of Twitter, when periodically the service would just crash from traffic, or Amazon went through the same thing. Facebook went through the same thing. We don't see that anymore because we are now learning to take Cloud Native for granted. We assume applications are going to be available. They're going to be performant. They're going to scale. They're going to handle anything we throw at them. That is Cloud Native at work. And I think we forget sometimes how refreshing it is to have an internet that really works for you. >> Yeah, I think we're much earlier in the journey. We had Microsoft on, the Xbox team talked about 22,000 pods running Linkerd some of the initial problems and pain points around those challenges. Much of my hallway track conversation has been centered around as we talk about the decision makers, the platform teams. And this is what I'm getting excited to talk about in tomorrow's coverage. Who's on the ground doing this stuff. Is it developers as we see or hear or told? Or is it what we're seeing from the Microsoft example, the MoneyGram example, where central IT is getting it. And not only are they getting it, they're enabling developers to simply write code, build it, and Kubernetes is invisible. It seems like that's become the Holy Grail to make Kubernetes invisible and Cloud Native invisible, and the experience is much closer to Cloud. >> So I think that, it's an interesting, I mean, I had a lot of conversation in the past year is that it's not that the original traditional IT operations are disappearing. So it's just that traditional IT operation are giving resources to these new developers. Okay, so it's a sort of walled garden, you don't see the wall, but it's a walled garden. So they are giving you resources and you use these resources like an internal Cloud. 
So a few years back, we were talking about private Cloud, the private Cloud as let's say the same identical paradigm of the Public Cloud is not possible, because there are no infinite resources or well, whatever we think are infinite resources. So what you're doing today is giving these developers enough resources to think that they are unlimited and they can do automatic operationing and do all these kind of things. So they don't think about infrastructure at all, but actually it's there. So IT operation are still there providing resources to let developers be more free and agile and everything. So we are still in a, I think an interesting time for all of it. >> Kubernetes and Cloud Native in general, I think are blurring the lines, traditional lines development and operations always were separate entities. Obviously with DevOps, those two are emerging. But now we're moving when you add in shift left testing, shift right testing, DevSecOps, you see the developers become much more involved in the infrastructure and they want to be involved in infrastructure because that's what makes their applications perform. So this is going to cause, I think IT organizations to have to do some rethinking about what those traditional lines are, maybe break down those walls and have these teams work much closer together. And that should be a good thing because the people who are developing applications should also have intimate knowledge of the infrastructure they're going to run on. >> So Paul, another recurring theme that we've heard here is the impact of funding on resources. What have your discussions been around founders and creators when it comes to sourcing talent and the impact of the markets on just their day to day? >> Well, the sourcing talent has been a huge issue for the last year, of course, really, ever since the pandemic started. Interestingly, one of our guests earlier today said that with the meltdown in the tech stock market, actually talent has become more available, because people who were tied to their companies because of their stock options are now seeing those options are underwater and suddenly they're not as loyal to the companies they joined. So that's certainly for the startups, there are many small startups here, they're seeing a bit of a windfall now from the tech stock bust. Nevertheless, skills are a long term problem. The US educational system is turning out about 10% of the skilled people that the industry needs every year. And no one I know, sees an end to that issue anytime soon. >> So Enrico, last question to you. Let's talk about what that means to the practitioner. There's a lot of opportunity out there. 200 plus sponsors I hear, I think is worth the projects is 200 plus, where are the big opportunities as a practitioner, as I'm thinking about the next thing that I'm going to learn to help me survive the next 10 or 15 years of my career? Where you think the focus should be? Should it be that low level Cloud builder? Or should it be at those levels of extraction that we're seeing and reading about? >> I think that it's a good question. The answer is not that easy. I mean, being a developer today, for sure, grants you a salary at the end of the month. I mean, there is high demand, but actually there are a lot of other technical figures in the data center, in the Cloud, that could really find easily a job today. So, developers is the first in my mind also because they are more, they can serve multiple roles. 
It means you can be a developer, but actually you can be also with the new roles that we have, especially now with the DevOps, you can be somebody that supports operation because you know automation, you know a few other things. So you can be a sysadmin of the next generation even if you are a developer, even if when you start as a developer. >> KubeCon 2022, is exciting. I don't care if you're a developer, practitioner, a investor, IT decision maker, CIO, CXO, there's so much to learn and absorb here and we're going to be covering it for the next two days. Me and Paul will be shoulder to shoulder, I'm not going to say you're going to get sick of this because it's just, it's all great information, we'll help sort all of this. From Valencia, Spain. I'm Keith Townsend, along with my host Enrico Signoretti, Paul Gillum, and you're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : May 19 2022


William Morgan, Buoyant | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: theCUBE presents Kubecon and Cloudnativecon Europe, 2022. Brought to you by Red Hat, the cloud native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain in Kubecon, Cloudnativecon Europe 2022. I'm Keith Townsend and alongside Enrico senior IT analyst for (indistinct). Welcome back to the show Enrico. >> Thank you again for having me here. >> First impressions of Kubecon. >> Well, great show. As I mentioned before, I think that we are really in this very positive mood of talking with each other and people wanting to see the projects, people that build the projects and it's amazing. A lot of interesting conversation in the show floor and in the various sessions, very positive mood. >> So this is going to be a fun one, we have some amazing builders on the show this week and none other than William Morgan, CEO of Buoyant. What's your role in the Linkerd project? >> So I was one of the original creators of Linkerd, but at this point I'm just the beautiful face of the project. (all laughing) >> Speaking of beautiful face of the project Linkerd just graduated from as a CNCF project. >> Yeah, that's right so last year we became the first service mesh to graduate in the CNCF, very proud of that and that's thanks largely to the incredible community around Linkerd that is just excited about the project and wants to talk about it and wants to be involved. >> So let's talk about the significance of that. Linkerd not the only service mesh project out there. Talk to me about the level effort to get it to the point that it's graduated. You don't see too many projects graduating CNCF in general so let's talk about kind of the work needed to get Linkerd to this point. >> Yeah so the bar is high and it's mostly a measure, not necessarily of like the project being technically good or bad or anything but it's really a measure of maturity of the community around it so is it being adopted by organizations that are really relying on it in a critical way? Is it being adopted across industries? Is it having kind of a significant impact on the Cloudnative community? And so for us there was the work involved in that was really not any different from the work involved in kind of maintaining Linkerd and growing the community in the first place, which is you try and make it really useful. You try and make it really easy to get started with, you try and be supportive and to have a friendly and welcoming community. And if you do those things and you kind of naturally get yourself to the point where it's a really strong community full of people who are excited about it. >> So from the point of view of users adopting this technology, so we are talking about everybody or do you see really large organization, large Kubernetes clusters infrastructure adopting it? >> Yeah, so the answer to that is changed a little bit over time but at this point we see Linkerd adoption across industries, across verticals, and we see it from very small companies to very large ones so one of the talks I'm really excited about at this conference is from the folks at Xbox cloud gaming who are going to talk about how they deployed Linkerd across 22,000 pods around the world to serve basically on demand video games. Never a use case I would ever have imagined for Linkerd and at the previous Kubecon virtually Kubecon EU, we had a whole keynote about how Linkerd was used to combat COVID 19. So all sorts of uses and it really doesn't, whether it's a small cluster or large cluster it's equally applicable. 
>> Wow, so as we talk about Linkerd service mesh, we obviously are going to talk about security, application control, etcetera. But in this climate software supply chain is critical, and you think about the open source software supply chain. Talk to us about the recent security audit of Linkerd. >> Yeah, so one of the things that we do as part of a CNCF project, and also as part of, I think, our relationship with our community, is we have regular security audits where we engage security professionals who are very thorough and dig into all the details. Of course the source code is all out there, so anyone can read through the code, but they'll build threat model analysis and things like that. And then we take their report and we publish it. We say, "Hey look, here's the situation." So we have earlier reports online, and this newest one was done by a company called Trail of Bits, and they built a whole threat model and looked through all the different ways that Linkerd could go wrong. And they always find issues of course, it would be very scary, I think, to get a report that was like, no, we didn't find- >> Yeah, everything's clean. >> Yeah, everything's fine, should be okay, I don't know. But they did not find anything critical. They found some issues that we rapidly addressed, and then everything gets written up in the report and then we publish it, as part of an open source artifact. >> How do you, let's say, do they give you a heads up or something? So if something happens, so that you can act on the code before somebody else discovers the- >> Yeah, they'll give you a preview of what they found, and then often it's not like you're going before the judge and the judge makes a judgment and then, like, off to jail. It's a dialogue, because they don't necessarily understand the project. Well, they definitely don't understand it as well as you do. So you are helping them understand which parts are interesting to look at from the security perspective, which parts are not that interesting. They do their own investigation of course, but it's a dialogue the entire time. So you do have an opportunity to say, "Oh, you told me that was a minor issue. "I actually think that's larger or vice versa." You think that's a big problem, actually we thought about that and it's not a big problem because of whatever. So it's a collaborative process. >> So Linkerd has been around, like, when I first learned about service mesh, Linkerd was the project that I learned about. It's been there for a long time, you just mentioned 22,000 clusters. That's just mind boggling- >> Pods, 22,000 pods. >> That's pods. >> Clusters would be great. >> Yeah, clusters would be great too, but it's still 22,000 pods. >> It's a big deployment. >> That's a big deployment of Linkerd, but all the way down to the smallest set of pods as well. What are some of the recent project updates, some of the learnings you brought back from the community and updated the project as a result? >> Yeah, so a big one for us, on the topic of security, Linkerd, a big driver of Linkerd adoption is security, and less on the supply chain side and more on the traffic, like, live traffic security. So things like mutual TLS, so you can encrypt the communication between pods and make sure it's authenticated.
One of the recent feature additions is authorization policy, so you can lock down connections between services, and you can say Service A is only allowed to talk to Service B. And I want to do that not based on network identity, not based on, like, IP addresses, 'cause those are spoofable and we've, kind of, as an industry moved, we've gotten a little more advanced than that, but actually based on the workload identity as captured by the mutual TLS certificate exchange. So we give you the ability now to restrict the types of communication that are allowed to happen on your cluster. >> So, okay, this is what has happened. What about the future? Can you give us some suggestion of what is going to happen in the medium and long term? >> I think we're done, you know, we graduated, so we're just going to stop. (all laughing) What else is there to do? There's no grad school. No, so for us, there's a clear roadmap ahead, continuing down the security realm, for sure. We've given you kind of the very first building block, which is at the service level, but coming up in the 2.12 release we'll have route based policy as well, where you can say this service is only allowed to call these three routes on this endpoint. And we'll be working later to do things like mesh expansion, so we can run the data plane outside of Kubernetes; the control plane will stay in Kubernetes, but the data plane, you'll be able to run that on VMs and things like that. And then of course, we're also starting to look at things like, I like to make fun of (indistinct) a lot, but we are actually starting to look at (indistinct) in the ways that might actually be useful for Linkerd users. >> So we talk a lot about the flexibility of a project like Linkerd, you can do amazing things with it from a security perspective, but we're talking still to a DevOps type crowd of developers who are spread thin across their skillset. How do you help balance the need for the flexibility, which usually comes with more nerd knobs, and servicing a crowd that wants even higher levels of abstraction and simplicity? >> Yeah, that's a great question, and this is what makes Linkerd so unique in the service mesh space. We have a laser focus on simplicity, and especially on operational simplicity. So our audience, we can make it easy to install Linkerd, but what we really care about is when you're running it and you're on call for it and it's sitting in this critical, vulnerable part of your infrastructure, do you feel confident in that? Do you feel like you understand it? Do you feel like you can observe it? Do you feel like you can predict what it's going to do? And so every aspect of Linkerd is designed to be as operationally simple as possible. So when we deliver features, that's always our primary consideration. We have to reject the urge, we have an urge as engineers to, like, want to build everything, the ultimate platform to solve all problems, and we have to really be disciplined and say we're not going to do that. We're going to look at solving the minimum possible problem with a minimum set of features, because we need to keep things simple, and then we need to look at the human aspect to that. And I think that's been a part of Linkerd's success.
And then on the Buoyant side, of course, I don't just work on Linkerd, I also work on Buoyant, which helps organizations adopt Linkerd, and increasingly large organizations that are not service mesh experts, don't want to be service mesh experts, they want to spend their time and energy developing their business, right? And building the business logic that powers their company. So for them we have actually recently introduced fully managed Linkerd, where we can take on, even though Linkerd has to run on your cluster, the sidecar proxies have to be alongside your application, we can actually take on the operational burden of upgrades and trust anchor rotation, and installation. And you could effectively treat it as a utility, and have a hosted-like experience, even though the actual bits, at least most of them, not all of them, most of 'em have to live on your cluster. >> I love the focus of most CNCF projects, it's peanut butter or jelly, not peanut butter trying to become jelly. What's the peanut butter to Linkerd's jelly? Like, where does Linkerd stop? And what are some of the things that customers should really consider when looking at service mesh? >> Yeah, now that's a great way of looking at it, and I actually think that philosophy comes from Kubernetes. I think Kubernetes itself, one of the reasons it was so successful is because it had some clearly delineated boundaries. It said, "This is what we're going to do. "And this is what we're not going to do. "So we're going to do layer three, four networking, "but we're going to stop there, "we're not going to do anything with layer seven." And that allowed the service mesh. So I guess if I were to go down that path, the bread of the sandwich is Kubernetes, and then Linkerd is the peanut butter, I guess. And then the jelly, so I think the jelly is every other aspect of building a platform. So the audience for Linkerd most of the time is platform owners. They're building a platform, an internal platform, for their developers to write code, and so, as part of that, of course you've got Kubernetes, you've got Linkerd, but you've also got a CICD system. You've also got a code repository, that's GitLab or GitHub or whatever, you've got other kinds of tools that are enforcing various other constraints. All of that is the jelly in the, this analogy is getting complicated now, in the platform sandwich that you're serving. >> So talk to us about trends in service mesh, as we think of the macro. >> Yeah, so it's been an interesting space because, we were talking a little bit about this before the show, but there was so much buzz, and then what we saw was basically it took two years for that buzz to become actual adoption. And now a lot of the buzz is off on other exciting things, and the people who remain in the Linkerd space are very focused on, "Oh, I actually have a real problem "that I need to solve "and I need to solve it now." So that's been great. So in terms of broader trends, I think one thing we've seen for sure is the service mesh space is kind of notorious for complexity, and a lot of what we've been doing on the Linkerd side has been trying to reverse that idea, because it doesn't actually have to be complex. There's interesting stuff you can do, especially when you get into the way we handle the sidecar model. It's actually really, it's a wonderful model operationally. It's really, it feels weird at first, and then you're like, "Oh, actually this makes my operations a lot easier."
So a lot of the trends that I see, at least for Linkerd, is doubling down on the sidecar model, trying to make sidecars as small and as thin as possible, and trying to make them kind of transparent to the rest of the application. >> Well, William Morgan, one of the coolest Twitter handles I've seen, @wm on Twitter, that's actually a really cool Twitter handle. >> William: Thank you. >> CEO of Buoyant. Thank you for joining theCube again, Cube alum. From Valencia, Spain, I'm Keith Townsend, along with Enrico (indistinct), and you're watching theCube, the leader in high tech coverage. (upbeat music)

Published Date : May 19 2022


Vijoy Pandey, Cisco | kubecon + Cloudnativecon europe 2020


 

(upbeat music) >> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and the ecosystem partners. >> Hi, and welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2020 in Europe, of course, the virtual edition. I'm Stu Miniman, and happy to welcome you back to the program. One of the keynote speakers is also a board member of the CNCF, Vijoy Pandey, who is the Vice President and Chief Technology Officer for Cloud at Cisco. Vijoy, nice to see you, thanks so much for joining us. >> Hi there, Stu, so nice to see you again. It's a strange setting to be in, but as long as we are both healthy, everything's good. >> Yeah, we still get to be together a little bit even though we're apart. We love the engagement and interaction that we normally get with the community, but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Networking, Please Evolve". I've been watching that journey. But why don't we start at first, you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so let's start there. So I've taken on a new responsibility. It's VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads that and she reports to Chuck. The charter for this new team is to incubate the next bets for Cisco. And if you can imagine, it's natural for Cisco to start with bets which are closer to its core business, but the charter for this group is to move further and further out from Cisco's core business and take Cisco into newer markets, into newer products, and newer businesses. I'm running the engineering and research for that group. And again, the whole deal behind this is to be a little bit nimble, to be a little bit startupy in nature, where you bring ideas, you incubate them, you iterate pretty fast, and you throw out 80% of those, and concentrate on the 20% that makes sense to take forward as a venture. >> Interesting. So it reminds me a little bit, but different, I remember John Chambers, a number of years back, talking about various adjacencies, trying to grow those next multi-billion dollar businesses inside Cisco. In some ways, Vijoy, it reminds me a little bit of your previous company, very well known for driving innovation, giving engineers 20% of their time to work on things. Maybe give us a little bit of insight, what's kind of an example of a bet that you might be looking at in this space, bring us in tight a little bit. >> Well, that's actually a good question. And I think a little bit of that comparison holds; those conversations are taking place within Cisco as well, as to how far out from Cisco's core business we want to get when we're incubating these bets. And yes, my previous employer, I mean, Google X, actually goes pretty far out when it comes to incubations, the core business being primarily around ads, now Google Cloud as well. But you have things like Verily and Calico, and others, which are pretty far out from where Google started. And the way we're looking at these things within Cisco is, it's a new muscle for Cisco, so we want to prove ourselves first.
So the first few bets that we are betting upon are pretty close to Cisco's core, but still not fitting into a Cisco BU when it comes to go-to-market alignment or business alignment. So one of the first bets that we're taking into account is around API being the queen when it comes to the future of infrastructure, so to speak. So it's not just making our infrastructure consumable as infrastructure as code, but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I've talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, wherever they happen to be, of whatever type they happen to be. And one of the bets, actually two of the bets, that we're going after is around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking or app networking is one big bet that we're going after. Another big bet is around API security. And that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, into API security, which is earlier in the development pipeline, and higher up in the stack. So those are two big bets that we're going after. And as you can see, they're pretty close to Cisco's core business, but also are very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away, from Cisco's core. >> All right, Vijoy, why don't you give us the update about how Cisco is leveraging and participating in open source? >> So I think we've been pretty deeply involved in open source in our past. We've been deeply involved in Linux Foundation Networking. We've actually chartered FD.io as a project there and we still are. We've been involved in OpenStack, we have been supporters of OpenStack. We have a couple of products that are around the OpenStack offering. And as you all know, we've been involved in CNCF, right from the get-go, as a foundation member. We brought NSM in as a project. It's in Sandbox currently, but we're hoping to move it forward. But even beyond that, I mean, we are big users of open source in a lot of the SaaS offerings that we have from Cisco, and you would not know this if you're not inside of Cisco. But Webex, for example, is a big, big user of Linkerd, right from the get-go, from version 1.0, but we don't talk about it, which is sad. I think, for example, we use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we're pretty good, pretty deep users internally in our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source quite a bit moving forward as part of ET&I, Emerging Tech and Incubation, as well. So you will see more of us in open source forums, not just CNCF, but very recently, we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we actually are big believers in handling data in ethical and privacy-preserving ways.
So that's actually something that enticed us to join Linux Foundation for Public Health, and we will be working very closely with Dan and foundational companies that do not just bring open source but also evangelize and use what comes out of that forum. >> All right, well, Vijoy, I think it's time for us to dig into your keynote. We've we've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving on. And big focus you talked about was SD-WAN. Of course, anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those just critical foundational pieces of companies enabling multi-cloud. So help explain to our audience a little bit, what do you mean when you talk about things like Cloud Native SD-WAN and how that helps people really enable their applications in the modern environment? >> Yes, well, I mean, we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time where prior to SD-WAN existing, you had to stitch all of these MPLS labels and actually get your connectivity across to your enterprise or branch. And SD-WAN came in and changed the game there, but I think SD-WAN, as it exists today, is application-unaware. And that's one of the big things that I talk about in my keynote. Also, we've talked about how NSM, the other side of the spectrum, is how NSM or Network Service Mesh has actually helped us simplify operational complexities, simplify the ticketing and process health that any developer needs to go through just to get a multi-cloud, multi-cluster app up and running. So the keynote actually talked about bringing those two things together, where we've talked about using NSM in the past in chapter one and chapter two. And I know this is chapter three, and at some point, I would like to stop the chapters. I don't want this like an encyclopedia of "Networking, Please Evolve". But we are at chapter three, and we are talking about how you can take the same consumption models that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extending that notion of multi-cloud, multi-cluster wires within the components of our application, but extending it all the way down to the user in an enterprise. And as we saw an example, Gavin Belson is trying to give a keynote holographically and he's suffering from SD-WAN being application-unaware. And using this construct of a simple annotation, we can actually make SD-WAN cloud native, we can make it application-aware, and we can guarantee the SLOs, that Gavin is looking for, in terms of 3D video, in terms of file access for audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. Vijoy, the modernization journey that customers are on is a never-ending story. I understand the chapters need to end on the current volume that you're working on, but we'd love to get your viewpoint. You talk about things like service mesh, it's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the kind of real challenges but opportunities that they see in today's cloud native space? >> In general, service meshes are here to stay. 
In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening, where not only are we seeing different service meshes coming into the picture through various open source mechanisms. You've got Istio there, you've Linkerd, you've got various proprietary notions around control planes like App Mesh, from Amazon, there's Consul, which is an open source project, but not part of CNCF today. So there's a whole bunch of service meshes in terms of control planes coming in. Envoy is becoming a de facto sidecar data plane, whatever you would like to call it, de facto standard there, which is good for the community, I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment, and that's here to stay. In fact, we are seeing a whole bunch of things that we would use different tools for, like API gateways in the past, and those functions actually rolling into service meshes. And so I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together. And that's something that we are trying to focus in on as well. Because that's something that our customers are asking for. >> Yeah, actually, you connected for me something I wanted to get your viewpoint on, go dial back, 10, 15 years ago, and everybody would say, "Oh, I really want to have a single pane of glass "to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, not that long before this event, Google had their Google Cloud Show, talking about the partnership that you have with, Cisco with Google. They have Anthos, you look at Azure has Arc, VMware has Tanzu. Everybody's talking about really the kind of this multi-cluster management type of solution out there, and just want to get your viewpoint on this Vijoy as to how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, I think this is where I think we need to be careful as an industry, as a community and make things simpler for our customers. Because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all these things together? I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes as you saw in the SD-WAN announcement back in the Google discussion that we just, Google conference that you just referred. It's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser there that you saw earlier in the keynote where we are taking those constructs that we talked about in the Google conference and bringing it all the way to a cloud native environment in the keynote. But I think the bigger problem here is how do we manage this complexity of this pallet stacks? Whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments, how do we manage that complexity? And single pane of glass is overloaded as a term, because it brings in these notions of big monolithic panes of glass. And I think that's not the way we should be solving it. 
We should be solving it towards using API simplicity and API interoperability. And I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, the API economy should be able to help on these, the service architecture should allow things to be more flexible and give me the visibility I need without trying to have to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco, and congratulations on the new role. >> Thank you, Stu. It was a pleasure to be here. >> All right, and stay tuned for lots more coverage of theCUBE at KubeCon + CloudNativeCon. I'm Stu Miniman. Thanks for watching. (upbeat music)
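As a hedged illustration of the SMI point raised above (a unified, mesh-neutral consumption model), this is roughly what an SMI TrafficSplit looks like when created with the Kubernetes Python client. The group and version (split.smi-spec.io/v1alpha2) are recalled from the SMI spec and the service names are invented; the version a given mesh accepts may differ, so check its documentation before relying on this.

```python
from kubernetes import client, config

# An SMI TrafficSplit sending 90% of traffic for "checkout" to v1 and 10% to v2.
# Group/version recalled from the SMI spec; service names are invented.
traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha2",
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-rollout", "namespace": "demo"},
    "spec": {
        "service": "checkout",  # the root service that clients address
        "backends": [
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},
        ],
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # assumes a cluster running an SMI-compatible mesh
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="split.smi-spec.io",
        version="v1alpha2",
        namespace="demo",
        plural="trafficsplits",
        body=traffic_split,
    )
```

The appeal of this kind of spec is that the same resource can, in principle, be interpreted by different meshes, which is what unifying the consumption model means in practice.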

Published Date : Jul 28 2020


Innovation Happens Best in Open Collaboration Panel | DockerCon Live 2020


 

>> Announcer: From around the globe, it's theCUBE with digital coverage of DockerCon Live 2020. Brought to you by Docker and its ecosystem partners. >> Welcome, welcome, welcome to DockerCon 2020. We got over 50,000 people registered so there's clearly a ton of interest in the world of Dockernetes, as I like to call it. And we've assembled a power panel of Open Source and cloud native experts to talk about where things stand in 2020 and where we're headed. I'm Shawn Conley, I'll be the moderator for today's panel. I'm also a proud alum of JBoss, Red Hat, SpringSource, VMware and Hortonworks and I'm broadcasting from my hometown of Philly. Our panelists include: Michelle Noorali, Senior Software Engineer at Microsoft, joining us from Atlanta, Georgia. We have Kelsey Hightower, Principal developer advocate at Google Cloud, joining us from Washington State, and we have Chris Aniszczyk, CTO/CIO at the CNCF, joining us from Austin, Texas. So I think we have the country pretty well covered. Thank you all for spending time with us on this power panel. Chris, I'm going to start with you, let's dive right in. You've been in the middle of the Dockernetes wave since the beginning with a clear focus on building a better world through open collaboration. What are your thoughts on how the Open Source landscape has evolved over the past few years? Where are we in 2020? And where are we headed from both community and a tech perspective? Just curious to get things sized up. >> Sure, when CNCF started, roughly over four years ago, the technology mostly focused on just the things around Kubernetes, monitoring Kubernetes with technology like Prometheus, and I think in 2020 and the future, we definitely want to move up the stack. So there's a lot of tools being built on the periphery now. So there's a lot of tools that handle running different types of workloads on Kubernetes. So things like KubeVirt, which runs VMs on Kubernetes, which is crazy, not just containers. You have folks at Microsoft experimenting with a project called Krustlet, which is trying to run WebAssembly workloads natively on Kubernetes. So I think what we've seen now is more and more tools built around the periphery, while the core of Kubernetes has stabilized. So different technologies and spaces such as security and different ways to run different types of workloads. And at least that's kind of what I've seen. >> So do you have a fair amount of vendors as well as end users still submitting projects in, is there still a pretty high volume? >> Yeah, we have 48 total projects in CNCF right now and Michelle could speak a little bit more to this being on the TOC; the pipeline for new projects is quite extensive and it covers all sorts of spaces from service meshes to security projects and so on. So it's ever expanding and filling in gaps in that cloud native landscape that we have. >> Awesome. Michelle, let's head to you. But before we actually dive in, let's talk a little glory days. Rumor has it that you were the Fifth Grade Kickball Championship team captain. (Michelle laughs) Are the rumors true? >> They are, my speech at the end of the year was the first talk I ever gave. But yeah, it was really fun. I was captain 'cause I wasn't really great at anything else apart from constantly cheering on the team. >> A little better than my eighth grade Spelling Champ Award so I think I'd rather have the kickball.
But you've definitely spent a lot of time leading in Open Source, you've been across many projects for many years. So how does the art and science of collaboration, inclusivity and teamwork vary? 'Cause you're involved in a variety of efforts, both in the CNCF and even outside of that. And then what are some tips for expanding the tent of Open Source projects? >> That's a good question. I think it's about transparency. Just come in and tell people what you really need to do and clearly articulate your problem; the more clearly you articulate your problem and why you can't solve it with any other solution, the more people are going to understand what you're trying to do and be able to collaborate with you better. What I love about Open Source is that where I've seen it succeed is where incentives of different perspectives and parties align and you're just transparent about what you want. So you can collaborate where it makes sense, even if you compete as a company with another company in the same area. So I really like that, but I just feel like transparency and honesty is what it comes down to, and clearly communicating those objectives. >> Yeah, and the various foundations, I think one of the things that I've seen, particularly Apache Software Foundation and others, is the notion of checking your badge at the door. Because the competition might be between companies, but in many respects, you have engineers across many companies that are just kicking butt with the tech they contribute; claiming victory in one way or the other might make for interesting marketing drama. But I think that's a little bit of the challenge. In some of the standards-based work you're doing, I know with CNI and some other things, are they similar, are they different? How would you compare and contrast that with something a little more structured like CNCF? >> Yeah, so most of what I do is in the CNCF, but there's specs and there's projects. I think what CNCF does a great job at is just iterating to make it an easier place for developers to collaborate. You can ask the CNCF for basically whatever you need, and they'll try their best to figure out how to make it happen. And we just continue to work on making the processes clearer and more transparent. And I think in terms of specs and projects, those are such different collaboration environments. Because if you're in a project, you have to say, "Okay, I want this feature or I want this bug fixed." But when you're in a spec environment, you have to think a little outside of the box and like, what framework do you want to work in? You have to think a little farther ahead in terms of, is this solution or this decision we're going to make going to last for the next how many years? You have to get more of a buy-in from all of the key stakeholders and maintainers. So it's a little bit of a longer process, I think. But what's so beautiful is that you have this really solid standard or interface that opens up an ecosystem and allows people to build things that you could never have even imagined or dreamed of so-- >> Gotcha. So Kelsey, we'll head over to you, as your focus is on developer advocacy, you've been in the cloud native front lines for many years. Today developers are faced with a ton of moving parts, spanning containers, functions, Cloud Service primitives, including container services, server-less platforms, lots more, right? I mean, there's just a ton of choice. How do you help developers maintain a minimalist mantra in the face of such a wealth of choice?
I think minimalism, I hear you talk about that periodically, I know you're a fan of that. How do you pass that on in your developer advocacy in your day to day work? >> Yeah, I think, for most developers, most of this is not really top of mind for them. It's something you may see in a post on Hacker News, and you might double click into it. Maybe someone on your team brought one of these tools in and maybe it leaks up into your workflow so you're forced to think about it. But for most developers, they just really want to continue writing code like they've been doing. And the best of these projects they'll never see. They just work, they get out of the way, they help them with log in, they help them run their application. But for most people, this isn't the core idea of the job for them. For people in operations, on the other hand, maybe these components fill a gap. So they look at a lot of this stuff that you see in the CNCF and Open Source space as, number one, various companies or teams sharing the way that they do things, right? So these are ideas that are put into the Open Source, some of them will turn into products, some of them will just stay as projects that had mutual benefit for multiple people. But for the most part, it's like walking through an aisle in, like, Home Depot. You pick the tools that you need, you can safely ignore the ones you don't need, and maybe something looks interesting and maybe you study it to see if you have that problem. And for most people, if you don't have the problem that that tool solves, you should be happy. No one needs every project, and I think that's the foundation for the confusion. So my main job is to help people not get stuck and confused, and just be pragmatic and just use the tools that work for 'em. >> Yeah, and you've spent the last little while in the server-less space really diving into that area, compare and contrast, I guess, what you found there, minimalist approach, who are you speaking to from a server-less perspective versus that of the broader CNCF? >> The thing that really pushed me over, I was teaching my daughter how to make a website. So she's on her Chromebook, making a website, and she's hitting 127.0.0.1, and it looks like GeoCities from the 90s, but look, she's making a website. And she wanted her friends to take a look. So she copied and pasted 127.0.0.1 from her browser and none of her friends could pull it up. So this is the point where every parent has to cross that line and say, "Hey, do I really need to sit down "and teach my daughter about Linux "and Docker and Kubernetes." That isn't her main goal, her goal was to just launch her website in a way that someone else can see it. So we got Firebase installed on her laptop, she ran one command, Firebase deploy. And our site was up in a few minutes, and she sent it over to her friend and there you go, she was off and running. The whole server-less movement has that philosophy as one of the stated goals, that that needs to be the workflow. So, I think server-less is starting to get closer and closer, you start to see us talk about, and Chris mentioned this earlier, moving up the stack. As we go up the stack, the North Star there is that you get to focus on what you're doing, and not necessarily how to do it underneath.
And I think server-less is not quite there yet, but every type of workload, stateless web apps check, event driven workflows check, but not necessarily for things like machine learning and some other workloads that more traditional enterprises want to run, so there's still work to do there. So server-less for me serves as the North Star for why all these projects exist, for people that may have to roll their own platform, to provide that experience. >> So, Chris, on a related note, with what we were just talking about with Kelsey, what's your perspective on the explosion of the cloud native landscape? There's a ton of individual projects, each can be used separately, but in many cases, they're like Lego blocks and used together. So things like the service mesh interface, standardizing interfaces so things can snap together more easily, I think, are some of the approaches, but are you doing anything specifically to encourage this cross fertilization and collaboration and pluggability, because there's just a ton of projects, not only at the CNCF but outside the CNCF, that need to plug in? >> Yeah, I mean, a lot of this happens organically. CNCF really provides the neutral home where companies, competitors, could trust each other to build interesting technology. We don't force integration or collaboration, it happens on its own. We essentially allow the market to decide what a successful project is long term or what an integration is. We have a great Technical Oversight Committee that helps shepherd the overall technical vision for the organization and sometimes steps in and tries to do the right thing when it comes to potentially integrating a project. Previously, we had this issue where there was a project called OpenTracing, and an effort called OpenCensus, which were basically trying to standardize how you're going to deal with metrics, tracing and so on in a cloud native world, that were essentially competing with each other. The CNCF TOC and the community came together and merged those projects into one parent effort called OpenTelemetry, and so that to me is a case study of how our committee helps bridge things. But we don't force things, we essentially want our community of end users and vendors to decide which technology is best in the long term, and we'll support that. >> Okay, awesome. And, Michelle, you've been focused on making distributed systems digestible, which to me is about simplifying things. And so back when Docker arrived on the scene, some people referred to it as developer dopamine, which I love that term, because it simplified a bunch of crufty stuff for developers and actually helped them focus on doing their job, writing code, delivering code. What's happening in the community to help developers wire together multi-part modern apps in a way that's elegant, digestible, feels like a dopamine rush? >> Yeah, one of the goals of the (mumbles) project was to make it easier to deploy an application on Kubernetes so that you could see what the finished product looks like, and then dig into all of the things that that application is composed of, all the resources. So we've been really passionate about this kind of stuff for a while now. And I love seeing projects that come into the space that have this same goal and just iterate and make things easier. I think we have a ways to go still, I think a lot of the iOS developers and JS developers I get to talk to don't really care that much about Kubernetes. They just want to, like Kelsey said, just focus on their code.
So one of the projects that I really like working with is Tilt, which gives you this dashboard in your CLI, aggregates all your logs from your applications, and it kind of watches your application changes, and reconfigures those changes in Kubernetes so you can see what's going on, it'll catch errors, anything with a dashboard I love these days. So Kiali is like a metrics dashboard that's integrated with Istio, a service graph of your service mesh, and lets you see the metrics running there. I love that, I love that dashboard so much. Linkerd has some really good service graph images, too. So anything that helps me as an end user, which I'm not technically an end user, but me as a person who's just trying to get stuff up and running and working, see the state of the world easily and digest it, has been really exciting to see. And I'm seeing more and more dashboards come to light and I'm very excited about that. >> Yeah, as part of DockerCon, just as a person who will be attending some of the sessions, I'm really looking forward to seeing where Docker Compose is going, I know they opened up the spec to broader input. I think your point, a good one, is there's a bit more work to really embrace the wealth of application artifacts that compose a larger application. So there's definitely work the broader community needs to lean in on, I think. >> I'm glad you brought that up, actually. Compose is something that I should have mentioned and I'm glad you bring that up. I want to see programming language libraries integrate with the Compose spec. I really want to see what happens with that. I think it's great that they opened that up and made that a spec, because obviously people really like using Compose.
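As a small, hedged example of what "programming language libraries integrating with the Compose spec" could look like at its most basic, the sketch below just reads a Compose file with PyYAML and lists its services. It assumes a docker-compose.yml exists in the working directory and is only a toy; a real integration would validate against the published compose-spec schema rather than treat the file as arbitrary YAML.

```python
import yaml  # PyYAML

# Read a Compose file and list what it declares. This is only a toy reader;
# it assumes ./docker-compose.yml exists and does no schema validation.
with open("docker-compose.yml") as f:
    compose = yaml.safe_load(f)

for name, svc in compose.get("services", {}).items():
    image = svc.get("image", "(built from: %s)" % svc.get("build"))
    ports = svc.get("ports", [])
    print(f"service={name} image={image} ports={ports}")
```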
So the libraries that interface to structured logging, the libraries that deal with rate limiting, the libraries that deal with authorization, can this person make this query with this user ID? A lot of those things are still left for developers to figure out on their own. So while we have things like Kubernetes and Fluentd, we have all of these tools to deploy apps into those targets, most developers still have the problem of everything you do above that line. And to be honest, the majority of the complexity has to be resolved right there in the app. That's the thing that's taking requests directly from the user. And this is where maybe as an industry, we're over-correcting. So, you said you come from the JBoss world; I started a lot of my career in Cisco administration, and that's where we focused a little bit more on the actual application needs, maybe from a router as well. But now what we're seeing is things like Spring Boot start to offer a little bit more integration points in the application space itself. So I think the biggest parts that are missing now are, what are the frameworks people will use for authorization? So you have projects like OPA, Open Policy Agent for those that are new to it, and it gives you this very low level framework, but you still have to understand the concepts around what does it mean to allow someone to do something, and one missed configuration and all your security goes out of the window. So I think for most developers this is where the next set of challenges lie, if not actually the original challenge. So for some people, they were able to solve most of these problems with virtualization, run some scripts, virtualize everything and be fine. And monoliths were okay for that. For some reason, we've thrown pragmatism out of the window and some people are saying the only way to solve these problems is by breaking the app into 1000 pieces. Forget the fact that you had trouble managing one piece, you're going to somehow find the ability to manage 1000 pieces with these tools underneath, but still not solving the actual developer problems. So this is where you've seen it already with a couple of popular blog posts from other companies. They cut too deep. They're going from 2000, 3000 microservices back to maybe 100 or 200. So to my world, it's going to be not just one monolith, but end up maybe having 10 or 20 monoliths that maybe reflect the organization that you have versus the architectural pattern that you're at.
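Since OPA comes up above as the kind of low-level authorization framework developers still have to wire in themselves, here is a minimal sketch of the "can this person make this query with this user ID" check done against OPA's HTTP data API. It assumes an OPA server is running locally on its default port and that a policy has been loaded whose decision is exposed at a hypothetical path (httpapi/authz/allow); both that path and the input fields are assumptions for illustration.

```python
import requests

# Hypothetical decision path: depends entirely on how the Rego policy is packaged.
OPA_URL = "http://localhost:8181/v1/data/httpapi/authz/allow"

def is_allowed(user: str, action: str, resource: str) -> bool:
    # OPA's data API takes {"input": ...} and returns {"result": <decision>}.
    resp = requests.post(
        OPA_URL,
        json={"input": {"user": user, "action": action, "resource": resource}},
        timeout=2,
    )
    resp.raise_for_status()
    return bool(resp.json().get("result", False))

if __name__ == "__main__":
    print(is_allowed("alice", "GET", "/orders/123"))
```

The point Kelsey makes stands either way: the hard part is not calling the engine, it is deciding what "allow" should mean and keeping that configuration from silently drifting.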
So continuous delivery is more of infrastructure notice notion, progressive delivery, feature flags, those types of things, or app level, concepts, minimizing the blast radius of your, the new features you're deploying, that type of stuff, I think begins to speak to the pain of application delivery. So I'll guess I'll put this up. Michelle, I might aim it to you, and then we'll go around the horn, what are your thoughts on the progressive delivery area? How could that potentially begin to impact cloud native over 2020? I'm looking for some rallying cries that move up the stack and give a set of best practices, if you will. And I think James Governor of RedMonk opened on something that's pretty important. >> Yeah, I think it's all about automating all that stuff that you don't really know about. Like Flagger is an awesome progressive delivery tool, you can just deploy something, and people have been asking for so many years, ever since I've been in this space, it's like, "How do I do AB deployment?" "How do I do Canary?" "How do I execute these different deployment strategies?" And Flagger is a really good example, for example, it's a really good way to execute these deployment strategies but then, make sure that everything's happening correctly via observing metrics, rollback if you need to, so you don't just throw your whole system. I think it solves the problem and allows you to take risks but also keeps you safe in that you can be confident as you roll out your changes that it all works, it's metrics driven. So I'm just really looking forward to seeing more tools like that. And dashboards, enable that kind of functionality. >> Chris, what are your thoughts in that progressive delivery area? >> I mean, CNCF alone has a lot of projects in that space, things like Argo that are tackling it. But I want to go back a little bit to your point around developer dopamine, as someone that probably spent about a decade of his career focused on developer tooling and in fact, if you remember the Eclipse IDE and that whole integrated experience, I was blown away recently by a demo from GitHub. They have something called code spaces, which a long time ago, I was trying to build development environments that essentially if you were an engineer that joined a team recently, you could basically get an environment quickly start it with everything configured, source code checked out, environment properly set up. And that was a very hard problem. This was like before container days and so on and to see something like code spaces where you'd go to a repo or project, open it up, behind the scenes they have a container that is set up for the environment that you need to build and just have a VS code ID integrated experience, to me is completely magical. It hits like developer dopamine immediately for me, 'cause a lot of problems when you're going to work with a project attribute, that whole initial bootstrap of, "Oh you need to make sure you have this library, this install," it's so incredibly painful on top of just setting up your developer environment. So as we continue to move up the stack, I think you're going to see an incredible amount of improvements around the developer tooling and developer experience that people have powered by a lot of this cloud native technology behind the scenes that people may not know about. 
>> Yeah, 'cause I've been talking with the team over at Docker, the work they're doing with that desktop, enable the aim local environment, make sure it matches as closely as possible as your deployed environments that you might be targeting. These are some of the pains, that I see. It's hard for developers to get bootstrapped up, it might take him a day or two to actually just set up their local laptop and development environment, and particularly if they change teams. So that complexity really corralling that down and not necessarily being overly prescriptive as to what tool you use. So if you're visual code, great, it should feel integrated into that environment, use a different environment or if you feel more comfortable at the command line, you should be able to opt into that. That's some of the stuff I get excited to potentially see over 2020 as things progress up the stack, as you said. So, Michelle, just from an innovation train perspective, and we've covered a little bit, what's the best way for people to get started? I think Kelsey covered a little bit of that, being very pragmatic, but all this innovation is pretty intimidating, you can get mowed over by the train, so to speak. So what's your advice for how people get started, how they get involved, et cetera. >> Yeah, it really depends on what you're looking for and what you want to learn. So, if you're someone who's new to the space, honestly, check out the case studies on cncf.io, those are incredible. You might find environments that are similar to your organization's environments, and read about what worked for them, how they set things up, any hiccups they crossed. It'll give you a broad overview of the challenges that people are trying to solve with the technology in this space. And you can use that drill into the areas that you want to learn more about, just depending on where you're coming from. I find myself watching old KubeCon talks on the cloud native computing foundations YouTube channel, so they have like playlists for all of the conferences and the special interest groups in CNCF. And I really enjoy talking, I really enjoy watching excuse me, older talks, just because they explain why things were done, the way they were done, and that helps me build the tools I built. And if you're looking to get involved, if you're building projects or tools or specs and want to contribute, we have special interest groups in the CNCF. So you can find that in the CNCF Technical Oversight Committee, TOC GitHub repo. And so for that, if you want to get involved there, choose a vertical. Do you want to learn about observability? Do you want to drill into networking? Do you care about how to deliver your app? So we have a cig called app delivery, there's a cig for each major vertical, and you can go there to see what is happening on the edge. Really, these are conversations about, okay, what's working, what's not working and what are the next changes we want to see in the next months. So if you want that kind of granularity and discussion on what's happening like that, then definitely join those those meetings. Check out those meeting notes and recordings. >> Gotcha. So on Kelsey, as you look at 2020 and beyond, I know, you've been really involved in some of the earlier emerging tech spaces, what gets you excited when you look forward? What gets your own level of dopamine up versus the broader community? What do you see coming that we should start thinking about now? 
>> I don't think any of the raw technology pieces get me super excited anymore. Like, I've seen the circle of around three or four times, in five years, there's going to be a new thing, there might be a new foundation, there'll be a new set of conferences, and we'll all rally up and probably do this again. So what's interesting now is what people are actually using the technology for. Some people are launching new things that maybe weren't possible because infrastructure costs were too high. People able to jump into new business segments. You start to see these channels on YouTube where everyone can buy a mic and a B app and have their own podcasts and be broadcast to the globe, just for a few bucks, if not for free. Those revolutionary things are the big deal and they're hard to come by. So I think we've done a good job democratizing these ideas, distributed systems, one company got really good at packaging applications to share with each other, I think that's great, and never going to reset again. And now what's going to be interesting is, what will people build with this stuff? If we end up building the same things we were building before, and then we're talking about another digital transformation 10 years from now because it's going to be funny but Kubernetes will be the new legacy. It's going to be the things that, "Oh, man, I got stuck in this Kubernetes thing," and there'll be some governor on TV, looking for old school Kubernetes engineers to migrate them to some new thing, that's going to happen. You got to know that. So at some point merry go round will stop. And we're going to be focused on what you do with this. So the internet is there, most people have no idea of the complexities of underwater sea cables. It's beyond one or two people, or even one or two companies to comprehend. You're at the point now, where most people that jump on the internet are talking about what you do with the internet. You can have Netflix, you can do meetings like this one, it's about what you do with it. So that's going to be interesting. And we're just not there yet with tech, tech is so, infrastructure stuff. We're so in the weeds, that most people almost burn out what's just getting to the point where you can start to look at what you do with this stuff. So that's what I keep in my eye on, is when do we get to the point when people just ship things and build things? And I think the closest I've seen so far is in the mobile space. If you're iOS developer, Android developer, you use the SDK that they gave you, every year there's some new device that enables some new things speech to text, VR, AR and you import an STK, and it just worked. And you can put it in one place and 100 million people can download it at the same time with no DevOps team, that's amazing. When can we do that for server side applications? That's going to be something I'm going to find really innovative. >> Excellent. Yeah, I mean, I could definitely relate. I was Hortonworks in 2011, so, Hadoop, in many respects, was sort of the precursor to the Kubernetes area, in that it was, as I like to refer to, it was a bunch of animals in the zoo, wasn't just the yellow elephant. And when things mature beyond it's basically talking about what kind of analytics are driving, what type of machine learning algorithms and applications are they delivering? You know that's when things tip over into a real solution space. So I definitely see that. 
I think the other cool thing even just outside of the container and container space, is there's just such a wealth of data related services. And I think how those two worlds come together, you brought up the fact that, in many respects, server-less is great, it's stateless, but there's just a ton of stateful patterns out there that I think also need to be addressed as these richer applications to be from a data processing and actionable insights perspective. >> I also want to be clear on one thing. So some people confuse two things here, what Michelle said earlier about, for the first time, a whole group of people get to learn about distributed systems and things that were reserved to white papers, PhDs, CF site, this stuff is now super accessible. You go to the CNCF site, all the things that you read about or we used to read about, you can actually download, see how it's implemented and actually change how it work. That is something we should never say is a waste of time. Learning is always good because someone has to build these type of systems and whether they sell it under the guise of server-less or not, this will always be important. Now the other side of this is, that there are people who are not looking to learn that stuff, the majority of the world isn't looking. And in parallel, we should also make this accessible, which should enable people that don't need to learn all of that before they can be productive. So that's two sides of the argument that can be true at the same time, a lot of people get caught up. And everything should just be server-less and everyone learning about distributed systems, and contributing and collaborating is wasting time. We can't have a world where there's only one or two companies providing all infrastructure for everyone else, and then it's a black box. We don't need that. So we need to do both of these things in parallel so I just want to make sure I'm clear that it's not one of these or the other. >> Yeah, makes sense, makes sense. So we'll just hit the final topic. Chris, I think I'll ask you to help close this out. COVID-19 clearly has changed how people work and collaborate. I figured we'd end on how do you see, so DockerCon is going to virtual events, inherently the Open Source community is distributed and is used to not face to face collaboration. But there's a lot of value that comes together by assembling a tent where people can meet, what's the best way? How do you see things playing out? What's the best way for this to evolve in the face of the new normal? >> I think in the short term, you're definitely going to see a lot of virtual events cropping up all over the place. Different themes, verticals, I've already attended a handful of virtual events the last few weeks from Red Hat summit to Open Compute summit to Cloud Native summit, you'll see more and more of these. I think, in the long term, once the world either get past COVID or there's a vaccine or something, I think the innate nature for people to want to get together and meet face to face and deal with all the serendipitous activities you would see in a conference will come back, but I think virtual events will augment these things in the short term. One benefit we've seen, like you mentioned before, DockerCon, can have 50,000 people at it. I don't remember what the last physical DockerCon had but that's definitely an order of magnitude more. 
So being able to do these virtual events to augment potential of physical events in the future so you can build a more inclusive community so people who cannot travel to your event or weren't lucky enough to win a scholarship could still somehow interact during the course of event to me is awesome and I hope something that we take away when we start all doing these virtual events when we get back to physical events, we find a way to ensure that these things are inclusive for everyone and not just folks that can physically make it there. So those are my thoughts on on the topic. And I wish you the best of luck planning of DockerCon and so on. So I'm excited to see how it turns out. 50,000 is a lot of people and that just terrifies me from a cloud native coupon point of view, because we'll probably be somewhere. >> Yeah, get ready. Excellent, all right. So that is a wrap on the DockerCon 2020 Open Source Power Panel. I think we covered a ton of ground. I'd like to thank Chris, Kelsey and Michelle, for sharing their perspectives on this continuing wave of Docker and cloud native innovation. I'd like to thank the DockerCon attendees for tuning in. And I hope everybody enjoys the rest of the conference. (upbeat music)

Published Date : May 29 2020


Nigel Poulton, MSB.com | KubeCon + CloudNativeCon NA 2019


 

>> Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back. We're at the end of three days of wall-to-wall coverage here at KubeCon CloudNativeCon 2019 in San Diego. I am Stu Miniman, and my co-host for this week has been John Troyer, and we figured no better way to cap our coverage than bring on a CUBE alumni who has likely educated more people about containers and Kubernetes, you know, maybe second only to the CNCF. So, Nigel Poulton, now the head of content at msb.com. Nigel, pleasure to see you and thanks for coming back on the program. >> Honestly gents, the pleasure is all mine, as always. >> All right, so Nigel, first of all I'd love to get your just gestalt of the week. You know, take away, what's the energy. You know, how is this community doing? >> Yeah, so it's the end of the week and my brain is a mixture of fried and about to explode, okay. Which I think is a good thing. That's what you want at the end of a conference, right. But I think if we can dial it back to the first day at that opening keynote, something that really grabbed me at the time and has been sort of a theme for me throughout the conference, is when they asked, can you raise your hand if this is your first KubeCon, and it's a room of 8,000 people, and I don't have the data at hand, right, but I'm sat there, I've got my brother on this side, it's his first ever KubeCon, and he kind of goes like this, and then he realizes that nearly everybody around us has got their hands up, so he's kind of like, whoa, yeah, I feel like I'm in the in-crowd now. And I think from the people that I've spoken to it seems to be that the community is maturing, the conference or the event itself is maturing, and that starts to bring in kind of a different crowd, and a new crowd. People that are not necessarily building Kubernetes or building projects in the Kubernetes ecosystem, but looking to bring it into their organizations to run their own applications. >> Yeah, no absolutely. You know, the rough number I heard was somewhere two-thirds to three-quarters of that room were new. >> Nigel: I can believe that. >> 12,000 here in attendance, right. There were 8,000 here last year. >> Nigel: Yeah. >> You think about the, you know, somebody, oh I sent somebody this year, I sent somebody different the next year, and all the new people. So, you know, Nigel, luckily that keeps you busy, because there is something I've said for a long, long time, is there is always a need for that introductory, and then how do I get started and how do I get into here, and luckily with the ecosystem and all the projects and everything, somebody could pick that up in five or 10 minutes if they'd just put their mind to it, right. >> So I say this a lot of the time, that I feel like we live in the Golden Age of being able to take hold of your own career and learn a technology and make the best of what's available for you. Now we don't live in the day where, you know, to learn something new you would have to buy infrastructure. I mean even to learn Windows back in the day, or NetWare or Linux, you'd need a couple of dusty old PCs in the corner of your office or your bedroom or something, and it was hard.
Whereas now with cloud, with video training, with all the hands-on labs and stuff that are out there, with all of the sessions that you get at events like this, if you're interested in pushing your career forward, not only have you not got an excuse not to do it anymore, but the opportunities are just amazing, right. I feel like we're living in an exciting time for tech. >> Well Nigel, you do books, you've done training courses, you have your lab platform, msb.com. And one of the challenges in this space is that it is moving so fast, right. Yes, you have, anything's at your fingertips, but. >> Nigel: Yeah. >> Kubernetes changes every quarter. Here at the show, both the scale of people's deployments, but also the scale of the number of projects, and everything has a different name. >> Nigel: Yeah. >> So, how are you, what should people be looking for? How are you changing your curriculum? What are you adding to it, what are you replicating? >> Yeah, so that's super interesting. I think, right, as well, so it's a Golden Age for learning, right, but if you're in the technology industry in the sort of areas that we are, right, if you don't love it and if you're not passionate about it, I almost feel like you're in the wrong industry, because you need that passion, and that sort of, it's my hobby as well as my job, just to keep up. Like I feel like I spend an unhealthy amount of time in the Cloud Native ecosystem and just trying to keep track of everything that's going on. And all that time that I spend in, I still feel like I'm playing catch-up all the time. So I think you have to adjust your mentality. Like if you thought that you could learn something, a technology or whatever, and be comfortable for five years in your role, then you really need to adjust that. Like just an example, right. So I write, I author a book as well, and I would love nothing better than to write that book, stick it on a shelf on Amazon and what-have-you and let it be valid for five years. I would love that because it's hard work, but I can't, so I do a six-monthly update, but that applies to way more than that. So for your career, you know, if you want to, it sounds cheesy, if you want to rock it in your career, you have got to keep yourself up to date. And it's a race, but I do think that the kind of things we're doing with tech now, they're fun things, right. >> Yeah, a little scary, because while we're at this show I hope you kept up with all the Amazon announcements, the Google announcements. >> Nigel: Yeah. >> And everything going, because it is non-stop. >> Nigel: It is. >> Out there. Nigel, we last had you on theCUBE two years ago at this show, and at every show for a bunch of shows it seemed like there was a project or a category du jour. >> Nigel: Yeah. >> I don't know that I quite got that this year. There were some really cool things around edge computing. There was the observability, something we spent a bunch of time talking on. But we'd love to just kind of throw it out there as to what you're seeing in the ecosystem, the landscape, some of the areas that are interesting. >> Nigel: Yeah. >> Important, and what's growing, what's not. >> Okay, so if I can take the event first off, right, so KubeCon itself. Loads of new people, okay, and when I talk to them I'm getting three answers from them. Like number one, some people are like, I just love it, you know, which is great, and I've loved it and it's an amazing event.
Other people are kind of overawed by it, the size. So I don't know, maybe we should send them to re:Invent and then come back here and then they'll be like, oh yeah, it's not so bad. But the second thing is that some of the sessions are going over the first-timers' heads. So I'm hoping, and I'm sure it will, that going forward in Amsterdam and Boston next year we'll start to be able to pitch parts of the conference to that new user base. So that was kind of a theme from speaking to people at the event, for me. But a couple of things from the ecosystem, like we talked about service mesh, right, two years ago, and it felt like it was a bit of a buzzword, but everyone was talking about it and it was a real theme, and I don't get that at this conference, but what I do feel from the community in general is that uptake and adoption is actually starting to happen now, and thanks a lot to, well look, Linkerd is pretty easy these days, Istio is making great strides to being easier to deploy, but I also think that the cloud providers, those hosted cloud providers, are really stepping up to the plate, like they did with hosted Kubernetes, you know, when it was hard to get Kubernetes for your environment. We're seeing a similar thing with the service mesh. You can spin something up in GKE, Kubernetes cluster, click the box, and I'll have a service mesh, thank you very much. >> Well, it's funny. I think back to Austin, when I talked to the average customer on the show floor and said, "What are you doing?" they were rolling their own. Picking all of the pieces and doing it. When I talk to the average customer here, it's, I'm using managed services. >> Nigel: Yeah. >> Seems to have matured a lot. Of course, some of the managed public cloud services were brand new or a couple months there. Is that a general direction you see things going? >> So, yes, but I almost wonder if it will be like cloud in general, right, where there was a big move to the cloud. And I understand why people will want to do hosted Kubernetes and things, 'cause it's easy and, you know, it gets you, and I'm careful when I use the term production grade, because I know it means different things to different people, but you get something that we can at least loosely term production grade. >> Yeah, and actually just to be clear, we had a lot of discussions about on-premises, so I guess it's more the managed service rather than the, I'm going to roll all the pieces myself. >> Yeah, but I wonder will we start, because of price and maybe the ability to tweak the cluster towards your needs and things, whether we might see people taking their first steps on a managed service or a hosted Kubernetes, and then as they scale up they start to say, well, tell you what, we'll start rolling our own, because we're better at doing this now, and then, you know, you still have your hosted stuff, but you have some stuff on premises as well, and then we move towards something that's a bit more hybrid. I don't know, but I just wonder if that will become a trend. >> Well Nigel, I mean it's been a busy week. You started off with workshops. I don't know, what did you miss? What's the first, when you go home, back to England, and you pop open your browser and start looking at all the session videos and stuff, I don't know, what didn't you get a chance to do here this week?
>> So I was kind of, for me it's been the busiest KubeCon I've had and it's robbed me of a lot of sessions, right, and I remember when I looked at the catalog at the beginning it was like, you know, it's one of those conferences where almost every slot there's three things that I want to go to, which is a sign of a good conference. I'm quite interested at the moment in K3s. I actually haven't touched it for a long time, but outside of KubeCon I have had a lot of people talk to me about that, so I will go home and I will hunt down, right, what are the K3s sessions, to try and get myself back up to speed, 'cause I know there are other projects that are similar, right, but I find it quite fascinating in that it's one of those projects where it started out with like this goal of, we'll be for the edge, right, or for IoT or something, and the community are like, we really like it, and actually I want to use it for loads of other things. You have no idea whether it will go on to be like a roaring success, but, I don't know, so often you have it where a project isn't planned to be something. >> Announcer: Good afternoon attendees. Breakout sessions will begin in 10 minutes. >> But the community naturally. >> Announcer: Session locations are listed. >> Takes it on and says. >> Announcer: On the noted schedule. >> We're going to do something with it. >> Announcer: On digital signage throughout the venue. >> That wasn't originally planned, yeah. So I'll be looking up K3s as my first thing when I go home, but it is the first thing on a long list, right. >> All right. Nigel, tell us a little bit about, you know, the latest things you're doing, msb.com. I know you had your book signing for your book here, had huge lines here. >> Yeah. >> Great to see. So, tell us about what you're doing overall. >> Thank you, yeah. So, I've got a couple of books and I've got a bunch of video training courses out there, and I'm super fortunate that I've reached a lot of people, but a real common theme when I talk to people is, they're like, look, I love your book, I love your video courses, whatever, how do I take that next step, and the answer was always, look, get your hands on as much as possible, okay. And I would send people to like Minikube and to play with Docker or play with Kubernetes and various other solutions, but none of them really seemed to be, like, something real that looked and smelled and tasted like production. So I'm working with a start-up at the moment, msb.com, where we have curated learning content. Everybody gets their own fully functioning private three-node Kubernetes cluster. Ingress will work, internet-facing load balancers will all work on it, and the idea is that instead of having like a single-node development environment on your laptop, which is fine, but you know, you can't really play with scheduling and things like that, msb.com takes that sort of learning journey to the next level because it's a real working cluster, plus we've got this amazing visual dashboard so that when you're deploying stuff and doing scaling and rolling updates you see it all happening in the browser.
And for me as an educator, right, it's sometimes hard for people to connect the dots when you're reading a book, and I spend hours on like PowerPoint animations and stuff, whereas now in this browser, to augment reading a book, and to augment taking a training video, you can go and get your hands on and have this amazing sort of rich visual experience that really helps you like, sort of, oh, I get it now, yeah. >> All right, so Nigel, final question I have for you. I've known you back when we were just a couple of infrastructure guys. You've done phenomenal things. >> Nigel: The glory days. >> With kind of the wave of containers, you're a Docker captain. You know, really well known in the Kubernetes community. When you reflect back on something, on kind of this journey we've been on, you look at 12,000 people here, you know Docker has some recent news here, so give us a reflection back on this journey the whole industry's on. >> Yeah, so I had breakfast with a guy this morning who I wrote my first ever public blog with. He had a blog site and he loaned me some space on his blog site 'cause I didn't even know how to build a blog at the time, and it was a storage blog, yeah, we're talking about EMC and HDS and all that kind of stuff, and I'm having breakfast with him, 14 years later I think, in San Diego at KubeCon. And I think, and I don't know if this really answers your question, but I feel like Kubernetes is almost so, if ubiquitous is the right word, or it's so pervasive, and it's so all-encompassing almost, that it is bringing almost the entire community. I don't want to get too carried away with saying this, right, but it is bringing people from all different areas to like a common platform, for want of a better term, right. I mean we were infrastructure guys, yourself as well John, and here we are at an event that as a community and as a technology I think it's just, it's changing the world, but it's also bringing things almost under one hood. So I would say anybody, like whatever you're doing, do all roads lead to Kubernetes at the moment, I don't know. >> Yeah, well we know software can actually be a unifying factor. Best term I've heard is Kubernetes is looking to be that universal backplane. >> Nigel: Yeah. >> And therefore, both, you know, southbound to the infrastructure, northbound to the application. Nigel Poulton, congratulations on the progress. Definitely, everybody, make sure to check out his training online, and thank you for helping us to wrap up our three days of coverage here. For John Troyer, I am Stu Miniman. TheCUBE will be at KubeCon 2020 in both Amsterdam and Boston. We will be at lots of other shows. Be sure to check out thecube.net. Please reach out if you have any questions. We are looking for more people to help support our growing coverage in the cloud native space, so thank you so much to the community, thank you to all of our guests, thank you to the CNCF and our sponsors that make this coverage possible, and thank you to you our audience for watching theCUBE. (upbeat music)
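Nigel's points above about hosted providers making the service mesh a checkbox, and about practicing scheduling, scaling, and rolling updates on a real multi-node cluster, can be made concrete with a small script. The following is a minimal sketch, not anything from msb.com or the interview itself: it assumes kubectl is configured against a disposable cluster that already has Istio installed, and the "demo" namespace and "web" deployment are hypothetical placeholders.

import subprocess

def kubectl(*args: str) -> None:
    # Print and run a kubectl command, raising if it fails.
    cmd = ["kubectl", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Opt the namespace into automatic Istio sidecar injection.
    kubectl("label", "namespace", "demo", "istio-injection=enabled", "--overwrite")
    # Scale the practice deployment out and wait for the rollout to settle,
    # the kind of scheduling behaviour a single-node laptop setup hides.
    kubectl("scale", "deployment/web", "--replicas=5", "-n", "demo")
    kubectl("rollout", "status", "deployment/web", "-n", "demo")
    # Each pod should now be scheduled across nodes and carry an istio-proxy sidecar.
    kubectl("get", "pods", "-n", "demo", "-o", "wide")

Run against a three-node practice cluster, this shows pods landing on different nodes, which is the visibility Nigel argues a single-node environment can't give you.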

Published Date : Nov 22 2019



Kamal Shah, StackRox | Sumo Logic Illuminate 2019


 

>> Narrator: From Burlingame, California, it's the Cube, covering Sumo Logic Illuminate 2019. Brought to you by Sumo Logic. >> Hey welcome back everybody! Jeff Frick here with the Cube, we're at the Sumo Logic Illuminate conference, it's at the Hyatt San Francisco Airport. About 700, 800 people, full house in the keynote earlier today, all about operational process monitoring, all this crazy data being kicked out of the Cloud and IoT and all these crazy next-gen applications. We're excited to have a very close friend of mine, CEO of a very hot company, Kamal Shah, the CEO of StackRox. Kamal, great to see you! >> Thank you, and great to be here, Jeff! >> Absolutely! So for folks that aren't familiar with StackRox, give us the overview. >> Sure, so in a nutshell, we do Kubernetes security, and so as we've heard all day today, enterprises are deploying microservices, containers, Kubernetes, and we do security for your cloud-native infrastructure. >> So how does security work for Kubernetes versus security for other things? >> Yeah, so the use cases for security, or the mission for the security team, is the same, right? You got to harden your environment to prevent the bad guys from getting in. >> And, you have to make sure, despite your best efforts, if somebody does break in, then you catch them before they do any damage, right? But how you do security has to evolve for the cloud-native stack, right? It has to understand that containers are immutable, ephemeral infrastructure, you have to understand that it's not just about the container, but it's also about the orchestrator, and specifically Kubernetes, and it's also about making sure that you seamlessly integrate with DevOps processes, automation and workflow. So it requires a fundamentally different approach to security than traditional security tools. >> So you know, we talk a lot about the increasing attack area that's offered by IoT, right? And the increasing attack area that's offered by all those APIs and all these interconnected applications, but I've never heard anyone really talk about containers or orchestration as kind of a new attack surface. Did we just stop paying attention? Is that something you're seeing happen? >> Yeah, it's something that is starting to emerge, and we've seen some high-profile breaches at a large next-generation electric car company, and a large shopping site, where misconfigurations led to security breaches in the Kubernetes environment, and the Kubernetes ecosystem also did a Kubernetes security audit, and so I think we're going to start to hear a lot more, because more and more applications are being deployed in production. It's creating a new attack area, and as the old saying goes, the predators go where there's food in the system. >> And so if you're not proactive about it, I think it's going to really hurt as you deploy containers in Kubernetes. >> Right, so we hear over and over and over again about breaches because people misconfigure stuff. That just seems to happen, whether it's a database or this, that, and the other. And I think we can pretty much safely assume everyone's going to get breached if they haven't got breached already, 'cause we hear about it all the time. How do you catch them fast, limit the damage and try not to have too many vulnerabilities? >> Exactly, so the use cases for what we do with Kubernetes are the same. Right?
It's vulnerability management, it's configuration management, and we just did a study around the state of container and Kubernetes security, and misconfiguration was the number one concern. Because the reality is that with Kubernetes, there are a lot of knobs. And each knob has multiple options, so if you're not careful you can really misconfigure your environment and make it so much easier for attackers. >> Right, right. >> And it's precisely what happened at the two examples I cited earlier. So misconfiguration is important, runtime security is important, and also compliance. Let's not forget about compliance, right. You have to make sure that you meet your PCI, HIPAA, NIST, and CIS benchmark standards for this cloud-native stack. >> So what we're seeing is that these are all becoming very, very important and as a result, it's increasing awareness as Kubernetes gets more prominent. >> Right, and then they are creating and tearing down hundreds, thousands, millions of these things at a ridiculous pace. >> I mean exactly. Kubernetes came out of Google, they open sourced it, and it's really what allows you to deploy and manage containers at scale. Apparently, they manage hundreds of millions of containers a day using Kubernetes, it's incredible. >> Jeff: Oh yeah, I saw a statistic that Google launches 4 billion containers per week. That was from a presentation, actually from a 451 analyst from like 2 years ago. So one can only imagine the scale. >> We are also seeing not quite 4 billion containers per week, but we are seeing thousands, and tens of thousands of containers at scale at companies everywhere. They are all deployed in production, and now they are waking up to security. The good news here is that they aren't waiting for breaches to happen before they solve the problem. There's still a lack of awareness, and what Sumo Logic has done today with the announcement around continuous intelligence for Kubernetes just increases the awareness around, hey, we have to solve observability, which is logs, metrics, and tracing, which is what Sumo does, and security for your cloud-native infrastructure. >> Yeah, I mean the automation is so important, right? You can't do any of this stuff with this exponential growth of data, exponential growth of pushes, of new code releases. There's so many pieces in this, so automation is a huge piece of the puzzle. >> Automation is paramount, and with this new infrastructure there aren't enough security people to solve this. So security has to become everybody's responsibility. And the only way we are going to solve this is to automate it. It also has to integrate with your DevOps processes and automation and workflows. If you don't, then the DevOps body is going to reject the security organ, right? So it has to be seamless in the way you deploy it. >> It's interesting you say that because we go to RSA, forty thousand people, more vendors than you can count, it bulges Moscone to the absolute edges. Everyone says over and over that security has to be baked into the entire process from beginning to end, it's not a bolt-on and can never be successful as a bolt-on. So it surprises me to hear you say that still a lot of people are kind of behind the curve. >> Well, I mean, if you think about it, even though they say that, right? In a traditional model of the application you'd spend 6 months building it and then you'd go spend a couple of weeks or a month hardening and putting security around it.
But when you are launching applications every 6 hours, you can't spend 6 days addressing security, so it has to be built in. And speaking of RSA, if you recall, last year the big talk at RSA was around AI, right. Everything was AI-driven security. My prediction, my bold prediction for this RSA, is it's going to be all around Kubernetes security. >> Yeah, well it's applied AI. Applied AI for Kubernetes. >> Exactly. >> And that's what you need. I always feel for the CISO just walking the floor at RSA going, "Where do I begin? I mean where do I spend my money, how do I prioritize?" It's kind of like an insurance problem. You can't insure to the nth degree. You got to have a budget, but how do you deploy your assets? It's got to be super, super confusing. >> It really is. I think what you're seeing is that CISOs are relying on their dev and IT ops teams, right? They are partnering with the VP of platform, the VP of infrastructure, the VP of engineering, because when you think about this new world, the ownership of security is now shifting from the information security teams to DevOps teams. So security teams still drive policy, and they still want to make sure they do the trust and verify, but the implementation of the security is now being owned by DevOps teams. So it's a big cultural shift that's going on in organizations today. CISOs have to realize that it's no longer just them, but they have to partner with their DevOps counterparts to effectively address security for this cloud-native stack. >> Right, so tell us a little bit about the relationship with Sumo. How do the applications work together? What's the solution look like when the 2 solutions are brought together? >> So Sumo has been a great partner. We have several joint customers. The simplest way to think about this is that Sumo does observability for Kubernetes, so that's logs, metrics, and tracing, and we do security for Kubernetes. We are the yin to their yang. What we do is we have taken all the intelligence we get from security and we feed it into the Sumo dashboard. Sumo customers get a single pane of glass, not just for the observability data, but also for their security violations, whether it's for vulnerabilities, whether it's for configuration or if it's for runtime threats, right? You get it all in one single place. >> Right. So I just want to get your take on kind of this rise of the momentum behind Hybrid Cloud that we've seen recently. Big announcement at the Google Cloud show, with Anthos. Big announcement between VMware and Amazon. It always kind of swings back and forth. It was all-in to public cloud and now there's a little bit of a pullback in Hybrid, but that's terrific for you. The fact of the matter is workloads should run where they should run, they don't really care, it's what's appropriate. Horses for courses, right? >> Precisely, so we see the shift from public cloud to multi-cloud, and then from multi-cloud to hybrid cloud. The underlying infrastructure that makes that a reality is containers and Kubernetes, right? And that's why we've seen this tremendous momentum on Kubernetes. What we are seeing is customers that want to give their dev teams that flexibility to pick their favorite cloud, or to do it on premises, their private clouds. But they want it in a single security solution that gets integrated no matter where you run your infrastructure, and that's integrated back to your Sumo dashboard.
So you have visibility across all dev teams, all your application infrastructure, regardless of where they are running. There is one security standard that gets implemented. That is really, that's the future. You don't want to be beholden to one cloud provider, you want flexibility, you want choice. Kubernetes allows you to do that. >> Well, and the whole thing becomes more atomized, right, with autonomic memory, autonomic compute, autonomic storage, throw that onto IoT and edges and now you're starting to distribute all those pieces all over the place, which is going to happen. >> Kamal: It is going to happen for sure. >> All right, looking forward, I can't believe we're almost through 2019, it still shocks me every day I look at the calendar, but what are some of your priorities looking forward? What are you guys working on? What do you see coming down the pipe? >> Yes, so you touched on a couple of these. So today, there is a lot of talk around Kubernetes. We are seeing Kubernetes also get deployed in IoT and edge devices, we are also seeing it being used to manage serverless infrastructure. So we are going to continue to evolve as Kubernetes evolves. The other big trend that we are seeing in the market today is around service mesh. People talk a lot about Istio and Linkerd and using service mesh as your policy framework to drive consistent policies across applications, so that's another area where we are innovating very rapidly and that will become, I think, more and more real in enterprise deployments over 2020. >> Well, congratulations, Kamal, to you and the team. I think you picked a good horse to ride on, I should say ship, right, with Kubernetes. Thanks for taking a few minutes. >> No, thank you for having me. I can officially say now that I've checked off one of my professional bucket-list items, which is to be on the Cube with an old friend. So thank you for having me. >> Check that box, man. All right, he's Kamal, I'm Jeff, you're watching the Cube. We're at Sumo Logic Illuminate from the Hyatt San Francisco Airport. Thanks for watching, see you next time.
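Kamal's point about Kubernetes having a lot of knobs, and about security being automated into the pipeline rather than bolted on afterwards, can be illustrated with a small check. The following is a hedged, minimal sketch and not StackRox's product logic: it assumes PyYAML is installed, it only looks at a plain Pod manifest, and the handful of settings it flags are standard Kubernetes securityContext fields chosen for illustration.

import sys
import yaml  # PyYAML, assumed to be installed

def audit_pod(manifest: dict) -> list:
    # Collect a few well-known risky settings from a Pod manifest.
    findings = []
    spec = manifest.get("spec", {})
    if spec.get("hostNetwork"):
        findings.append("pod: hostNetwork is true")
    for container in spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        ctx = container.get("securityContext") or {}
        if ctx.get("privileged"):
            findings.append(f"{name}: privileged is true")
        if ctx.get("allowPrivilegeEscalation", True):
            findings.append(f"{name}: allowPrivilegeEscalation is not disabled")
        if not ctx.get("runAsNonRoot"):
            findings.append(f"{name}: runAsNonRoot is not set")
        if not ctx.get("readOnlyRootFilesystem"):
            findings.append(f"{name}: root filesystem is writable")
    return findings

if __name__ == "__main__":
    # Usage: python audit_pod.py pod.yaml
    # A non-zero exit code fails the CI job, so the check runs on every push.
    with open(sys.argv[1]) as f:
        findings = audit_pod(yaml.safe_load(f))
    for finding in findings:
        print("RISKY:", finding)
    sys.exit(1 if findings else 0)

Wiring a check like this into the same pipeline that builds the image is one way the "security becomes everybody's responsibility" idea turns into something enforceable.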

Published Date : Sep 11 2019



Abby Kearns, Cloud Foundry Foundation | CUBEConversation, March 2019


 

(funky music) >> From our studios in the heart of Silicon Valley, Palo Alto, California. This is a CUBEConversation. >> Everyone, welcome to this CUBEConversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. Here in theCUBE Studios here with Abby Kearns, Executive Director, Cloud Foundry Foundation, CUBE alumni. Great to see you again. I think this is your eighth time on theCUBE chatting. Always great to get the update. Thanks for spending the time. >> My pleasure, and it's a joy to drive down to your actual studios. >> (laughs) This is where it all happens Wednesdays and Thursdays when we're not on the road doing CUBE events. I think we'll have over 120 events this year. We'll certainly see you at a bulk of them. Cloud Foundry, give us the update. Yeah, we were joking before we came on camera. Boy, this cloud thing is kind of working out. I mean, I think IBM's CEO calls it chapter two. I'm like, we're still in chapter one, two, three? Give us the update on Cloud Foundry, obviously open-source. Things are rocking. Give us the update. >> I do feel like we're moving into chapter two. Chapter one was a really long chapter. (laughs) It spanned about 10 years. But I do think we're starting to see actual growth and actual usage. And I think a lot of people are like, no, there's actually been usage for a while. Me, I'm like, no, no, no, not at real scale. And we haven't seen any of the workloads for organizations running at massive scale. At the scale that we know that they can run at. But we're starting to see interesting scale. Like 40, 50 thousand applications, you know. Billions of transactions now passing through. A lot of cloud native technology. So we're starting to see real interesting volume. And so that's going to actually dictate how the next five years unfold, because scale is going to dictate how the technologies unfold, how they're used. And they're going to feed into this virtuous cycle of how the technologies unfold, and how they're going to be used, which feeds back into how enterprises are using them, and you know, and the cycle continues. >> Give us the update on the foundation. What's going on with the foundation, status, momentum, clouds out there. Obviously open-source continues to drive; however, we saw a lot of acquisitions and fundings around people who are using open-source to build a business around that. >> I love that. >> Your favorite conversation. But, I mean, you know, open-source comes with technical challenges, but also the people side, they're learning. What's the update with the foundation? >> Well, open-source is really tricky, and I think there is a lot of people that are really enthusiastic about it as a business model. I mean, last year, 2018, was a pretty substantial year for open-source. The year ended with Red Hat's acquisition by IBM. One of their biggest acquisitions, $34 billion. In December alone, we also saw Heptio get picked up by VMware, which is a services company really based on Kubernetes, an open-source technology. But we also saw HashiCorp get another round of funding. And then earlier in the year, Pivotal IPO'd. And so if you look at 2018 at a bigger level, you saw a lot of momentum around open-source and how it's actually being commercialized. Now you and I were talking a little bit prior, and I'm a big believer that open-source has the potential and is going to change fundamentally how technology is used and consumed.
But at the end of the day, for the commercial aspects of it, you still have to have a business around that. And I think there's always going to be that fine line. And that line is actually always going to be moving, because how you provide value in, around, and on top of open-source has to evolve with both the market and your customer needs. >> Yeah, and where you are on that wave, whatever wave that is, is it an early wave or is it more mature, so the monetization certainly matters? >> Sure. >> You could be early on, setting the table, or it's growing and there's some complexity. So it kind of depends, it's always that it depends, is it the Cloudera or is it the Red Hat? There's different approaches and people kind of get confused on that, and your answer to that is just pick one that works, that's a good business model. Don't get hung up on kind of the playbook, if you will, is that kind of what you're saying? >> Well, I think we're seeing this play out this week with AWS's Elastic announcement, right? And there's been a lot of conversation around how do we think about open-source. Who has access to it? Who has the right to commercialize it? What does commercialization look like? And I think, I've always cautioned people that are proceeding down the path to open-source to really be thoughtful about why you're doing open-source. Like what is your, what are you hoping to achieve? There's a lot of potential that comes with open sourcing your technology. You gain ecosystem, community, momentum. There's a lot of positives that come with that, but there's also a lot of work that comes with that too. Managing your community. Managing a much more varied set of stakeholders and people that are going to have thoughts and opinions around how that technology unfolds. And then of course, because it's open-source, there's more opportunity for people to use that and build their own ideas and their own solutions on top of that. And potentially their own commercial products. And so really figuring out that fine line and what works best for your business. What works best for the technology. And then what your hopes are at the end of the day with that. >> And what are some of the momentum points for the Foundation, with Cloud Foundry, obviously seeing Pivotal went public, you mentioned VMware, I talk to Michael Dell all the time, the numbers are great coming from that operation. Pat Gelsinger, the Amazon deal, I think that made it clear where VMware was. But still you have a lot more cloud, multi-cloud conversations happening than ever before. >> Well, for sure, I mean at Cloud Foundry, we've actually been talking about multicloud since 2016. We saw that trend coming based on user behavior. And now you've seen everyone is multicloud, even the public clouds are multicloud. >> I think you had the first study out on that, too, on multicloud. We did. We were firm believers in multicloud. Last year we actually moved more broadly to multi-platform. Because at the end of the day there isn't one technology that solves all of these problems. Multicloud is, you know, pervasive, and at the end of the day multicloud means a lot of different things to a lot of people. But for many enterprises what it gives is optionality. You don't want to be locked into a single provider. You don't want to be locked into a single cloud or single solution because, you know, if I'm an enterprise, I don't know where I'm going to be in five years.
Do I want to make a five-year or a 10-year or a 20-year commitment to a single infrastructure provider when I don't know what my needs are going to be? So having that optionality and also being able to use the best of what clouds can provide, the best services, the best outcomes. And so for me, I want to have that optionality. So I'm going to look at technologies that give me that portability, and then I'm going to use that to allow me to choose the best cloud that I need for right now for my business, and maybe again a different one in the future. >> I want to get your thoughts on this. I just doubled down on this conversation because I think there's two things going on that I'd like to get your reaction to. One is I've heard things like pick the right cloud for the right workload, and I heard analogies. Hey, if you got an airplane you need to have two engines. You have one engine if it works for that plane, but your whole fleet of planes could be other clouds. So, pick the right cloud for the right workload. Meaning the workload defines the spec. >> Yeah. >> I've also heard that the people side of the equation, where people are behaving like they are comfortable with APIs and tooling, is potentially a lock-in, kind of by default. Not a technical lock-in, but people are comfortable with the APIs and the tooling. >> Yeah. >> And the workloads need a certain cloud. Then maybe that cloud would be it. That's not saying pick that cloud for the entire company. Right, so certainly the trend seems to be coming from a lot of people in the news saying, hey, this whole sole-cloud versus multi-cloud argument really isn't about one cloud vs. multiple clouds. It's the workload, the cloud for the use case, and the tooling, if it fits and the people are there to do it. Then you can still have other clouds, and that's in the multi-cloud architecture. So is that real? What's your thoughts on that? >> Let's dissect that, 'cause I think that's actually solving for two different outcomes. Like one, multi-cloud for optionality's purposes, and workload-specific. I think it's a great one. There's a lot of services that are native to certain clouds that maybe you really would like to get greater access to. And so I think you're going to choose the best. You know, that's going to drive your workload. Now also factoring in that, you know, you're going to have a much more mediated access to cloud based on what people are comfortable with. I do think at some point, as an organization, you want to have better control over that. You know, historically, over the last decade, what we've seen is Shadow IT really dictates your cloud spend, right. You know, everyone's got a credit card. I've got access to AWS. >> And they got most of that business. Amazon did. >> Yes, and that served them quite well. If I am an organization that's trying to digitally transform, I'm also trying to get a better handle on what we're spending, how we're spending it, and frankly, now if I have compliance requirements, where's my data? These are going to be important questions for you when you're starting to run production workloads at scale on multiple clouds, and so I predict we're going to see a lot more tension there in internal organizations. Like, hey, I'd love for you to use cloud, you know? Where this no longer needs to be a shadow thing, but let's figure out a way to do it that's strategic and intentional versus just random pockets choosing to do cloud because of the workflow that they like. >> Well you bring up a good point.
The cost thing was never a problem, but then you have sprawl and you realize there's a cost-optimization component, which means you might be overpaying, because as you think about the system aspects, you've got networking and you've got cloud management factors. So as you get into that Shadow IT expansion, you've got to realize, wait a minute, I'm still spending a lot of cash here. >> This adds up really, really quickly. I mean, I think The Information piece a couple weeks ago where they talked about the Pinterest bill, this stuff, it starts adding up. And for organizations, this is like not just thousands of dollars. It's now hundreds of thousands of dollars. If not, you know, tens of millions of dollars. And so, if I'm trying to figure out ways to optimize my business and my scale, I'm going to look at that because that is not an insignificant amount of money. And so, for me, that's money that could be better invested in more developers, better outcomes, a better alignment with my business, then that's where I want to spend my time and money, and so, I'm going to spend more time being really thoughtful about what clouds we're using, what infrastructure we're using, and the tools we're using to allow us to have that optionality. >> So you would agree with the statement if I said, generally, multi-cloud is here, it already exists. >> Yes. >> And that multi-cloud architecture thinking
That's not, you know, when you're selling to an enterprise there is a whole different approach and you have to write how to the teams, the sales teams. You have to write how to the ecosystem, the services, the enablement capabilities, the support, the training, the product strategy? All of that takes a very different slant when you're thinking about an enterprise. And so I'm sure, that's going to be front-and-center for everything that they talk about. >> And certainly he's very public about, you know, the position Oracle Cloud, he knows the Enterprise Oracle was the master of enterprise gamesmanship for sure. >> Yes, for sure. You don't get a whole lot more enterprising than Oracle. >> What's going on in the CNCF any news there? What's happening on the landscape? What's the Abby take on the landscape of cloud? >> Well, speaking as someone that does not run CNCF. >> Feel free to elaborate. >> Cloud Native Computing Foundation, for those of you that aren't aren't, you know, aren't familiar is a sister open-source organization that is a clearing house or collective of cloud made of technologies. The anchor project is the very well-known Kubernetes, but it also spans a variety of technologies from everything from LINKerD to SEDA to Envoy, so it's just a variety of cloud-native technologies. And you know they're continuing to grow because obviously cloud-native is becoming you know it's coming into its own time right now. Because we're starting to really think about how to do better with workloads. Particularly workloads that I can run across a cloud. I mean and that seems pretty pedantic but we've been talking about Cloud since 2007. And we were talking about what cloud brings. What did cloud bring, it brings resiliency. You can auto-scale. You can burst into the cloud, remember bursting? Now all the things we talked about in 2007 to 2008 but weren't really reality because the applications that were written weren't necessarily written to do that. >> And that's exactly the point. >> So now we're actually seeing a lot more of these applications written we call them microservices, 12 Factor apps, serverless apps. What have you but it's applications written to run and scale across the cloud. And that is a really defining point because now these technologies are actually relevant because we're starting to see more of these created and run and now run at scale. >> Yeah, I think that's the point. I think you nailed it. The applications are driving everything And I think that's the chapter two narrative. In my opinion, chapter one was, let's get infrastructures code going. And chapter two is apps dictating policy and then you're going to see microservices start to emerge. Kind of new different vibe in terms of like what it means for scale as less of about, hey, I'm doing cloud, I got some stuff in the public cloud. Here the conversation is around apps, the workloads and that's where the business value is. It's not like people who is trying to do transformation. They're not saying hey I stood up a Kubernetes Cluster. They're saying I got to deploy my banking app or I got to do, I got to drive this workload. >> And I have to iterate now. I can't do a banking app and then update it in a year. That's not acceptable anymore. You are constantly having to update. You're constantly having to iterate, and that is not something you can do with a large application. 
I mean the whole reason we talk a lot about monolithic vs 12 factor or cloud in a box is because it isn't that my monolithics are inherently bad, it's just they're big and they're complex. Which means in order to make any updates it takes time. That's where the year comes in, the 18-months come in. And I think that is no longer acceptable you know. I remember the time and I'm going to date myself here, but I remember the time when you know banks would or any e-commerce site would be down. They'd have what they call the orange page. But the orange page would come up, site down tonight 'cause we're doing maintenance for the weekend, right? >> Under construction. >> Under construction. Okay, well I'll just come back on Monday. That's fine. And now, you're like, if it's down for 5 minutes you're like what is actually happening right now. Why is this not here. >> Yeah like when Facebook went down the other day. I was like, what the hell? Facebook sucks. >> You know, the internet blows up if Instagram is down. Oh my God, my life is over and I think our our expectation now is not only constant availability. So you know always available. But also our expectation is real-time access to data transparency and a visibility into what's actually happening at all times. That I've said something that a lot of organizations are really having to figure out. How to develop the applications to expose that. And that takes time and that takes change. And there's a ton of culture change. it has to happen and that is the more important thing if I'm a business I care more about how do I make that a reality and I should care a lot less about the technologies that you use. >> It's interesting you mention about the monolith versus the decomposed application of being agile. Because if you don't have the culture and the people to do it it's still a monolithic effort in the sense of the holistic thinking and the architectural, it's a systems architecture. You have to look at it like a system and that's not easy either. Once get that done the benefits are multifold in terms of like what you can do. But its it's that systems thinking setup is becoming more of an architectural concept that's super important. >> For sure if I have a microservice app, but it takes a 150 people to get that through change management and get it into production well that will still take me a year. Does it matter if there's maybe 12 lines of code in that application? It doesn't matter and so, you know I spend a lot of time. Even though I run Cloud Foundry, I spend a lot of time talking about culture change. All the writing I do is really around cultural change and what does that look like. Because at the end of the day if you're not willing to make those changes, you're not willing to structure your teams and allow for that collaboration and if you're doing iterative work, feedback loops from your customers. If you're not willing to put those pieces into place there is no technology that's going to make you better. >> I totally agree, so let me ask you a question on that point, great point, by the way. Most followed your you're writing your blog posts in the links, but I think that's the question. When do you know when it's not working? So I've seen companies that are rearranging the deckchairs, if you will, to use an analogy with all the culture rah, rah! And then nothing ever happens right? So they've gone into that paralysis mode. When do you look at a culture? 
When it comes to the executive, what should they be thinking about, because people kind of aspire to do this execution that you said is critical? When do you know it's not working, or what should they be doing? What's the best practice? How does someone say, hey, you know what, I really want to be more holistic in my architecture. I don't want to spend two years on the architecture and then find out it's only just starting. I want to get an architecture in place. I want to hit the ground running. >> I mean, it's twofold. One, start small. You're not going to change, you know, if you're an 85-year-old company with 200,000 people, you're not going to change that overnight, and you should expect that it's going to be an 8 to 10 year process. Now, what that's also going to mean is you're going to have to have a really clear vision, and you're going to have to be really committed, because this is going to be a hard road. But conversely, when someone says what does success look like, when you're looking at a variety of companies, how do you know which ones you think are going to be the most successful at the end of the day? Because no one's ever actually done any of this before. There's no one that's ever gone through this digital transformation and come out on the other side. No one. There isn't. And so when I think about what success looks like, I said, well, for me, what I look for are companies that are investing in re-skilling their workforce. That's what I'm looking for. I get real excited when companies talk about their internal boot camps or their programs to re-skill or up-skill their teams, because it's not like you're going to lay off 20,000 people and hire 20,000 cloud-native developers. They don't exist, and they're certainly not going to exist for thousands of companies to go and do that. So, you know, how are you investing in re-skilling, because-- >> It's easy to grow your own internally from pre-existing positions. >> Well sure, they know your business. >> Rather than go to a job board that has no one available. >> And you know, at the end of the day, that needs to be your new business model. What is digital transformation, actually? It's just a different way of working, and there is no destination to digital transformation. This isn't a journey that has an end, and so you need to really think about how you're going to invest differently in your people so that they can continuously learn. Continuous learning needs to be part of your model and your mantra, and that needs to be in everything you do, from hiring to HR to MBOs to how you structure your teams, like how do you make sure that people can constantly learn and evolve? Because if that's not happening, everything else is going to fall by the wayside. >> Is the technology gap easy to fill? Lots of tech out there. The talent gap is hard to fill. >> For sure. >> That's the real challenge. >> If you have all the best tech in the world but you don't have the right people or the right structure, are you going to be successful? Probably not. >> Yeah, that's a challenge. Alright, so final question for you: where are you going to be, what does your schedule look like, where can people find you, what events are you going to be at? You guys have an event coming up? >> April 2nd through 4th in Philly. We're going to have a summit. If you want to see some people that are actually running cloud at scale, that's the place to go. >> April 5th? >> 2nd through 4th.
First week of April in Philly. Fingers crossed for good weather, lots of cloud talk, and it's a great event. >> City of Brotherly Love. >> Yes, we're bringing it. >> Philadelphia. The Patriots couldn't make it to the playoffs last year, but love the Philly fans down there, Paul Martino and friends down there. Abby, thanks for coming on. Appreciate it, good to see you. Thanks for the update. We'll see you around the events. I won't be able to make your event, I'll be taking the week off skiing. >> Well, one of us has to. >> First vacation of the year, in two years. Thanks for coming in. >> You should do that. >> Abby Kearns here inside theCUBE for this CUBEConversation. I'm John Furrier, thanks for watching. (funky music)

Published Date: Mar 15, 2019

