Jack Greenfield, Walmart | A Dive into Walmart's Retail Supercloud
>> Welcome back to SuperCloud2. This is Dave Vellante, and we're here with Jack Greenfield. He's the Vice President of Enterprise Architecture and the Chief Architect for the global technology platform at Walmart. Jack, I want to thank you for coming on the program. Really appreciate your time. >> Glad to be here, Dave. Thanks for inviting me and appreciate the opportunity to chat with you. >> Yeah, it's our pleasure. Now we call what you've built a SuperCloud. That's our term, not yours, but how would you describe the Walmart Cloud Native Platform? >> So WCNP, as the acronym goes, is essentially an implementation of Kubernetes for the Walmart ecosystem. And what that means is that we've taken Kubernetes off the shelf as open source, and we have integrated it with a number of foundational services that provide other aspects of our computational environment. So Kubernetes off the shelf doesn't do everything. It does a lot. In particular the orchestration of containers, but it delegates through API a lot of key functions. So for example, secret management, traffic management, there's a need for telemetry and observability at a scale beyond what you get from raw Kubernetes. That is to say, harvesting the metrics that are coming out of Kubernetes and processing them, storing them in time series databases, dashboarding them, and so on. There's also an angle to Kubernetes that gets a lot of attention in the daily DevOps routine, that's not really part of the open source deliverable itself, and that is the DevOps sort of CICD pipeline-oriented lifecycle. And that is something else that we've added and integrated nicely. And then one more piece of this picture is that within a Kubernetes cluster, there's a function that is critical to allowing services to discover each other and integrate with each other securely and with proper configuration provided by the concept of a service mesh. So Istio, Linkerd, these are examples of service mesh technologies. And we have gone ahead and integrated actually those two. There's more than those two, but we've integrated those two with Kubernetes. So the net effect is that when a developer within Walmart is going to build an application, they don't have to think about all those other capabilities where they come from or how they're provided. Those are already present, and the way the CICD pipelines are set up, it's already sort of in the picture, and there are configuration points that they can take advantage of in the primary YAML and a couple of other pieces of config that we supply where they can tune it. But at the end of the day, it offloads an awful lot of work for them, having to stand up and operate those services, fail them over properly, and make them robust. All of that's provided for. >> Yeah, you know, developers often complain they spend too much time wrangling and doing things that aren't productive. So I wonder if you could talk about the high level business goals of the initiative in terms of the hardcore benefits. Was the real impetus to tap into best of breed cloud services? Were you trying to cut costs? Maybe gain negotiating leverage with the cloud guys? Resiliency, you know, I know was a major theme. Maybe you could give us a sense of kind of the anatomy of the decision making process that went in. >> Sure, and in the course of answering your question, I think I'm going to introduce the concept of our triplet architecture which we haven't yet touched on in the interview here. 
First off, just to sort of wrap up the motivation for WCNP itself, which is kind of orthogonal to the triplet architecture. It can exist with or without it. Currently it does exist with it, which is key, and I'll get to that in a moment. The key drivers, business drivers for WCNP were developer productivity by offloading the kinds of concerns that we've just discussed. Number two, improving resiliency, that is to say reducing opportunity for human error. One of the challenges you tend to run into in a large enterprise is what we call snowflakes, lots of gratuitously different workloads, projects, configurations, to the extent that by developing and using WCNP and continuing to evolve it as we have, we end up with cookie-cutter-like consistency across our workloads, which is super valuable when it comes to building tools or building services to automate operations that would otherwise be manual. When everything is pretty much done the same way, that becomes much simpler. Another key motivation for WCNP was the ability to abstract from the underlying cloud provider. And this is going to lead to a discussion of our triplet architecture. At the end of the day, when one works directly with an underlying cloud provider, one ends up taking a lot of dependencies on that particular cloud provider. Those dependencies can be valuable. For example, there are best of breed services like say Cloud Spanner offered by Google or say Cosmos DB offered by Microsoft that one wants to use, and one is willing to take the dependency on the cloud provider to get that functionality because it's unique and valuable. On the other hand, one doesn't want to take dependencies on a cloud provider that don't add a lot of value. And with Kubernetes, we have the opportunity, and this is a large part of how Kubernetes was designed and why it is the way it is, we have the opportunity to sort of abstract from the underlying cloud provider for stateless workloads on compute. And so what this lets us do is build container-based applications that can run without change on different cloud provider infrastructure. So the same applications can run on WCNP over Azure, WCNP over GCP, or WCNP over the Walmart private cloud. And we have a private cloud. Our private cloud is OpenStack based and it gives us some significant cost advantages as well as control advantages. So to your point, in terms of business motivation, there's a key cost driver here, which is that we can use our own private cloud when it's advantageous and then use the public cloud provider capabilities when we need to. A key place where this comes into play is with elasticity. So while the private cloud is much more cost effective for us to run and use, it isn't as elastic as what the cloud providers offer, right? We don't have essentially unlimited scale. We have large scale, but the public cloud providers are elastic in the extreme, which is a very powerful capability. So what we're able to do is burst, and we use this term, bursting workloads into the public cloud from the private cloud to take advantage of the elasticity they offer, and then fall back into the private cloud when the traffic load diminishes to the point where we don't need that elastic capability, elastic capacity at low cost. And this is a very important paradigm that I think is going to be very commonplace ultimately as the industry evolves. Private cloud is easier to operate and less expensive, and yet the public cloud provider capabilities are difficult to match.
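To make the bursting idea concrete, the placement decision can be reduced to a simple rule: fill the private cloud while it has headroom, and spill only the overflow to a public provider until demand falls back. The sketch below is purely illustrative; the capacity numbers, provider names, and function are assumptions for the example, not Walmart's actual control plane.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    provider: str   # "private-openstack", "azure", or "gcp"
    replicas: int

def plan_burst(demand: int, private_capacity: int,
               public_provider: str = "azure") -> list[Placement]:
    """Prefer the cheaper private cloud; burst only the overflow to a public cloud.

    When demand later drops back under private capacity, the same rule
    yields no public placement, i.e. the workload falls back in-house.
    """
    in_private = min(demand, private_capacity)
    overflow = demand - in_private
    plan = [Placement("private-openstack", in_private)]
    if overflow > 0:
        plan.append(Placement(public_provider, overflow))
    return plan

# 1,200 replicas of demand against 1,000 replicas of private capacity:
# the private cloud takes 1,000 and the remaining 200 burst to the public provider.
print(plan_burst(1200, 1000))
```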
>> And the triplet, the tri is your on-prem private cloud and the two public clouds that you mentioned, is that right? >> That is correct. And we actually have an architecture in which we operate all three of those cloud platforms in close proximity with one another in three different major regions in the US. So we have east, west, and central. And in each of those regions, we have all three cloud providers. And the way it's configured, those data centers are within 10 milliseconds of each other, meaning that it's of negligible cost to interact between them. And this allows us to be fairly agnostic to where a particular workload is running. >> Does a human make that decision, Jack, or is there some intelligence in the system that determines that? >> That's a really great question, Dave. And it's a great question because we're at the cusp of that transition. So currently humans make that decision. Humans choose to deploy workloads into a particular region and a particular provider within that region. That said, we're actively developing patterns and practices that will allow us to automate the placement of the workloads for a variety of criteria. For example, if in a particular region, a particular provider is heavily overloaded and is unable to provide the level of service that's expected through our SLAs, we could choose to fail workloads over from that cloud provider to a different one within the same region. But that's manual today. We do that, but people do it. Okay, we'd like to get to where that happens automatically. In the same way, we'd like to be able to automate the failovers, both for high availability and sort of the heavier disaster recovery model, within a region between providers and even within a provider between the availability zones that are there, but also between regions for the sort of heavier disaster recovery or maintenance driven realignment of workload placement. Today, that's all manual. So we have people moving workloads from region A to region B or data center A to data center B. It's clean because of the abstraction. The workloads don't have to know or care, but there are latency considerations that come into play, and the humans have to be cognizant of those. And automating that can help ensure that we get the best performance and the best reliability. >> But you're developing the dataset to actually, I would imagine, be able to make those decisions in an automated fashion over time anyway. Is that a fair assumption? >> It is, and that's what we're actively developing right now. So if you were to look at us today, we have these nice abstractions and APIs in place, but people run that machine, if you will. We're moving toward a world where that machine is fully automated. >> What exactly are you abstracting? Is it sort of the deployment model or, you know, are you able to abstract, I'm just making this up, like Azure functions and GCP functions so that you can sort of run them, you know, with a consistent experience? What exactly are you abstracting and how difficult was it to achieve that objective technically? >> That's a good question. What we're abstracting is the Kubernetes node construct. That is to say, a cluster of Kubernetes nodes, which are typically VMs although they can run bare metal in certain contexts, is something that typically requires knowledge of the underlying cloud provider to stand up. So for example, with GCP, you would use GKE to set up a Kubernetes cluster, and in Azure, you'd use AKS.
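A provider-agnostic layer of the general shape being described might look roughly like the sketch below: one interface, with the GKE, AKS, and private-cloud specifics hidden behind it. The class and method names are invented for illustration and simply stand in for whatever provider tooling does the real work; this is not Walmart's implementation.

```python
from abc import ABC, abstractmethod

class ClusterProvisioner(ABC):
    """One contract for standing up a Kubernetes cluster, whatever the provider."""

    @abstractmethod
    def create_cluster(self, name: str, region: str, nodes: int) -> str:
        """Create a cluster and return an endpoint or identifier for it."""

class GKEProvisioner(ClusterProvisioner):
    def create_cluster(self, name, region, nodes):
        # The real version would drive GKE; the provider call is elided in this sketch.
        return f"gke://{region}/{name} ({nodes} nodes)"

class AKSProvisioner(ClusterProvisioner):
    def create_cluster(self, name, region, nodes):
        # The real version would drive AKS; the provider call is elided in this sketch.
        return f"aks://{region}/{name} ({nodes} nodes)"

class PrivateCloudProvisioner(ClusterProvisioner):
    def create_cluster(self, name, region, nodes):
        # The real version would drive the OpenStack-based private cloud.
        return f"openstack://{region}/{name} ({nodes} nodes)"

def stand_up(provisioner: ClusterProvisioner) -> str:
    # Application teams code against the interface, not against a particular cloud.
    return provisioner.create_cluster("checkout-service", "us-central", nodes=12)

print(stand_up(GKEProvisioner()))   # swap in AKSProvisioner() or PrivateCloudProvisioner()
```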
We are actually abstracting that aspect of things so that the developers standing up applications don't have to know what the underlying cluster management provider is. They don't have to know if it's GKE, AKS, or our own Walmart private cloud. Now, in terms of functions like Azure functions that you've mentioned there, we haven't done that yet. That's another piece that we have sort of on our radar screen that we'd like to get to, a serverless approach, and the Knative work from Google and the Azure functions, those are things that we see good opportunity to use for a whole variety of use cases. But right now we're not doing much with that. We're strictly container based right now, and we do have some VMs that are running in sort of more of a traditional model. So our stateful workloads are primarily VM based, but for serverless, that's an opportunity for us to take some of these stateless workloads and turn them into cloud functions. >> Well, and that's another cost lever that you can pull down the road that's going to drop right to the bottom line. Do you see a day, or maybe you're doing it today, but I'd be surprised, where you build applications that actually span multiple clouds, or is there, in your view, always going to be a direct one-to-one mapping between where an application runs and the specific cloud platform? >> That's a really great question. Well, yes and no. So today, application development teams choose a cloud provider to deploy to and a location to deploy to, and they have to get involved in moving an application like we talked about today. That said, the bursting capability that I mentioned previously is something that is a step in the direction of automatic migration. That is to say, we're migrating workloads to different locations automatically. Currently, the prototypes we've been developing, and that we think are going to eventually make their way into production, are leveraging Istio to assess the incoming load on a particular cluster and start shedding that load into a different location. Right now, the configuration of that is still manual, but there's another opportunity for automation there. And I think a key piece of this is that down the road, well, that's sort of a small step in the direction of an application being multi-provider. We expect to see really an abstraction of the fact that there is a triplet even. So the workloads are moving around according to whatever the control plane decides is necessary based on a whole variety of inputs. And at that point, you will have true multi-cloud applications, applications that are distributed across the different providers in a way that application developers don't have to think about. >> So Walmart's been a leader, Jack, in using data for competitive advantage for decades. It's kind of been a poster child for that. You've got a mountain of IP in the form of data, tools, applications, best practices that until the cloud came out was all on-prem. But I'm really interested in this idea of building a Walmart ecosystem, which obviously you have. Do you see a day, or maybe you're even doing it today, where you take what we call the Walmart SuperCloud, WCNP in your words, and point or turn that toward an external world or your ecosystem, you know, supporting those partners or customers that could drive new revenue streams, you know, directly from the platform? >> Great questions, Dave. So there's really two things to say here. The first is that with respect to data, our data workloads are primarily VM based.
I've mentioned before some VMware, some straight OpenStack. But the key here is that WCNP and Kubernetes are very powerful for stateless workloads, but for stateful workloads, they tend to still be climbing a bit of a growth curve in the industry. So our data workloads are not primarily based on WCNP. They're VM based. Now that said, there is opportunity to make some progress there, and we are looking at ways to move things into containers that are currently running in VMs which are stateful. The other question you asked is related to how we expose data to third parties, and also functionality. Right now we do have in-house, for our own use, a very robust data architecture, and we have followed the sort of domain-oriented data architecture guidance from Martin Fowler. And we have data lakes in which we collect data from all the transactional systems, and which we can then use, and do use, to build models which are then used in our applications. But right now we're not exposing the data directly to customers as a product. That's an interesting direction that's been talked about and may happen at some point, but right now that's internal. What we are exposing to customers is applications. So we're offering our global integrated fulfillment capabilities, our order picking and curbside pickup capabilities, and our cloud-powered checkout capabilities to third parties. And this means we're standing up our own internal applications as externally facing SaaS applications which can serve our partners' customers. >> Yeah, of course, Martin Fowler really first introduced to the world Zhamak Dehghani's data mesh concept and this whole idea of data products and domain-oriented thinking. Zhamak Dehghani, by the way, is a speaker at our event as well. Last question I had is edge, and how do you think about the edge? You know, the stores are an edge. Are you putting resources there that sort of mirror this triplet model? Or is it better to consolidate things in the cloud? I know there are trade-offs in terms of latency. How are you thinking about that? >> All really good questions. It's a challenging area, as you can imagine, because edges are subject to disconnection, right? Or reduced connection. So we do place the same architecture at the edge. So WCNP runs at the edge, and an application that's designed to run at WCNP can run at the edge. That said, there are a number of very specific considerations that come up when running at the edge, such as the possibility of disconnection or degraded connectivity. And so one of the challenges we have faced, and have grappled with and done a good job of, I think, is dealing with the fact that applications go offline and come back online and have to reconnect and resynchronize; the sort of online/offline capability is something that can be quite challenging. And we have a couple of application architectures that sort of form the two core sets of patterns that we use. One is an offline/online synchronization architecture where we discover that we've come back online, and we understand the differences between the online dataset and the offline dataset and how they have to be reconciled. The other is a message-based architecture. And here in our health and wellness domain, we've developed applications that are queue based. So they're essentially business processes that consist of multiple steps where each step has its own queue.
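As a rough sketch of that queue-per-step pattern (illustrative only; the step names, flags, and budget are assumptions, not Walmart's code), each step of the process owns its own queue and is tagged by how latency sensitive it is, so that a constrained link can be spent on the sensitive steps first:

```python
from collections import deque

class Step:
    def __init__(self, name: str, latency_sensitive: bool):
        self.name = name
        self.latency_sensitive = latency_sensitive
        self.queue = deque()              # each step has its own queue

    def enqueue(self, message: dict) -> None:
        self.queue.append(message)

# A simplified multi-step business process.
process = [
    Step("capture-order", latency_sensitive=True),
    Step("verify-insurance", latency_sensitive=False),
    Step("notify-customer", latency_sensitive=False),
]

def drain(steps, message_budget: int) -> int:
    """Spend a limited transmission budget on latency-sensitive steps first;
    the other queues simply grow and catch up when bandwidth returns."""
    sent = 0
    for step in sorted(steps, key=lambda s: not s.latency_sensitive):
        while step.queue and sent < message_budget:
            step.queue.popleft()          # stand-in for actually transmitting
            sent += 1
    return sent
```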
And what that allows us to do is devote whatever bandwidth we do have to those pieces of the process that are most latency sensitive, and allow the queue lengths to increase in parts of the process that are not latency sensitive, knowing that they will eventually catch up when the bandwidth is restored. And to put that in a little bit of context, we have fiber links to all of our locations, and we have, I'll just use a round number, 10-ish thousand locations. It's larger than that, but that's the ballpark, and we have fiber to all of them, but the fiber does get disconnected on a regular basis. In fact, I forget the exact number, but some several dozen locations get disconnected daily just by virtue of the fact that there's construction going on and things are happening in the real world. When the disconnection happens, we're able to fall back to 5G and to Starlink. Starlink is preferred; it's higher bandwidth. 5G if that fails. But in each of those cases, the bandwidth drops significantly. And so the applications have to be intelligent about throttling back the traffic that isn't essential, so that they can push the essential traffic in those lower bandwidth scenarios. >> So much technology to support this amazing business which started in the early 1960s. Jack, unfortunately, we're out of time. I would love to have you back, or some members of your team, and drill into how you're using open source, but really thank you so much for explaining the approach that you've taken and participating in SuperCloud2. >> You're very welcome, Dave, and we're happy to come back and talk about other aspects of what we do. For example, we could talk more about the data lakes and the data mesh that we have in place. We could talk more about the directions we might go with serverless. So please look us up again. Happy to chat. >> I'm going to take you up on that, Jack. All right. This is Dave Vellante for John Furrier and the Cube community. Keep it right there for more action from SuperCloud2. (upbeat music)
Tim Yocum, Influx Data | Evolving InfluxDB into the Smart Data Platform
(soft electronic music) >> Okay, we're back with Tim Yocum who is the Director of Engineering at InfluxData. Tim, welcome, good to see you. >> Good to see you, thanks for having me. >> You're really welcome. Listen, we've been covering opensource software on theCUBE for more than a decade and we've kind of watched the innovation from the big data ecosystem, the cloud being built out on opensource, mobile, social platforms, key databases, and of course, InfluxDB. And InfluxData has been a big consumer and contributor of opensource software. So my question to you is where have you seen the biggest bang for the buck from opensource software? >> So yeah, you know, Influx really, we thrive at the intersection of commercial services and opensource software, so OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services, templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants. And like you've mentioned, even better, we contribute a lot back to the projects that we use, as well as our own product InfluxDB. >> But I got to ask you, Tim, because one of the challenges that we've seen, in particular, you saw this in the heyday of Hadoop, the innovations come so fast and furious, and as a software company, you got to place bets, you got to commit people, and sometimes those bets can be risky and not pay off. So how have you managed this challenge? >> Oh, it moves fast, yeah. That's a benefit, though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is we fail fast and fail often; we try a lot of things. You know, you look at Kubernetes, for example. That ecosystem is driven by thousands of intelligent developers, engineers, builders. They're adding value every day, so we have to really keep up with that. And as the stack changes, we try different technologies, we try different methods. And at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >> So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts, seeing the most significant adoption and velocity, particularly along with cloud. But really, Kubernetes is just, you know, still up and to the right consistently, even with the macro headwinds and all of the other stuff that we're sick of talking about. So what do you do with Kubernetes in the platform? >> Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code. So our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google, and figure out how to deliver services on those three clouds with all of their differences.
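One way to picture "managing that in code" is a single definition rendered once per provider and region, so the same service lands everywhere with identical settings. The snippet below is only a sketch of the idea; the region list, version, and fields are assumptions, not InfluxData's actual automation.

```python
PROVIDERS = ["aws", "azure", "gcp"]
REGIONS = ["us-east", "us-west", "eu-central"]   # assumed regions for the example

def cluster_definition(provider: str, region: str) -> dict:
    # One template, rendered identically for every provider/region pair,
    # so operators learn a single deployment model instead of three.
    return {
        "name": f"influx-cloud-{provider}-{region}",
        "provider": provider,
        "region": region,
        "kubernetes_version": "1.27",            # pinned once, rolled out everywhere
        "node_pools": [{"name": "general", "min": 3, "max": 30}],
    }

fleet = [cluster_definition(p, r) for p in PROVIDERS for r in REGIONS]
print(len(fleet))   # 9 consistent cluster definitions from one piece of code
```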
>> Just a followup on that, is it now, so I presume it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, wherever. Is that correct? >> Yeah, so we've basically built more or less platform engineering, as the new hot phrase goes. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx cloud. >> And I know I'm taking a little bit of a tangent, but is that, I'll call it a PaaS layer, if I can use that term, are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value-add, or is it pretty much generic? >> So we really build, we look at things through a build versus buy lens. Some things we want to leverage from cloud provider services, for instance, Postgres databases for metadata, perhaps. Get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can, as an SRE group, as an ops team, manage with very few people, really, and we can stamp out clusters across multiple regions in no time. >> So sometimes you build, sometimes you buy it. How do you make those decisions and what does that mean for the platform and for customers? >> Yeah, so what we're doing is, it's like everybody else would do. We're looking for trade-offs that make sense. We really want to protect our customers' data, so we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team and of course, for our customers; you don't even see that. But we don't want to try to reinvent the wheel, like I had mentioned with SQL datasource for metadata, perhaps. Let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering and we can help our developers then focus on the InfluxData software, the Influx cloud software. >> So take it to the customer level. What does it mean for them, what's the value that they're going to get out of all these innovations that we've been talking about today, and what can they expect in the future? >> So first of all, people who use the OSS product are really going to be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over four billion series keys that people have stored, so there's a proven ability to scale. Now in terms of the opensource software and how we've developed the platform, you're getting a highly available, high-cardinality time-series platform. We manage it and really, as I had mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in realtime. We deploy to our platform every day, repeatedly, all the time.
And it's that continuous deployment that allows us to continue testing things in flight, rolling out things that change: new features, better ways of doing deployments, safer ways of doing deployments. All of that happens behind the scenes, and like we had mentioned earlier, Kubernetes, I mean, that allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx cloud platform, you really are able to take advantage of new features immediately. We roll things out every day and as those things go into production, you have the ability to use them. And so in the end, we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let us do that for you. >> That makes sense. Are the innovations that we're talking about in the evolution of InfluxDB, do you see that as sort of a natural evolution for existing customers? Is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >> Yeah, it really is. It's a little bit of both. Any engineer will say, "Well, it depends." So cloud-native technologies are really the hot thing, IoT, industrial IoT especially. People want to just shove tons of data out there and be able to do queries immediately, and they don't want to manage infrastructure. What we've started to see are people that use the cloud service as their datastore backbone and then they use edge computing with our OSS product to ingest data from, say, multiple production lines, and down-sample that data, send the rest of that data off to Influx cloud where the heavy processing takes place. So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data, have us take care of that. And, of course, as we change the platform, end users benefit from that immediately. >> And so obviously you've taken away a lot of the heavy lifting for the infrastructure. Would you say the same things about security, especially as you go out to IoT at the edge? How should we be thinking about the value that you bring from a security perspective? >> We take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data that we store is kept private. It's, of course, always a concern; you see in the news all the time companies being compromised. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You look at things like software bill of materials; if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that, you know, as we use new tools. That's something, that's just part of our jobs, to make sure that the platform that we're running has fully vetted software. And you know, with opensource especially, that's a lot of work, and so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >> And that's key, especially when you start getting into, you know, what we talk about with IoT and the operations technologies, the engineers running that infrastructure.
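As a minimal sketch of the edge pattern described above (down-sample locally with the OSS product, forward only the aggregates to the cloud), assuming the influxdb-client Python package; the URLs, tokens, org, bucket, and measurement names are placeholders, not real endpoints:

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Local OSS instance at the edge holds the raw, high-rate sensor data.
edge = InfluxDBClient(url="http://localhost:8086", token="EDGE_TOKEN", org="edge-org")
# Managed cloud account receives only the down-sampled series.
cloud = InfluxDBClient(url="https://cloud.example.com", token="CLOUD_TOKEN", org="acme")

# Flux query: one-minute means over the last five minutes of raw readings.
downsample = '''
from(bucket: "production_lines")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "line_sensor")
  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
'''

writer = cloud.write_api(write_options=SYNCHRONOUS)
for table in edge.query_api().query(downsample):
    for rec in table.records:
        point = (Point("line_sensor_1m")
                 .field(rec.get_field(), rec.get_value())
                 .time(rec.get_time()))
        writer.write(bucket="downsampled", record=point)
```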
You know, historically, as you know, Tim, they would air gap everything; that's how they kept it safe. But that's not feasible anymore. Everything's-- >> Can't do that. >> connected now, right? And so you've got to have a partner that, again, takes away that heavy lifting so you can focus your R&D on some of the other activities. All right, give us the last word and the key takeaways from your perspective. >> Well, you know, from my perspective, I see it as a two-lane approach, with Influx, with any time-series data. You've got a lot of stuff that you're going to run on-prem. What you had mentioned, air gapping? Sure, there's plenty of need for that. But at the end of the day, people that don't want to run big datacenters, people that want to entrust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud. The cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >> Tim, really appreciate you coming on the program. Great stuff, good to see you. >> Thanks very much, appreciate it. >> Okay, in a moment, I'll be back to wrap up today's session. You're watching theCUBE. (soft electronic music)
Evolving InfluxDB into the Smart Data Platform
>>This past May, The Cube in collaboration with Influx data shared with you the latest innovations in Time series databases. We talked at length about why a purpose built time series database for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may, you may remember the time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IOT equipment. A time series databases have had to evolve to efficiently support realtime data in emerging use cases in iot T and other use cases. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving Influx DB into the smart Data platform, made possible by influx data and produced by the Cube. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with Time series data generally, and specifically how Influx DB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IOT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of Influx DB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, implementation of Apache Arrow, the use of par k and tooling such as data fusion, which powering a new engine for Influx db. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anna East Dos Georgio, who is a developer advocate at In Flux Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the Influx DB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the Influx DB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of i t and emerging Technology at Influx State of Bryan. Welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why Influx db, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market. 
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that, for now, our goal is to get our latest and greatest and our best to everybody over time, of course. You know, one of the things we had to do here was double down on sort of our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub and, you know, can inspect it and even try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we are committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >> And so just, you know, being careful, maybe a little cautious in terms of how big we go with this right away, just sort of both limits, you know, the risk of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >> Yeah, that makes a lot of sense. And you can do some experimentation and, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >> Well, I mean, I think foundationally we built the new core on Rust. You know, this is a new, very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well, even if it does find error conditions. I mean, we've loved working with Go and, you know, a lot of our libraries will continue to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, for that power, performance, and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our time series merge trees, this is a big break from that, you know, Arrow on the sort of in-memory side and then Parquet on the on-disk side. >> It allows us to present, you know, a unified set of APIs for those really fast real time queries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem sort of popping up around Parquet in terms of the machine learning community, you know, and getting that all to work, we had to glue it together with Arrow Flight. That's sort of what we're using as our RPC component. You know, it handles the orchestration and the transportation of the columnar data.
Now we're moving to a true columnar database model for this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, highly optimized for both streaming micro-batch and then batches, but true streaming as well. >> Yeah. Again, I mean, it's funny you mentioned Rust. It's been around for a long time, but its popularity is, you know, really starting to hit that steep part of the S-curve. And we're gonna dig into more of that, but give us, is there anything else that we should know about, Brian? Give us the last word. >> Well, I mean, I think first I'd like everybody sort of watching just to take a look at what we're offering in terms of early access and beta programs. I mean, if you wanna participate or if you wanna work sort of in terms of early access with the new engine, please reach out to the team. I'm sure, you know, there's a lot of communications going out and, you know, it'll be highly featured on our website, but reach out to the team; believe it or not, we have a lot more going on than just the new engine. And so there are also other programs, things we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to, because we can flip a lot of stuff on, especially in cloud, through feature flags. >> But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features, you would give us continuous feedback on these products and services, not only what you need today, but then what you'll need tomorrow to sort of build the next versions of your business. Because, you know, the whole database, the ecosystem as it expands out into, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, it's gonna be what we all make it together, not just, you know, those of us who are employed by InfluxDB. And then finally I would just say, please watch Anais's and Tim's sessions. These are two of our best and brightest; they're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there are no better takes, honestly, on the sort of technical details of this, especially when it comes to the value that these investments will bring to our customers and our communities. So I encourage you to, you know, pay more attention to them than you did to me, for sure. >> Brian Gilmore, great stuff. Really appreciate your time. Thank you. >> Yeah, thanks Dave. It was awesome. Look forward to it. >> Yeah, me too. Looking forward to seeing how the community actually applies these new innovations and goes beyond just the historical into the real time, really hot area. As Brian said, in a moment I'll be right back with Anais Dotis-Georgiou to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parquet, and DataFusion. Keep it right there. You don't wanna miss this.
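[Editor's note: To make the in-memory side of the architecture Brian describes a little more concrete, here is a minimal, hypothetical sketch of building an Arrow record batch in Rust with the arrow crate (arrow-rs, the Rust implementation of Apache Arrow). The column names, timestamps, and temperature values are invented purely for illustration; this is not code from the InfluxDB engine itself, and the exact API surface depends on the arrow-rs version you pull in.]

    use std::sync::Arc;
    use arrow::array::{ArrayRef, Float64Array, TimestampNanosecondArray};
    use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
    use arrow::record_batch::RecordBatch;

    fn main() -> Result<(), arrow::error::ArrowError> {
        // One Arrow column per field: timestamps and a temperature reading.
        let schema = Arc::new(Schema::new(vec![
            Field::new("time", DataType::Timestamp(TimeUnit::Nanosecond, None), false),
            Field::new("room_temp", DataType::Float64, false),
        ]));

        // Each column is a single contiguous array in memory.
        let times = TimestampNanosecondArray::from(vec![1_000_000_000_i64, 2_000_000_000, 3_000_000_000]);
        let temps = Float64Array::from(vec![21.0, 21.0, 21.1]);

        // A RecordBatch is the columnar, in-memory unit a query engine works on.
        let columns: Vec<ArrayRef> = vec![Arc::new(times), Arc::new(temps)];
        let batch = RecordBatch::try_new(schema, columns)?;
        println!("{} rows x {} columns", batch.num_rows(), batch.num_columns());
        Ok(())
    }

Each column living in its own contiguous array is the property that the in-memory query path and the Parquet files on disk both exploit, and it is what Arrow Flight then ships between processes without re-serializing row by row.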
>> Time series data is everywhere. The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it. >> And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >> Okay, we're back. I'm Dave Valante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis-Georgiou is here; she's a developer advocate for Influx Data, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >> Hi, thank you so much. It's a pleasure to be here. >> Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, you store files in object storage, so you've got a very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >> Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best in class performance on analytics queries, in addition to our already well served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Another really important part is the ability to have bulk data export and import, super useful.
Also, broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >> Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. The adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >> Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and it has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main class of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And Rust's packaging system, crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it just has all the fine grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >> Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you know, you see things like, you know, in the old days and even today you do a lot of garbage collection in these systems, and there's an inverse, you know, impact relative to performance. So it looks like, you know, the community is modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain what Arrow is and what it brings to InfluxDB. >> Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >> So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will neighbor each other, and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points from that one column in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage. >> So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from. >> Okay. So you basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about is really, you know, kind of native. Is it not as effective, is the format not as effective, because it's largely a bolt-on? Can you elucidate on that front? >> Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >> Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >> Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps in InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB IOx, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a Pandas API so that you can take advantage of Pandas DataFrames as well, and all of the machine learning tools associated with Pandas. >> Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >> Sure. So Parquet is the column-oriented durable file format.
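[Editor's note: To make the row-versus-column distinction concrete, here is a small, self-contained Rust sketch; the struct names, fields, and readings are invented for illustration and are not InfluxDB's actual storage layout. Parquet, which the conversation turns to next, applies the same column-oriented idea to the durable file on disk.]

    // Row-oriented: one struct per reading; scanning a single field drags
    // every other field through memory along the way.
    #[allow(dead_code)]
    struct RowReading { time: i64, room_temp: f64, stove_temp: f64 }

    // Column-oriented: each field is its own contiguous vector, so a min/max
    // over room_temp only touches that one column, and long runs of equal
    // values sit next to each other and compress cheaply.
    struct ColumnReadings { time: Vec<i64>, room_temp: Vec<f64>, stove_temp: Vec<f64> }

    fn room_min_max(cols: &ColumnReadings) -> Option<(f64, f64)> {
        cols.room_temp.iter().copied().fold(None, |acc, v| match acc {
            None => Some((v, v)),
            Some((lo, hi)) => Some((lo.min(v), hi.max(v))),
        })
    }

    fn main() {
        let cols = ColumnReadings {
            time: vec![1, 2, 3],
            room_temp: vec![21.0, 21.0, 21.1],      // mostly-constant values compress well
            stove_temp: vec![180.0, 182.5, 181.0],
        };
        println!("{:?}", room_min_max(&cols));       // Some((21.0, 21.1))
        let _row = RowReading { time: 1, room_temp: 21.0, stove_temp: 180.0 };
    }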
So it's important because it'll enable bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >> Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >> Sure. So Influx Data has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support of additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >> Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >> So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard work, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >> Yeah, that's awesome. You guys have a really rich community; collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >> Thank you. I really appreciate it. >> All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yokum. He's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
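[Editor's note: As a rough illustration of how a DataFusion-based engine can expose plain SQL over Parquet files, here is a hypothetical Rust sketch using the datafusion crate's SessionContext API. It assumes a reasonably recent DataFusion release and a tokio async runtime; the table name, file path, and column are made up, and this is not the InfluxDB IOx code itself.]

    use datafusion::prelude::*;

    #[tokio::main]
    async fn main() -> datafusion::error::Result<()> {
        let ctx = SessionContext::new();

        // Register a Parquet file of sensor readings as a queryable table.
        // "readings.parquet" is a placeholder path.
        ctx.register_parquet("readings", "readings.parquet", ParquetReadOptions::default())
            .await?;

        // Run an ordinary SQL query over the columnar data.
        let df = ctx
            .sql("SELECT min(room_temp), max(room_temp) FROM readings")
            .await?;
        df.show().await?;
        Ok(())
    }

Because DataFusion reads Arrow batches straight out of the Parquet column chunks, a min/max like this only scans the one column it needs, which is the same property described in the row-versus-column discussion above.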
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an slo, the product we're providing to our customers as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve it. As we've continued to grow, I'm really happy we have influx data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in the cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been being built out on open source, mobile, social platforms, key databases, and of course influx DB and influx data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, influx really, we thrive at the intersection of commercial services and open, so open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service from our core storage engine technologies to web services temping engines. Our, our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product influx db. >>You know, but I gotta ask you, Tim, because one of the challenge that that we've seen in particular, you saw this in the heyday of Hadoop, the, the innovations come so fast and furious and as a software company you gotta place bets, you gotta, you know, commit people and sometimes those bets can be risky and not pay off well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that, that's a benefit though because it, the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we, what we tend to do is, is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's, it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research etr, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes is one of the areas that has kind of, it's been off the charts and seen the most significant adoption and velocity particularly, you know, along with cloud. But, but really Kubernetes is just, you know, still up until the right consistently even with, you know, the macro headwinds and all, all of the stuff that we're sick of talking about. But, so what are you doing with Kubernetes in the platform? >>Yeah, it, it's really central to our ability to run the product. When we first started out, we were just on AWS and, and the way we were running was, was a little bit like containers junior. Now we're running Kubernetes everywhere at aws, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, is it, no. So I presume it's sounds like there's a PAs layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever is that, is that correct? >>Yeah, so we've basically built more or less platform engineering, This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on and they only have to learn one way of deploying their application, managing their application. And so that, that just gets all of the underlying infrastructure out of the way and, and lets them focus on delivering influx cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but is that, that, I'll call it a PAs layer if I can use that term. Is that, are there specific attributes to Influx db or is it kind of just generally off the shelf paths? You know, are there, is, is there any purpose built capability there that, that is, is value add or is it pretty much generic? >>So we really build, we, we look at things through, with a build versus buy through a, a build versus by lens. Some things we want to leverage cloud provider services, for instance, Postgres databases for metadata, perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can, can deliver on that has consistency that is, is all generated from code that we can as a, as an SRE group, as an ops team, that we can manage with very few people really, and we can stamp out clusters across multiple regions and in no time. >>So how, so sometimes you build, sometimes you buy it. How do you make those decisions and and what does that mean for the, for the platform and for customers? >>Yeah, so what we're doing is, it's like everybody else will do, we're we're looking for trade offs that make sense. You know, we really want to protect our customers data. 
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what of these three large cloud providers have already perfected. And we can then focus on our platform engineering and we can have our developers then focus on the influx data, software, influx, cloud software. >>So take it to the customer level, what does it mean for them? What's the value that they're gonna get out of all these innovations that we've been been talking about today and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across, over 4 billion series keys that people have stored. So there's a proven ability to scale now in terms of the open source, open source software and how we've developed the platform. You're getting highly available high cardinality time series platform. We manage it and, and really as, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day repeatedly all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean that, that allows us to get that done. We couldn't do it without having that platform as a, as a base layer for us to then put our software on. So we, we iterate quickly. When you're on the, the Influx cloud platform, you really are able to, to take advantage of new features immediately. We roll things out every day and as those things go into production, you have, you have the ability to, to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let, let us do that for you. So, >>And that makes sense, but so is the, is the, are the innovations that we're talking about in the evolution of Influx db, do, do you see that as sort of a natural evolution for existing customers? I, is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is it, it's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are, are really the hot thing. Iot, industrial iot especially, people want to just shove tons of data out there and be able to do queries immediately and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their, their data store backbone and then they use edge computing with R OSS product to ingest data from say, multiple production lines and downsample that data, send the rest of that data off influx cloud where the heavy processing takes place. 
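[Editor's note: As a toy illustration of the edge pattern Tim describes, downsample locally and ship only the summary to the cloud, here is a minimal, self-contained Rust sketch. The window size, sampling rate, and values are invented, and a real deployment would lean on Influx's own tooling rather than hand-rolled code like this.]

    // A toy fixed-window downsampler: average every `window` raw readings
    // into one point before sending it upstream.
    fn downsample(raw: &[f64], window: usize) -> Vec<f64> {
        raw.chunks(window)
            .map(|chunk| chunk.iter().sum::<f64>() / chunk.len() as f64)
            .collect()
    }

    fn main() {
        // Pretend these came from a production-line sensor at 1 Hz.
        let raw: Vec<f64> = (0..10).map(|i| 20.0 + i as f64 * 0.1).collect();
        // Keep one averaged point per 5 raw readings for the cloud tier.
        let summarized = downsample(&raw, 5);
        println!("{} raw -> {} downsampled: {:?}", raw.len(), summarized.len(), summarized);
    }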
So really us being in all the different clouds and iterating on that and being in all sorts of different regions allows for people to really get out of the, the business of man trying to manage that big data, have us take care of that. And of course as we change the platform end users benefit from that immediately. And, >>And so obviously taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IOT and the Edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take, we take security super seriously. It, it's built into our dna. We do a lot of work to ensure that our platform is secure, that the data we store is, is kept private. It's of course always a concern. You see in the news all the time, companies being compromised, you know, that's something that you can have an entire team working on, which we do to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software, bill of materials, if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that, that's just part of our jobs to make sure that the platform that we're running it has, has fully vetted software and, and with open source especially, that's a lot of work. And so it's, it's definitely new territory. Supply chain attacks are, are definitely happening at a higher clip than they used to, but that is, that is really just part of a day in the, the life for folks like us that are, are building platforms. >>Yeah, and that's key. I mean especially when you start getting into the, the, you know, we talk about IOT and the operations technologies, the engineers running the, that infrastructure, you know, historically, as you know, Tim, they, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >>That >>Connected now, right? And so you've gotta have a partner that is again, take away that heavy lifting to r and d so you can focus on some of the other activities. Right. Give us the, the last word and the, the key takeaways from your perspective. >>Well, you know, from my perspective I see it as, as a a two lane approach with, with influx, with Anytime series data, you know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gaping. Sure there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want torus their data to, to a company that's, that's got a full platform set up for them that they can build on, send that data over to the cloud, the cloud is not going away. I think more hybrid approach is, is where the future lives and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up. Today's session, you're watching The Cube. >>Are you looking for some help getting started with InfluxDB Telegraph or Flux Check >>Out Influx DB University >>Where you can find our entire catalog of free training that will help you make the most of your time series data >>Get >>Started for free@influxdbu.com. >>We'll see you in class. 
>>Okay, so we heard today from three experts on time series and data, how the Influx DB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow and the Rust Programming environment Data fusion par K are being leveraged to support realtime data analytics at scale. We also learned about the contributions in importance of open source software and how the Influx DB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember these sessions, they're all available on demand. You can go to the cube.net to find those. Don't forget to check out silicon angle.com for all the news related to things enterprise and emerging tech. And you should also check out influx data.com. There you can learn about the company's products. You'll find developer resources like free courses. You could join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Valante. Thank you for watching Evolving Influx DB into the smart data platform, made possible by influx data and brought to you by the Cube, your leader in enterprise and emerging tech coverage.
You know, I think we want to emphasize that, for now, our goal is to get our latest and greatest and our best to everybody over time, of course. You know, one of the things we had to do here was double down on sort of our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub and, you know, can inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of how big we go with this right away, just sort of both limits, you know, the risk of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have like meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, use the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >>Well, I mean, I think foundationally we built the new core on Rust. You know, this is a very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well, and if it does find error conditions... I mean, we've loved working with Go and, you know, a lot of our libraries will continue to be sort of implemented in Go, but, you know, when it came to this particular new engine, you know, that power, performance and stability, Rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our time-structured merge trees, this is a big break from that, you know, Arrow on the sort of in-memory side and then Parquet on the on-disk side. >>It allows us to present, you know, a unified set of APIs for those really fast real time queries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem sort of popping up around Parquet in terms of the machine learning community, you know. And getting that all to work, we had to glue it together with Arrow Flight. That's sort of what we're using as our RPC component. You know, it handles the orchestration and the transportation of the columnar data. 
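To ground the pieces Brian names here, a minimal sketch follows using the open source pyarrow library. It is illustrative only, not InfluxDB engine code: a columnar table held in memory as Arrow, moved with Arrow's IPC stream format (the same columnar transport idea Arrow Flight's RPC builds on), and persisted to disk as Parquet. The column names and values are invented for the example.

```python
# A sketch of the building blocks named above, using the open source pyarrow
# library rather than InfluxDB's engine code; column names and values invented.
import pyarrow as pa
import pyarrow.ipc as ipc
import pyarrow.parquet as pq

# A columnar table held in memory in the Arrow format.
readings = pa.table({
    "time": pa.array([1_000, 2_000, 3_000], type=pa.timestamp("ms")),
    "sensor": ["boiler", "boiler", "boiler"],
    "temp_c": [71.2, 71.3, 71.1],
})

# Arrow's IPC stream moves the columnar batches between processes with very
# little serialization overhead -- the same idea Arrow Flight builds on.
sink = pa.BufferOutputStream()
with ipc.new_stream(sink, readings.schema) as writer:
    writer.write_table(readings)
received = ipc.open_stream(sink.getvalue()).read_all()

# Parquet is the durable on-disk columnar format for the historical archive.
pq.write_table(received, "readings.parquet", compression="zstd")
print(pq.read_table("readings.parquet").schema)
```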
Now we're moving to like a true columnar database model for this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, highly optimized for both streaming micro-batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It's been around for a long time, but its popularity is, you know, really starting to hit that steep part of the S-curve. And we're gonna dig into more of that, but is there anything else that we should know about, Brian? Give us the last word. >>Well, I mean, I think first I'd like everybody sort of watching just to take a look at what we're offering in terms of early access and beta programs. I mean, if you wanna participate or if you wanna work sort of in terms of early access with the new engine, please reach out to the team. I'm sure, you know, there's a lot of communications going out and, you know, it'll be highly featured on our website, you know, but reach out to the team. Believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to, because we can flip a lot of stuff on, especially in cloud, through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to sort of build the next versions of your business. Because, you know, the whole database, the ecosystem as it expands out into, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who are employed by InfluxDB. And then finally I would just say please, like, watch Anais' and Tim's sessions. These are two of our best and brightest. They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like, honestly, on the sort of technical details of this than theirs, especially when it comes to, like, the value that these investments will bring to our customers and our communities. So I encourage you to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to seeing how the community actually applies these new innovations and goes beyond just the historical into the real time, really hot area. As Brian said, in a moment I'll be right back with Anais Dotis-Georgiou to dig into the critical aspects of key open source components of the InfluxDB engine, including Rust, Arrow, Parquet, and DataFusion. Keep it right there. You don't wanna miss this. >>Time series data is everywhere. 
The number of sensors, systems and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Vellante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis-Georgiou is here. She's a developer advocate for InfluxData, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, and you store files in object storage, so you got a very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching and query processing. Some other really important parts are the ability to have bulk data export and import, which is super useful. 
Also broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >>Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. It's really, the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and it has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its, like, innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust, like, helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And also Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and to ensure thread-safe async caching structures as well. So essentially it just, like, has all the control, all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you know, you see things like, you know, in the old days, and even today, you do a lot of garbage collection in these systems and there's an inverse, you know, impact relative to performance. So it looks like you really, you know, the community is modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why: what is Arrow and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data. And so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. 
And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other, and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, like, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand different points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data format framework. So that's where a lot of the advantages come from. >>Okay. So you basically described like a traditional database, a row approach, but I've seen like a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about is really, you know, kind of native. Is it not as effective? Is the format not as effective because it's largely a bolt-on? Can you, like, elucidate on that front? >>Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. Yeah. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. So the way that it helps in InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query process and transformation of that data. It also has a Pandas API, so that you could take advantage of Pandas data frames as well and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform, 'cause we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format. 
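Before the Parquet discussion continues, here is a small sketch of the columnar scan just described, using the open source pyarrow library. It is illustrative only, not IOx code, and the room and stove values are made up; it shows why a mostly-constant column compresses cheaply and why a min/max query only has to touch one column.

```python
# Illustrative only (not IOx internals): the room/stove example, in pyarrow.
# The room temperature barely changes, so neighboring values repeat, which is
# what makes columnar compression cheap; min/max touches just one column.
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

n = 1_000
table = pa.table({
    "time": pa.array(range(n), type=pa.timestamp("s")),
    "room": ["kitchen"] * n,                      # tag: constant
    "room_temp_c": [21.0] * (n - 1) + [21.5],     # field: barely changes
    "stove_temp_c": [150.0 + (i % 5) for i in range(n)],
})

# Column-oriented scan: min and max room temperature across a thousand points,
# read from a single column instead of every field of every row.
mm = pc.min_max(table["room_temp_c"])
print(mm["min"].as_py(), mm["max"].as_py())

# The repeated, neighboring values compress very well when persisted.
pq.write_table(table, "temps.parquet", compression="zstd")
```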
So it's important because it'll enable bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxData first has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long term strategy here is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard work behind it, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours. They are on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for InfluxData, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this. 
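To tie the Parquet, DataFusion, and Pandas threads above together, a hedged sketch follows. It assumes the open source pyarrow package and the Apache Arrow DataFusion Python bindings (the `datafusion` package with `SessionContext`, `register_parquet`, and `sql`, as in that project's examples); it is not InfluxDB code, and the file, table, and column names are invented.

```python
# Hedged sketch, not InfluxDB code: compare the Parquet and CSV footprint of
# the same table, then query the Parquet file with DataFusion and hand the
# result to pandas. Assumes the pyarrow package and the `datafusion` Python
# bindings (SessionContext / register_parquet / sql); names are invented.
import os
import pyarrow as pa
import pyarrow.csv as pacsv
import pyarrow.parquet as pq
from datafusion import SessionContext

n = 100_000
table = pa.table({
    "time": pa.array(range(n), type=pa.timestamp("s")),
    "room_temp_c": [21.0] * n,
    "stove_temp_c": [150.0 + (i % 5) for i in range(n)],
})
pq.write_table(table, "temps.parquet", compression="zstd")
pacsv.write_csv(table, "temps.csv")
print("parquet bytes:", os.path.getsize("temps.parquet"))
print("csv bytes:    ", os.path.getsize("temps.csv"))

# DataFusion: an Arrow-native, Rust-based query engine pointed at the file.
ctx = SessionContext()
ctx.register_parquet("temps", "temps.parquet")
df = ctx.sql("SELECT min(stove_temp_c) AS lo, max(stove_temp_c) AS hi FROM temps")
print(df.to_pandas())  # straight into the pandas / machine learning ecosystem
```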
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an slo, the product we're providing to our customers as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve it. As we've continued to grow, I'm really happy we have influx data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in the cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been being built out on open source, mobile, social platforms, key databases, and of course influx DB and influx data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, influx really, we thrive at the intersection of commercial services and open, so open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service from our core storage engine technologies to web services temping engines. Our, our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product influx db. >>You know, but I gotta ask you, Tim, because one of the challenge that that we've seen in particular, you saw this in the heyday of Hadoop, the, the innovations come so fast and furious and as a software company you gotta place bets, you gotta, you know, commit people and sometimes those bets can be risky and not pay off well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that, that's a benefit though because it, the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we, what we tend to do is, is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's, it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research etr, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes is one of the areas that has kind of, it's been off the charts and seen the most significant adoption and velocity particularly, you know, along with cloud. But, but really Kubernetes is just, you know, still up until the right consistently even with, you know, the macro headwinds and all, all of the stuff that we're sick of talking about. But, so what are you doing with Kubernetes in the platform? >>Yeah, it, it's really central to our ability to run the product. When we first started out, we were just on AWS and, and the way we were running was, was a little bit like containers junior. Now we're running Kubernetes everywhere at aws, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, is it, no. So I presume it's sounds like there's a PAs layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever is that, is that correct? >>Yeah, so we've basically built more or less platform engineering, This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on and they only have to learn one way of deploying their application, managing their application. And so that, that just gets all of the underlying infrastructure out of the way and, and lets them focus on delivering influx cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but is that, that, I'll call it a PAs layer if I can use that term. Is that, are there specific attributes to Influx db or is it kind of just generally off the shelf paths? You know, are there, is, is there any purpose built capability there that, that is, is value add or is it pretty much generic? >>So we really build, we, we look at things through, with a build versus buy through a, a build versus by lens. Some things we want to leverage cloud provider services, for instance, Postgres databases for metadata, perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can, can deliver on that has consistency that is, is all generated from code that we can as a, as an SRE group, as an ops team, that we can manage with very few people really, and we can stamp out clusters across multiple regions and in no time. >>So how, so sometimes you build, sometimes you buy it. How do you make those decisions and and what does that mean for the, for the platform and for customers? >>Yeah, so what we're doing is, it's like everybody else will do, we're we're looking for trade offs that make sense. You know, we really want to protect our customers data. 
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what of these three large cloud providers have already perfected. And we can then focus on our platform engineering and we can have our developers then focus on the influx data, software, influx, cloud software. >>So take it to the customer level, what does it mean for them? What's the value that they're gonna get out of all these innovations that we've been been talking about today and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across, over 4 billion series keys that people have stored. So there's a proven ability to scale now in terms of the open source, open source software and how we've developed the platform. You're getting highly available high cardinality time series platform. We manage it and, and really as, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day repeatedly all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean that, that allows us to get that done. We couldn't do it without having that platform as a, as a base layer for us to then put our software on. So we, we iterate quickly. When you're on the, the Influx cloud platform, you really are able to, to take advantage of new features immediately. We roll things out every day and as those things go into production, you have, you have the ability to, to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let, let us do that for you. So, >>And that makes sense, but so is the, is the, are the innovations that we're talking about in the evolution of Influx db, do, do you see that as sort of a natural evolution for existing customers? I, is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is it, it's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are, are really the hot thing. Iot, industrial iot especially, people want to just shove tons of data out there and be able to do queries immediately and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their, their data store backbone and then they use edge computing with R OSS product to ingest data from say, multiple production lines and downsample that data, send the rest of that data off influx cloud where the heavy processing takes place. 
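As a rough illustration of the edge pattern Tim describes, collecting on the plant floor, downsampling locally, and shipping only aggregates to the cloud, here is a small pandas sketch. It is not a product feature or InfluxData reference code; the measurement, tag, and line names are invented, and in practice the resulting line protocol would be forwarded by a client library or Telegraf.

```python
# Rough sketch of the edge pattern described above, not a product feature:
# collect per-second readings locally, downsample to one-minute means, and
# forward only those aggregates (here rendered as InfluxDB line protocol) to
# the cloud via a client library or Telegraf. Names and values are invented.
import pandas as pd

idx = pd.date_range("2022-11-01", periods=3600, freq="s")  # one hour at 1 Hz
raw = pd.DataFrame({"temp_c": 150.0 + (idx.second % 5).to_numpy()}, index=idx)

downsampled = raw["temp_c"].resample("1min").mean()  # 3600 rows -> 60 rows

lines = [
    f"machine_temp,line=assembly_1 temp_c={value:.2f} {ts.value}"
    for ts, value in downsampled.items()
]
print(lines[0])
# -> machine_temp,line=assembly_1 temp_c=152.00 1667260800000000000
# Only these 60 aggregate points, not the 3600 raw ones, leave the edge.
```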
So really us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data, and have us take care of that. And of course, as we change the platform, end users benefit from that immediately. >>And so obviously taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data we store is kept private. It's of course always a concern. You see in the news all the time companies being compromised, you know. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software bill of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that's just part of our jobs, to make sure that the platform that we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into, you know, we talk about IoT and the operations technologies, the engineers running that infrastructure, you know, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting and R&D so you can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two-lane approach with Influx, with any time series data. You know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping. Sure, there's plenty of need for that. But at the end of the day, people that don't want to run big data centers, people that want to trust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud; the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class. 
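For readers who want to try the getting-started path the promo above points to, here is a minimal sketch using the open source influxdb-client Python package: write one point, then read it back with a small Flux query. The URL, token, org, and bucket values are placeholders, not real credentials or endpoints.

```python
# Minimal getting-started sketch with the open source influxdb-client Python
# package: write one point, then read it back with a small Flux query. The
# URL, token, org, and bucket below are placeholders, not real values.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(
    url="https://example-region.cloud2.influxdata.com",  # placeholder endpoint
    token="MY_API_TOKEN",                                 # placeholder token
    org="my-org",                                         # placeholder org
)

write_api = client.write_api(write_options=SYNCHRONOUS)
point = Point("room_temp").tag("room", "kitchen").field("temp_c", 21.5)
write_api.write(bucket="sensors", record=point)

flux = '''
from(bucket: "sensors")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "room_temp")
  |> mean()
'''
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_field(), record.get_value())
```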
>>Okay, so we heard today from three experts on time series and data how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming language, DataFusion, and Parquet are being leveraged to support realtime data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products. You'll find developer resources like free courses. You can join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.
Michael Foster & Doron Caspin, Red Hat | KubeCon + CloudNativeCon NA 2022
(upbeat music) >> Hey guys, welcome back to the show floor of KubeCon + CloudNativeCon '22 North America from Detroit, Michigan. Lisa Martin here with John Furrier. This is day one, John at theCUBE's coverage. >> CUBE's coverage. >> theCUBE's coverage of KubeCon. Try saying that five times fast. Day one, we have three wall-to-wall days. We've been talking about Kubernetes, containers, adoption, cloud adoption, app modernization all morning. We can't talk about those things without addressing security. >> Yeah, this segment we're going to hear container and Kubernetes security for modern application 'cause the enterprise are moving there. And this segment with Red Hat's going to be important because they are the leader in the enterprise when it comes to open source in Linux. So this is going to be a very fun segment. >> Very fun segment. Two guests from Red Hat join us. Please welcome Doron Caspin, Senior Principal Product Manager at Red Hat. Michael Foster joins us as well, Principal Product Marketing Manager and StackRox Community Lead at Red Hat. Guys, great to have you on the program. >> Thanks for having us. >> Thank you for having us. >> It's awesome. So Michael StackRox acquisition's been about a year. You got some news? >> Yeah, 18 months. >> Unpack that for us. >> It's been 18 months, yeah. So StackRox in 2017, originally we shifted to be the Kubernetes-native security platform. That was our goal, that was our vision. Red Hat obviously saw a lot of powerful, let's say, mission statement in that, and they bought us in 2021. Pre-acquisition we were looking to create a cloud service. Originally we ran on Kubernetes platforms, we had an operator and things like that. Now we are looking to basically bring customers in into our service preview for ACS as a cloud service. That's very exciting. Security conversation is top notch right now. It's an all time high. You can't go with anywhere without talking about security. And specifically in the code, we were talking before we came on camera, the software supply chain is real. It's not just about verification. Where do you guys see the challenges right now? Containers having, even scanning them is not good enough. First of all, you got to scan them and that may not be good enough. Where's the security challenges and where's the opportunity? >> I think a little bit of it is a new way of thinking. The speed of security is actually does make you secure. We want to keep our images up and fresh and updated and we also want to make sure that we're keeping the open source and the different images that we're bringing in secure. Doron, I know you have some things to say about that too. He's been working tirelessly on the cloud service. >> Yeah, I think that one thing, you need to trust your sources. Even if in the open source world, you don't want to copy paste libraries from the web. And most of our customers using third party vendors and getting images from different location, we need to trust our sources and we have a really good, even if you have really good scanning solution, you not always can trust it. You need to have a good solution for that. >> And you guys are having news, you're announcing the Red Hat Advanced Cluster Security Cloud Service. >> Yes. >> What is that? >> So we took StackRox and we took the opportunity to make it as a cloud services so customer can consume the product as a cloud services as a start offering and customer can buy it through for Amazon Marketplace and in the future Azure Marketplace. 
So customer can use it for the AKS and EKS and AKS and also of course OpenShift. So we are not specifically for OpenShift. We're not just OpenShift. We also provide support for EKS and AKS. So we provided the capability to secure the whole cloud posture. We know customer are not only OpenShift or not only EKS. We have both. We have free cloud or full cloud. So we have open. >> So it's not just OpenShift, it's Kubernetes, environments, all together. >> Doron: All together, yeah. >> Lisa: Meeting customers where they are. >> Yeah, exactly. And we focus on, we are not trying to boil the ocean or solve the whole cloud security posture. We try to solve the Kubernetes security cluster. It's very unique and very need unique solution for that. It's not just added value in our cloud security solution. We think it's something special for Kubernetes and this is what Red that is aiming to. To solve this issue. >> And the ACS platform really doesn't change at all. It's just how they're consuming it. It's a lot quicker in the cloud. Time to value is right there. As soon as you start up a Kubernetes cluster, you can get started with ACS cloud service and get going really quickly. >> I'm going to ask you guys a very simple question, but I heard it in the bar in the lobby last night. Practitioners talking and they were excited about the Red Hat opportunity. They actually asked a question, where do I go and get some free Red Hat to test some Kubernetes out and run helm or whatever. They want to play around. And do you guys have a program for someone to get start for free? >> Yeah, so the cloud service specifically, we're going to service preview. So if people sign up, they'll be able to test it out and give us feedback. That's what we're looking for. >> John: Is that a Sandbox or is that going to be in the cloud? >> They can run it in their own environment. So they can sign up. >> John: Free. >> Doron: Yeah, free. >> For the service preview. All we're asking for is for customer feedback. And I know it's actually getting busy there. It's starting December. So the quicker people are, the better. >> So my friend at the lobby I was talking to, I told you it was free. I gave you the sandbox, but check out your cloud too. >> And we also have the open source version so you can download it and use it. >> Yeah, people want to know how to get involved. I'm getting a lot more folks coming to Red Hat from the open source side that want to get their feet wet. That's been a lot of people rarely interested. That's a real testament to the product leadership. Congratulations. >> Yeah, thank you. >> So what are the key challenges that you have on your roadmap right now? You got the products out there, what's the current stake? Can you scope the adoption? Can you share where we're at? What people are doing specifically and the real challenges? >> I think one of the biggest challenges is talking with customers with a slightly, I don't want to say outdated, but an older approach to security. You hear things like malware pop up and it's like, well, really what we should be doing is keeping things into low and medium vulnerabilities, looking at the configuration, managing risk accordingly. Having disparate security tools or different teams doing various things, it's really hard to get a security picture of what's going on in the cluster. That's some of the biggest challenges that we talk with customers about. >> And in terms of resolving those challenges, you mentioned malware, we talk about ransomware. 
>> And in terms of resolving those challenges, you mentioned malware; we talk about ransomware. It's a household word these days. It's no longer, are we going to get hit? It's when, what's the severity, how often? How are you guys helping customers to dial down some of the risk that's inherent and only growing these days? >> Yeah, risk, it's a tough word to generalize, but our whole goal is to give you as much security information in a way that's consumable, so that you can evaluate your risk, set policies, and then enforce them early on in the cluster or early on in the development pipeline, so that your developers get the security information they need, hopefully asynchronously. That's the best way to do it. It's nice and quick, but yeah. I don't know, Doron, if you want to add to that? >> Yeah, so I think we know that ransomware, again, is a big worry for everyone, and we understand the boundaries of what we want to protect. And we think it's about policies and where we enforce them. As we discussed before, you can scan the image, but you never know what is in it until you really run it. So one of the things that we provide is runtime scanning. So you can scan, and you can have policies at runtime, so you can enforce things at runtime. But even if one image gets in somehow and gets to your cluster and runs somewhere, we can stop it at runtime. >> Yeah. And even with the runtime enforcement, the biggest thing we have to educate customers on is that that's the last-ditch effort. We want to get these security controls in as early as possible. That's where the value's going to be. So we don't want to be blocking things from getting to staging six weeks after developers have been working on a project.
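Michael's "as early as possible" point is usually implemented as a gate in the build pipeline rather than at runtime. The sketch below assumes the open source Trivy scanner is installed on the build machine; teams using ACS/StackRox would call their own tooling at the same point, and the image name is a placeholder.

```python
# Minimal shift-left gate sketch: fail the build when a scanner reports critical
# findings. Assumes the open source Trivy CLI is on the PATH; ACS/StackRox users
# would invoke their own checks at this same stage instead.
import subprocess
import sys

def scan_image(image: str) -> int:
    """Run the scanner and return its exit code (non-zero on CRITICAL findings)."""
    result = subprocess.run(
        ["trivy", "image", "--exit-code", "1", "--severity", "CRITICAL", image],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "registry.example.com/app:1.2.3"
    if scan_image(image) != 0:
        sys.exit("critical vulnerabilities found, blocking the pipeline")
```

Run as a pipeline step, this stops a bad image weeks before it would otherwise be caught at staging, which is exactly the scenario Michael wants to avoid.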
>> I want to get you guys' thoughts on developer productivity. We had Docker's CEO on earlier, and since then I've had a couple of people messaging me. They love the vision of Docker, but Docker Hub has some legacy, and it might not have the kind of adoption that some people think it does. Are people moving because at times they want to have their own places? Is there no one place, or maybe there is? How do you guys see the movement from, say, Docker Hub to just using containers? I don't need to be on Docker Hub. What's the vis-a-vis competition? >> I mean, working with open source at Red Hat, you have to meet the developers where they are. If your tool isn't cutting it for developers, they're going to find a new tool, and really they're the engine, the growth engine, of a lot of these technologies. So again, I don't want to speak about Docker or what they're doing specifically, but I know that they pretty much kicked off the container revolution and got this whole thing started. >> A lot of people are using your environment too. We're hearing a lot of uptake on the Red Hat side too. So this is open source: it all sorts itself out in the end, like you said, but you guys are getting a lot of traction there. Can you share what's happening there? >> I think one of the biggest things from a developer experience that I've seen is the universal base image that people are using. I can speak from a security standpoint: it's awesome that you have a base image where you can make one change for one issue and it can impact a lot of different applications. That's one of the big benefits that I see in adoption. >> What are some of the business, I'm curious what some of the business outcomes are. You talked about faster time to value, obviously, being able to get security shifted left, and from a control perspective, but what are some of the, if I'm a business, if I'm a telco or a healthcare organization or a financial organization, what are some of the top-line benefits that this can bubble up to impact? >> I mean, for me, with those two providers, compliance is a massive one. And just having an overall look at what's going on in your clusters, in your environments, so that when audit time comes, you're prepared. You can get through that extremely quickly. And then as well, when something inevitably does happen, you can get a good picture of everything. Let's say a Log4Shell happens: you know exactly which clusters are affected. The triage time is a lot quicker. Developers can get back to developing, and then, yeah, you can get through it. >> One thing that we see with customers is that compliance is huge. >> Yes. And we don't want to, the old way was that, okay, I will provision a cluster and I will do scans and find things that I need to fix for PCI DSS, for example. Today the customer wants to provision a PCI DSS cluster in advance. So you need to do the compliance before you provision the cluster and have all the configuration already baked in for PCI DSS or HIPAA compliance or FedRAMP. And this is where we try to use our compliance tooling; we have tools for compliance today on OpenShift and other clusters and other distributions, but you can do this in advance, before you even provision the cluster. And we also have tools to enforce it after that, after you provision, but you have to do it both before and after to make it workable. >> Advanced cluster management and the compliance operator really help with that. That's why OpenShift Platform Plus as a bundle is so popular. Just being able to know that when a cluster gets provisioned, it's going to be in compliance with whatever the healthcare provider is using. And then you can automatically have ACS pop up as well, so you know exactly what applications are running, you know it's in compliance. I mean, that's the speed. >> You mentioned the word operator; it's a triggering word for me now, because the operator role is changing significantly in this next wave coming because of the automation. They're operating, but they're also devs too. They're developing and composing. It's almost like a dashboard, Lego blocks. The operator's not just manually racking and stacking like the old days, I'm oversimplifying it, but the new operators are running stuff: they've got observability, they've got coding, they're servicing policy. There's a lot going on. There's a lot of knobs. Is it going to get simpler? How do you guys see the org structures changing to fill the gap on what should be very simple: turn some knobs, operate at scale? >> Well, when StackRox originally got acquired, one of the first things we did was put ACS into an operator, and it actually made the application lifecycle so much easier. It was very easy in the console to go and say, hey, I want ACS on my cluster, click it, and it would get provisioned. New clusters would get provisioned automatically. So underneath it might get more complicated, but in terms of the application lifecycle, operators make things so much easier. >> And of course, I was lucky enough with Lisa to see Project Wisdom at AnsibleFest. You're going to say, hey, Red Hat, spin up the clusters, and it'll just magically be voice activated. We're starting to see AI come in. So again, the operations operator has got a dev vibe and an SRE vibe, but it's not that direct. Something's happening there that we're trying to put our finger on. What do you guys think is happening?
What's the real? What's the action? What's transforming? >> That's a good question. I think in general, things just move to the developers all the time. I mean, we talk about shift-left security; everything's always going that way. Developers are handling everything. I'm not sure exactly. Doron, do you have any thoughts on that? >> Doron, what's your reaction? You can just, it's okay, say what you want. >> So I spoke with one of our customers yesterday, and they said that in the last few years they developed tons of code just to operate their infrastructure. Five or six years ago, when a developer wanted a VM, it would take a week to get one, because they needed all the approvals and someone needed to actually provision the VM on VMware. And today they've automated it all the way end-to-end, and it takes two minutes to get a VM for a developer. So operators are becoming developers, as you said: they develop code, and they make the infrastructure as code and infrastructure as operator, to make it easier for the business to run. >> And then also, if you add in DataOps, AIOps, SecurityOps, that's the new IT. It seems to be the new IT is the stuff that's scaling: a lot of data's coming in, you've got security. So all of that's got to be brought in. How do you guys factor that into the equation? >> Oh, I mean, you become big generalists. I think there's a reason why those cloud security or cloud professional certificates are becoming so popular. You have to know a lot about all the different applications, be able to code it, automate it, like you said, hopefully everything as code. And then it also makes it easy for security tools to come in and look and examine where the vulnerabilities are when those things are as code. So because you're going and developing all this automation, you do become, let's say, a generalist. >> We've been hearing on theCUBE here, and we've been hearing in the industry, about burnout associated with security professionals and some DataOps folks, because of the tsunami of data, the tsunami of breaches, a lot of engineers getting called in the middle of the night. So that's not automated. So this has got to get solved quickly, scaled up quickly. >> Yes. There's a two-part question there. I think in terms of the burnout aspect, you'd better send some love to your security team, because they only get called when things get broken, and when they're doing a great job you never hear about them. So I think that's one of the things: it's a thankless profession. For the second part, if you have the right tools in place, so that when something does hit the fan and does break, you can make an automated or a specific decision upstream to change that, then things become easy. It's when the tools aren't in place and you have disparate environments, so that when a Log4Shell or something like that comes in, you're scrambling trying to figure out which clusters are where and where you're impacted. >> Point of attack, remediate fast. That seems to be the new move. >> Yeah. And you do need to know exactly what's going on in your clusters and how to remediate it quickly, how to get the most impact with one change.
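Michael's "know exactly what's going on in your clusters and get the most impact with one change" is at bottom an inventory problem. The sketch below is a toy: the inventory structure and its contents are invented for the example, and in practice this data would come from a security platform's API rather than a hand-built dictionary. Given a vulnerable package, it reports which clusters and workloads are affected.

```python
# Toy triage sketch: given a vulnerable package, list affected clusters and
# workloads. The inventory is hand-built for illustration; real data would come
# from a security platform's API, not a local dict.
INVENTORY = {
    "prod-us-east": {
        "payments-api": {"image": "registry.example.com/payments:4.2",
                         "packages": {"log4j-core": "2.14.1", "openssl": "3.0.5"}},
        "frontend": {"image": "registry.example.com/frontend:9.1",
                     "packages": {"openssl": "3.0.7"}},
    },
    "prod-eu-west": {
        "payments-api": {"image": "registry.example.com/payments:4.2",
                         "packages": {"log4j-core": "2.14.1"}},
    },
}

def affected_workloads(package: str, bad_versions: set[str]) -> list[tuple[str, str, str]]:
    """Return (cluster, workload, image) triples running a vulnerable version."""
    hits = []
    for cluster, workloads in INVENTORY.items():
        for name, meta in workloads.items():
            if meta["packages"].get(package) in bad_versions:
                hits.append((cluster, name, meta["image"]))
    return hits

for cluster, workload, image in affected_workloads("log4j-core", {"2.14.1", "2.15.0"}):
    print(f"{cluster}/{workload} -> {image}")
```

When a Log4Shell-style event lands, having a query like this ready is the difference between a quick triage and the scramble described above.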
>> And that makes sense. The surface area is expanding. More things are being pushed. So things will happen, whether it's a zero-day vulnerability or just an attack. >> It's a mix, yeah. Customers automate all of their things, but it's good and bad. One customer told us, I think Spotify lost a whole zone because of one mistake, because they automate everything, and you make one mistake. >> It scales the failure, really. >> Exactly. It scaled the failure really fast. >> That was actually at KubeCon, I think four years ago. They talked about it. It was a great learning experience. >> It was a double-edged sword there. >> Yeah. So definitely we need to, again, scale automation, and test the automation too; you need to run drills around that data. >> Yeah, you have to know the impact. There's a lot of talk in the security space about what you can and can't automate. And by default, when you install ACS, everything is non-enforced. You have to have admission control.
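Doron's point that enforcement ultimately runs through admission control can be sketched with the generic Kubernetes validating-webhook pattern. To be clear, this is not the ACS implementation: the example policy (rejecting mutable ':latest' or untagged images) and the TLS file paths are assumptions, and a real deployment also needs a ValidatingWebhookConfiguration pointing the API server at the service.

```python
# Minimal validating admission webhook sketch (generic Kubernetes pattern, not
# the ACS implementation). It denies pods whose images use mutable tags.
# Admission webhooks must be served over TLS; the cert paths are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    uid = review["request"]["uid"]
    pod = review["request"]["object"]
    images = [c["image"] for c in pod["spec"].get("containers", [])]
    offenders = [img for img in images if img.endswith(":latest") or ":" not in img]

    response = {"uid": uid, "allowed": not offenders}
    if offenders:
        response["status"] = {"message": f"mutable image tags not allowed: {offenders}"}
    return jsonify({"apiVersion": "admission.k8s.io/v1",
                    "kind": "AdmissionReview",
                    "response": response})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8443, ssl_context=("tls.crt", "tls.key"))
```

Whether the rule is a tag policy, a registry allowlist, or a vulnerability threshold, the mechanism is the same: the API server consults the webhook before anything is admitted, which is where "non-enforced by default" turns into enforcement.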
>> How are you guys seeing your customers? Obviously Red Hat's got a great customer base. How are they adapting to the managed service wave that's coming? People like managed services now, because they maybe have skills gap issues. So managed services are becoming a big part of the portfolio. What's your guys' take on the managed services piece? >> It's just time to value. You're developing a new application, you need to get it out there quick. If somebody, your competitor, gets out there a month before you do, that's a huge market advantage. >> So you care how you got there. >> Exactly. And so we've had so much Kubernetes expertise over the last 10-plus years, well, Kubernetes for seven-plus years at Red Hat, that why wouldn't you leverage that knowledge internally so you can get your application out? >> Why change your toolchain and your workflows? Go faster and take advantage of the managed service, because it's just about getting from point A to point B. >> Exactly. >> Well, time to value, you mentioned that; it's not a trivial term, it's not a marketing term. There's a lot of impact that can be made. Organizations that can move faster, that can iterate faster, develop what their customers are looking for so that they have that competitive advantage. It's definitely not something that's trivial. >> Yeah. And working in marketing, whenever you get that new feature out and I can go and chat about it online, it's always awesome. You always get customers' interest. >> Pushing new code, being secure. What's next for you guys? What's on the agenda? What's around the corner? We'll see a lot of Red Hat at re:Invent. Obviously your relationship with AWS is strong as a company. Multi-cloud is here. Supercloud, as we've been saying. Supercloud is a thing. What's next for you guys? >> So we launched the cloud service with the idea that we will get feedback from customers. We are not going GA; we're not going to sell it for now. We want to get customers, we want to get feedback, to make the product the best we can sell and the best we can give our customers. And when we go GA and we start selling this product, we will have the best product in the market. So this is our goal. We want to get the customer in the loop and get as much feedback as we can. And we're also working very closely with our customers, our existing customers, as we announce the product, to add more and more of the features the customer needs. It's all about the supply chain. I don't like to say it, but we have to: it's all about making things more automated and making things easier for our customers to use, to have security in the Kubernetes environment. >> So where can your customers go? Clearly, you've made a big impact on our viewers with your conversation today. Where are they going to be able to go to get their hands on the release? >> So you can find it online. We have a website to sign up for this program. It's on my blog; we have a blog out there for the ACS cloud service. You can just go there, sign up, and we will contact you. >> Yeah. And there's another way, if you ever want to get your hands on it, and you can do it for free: open source StackRox. The product is completely open source, and I would love feedback in the Slack channel. We also get a ton of feedback from people who aren't actually paying customers, and they contribute upstream. So that's an awesome way to get started. But like you said, you can go, if you search for ACS cloud service and service preview, you don't have to be a Red Hat customer. If you're running a CNCF-compliant Kubernetes version, we'd love to hear from you. >> All open source, all out in the open. >> Yep. >> Getting it available to the customers, the non-customers, the hopefully pending customers. Guys, thank you so much for joining John and me, talking about the new release and the evolution of StackRox over the last 18 months. Lots of good stuff here. I think you've done a great job of getting the audience excited about what you're releasing. Thank you for your time. >> Thank you. >> Thank you. >> For our guests and for John Furrier, Lisa Martin here in Detroit, at KubeCon + CloudNativeCon North America. Coming to you live, we'll be back with our next guest in just a minute. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Michael Foster | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Doron | PERSON | 0.99+ |
Doron Caspin | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
2021 | DATE | 0.99+ |
December | DATE | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two minutes | QUANTITY | 0.99+ |
seven plus years | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
five | DATE | 0.99+ |
one mistake | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
Supercloud | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
a week | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
two providers | QUANTITY | 0.99+ |
Two guests | QUANTITY | 0.99+ |
18 months | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Michael | PERSON | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Linux | TITLE | 0.99+ |
four years ago | DATE | 0.98+ |
five times | QUANTITY | 0.98+ |
one issue | QUANTITY | 0.98+ |
six years ago | DATE | 0.98+ |
zero day | QUANTITY | 0.98+ |
six weeks | QUANTITY | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
OpenShift | TITLE | 0.98+ |
last night | DATE | 0.98+ |
CUBE | ORGANIZATION | 0.98+ |
one image | QUANTITY | 0.97+ |
last years | DATE | 0.97+ |
First | QUANTITY | 0.97+ |
Azure Marketplace | TITLE | 0.97+ |
One thing | QUANTITY | 0.97+ |
telco | ORGANIZATION | 0.97+ |
Day one | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.96+ |
Docker Hub | TITLE | 0.96+ |
Docker Hub | ORGANIZATION | 0.96+ |
10 plus year | QUANTITY | 0.96+ |
Doron | ORGANIZATION | 0.96+ |
Project Wisdom | TITLE | 0.96+ |
day one | QUANTITY | 0.95+ |
Lego | ORGANIZATION | 0.95+ |
one change | QUANTITY | 0.95+ |
a minute | QUANTITY | 0.95+ |
ACS | TITLE | 0.95+ |
CloudNativeCon '22 | EVENT | 0.94+ |
Kubernetes | TITLE | 0.94+ |
Tim Yocum, Influx Data
(upbeat music) >> Okay, we're back with Tim Yocum, who is the Director of Engineering at Influx Data. Tim, welcome. Good to see you. >> Good to see you. Thanks for having me. >> You're really welcome. Listen, we've been covering open source software on the Cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem, the cloud being built out on open source, mobile social platforms, key databases, and of course Influx DB, and Influx Data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >> So, yeah, you know, Influx, really, we thrive at the intersection of commercial services and open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services, templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants. And like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product, Influx DB. >> You know, but I got to ask you, Tim, because one of the challenges that we've seen, in particular, you saw this in the heyday of Hadoop: the innovations come so fast and furious, and as a software company, you've got to place bets, you've got to, you know, commit people, and sometimes those bets can be risky and not pay off. How have you managed this challenge? >> Oh, it moves fast, yeah. That's a benefit though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example. That ecosystem is driven by thousands of intelligent developers, engineers, builders. They're adding value every day. So we have to really keep up with that. And as the stack changes, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >> So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has kind of, it's been off the charts and seen the most significant adoption and velocity, particularly, you know, along with cloud. But really Kubernetes is just, you know, still up and to the right consistently, even with, you know, the macro headwinds and all of the other stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >> Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere, at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code. So our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google, and figure out how to deliver services on those three clouds with all of their differences.
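Tim's "manage that in code" can be pictured as one declarative spec that gets expanded into per-provider, per-region cluster definitions for the provisioning tooling to consume. The names, providers, and regions below are hypothetical; this is not Influx Data's actual tooling, just a sketch of the pattern.

```python
# Illustrative sketch of clusters managed in code: one spec is expanded into a
# concrete definition per provider and region. All names are hypothetical; this
# is not Influx Data's real tooling, only the general pattern.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClusterSpec:
    name: str
    node_count: int
    kubernetes_version: str

PROVIDERS = {
    "aws": ["us-east-1", "eu-west-1"],
    "azure": ["eastus", "westeurope"],
    "gcp": ["us-central1", "europe-west1"],
}

def render_clusters(spec: ClusterSpec) -> list[dict]:
    """Expand one spec into a concrete definition per provider/region pair."""
    return [
        {
            "name": f"{spec.name}-{provider}-{region}",
            "provider": provider,
            "region": region,
            "node_count": spec.node_count,
            "version": spec.kubernetes_version,
        }
        for provider, regions in PROVIDERS.items()
        for region in regions
    ]

for cluster in render_clusters(ClusterSpec("influx-cloud", 12, "1.24")):
    print(cluster["name"])
```

Because every definition comes out of the same function, a developer deploying a service never has to care which of the three clouds it lands on, which is the consistency Tim is describing.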
>> Just a follow up on that, is it, now, so I presume it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and up to the edge, you know, wherever. Is that, is that correct? >> Yeah, so we've basically built, more or less, platform engineering. This is the new hot phrase. You know, Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >> Yeah, and I know I'm taking a little bit of a tangent, but is that, I'll call it a PaaS layer if I can use that term, are there specific attributes to Influx DB, or is it kind of just generally off the shelf PaaS? You know, is there any purpose built capability there that is value add, or is it pretty much generic? >> So we really build, we look at things with a build versus buy, through a build versus buy lens. Some things we want to leverage, cloud provider services for instance, Postgres databases for metadata perhaps, get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code that we can, as an SRE group, as an ops team, that we can manage with very few people really, and we can stamp out clusters across multiple regions in no time. >> So how, so sometimes you build, sometimes you buy it. How do you make those decisions, and what does that mean for the platform and for customers? >> Yeah, so what we're doing is, it's like everybody else will do. We're looking for trade offs that make sense. You know, we really want to protect our customers' data. So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers, you don't even see that, but we don't want to try to reinvent the wheel. Like I had had mentioned with SQL data storage for metadata perhaps. Let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering, and we can have our developers then focus on the Influx Data software, Influx Cloud software. >> So take it to the customer level. What does it mean for them? What's the value that they're going to get out of all these innovations that we've been been talking about today? And what can they expect in the future? >> So first of all, people who use the OSS product are really going to be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you. But then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored. So there's a proven ability to scale. Now, in terms of the open source software, and how we've developed the platform, you're getting highly available, high cardinality time series platform. We manage it, and really as I mentioned earlier, we can keep up with the state of the art. We keep reinventing. We keep deploying things in real time. We deploy to our platform every day repeatedly, all the time. 
And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change, new features, better ways of doing deployments, safer ways of doing deployments. All of that happens behind the scenes. And we had mentioned earlier Kubernetes, I mean that allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day. And as those things go into production, you have the ability to use them. And so in the end, we want you to focus on getting actionable insights from your data instead of running infrastructure. You know, let us do that for you. >> And that makes sense, but so is the, are the innovations that we're talking about in the evolution of Influx DB, do you see that as sort of a natural evolution for existing customers? Is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >> Yeah, it really is. It's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, industrial IoT especially, people want to just shove tons of data out there and be able to do queries immediately, and they don't want to manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from say multiple production lines and down-sample that data, send the rest of that data off to Influx Cloud where the heavy processing takes place. So really us being in all the different clouds and iterating on that, and being in all sorts of different regions allows for people to really get out of the business of trying to manage that big data, have us take care of that. And of course, as we change the platform, end users benefit from that immediately. >> And so obviously, taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >> Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data we store is kept private. It's of course always a concern. You see in the news all the time companies being compromised. You know, that's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You look at things like software bill of materials. If you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that's just part of our jobs, to make sure that the platform that we're running has fully vetted software. And with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to. But that is really just part of a day in the life for folks like us that are building platforms. >> Yeah, and that's key. 
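The edge pattern Tim describes, ingesting raw data locally, down-sampling it, and forwarding only the aggregates to Influx Cloud, can be sketched roughly as follows. It assumes the influxdb-client Python package (v2 API); the URL, org, bucket, and token are placeholders, and the production-line readings are synthetic.

```python
# Rough sketch of the edge pattern described above: collect raw readings
# locally, down-sample, and write only the aggregates to Influx Cloud.
# Assumes the influxdb-client (v2 API) package; endpoint, token, org, and
# bucket are placeholders, and the readings are synthetic.
from datetime import datetime, timezone
from statistics import mean
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

raw_readings = {  # per-production-line temperature samples gathered at the edge
    "line-1": [71.2, 71.4, 70.9, 71.1],
    "line-2": [65.0, 65.3, 64.8, 65.1],
}

def downsample(readings: dict[str, list[float]]) -> list[Point]:
    """Collapse each line's raw samples into a single averaged point."""
    now = datetime.now(timezone.utc)
    return [
        Point("line_temperature").tag("line", line).field("avg_temp", mean(samples)).time(now)
        for line, samples in readings.items()
    ]

with InfluxDBClient(url="https://your-cloud-endpoint", token="API_TOKEN",
                    org="your-org") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    write_api.write(bucket="factory-rollups", record=downsample(raw_readings))
```

The raw, high-cardinality data stays at the edge where it was produced; the cloud gets the rollups it needs for the heavy processing and long-term queries.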
I mean, especially when you start getting into the, you know, we talk about IoT and the operations technologies, the engineers running that infrastructure. You know, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >> Can't do that. >> connected now, right? And so you've got to have a partner that is, again, take away that heavy lifting to R and D so you can focus on some of the other activities. All right. Give us the last word and the key takeaways from your perspective. >> Well, you know, from my perspective, I see it as a a two lane approach. With Influx, with any any time series data, you know, you've got a lot of stuff that you're going to run on-prem. What you mentioned, air gaping, sure there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want to entrust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud. The cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >> Tim, really appreciate you coming to the program. Great stuff. Good to see you. >> Thanks very much. Appreciate it. >> Okay, in a moment, I'll be back to wrap up today's session. You're watching the Cube. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tim Yoakum | PERSON | 0.99+ |
Tim | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Influx Data | ORGANIZATION | 0.99+ |
Tim Yocum | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
New York City | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
two lane | QUANTITY | 0.99+ |
Influx | ORGANIZATION | 0.98+ |
Azure | ORGANIZATION | 0.98+ |
270 terabytes | QUANTITY | 0.98+ |
about 1500 CIOs | QUANTITY | 0.97+ |
tomorrow | DATE | 0.97+ |
more than a decade | QUANTITY | 0.97+ |
over 4 billion | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
tons of data | QUANTITY | 0.95+ |
Influx DB | TITLE | 0.95+ |
Kubernetes | TITLE | 0.94+ |
Enterprise Technology Research | ORGANIZATION | 0.93+ |
first | QUANTITY | 0.93+ |
single server | QUANTITY | 0.92+ |
SQL | TITLE | 0.91+ |
three | QUANTITY | 0.91+ |
Postgres | ORGANIZATION | 0.91+ |
Influx Cloud | TITLE | 0.9+ |
thousands of intelligent developers | QUANTITY | 0.9+ |
ETR | ORGANIZATION | 0.9+ |
Hadoop | TITLE | 0.9+ |
three large cloud providers | QUANTITY | 0.81+ |
three clouds | QUANTITY | 0.79+ |
Influx DB | ORGANIZATION | 0.74+ |
cloud | QUANTITY | 0.62+ |
Google Cloud | ORGANIZATION | 0.56+ |
Cube | PERSON | 0.53+ |
Cube | COMMERCIAL_ITEM | 0.52+ |
Cloud | TITLE | 0.45+ |
Influx | TITLE | 0.36+ |
Jay Workman, VMware & Geoff Thompson, VMware | VMware Explore 2022
>>Hey everyone. Welcome back to the cubes day two coverage of VMware Explorer, 22 from San Francisco. Lisa Martin, back here with you with Dave Nicholson, we have a couple of guests from VMware. Joining us, please. Welcome Jay Workman, senior director, cloud partner, and alliances marketing, and Jeff Thompson, VP cloud provider sales at VMware guys. It's great to have you on the program. >>Ah, good to be here. Thanks for having us on. >>We're gonna be talking about a really interesting topic. Sovereign cloud. What is sovereign cloud? Jeff? Why is it important, but fundamentally, what is >>It? Yeah, well, we were just talking a second ago. Aren't we? And it's not about royalty. So yeah, data sovereignty is really becoming super important. It's about the regulation and control of data. So lots of countries now are being very careful and advising companies around where to place data and the jurisdictional controls mandate that personal data or otherwise has to be secured. We ask, we have to have access controls around it and privacy controls around it. So data sovereign clouds are clouds that have been built by our cloud providers in, in, in VMware that specifically satisfy the requirements of those jurisdictions and regulated industries. So we've built a, a little program around that. We launched it about a year ago and continuing to add cloud providers to that. >>Yeah, and I, I think it's also important just to build on what Jeff said is, is who can access that data is becoming increasingly important data is, is almost in it's. It is becoming a bit of a currency. There's a lot of value in data and securing that data is, is becoming over the years increasingly important. So it's, it's not like we built a problem or we created a solution for problem that didn't exist. It's gotten it's, it's been a problem for a while. It's getting exponentially bigger data is expanding and growing exponentially, and it's becoming increasingly important for organizations and companies to realize where my data sits, who can access it, what types of data needs to go and what type of clouds. And it's very, very aligned with multi-cloud because some data can sit in a, in a public cloud, which is fine, but some data needs to be secure. It needs to be resident within country. And so this is, this is what we're addressing through our partners. >>Yeah, I, yeah, I was just gonna add to that. I think there's a classification there there's data residency, and then there's data sovereignty. So residency is just about where is the data, which country is it in sovereignty is around who can access that data. And that's the critical aspect of, of data sovereignty who's got control and access to that data. And how do we make sure that all the controls are in place to make sure that only the right people can get access to that data? Yeah. >>So let's, let's sort of build from the ground up an example, and let's use Western Europe as an example, just because state to state in the United States, although California is about to adopt European standards for privacy in a, in a unique, in a unique, unique way, pick a country in, in Europe, I'm a service provider. I have an offering and that offering includes a stack of hardware and I'm running what we frequently refer to as the STDC or software defined data center stack. So I've got NEX and I've got vs N and I've got vSphere and I'm running and I have a cloud and you have all of the operational tools around that, and you can spin up VMs and render under applications there. 
And here we are within the borders of this country. What makes it a sovereign cloud at that point? So at that point, is that a sovereign cloud, or not? >> No, not yet, but it's close. I mean, you nailed... >> What's the secret sauce? >> You nailed the technology underpinning. So we've got 4,500-plus cloud provider partners around the world. Less than 10% of those partners are running the full SDDC stack, which we've branded as VMware Cloud Verified. So the technology underpinning, from our perspective, is the starting point for sovereignty. So they need that right technology. Okay.
Well, >>I think to build on that, a UK cloud is an example of certain employees at UK, UK cloud will have certain levels of clearance from the UK government who can access and work on certain databases that are stored within UK cloud. So they're, they're addressing it from multiple fronts, not just with their hardware, software data center framework, but actually at the individual compliance level and individual security clearance level as to who can go in and work on that data. And it's not just a governmental, it's not a public sector thing. I mean, any highly regulated industry, healthcare, financial services, they're all gonna need this type of data protection and data sovereignty. >>Can this work in a hyperscaler? So you've got you, have, you have VMC AVS, right? GC V C >>O >>CVAs O CVS. Thank you. Can it be, can, can a sovereign cloud be created on top of physical infrastructure that is in one of those hyperscalers, >>From our perspective, it's not truly sovereign. If, if it's a United States based company operating in Germany, operating in the UK and a local customer or organization in Germany, or the UK wants to deploy workloads in that cloud, we wouldn't classify that as totally sovereign. Okay. Because by virtue of the cloud act in the United States, that gives the us government rights to request or potentially view some of that data. Yeah. Because it's, it's coming out of a us based operator data center sitting on foreign soil so that the us government has some overreach into that. And some of that data may actually be stored. Some of the metadata may reside back in the us and the customer may not know. So certain workloads would be ideally suited for that. But for something that needs to be truly sovereign and local data residency, that it wouldn't be a good fit. I think that >>Perspectives key thing, going back to residency versus sovereignty. Yeah. It can be, let's go to our UK example. It can be on a hyperscaler in the UK now it's resident in the UK, but some of the metadata, the profiling information could be accessible by the entity in the United States. For example, there now it's not sovereign anymore. So that's the key difference between a, what we view as a pro you know, a pure sovereign cloud play and then maybe a hyperscaler that's got more residency than sovereignty. >>Yeah. We talk a lot about partnerships. This seems to be a unique opportunity for a certain segment of partners yeah. To give that really is an opportunity for them to have a line of business established. That's unique from some of the hyperscale cloud providers. Yeah. Where, where sort of the, the modesty of your size might be an advantage if you're in a local. Yes. You're in Italy and you are a service provider. There sounds like a great fit, >>That's it? Yeah. You've always had the, the beauty of our program. We have 4,500 cloud providers and obviously not, all of them are able to provide a data, a sovereign cloud. We have 20 in the program today in, in the country. You you'd expect them to be in, you know, the UK, Italy, Italy, France, Germany, over in Asia Pacific. We have in Australia and New Zealand, Japan, and, and we have Canada and Latin America to, to dovetail, you know, the United States. But those are the people that have had these long term relationships with the local governments, with these regulated industries and providing those services for many, many years. It's just that now data sovereignty has become more important. 
And they're able to go that extra mile and say, Hey, we've been doing this pretty much, you know, for decades, but now we're gonna put a wrap and some branding around it and do these extra checks because we absolutely know that we can provide the sovereignty that's required. >>And that's been one of the beautiful things about the entire initiative is we're actually, we're learning a lot from our partners in these countries to Jeff's point have been doing this. They've been long time, VMware partners they've been doing sovereignty. And so collectively together, we're able to really establish a pretty robust framework from, from our perspective, what does data sovereignty mean? Why does it matter? And then that's gonna help us work with the customers, help them decide which workloads need to go and which type of cloud. And it dovetails very, very nicely into a multi-cloud that's a reality. So some of those workloads can sit in the public sector and the hyperscalers and some of 'em need to be sovereign. Yeah. So it's, it's a great solution for our customers >>When you're in customer conversations, especially as, you know, data sovereign to be is becomes a global problem. Where, who are you talking to? Are you talking to CIOs? Are you talking to chief data officers? I imagine this is a pretty senior level conversation. >>Yeah. I it's, I think it's all of the above. Really. It depends. Who's managing the data. What type of customer is it? What vertical market are they in? What compliance regulations are they are they beholden to as a, as an enterprise, depending on which country they're in and do they have a need for a public cloud, they may already be all localized, you know? So it really depends, but it, it could be any of those. It's generally I think a fair, fairly senior level conversation. And it's, it's, it's, it's consultancy, it's us understanding what their needs are working with our partners and figuring out what's the best solution for them. >>And I think going back to, they've probably having those conversations for a long time already. Yeah. Because they probably have had workloads in there for years, maybe even decades. It's just that now sovereignty has become, you know, a more popular, you know, requirements to satisfy. And so they've gone going back to, they've gone the extra mile with those as the trusted advisor with those people. They've all been working with for many, many years to do that work. >>And what sort of any examples you mentioned some of the highly regulated industries, healthcare, financial services, any customer come to mind that you think really articulates the value of what VMware's delivering through its service through its cloud provider program. That makes the obvious why VMware an obvious answer? >>Wow. I, I, I get there's, there's so many it's, it's actually, it's each of our different cloud providers. They bring their win wise to us. And we just have, we have a great library now of assets that are on our sovereign cloud website of those win wires. So it's many industries, many, many countries. So you can really pick, pick your, your choice. There. That's >>A good problem >>To have, >>To the example of UK cloud they're, they're really focused on the UK government. So some of them aren't gonna be referenced. Well, we may have indication of a major financial services company in Australia has deployed with AU cloud, one of our partners. So we we've also got some semi blind references like that. 
And, and to some degree, a lot of these are maintained as fairly private wins and whatnot for obvious security reasons, but, and we're building it and building that library up, >>You mentioned the number 4,500, a couple of times, you, you referencing VMware cloud provider partners or correct program partners. So VCP P yes. So 45, 4500 is the, kind of, is the, is the number, you know, >>That's the number >>Globally of our okay. >>Partners that are offering a commercial cloud service based at a minimum with vSphere and they're. And many of 'em have many more of our technologies. And we've got little under 10% of those that have the cloud verified designation that are running that full STDC, stack >>Somebody, somebody Talli up, all of that. And the argument has been made that, that rep that, that would mean that VMware cloud. And although some of it's on IAS from hyperscale cloud providers. Sure. But that, that rep, that means that VMware has the third or fourth largest cloud on the planet already right now. >>Right. Yep. >>Which is kind of interesting because yeah. If you go back to when, what 2016 or so when VMC was at least baned about yeah. Is that right? A lot of people were skeptical. I was skeptical very long history with VMware at the time. And I was skeptical. I I'm thinking, nah, it's not gonna work. Yeah. This is desperation. Sorry, pat. I love you. But it's desperation. Right. AWS, their attitude is in this transaction. Sure. Send us some customers we'll them. Yeah. Right. I very, very cynical about it. Completely proved me wrong. Obviously. Where did it go? Went from AWS to Azure to right. Yeah. To GCP, to Oracle, >>Oracle, Alibaba, >>Alibaba. Yep. Globally. >>We've got IBM. Yep. Right. >>Yeah. So along the way, it would be easy to look at that trajectory and say, okay, wow, hyperscale cloud. Yeah. Everything's consolidating great. There's gonna be five or six or 10 of these players. And that's it. And everybody else is out in the cold. Yeah. But it turns out that long tail, if you look at the chart of who the largest VCP P partners are, that long tail of the smaller ones seem to be carving out specialized yes. Niches where you can imagine now, at some point in the future, you sum up this long tail and it becomes larger than maybe one of the hyperscale cloud providers. Right. I don't think a lot of people predicted that. I think, I think people predicted the demise of VMware and frankly, a lot of people in the VMware ecosystem, just like they predicted the demise of the mainframe. Sure. The storage area network fill in the blank. I >>Mean, Jeff and I we've oh yeah. We've been on the, Jeff's been a little longer than I have, but we've been working together for 10 plus years on this. And we've, we've heard that many times. Yeah. Yeah. Our, our ecosystem has grown over the years. We've seen some consolidation, some M and a activity, but we're, we're not even actively recruiting partners and it's growing, we're focused on helping our partners gain more, share internally, gain, more share at wallet, but we're still getting organic growth in the program. Really. So it, it shows, I think that there is value in what we can offer them as a platform to build a cloud on. >>Yeah. What's been interesting is there's there's growth and there's some transition as well. Right? So there's been traditional cloud providers. Who've built a cloud in their data center, some sovereign, some not. And then there's other partners that are adopting VCP P because of our SA. 
So we've either converted some technology from product into SA or we've built net new SA or we've acquired companies that have been SA only. And now we have a bigger portfolio that service providers, cloud providers, managed service providers are all interested in. So you get resellers channel partners. Who've historically been doing ELAs and reselling to end customers. They're transitioning their business into doing recurring revenue and the only game in town where you really wanna do recurring revenues, VCP P. So our ecosystem is both growing because our cloud providers with their data center are doing more with our customers. And then we're adding more managed service providers because of our SA portfolio. And that, that, that combo, that one, two punch is creating a much bigger VCP P ecosystem overall. >>Yeah. >>Impressive. >>Do you think we have a better idea of what sovereign cloud means? Yes. I think we do. >>It's not Royal. >>It's all about royalty, >>All royalty. What are some of the things Jeff, as we look on the horizon, obviously seven to 10,000 people here at, at VMwares where people really excited to be back. They want to hear it from VMware. They wanna hear from its partner ecosystem, the community. What are some of the things that you think are on the horizon where sovereign cloud is concerned that are really opportunities yeah. For businesses to get it right. >>Yeah. We're in the early days of this, I think there's still a whole bunch of rules, regulatory laws that have not been defined yet. So I think there's gonna be some more learning. There's gonna be some top down guidance like Gaia X in Europe. That's the way that they're defining who gets access and control over what data and what's in. And what's out of that. So we're gonna get more of these Gaia X type things happening around the world, and they're all gonna be slightly different. Everyone's gonna have to understand what they are, how to interpret and then build something around them. So we need to stay on top of that, myself and Jay, to make sure that we've got the right cloud providers in the right space to capitalize on that, build out the sovereign cloud program over time and make sure that what they're building to support aligns with these different requirements that are out there across different countries. So it's an evolving landscape. That's >>Yeah. And one of the things too, we're also doing from a product perspective to better enable partners to, to address these sovereign cloud workloads is where we have, we have gaps maybe in our portfolio is we're partner partnering with some of our ISVs, like a, Konic like a Forex vem. So we can give our partners object storage or ransomware protection to add on to their sovereign cloud service, all accessible through our cloud director consult. So we're, we're enhancing the program that way. And to Jeff's point earlier, we've got 20 partners today. We're hoping to double that by the end of our fiscal year and, and just take a very methodical approach to growth of the program. >>Sounds great guys, early innings though. Thank you so much for joining Dave and me talking about what software and cloud is describing it to us, and also talking about the difference between that data residency and all the, all of the challenges and the, in the landscape that customers are facing. They can go turn to VMware and its ecosystem for that help. We appreciate your insights and your time. Guys. Thank >>You >>For >>Having us. Our >>Pleasure. 
Appreciate it >>For our guests and Dave Nicholson. I'm Lisa Martin. You've been watching the cube. This is the end of day, two coverage of VMware Explorer, 2022. Have a great rest of your day. We'll see you tomorrow.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Jeff Thompson | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
Florida | LOCATION | 0.99+ |
2016 | DATE | 0.99+ |
UK | LOCATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Japan | LOCATION | 0.99+ |
Jay | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
six | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
20 partners | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
Jay Workman | PERSON | 0.99+ |
England | LOCATION | 0.99+ |
United States | LOCATION | 0.99+ |
10 plus years | QUANTITY | 0.99+ |
seven | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
France | LOCATION | 0.99+ |
VMC | ORGANIZATION | 0.99+ |
Canada | LOCATION | 0.99+ |
New Zealand | LOCATION | 0.99+ |
tomorrow | DATE | 0.99+ |
Latin America | LOCATION | 0.99+ |
UK government | ORGANIZATION | 0.99+ |
Western Europe | LOCATION | 0.99+ |
Geoff Thompson | PERSON | 0.99+ |
Britain | LOCATION | 0.99+ |
EMIA | ORGANIZATION | 0.99+ |
AMEA | ORGANIZATION | 0.99+ |
VMwares | ORGANIZATION | 0.98+ |
each | QUANTITY | 0.98+ |
vSphere | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
Less than 10% | QUANTITY | 0.97+ |
4,500 cloud providers | QUANTITY | 0.97+ |
10,000 people | QUANTITY | 0.97+ |
Konic | ORGANIZATION | 0.97+ |
today | DATE | 0.97+ |
2022 | DATE | 0.97+ |
James Bion, DXC Technology | VMware Explore 2022
(upbeat music) >> Good afternoon. theCUBE is live at VMware Explorer. Lisa Martin here in San Francisco with Dave Nicholson. This is our second day of coverage talking all things VMware and it's ecosystem. We're excited to welcome from DXC Technology, James Bion, Hybrid Cloud and Multi Cloud Offering manager to have a conversation next. Welcome to the program. >> Thank you very much. >> Welcome. >> Talk to us a little bit about before we get into the VMware partnership, what's new at DXC? What's going on? >> So DXC is really evolving and revitalizing into more of a cloud orientated company. So we're already driving change in our customers at the moment. We take them on that cloud journey, but we're taking them in the right way, in a structured mannered way. So we are really excited about it, we're kicking off our Cloud First type, Cloud Right sort of story and helping customers on that journey. >> Yesterday in the keynote, VMware was talking about customers are on this Cloud chaos phase, they want to get to Cloud Smart. You're saying they want to get to Cloud Right. Talk to us about what DXC Cloud Right is, what does it mean? What does it enable businesses to achieve? >> That's a very good question. So DXC has come up with this concept of Cloud Right, we looked at it from a services and outcome. So what do customers want to achieve? And how do we get it successfully? This is not a technology conversation, this is about putting the right workloads at the right place, at the right time, at the right cost to get the right value for your business. It's not about just doing it for the sake of doing it, okay. There's a lot of changes it's not technology only you've got to change how people operate. You've got to work through the organizational change. You need to ensure that you have the right security in place to maintain it. And it's about value, really about value proposition. So we don't just focus on cost, we focus on operations of it, we focus on security of it. We focus on ensuring the value proposition of it and putting not just for one Cloud, it's the right place. Big focus on Hybrid and Multi Cloud solutions in particular, we're very excited about what's happening with VMware Cloud on maybe AWS or et cetera because we see there a real dynamic change for our customers where they can transition across to the right Cloud services, at the right time, at the right place, but minimal disruption to the actual operation of their business. Very easy to move a workload into that place using the same skilled resources, the same tools, the same environment that you have had for many years, the same SLAs. Customers don't want a variance in their SLAs, they just want an outcome at a right price and the right time. >> Right, what are some of the things going on with the VMware partnership and anything you know, here we are at this the event called the theme is "The Center of the Multi Cloud Universe", which I keep saying sounds like a Marvel movie, I think there needs to be some superheroes here. But how is DXC working with VMware to help customers that are in Multi Cloud by default, not by design? >> That's a very good one. So DXC works jointly with VMware for more than a thousand clients out there. Wide diversity of different clients. We go to market together, we work collaboratively to put roadmaps in place for our clients, it's a unified team. 
On top of that, we have an extremely good VMware practice, joint working VMware team working directly with DXC dedicated resources and we deliver real value for clients. For example, we have a customer experience zone, we have a customer innovation zone so we can run proof of concepts on all the different VMware technologies for customers. If they want to try something different, try and push the boundaries a little bit with the VMware products, we can do that for them. But at the end of the day we deliver outcome based services. We are not there to deliver a piece of software, but a technology which show the customer the value of the service that they've been receiving within that. So we bring the VMware fantastic technologies in and then we bring the DXC managed services which we do so well and we look after our customers and do the right thing for our customers. >> So what does the go-to market strategy look like from a DXC perspective? We say that there are a finite number of strategic seats at the customer table. DXC has longstanding deep relationships with customers, so does VMware and probably over a shorter period of time, the Hyper scale Cloud Providers. How are you approaching these relationships with customers? Is it you bringing in your friends from the cloud? Is it the cloud bringing in their friend DXC? What does it look like? >> So we have relationships with all of them, but were agnostic. So we are the people who bring it all together into that unified platform and services that the customers expect. VMware will bring us certainly to the table and we'll bring VMware to the table. Equally, we work very collaboratively with all the cloud providers and we work in deals together. They bring us deals, we bring them deals. So it works extremely well from that perspective, but of course it's a multi-cloud world these days. We don't just deal with one cloud provider, we'll normally have all of the different services to find the right place for our customers. >> Now, one thing that that's been mentioned from DXC is this idea that Cloud First which has been sort of a mantra that scores you points if you're a CIO lately, maybe that's not the best way to wake up in the morning. Why not saying, Cloud First? >> So we have a lot of clients who who've tried that Cloud First journey and they've aggressively taken on migration of workloads. And now that they've settled in a few of those they're discovering maybe the ROI isn't quite what they expected it was going to be. That transformation takes a long time, a very long time. We've seen some of the numbers around averaging a hundred apps can take up to seven years to transition and transform, that's a long time. It makes you almost less agile by doing the transformation quite ironically. So DXC's Cloud Right program really helps you to ensure that you assess those workloads correctly, you target the ones that are going to give you the best business value, possibly the best return on investment using our Cloud and advisory practice to do that. And then obviously off the back of that we've got our migration teams and our run services and our application modernization factories and our application platforms for that. So DXC Cloud Right can certainly help our customers on that journey and get that sort of Hybrid Multi Cloud solution that suits their particular outcomes, not just one Cloud provider. >> So Cloud Right isn't just Cloud migration? >> No. >> People sometimes confuse digital transformation with Cloud migration. >> Correct. 
>> So to be clear Cloud Right and DXC has the ability to work with customers on not just, oh, here, this is how we box it up and ship it out, but what makes sense to box up and ship out. >> Correct, and it's all about that whole end to end life cycle. Remember, this is not just a technology conversation, this is an end to end business conversation. It's the outcomes are important, not the technology. That's why you have good partners like DXC who will help you on that technology journey. >> Let's talk about in the dynamics of the market the last couple of years, we saw so many customers in every industry race to the Cloud, race to digitally transform. You bring up a good point of people interchangeably talking about digital transformation, Cloud migration, but we saw the massive adoption of SaaS technologies. What are you seeing? Are you seeing customers in that sort of Cloud chaos as VMware calls it? That you're coming in with the Cloud Right approach saying, let's actually figure out, you may have done this because of the pandemic maybe it was accelerated, you needed to facilitate collaboration or whatnot, but actually this is the right approach. Are you seeing a lot of customers in that situation? >> We are certainly seeing some customers going into that chaos world. Some of them are still in the early stages of their journey and are taking a more cautious step towards in particular, the companies that would die on systems to be up available all the time. Others have gone too far, the other are in extreme are in the chaos world. And our Cloud Right program will certainly help them to pull their chaos back in, identify what workloads are potentially running in the wrong place, get the framework in place for ensuring that security and governance is in place. Ensuring that we don't have a cost spend blowout in particular, make sure that security is key to everything that we do and operations is key to everything we do. We have our own intelligent Platform X, it's called, our service management platform which is really the engine that sits behind our delivery mechanism. And that's got a whole lot of AI analytics engines in there to identify things and proactively identify workload placements, workload repairs, scripting, and hyper automation behind that too, to keep available here and there. And that's really some of our Cloud Right story, it's not just sorting out the mess, it's sorting out and then running it for you in the right way. >> So what does a typical, a customer engagement look like for a customer in that situation? >> So we would obviously engage our client right advisory team and they would come in and sit down with your application owners, sit down with the business units, identify what success needs to look like. They do all the discovery, they'll run it through our engines to identify what workloads are in the right place, should go to the right place. Just 'cause you can do something doesn't mean you should do something and that's an important thing. So we will come back with that and say, this is where I think your cloud roadmap journey should be. And obviously that takes an intuitive process, but we then can pick off the key topics early at the right time and that low hanging fruit that's really going to drive that value for the customer. >> And where are your customer conversations these days? I mean from a Cloud perspective, digital transformation, we're seeing everything escalate up the C-suite? 
Are you engaging the executives in this conversation so that they really want to facilitate, let's do things the right way that's the most efficient that allows us as a business to do what we're best at? >> So where we've seen programs fail is where we don't have executive leadership and brought in from day one. So if you don't have that executive and business driver and business leadership, then you're definitely not going to be successful. So to answer your question, yes, of course we are, but we also working directly with the IT departments as well. >> So you just brought up an insight executive alignment, critically important. Based on what you've experienced in the real world, contrast that with the sort of message to the world that we hear constantly about Cloud and IT, what would be the most shocking thing that you can share with us that people might not be aware of? It's like what shocks you the most about the disconnect between what everybody talks about and the reality on the ground? Don't name any names of anyone, but give us an example of the like, this is what's really going on. >> So, we certainly are seeing that big sort of move into Cloud quickly, okay. And then the big bill shock comes and just moving a workload across doesn't mean you're in Cloud, it's a transition and transformation to the SaaS and power services, it's where you get your true value out of cloud. So the concept that just 'cause it's in Cloud it's cheap is not always the case. Doing it right in Cloud is definitely going to have some cost value, but it's going to bring other additional values to their business. It's going to give them agility, it's going to give them resilience. So if you look at all three of those platforms cost, agility, and resilience and live across all three of those, then you're definitely going to get the best outcomes. And we've certainly seen some of those where they haven't taken all of those into consideration, quite often it's cost is what drives it, not the other two. And if you can't keep operations up working efficiently then you are in a lot of trouble. >> So Cloud wrong comes with sticker shock. >> It certainly does. >> What's on the horizon for DXC? >> We're certainly seeing a big drive towards apps modernization and certainly help our customers on that journey. DXC is definitely a Cloud company, may that be on Hybrid Cloud, Private Cloud, Public Cloud, DXC is certainly leading that edge and pushing it forward. >> Excellent, James, thank you so much for joining us on the program today talking about what Cloud Right is, the right approach, how you're helping customers really get to that right approach with the people, the processes, and the technology. We appreciate your time. >> Thank you very much. >> For our guest and Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from VMware Explorer, 2022. Our next guest joins us momentarily so don't change the channel. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
James Bion | PERSON | 0.99+ |
James | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
DXC | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
second day | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Cloud Right | TITLE | 0.99+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
Cloud Right | TITLE | 0.98+ |
Cloud First | TITLE | 0.98+ |
more than a thousand clients | QUANTITY | 0.98+ |
Yesterday | DATE | 0.97+ |
DXC Technology | ORGANIZATION | 0.97+ |
DXC | TITLE | 0.97+ |
three | QUANTITY | 0.96+ |
Hyper scale | ORGANIZATION | 0.94+ |
VMware Cloud | TITLE | 0.94+ |
VMware Explorer | ORGANIZATION | 0.93+ |
last couple of years | DATE | 0.92+ |
up to seven years | QUANTITY | 0.89+ |
Marvel | ORGANIZATION | 0.88+ |
Cloud | TITLE | 0.88+ |
Explorer | TITLE | 0.86+ |
Cloud Smart | TITLE | 0.82+ |
VMware Explore 2022 | TITLE | 0.8+ |
one cloud provider | QUANTITY | 0.79+ |
one thing | QUANTITY | 0.79+ |
a hundred apps | QUANTITY | 0.76+ |
theCUBE | ORGANIZATION | 0.75+ |
pandemic | EVENT | 0.73+ |
Hybrid Cloud | ORGANIZATION | 0.71+ |
Center of the Multi Cloud | EVENT | 0.66+ |
Platform | TITLE | 0.6+ |
day one | QUANTITY | 0.58+ |
Providers | ORGANIZATION | 0.55+ |
Cloud | OTHER | 0.5+ |
Universe | TITLE | 0.38+ |
First | TITLE | 0.38+ |
Hannah Duce, Rackspace & Adrianna Bustamante, Rackspace | VMware Explore 2022
>> Greetings from San Francisco. theCUBE is live. This is our second day of wall-to-wall coverage of VMware Explore 2022. Lisa Martin and Dave Nicholson here. We're going to be talking with some ladies from Rackspace next. Please welcome Adriana Bustamante, VP of strategic alliances, and Hannah Duce, director of strategic alliances, from Rackspace. It's great to have you on the program. >> Thank you so much for having us. >> Good afternoon. >> Good morning. Is it lunchtime already? >> Almost, almost. >> Yes, and it's great to be back in person. We were just talking about the keynote yesterday that we were in, and it was standing room only. People are ready to be back. They're ready to be hearing from VMware, its ecosystem, its partners, its community. >> Yes. >> Talk to us, Adriana, about what Rackspace is doing with Dell and VMware, particularly in the healthcare space. >> Sure. So for us, partnerships are a big foundation to how we operate as a company, and I have the privilege of doing it for over 16 years. We've been looking after the Dell and VMware partnership ourselves personally for the last three years, but they've been long-standing partners for us, and how do we go and drive more meaningful joint solutions together? So Rackspace, you know, has been around since '98. We've seen such an evolution of becoming more of this multi-cloud transformation, agile, global partner, and we have a lot of customers that fall in lots of different verticals, from retail to public sector into healthcare. But we started noticing, and what we're trying to drive as a company is, how do we drive more specialized solutions? And because of the pandemic, and post-pandemic, with everyone really trying to figure out what the new normal is and addressing different clients, we saw that need increasing, and we wanted to rally together with our most strategic alliances to do more. >> Hannah, talk about, obviously the pandemic created such problems for every industry, but healthcare being front and center, it still is. Talk about some of the challenges that healthcare organizations are coming to Rackspace going, help. >> Yeah, a common theme that we've heard from some of our large healthcare providers has been, help me do more with less, which we're all trying to do as we navigate the new normal. But in that space we found the opportunity to really leverage some of our long-term expertise and the talent and the resource pool that we have to really help in some of the challenges that are being faced: a resource shortage, a talent shortage. And so Rackspace is able to leverage what we've done for many, many years and really tailor it to the outcomes that healthcare providers are needing nowadays. That more-with-less mantra runs across the gamut, but a lot of it's been, help me modernize, help me get to that next phase, I don't have the resources to DIY it myself anymore, I need to figure out a more robust business continuity program. And so helping with business continuity, DR, you know, third copies of all this data that's growing. So it's not just COVID or pandemic driven, but that's definitely driving the need and the requirement to modernize so much quicker. >> It's interesting that you mentioned Rackspace's history and expertise in doing things and moving that forward and leveraging that, pivoting, focusing on specific environments to create something net new. We've seen a lot of that here. If you go back 10 years, I don't know if that's the perfect date to go back to, but if you go
back 10 years ago and think about VMware, where would we have expected VMware to be in this era of cloud? We may have thought of things very differently. Rackspace, a pioneer in creating off-premises, hey, we will do this for you. We didn't even really call it cloud at the time, right, but it was cloud. >> Yeah. >> And so the ability for entities like Rackspace, like VMware, we had NetApp talking to us about stuff they're doing in the cloud, 10 years ago I would've said no, they'll be gone. So it's really cool to see Rackspace making this transition, being aware of everything that's going on and focusing on the best value proposition moving forward. I mean, do I sound like somebody who would fit into the Rackspace culture right now, or do I not get it? >> Yes, you sound like a Racker. We'll make you an honorary Racker, that's what we call Rackspace employees. >> Yes. You know what we've noticed too is budgets are moving, those decision makers are moving. So again, 10 years ago, just like you said, you would be talking to sometimes a completely different persona than we do today, and we've seen a shift more towards that business value. We have a really unique ability to bring business and technical conversations together. I did a lot of work in the past working with a lot of CMOs and digital transformation companies, and so helping bring IT and business to see the same, and in healthcare, because budgets are living in different places, and even across the board with Rackspace, people are trying to drive more business outcomes, business-driven solutions. So the technical becomes the back end and really the ingredients to make all of that happen, and that's what we're helping to solve. And it's very fast paced. Everyone wants to be agile now, and so they're leaning on us more and more to drive more services. So if you've seen Rackspace evolve, we're driving more of that advisement and those transformation-service-type discussions, where our original history, our DNA, was very much always embedded in driving a great experience. Now they're just wanting more from us, more services: help us figure out the how. >> Adriana, comment on the outcomes that you're helping healthcare organizations achieve. It's such a relatable, tangible topic, healthcare, right? Everybody's got somebody who's sick, or you've been sick, or whatnot. What are some of those outcomes that customers can expect to achieve with Rackspace and VMware? >> Oh, great question. So very much as Hannah mentioned earlier, it's how do I modernize, how do I optimize, how do I take the biggest advantage of the budgets and the landscape that I have? I want to get to the cloud, we need to help our patients and get access to that data. Is this ready to go into the cloud, is this not ready to go into the cloud? How do we help make sure we're taking care of our patients, we're keeping things secure and accessible? You know, what else do you think is coming up? >> Yeah, one specific one: sequencing, genetic sequencing. We've had this come up from a few different types of providers, whether it's medical devices that they may provide to their end clients, and an outcome that they're looking for is, here's what we do, but now we have so many more people we need to give this access to. We need them to be able to have access to the sequencing that all of this is doing,
all of these different entities are doing, and the outcome that they're trying to get to is more collaboration, so that way we can speed up. In the face of a pandemic we can speed up those resolutions, we can speed up, whether it's a vaccine that's needed or something that's going to address the next thing that might be coming. So that's a specific one. I've heard that from a handful of different clients that we work with. And so we are able to deliver them a consolidated place that their application and tooling can run in, and then all of these other entities can safely and securely access this data to do what they're going to do in their own spaces, and then hopefully it helps the betterment of us globally, as humans. In the healthcare space we all benefit from this. So leveraging the technology to really drive a valuable outcome helps us all. >> And by the way, I like "trying to" because it conveys the proper level of humility that we all need to bring to this, because it's complicated, and anybody who looks you in the eye and pretends like they know exactly how to do it, you need to run from those people. >> No, it is. And look, that's where our partners become so significant. We know we're best in class for specific things, but we rely on our partnerships with Dell and VMware to bring their expertise, to bring their tried and true technology, to help us all together collectively deliver something good. >> Technology for good. >> Technology for good. >> It is inherently good, and it's nice when it's used for goodness. >> Yeah. >> Talk about security for a second. We've seen the threat landscape change dramatically. Obviously nobody wants to be the next breach, ransomware becoming a household term, it's now a matter of when we get hit, not if. Where has security gone in terms of conversations with customers going, help us ensure that what we're doing is delivering data access to the right folks that need it, at the right time, in real time, in a secure fashion? >> That's another good question, hot and burning. So I think if we think about past conversations, it was that nice insurance offering that seemed like it came at a high cost: if you really need it, I've never been breached before, I'll get it when I need it. But exactly to your point, it's the when and not the if. So what we're finding, also working with a nice ecosystem of partners as well, anywhere from Akamai to Cloudflare to BT, is how do we help ensure that there is the security, as Hannah mentioned, that we're delivering the right data access to the right people and permissions. We're able to help meet a multitude of compliance and regulations, obviously healthcare and other regulated spaces as well. We look to make sure that from our side of the house, from the infrastructure, we have the right building blocks to help them reach those compliance needs. Obviously it's a mutual partnership in maintaining that compliance, and we're able to provide guidance and best practices to make sure that the data is living in a secure place, that the people that need access to it get it when they need it, and monitor those permissions. And back to your complexity comment, it's more and more complex as we are a global provider. So when you start to talk to our teams in the UK and our clients there, specializing in kind of that sovereign cloud mentality of, hey, we need to have a
cloud that is built for the specific needs that reside within healthcare by region. So it's not just, I mean, you know, we're homegrown out of San Antonio, Texas, so we know the U.S. and have spent time here, but we've been global for many years. So we get down into the nitty-gritty to customize what's needed within each region. >> Well, Hannah, is that part of the Rackspace value proposition at large moving forward? Because frankly, look, if I want something generic I can swipe a credit card and fire up some services. >> Sure. >> Moving forward, is this something that is going to more characterize the Rackspace experience? And I understand the hesitancy to say, hey, it's complicated. It's like, I don't want to hear that, I want to hear that it's easy. It's like, well, okay, we'll make it easy for you, but it's still complicated. Is that the honest answer? >> That's the honest answer, yeah. That's why you need help, right? >> That's why we need to talk about that, because people have a legitimate question: why Rackspace? And I don't want to put you on the spot, but why Rackspace? You've talked a little bit about it already, but kind of encapsulate it. >> Oh gosh, good question. Why Rackspace? It's because, well, you could stand it up yourself. [Laughter] There are many different options out there, and if I had a PowerPoint slide I'd show you this lovely web of options, of directions that you could go. And what is Rackspace's value? It's that we come in and simplify it, because we've had experience with this same use case. Whatever somebody is bringing forward to us is typically something we've dealt with numerous times, and so we're repeating and speeding up the ability to simplify the complex and to deliver something more simplified. Well, it may be complex within us, and we're working to get it done, but the outcome that we're delivering is faster, it's less expensive than dedicating all the resources yourself to do it and go invest in all of that that we've already built up, and we're able to deliver it in a more simplified manner. >> It's like the duck analogy, the feet below the water. >> Yes, exactly. >> And a lot of expertise as well. >> Yes, a lot. >> Talk a little bit about the solution that Dell, VMware, and Rackspace are delivering to customers. >> Sure. So when we think about healthcare clouds, or cloud specific to the healthcare industry, there are some major players within that space. Think Epic, we'll just use them as an example, and this can play out with others, but we have a custom cloud able to host Epic and then provide services up through the Epic application through partnership. So that is broadening the market for us in the sense that we can tailor to what that healthcare provider needs. Do they have the expertise to manage the application? Okay, you do that, and then we will build out a custom-fit cloud for that application. Oh, and you need all the adjacent things that come with it too, so then we have reference architectures built out already to tailor to all those other 40, 80, 90, hundreds of applications that need to come with that. And then you start to think about imaging platforms, so we have imaging platforms available for those specific needs, whether it's MRIs and things like that, and then the long-term retention that's needed with that. So all of these pieces that build out a healthcare ecosystem and those needs, we've
built those out and provide those to our clients. >> Yesterday VMware was talking about cloud chaos. >> Yes. >> And it's true, you talk about the complexity, and Dave talks about it too, acknowledging, yes, this is a very complex thing to do. There are just so many moving parts, so many dynamics, so many people involved, or lack thereof, people. They then talked about this goal of getting customers from cloud chaos to cloud smart. How does that message resonate with Rackspace, and how are you helping customers get from simplifying the chaos to eventually reach that cloud smart goal? >> So a lot of it, I believe, is with the power of our alliances, and I was talking about this earlier, we really believe in creating those powerful ecosystems. And Jay McBain, former Forrester analyst, talks about how the people who are going to come out ahead really are the ones who serve as that orchestration layer, bringing everybody together. So if you look at all of that cloud chaos and all of the different logos and the webs and which decisions to make, it's the ones that can help simplify that and bring it all together. Like, we're going to need a little bit of this, like baking a cake in some ways, we're going to need a little bit of sugar, we'll need this technology, this technology, and whoever is able to put it together in a clean and seamless way. And as Hannah said, we have specific use cases in different verticals, healthcare specifically, and talking from the imaging and the Epic side, helping hospitals and different smaller clinics get to the edge. So we have all of the building blocks to get them what they need, and we can't do that without partners, but we help simplify those outcomes for those customers. So that's where they're cloud smart. Then they're like, I want to be agile, I want to work on my cost, I want to be able to leverage it in a multi-cloud fashion, because some things may inherently need to be on Azure, some things inherently need to be on VMware. How do we make them feel like they still have that modernized platform and technology but still give the security and access that they need? >> Right. >> Yeah, we like to think of it as, are you multi-cloud by accident or multi-cloud by design, and help you get to that multi-cloud by design, leveraging the right tools in the right places. >> And Dell was talking about just that at Dell Technologies World just a couple months ago, that most organizations are multi-cloud by default, not design. Are you seeing any customers, or how are you able to help customers, go from that, we're here by default for whatever reason, acquisition, growth of IT, line of business, and go from that default to a more strategic multi-cloud approach? >> Yes, it takes planning and commitment. You really need the business leaders and the technical leaders bought in and saying, this is what I'm gonna do, because it is a journey. Because exactly right, with M&A it's like you've inherited four different tools, you have databases that kind of look similar but they're a little bit different, and they serve four different things. So at Rackspace we're able to help assess, and we sit down with their teams, we have amazing rock star expertise that will come in and sit with the customers and say, what are we trying to drive for? Let's get a good assessment of the landscape, and let's figure out what you are trying to get towards in your journey, looking at what's the best fit for that application, from where it is now to where it
wants to be. Because we saw a lot of customers move to the cloud very quickly, they went cloud native very fast. Some of it made sense, retailers who had the spikiness, that completely made sense. We had some customers though that we've seen move certain workloads, they've been in the public cloud now for a couple years, but it was a static website, it doesn't make as much sense anymore for certain things. So we're able to help navigate all of those choices for them. >> So it's interesting, you just said something sort of offhand about having experts, having them come in. So if I am a customer and I have some outcome I want to achieve... >> Yes. >> ...the people that I'm going to be talking to are from Rackspace, and the people from Rackspace who are going to be working with the actual people deploying infrastructure are also Rackspace people. So the interesting contrast there with other circumstances oftentimes is, you may have a global systems integrator with smart people representing what a cloud provider is doing, and the perception they try to create is that everybody is working in lockstep, but often there are disconnects between what the real capabilities are and what's being advertised. So, I mean, I know it's a leading question, it's like softball, get your bats out, but isn't that an advantage? You've got a single, you know, the saying used to be one throat to choke, now it's one back to pat because it's kind of more friendly. But talk about that, is that a real advantage? >> It does, it really helps us, because again, this is our expertise, this is where we live. We're really close to the infrastructure, we're great at the advisement on it, we can help with those ongoing day two management and operations and what it feels like to grow and scale. So we lay this out as cleanly and clearly as possible: this is where we're really good, we can help you in these areas. But we do work with system integrators as well as part of our partner community, because they're sometimes working on the bigger overall transformations, and then we're saying, look, we understand this multi-cloud. And it helps us because in the end we're doing that end to end for them, the customer knows this is Rackspace on hand, and we really strive to be very transparent in what it is that we want to drive and the outcomes. So sometimes, when we're gonna talk about a certain new technology, Dell might bring some of their architects to the table, and we will say, here is Dell with us. We're doing that actively in the healthcare space today and it's all coming together, but at the end of the day this is what Rackspace is going to drive and deliver from an end to end, and we tap those people when needed, so you don't have to worry about picking up the phone to call Dell or VMware. >> So if I had worded the hard-hitting journalist question the right way, it would have elicited the same responses. >> Yeah, and it drives accountability at the end of the day, because what we advised on, what we said, now we've got to go deliver. And it's all the same organization driving accountability. >> So from a customer perspective, they're engaging Rackspace, who will then bring in Dell and VMware as needed as we find the solution? >> Exactly, and we have all of the certifications. I mean, the team is great on getting all of the certs because we're handling all of the level one, level two, level three business. They know who to
call, they have their dedicated account teams, they have engagement managers that help them drive what those bigger conversations are, and they don't have to worry about the experts, because we either have it on hand or we'll pull them in as needed if it's the bat phone we need to call. >> Awesome. Ladies, thank you so much for joining Dave and me today, talking about what Rackspace is up to in the partner ecosystem space and specifically what you're doing to help healthcare organizations transform and modernize. We appreciate your insights and your thoughts. >> Yeah, thank you for having us. >> Thank you, pleasure. >> For our guests and Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from VMware Explore 2022. We'll be back after a short break. [Music]
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jay McBain | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Adriana Bustamante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Adriana | PERSON | 0.99+ |
Hannah Deuce | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Adrianna Bustamante | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Rackspace | ORGANIZATION | 0.99+ |
UK | LOCATION | 0.99+ |
Hannah | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
PowerPoint | TITLE | 0.99+ |
Hannah Duce | PERSON | 0.99+ |
pandemic | EVENT | 0.99+ |
10 years ago | DATE | 0.99+ |
10 years ago | DATE | 0.99+ |
rackspace | ORGANIZATION | 0.99+ |
dell | ORGANIZATION | 0.99+ |
San Antonio Texas | LOCATION | 0.99+ |
San Francisco | LOCATION | 0.98+ |
two | QUANTITY | 0.98+ |
second day | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
Health Care Providers | ORGANIZATION | 0.97+ |
third copies | QUANTITY | 0.96+ |
each region | QUANTITY | 0.94+ |
Healthcare | ORGANIZATION | 0.94+ |
Cloud | TITLE | 0.92+ |
VMware | TITLE | 0.91+ |
Azure | TITLE | 0.91+ |
U.S | LOCATION | 0.89+ |
Akamai | ORGANIZATION | 0.89+ |
over over 16 years | QUANTITY | 0.89+ |
a couple months ago | DATE | 0.88+ |
four | QUANTITY | 0.88+ |
single | QUANTITY | 0.86+ |
BT | ORGANIZATION | 0.84+ |
one | QUANTITY | 0.83+ |
hundreds of applications | QUANTITY | 0.83+ |
Dynamics | TITLE | 0.82+ |
Epic | ORGANIZATION | 0.81+ |
last three years | DATE | 0.8+ |
98 | DATE | 0.79+ |
Dave Linthicum, Deloitte | VMware Explore 2022
>> Welcome back everyone to theCUBE's coverage here live in San Francisco for VMware Explore, formerly VMworld. We've been to every VMworld since 2010, and now it's VMware Explore. I'm John Furrier, host, with Dave Vellante, and with Dave Linthicum here. He's the chief cloud strategy officer at Deloitte. Welcome to theCUBE. Thanks for coming on. Appreciate your time. >> Thanks for having me. >> Epic keynote today on stage, all seven minutes of your great seven-minute performance discussion. >> Yes. Very, very quick and to the point. I brought everybody up to speed and left. >> Well, Dave, it's great to have you on theCUBE. We follow your work, we've been following it for a long time. A lot of web services, a lot of SOA kind of in your background, kind of the old web services, AI, you know, samples, RSS, all that good stuff. Now we're in kind of web services on steroids. Cloud came, it's here, we're NextGen. You wrote a great story on Metacloud. You've been following the Supercloud with Dave. Does VMware have it right? >> Yeah, they do. Because I'll tell you, what the market is turning toward is anything that sits above and between the clouds. So things that don't exist in the hyperscalers, things that provide common services above the cloud providers, are where the growth's gonna happen. We haven't really solved that problem yet. And so there's lots of operational aspects, security aspects, and the ability to have some sort of a brokering service that'll scale. So multi-cloud, which is their strategy here, is not about cloud, it's about things that exist in between clouds and making those things work. So getting to another layer of abstraction and automation to finally allow us to make use out of all these hyperscaler services that we're signing on to today. >> Dave, remember the old days back in the eighties, when we were young bucks coming into the business, the interoperability wave was coming. Remember that? Oh yeah, I got a DEC minicomputer, I got an IBM, UNIX was gonna solve that. And then, you know, this other thing over here, and LANs and all, and everything started getting into this whole, okay, networking wasn't just coax, you started to see segments. Interoperability was a huge, what, 10-year run. It feels like that's kind of like the vibe going on here. >> Yeah. We're not focused on having these things interoperate among themselves. What we're doing is putting a layer of things above them which allows them to interoperate. That's a different problem to solve, and it's also solvable. We were talking about getting all these very distinct proprietary systems to communicate one to another and integrate one to another, and that never really happened, right? Cause you gotta get them to agree on interfaces and protocols. But if you put a layer above it, they can talk down to whatever native interfaces are there and deal with the differences between the heterogeneity and abstract yourself from the complexity. And that's kind of the difference, that works. The ability to kind of get everybody to, you know, clunk their heads together and make them work together, that doesn't seem to scale. >> And people gotta be motivated for that. Many people might not be. >> There has to be money. In other words, it has to be a business for them in doing so. >> A couple things I wanna follow up on from your work. You know, this morning they used the term cloud chaos.
When you talk to customers, you know, when they have multiple clouds, are they saying to you, hey, we have cloud chaos? Or do they have cloud chaos and they don't know it, or do they not have cloud chaos? What's the mix? >> Yeah, I don't think the word chaos is used that much, but they do tell me they're hitting a complexity wall, which you do hear out there as a term. So in other words, they're getting to a point where they can't scale operations to deal with the complexity and heterogeneity that they're bringing into the organization because they're using multiple clouds. So that is chaotic, so I guess it is another way to name complexity. They're moving from a thousand cloud services under management to 3,000 cloud services under management. They don't have the operational team, the skill levels to do it, they don't have the tooling to do it. That's a wall, and you have to be able to figure out how to get beyond that wall to make those things work. >> So when we had our conversation about Metacloud and Supercloud, we were, I think, very much aligned in our thinking. And so now you've got this situation where you've got these abstraction layers, but my question is, are we gonna have multiple abstraction layers? And will they talk to each other, or are standards emerging? Will they be able to? >> No, we can't have multiple abstraction layers, else we don't solve the problem. We go from complexity that exists at the native cloud levels to complexity that exists in this thing we're dealing with to deal with complexity. So if we do that, we're screwing up, we have to go back and fix it. So ultimately this is about having common services, common security layers, common operational layers, and things like that that really reduce redundancy within the system. So instead of having, you know, five different security layers in five different cloud providers, we're layering one and providing management and orchestration capabilities to make that happen. If we don't do that, we're not succeeding. >> What do you think about the marketplace? I know there's a lot of things going on that are happening around this. Wanna get your thoughts on, obviously, the industry dynamics, vendors preserving their future. And then you've got customers who have been leveraging the CapEx goodness of, say, Amazon and then have to solve their whole distributed environment problem. So when you look at this, is the order of operations first a common layer of abstraction? Because, you know, it seems like the vendor, I won't say desperation move, but their first move is, we're gonna be the control plane, or, you know, I think Cisco has a vision in their mind that no, no, we're gonna have that management plane. I've heard a lot of people talking about, we're gonna be the management interface into something. How do you see that playing out? Because the order of operations to do the abstraction is to get consensus first, right, not competition. So how do you see that? What's your reaction to that, and what's your observation? >> I think it's gonna be tough for the people who are supplying the underlying services to also be the orchestration and abstraction layers, because they're kind of conflicted in making that happen.
In other words, it's not in their best interest to make all these things work and interoperate one to another, but it is in their best interest to provide a service that everybody's going to leverage. So I see the layers here. Certainly the hyperscalers are gonna play in those layers, and they're welcome to play in those layers. They may come up with a solution that everybody picks, but ultimately it's about independence and your ability to have an objective way of allowing all these things to communicate together and driving this stuff together, to reduce the complexity again. >> So a network box, for instance, may have hooks into it, but not try to dominate it. >> That's right. Yeah, that's right. I think if you're trying to own everything, and I get that a lot when I write about Supercloud and Metacloud, they go, well, we're the Metacloud, we're the Supercloud, you can't be other ones. That's a huge problem to solve, and I know you don't have a solution for that, okay? It's gonna be many different products to make that happen. And the reality is, the people who actually make that work are gonna have to be independent of the various underlying services. They can support them, but they really can't be them. They have to interoperate with those services. >> Do you see like a W3C model, like the Worldwide Web Consortium, remember that came out around '96, came to the US and MIT, and then helped form some of those early standards in the internet? Not DNS, but like the web, DNS was already there and the internet was already there, but the web standards, HTML. I think it wasn't really hardcore, get you in a headlock, but at least it was some sort of group that said, hey, intellectually be honest. Do you see that happening in this area? >> I hope not. >> Why not? >> Here's why. The reality is that when these consortiums come into play, it freezes the market. Everybody waits for the consortium to come up with some sort of a solution that's gonna save the world, and that solution never comes, because you can't get these organizations through committee to figure out some sort of a technology stack that's gonna be working. So I'd rather see the market figure that out, not a consortium. >> When you say the market, you mean the ecosystem, not some burning bush. >> Yeah, not some burning bush. And it just hasn't worked. I mean, if it worked, it'd be great.
I think people are going to make, you don't need to replace something that's working. You just may need to make it easier to >>Use. Let's ask Dave about the, sort of the discussion that was on Twitter this morning. So when VMware announced their, you know, cross cloud services and, and the whole new Tansu one, three, and, and, and, and aria, there was a little chatter on Twitter basically saying, yeah, but VMware they'll never win the developers. And John came and said, well, hi, hang on. You know, if, if you've got open tools and you're embracing those, it's really about the ops and having standards on the op side. And so my question to you is, does VMware, that's >>Not exactly what I said, but close enough, >>Sorry. I mean, I'm paraphrasing. You can fine tune it, but, but does VMware have to win the developers or are they focused on kind of the right areas that whole, you know, op side of DevOps >>Focused on the op side, cuz that's the harder problem to solve. Developers are gonna use whatever tools they need to use to build these applications and roll them out. And they're gonna change all the time. In other words, they're gonna change the tools and technologies to do it in the supply chain. The ops problem is the harder problem to solve the ability to get these things working together and, and running at a certain point of reliability where the failure's not gonna be there. And I think that's gonna be the harder issue and doing that without complexity. >>Yeah. That's the multi-cloud challenge right there. I agree. The question I want to also pivot on that is, is that as we look at some of the reporting we've done and interviews, data and security really are hard areas. People are tune tuning up DevOps in the developer S booming, everyone's going fast, fast and loose. Shifting left, all that stuff's happening. Open source, booming Toga party. Everyone's partying ops is struggling to level up. So I guess the question is what's the order of operations from a customer. So a lot of customers have lifted and shift. The, some are going all in on say, AWS, yeah, I got a little hedge with Azure, but I'm not gonna do a full development team. As you talk to customers, cuz they're the ones deploying the clouds that want to get there, right? What's the order of operations to do it properly in your mind. And what's your advice as you look at as a strategy to, to do it, right? I mean, is there a playbook or some sort of situational, you know, sequence, >>Yes. One that works consistently is number one, you think about operations up front and if you can't solve operations, you have no business rolling out other applications and other databases that quite frankly can't be operated and that's how people are getting into trouble. So in other words, if you get into these very complex architectures, which is what a multicloud is, complex distributed system. Yeah. And you don't have an understanding of how you're gonna operationalize that system at scale, then you have no business in building the system. You have no business of going in a multicloud because you are going to run into that wall and it's gonna lead to a, an outage it's gonna lead to a breach or something that's gonna be company killing. >>So a lot of that's cultural, right. Having, having the cultural fortitude to say, we're gonna start there. We're gonna enforce these standards. >>That's what John CLE said. Yeah. CLE is famous line. >>Yeah, you're right. You're right. 
So what happens, as a consultant, you probably have to insist on that first, right? Or, I mean, I don't know, you probably still do the engagement, but you're gonna be careful about promising an outcome, aren't you? >> You're gonna have to insist on the fact that they're gonna have to do some advanced planning and come up with a very rigorous way in which they're gonna roll it out. And the reality is, if they're not doing that, then the advice would be, you're gonna fail. So it's not a matter of if, it's when it's gonna happen. At some point you're gonna fail: either, number one, you're gonna actually fail in some sort of a big disastrous event, or, more likely than not, you're gonna end up building something that's gonna cost you $10 million more a month to run and it's gonna be under-optimized. >> And is that effective when you say that to a client? Or do they say, okay, but... or do they say, yes, you're right? >> I view my role as someone like a doctor and a lawyer. You may not want to hear what I'm telling you, but the thing is, if I don't tell you the truth, I'm not doing my job as a trusted advisor. And so they'll never get anything but that from us, you know, as a firm, and the reality is they can make their own decisions and we'll have to help them, whatever path they want to go. But we've put the warnings in place. >> And also, situationally, it's IQ driven. Are they ready? What's their makeup? Do they have the kind of talent to execute? And believe me, I totally agree with you on the op side, I think that's right on the money. The question I want to ask you is, okay, assume that someone has the right makeup of team. They've got some badass people in there coding away, DevOps, SREs, you name it, everyone lined up, platform teams, as they said today on stage, all that stuff. What's the CXO conversation in the boardroom that you have around business strategy? Cuz if you assume that cloud is here and you do things right and you get the right advisors in, the next step is, what does it transform my business into? Because you're talking about a fully digitalized business that converges. It's not just IT helping you run an app in the back office with some terminal, it's full-blown business edge apps, business model innovation. Is it that the company becomes a cloud on their own, and they have scale, and they're the supercloud of their category, servicing a power law of second place, third place, SMB market? So, I mean, Goldman Sachs could be the service provider cloud for financial services, maybe. Or is that the dream? What's the dream for the CXO staff? Take us through that. >> What they're trying to do is get a level of automation where they're able to leverage best-of-breed technology to be as innovative as they possibly can, using an architecture that's near a hundred percent optimized. It'll never be a hundred percent optimized, but therefore it's able to run and bring the best value to the business for the least amount of money. That's the big thing. If they want to become a cloud, that's not necessarily a good idea. If they're a finance company, be a finance company, just build these innovations around how to make a finance company be innovative and different. So they can be a disruptor without being disrupted.
I see a lot of companies right now that are going to be exposed in the next 10 years, because a lot of these smaller companies are able to weaponize technology to bring themselves to the next level — digital transformations, whatever — to create business value that's going to be more compelling than the existing player. >> Because they're riding on the CapEx back of Amazon, or some technical innovation? For the smaller guys, what's the lever that beats the incumbent? >> It's the ability to use whatever technology you need to solve your issues. So in other words, I can use anything that exists on the cloud because it's part of the multi-cloud. Am I able to find the services that I need — the best AI system, the best database systems, the fastest transaction processing system — and assemble these things together to solve more innovative problems than my competitor? If I'm able to do that, I'm going to win the game. >> It's a buffet of technology. Pick your meal, come on. >> Okay, so on this operations-first thing, something in my head: remember Alan Nance, when he came on theCUBE and he said, listen, if you're going to do cloud, you better change the operating model, or you'll only drop millions to the bottom line, not billions. He was CIO of Philips at the time. And it's all about the zeros, right? So do you find yourself, in a lot of cases, sort of helping people rearchitect their operating model as a function of what cloud can enable? >> Yeah, every engagement that we go into has operating model change, op model changes, and typically it's going to be major surgery. And so it's reevaluating the skill sets, reevaluating the operating model, reevaluating the culture. In fact, we have a team of people who come in, and that's all they focus on. And so it used to be just kind of an afterthought — we'd put this together and, oh, by the way, I think you need to do this and this and this, and here's what we recommend you do. But it's people who can go in and get the cultural changes going, get the operating model and systems going, who get the folks to where they're going to be successful with it. The reality is, if you don't do that, you're going to fail, because you're not going to have the ability to adapt to a cloud-based infrastructure where you can leverage that scale. >> David's giving a masterclass here on theCUBE at VMware Explore. Thanks for coming on, thanks for spending the valuable time. What's going on in your world right now? Take a quick minute to plug what you're working on. What are you excited about? What's happening? >> Loving life. I'm just running around doing things like this, doing a lot of speaking. I still have the blog on InfoWorld — I've had that for the last 12 years — and I'm just loving the fact that we're innovating and changing the world. And I'm trying to help as many people as I can, as quickly as I can. >> What's the coolest thing you've seen this year in terms of cloud — either weirdness, coolness, or something that made you fall out of your chair and go, wow, that was cool? >> I think the AI capabilities and the application of AI. I'm just seeing use cases there that we never would've thought about — the ability to identify patterns that we couldn't identify in the past, and to do so for the good. I've been an AI analyst; it was my first job out of college, and I'm 60 years old. So it's matured enough that it actually impresses me.
And so we're seeing applications right now. >> That's not NLP anymore, is it? >> No, no, not like that. That's what I was doing, but now we're able to take this technology to the next level and do a lot of good with it. And I think that's what just kind of blows me away. >> Ah, I wish we had 20 more minutes. >> You know, one more masterclass sound bite. We all kind of have kids in college — David and I both do, young ones in college. If you're coming out of college with a CS degree or any kind of smart degree, and you have the plethora of tools now coming and unlimited ways to kind of start something on a clean canvas, what would you do if you were, like, 22 right now? >> I would focus on being a multi-cloud architect. And I would learn a little about everything — learn a little about the various cloud providers. And I would focus on building complex distributed systems and architecting those systems. I would learn about how all these things kind of run together. Don't learn a particular technology, because that technology will ultimately go away; it'll be displaced by something else. Learn holistically what the technology is able to do, and become the orchestrator of that technology. It's a harder problem to solve, but you'll get paid more for it, and it'll be a more fun job. >> Just thinking big picture. >> Big picture, how everything comes together. True architecture problems. >> All right, Dave with a masterclass here on theCUBE. For Dave Vellante, this is theCUBE at VMware Explore 2022. We'll be back with our next segment after this short break.
Sarbjeet Johal | Supercloud22
(upbeat music) >> Welcome back, everyone to CUBE Supercloud 22. I'm John Furrier, your host. Got a great influencer, Cloud Cloud RRT segment with Sarbjeet Johal, Cloud influencer, Cloud economist, Cloud consultant, Cloud advisor. Sarbjeet, welcome back, CUBE alumni. Good to see you. >> Thanks John and nice to be here. >> Now, what's your title? Cloud consultant? Analyst? >> Consultant, actually. Yeah, I'm launching my own business right now formally, soon. It's in stealth mode right now, we'll be (inaudible) >> Well, I'll just call you a Cloud guru, Cloud influencer. You've been great, friend of theCUBE. Really powerful on social. You share a lot of content. You're digging into all the trends. Supercloud is a thing, it's getting a lot of traction. We introduced that concept last reinvent. We were riffing before that. As we kind of were seeing the structural change that is now Supercloud, it really is kind of the destination or outcome of what we're seeing with hybrid cloud as a steady state into the what's now, they call multicloud, which is kind of awkward. It feels like it's default. Like multicloud, multi-vendor, but Supercloud has much more of a comprehensive abstraction around it. What's your thoughts? >> As you said, as Dave says that too, the Supercloud has that abstraction built into it. It's built on top of cloud, right? So it's being built on top of the CapEx which is being spent by likes of AWS and Azure and Google Cloud, and many others, right? So it's leveraging that infrastructure and building software stack on top of that, which is a platform. I see that as a platform being built on top of infrastructure as code. It's another platform which is not native to the cloud providers. So it's like a kind of cross-Cloud platform. That's what I said. >> Yeah, VMware calls it that cloud-cross cloud. I'm not a big fan of the name but I get what you're saying. We had a segment on earlier with Adrian Cockcroft, Laurie McVety and Chris Wolf, all part of the Cloud RRT like ourselves, and you've involved in Cloud from day one. Remember the OpenStack days Early Cloud, AWS, when they started we saw the trajectory and we saw the change. And I think the OpenStack in those early days were tell signs because you saw the movement of API first but Amazon just grew so fast. And then Azure now is catching up, their CapEx is so large that companies like Snowflake's like, "Why should I build my own? "I just sit on top of AWS, "move fast on one native cloud, then figure it out." Seems to be one of the playbooks of the Supercloud. >> Yeah, that is true. And there are reasons behind that. And I think number one reason is the skills gravity. What I call it, the developers and/or operators are trained on one set of APIs. And I've said that many times, to out compete your competition you have to out educate the market. And we know which cloud has done that. We know what traditional vendor has done that, in '90s it was Microsoft, they had VBS number one language and they were winning. So in the cloud era, it's AWS, their marketing efforts, their go-to market strategy, the micro nature of the releasing the micro sort of features, if you will, almost every week there's a new feature. So they have got it. And other two are trying to mimic that and they're having low trouble light. >> Yeah and I think GCP has been struggling compared to the three and native cloud on native as you're right, completely successful. 
As you're caught up and you see the Microsoft, I think is a a great selling point around multiple clouds. And the question that's on the table here is do you stay with the native cloud or you jump right to multicloud? Now multicloud by default is kind of what I see happening. We've been debating this, I'd love to get your thoughts because, Microsoft has a huge install base. They've converted to Office 365. They even throw SQL databases in there to kind of give it a little extra bump on the earnings but I've been super critical on their numbers. I think their shares are, there's clearly overstating their share, in my opinion, compared to AWS is a need of cloud, Azure though is catching up. So you have customers that are happy with Microsoft, that are going to run their apps on Azure. So if a customer has Azure and Microsoft that's technically multiple clouds. >> Yeah, true. >> And it's not a strategy, it's just an outcome. >> Yeah, I see Microsoft cloud as friendly to the internal developers. Internal developers of enterprises. but AWS is a lot more ISV friendly which is the software shops friendly. So that's what they do. They just build software and give it to somebody else. But if you're in-house developer and you have been a Microsoft shop for a long time, which enterprise haven't been that, right? So Microsoft is well entrenched into the enterprise. We know that, right? >> Yeah. >> For a long time. >> Yeah and the old joke was developers love code and just go with a lock in and then ops people don't want lock in because they want choice. So you have the DevOps movement that's been successful and they get DevSecOps. The real focus to me, I think, is the operating teams because the ops side is really with the pressure vis-a-vis. I want to get your reaction because we're seeing kind of the script flip. DevOps worked, infrastructure's code has worked. We don't yet see security as code yet. And you have things like cloud native services which is all developer, goodness. So I think the developers are doing fine. Give 'em a thumbs up and open source's booming. So they're shifting left, CI/CD pipeline. You have some issues around repo, monolithic repos, but devs are doing fine. It's the ops that are now have to level up because that seems to be a hotspot. What's your take? What's your reaction to that? Do you agree? And if you say you agree, why? >> Yeah, I think devs are doing fine because some of the devs are going into ops. Like the whole movement behind DevOps culture is that devs and ops is one team. The people who are building that application they're also operating that as well. But that's very foreign and few in enterprise space. We know that, right? Big companies like Google, Microsoft, Amazon, Twitter, those guys can do that. They're very tech savvy shops. But when it comes to, if you go down from there to the second tier of enterprises, they are having hard time with that. Once you create software, I've said that, I sound like a broken record here. So once you create piece of software, you want to operate it. You're not always creating it. Especially when it's inhouse software development. It's not your core sort of competency to. You're not giving that software to somebody else or they're not multiple tenants of that software. You are the only user of that software as a company, or maybe maximum to your employees and partners. But that's where it stops. 
So there are those differences and when it comes to ops, we have to still differentiate the ops of the big companies, which are tech companies, pure tech companies and ops of the traditional enterprise. And you are right, the ops of the traditional enterprise are having tough time to cope up with the changing nature of things. And because they have to run the old traditional stacks whatever they happen to have, SAP, Oracle, financial, whatnot, right? Thousands of applications, they have to run that. And they have to learn on top of that, new scripting languages to operate the new stack, if you will. >> So for ops teams do they have to spin up operating teams for every cloud specialized tooling, there's consequences to that. >> Yeah. There's economics involved, the process, if you are learning three cloud APIs and most probably you will end up spending a lot more time and money on that. Number one, number two, there are a lot more problems which can arise from that, because of the differences in how the APIs work. The rule says if you pick one primary cloud and then you're focused on that, and most of your workloads are there, and then you go to the secondary cloud number two or three on as need basis. I think that's the right approach. >> Well, I want to get your take on something that I'm observing. And again, maybe it's because I'm old school, been around the IT block for a while. I'm observing the multi-vendors kind of as Dave calls the calisthenics, they're out in the market, trying to push their wears and convincing everyone to run their workloads on their infrastructure. multicloud to me sounds like multi-vendor. And I think there might not be a problem yet today so I want to get your reaction to my thoughts. I see the vendors pushing hard on multicloud because they don't have a native cloud. I mean, IBM ultimately will probably end up being a SaaS application on top of one of the CapEx hyperscale, some say, but I think the playbook today for customers is to stay on one native cloud, run cloud native hybrid go in on OneCloud and go fast. Then get success and then go multiple clouds. versus having a multicloud set of services out of the gate. Because if you're VMware you'd love to have cross cloud abstraction layer but that's lock in too. So what's your lock in? Success in the marketplace or vendor access? >> It's tricky actually. I've said that many times, that you don't wake up in the morning and say like, we're going to do multicloud. Nobody does that by choice. So it falls into your lab because of mostly because of what MNA is. And sometimes because of the price to performance ratio is better somewhere else for certain kind of workloads. That's like foreign few, to be honest with you. That's part of my read is, that being a developer an operator of many sort of systems, if you will. And the third tier which we talked about during the VMworld, I think 2019 that you want vendor diversity, just in case one vendor goes down or it's broken up by feds or something, and you want another vendor, maybe for price negotiation tactics, or- >> That's an op mentality. >> Yeah, yeah. >> And that's true, they want choice. They want to get locked in. >> You want choice because, and also like things can go wrong with the provider. We know that, we focus on top three cloud providers and we sort of assume that they'll be there for next 10 years or so at least. >> And what's also true is not everyone can do everything. >> Yeah, exactly. 
So you have to pick the provider based on all these sort of three sets of high level criteria, if you will. And I think the multicloud should be your last choice. Like you should not be gearing up for that by default but it should be by design, as Chuck said. >> Okay, so I need to ask you what does Supercloud in my opinion, look like five, 10 years out? What's the outcome of a good Supercloud structure? What's it look like? Where did it come from? How did it get there? What's your take? >> I think Supercloud is getting born in the absence of having standards around cloud. That's what it is. Because we don't have standards, we long, or we want the services at different cloud providers, Which have same APIs and there's less learning curve or almost zero learning curve for our developers and operators to learn that stuff. Snowflake is one example and VMware Stack is available at different cloud providers. That's sort of infrastructure as a service example if you will. And snowflake is a sort of data warehouse example and they're going down the stack. Well, they're trying to expand. So there are many examples like that. What was the question again? >> Is Supercloud 10 years out? What does it look like? What's the components? >> Yeah, I think the Supercloud 10 years out will expand because we will expand the software stack faster than the hardware stack and hardware stack will be expanding of course, with the custom chips and all that. There was the huge event yesterday was happening from AWS. >> Yeah, the Silicon. >> Silicon Day. And that's an eyeopening sort of movement and the whole technology consumption, if you will. >> And yeah, the differentiation with the chips with supply chain kind of herding right now, we think it's going to be a forcing function for more cloud adoption. Because if you can't buy networking gear you going to go to the cloud. >> Yeah, so Supercloud to me in 10 years, it will be bigger, better in the likes of HashiCorp. Actually, I think we need likes of HashiCorp on the infrastructure as a service side. I think they will be part of the Supercloud. They are kind of sitting on the side right now kind of a good vendor lost in transition kind of thing. That sort of thing. >> It's like Kubernetes, we'll just close out here. We'll make a statement. Is Kubernetes a developer thing or an infrastructure thing? It's an ops thing. I mean, people are coming out and saying Kubernetes is not a developer issue. >> It's ops thing. >> It's an ops thing. It's in operation, it's under the hood. So you, again, this infrastructure's a service integrating this super pass layer as Dave Vellante and Wikibon call it. >> Yeah, it's ops thing, actually, which enables developers to get that the Azure service, like you can deploy your software in sort of different format containers, and then you don't care like what VMs are those? And, but Serverless is the sort of arising as well. It was hard for a while now it's like the lull state, but I think Serverless will be better in next three to five years on. >> Well, certainly the hyperscale is like AWS and Azure and others have had great CapEx and investments. They need to stay ahead, in your opinion, final question, how do they stay ahead? 'Cause, AWS is not going to stand still nor will Azure, they're pedaling as fast as they can. Google's trying to figure out where they fit in. Are they going to be a real cloud or a software stack? Same with Oracle. To me, it's really, the big race is now with AWS and Azure's nipping at their heels. 
Hyperscale, what do they need to do to differentiate going forward? >> I think they are in a limbo. They, on one side, they don't want to compete with their customers who are sitting on top of them, likes of Snowflake and others, right? And VMware as well. But at the same time, they have to keep expanding and keep innovating. And they're debating within their themselves. Like, should we compete with these guys? Should we launch similar sort of features and functionality? Or should we keep it open? And what I have heard as of now that internally at AWS, especially, they're thinking about keeping it open and letting people sort of (inaudible)- >> And you see them buying some the Cerner with Oracle that bought Cerner, Amazon bought a healthcare company. I think the likes of MongoDB, Snowflake, Databricks, are perfect examples of what we'll see I think on the AWS side. Azure, I'm not so sure, they like to have a little bit more control at the top of the stack with the SaaS, but I think Databricks has been so successful open source, Snowflake, a little bit more proprietary and closed than Databricks. They're doing well is on top of data, and MongoDB has got great success. All of these things compete with AWS higher level services. So, that advantage of those companies not having the CapEx investment and then going multiple clouds on other ecosystems that's a path of customers. Stay one, go fast, get traction, then go. >> That's huge. Actually the last sort comment I want to make is that, Also, that you guys include this in the definition of Supercloud, the likes of Capital One and Soner sort of vendors, right? So they are verticals, Capital One is in this financial vertical, and then Soner which Oracle bar they are in this healthcare vertical. And remember in the beginning of the cloud and when the cloud was just getting born. We used to say that we will have the community clouds which will be serving different verticals. >> Specialty clouds. >> Specialty clouds, community clouds. And actually that is happening now at very sort of small level. But I think it will start happening at a bigger level. The Goldman Sachs and others are trying to build these services on the financial front risk management and whatnot. I think that will be- >> Well, what's interesting, which you're bringing up a great discussion. We were having discussions around these vertical clouds like Goldman Sachs Capital One, Liberty Mutual. They're going all in on one native cloud then going into multiple clouds after, but then there's also the specialty clouds around functionality, app identity, data security. So you have multiple 3D dimensional clouds here. You can have a specialty cloud just on identity. I mean, identity on Amazon is different than Azure. Huge issue. >> Yeah, I think at some point we have to distinguish these things, which are being built on top of these infrastructure as a service, in past with a platform, a service, which is very close to infrastructure service, like the lines are blurred, we have to distinguish these two things from these Superclouds. Actually, what we are calling Supercloud maybe there'll be better term, better name, but we are all industry path actually, including myself and you or everybody else. Like we tend to mix these things up. 
I think we have to separate these things a little bit to make things (inaudible). >> Yeah, I think that's what the super-PaaS thing is about, because the next generation of SaaS has to be solved by innovations in the infrastructure services — to your point about HashiCorp and others. So it's not as clear-cut as infrastructure, platform, SaaS. There's going to be a lot of interplay between these levels of services. >> Yeah, we're in this state of flux. A lot of developers are lost, a lot of operators are lost in this transition, and it's just like our economy right now. I was reading CNBC today, and there's a headline that people are having a hard time understanding what state the economy is in. And the same is true with our technology economy — we don't know what state we are in. It's kind of in a transition phase right now. >> Well, we're definitely in a bad economy relative to the consumer market. I've said on theCUBE publicly, and Dave has as well, not as aggressively: I think tech is still in a boom. I don't think there's a tech bubble at all that's bursting. I think the digital transformation from post-COVID is going to continue, and this is the first recession downturn where the hyperscalers have been in market, delivering the economic value, almost like they're pumping on all cylinders and going to the next level. Go back to 2008 — Amazon Web Services, where were they? They were just emerging. So the cloud economic impact has not been factored into the global GDP relationship. I think all the firms that are looking at GDP growth and tech spend as a correlation are completely missing the boat on the fact that cloud economics and digital transformation are a big part of the new economics. So refactoring business models — this is continuing, and it's just the early days. >> Yeah, I have said many times that cloud works well in a bad economy and cloud works great in a good economy. Do you know why? Because there are different types of workloads in the good economy — a lot of experimentation, innovative solutions go into the cloud. You can do experimentation because you have extra money. But in the bad economy you don't want to spend the CapEx, because you don't have the money; money is expensive at that point. And then you want to keep working and you don't need (inaudible). >> I think inflation's a big factor too right now. Well, Sarbjeet, great to see you. Thanks for coming into our studio for our stage performance for Supercloud 22. This is a pilot episode, and we're going to get a consortium of experts, Cloud RRT like yourselves, into the conversation to discuss what the architecture is, what the taxonomy is, what the key building blocks are, and what things need to be in place for Supercloud capability. Because it's clear that without de facto standards, we're at this tipping point where, if it all comes together, not any one company can do everything. Customers want choice, but they also want to go fast too. So DevOps is working, it's going to the next level. We see this as Supercloud. So thank you so much for your participation. >> Thanks for having me, and I'm looking forward to listening to the other sessions (inaudible). >> We're going to take it on async, we'll take it on the internet. I'm John Furrier, stay tuned for more Supercloud 22 coverage, here at the Palo Alto studios, in one minute. (bright music)
Danny Allan & David Harvey, Veeam | HPE Discover 2022
(inspiring music) >> Announcer: theCUBE presents HPE Discover 2022. Brought to you by HPE. >> Welcome back to theCUBE's coverage of HPE Discover 2022, from the Venetian in Las Vegas, the first Discover since 2019. I really think this is my 14th Discover, when you include HP, when you include Europe. And I got to say this Discover, I think has more energy than any one that I've ever seen, about 8,000 people here. Really excited to have one of HPE's longstanding partners, Veeam CTO, Danny Allen is here, joined by David Harvey, Vice President of Strategic Alliances at Veeam. Guys, good to see you again. It was just earlier, let's see, last month, we were together out here. >> Yeah, just a few weeks ago. It's fantastic to be back and what it's telling us, technology industry is coming back. >> And the events business, of course, is coming back, which we love. I think the expectations were cautious. You saw it at VeeamON, a little more than you expected, a lot of great energy. A lot of people, 'cause it was last month, it was their first time out, >> Yes. >> in two years. Here, I think people have started to go out more, but still, an energy that's palpable. >> You can definitely feel it. Last night, I think I went to four consecutive events and everyone's out having those discussions and having conversations, it's good to be back. >> You guys hosted the Storage party last night, which is epic. I left at midnight, I took a picture, it was still packed. I said, okay, time to go, nothing good happens after midnight kids. David, talk about the alliance with HPE, how it's evolved, and where you see it going? >> I appreciate it, and certainly this, as you said, has been a big alliance for us. Over 10 years or so, fantastic integrations across the board. And you touched on 2019 Discover. We launched with GreenLake at that event, we were one of the launch partners, and we've seen fantastic growth. Overall, what we're excited about, is that continuation of the movement of the customer's buying patterns in line with HPE's portfolio and in line with Veeam. We continue to be with all their primary, secondary storage, we continue to be a spearhead position with GreenLake, which we're really excited about. And we're also really excited to hear from HPE, unfortunately under NDA, some of their future stuff they're investing in, which is a really nice invigoration for what they're doing for their portfolio. And we see that being a big deal for us over the next 24 months. >> Your relationship with HPE predates the HP, HPE split. >> Mmm. >> Yes. >> But it was weird, because they had Data Protector, and that was a quasi-competitor, or really not, but it was a competitor, a legacy competitor, of what you guys have, kind of modern data protection I think is the tagline, if I got it right. Post the split, that was an S-curve moment, wasn't it, in terms of the partnership? >> It really was. If you go back 10 years, we did our first integration sending data to StoreOnce and we had some blueprints around that. But now, if you look what we have, we have integrations on the primary side, so, 3PAR, Primera, Nimble, all their top-tier storage, we can manage the snapshots. We have integration on the target side. We integrate with Catalyst in the movement of data and the management of data. And, as David alluded to, we integrate with GreenLake. So, customers who want to take this as a consumption model, we integrate with that. 
And so it's been, like you said, the strongest relationship that we have on the technology alliance side. >> So, V12, you announced at VeeamON. What does that mean for HPE customers, the relationship? Maybe you guys could both talk about that. >> Technology side, to touch on a few things that we're doing with them, ransomware has been a huge issue. Security's been a big theme, obviously, at the conference, >> Dave: Yeah, you bet. and one of the things we're doing in V12 is adding immutability for both StoreOnce and StoreEver. So, we take the features that our partners have, immutability being big in the security space, and we integrate that fully into the product. So a customer checks a box and says, hey, I want to make sure that the data is secure. >> Yeah, and also, it's another signification about the relationship. Every single release we've done has had HPE at the heart of it, and the same thing is being said with V12. And it shows to our customers, the continual commitment. Relationships come and go. They're hard, and the great news is, 10 years has proven that we get through good times and tricky situations, and we both continue to invest, et cetera. And I think there's a lot of peace of mind and the revenue figures prove that, which is what we're really excited about. >> Yeah I want to come back to that, but just to follow up, Danny, on that immutability, that's a feature that you check? It's service within GreenLake, or within Veeam? How does that all work? >> We have immutability now depending on the target. We introduced the ability to send data, for example, into S3 two years ago, and make it immutable when you send it to an S3 or S3 compatible environment. We added, in Version 11, the ability to take a Linux repository and make it, and harden it, essentially make it immutable. But what we're doing now is taking our partner systems like StoreOnce, like StoreEver, and when we send data there, we take advantage of an API flag or whatever it happens to be, that it makes the data, when it's written to that system, can't be deleted, can't be encrypted. Now, what does that mean for a customer? Well, we do all the hard work in the back end, it's just a check box. They say, I want to make it immutable, and we manage how long it's immutable. Because if you made everything immutable forever, that's hugely expensive, right? So, it's all about, how long is that immutable before you age it out and make sure the new data coming in is immutable. >> Dave: It's like an insurance policy, you have that overlap. >> Yes. >> Right, okay. And then David, you mentioned the revenue, Lou bears that out. I got the IDC guys comin' on later on today. I'll ask 'em about that, if that's their swim lane. But you guys are basically a statistical tie, with Dell for number one? Am I getting that right? And you're growing at a faster rate, I believe, it's hard to tell 'cause I don't think Dell reports on the pace of its growth within data protection. You guys obviously do, but is that right? It's a statistical tie, is it? >> Yeah, hundred percent. >> Yeah, statistical tie for first place, which we're super excited about. When I joined Veeam, I think we were in fifth place, but we've been in the leader's quadrant of the Gartner Magic- >> Cause and effect there or? (panelists laughing) >> No, I don't think so. >> Dave: Ha, I think maybe. >> We've been on a great trajectory. But statistical tie for first place, greatest growth sequentially, and year-over-year, of all of the data protection vendors. 
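A quick illustration of the immutability mechanics Allan describes above. This is our own sketch of what "check a box and the backup can't be deleted or encrypted" typically maps to on an S3-compatible target with Object Lock enabled — it is not Veeam's implementation, and the bucket name, object key, and retention window here are hypothetical.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# The bucket must have been created with Object Lock enabled,
# e.g. create_bucket(..., ObjectLockEnabledForBucket=True).
BUCKET = "backup-repository-example"      # hypothetical
KEY = "backups/job-42/restore-point.vbk"  # hypothetical

# Write the restore point with a compliance-mode retention window.
# Until the retain-until date passes, the object version cannot be
# deleted or overwritten -- not even by the account administrator.
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("restore-point.vbk", "rb") as body:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```

The cost point Allan raises is visible right in the call: the retention window is finite and managed per restore point, because locking everything forever would be prohibitively expensive.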
And that's a testament not just to the technology that we're doing, but partnerships with HPE, because you never do this, the value of a technology is not that technology alone, it's the value of that technology within the ecosystem. And so that's why we're here at HPE Discover. It's our joint technology solutions that we're delivering. >> What are your thoughts or what are you seeing in the field on As-a-service? Because of course, the messaging is all about As-a-service, you'd think, oh, a hundred percent of everything is going to be As-a-service. A lot of customers, they don't mind CapEx, they got good, balance sheet, and they're like, hey, we'll take care of this, and, we're going to build our own little internal cloud. But, what are you seeing in the market in terms of As-a-service, versus, just traditional licensing models? >> Certainly, there's a mix between the two. What I'd say, is that sources that are already As-a-service, think Microsoft 365, think AWS, Azure, GCP, the cloud providers. There's a natural tendency for the customer to want the data protection As-a-service, as well for those. But if you talk about what's on premises, customers who have big data centers deployed, they're not yet, the pendulum has not shifted for that to be data protection As-a-service. But we were early to this game ourselves. We have 10,000, what we call, Veeam Cloud Service Providers, that are offering data protection As-a-service, whether it be on premises, so they're remotely managing it, or cloud hosted, doing data protection for that. >> So, you don't care. You're providing the technology, and then your customers are actually choosing the delivery model. Is that correct? >> A hundred percent, and if you think about what GreenLake is doing for example, that started off as being a financial model, but now they're getting into that services delivery. And what we want to do is enable them to deliver it, As-a-service, not just the financial model, but the outcome for the customer. And so our technology, it's not just do backup, it's do backup for a multi-tenant, multi-customer environment that does all of the multi-tenancy and billing and charge back as part of that service. >> Okay, so you guys don't report on this, but I'm going to ask the question anyway. You're number one now, let's call you, let's declare number one, 'cause we're well past that last reporting and you're growin' faster. So go another quarter, you're now number one, so you're the largest. Do you spend more on R&D in data protection than any other company? >> Yes, I'm quite certain that we do. Now, we have an unfair advantage because we have 450,000 customers. I don't think there's any other data protection company out there, the size and scope and scale, that we have. But we've been expanding, our largest R&D operation center's in Prague, it's in Czech Republic, but we've been expanding that. Last year it grew 40% year on year in R&D, so big investment in that space. You can see this just through our product space. Five years ago, we did data protection of VMware only, and now we do all the virtual environments, all the physical environments, all the major cloud environments, Kubernetes, Microsoft 365, we're launching Salesforce. We announced that at VeeamON last month and it will be coming out in Q3. All of that is coming from our R&D investments. >> A lot of people expect that when a company like Insight, a PE company, purchases a company like Veeam, that one of the things they'll dial down is R&D. 
That did not happen in this case. >> No, they very much treat us as a growth company. We had 22% year-over-year growth in 2020, and 25% year-over-year last year. The growth has been tremendous, they continue to give us the freedom. Now, I expect they'll want returns like that continuously, but we have been delivering, they have been investing. >> One of my favorite conversations of the year was our supercloud conversation, which was awesome, thank you for doing that with me. But that's clearly an area of focus, what we call supercloud, and you don't use that term, I know, you do sometimes, but it's not your marketing, I get that. But that is an R&D intensive effort, is it not? To create that common experience. And you see HPE, attempting to do that as well, across all these different estates. >> A hundred percent. We focus on three things, I always say, our differentiators, simplicity, flexibility, and reliability. Making it simple for the customers is not an easy thing to do. Making that checkbox for immutability? We have to do a lot behind the scenes to make it simple. Same thing on flexibility. We don't care if they're using 3PAR, Primera, Nimble, whatever you want to choose as the primary storage, we will take that out of your hands and make it really easy. You mentioned supercloud. We don't care what the cloud infrastructure, it can be on GreenLake, it can be on AWS, can be on Azure, it can be on GCP, it can be on IBM cloud. It is a lot of effort on our part to abstract the cloud infrastructure, but we do that on behalf of our customers to take away that complexity, it's part of our platform. >> Quick follow-up, and then I want to ask a question of David. I like talking to you guys because you don't care where it is, right? You're truly agnostic to it all. I'm trying to figure out this repatriation thing, cause I hear a lot of hey, Dave, you should look into repatriation that's happened all over the place, and I see pockets of it. What are you seeing in terms of repatriation? Have customers over-rotated to the cloud and now they're pullin' back a little bit? Or is it, as I'm claiming, in pockets? What's your visibility on that? >> Three things I see happening. There's the customers who lifted up their data center, moved it into the cloud and they get the first bill. >> (chuckling) Okay. >> And they will repatriate, there's no question. If I talk to those customers who simply lifted up and moved it over because the CIO told them to, they're moving it back on premises. But a second thing that we see is people moving it over, with tweaks. So they'll take their SQL server database and they'll move it into RDS, they'll change some things. And then you have people who are building cloud-native, they're never coming back on premises, they are building it for the cloud environment. So, we see all three of those. We only really see repatriation on that first scenario, when they get that first bill. >> And when you look at the numbers, I think it gets lost, 'cause you see the cloud is growing so fast. So David, what are the conversations like? You had several events last night, The Veeam party, slash Storage party, from HPE. What are you hearing from your alliance partners and the customers at the event. >> I think Danny touched on that point, it's about philosophy of evolution. 
And I think at the end of the day, whether we're seeing it with our GSI alliances we've got out there, or with the big enterprise conversations we're having with HPE, it's about understanding which workloads they want to move. In our mind, the customers are getting much smarter in making that decision, rather than experimenting. They're really taking a really solid look. And the work we're doing with the GSIs on workplace modernization, data center transformation, they're really having that investment work up front on the workloads, to be able to say, this works for me, for my personality and my company. And so, to the point about movement, it's more about decisive decision at the start, and not feeling like the remit is, I have to do one thing or another, it's about looking at that workflow position. And that's what we've seen with the revenue part as well. We've seen our movement to GreenLake tremendously grow in the last 18 months to two years. And from our GSI work as well, we're seeing the types of conversations really focus on that workload, compared to, hey, I just need a backup solution, and that's really exciting. >> Are you having specific conversations about security, or is it a data protection conversation still, (David chuckles) that's an adjacency to security? >> That's a great question. And I think it's a complex one, because if you come to a company like Veeam, we are there, and you touched on it before, we provide a solution when something has happened with security. We're not doing intrusion detection, we're not doing that barrier position at the end of it, but it's part of an end-to-end assumption. And I don't think that at this particular point, I started in security with RSA and Check Point, it was about layers of protection. Now it's layers of protection, and the inevitability that at some point something will happen, so about the recovery. So the exciting conversations we're having, especially with the big enterprises, is not about the fear factor, it's about, at some point something's going to occur. Speed of recovery is the conversation. And so for us, and your question is, are they talking to us about security, or more, the continuity position? And that's where the synergy's getting a lot simpler, rather than a hard demark between security and backup. >> Yeah, when you look at the stock market, everything's been hit, but security, with the exception of Okta, 'cause it got that weird benign hack, but security, generally, is an area that CIOs have said, hey, we can't really dial that back. We can maybe, some other discretionary stuff, we'll steal and prioritize. But security seems to be, and I think data protection is now part of that discussion. You're not a security company. We've seen some of your competitors actually pivot to become security companies. You're not doing that, but it's very clearly an adjacency, don't you think? >> It's an adjacency, and it's a new conversation that we're having with the Chief Information Security Officer. I had a meeting an hour ago with a customer who was hit by ransomware, and they got the call at 2:00 AM in the morning, after the ransomware they recovered their entire portfolio within 36 hours, from backups. Didn't even contact Veeam, I found out during this meeting. But that is clearly something that the Chief Information Security Officer wants to know about. It's part of his purview, is the recovery of that data. >> And they didn't pay the ransom? >> And they did not pay the ransom, not a penny. 
>> Ahh, we love those stories. Guys, thanks so much for coming on theCUBE. Congratulations on all the success. Love when you guys come on, and it was such a fun event at VeeamON. Great event here, and your presence is, was seen. The Veeam green is everywhere, so appreciate your time. >> Thank you. >> Thanks, Dave. >> Okay, and thank you for watching. This is Dave Vellante for John Furrier and Lisa Martin. We'll be back right after this short break. You're watching theCUBE's coverage of HPE Discover 2022, from Las Vegas. (inspiring music)
Luis Ceze, OctoML | Amazon re:MARS 2022
(upbeat music) >> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event, machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events, re:Invent's the big event at the end of the year, re:Inforce, security, re:MARS, really intersection of the future of space, industrial, automation, which is very heavily DevOps machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO co-founder of OctoML. Welcome to theCUBE. >> Thank you very much for having me in the show, John. >> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is a, I would say small show relative what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story. >> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. So I say sustainable because it means we're going to make it faster and more efficient. You know, use less human effort, and accessible to everyone, accessible to as many developers as possible, and also accessible in any device. So, we started from an open source project that began at University of Washington, where I'm a professor there. And several of the co-founders were PhD students there. We started with this open source project called Apache TVM that had actually contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to get a machine learning model and run on any hardware, like run on CPUs, GPUs, various GPUs, accelerators, and so on. It was the kernel of our company and the project's been around for about six years or so. Company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware cloud and edge. >> So is the thesis that, when it first started, that you want to be agnostic on platform? >> Agnostic on hardware, that's right. >> Hardware, hardware. >> Yeah. >> What was it like back then? What kind of hardware were you talking about back then? Cause a lot's changed, certainly on the silicon side. >> Luis: Absolutely, yeah. >> So take me through the journey, 'cause I could see the progression. I'm connecting the dots here. >> So once upon a time, yeah, no... (both chuckling) >> I walked in the snow with my bare feet. >> You have to be careful because if you wake up the professor in me, then you're going to be here for two hours, you know. >> Fast forward. >> The average version here is that, clearly machine learning has shown to actually solve real interesting, high value problems. And where machine learning runs in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time it was just beginning to start using GPUs for machine learning, we already saw that, with a bunch of machine learning models popping up and CPUs and GPU's starting to be used for machine learning, it was clear that it come opportunity to run on everywhere. >> And GPU's were coming fast. >> GPUs were coming and huge diversity of CPUs, of GPU's and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today. 
So hardware vendors have their own specific stacks. So Nvidia has its own software stack, and so does Intel, AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy IBM OS and IBM database, IBM applications, it was all tightly coupled. And if you wanted to use IBM software, you had to buy IBM hardware. So that's kind of like what machine learning systems look like today. If you buy a certain big name GPU, you've got to use their software. Even if you use their software, which is pretty good, you have to buy their GPUs, right? So, but you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice, the ability to run the models where they best suit them. Right? So that includes picking the best instance in the cloud, that's going to give you the right, you know, cost properties, performance properties, or you might want to run it on the edge. You might run it on an accelerator. >> What year was that roughly, when you were doing this? >> We started that project in 2015, 2016. >> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet. >> Luis: No, it wasn't. >> It was, I'm thinking like 2017 or so. >> Luis: Right. So that was the beginning of, okay, this is an opportunity. AWS, I don't think they had released some of the Nitro stuff that Hamilton was working on. So, they were already kind of going that way. It's kind of like converging. >> Luis: Yeah. >> The space was happening, exploding. >> Right. And the way that was dealt with, and to this day, you know, to a large extent as well, is by backing machine learning models with a bunch of hardware-specific libraries. And we were some of the first ones to say, like, you know what, let's take a compilation approach, take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization. Right? But it was way back when. We can talk about where we are today. >> No, let's fast forward. >> That's the beginning of the open source project. >> But that was a fundamental belief, a worldview there. I mean, you have a worldview that was logical when you compare it to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the speed of the years. More chips are coming, you got GPUs, and seeing what's going on in AWS. Wow! Now it's booming. Now I got unlimited processors, I got silicon on chips, I got it everywhere. >> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now you have, there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and so on, and then hardware targets. So how do you navigate that? What we want here, our vision is to say, people should focus on making the machine learning models do what they want them to do, to solve a problem of high value to them. Right? So model deployment should be completely automatic. Today, it's very, very manual to a large extent.
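For readers who want to see what the compilation approach Luis describes looks like in practice, here is a minimal sketch using the open-source Apache TVM Python API. The ONNX file name, input name, and input shape are illustrative placeholders; the target string is what selects the hardware backend.

```python
import onnx
import tvm
from tvm import relay

# Load a trained model (file name and input shape are illustrative).
onnx_model = onnx.load("resnet50.onnx")
shape_dict = {"data": (1, 3, 224, 224)}

# Import the model into TVM's Relay intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Pick a hardware target: "llvm" for CPU, "cuda" for Nvidia GPUs, and so on.
target = "llvm"

# Compile the model down to efficient native code for that target.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Export the compiled artifact; rebuilding for another device is just a
# matter of changing the target string and recompiling.
lib.export_library("compiled_model.so")
```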
So once you're serious about deploying a machine learning model, you've got a good understanding of where you're going to deploy it, how you're going to deploy it, and then, you know, pick out the right libraries and compilers, and we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent. >> Well, I think that, first of all, I use that line here, 'cause I'm looking at it here live on camera. People can't see, but it's like, I use it on a couple of my interviews because the word agility is very interesting because that's kind of the test on any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on, they had this large scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems view of agility being the output, and removing dependencies, which I think is what you guys were trying to do. >> Only part of what we do. Right? So agility means a bunch of things. First, you know-- >> Yeah, explain. >> Today it takes a couple of months to get a model from, when the model's ready, to production, why not turn that into two hours. Agile, literally, physically agile, in terms of wall clock time. Right? And then the other thing is giving you flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give the ability of getting your model and, you know, getting it compiled, getting it optimized for any instance in the cloud and automatically moving it around. Today, that's not the case. You have to pick one instance and that's what you do. And then you might auto scale with that one instance. So we give the agility of actually running and scaling the model the way you want, in a way that gives you the right SLAs. >> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally, that scale, being able to move things around, making them faster, not having to do that integration work. >> Scale, and run the models where they need to run. Like someday you want to have a large scale deployment in the cloud. You're going to have models in the edge for various reasons, because the speed of light is limited. We cannot make light faster. So, you know, you've got to have some there, that's physics you cannot change. There's privacy reasons. You want to keep data locally, not send it around, to run the model locally. So anyways, and giving the flexibility. >> Let me jump in real quick. I want to ask this specific question because you made me think of something. So we're just having a data mesh conversation. And one of the comments that's come out of a few of these data as code conversations is data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to, but I can move a machine learning algorithm to the edge. 'Cause it's costly to move data. I can move compute, everyone knows that. But now I can move machine learning to anywhere else and not worry about integrating on the fly. So the model is the code. >> It is the product. >> Yeah. And since you said, the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way.
So they do not have any of the typical properties of code. Whenever you write a piece of code and you run it, you don't even think about what the CPU is, we don't think about where it runs, what kind of CPU it runs on, what kind of instance it runs on. But with a machine learning model, you do. So what we've done is created this fully transparent, automated way of allowing you to treat your machine learning models as if they were a regular function that you call, and then that function could run anywhere. >> Yeah. >> Right. >> That's why-- >> That's better. >> Bringing DevOps agility-- >> That's better. >> Yeah. And you can use existing-- >> That's better, because I can run it on the Artemis too, in space. >> You could, yeah. >> If they have the hardware. (both laugh) >> And that allows you to run your existing, continue to use your existing DevOps infrastructure and your existing people. >> So I have to ask you, 'cause since you're a professor, this is like a masterclass on theCube. Thank you for coming on. Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog, that's the diversity in hardware, it tends to be purpose driven. I got a spaceship, I'm going to have hardware on there. >> Luis: Right. >> It's generally viewed in the community here, that everyone I talk to and other communities, open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage, here. Hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this? >> I think they're starting to. Here is why, right. A lot of companies that were hardware first, that thought about software too late, aren't making it. Right? There's a large number of hardware companies, AI chip companies that aren't making it. Probably some of them that won't make it, unfortunately, just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just tooting our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, it's very flexible. So we see a lot of emerging chip companies like SiMa.ai's been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an open infrastructure that keeps it up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies paying attention that early gives them a much higher fighting chance, I'd say. >> Well, first of all, not only are you backable by the VCs 'cause you have pedigree, you're a professor, you're smart, and you get good recruiting-- >> Luis: I don't know about the smart part. >> And you get good recruiting for PhDs out of University of Washington, which is not too shabby a computer science department. But they want to make money. The VCs want to make money. >> Right. >> So you have to make money. So what's the pitch? What's the business model? >> Yeah. Absolutely. >> Share with us what you're thinking there. >> Yeah. The value of using our solution is shorter time to value for your model, from months to hours. Second, you shrink operating expenses, OpEx, because you don't need a specialized, expensive team.
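To make the "model as a regular function" idea above concrete, here is a hedged sketch of hiding a compiled TVM artifact behind an ordinary Python function. It is a generic illustration, not OctoML's actual service API; the library path, input name, and shapes are placeholders carried over from the compilation sketch earlier.

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

def load_model_as_function(lib_path, use_gpu=False):
    """Wrap a compiled model so callers treat it like an ordinary function."""
    dev = tvm.cuda(0) if use_gpu else tvm.cpu(0)
    lib = tvm.runtime.load_module(lib_path)
    module = graph_executor.GraphModule(lib["default"](dev))

    def predict(batch):
        # Callers never see the device, instance type, or runtime underneath.
        module.set_input("data", tvm.nd.array(batch, dev))
        module.run()
        return module.get_output(0).numpy()

    return predict

# Usage: the model now behaves like any other function in the codebase.
predict = load_model_as_function("compiled_model.so")
scores = predict(np.random.rand(1, 3, 224, 224).astype("float32"))
```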
Talk about expensive, expensive engineers who can understand machine learning hardware and software engineering to deploy models. You don't need those teams if you use this automated solution, right? Then you reduce that. And also, in the process of actually getting a model and getting specialized to the hardware, making hardware aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about very significant reduction in costs in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities. Right? So, that's the high level value pitch. But how do we make money? Well, we charge for access to the platform. Right? >> Usage. Consumption. >> Yeah, and value based. Yeah, so it's consumption and value based. So depends on the scale of the deployment. If you're going to deploy machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale. >> So, you have direct sales force then to work those deals. >> Exactly. >> Got it. How many customers do you have? Just curious. >> So we started, the SaaS platform just launched now. So we started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that-- >> I wonder who they are. >> I don't know who they are, right. >> Both start with the letter A. >> And they're both here, right. What is that? >> They both start with the letter A. >> Oh, that's right. >> I won't give it away. (laughing) >> Don't give it away. >> One has three, one has four. (both laugh) >> I'm guessing, by the way. >> Then we have customers in the, actually, early customers have been using the platform from the beginning in the consumer electronics space, in Japan, you know, self driving car technology, as well. As well as some AI first companies that actually, whose core value, the core business come from AI models. >> So, serious, serious customers. They got deep tech chops. They're integrating, they see this as a strategic part of their architecture. >> That's what I call AI native, exactly. But now there's, we have several enterprise customers in line now, we've been talking to. Of course, because now we launched the platform, now we started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now. And we're going to work with them as early customers to go and refine them. >> So, do you sell to the little guys, like us? Will we be customers if we wanted to be? >> You could, absolutely, yeah. >> What we have to do, have machine learning folks on staff? >> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo. >> OctoML. >> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. That allows you to get your image and do style transfer. You know, you can get you and a pineapple and see how you look like with a pineapple texture. 
>> We got a lot of transcript and video data. >> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear-- >> But I could use it. You're not blocking me from using it. Everyone's, it's pretty much democratized. >> You can try the demo, and then you can request access to the platform. >> But you get a lot of more serious deeper customers. But you can serve anybody, what you're saying. >> Luis: We can serve anybody, yeah. >> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple years ago? >> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, I want the audience here to take away is that, there's a lot of progress being made in creating machine learning models. So, there's fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is, how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications both and user applications as well as enablers. So we say an enable of that because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development. >> I've been thinking about this for a long, long, time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code independent of other dependencies is really amazing. It's so obvious now that you say it. What's the choices now? Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware? >> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have them all in the platform you can actually see how this model runs on any instance of any cloud, by the way. So we support all the three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run on, at most 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care is that, is it going to run cheap enough. So we can help you navigate. And also going to make it automatic. >> It's like tire kicking in the dealer showroom. >> Right. >> You can test everything out, you can see the simulation. Are they simulations, or are they real tests? >> Oh, no, we run all in real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like ARMs and Nvidia Jetsons. 
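To illustrate the latency-versus-cost navigation Luis describes above, here is a hedged sketch of the selection logic: benchmark a model on several instance types, keep only the ones that meet the latency SLA, and take the cheapest. The instance names, latencies, and prices below are made-up placeholders, not OctoML benchmark results.

```python
# Hypothetical benchmark results: (instance type, p95 latency in ms, $ per hour).
benchmarks = [
    ("c6i.xlarge",  74.0, 0.170),
    ("g4dn.xlarge", 18.0, 0.526),
    ("inf1.xlarge", 31.0, 0.228),
]

def pick_instance(results, latency_budget_ms):
    """Keep only instances that meet the SLA, then take the cheapest one."""
    eligible = [r for r in results if r[1] <= latency_budget_ms]
    if not eligible:
        raise ValueError("No instance meets the latency budget; relax the SLA.")
    return min(eligible, key=lambda r: r[2])

# With a 50 millisecond budget, the fastest-but-priciest GPU instance loses
# to the cheaper option that still clears the bar.
print(pick_instance(benchmarks, latency_budget_ms=50))  # ('inf1.xlarge', 31.0, 0.228)
```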
And we have the OctoML cloud, which is a bunch of racks with a bunch of Raspberry Pis and Nvidia Jetsons, and very soon, a bunch of mobile phones there too, that can actually run the real hardware, and validate it, and test it out, so you can see that your model runs performant and economically enough in the cloud. And it can run on the edge devices-- >> You're a machine learning as a service. Would that be accurate? >> That's part of it, because we're not doing the machine learning model itself. You come with a model and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API endpoint in the cloud. You send an image and you get a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure but having the same convenience as an API, you can use our service. So, today, chances are that, if you have a model that you know you want to deploy, there might not be an API for it, we actually automatically create the API for you. >> Okay, so that's why I get the DevOps agility for machine learning is a better description. 'Cause it's not, you're not providing the service. You're providing the service of deploying it like DevOps infrastructure as code. You're now ML as code. >> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off. >> 'Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. 'Cause it's a craft. I mean, let's face it. >> Yeah. I want human brains, which are very precious resources, to focus on building those models, that is going to solve business problems. I don't want these very smart human brains figuring out how to get this thing to actually run the right way. This should be automatic. That's why we use machine learning, for machine learning to solve that. >> Here's an idea for you. We should write a book called, The Lean Machine Learning. 'Cause the lean startup was all about DevOps. >> Luis: We could call it machine leaning. No, that's not going to work. (laughs) >> Remember when iteration was the big mantra. Oh, yeah, iterate. You know, that was from DevOps. >> Yeah, that's right. >> This code allowed for standing up stuff fast, doubling down, we all know the history, how it turned out. That was a good value for developers. >> I couldn't agree more. If you don't mind me building on that point. You know, something we see at OctoML, but we also see at Madrona as well. Seeing that there's a trend towards best in breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, and then to the model creation aspect, to the model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say with model monitoring, to go and monitor how a model is doing. Just like you monitor how code is doing in deployment in the cloud. >> It's evolution. I think it's a great step. And again, I love the analogy to the mainframe. I lived during those days. I remember the monolithic, proprietary era, and then, you know, the OSI model kind of blew it up. But that OSI stack never went full stack, and it only stopped at TCP/IP. So, I think the same thing's going on here.
You see some scalability around it, to try to uncouple it, free it. >> Absolutely. And sustainability and accessibility to make it run faster and make it run on any device that you want, by any developer. So, that's the tagline. >> Luis Ceze, thanks for coming on. Professor. >> Thank you. >> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it. >> Thank you very much. Thank you. >> Congratulations, again. All right. OctoML here on theCube. Really important. Uncoupling the machine learning from the hardware specifically. That's only going to make space faster and safer, and more reliable. And that's where the whole theme of re:MARS is. Let's see how they fit in. I'm John Furrier for theCube. Thanks for watching. More coverage after this short break. >> Luis: Thank you. (gentle music)
Changing the Game for Cloud Networking | Pluribus Networks
>>Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business. It's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. And one of the best examples is Amazon's Nitro. That's AWS's custom-built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted. >>They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't, put everything into the public cloud for many reasons. That's one of the tailwinds for tier two cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers, and they don't want to migrate all their workloads to the public cloud. So these providers and on-prem customers, they want to be more like hyperscalers, right? They want to be more agile, and they do that by distributing networking and security functions and pushing them closer to the applications. Now, at the same time, they're unifying their view of the network, so it can be less fragmented, managed more efficiently with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to Changing the Game for Cloud Networking, made possible by Pluribus Networks. My name is Dave Vellante, and today on this special CUBE presentation, John Furrier and I are going to explore these issues in detail. We'll dig into new solutions being created by Pluribus and Nvidia to specifically address offloading wasted resources, accelerating performance, isolating data, and making networks more secure, all while unifying the network experience. We're going to start on the west coast in our Palo Alto studios, where John will talk to Mike of Pluribus and Ami Badani of Nvidia, then we'll bring on Alessandro Barbieri of Pluribus and Pete Lumbis from Nvidia to take a deeper dive into the technology. And then we're gonna bring it back here to our east coast studio and get the independent analyst perspective from Bob Laliberte of the Enterprise Strategy Group. We hope you enjoy the program. Okay, let's do this. Over to John. >>Okay. Let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of networking marketing and developer ecosystem at Nvidia. Great to have you, welcome folks. >>Thank you. Thanks. >>So let's get into the, the problem situation with cloud unified network. What problems are out there?
What challenges do cloud operators have? Mike, let's get into it. >>Yeah, really, you know, the challenges we're looking at are for non-hyperscalers, that's enterprises, governments, um, tier two service providers, cloud service providers, and the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies. And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Um, really ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. Um, we're seeing a growth in cyber attacks. Um, it's, it's not slowing down. It's only getting worse and, you know, solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >>Okay. With that goal in mind, what's the Pluribus vision? How does this tie together? >>Yeah. So, um, basically what we see is, uh, that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there's, there's sort of like discrete, bespoke cloud networks, you know, per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds have different networks that need to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command and not have to go to each one. The second is, like I mentioned, distributed security, um, distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. >>You know, it's, it's, it's sort of like with security, you really can't, you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction: abstract the complexity of all of these discrete networks, the physical, whatever's down there in the physical layer. Yeah. I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet, SDN automation. >>Mike, we've been talking on theCube a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen, how do we get there? How do customers get this vision realized? >>That's a great question. And I appreciate the tee up. I mean, we're, we're here today for that reason. We're introducing two things today. Um, the first is a unified cloud networking vision, and that is a vision of where Pluribus is headed with our partners like Nvidia long term. Um, and that is about, uh, deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds.
Um, and whether that's underlying overlay switch or server, um, hype, any hypervisor infrastructure containers, any workload doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. Um, the first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric. Um, and what's nice about this is we're not starting from scratch. We have a, a, an award-winning adaptive cloud fabric product that is deployed globally. Um, and in particular, uh, we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4g and 5g virtualized cores. We know how to build carrier grade, uh, networking infrastructure, what we're doing now, um, to realize this next generation unified cloud fabric is we're extending from the switch to this Nvidia Bluefield to DPU. We know there's a, >>Hold that up real quick. That's a good, that's a good prop. That's the blue field and video. >>It's the Nvidia Bluefield two DPU data processing unit. And, um, uh, you know, what we're doing, uh, fundamentally is extending our SDN automated fabric, the unified cloud fabric out to the host, but it does take processing power. So we knew that we didn't want to do, we didn't want to implement that running on the CPU, which is what some other companies do because it consumes revenue generating CPU's from the application. So a DPU is a perfect way to implement this. And we knew that Nvidia was the leader with this blue field too. And so that is the first that's, that's the first step in the getting into realizing this vision. >>I mean, Nvidia has always been powering some great workloads of GPU. Now you've got DPU networking and then video is here. What is the relationship with clothes? How did that come together? Tell us the story. >>Yeah. So, you know, we've been working with pluribus for quite some time. I think the last several months was really when it came to fruition and, uh, what pluribus is trying to build and what Nvidia has. So we have, you know, this concept of a Bluefield data processing unit, which if you think about it, conceptually does really three things, offload, accelerate an isolate. So offload your workloads from your CPU to your data processing unit infrastructure workloads that is, uh, accelerate. So there's a bunch of acceleration engines. So you can run infrastructure workloads much faster than you would otherwise, and then isolation. So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this, you know, a couple of years ago, and with pluribus, you know, we've been talking to the pluribus team for quite some months now. >>And I think really the combination of what pluribus is trying to build and what they've developed around this unified cloud fabric, uh, is fits really nicely with the DPU and running that on the DPU and extending it really from your physical switch, all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment. So every server we believe over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. 
And so what pluribus is really trying to do is extending the network fabric from the host, from the switch to the host, and really have that single pane of glass for network operators to be able to configure provision, manage all of the complexity of the network environment. >>So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. So, you know, if you sort of take that concept of isolation and security isolation, what pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that extended to the data processing unit and really have, um, isolated micro-segmentation workloads, whether it's bare metal cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on, on the DPU. >>You know, what I love about this conversation is it reminds me of when you have these changing markets, the product gets pulled out of the market and, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate what sets this apart for customers with what's in it for the customer? >>Yeah. So I mentioned, you know, three things in terms of the value of what the Bluefield brings, right? There's offloading, accelerating, isolating, that's sort of the key core tenants of Bluefield. Um, so that, you know, if you sort of think about what, um, what Bluefields, what we've done, you know, in terms of the differentiation, we're really a robust platform for innovation. So we introduced Bluefield to, uh, last year, we're introducing Bluefield three, which is our next generation of Bluefields, you know, we'll have five X, the arm compute capacity. It will have 400 gig line rate acceleration, four X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, uh, chips to our portfolio every, every 18 months to two years. Um, so that's sort of one of the key areas of differentiation. The other is the, if you look at Nvidia and, and you know, what we're sort of known for is really known for our AI artificial intelligence and our artificial intelligence software, as well as our GPU. >>So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates the, you know, faster, more efficient, secure AI systems from the core of your data center, all the way out to the edge. And so with Nvidia, we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. So, you know, one of the key, one of our key motivations at Nvidia is really to have our partner ecosystem, embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called Doka, and it's an open SDK for our partners to really build and develop solutions using Bluefield and using all these accelerated libraries that we expose through Doka. 
And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >>You know, what's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment Supercloud or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. Right. And it's all kind of, again, this is the new architecture Mike, you were talking about, how does customers run this effectively? Cost-effectively and how do people migrate? >>Yeah, I, I think that is the key question, right? So we've got this beautiful architecture. You, you know, Amazon nitro is a, is a good example of, of a smart NIC architecture that has been successfully deployed, but enterprises and serve tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there and they need this architecture to be cost-effective. And, and that's, that's super key. I mean, the reality is deep user moving fast, but they're not going to be, um, deployed everywhere on day one. Some servers will have DPS right away, some servers will have use and a year or two. And then there are devices that may never have DPS, right. IOT gateways, or legacy servers, even mainframes. Um, so that's the beauty of a solution that creates a fabric across both the switch and the DPU, right. >>Um, and by leveraging the Nvidia Bluefield DPU, what we really like about it is it's open. Um, and that drives, uh, cost efficiencies. And then, um, uh, you know, with this, with this, our architectural approach effectively, you get a unified solution across switch and DPU workload independent doesn't matter what hypervisor it is, integrated visibility, integrated security, and that can, uh, create tremendous cost efficiencies and, and really extract a lot of the expense from, from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy our security policy and is deployed everywhere, automatically saving the oppor, the network operations team and the security operations team time. >>All right. So let me rewind that because that's super important. Get the unified cloud architecture, I'm the customer guy, but it's implemented, what's the value again, take, take me through the value to me. I have a unified environment. What's the value. >>Yeah. So I mean, the value is effectively, um, that, so there's a few pieces of value. The first piece of value is, um, I'm creating this clean D mark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the dev ops team who owned the server and the NetApps team who own the network because they're installing software on the, on the CPU stealing cycles from what should be revenue generating. Uh CPU's. So now by, by terminating the networking on the DPU, we click create this real clean DMARC. So the dev ops folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetApps team, because they want to control the networking. 
>>And now we've got this clean DMARC where the DevOps folks get the services they need and the NetApp folks get the control and agility they need. So that's a huge value. Um, the next piece of value is distributed security. This is essential. I mentioned earlier, you know, put pushing out micro-segmentation and distributed firewall, basically at the application level, right, where I create these small, small segments on an by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Cause the worst thing is a bad actor, penetrates a perimeter firewall and can go wherever they want and wreak havoc. Right? And so that's why this, this is so essential. Um, and the next benefit obviously is this unified networking operating model, right? Having, uh, uh, uh, an operating model across switch and server underlay and overlay, workload agnostic, making the life of the NetApps teams much easier so they can focus their time on really strategy instead of spending an afternoon, deploying a single villain, for example. >>Awesome. And I think also from my standpoint, I mean, perimeter security is pretty much, I mean, they're out there, it gets the firewall still out there exists, but pretty much they're being breached all the time, the perimeter. So you have to have this new security model. And I think the other thing that you mentioned, the separation between dev ops is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is, is huge. Um, new control points. I think you guys have a new architecture that enables the security to be handled more flexible. >>Right. >>That seems to be the killer feature here, >>Right? Yeah. If you look at the data processing unit, I think one of the great things about sort of this new architecture, it's really the foundation for zero trust it's. So like you talked about the perimeter is getting breached. And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the foundation of zero trust. And pluribus is really building on that vision with, uh, allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>This is super exciting. This is an illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I gotta ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with an Nvidia? >>Sure. Um, I mean, we're, you know, we're super excited about the partnership, obviously we're here together. Um, we think we've got a really good solution for the market, so we're jointly marketing it. Um, uh, you know, obviously we appreciate that Nvidia is open. Um, that's, that's sort of in our DNA, we're about open networking. They've got other ISV who are gonna run on Bluefield too. We're probably going to run on other DPS in the, in the future, but right now, um, we're, we feel like we're partnered with the number one, uh, provider of DPS in the world and, uh, super excited about, uh, making a splash with it. >>I'm in get the hot product. >>Yeah. So Bluefield too, as I mentioned was GA last year, we're introducing, uh, well, we now also have the converged accelerator. 
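The per-application micro-segmentation Mike describes above can be pictured as a default-deny policy with a small set of explicit east-west allows. The sketch below is a generic, conceptual illustration only; the segment names, ports, and evaluation logic are hypothetical and are not Pluribus or Nvidia policy syntax.

```python
# Default-deny segmentation: traffic between application segments is blocked
# unless an explicit rule allows it, so a breached web server stays contained.
POLICY = {
    "default": "deny",
    "allow": [
        {"src": "web-tier", "dst": "app-tier", "port": 8443},
        {"src": "app-tier", "dst": "db-tier",  "port": 5432},
    ],
}

def is_allowed(src_segment, dst_segment, port, policy=POLICY):
    """Return True only if an explicit rule permits this east-west flow."""
    for rule in policy["allow"]:
        if (rule["src"], rule["dst"], rule["port"]) == (src_segment, dst_segment, port):
            return True
    return policy["default"] == "allow"

# The web tier may reach the app tier, but a compromised web server cannot
# jump straight to the database.
assert is_allowed("web-tier", "app-tier", 8443)
assert not is_allowed("web-tier", "db-tier", 5432)
```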
So I talked about artificial intelligence or artificial intelligence with the Bluefield DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads. So if you have an artificial intelligence workload and an infrastructure workload, you can warn them separately on the same platform or you can actually use, uh, you can actually run artificial intelligence applications on the Bluefield itself. So that's what the converged accelerator really brings to the table. Uh, so that's available now. Then we have Bluefield three, which will be available late this year. And I talked about sort of, you know, uh, how much better that next generation of Bluefield is in comparison to Bluefield two. So we will see Bluefield three shipping later on this year, and then our software stack, which I talked about, which is called Doka we're on our second version are Doka one dot two. >>We're releasing Doka one dot three, uh, in about two months from now. And so that's really our open ecosystem framework. So allow you to program the Bluefields. So we have all of our acceleration libraries, um, security libraries, that's all packed into this STK called Doka. And it really gives that simplicity to our partners to be able to develop on top of Bluefield. So as we add new generations of Bluefield, you know, next, next year, we'll have, you know, another version and so on and so forth Doka is really that unified unified layer that allows, um, Bluefield to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's sort of the nice thing around, um, around Doka. And then in terms of our go to market model, we're working with every, every major OEM. So, uh, later on this year, you'll see, you know, major server manufacturers, uh, releasing Bluefield enabled servers. So, um, more to come >>Awesome, save money, make it easier, more capabilities, more workload power. This is the future of, of cloud operations. >>Yeah. And, and, and, uh, one thing I'll add is, um, we are, um, we have a number of customers as you'll hear in the next segment, um, that are already signed up and we'll be working with us for our, uh, early field trial starting late April early may. Um, we are accepting registrations. You can go to www.pluribusnetworks.com/e F T a. If you're interested in signing up for, um, uh, being part of our field trial and providing feedback on the product, >>Awesome innovation and network. Thanks so much for sharing the news. Really appreciate it. Thanks so much. Okay. In a moment, we'll be back to look deeper in the product, the integration security zero trust use cases. You're watching the cube, the leader in enterprise tech coverage, >>Cloud networking is complex and fragmented slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >>Pluribus unified cloud networking provides a unified simplify and agile network fabric across all clouds. It brings the simplicity of a public cloud operation model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business philosophy and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking, pluribus unified cloud fabric. 
This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks and across all workloads and virtualization environments. The unified cloud fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately the unified cloud fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds and public clouds. The unified cloud fabric is a comprehensive network solution that includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed. >>To learn more, visit www.pluribusnetworks.com. >>Okay. We're back. I'm John Furrier with theCUBE, and we're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and Nvidia. And we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, who's director of technical marketing at Nvidia, joining remotely. Guys, thanks for coming on. Appreciate it. >>Yeah. >>So deep dive, let's get into the what and how. Alessandro, we heard earlier about the Pluribus Nvidia partnership and the solution you're working together on. What is it? >>Yeah. First let's talk about the what. What we are really integrating with the Nvidia Bluefield DPU technology is Pluribus' network operating system, which, um, has been shipping, uh, in volume, uh, in multiple mission critical networks. So this Netvisor ONE network operating system, it runs today on merchant silicon switches and effectively it's a standard open network operating system for the data center. Um, and the novelty about this system is that it integrates a distributed control plane for an automated and effective SDN overlay. This automation is completely open and interoperable and extensible to other types of clouds; it's not closed to them. And this is actually what we're now porting to the Nvidia DPU. >>Awesome. So how does it integrate into Nvidia hardware, and specifically how is Pluribus integrating its software with the Nvidia hardware? >>Yeah, I think, uh, we leverage some of the interesting properties of the Bluefield DPU hardware, which actually allows us to integrate, uh, um, uh, our software, our network operating system, in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on, uh, on the host. Even more, um, uh, we can also independently manage this network node, the switch on a NIC effectively, um, uh, completely independently from the host.
You don't have to go through the network operating system running on x86 to control this network node. So you get the experience, effectively, of a top of rack for virtual machines or a top of rack for, uh, Kubernetes pods, where instead of, uh, um, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch on a NIC. >>And, uh, also as part of this integration, we, uh, put a lot of effort, a lot of emphasis in, uh, accelerating the entire, uh, data plane for networking and security. So we are taking advantage of the Doka, uh, Nvidia Doka API to program the accelerators. And this accomplishes two things. Number one, uh, you, uh, have much greater performance, much better performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say around 20 to 25% of the server capacity, to be devoted either to, uh, additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20%, if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero code from us running on the x86, and this is, we think, what enables a very clean demarcation between compute and network. >>So Pete, I gotta get, I gotta get you in here. We heard that, uh, the DPU enables a cleaner separation of dev ops and net ops. Can you explain why that's important? Because everyone's talking DevSecOps right now, you've got NetOps, NetSecOps, this separation. Why is this clean separation important? >>Yeah, I think it's a, you know, it's a pragmatic solution, in my opinion. Um, you know, we wish the world was all kind of rainbows and unicorns, but it's a little, a little messier than that. And I think a lot of the dev ops stuff and that, uh, mentality and philosophy, there's a natural fit there. Right? You have applications running on servers. So you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we, we in the networking industry have gotten closer together, but there's still a gap, there's still some distance. And I think that distance isn't going to be closed. And so, you know, again, it comes down to pragmatism, and I think, you know, one of my favorite phrases is, look, good fences make good neighbors. And that's what this is.
But the other part is thinking about this at scale, right? So we're taking one top of rack switch and adding, you know, up to 48 servers per rack. And so that ability to automate, orchestrate and manage at scale becomes absolutely critical. >>I'll Sandra, this is really the why we're talking about here, and this is scale. And again, getting it right. If you don't get it right, you're going to be really kind of up, you know what you know, so this is a huge deal. Networking matters, security matters, automation matters, dev ops, net ops, all coming together, clean separation, um, help us understand how this joint solution with Nvidia fits into the pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >>Yeah, absolutely. So I think here with this solution, we're attacking two major problems in cloud networking. One is, uh, operation of, uh, cloud networking. And the second is a distributing security services in the cloud infrastructure. First, let me talk about the first water. We really unifying. If we're unifying something, something must be at least fragmented or this jointed and the, what is this joint that is actually the network in the cloud. If you look holistically, how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers, you'll build your IP clause fabric leaf in spine typologies. This is actually a well understood the problem. I, I would say, um, there are multiple vendors, uh, uh, with, uh, um, uh, let's say similar technologies, um, very well standardized, whether you will understood, um, and almost a commodity, I would say building an IP fabric these days, but this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint, two services are actually now moved into the compute layer where you actually were called builders, have to instrument the, a separate, uh, network virtualization layer, where they deploy segmentation and security closer to the workloads. >>And this is where the complication arise. These high value part of the cloud network is where you have a plethora of options that they don't talk to each other. And they are very dependent on the kind of hypervisor or compute solution you choose. Um, for example, the networking API to be between an GSXI environment or an hyper V or a Zen are completely disjointed. You have multiple orchestration layers. And when, and then when you throw in also Kubernetes in this, in this, in this type of architecture, uh, you're introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you actually just stacking up multiple networks on the compute layer that they eventually run on the physical fabric infrastructure. Those are all ships in the nights effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any workloads, whether it's this fabric spans on a switch, which can be con connected to a bare metal workload, or can span all the way inside the DPU, uh, where, um, you have, uh, your multi hypervisor compute environment. >>It's one API, one common network control plane, and one common set of segmentation services for the network. 
That's probably the number one, >>You know, it's interesting you, man, I hear you talking, I hear one network month, different operating models reminds me of the old serverless days. You know, there's still servers, but they call it serverless. Is there going to be a term network list? Because at the end of the day, it should be one network, not multiple operating models. This, this is a problem that you guys are working on. Is that right? I mean, I'm not, I'm just joking server listen network list, but the idea is it should be one thing. >>Yeah, it's effectively. What we're trying to do is we are trying to recompose this fragmentation in terms of network operation, across physical networking and server networking server networking is where the majority of the problems are because of the, uh, as much as you have standardized the ways of building, uh, physical networks and cloud fabrics with IP protocols and internet, you don't have that kind of, uh, uh, sort of, uh, um, um, uh, operational efficiency, uh, at the server layer. And, uh, this is what we're trying to attack first. The, with this technology, the second aspect we're trying to attack is are we distribute the security services throughout the infrastructure, more efficiently, whether it's micro-segmentation is a stateful firewall services, or even encryption. Those are all capabilities enabled by the blue field, uh, uh, the Butte technology and, uh, uh, we can actually integrate those capabilities directly into the nettle Fabrica, uh, limiting dramatically, at least for east-west traffic, the sprawl of, uh, security appliances, whether virtual or physical, that is typically the way the people today, uh, segment and secure the traffic in the cloud. >>Awesome. Pete, all kidding aside about network lists and serverless kind of fun, fun play on words there, the network is one thing it's basically distributed computing, right? So I love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why DPU based approach is better than alternatives? >>Yeah, I think what's, what's beautiful and kind of what the DPU brings. That's new to this model is a completely isolated compute environment inside. So, you know, it's the, uh, yo dog, I heard you like a server, so I put a server inside your server. Uh, and so we provide, uh, you know, armed CPU's memory and network accelerators inside, and that is completely isolated from the host. So the server, the, the actual x86 host just thinks it has a regular Nick in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only the separation, um, within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage, but we're taking all of the functions we used to do at the top of rack switch, and we're just shooting them now. >>And, you know, as time has gone on we've, we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top of rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, um, the other option is centralized appliances. 
And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that of aliens good enough, or we hope that if the excellent tunnel is good enough and we can actually apply more advanced techniques there because we can't physically, you know, financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we could do it. >>So what's the what's in it for the customer. I real quick, cause I think this is interesting point. You mentioned policy, everyone in networking knows policy is just a great thing and it adds, you hear it being talked about up the stack as well. When you start getting to orchestrating microservices and whatnot, all that good stuff going on there, containers and whatnot and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility, relative to security policies and application enablement. I mean, is that what what's the customer get out of this architecture? What's the enablement. >>It comes down to, uh, taking again the capabilities that were in that top of rack switch and asserting them down. So that makes simplicity smaller blast radiuses for failure, smaller failure domains, maintenance on the networks, and the systems become easier. Your ability to integrate across workloads becomes infinitely easier. Um, and again, you know, we always want to kind of separate each one of those layers. So just as in say, a VX land network, my leaf and spine don't have to be tightly coupled together. I can now do this at a different layer. And so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. Um, you know, to me, the, the possibilities are endless. Yes, >>It's a great security control plan. Really flexibility is key. And, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandra, this is huge upside, right? You've already identified some successes with some customers on your early field trials. What are they doing and why are they attracted to the solution? >>Yeah, I think the response from customers has been, uh, the most, uh, encouraging and, uh, exciting, uh, for, uh, for us to, uh, to sort of continue and work and develop this product. And we have actually learned a lot in the process. Um, we talked to tier two tier three cloud providers. Uh, we talked to, uh, SP um, software Tyco type of networks, uh, as well as a large enterprise customers, um, in, uh, one particular case. Um, uh, one, uh, I think, um, let me, let me call out a couple of examples here, just to give you a flavor. Uh, there is a service provider, a cloud provider, uh, in Asia who is actually managing a cloud, uh, where they are offering services based on multiple hypervisors. They are native services based on Zen, but they also are on ramp into the cloud, uh, workloads based on, uh, ESI and, uh, uh, and KVM, depending on what the customer picks from the piece on the menu. >>And they have the problem of now orchestrating through their orchestrate or integrating with the Zen center with vSphere, uh, with, uh, open stack to coordinate these multiple environments and in the process to provide security, they actually deploy virtual appliances everywhere, which has a lot of costs, complication, and eats up into the server CPU. 
What they saw in this technology, and they actually call it game changing, is the ability to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. Overall, they're hoping to get a tremendous OpEx benefit out of it, and an overall operational simplification of the cloud infrastructure. That's one potent use case. Another customer, a global enterprise, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. >>So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few telco customers on the EFT (early field trial) program, where the main goal is actually to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is frankly also slow and inefficient. And then they have a physical network to manage on top of that. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the BlueField DPUs. Those are just some examples (for a concrete feel of the kind of east-west segmentation policy being distributed here, see the sketch at the end of this segment). >>That was a great use case, and there's a lot more potential. I see that with unified cloud networking. Great stuff. Pete, shout-out to you guys at NVIDIA; we've been following your success for a long time, and you keep innovating as cloud scales. And Pluribus here, with unified networking, is kind of bringing it to the next level. Great stuff. Great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem. They're trying to think about multiple clouds, trying to think about unification around the network, and giving more security and more flexibility to their teams. How can people learn more? >>Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference. That's the week of March 21st through the 24th. You can go and register for free at nvidia.com/gtc, and you can also watch the recorded session if you end up catching us on YouTube a little bit after the fact. We're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >>Alessandro, how can people learn more? >>Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form, and Pluribus will contact them to either learn more or actually sign up for the early field trial program, which starts at the end of April. >>Okay. Well, we'll leave it there. Thanks to you both for joining; appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching. >>Okay. We've heard from the folks at Pluribus Networks and NVIDIA about their effort to transform cloud networking and unify bespoke infrastructure.
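For a concrete feel of the east-west micro-segmentation being discussed, here is a minimal sketch expressed at the Kubernetes layer with the standard NetworkPolicy API: default-deny ingress in a namespace, plus one explicit allow rule. This is not Pluribus's or NVIDIA's interface; the namespace, labels, and port are invented for the example. The point of the DPU-based approach described above is that policy with this intent can be enforced in the BlueField hardware at the server edge rather than in a virtual appliance on the host CPU or a centralized firewall.

```python
# A minimal sketch of east-west micro-segmentation expressed at the
# Kubernetes layer: default-deny ingress in a namespace, then explicitly
# allow traffic from the frontend pods to the payments pods. This uses the
# standard NetworkPolicy API via the official Python client; the namespace,
# labels, and port are invented for the example.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
net = client.NetworkingV1Api()
ns = "payments"

default_deny = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace=ns),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),   # empty selector = all pods
        policy_types=["Ingress"],                # no ingress rules = deny all
    ),
)

allow_frontend = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend", namespace=ns),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8443)],
        )],
    ),
)

for policy in (default_deny, allow_frontend):
    net.create_namespaced_network_policy(namespace=ns, body=policy)
```

A fabric-level implementation would express the same intent once and enforce it consistently across ESXi, Hyper-V, KVM, and bare-metal workloads, which is the cross-hypervisor consistency the enterprise customer above is missing.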
Now let's get the perspective from an independent analyst and to do so. We welcome in ESG, senior analysts, Bob LA Liberte, Bob. Good to see you. Thanks for coming into our east coast studios. >>Oh, thanks for having me. It's great to be >>Here. Yeah. So this, this idea of unified cloud networking approach, how serious is it? What's what's driving it. >>Yeah, there's certainly a lot of drivers behind it, but probably the first and foremost is the fact that application environments are becoming a lot more distributed, right? So the, it pendulum tends to swing back and forth. And we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, edge locations. And as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it. They have to ensure connectivity to it and all that's driving up complexity. In fact, when we asked in one of our last surveys and last year about network complexity, more than half 54% came out and said, Hey, our network environment is now either more or significantly more complex than it used to be. >>And as a result of that, what you're seeing is it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility yet it's creating more complexity. So a little bit counter to the fact and, you know, really counter to their overarching digital transformation initiatives. From what we've seen, you know, nine out of 10 organizations today are either beginning in process or have a mature digital transformation process or initiative, but their top goals, when you look at them, it probably shouldn't be a surprise. The number one goal is driving operational efficiency. So it makes sense. I've distributed my environment to create agility, but I've created a lot of complexity. So now I need these tools that are going to help me drive operational efficiency, drive better experience. >>I mean, I love how you bring in the data yesterday. Does a great job with that. Uh, questions is, is it about just unifying existing networks or is there sort of a need to rethink kind of a do-over network, how networks are built? >>Yeah, that's a, that's a really good point because certainly unifying networks helps right. Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures and the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices, containers are driving a lot more east west traffic. So in the old days, it used to be easier in north south coming out of the server, one application per server, things like that. Right now you've got hundreds, if not thousands of microservices communicating with each other users communicating to them. So there's a lot more traffic and a lot of it's taking place within the servers themselves. 
The other issue that you starting to see as well from that security perspective, when we were all consolidated, we had those perimeter based legacy, you know, castle and moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right. >>When everything's spread out that that no longer happens. So we're absolutely seeing, um, organizations trying to, trying to make a shift. And, and I think much, like if you think about the shift that we're seeing with all the remote workers and the sassy framework to enable a secure framework there, this it's almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're, they're fully protected. Uh, and that's really driving a lot of the, you know, the zero trust stuff you hear, right? So never trust, always verify, making sure that everything is, is, is really secure micro-segmentation is another big area. So ensuring that these applications, when they're connected to each other, they're, they're fully segmented out. And that's again, because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. So that by doing that, it really makes it a lot harder for them to see everything that's in there. >>You know, you mentioned zero trust. It used to be a buzzword, and now it's like become a mandate. And I love the mode analogy. You know, you build a moat to protect the queen and the castle, the Queens left the castles, it's just distributed. So how should we think about this, this pluribus and Nvidia solution. There's a spectrum, help us understand that you've got appliances, you've got pure software solutions. You've got what pluribus is doing with Nvidia, help us understand that. >>Yeah, absolutely. I think as organizations recognize the need to distribute their services to closer to the applications, they're trying different models. So from a legacy approach, you know, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part for that is if you want all this traffic to be secured, you're actually sending it out of the server up through the rack, usually to in different location in the data center and back. So with the need for agility, with the need for performance, right, that adds a lot of latency. Plus when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. Okay. So that's a, it's a great approach, right? It brings it really close to the applications, but the things you start running into there, there's a couple of things. One is that you start seeing that the DevOps team start taking on that networking and security responsibility, which they >>Don't want to >>Do, they don't want to do right. And the operations teams loses a little bit of visibility into that. Um, plus when you load the software onto the server, you're taking up precious CPU cycles. So if you're really wanting your applications to perform at an optimized state, having additional software on there, isn't going to, isn't going to do it. 
So, you know, when we think about all those types of things, right, and certainly the other side effects of that is the impact of the performance, but there's also a cost. So if you have to buy more servers because your CPU's are being utilized, right, and you have hundreds or thousands of servers, right, those costs are going to add up. So what, what Nvidia and pluribus have done by working together is to be able to take some of those services and be able to deploy them onto a smart Nick, right? >>To be able to deploy the DPU based smart SMARTNICK into the servers themselves. And then pluribus has come in and said, we're going to unify create that unified fabric across the networking space, into those networking services all the way down to the server. So the benefits of having that are pretty clear in that you're offloading that capability from the server. So your CPU's are optimized. You're saving a lot of money. You're not having to go outside of the server and go to a different rack somewhere else in the data center. So your performance is going to be optimized as well. You're not going to incur any latency hit for every trip round trip to the, to the firewall and back. So I think all those things are really important. Plus the fact that you're going to see from a, an organizational aspect, we talked about the dev ops and net ops teams. The network operations teams now can work with the security teams to establish the security policies and the networking policies. So that they've dev ops teams. Don't have to worry about that. So essentially they just create the guardrails and let the dev op team run. Cause that's what they want. They want that agility and speed. >>Yeah. Your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted. The cores are wasted doing storage offload or, or networking or security offload. And, you know, I've said many times everybody needs a nitro like Amazon nugget, but you can't go, you can only buy Amazon nitro if you go into AWS. Right. Everybody needs a nitro. So is that how we should think about this? >>Yeah. That's a great analogy to think about this. Um, and I think I would take it a step further because it's, it's almost the opposite end of the spectrum because pluribus and video are doing this in a very open way. And so pluribus has always been a proponent of open networking. And so what they're trying to do is extend that now to these distributed services. So leverage working with Nvidia, who's also open as well, being able to bring that to bear so that organizations can not only take advantage of these distributed services, but also that unified networking fabric, that unified cloud fabric across that environment from the server across the switches, the other key piece of what pluribus is doing, because they've been doing this for a while now, and they've been doing it with the older application environments and the older server environments, they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern application supported, but also the legacy environments, um, you know, bare metal. You could go any type of virtualization, you can run containers, et cetera. So a wide gambit of different technologies hosting those applications supported by a unified cloud fabric from pluribus. >>So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >>Yeah. 
Well, think what it does for, again, from that operational efficiency, when you're going from a legacy environment to that modern environment, it helps with the migration helps you accelerate that migration because you're not switching different management systems to accomplish that. You've got the same unified networking fabric that you've been working with to enable you to run your legacy as well as transfer over to those modern applications. Okay. >>So your people are comfortable with the skillsets, et cetera. All right. I'll give you the last word. Give us the bottom line here. >>So yeah, I think obviously with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk on these organizations to be able to get not only security, but also visibility into those environments. And so organizations have to find solutions. As I said, at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution, that it goes from the server across the servers to multiple different environments, right in different cloud environments is certainly going to help organizations drive that operational efficiency. It's going to help them save money for visibility, for security and even open networking. So a great opportunity for organizations, especially large enterprises, cloud providers who are trying to build that hyperscaler like environment. You mentioned the nitro card, right? This is a great way to do it with an open solution. >>Bob, thanks so much for, for coming in and sharing your insights. Appreciate it. >>You're welcome. Thanks. >>Thanks for watching the program today. Remember all these videos are available on demand@thekey.net. You can check out all the news from today@siliconangle.com and of course, pluribus networks.com many thanks diplomas for making this program possible and sponsoring the cube. This is Dave Volante. Thanks for watching. Be well, we'll see you next time.
SUMMARY :
And one of the best examples is Amazon's nitro. So if you can eliminate that waste, and Pete Lummus from Nvidia to take a deeper dive into the technology. Great to have you welcome folks. Thank you. So let's get into the, the problem situation with cloud unified network. and the first mandate for them is to become as agile as a hyperscaler. How does this tie together? Each of the public clouds have different networks that needs to be unified. So that's the fourth tenant How do customers get this vision realized? And I appreciate the tee up. That's the blue field and video. And so that is the first that's, that's the first step in the getting into realizing What is the relationship with clothes? So we have, you know, this concept of a Bluefield data processing unit, which if you think about it, the host, from the switch to the host, and really have that single pane of glass for So it really is a magical partnership between the two companies with pulled out of the market and, and you guys step up and create these new solutions. Um, so that, you know, if you sort of think about what, So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called And it's all kind of, again, this is the new architecture Mike, you were talking about, how does customers So they need to migrate there and they need this architecture to be cost-effective. And then, um, uh, you know, with this, with this, our architectural approach effectively, Get the unified cloud architecture, I'm the customer guy, So now by, by terminating the networking on the DPU, Um, and the next benefit obviously So you have to have this new security model. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the the go to market with an Nvidia? in the future, but right now, um, we're, we feel like we're partnered with the number one, And I talked about sort of, you know, uh, how much better that next generation of Bluefield So as we add new generations of Bluefield, you know, next, This is the future of, of cloud operations. You can go to www.pluribusnetworks.com/e Thanks so much for sharing the news. How can you simplify and unify your cloud networks to increase agility and business velocity? Ultimately the unified cloud fabric extends seamlessly across And we'll examine some of the use cases with Alessandra Burberry, Um, and the novelty about this system that integrates a distributed control So how does it integrate into Nvidia hardware and specifically So the first byproduct of this approach is that whatever And second, this gives you the ability to free up, I would say around 20, and this is what we think this enables a very clean demarcation between computer and So Pete, I gotta get, I gotta get you in here. And so, you know, again, it comes down to pragmatism and I think, So if infrastructure is code, you know, you're talking about, you know, that part of the stack And so that ability to automate, into the pluribus unified cloud networking vision, because this is what people are talking but this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint, on the kind of hypervisor or compute solution you choose. 
That's probably the number one, I mean, I'm not, I'm just joking server listen network list, but the idea is it should the Butte technology and, uh, uh, we can actually integrate those capabilities directly So I love to get your thoughts about Uh, and so we provide, uh, you know, armed CPU's memory scale large enough, the question is, can you afford it? What's the benefit to the customers with this approach? And so you can run a DPU You've already identified some successes with some customers on your early field trials. couple of examples here, just to give you a flavor. And overall, they're hoping to get out of it, uh, uh, tremendous, and then they have a physical network to manage the, the idea of having again, one network, So I got to ask both of you to wrap this up. Um, so that's the week of March 21st through 24th. more or to know more and actually to sign up for the actual early field trial program, You're going to hear an independent analyst perspective and review some of the research from the enterprise strategy group ESG. Now let's get the perspective It's great to be What's what's driving it. So organizations are having to deal with this highly So a little bit counter to the fact and, you know, really counter to their overarching digital transformation I mean, I love how you bring in the data yesterday. So in the old days, it used to be easier in north south coming out of the server, So that by doing that, it really makes it a lot harder for them to see And I love the mode analogy. but the things you start running into there, there's a couple of things. So if you have to buy more servers because your CPU's are being utilized, the server and go to a different rack somewhere else in the data center. So is that how we should think about this? environments and the older server environments, they're able to provide that unified networking experience across environment, it helps with the migration helps you accelerate that migration because you're not switching different management I'll give you the last word. that it goes from the server across the servers to multiple different environments, right in different cloud environments Bob, thanks so much for, for coming in and sharing your insights. You're welcome. You can check out all the news from today@siliconangle.com and of course,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Donnie | PERSON | 0.99+ |
Bob Liberte | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Alessandra Burberry | PERSON | 0.99+ |
Sandra | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Pete Bloomberg | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Asia | LOCATION | 0.99+ |
Alexandra | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Pete Lummus | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Bob LA Liberte | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
John | PERSON | 0.99+ |
ESG | ORGANIZATION | 0.99+ |
Bob | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
25 | QUANTITY | 0.99+ |
Alessandra Bobby | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Bluefield | ORGANIZATION | 0.99+ |
NetApps | ORGANIZATION | 0.99+ |
demand@thekey.net | OTHER | 0.99+ |
20% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
a year | QUANTITY | 0.99+ |
March 21st | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
www.pluribusnetworks.com/e | OTHER | 0.99+ |
Tyco | ORGANIZATION | 0.99+ |
late April | DATE | 0.99+ |
Doka | TITLE | 0.99+ |
400 gig | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
second version | QUANTITY | 0.99+ |
two services | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
third area | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
second aspect | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Each | QUANTITY | 0.99+ |
www.pluribusnetworks.com | OTHER | 0.99+ |
Pete | PERSON | 0.99+ |
last year | DATE | 0.99+ |
one application | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Alessandro Barbieri and Pete Lumbis
>>mhm. Okay, we're back. I'm John. Fully with the Cuban. We're going to go deeper into a deep dive into unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alexandra Barberry, VP of product Management and Pluribus Networks. And Pete Lambasts, the director of technical market and video. Remotely guys, thanks for coming on. Appreciate it. >>I think >>so. Deep dive. Let's get into the what and how Alexandra, we heard earlier about the pluribus and video partnership in the solution you're working together on. What is it? >>Yeah. First, let's talk about the what? What are we really integrating with the NVIDIA Bluefield deep You Technology pluribus says, uh, has been shipping, uh, in volume in multiple mission critical networks. So this adviser, one network operating systems it runs today on merchant silicon switches and effectively, it's a standard based open network computing system for data centre. Um, and the novelty about this operating system is that it integrates a distributed the control plane for Atwater made effective in STN overlay. This automation is completely open and interoperable, and extensible to other type of clouds is nothing closed and this is actually what we're now porting to the NVIDIA GPU. >>Awesome. So how does it integrate into video hardware? And specifically, how is plural is integrating its software within video hardware? >>Yeah, I think we leverage some of the interesting properties of the blue field the GPU hardware, which allows actually to integrate, um, our soft our network operating system in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the GPU card is completely agnostic to the hyper visor layer or OS layer running on on the host even more. Um, uh, we can also independently manage this network. Note this switch on a nick effectively, uh, managed completely independently from the host. You don't have to go through the network operating system running on X 86 to control this network node. So you truly have the experience effectively of a top of rack for virtual machine or a top of rack for kubernetes spots. Where instead of, uh, um, if you allow me with analogy instead of connecting a server nique directly to a switchboard now you're connecting a VM virtual interface to a virtual interface on the switch on a nick. And also as part of this integration, we, uh, put a lot of effort, a lot of emphasis in accelerating the entire day to play in for networking and security. So we are taking advantage of the DACA, uh, video DACA api to programme the accelerators and this your accomplished two things with that number one, you, uh, have much greater performance, much better performance than running the same network services on an X 86 CPU. And second, this gives you the ability to free up. I would say around 2025% of the server capacity to be devoted either to additional war close to run your cloud applications. Or perhaps you can actually shrink the power footprint and compute footprint of your data centre by 20% if you want to run. The same number of computer work was so great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero quote from pluribus running on the X 86 this is what why we think this enables a very clean demarcation between computer and network. >>So, Pete, I gotta get I gotta get you in here. 
We heard that the GPUS enable cleaner separation of devops and net ops. Can you explain why that's important? Because everybody's talking. Def SEC ops, right now you've got Net ops. Net net SEC ops, this separation. Why is this clean separation important? >>Yeah, I think it's, uh, you know, it's a pragmatic solution, in my opinion, Um, you know, we wish the world was all kind of rainbows and unicorns, but it's a little a little messier than that. And I think a lot of the devops stuff in that, uh, mentality and philosophy. There's a natural fit there, right? You have applications running on servers. So you're talking about developers with those applications integrating with the operators of those servers? Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we we in the networking industry have gotten closer together. But there's still a gap. There's still some distance, and I think in that distance isn't going to be closed and So again it comes down to pragmatism. And I think, you know, one of my favourite phrases is look, good fences make good neighbours. And that's what this is. Yeah, >>it's a great point because devops has become kind of the calling card for cloud. Right? But devops is a simply infrastructure as code infrastructure is networking, right? So if infrastructure as code, you know, you're talking about, you know, that part of the stack under the covers under the hood, if you will. This is super important distinction. And this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >>Yeah, exactly. And I think that's where one from from the policy, the security, the zero trust aspect of this right. If you get it wrong on that network side, all of a sudden, you you can totally open up that those capabilities and so security is part of that. But the other part is thinking about this at scale, right. So we're taking one top of rack switch and adding, you know, up to 48 servers per rack, and so that ability to automate orchestrate and manage its scale becomes absolutely critical. >>Alexandra, this is really the why we're talking about here. And this is scale and again getting it right. If you don't get it right, you're gonna be really kind of up. You know what you know. So this is a huge deal. Networking matters. Security matters. Automation matters. DEVOPS. Net ops all coming together. Clean separation. Help us understand how this joint solution within video gets into the pluribus unified cloud networking vision. Because this is what people are talking about and working on right now. >>Yeah, absolutely. So I think here with this solution, we're talking to major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about first. What are we really unifying? If you really find something, something must be at least fragmented or disjointed. And what is this? Joint is actually the network in the cloud. If you look holistically how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your I P clause fabric leaf and spine to apologies. this is actually well understood the problem. I would say, um, there are multiple vendors with a similar technologies. Very well, standardised. Very well understood. 
Um, and almost a commodity, I would say building an I P fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer where you actually were called. Builders have to instrument a separate network virtualisation layer, where they deploy segmentation and security closer to the workloads. And this is where the complication arise. This high value part of the cloud network is where you have a plethora of options, that they don't talk to each other, and they are very dependent on the kind of hyper visor or compute solution you choose. Um, for example, the networking API between an SX I environment or and hyper V or a Zen are completely disjointed. You have multiple orchestration layers and when and then when you throw in Also kubernetes in this In this in this type of architecture, uh, you're introducing yet another level of networking, and when you burn it, it runs on top of the M s, which is a prevalent approach. You actually just stuck in multiple networks on the compute layer that they eventually run on the physical fabric infrastructure. Those are all ships in the night effectively, right? They operate as completely disjointed. And we're trying to attack this problem first with the notion of a unified fabric, which is independent from any work clothes. Uh, whether it's this fabric spans on a switch which can become connected to a bare metal workload or can spend all the way inside the deep You where you have your multi hypervisors computer environment. It's one a P I one common network control plane and one common set of segmentation services for the network. That's probably number one. >>You know, it's interesting you I hear you talking. I hear one network different operating models reminds me the old server list days. You know there's still servers, but they called server list. Is there going to be a term network list? Because at the end of the, it should be one network, not multiple operating models. This this is like a problem that you guys are working on. Is that right? I mean, I'm not I'm just joking. Server, Listen, network list. But the idea is it should be one thing. >>Yeah, it's effectively. What we're trying to do is we're trying to recompose this fragmentation in terms of network operations across physical networking and server networking. Server networking is where the majority of the problems are because of the as much as you have standardised the ways of building, uh, physical networks and cloud fabrics with high people articles on the Internet. And you don't have that kind of, uh, sort of, uh, operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently. Whether it's micro segmentation is a state, full firewall services or even encryption, those are all capabilities enabled by the blue field deep you technology and, uh, we can actually integrate those capabilities directly into the network fabric. Limiting dramatically, at least for is to have traffic, the sprawl of security appliances with a virtual or physical that is typically the way people today segment and secured the traffic in the >>cloud. All kidding aside about network. Listen, Civil is kind of fun. Fun play on words There the network is one thing is basically distributed computing, right? 
So I love to get your thoughts about this Distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why DPU based approach is better than alternatives? >>Yeah, I think. What's what's beautiful and kind of what the deep you brings that's new to this model is completely isolated. Compute environment inside. So you know, it's the yo dog. I heard you like a server, So I put a server inside your server. Uh, and so we provide, you know, arm CPUs, memory and network accelerators inside, and that is completely isolated from the host. So the server, the the actual X 86 host just thinks it has a regular nick in there. But you actually have this full control plane thing. It's just like taking your top of rack, switch and shovel. Get inside of your compute node. And so you have not only the separation, um, within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage. But we're taking all of the functions we used to do at the top of rack Switch, and we distribute them now. And, you know, as time has gone on, we've we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top of rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, um, the other option is centralised appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that if aliens good enough or we hope that if you excellent tunnel is good enough, and we can actually apply more advanced techniques there because we can't physically, financially afford that appliance to see all of the traffic, and now that we have a distributed model with this accelerator, we could do it. >>So what's the what's in it for the customer real quick. I think this is an interesting point. You mentioned policy. Everyone in networking those policies just a great thing. And it has. You hear it being talked about up the stack as well. When you start getting to orchestrate microservices and what not all that good stuff going on their containers and whatnot and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application. Enablement. I mean, is that what what's the customer get out of this architecture? What's the enablement? >>It comes down to taking again the capabilities that were that top of rack switch and distracting them down. So that makes simplicity smaller. Blast Radius is for failure, smaller failure domains, maintenance on the networks and the systems become easier. Your ability to integrate across workloads becomes infinitely easier. Um, and again, you know, we always want to kind of separate each one of those layers. So, just as in, say, a Vieques land network, my leaf and spine don't have to be tightly coupled together. I can now do this at a different layer and so you can run a deep You with any networking in the core there. And so you get this extreme flexibility, you can start small. You can scale large. Um, you know, to me that the possibilities are endless. 
>>It's a great security control Playing really flexibility is key, and and also being situationally aware of any kind of threats or new vectors or whatever is happening in the network. Alexandra, this is huge Upside, right? You've already identified some, uh, successes with some customers on your early field trials. What are they doing? And why are they attracted? The solution? >>Yeah, I think the response from customer has been the most encouraging and exciting for for us to, uh, to sort of continuing work and develop this product. And we have actually learned a lot in the process. Um, we talked to three or two or three cloud providers. We talked to s P um, sort of telco type of networks, uh, as well as enter large enterprise customers. Um, in one particular case, um uh, one, I think. Let me let me call out a couple of examples here just to give you a flavour. There is a service provider, a cloud provider in Asia who is actually managing a cloud where they are offering services based on multiple hypervisors their native services based on Zen. But they also, um, ramp into the cloud workloads based on SX I and N K P M. Depending on what the customer picks from the piece from the menu. And they have the problem of now orchestrating through the orchestrate or integrating with Zen Centre with this fear with open stock to coordinate this multiple environments and in the process to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost complication, and it's up into the service of you the promise that they saw in this technology they call it. Actually, game changing is actually to remove all this complexity, even a single network, and distribute the micro segmentation service directly into the fabric. And overall, they're hoping to get out of it. Tremendous OPEC's benefit and overall operational simplification for the cloud infrastructure. That's one important use case, um, another large enterprise customer, a global enterprise customer is running both Essex I and I purvey in their environment, and they don't have a solution to do micro segmentation consistently across Hypervisors. So again, micro segmentation is a huge driver. Security looks like it's a recurring theme talking to most of these customers and in the telco space. Um, uh, we're working with a few telco customers on the CFT programme, uh, where the main goal is actually to Arman Eyes Network operation. They typically handle all the V NFC with their own homegrown DPD K stock. This is overly complex. It is, frankly, also slow and inefficient. And then they have a physical network to manage the idea of having again one network to coordinate the provisioning of cloud services between the take of the NFC. Uh, the rest of the infrastructure is extremely powerful on top of the offloading capability. After by the blue fill the pews. Those are just some examples. >>There's a great use case, a lot more potential. I see that with the unified cloud networking. Great stuff shout out to you guys that NVIDIA, you've been following your success for a long time and continuing to innovate his cloud scales and pluribus here with unified networking. Kind of bringing the next level great stuff. Great to have you guys on and again, software keeps, uh, driving the innovation again. Networking is just part of it, and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in in this new architecture and solution learn more? Because this is an architectural ship. 
People are working on this problem. They're trying to think about multiple clouds are trying to think about unification around the network and giving more security more flexibility to their teams. How can people learn more? >>And so, uh, Alexandra and I have a talk at the upcoming NVIDIA GTC conference, so it's the week of March 21st through 24th. Um, you can go and register for free and video dot com slash gtc. Um, you can also watch recorded sessions if you end up watching this on YouTube a little bit after the fact, Um, and we're going to dive a little bit more into the specifics and the details and what we're providing a solution >>as Alexandra. How can people learn more? >>Yeah, so that people can go to the pluribus website www pluribus networks dot com slash e. F t and they can fill up the form and, uh, they will contact Pluribus to either no more or to know more and actually to sign up for the actual early field trial programme. Which starts at the end of it. >>Okay, well, we'll leave it there. Thank you both for joining. Appreciate it up. Next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group E s G. I'm John Ferry with the Cube. Thanks for watching. Mhm. Mhm.
SUMMARY :
And Pete Lambasts, the director of technical market and Let's get into the what and how Alexandra, we heard earlier about the pluribus and video Um, and the novelty about this operating system is that it integrates a distributed the And specifically, how is plural is integrating its software within video hardware? of the server capacity to be devoted either to additional war close to is what why we think this enables a very clean demarcation between computer and network. We heard that the GPUS enable cleaner separation of Yeah, I think it's, uh, you know, it's a pragmatic solution, in my opinion, Um, you know, So if infrastructure as code, you know, you're talking about, you know, that part of the stack But the other part is thinking about this at scale, right. You know what you know. the place where you deploy most of your services in the cloud, particularly from a security standpoint. I hear one network different operating models reminds me the old server enabled by the blue field deep you technology and, So I love to get your thoughts scale large enough, the question is, can you afford it? What's the benefit to the customers with this approach? I can now do this at a different layer and so you can run Alexandra, this is huge Upside, Let me let me call out a couple of examples here just to give you a flavour. So I got to ask both of you to wrap this bit more into the specifics and the details and what we're providing a solution How can people learn more? Yeah, so that people can go to the pluribus website www pluribus networks dot analyst perspective and review some of the research from the Enterprise Strategy Group E s G.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Alexandra | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Asia | LOCATION | 0.99+ |
Pete Lambasts | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Ferry | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
Pluribus | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Alexandra Barberry | PERSON | 0.99+ |
Pete Lumbis | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Alessandro Barbieri | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
OPEC | ORGANIZATION | 0.99+ |
second aspect | QUANTITY | 0.99+ |
Pete | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
March 21st | DATE | 0.99+ |
24th | DATE | 0.99+ |
One | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
Arman Eyes Network | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
two things | QUANTITY | 0.98+ |
Atwater | ORGANIZATION | 0.98+ |
Pluribus Networks | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
YouTube | ORGANIZATION | 0.96+ |
one thing | QUANTITY | 0.92+ |
DACA | TITLE | 0.92+ |
one network | QUANTITY | 0.92+ |
Enterprise | ORGANIZATION | 0.91+ |
single network | QUANTITY | 0.91+ |
zero quote | QUANTITY | 0.89+ |
one common set | QUANTITY | 0.88+ |
zero trust | QUANTITY | 0.88+ |
one important use case | QUANTITY | 0.87+ |
Essex I | ORGANIZATION | 0.84+ |
telco | ORGANIZATION | 0.84+ |
three cloud providers | QUANTITY | 0.82+ |
N K P | ORGANIZATION | 0.82+ |
Cuban | PERSON | 0.82+ |
K | COMMERCIAL_ITEM | 0.81+ |
X 86 | OTHER | 0.8+ |
zero | QUANTITY | 0.79+ |
Zen | ORGANIZATION | 0.79+ |
each one | QUANTITY | 0.78+ |
one particular case | QUANTITY | 0.76+ |
up to 48 servers per rack | QUANTITY | 0.74+ |
around 2025% | QUANTITY | 0.73+ |
couple | QUANTITY | 0.68+ |
Group | ORGANIZATION | 0.67+ |
Vieques | ORGANIZATION | 0.65+ |
X 86 | COMMERCIAL_ITEM | 0.64+ |
X | COMMERCIAL_ITEM | 0.61+ |
NVIDIA GTC conference | EVENT | 0.6+ |
pluribus | ORGANIZATION | 0.57+ |
NVIDIA Bluefield | ORGANIZATION | 0.54+ |
Centre | COMMERCIAL_ITEM | 0.52+ |
X 86 | TITLE | 0.51+ |
Zen | TITLE | 0.47+ |
86 | TITLE | 0.45+ |
Cube | ORGANIZATION | 0.44+ |
SX | TITLE | 0.41+ |
Webb Brown & Alex Thilen, Kubecost | AWS Startup Showcase S2 E1 | Open Cloud Innovations
>>Hi, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: Open Cloud Innovations. This is season two, episode one of the ongoing series covering the exciting startups from the AWS ecosystem. Episode one's theme is the open source community and open cloud innovations. I'm John Furrier, your host, and I've got two great guests: Webb Brown, CEO of Kubecost, and Alex Thilen, head of business development at Kubecost. Gentlemen, thanks for coming on theCUBE for the AWS Startup Showcase. >>Thanks for having us, John. Great to be back; really excited for the discussion we have here. >>You're CUBE alumni from many, many KubeCons. You guys are in a hot area right now: monitoring and reducing Kubernetes spend. Okay, so first of all, we know one thing for sure: Kubernetes is the hottest thing going on, because of all the benefits. So take us through your macro view of this market. Kubernetes is growing; what's going on with the company? What is your company's role? >>Yeah, so we've definitely seen this growth firsthand with our customers, in addition to the broader market. And we believe that's really indicative of the value that Kubernetes provides, right? A lot of that is just faster time to market, more scalability, and improved agility for developer teams, and there's even more there, but it's a really exciting time for our company and also for the broader cloud native community. So what that means for our company is that we're scaling up quickly to meet and support our users. Every metric of our company has grown about 4x over the last year, including our team. And the reason the team is the most important one is just that the more folks we have and the larger our company is, the better we can support our users and help them monitor and reduce those costs, which ultimately makes Kubernetes easier to use for customers and users out there in the market. >>Okay. So I want to get into why Kubernetes is costing so much. Obviously the growth is there, but before we get there, what is the background? What's the origination story? Where did Kubecost come from? Obviously you guys have a great name; with Kubecost, you presumably reduce costs in Kubernetes. But what's the origination story? How'd you guys get here? What itch were you scratching? What problem are you solving? >>So yeah, John, you guessed it; oftentimes the name is a dead giveaway. We build cost monitoring and cost management solutions for Kubernetes and cloud native. The backstory here is that our founding team was at Google before starting the company. We were working on infrastructure monitoring, both on internal infrastructure as well as Google Cloud. We had a handful of teammates join the Kubernetes effort in the early days, and we saw a lot of teams struggling with the problems we were solving internally at Google and are solving today. And to speak to those problems a little bit, you touched on how scale alone is bringing this to the forefront, right? There are now many billions of dollars being spent on Kubernetes, and that is making this a business-critical question that is being asked in lots of organizations.
That, combined with the dynamic nature and complexity of Kubernetes, makes costs really hard to manage when you scale across a very large organization. So teams turn to Kubecost today, thousands of them, to get monitoring in place, including alerts, recurring reports, and dynamic management insights or automation. >> Yeah, and I know we talked at KubeCon before, Webb. I want to come back to the problem statement, because when you have these emerging growth areas that are really relevant and enabling technologies, you move to the next point of failure. You're scaling these abstraction layers, services are being turned on more and more, and Kubernetes clusters are out there. So I have to ask you: what is the main cost-driver problem happening in the Kubernetes space that you guys are addressing? Is it just sheer volume? Is it different classes of services? Is it different things working together, different monitoring tools? Is it not being a platform? Take us through the problem area. How do you guys see this? >> Yeah, the number one problem area is still what the CNCF FinOps survey highlighted earlier this year, which is that approximately two thirds of companies still don't have baseline visibility into spend when they move to Kubernetes. So even if you had a really sophisticated chargeback program in place when you were building all your applications on VMs, once you move to Kubernetes, most teams again can't answer these really simple questions. We're able to give them that visibility in real time, so they can start breaking these problems down. They can start to see that, okay, it's these deployments or StatefulSets that are driving our costs, or no, it's actually these workloads talking to S3 buckets that are really driving egress costs. So it's first and foremost about getting the visibility, getting the eyes and ears. We're able to give that to teams in real time on the largest-scale Kubernetes clusters in the world. Again, most teams, when they first start working with us, don't have that visibility, and not having it can have a whole bunch of downstream impacts, including not getting costs right, performance right, et cetera. >> Well, let's get into those downstream benefits, problems, and situations. But the first question I have is to throw a naysayer comment at you: oh wait, I have all this cost monitoring stuff already. What's different about Kubernetes? Why isn't my other tool going to work for me? How do you answer that one? >> Yeah. So I think, first and foremost, containers are very dynamic. They're often complex, often transient, and consume variable cluster resources. As much as this enables teams to construct powerful solutions, the associated costs, and actually tracking those different variables, can be really difficult. That's why a solution like Kubecost, which is purpose-built for developers using Kubernetes, is really necessary, because some of those older, traditional cloud cost optimization tools are just not as fit for this space specifically.
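For readers who want to see the kind of baseline visibility Webb and Alex are describing, here is a minimal sketch of pulling a per-namespace cost breakdown from a Kubecost-style allocation API. The endpoint path, query parameters, and response fields below are assumptions modeled loosely on Kubecost's public allocation API and may differ by version; treat this as illustrative rather than the product's documented contract.

```python
# Minimal sketch: query a Kubecost-style allocation API for a namespace breakdown.
# Endpoint, params, and response shape are assumptions; adjust to your deployment.
import requests

KUBECOST = "http://localhost:9090"  # e.g. after port-forwarding the cost-analyzer service

def namespace_costs(window="7d"):
    # Ask for allocations over the window, aggregated by namespace (assumed params).
    resp = requests.get(
        f"{KUBECOST}/model/allocation",
        params={"window": window, "aggregate": "namespace"},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    # Assumed shape: "data" is a list of {name: {"totalCost": float, ...}} maps.
    totals = {}
    for step in payload.get("data", []):
        for name, alloc in step.items():
            totals[name] = totals.get(name, 0.0) + alloc.get("totalCost", 0.0)
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    for ns, cost in namespace_costs().items():
        print(f"{ns:30s} ${cost:,.2f}")
```

A breakdown like this is what lets a team see, in Webb's terms, whether it is a handful of StatefulSets or egress-heavy workloads that dominate the bill.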
Yeah, I think that's exactly right, Alex. And I would add that the way software is being architected, deployed, and managed is fundamentally changing with Kubernetes. It's deeply impacting every part of the software delivery process, and through that, decisions are getting made and engineers are ultimately being empowered to make more cost-impacting decisions. So we've seen organizations that get real-time tooling built for Kubernetes and cloud native benefit massively throughout their culture, their costs, their performance, et cetera. >> Well, can you give a quick example? Because I think that's a great point. The architectures are shifting, they're changing, there are new things coming in, so it's not like you can use an old tool and just retrofit it; sometimes that's awkward. What specific things do you see changing with Kubernetes that these environments are leveraging? >> Yeah. One would be all these Kubernetes primitives and concepts that didn't exist before. I'm not managing just a generic workload; I'm managing a StatefulSet, or three ReplicaSets. So having a language tailored toward all of these Kubernetes concepts and abstractions matters. Secondly, we're seeing this very obvious push toward microservices, where, again, you're shipping faster and teams are making more distributed or decentralized decisions, and there's no single point where you can gate-check everything. That's a great thing for innovation; we can move much faster. But for some teams, not using a tool like Kubecost means sacrificing the safety net, or guardrails, that really help manage and monitor this. And I would just say, lastly, a solution like Kubecost, because it's built for Kubernetes and sits in your infrastructure, can be deployed with a single Helm install. You don't have to share any data remotely, but because it's listening to your infrastructure, it can give you data in real time. So we're moving from a world where you wait for a bill a day, two days, or a week later, when it may already be too late to avoid the problem, to one where you can make real-time automated or manual decisions. >> Or you've got the extra costs, and nobody wants that, and you've got to fight for a refund. Somebody threw a switch or wasn't paying attention, human error or code, because a lot of automation is going on. So I can see that as a benefit. I've got to ask the question on developer uptake, because you mentioned a good point. That's another key modern dynamic: developers are in the moment, making decisions on security, on policy, on things to do in the CI/CD pipeline. So if I'm a developer, how do I engage with Kubecost? Can I just download something? Is it easy? How's the onboarding process for your customers? >> Yeah, great question. First and foremost, this gets to the roots of our company and the roots of Kubecost, which is born in open source; everything we do is built on top of open source. So the answer is that you can go out and install it in minutes,
like thousands of other teams have. The recommended, or preferred, route on our side is a Helm install. Again, you don't have to share any data remotely; you can fully lock down namespace egress, for example, on the Kubecost namespace. And in minutes you'll have this visibility and can start to see really interesting metrics that, again, most teams, when we started working with them, either didn't have in place at all or only had a rough estimate for, maybe from a Kubecost Grafana dashboard they'd installed. >> How does Kubecost provide the visibility across the environment? How do you guys actually make it work? >> Yeah, so we sit in your infrastructure. We have integrations for on-prem, like custom pricing sheets, and with cloud providers we integrate with your actual billing data. We listen for events in your infrastructure, say a new node coming up or a new pod being scheduled, take that information, and join it with your billing data, whether it's on-prem or in one of the big three cloud providers. Then, in real time, we can tell you the cost of any dimension of your infrastructure, whether it's one of the backing virtual assets you're using or one of the application dimensions like a label, annotation, namespace, pod, or container, you name it. >> Awesome. Alex, what's your take on the landscape with customers as they look at cost reductions? Everyone loves cost reductions, and I certainly love the safety net comment that Webb made, but at the end of the day, Kubernetes is not so much a cost driver; it's more of an "I want the modern apps faster." So people buying Kubernetes usually aren't price sensitive, but they also don't want to get gouged on mistakes. Where is the customer path here around Kubernetes cost management and reduction at scale? >> Yeah. So one thing we're looking forward to this coming year, just like last year, is continuing to work with the various tools that customers are already using, and meeting those customers where they are. Some examples are working with CI/CD tools: we have a great integration with Armory Spinnaker to help customers take the insights from Kubecost and deploy them in a more efficient manner. We're also working with a lot of partners, like Grafana, to help customers visualize our data, and integrating with Rancher, a management platform for Kubernetes. All of that is to make cost come more to the forefront of the conversation when folks are using Kubernetes, and to provide that data to customers in all the various tools they're using across the ecosystem. So we really want to surface this and make cost more of a first-class citizen across the ecosystem and the community partners.
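To make the mechanism Webb sketches above a little more concrete, here is a toy illustration of joining what the cluster is running with billing data to price an individual workload. The node price, the pod requests, and the 50/50 CPU/memory weighting are invented for the example; this is not Kubecost's actual allocation algorithm, just the general idea of splitting a node's cost across the pods scheduled on it.

```python
# Toy cost allocation: split a node's hourly price across its pods in
# proportion to their CPU and memory requests. All figures are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    hourly_price: float   # from the cloud bill or an on-prem pricing sheet
    cpu_cores: float
    mem_gib: float

@dataclass
class Pod:
    name: str
    node: str
    cpu_request: float    # cores
    mem_request: float    # GiB

def pod_hourly_cost(pod: Pod, node: Node, cpu_weight: float = 0.5) -> float:
    """Charge the pod its requested share of the node, weighting CPU and memory 50/50."""
    cpu_share = pod.cpu_request / node.cpu_cores
    mem_share = pod.mem_request / node.mem_gib
    return node.hourly_price * (cpu_weight * cpu_share + (1 - cpu_weight) * mem_share)

node = Node("ip-10-0-1-23", hourly_price=0.40, cpu_cores=8, mem_gib=32)
pods = [Pod("checkout-7d9f", node.name, 2.0, 4.0), Pod("search-5c1b", node.name, 1.0, 8.0)]
for p in pods:
    print(f"{p.name}: ~${pod_hourly_cost(p, node) * 24 * 30:,.2f}/month")
```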
>> What's your strategy on the business development side? As you look at a growing ecosystem with KubeCon and the CNCF, which you mentioned earlier, the community keeps growing fast; the number of people entering is amazing, and now the S-curve is kicking in. Integration, interoperability, and openness are always a key part of company success. What's Kubecost's vision on how you're going to do business development going forward? >> Absolutely. So our product is open source, and that is deeply important to our company; we're always going to continue to drive innovation in our open source product. As Webb mentioned, we have thousands of teams using our product, and most of that is on the free tier, but it's something we want to keep available for the community, and we'll continue to bring that development to the community. Part of that is making sure we're working with folks not just on the commercial side but also on the open source side: Grafana is open source, Spinnaker is open source. So a lot of the business development strategy is sticking to our roots and making sure we continue to drive a strong open source presence and product for our community of users. >> And keep the open source and commercial sides stable. Well, I've got to ask you, because obviously the wave is here. I always joke, going back, I remember when the word Kubernetes was just being kicked around in the pre-OpenStack days, many years ago; it's the luxury of being an old CUBE guy, eleven years doing theCUBE, all fun. If we remember talking in the early days, the question with Kubernetes was whether it would work, and the phrase was "a rising tide floats all boats." I'd say right now the tide is rising pretty well, and you guys are in a good spot with Kubecost. Are there areas you see coming where cost monitoring is going to expand more? What's the aperture of the cost monitoring space, from your end, that you think you can address? >> Yeah, John, I think you're exactly right. This tide has risen and it just keeps rising. The sheer number of organizations using Kubernetes at massive scale is mind-blowing at this point. What we see is a really natural pattern for teams to start using a solution like Kubecost: start with limited or no visibility, get that visibility in place, and then really develop an action plan from there. That can be different governance solutions like alerts, management reports, or engineering team reports. But it's really about phase two: taking that information and starting to do something with it. We are seeing, and expect to see, more teams turn to an increasing amount of automation to do that, but ultimately that comes after you have baseline, highly accurate visibility that you feel very comfortable using to make potentially very critical reliability and performance decisions in your infrastructure.
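As a small illustration of the "phase two" governance Webb describes, alerts being the simplest case, here is a sketch of checking team spend against monthly budgets and flagging overages. The budget figures are made up, and a real deployment would lean on Kubecost's built-in alerting and recurring reports rather than a hand-rolled check like this.

```python
# Toy governance check: flag namespaces that have crossed a budget threshold.
# Budgets and spend figures are illustrative placeholders.
budgets = {"team-checkout": 4000.0, "team-search": 2500.0, "team-ml": 8000.0}  # monthly, USD
spend_to_date = {"team-checkout": 3100.0, "team-search": 2650.0, "team-ml": 5200.0}

def over_budget(budgets, spend, threshold=0.9):
    """Return (namespace, spend, budget) tuples that have crossed `threshold` of budget."""
    alerts = []
    for ns, budget in budgets.items():
        used = spend.get(ns, 0.0)
        if used >= threshold * budget:
            alerts.append((ns, used, budget))
    return alerts

for ns, used, budget in over_budget(budgets, spend_to_date):
    print(f"ALERT: {ns} at ${used:,.0f} of ${budget:,.0f} monthly budget")
```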
>> I think getting it right is key. You mentioned baseline; let me ask a quick follow-up on that. How fast can companies get there? When you say baseline, there are probably levels of baseline, and obviously all environments are different. But anecdotally, what do you see as that baseline, and how fast do people get there? Is there a certain minimum viable configuration or architecture? Take us through your thoughts. >> Yeah, great question. It definitely depends on organizational complexity, and it can depend on application complexity as well, but most important is the array of cost centers, departments, and complexity across the org, as opposed to the technology. For less complex organizations we've seen it happen in hours, or a day or less, because with one or two smaller engineering teams they can share that visibility really quickly, and they may be familiar with Kubernetes and just get it right away. For larger organizations we've seen it take up to 90 days, where it's really about infusing this into their DNA, when there may not have been visibility or transparency before. The bulk of the time there is really about the cultural element, awareness building, and buy-in throughout the organization. >> Awesome. Well, guys, you've got a great product; congratulations. Final question for both of you: it's early days in Kubernetes, even though the tide keeps rising, more boats are coming in, the harbor is getting bigger, whatever metaphor you want to use. It's really going great. You're seeing customer adoption, we're seeing cloud native, and I'm told by my friends at Docker that the container side is going crazy as well. Everything's going great in cloud native. What's the vision on the innovation? How do you continue to push the envelope on value, in open source and in the commercial area? >> Yeah, I think there are many areas here, and I know Alex will have more to add. One area that I know is relevant to his world is just more really interesting integrations. He mentioned Kubecost insights powering decisions in, say, Spinnaker; I think we'll see more and more of this toolchain coming together and more of the benefits of all this interoperability. That, combined with more intelligence and automation being deployed, again only after teams are really comfortable with the information and the decisions being made. Increasingly we see the community being ready to leverage this information in really powerful ways, because as teams scale there's just a lot to manage, and a team leveraging automation can supercharge itself in really impactful ways. >> Awesome, great integrations. Alex, expand on that: a whole different kind of business development integration, when you have lots of toolchains, lots of platforms and tools coming together, sharing data, working together, automating together. >> Yeah. So I think it's going to be super important to keep a pulse on the new tools, make sure we're on the forefront of what customers are using, and continue to meet them where they are. A lot of that, honestly, is working with AWS too. They have great services in EKS and managed Prometheus, so we want to make sure we continue to work with that team and support their services as they launch as well. >> Great stuff. I've got a couple of minutes left, so I'll throw one more question in there since I've got two great experts here. Just a little change of pace, more of an industry question.
There's really no wrong answer, but I'd love to get your reaction to the SaaS conversation. Cloud has changed what used to be SaaS. SaaS was software as a service; now that you have all these new kinds of things, automation, horizontally scalable cloud and edge, vertical machine learning, data-driven insights, a lot of things in the stack are changing. So the question is: what does the new SaaS look like? Is it the same as the old SaaS, or is it a new kind of refactoring of what SaaS is? What's your take on this? >> Yeah. Webb, please jump in wherever. In my view, it's a spectrum, and there are customers on both ends of it. Some customers just want a fully hosted, fully managed product; they benefit from the luxury of not having to do any infrastructure management or patching, and they just want to consume a great product. On the other hand, there are customers in more highly regulated industries, or with security requirements, who are going to need things deployed in their own environment. Right now Kubecost is self-hosted, but in the future we want to make sure we have versions of our product available for customers across that entire spectrum. So if somebody wants the benefit of not having to manage anything, they can use a fully managed, multi-tenant SaaS; other customers can use a self-hosted product. And there are going to be customers in the middle, where certain components are fine to be hosted elsewhere but other components are really important to keep in their own environment. So it's really across the board, it will vary customer by customer, and it's important to have options for all of them. >> Great. Guys, is the new SaaS the same as the old SaaS? What's the SaaS playbook now? >> I think it is such a deep and interesting question, and one that's going to touch so many aspects of software and of our lives. I predict we'll continue to see this tension, or real trade-off, between convenience on the one hand and security, privacy, and control on the other. And as Alex mentioned, different organizations are going to make different decisions based on their relative trade-offs. I think it's going to be of epic proportions; we'll look back on this period and say this was one of the foundational questions to get right. We ultimately view it as: we want to offer choice, and make every choice great, but let our users pick the right one given their profile on those trade-offs. >> I think that's a great comment: choice. And you now have dimensions of implementation, right? Multi-tenant, custom, regulated, secure, "I want all these controls." It's great; no one SaaS rules the world, so to speak. So again, it's a great dynamic. But ultimately, if you want to leverage the data, is it horizontally addressable? Multi-tenant? This is a whole other ball game we're watching closely, and you guys are in the middle of it with Kubecost, as you're creating that baseline for customers. Congratulations, and great to see you. Thanks for coming on.
>> Appreciate it. Thank you so much for having us again. >> Okay, great conversation here at the AWS Startup Showcase: Open Cloud Innovations. Open source is driving a lot of value as it goes commercial and into the next generation. This has been season two, episode one of the AWS Startup Showcase series with theCUBE. Thanks for watching.
SUMMARY :
Webb Brown (CEO) and Alex Thilen (head of business development) of Kubecost discuss cost monitoring and management for Kubernetes. The company has grown roughly 4x over the last year. Per the CNCF FinOps survey, approximately two thirds of companies lack baseline visibility into Kubernetes spend, and older cloud cost tools don't fit Kubernetes' dynamic, container-based primitives. Kubecost installs in minutes via Helm, runs inside the customer's infrastructure without sending data out, joins cluster events with billing data, and breaks costs down by namespace, label, pod, and container. The roadmap emphasizes open source roots and integrations with Armory Spinnaker, Grafana, Rancher, and AWS services such as EKS and managed Prometheus. Reaching a cost baseline can take hours for small teams and up to 90 days for large organizations, mostly for cultural reasons. On SaaS, the team plans to offer both hosted and self-hosted options, since customers trade convenience against security, privacy, and control.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Alex Thilen | PERSON | 0.99+ |
Webb Brown | PERSON | 0.99+ |
11 years | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Sean | PERSON | 0.99+ |
thousands | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Thielen | PERSON | 0.99+ |
Alex | PERSON | 0.99+ |
last year | DATE | 0.99+ |
eight | QUANTITY | 0.99+ |
Kubecost | PERSON | 0.99+ |
Webb | PERSON | 0.99+ |
90 days | QUANTITY | 0.99+ |
Webb brown | PERSON | 0.99+ |
ABC | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.98+ |
Kubernetes | ORGANIZATION | 0.98+ |
CubeCon | ORGANIZATION | 0.98+ |
two great guests | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
both ends | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
two great experts | QUANTITY | 0.96+ |
one more question | QUANTITY | 0.96+ |
a day | QUANTITY | 0.96+ |
single helmet | QUANTITY | 0.94+ |
earlier this year | DATE | 0.94+ |
today | DATE | 0.94+ |
secondly | QUANTITY | 0.94+ |
one thing | QUANTITY | 0.93+ |
S3 | COMMERCIAL_ITEM | 0.92+ |
Fanta | ORGANIZATION | 0.92+ |
Qube | ORGANIZATION | 0.91+ |
a week later | DATE | 0.91+ |
Kubernetes | PERSON | 0.91+ |
SAS | ORGANIZATION | 0.9+ |
season two episode | QUANTITY | 0.88+ |
approximately two thirds | QUANTITY | 0.87+ |
about four X | QUANTITY | 0.87+ |
coop | ORGANIZATION | 0.85+ |
three replica sets | QUANTITY | 0.85+ |
EKS | ORGANIZATION | 0.85+ |
billions of dollars | QUANTITY | 0.84+ |
80 | QUANTITY | 0.81+ |
two days | QUANTITY | 0.8+ |
single point | QUANTITY | 0.8+ |
one area | QUANTITY | 0.77+ |
season two | QUANTITY | 0.76+ |
BMS | TITLE | 0.76+ |
OpenStack | TITLE | 0.75+ |
Ian Buck, NVIDIA | AWS re:Invent 2021
>> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrier, host of theCUBE. Ian, thanks for coming on. NVIDIA, obviously, is a great brand; congratulations on all your continued success. Everyone who does anything in graphics knows the GPUs are hot, and you guys have a great brand and great success in the company. But AI and machine learning, we're seeing that trend significantly being powered by the GPUs and other systems, so it's a key part of everything. What are the trends you're seeing in ML and AI that are accelerating computing to the cloud? >> Yeah. I mean, AI is driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up in things like credit card fraud prevention and product and content recommendations. Really, it's the new engine behind search engines. People are applying AI to things like meeting transcriptions and virtual calls like this, using AI to actually capture what was said, and that gets applied to person-to-person interactions. We also see it in intelligent assistants for contact center automation, chatbots, medical imaging, intelligent stores and warehouses, and everywhere. It's really amazing what AI has demonstrated it can do, and new use cases are showing up all the time. >> I'd love to get your thoughts on how the world has evolved, just in the past few years, along with cloud, and certainly the pandemic has proven it. You had this whole full-stack mindset initially, and now you're seeing more of a horizontal scale, yet enabling vertical specialization in applications. You mentioned some of those apps. The new enablers, this horizontal play with enablement for specialization with data, is a huge shift that's been happening. What's your reaction to that? >> Yeah, the innovation is on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are being invented by researchers and the community at large, including Amazon. It started with convolutional neural networks, which are great for image processing, but it has expanded more recently into recurrent neural networks, transformer models, which are great for language and understanding, and then the new hot topic, graph neural networks, where the actual graph is trained as a neural network. You have this underpinning of great AI technologies being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation, and then take the next step and innovate vertically: take it and apply it to a particular field, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them that highlights the parts of a scan that may be troublesome or worrying, or require more investigation; or using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI constantly trained and reinforced, learning how to do certain activities and techniques.
So that the first time it's ever downloaded into a real robot, it works right out of the box. To activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses and those users. In medical imaging, it's processing medical data, which is obviously very complicated, large-format data, often three-dimensional voxels. In robotics, it's combining both our graphics and simulation technologies with the AI training and inference capabilities in order to run in real time. Those are just two examples. >> Yeah, it's just so cutting-edge, so relevant. I think one of the things you mentioned about the neural networks, specifically the graph neural networks: just go back to the late 2000s, and how unstructured data and object storage created a lot of value once people realized it. Now you've got graph value, you've got the network effect, you've got all kinds of new patterns, and you guys have this notion of graph neural networks out there. What is a graph neural network, and what does it actually mean from a deep learning and AI perspective? >> Yeah. A graph is exactly what it sounds like: you have points that are connected to each other, and the connections establish relationships. In the example of Amazon.com, you might have buyers, distributors, and sellers, and all of them are buying, recommending, or selling different products, and they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise more deeply across a supply chain, or a warehouse, or other buyers and sellers across the network. What's new right now is that those connections can be treated and trained like a neural network: understanding the relationship, how strong the connection is between that buyer and seller, or that distributor and supplier, and then building up a network to figure out and understand patterns across them. For example, what products I may like, because I have this connection in my graph, and what other products may meet those requirements; or identifying things like fraud, when buying patterns don't match what a graph neural network says would be the typical graph connectivity, the different weights and connections between the two, captured by how often I buy things or how I rate them or give them stars. This application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application of AI: optimizing business, reducing fraud, and letting us get access to the products we want, with recommendations that excite us and make us want to buy.
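For readers who want a feel for what "training the graph as a neural network" means, here is a toy, framework-free sketch of the core idea, one round of neighbor aggregation ("message passing") over a small graph. The graph, features, and weights are random placeholders; this is a conceptual illustration, not NVIDIA's or Amazon's implementation.

```python
# Toy graph neural network layer: each node averages its neighbors' features,
# mixes them through a weight matrix, and applies a nonlinearity.
import numpy as np

# 4 nodes (say: two buyers, a seller, a product), edges as an adjacency matrix.
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)          # an 8-dimensional feature vector per node
W = np.random.randn(8, 8) * 0.1    # weights that would be learned in training

deg = A.sum(axis=1, keepdims=True)       # each node's number of neighbors
H = np.maximum((A / deg) @ X @ W, 0.0)   # one message-passing layer with ReLU

print("updated node embeddings:", H.shape)
```

Stacking a few layers like this, and training W against a task such as "will this buyer purchase this product" or "is this transaction fraudulent," is the basic shape of the technique Ian describes.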
>> That's a great setup for the real conversation going on here at re:Invent, which is that new kinds of workloads are changing the game. People are refactoring their business, not just re-platforming, but actually using this to identify value, and cloud scale gives you the compute power to look at a node on an arc and actually code that. It's all computer science, at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS? >> Yeah, AWS has been a great partner, and one of the first cloud providers to ever provide GPUs in the cloud. More recently, we've announced two new instances. The G5 instance is based on our A10G GPU, which supports the NVIDIA RTX technology, our rendering technology, for real-time ray tracing in graphics and game streaming. It's our highest-performance graphics-enhanced instance and allows those high-performance graphics applications to be hosted directly in the cloud. And, of course, it runs everything else as well, including our AI stacks. We also announced, with AWS, the G5g instance. This is exciting because it's the first Graviton, or Arm-based, processor connected to a GPU in the cloud. The focus here is Android gaming and machine learning inference, and we're excited to see the advancements that Amazon and AWS are making with Arm in the cloud. We're glad to be part of that journey. >> Well, congratulations. I remember watching my interviews with James Hamilton from AWS in 2013 and 2014; he was teasing this out, that they were going to get in there, build their own connections, take that latency down, and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces and the new servers, the new technology you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, but also now enabling more capabilities so that new workloads can be realized, what would you say to folks who want to ask that question? >> Well, first off, I think Arm is here to stay, and you can see the growth and explosion of Arm, led of course by Graviton and AWS, but by many others as well. By bringing all of NVIDIA's rendering, graphics, machine learning, and AI technologies to Arm, we can help bring forward the open innovation that Arm allows, because it's an open architecture, to the entire ecosystem, and bring it to the state of the art in AI, machine learning, and graphics. All of the software we release is supported both on x86 and on Arm equally, including all of our AI stacks. Most notably, for inference, the deployment of AI models, we have the NVIDIA Triton inference server. This is our inference-serving software: after you've trained a model, you want to deploy it at scale on any CPU or GPU instance, for that matter, so we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations, features like dynamic batching. It supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. We're helping activate the Arm ecosystem, as well as bringing all those new AI use cases and all those different performance levels, with our partnership with AWS across the different cloud instances.
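As a rough illustration of what deploying a trained model behind Triton looks like from the client side, here is a minimal sketch of an HTTP inference call using the tritonclient Python package, assuming a Triton server is already running on its default port 8000. The model name and tensor names ("my_model", "INPUT__0", "OUTPUT__0") and the input shape are placeholders that depend entirely on the model you have deployed; batching and latency behavior are governed by that model's Triton configuration, not by this client code.

```python
# Minimal sketch of a Triton HTTP inference call. Model and tensor names are
# placeholders; substitute the ones from your deployed model's configuration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

image = np.random.rand(1, 3, 224, 224).astype(np.float32)   # stand-in input tensor
inp = httpclient.InferInput("INPUT__0", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("OUTPUT__0")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT__0").shape)
```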
>> And you've got to make it really easy for people to use the technology, which brings up the next question I wanted to ask you. A lot of people are really jumping in big-time: they're adopting AI, or they're moving from prototype to production. There are always some gaps, whether it's knowledge, skills gaps, or whatever, but people are accelerating into AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible, so people can move faster through the process? >> Yeah, it's one of the biggest challenges. The promise of AI, all the publications coming out, all the great research: how can you make it more accessible or easier to use by more people, rather than just by an AI researcher, which is obviously a very challenging and interesting field, but not one that's directly connected to the business? NVIDIA is trying to provide a full-stack approach to AI. As we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming, to design, to life sciences, to earth sciences; we even have software to help simulate quantum computing, and of course all the work we're doing with AI, 5G, and robotics. We actually just introduced about 65 new updates, just this past month, across all those SDKs. Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of human understanding: language models trained on literally the content of the internet to provide general-purpose or open-domain chatbots, so the customer is going to have a new kind of experience with a computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, every time I do an interview with NVIDIA or talk about NVIDIA, the first thing my kids and their friends say is, "Can you get me a good graphics card?" They all want the best thing in their rig. Obviously the gaming market's hot and known for that, but there's a huge software team behind NVIDIA. This is well known; your CEO is always talking about it in his keynotes. You're in the software business, and you do have hardware, you are integrating with Graviton and other things, but it's a software practice. This is all about software. Could you share more about NVIDIA's culture, and its cloud culture, specifically around the scale? I mean, you hit every use case. What's the software culture there at NVIDIA? >> Yeah, NVIDIA actually has more software people than hardware people; people don't often realize this. It just starts with the chip, and obviously building great silicon is necessary to provide that level of innovation, but it has expanded dramatically from there: not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then all that software that we build on top, we make it available.
As I mentioned before, we make it available as containers on our NGC container registry, which is accessible from AWS, to connect to those vertical markets. Instead of just opening up the hardware and letting the ecosystem develop on it, which they can, with the low-level and programmatic stacks we provide with CUDA, we believe those vertical stacks are the way we can help accelerate and advance AI, and that's why we make them so available. >> And programmable software is so much easier. I want to get that plug in; I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. Getting back to the customers who are bridging that gap and getting out there: what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and how to look at how they're doing? >> Yeah. For training, it's all about time-to-solution. It's not the hardware that's the cost; it's the opportunity that AI can provide your business, and the productivity of those data scientists who are developing the models, who are not easy to come by. What we hear from customers is that they need a fast time-to-solution, to allow people to prototype very quickly, train a model to convergence, get into production quickly, and of course move on to the next model or continue to refine it. For inference, it's about your ability to deploy at scale. Often people have real-time requirements: they want to run within a certain amount of latency, a certain amount of time. And typically, most companies don't have a single AI model; they have a collection of them they want to run for a single service, or across multiple services. That's where you can aggregate some of your infrastructure: leveraging the Triton inference server I mentioned before, you can actually run multiple models on a single GPU, saving costs and optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that your customers have a good interaction with the AI.
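Ian frames inference success as meeting a latency budget at scale. A tiny sketch of how a team might sanity-check that is below: time repeated calls to an inference endpoint and compare p50/p99 latency against a target. The `run_inference` function here is only a stand-in for whatever client call you actually use (for example, the Triton client sketch shown earlier), and the 15 ms budget is an arbitrary example.

```python
# Sanity-check an inference endpoint against a latency budget by sampling calls.
import time, statistics, random

def run_inference():
    time.sleep(random.uniform(0.004, 0.012))   # placeholder for a real model call

LATENCY_BUDGET_MS = 15.0
samples = []
for _ in range(200):
    t0 = time.perf_counter()
    run_inference()
    samples.append((time.perf_counter() - t0) * 1000)

p50 = statistics.median(samples)
p99 = statistics.quantiles(samples, n=100)[98]   # 99th percentile cut point
print(f"p50={p50:.1f} ms  p99={p99:.1f} ms  budget={LATENCY_BUDGET_MS} ms  ok={p99 <= LATENCY_BUDGET_MS}")
```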
>> Awesome. Great. Let's get into the customer examples. You guys obviously have great customers. Can you share some of the use cases, examples with notable customers? >> Yeah. One great part about working at NVIDIA is that, as a technology company, you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now: Netflix is using the G4 instances to do video effects and animation content from anywhere in the world, in the cloud, as a content creation platform. We work in the energy field: Siemens Energy is using AI combined with simulation to do predictive maintenance on their energy plants, optimizing onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry. We have worked with Oxford University, which has over 20 million artifacts, specimens, and collections across its gardens, museums, and libraries. They're actually using NVIDIA GPUs on Amazon to do enhanced image recognition to classify all these things, which would take literally years to go through manually. Using AI, we can quickly catalog all of them and connect them with their users. There are great stories across graphics, across industries, across research; it's just so exciting to see what people are doing with our technology, together with Amazon. >> Ian, thank you so much for coming on theCUBE. I really appreciate it; a lot of great content there. We could probably go another hour with all the great stuff going on at NVIDIA. Any closing remarks you want to share as we wrap this last minute up? >> Really, what NVIDIA is about is accelerating cloud computing, whether it be AI, machine learning, graphics, or high-performance computing and simulation. AWS was one of the first with this in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities: integrations with SageMaker, EKS, and ECS, and the new instances with G5 and G5g. We're very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. How can you not love that title? We want more power, more, faster, come on, more computing; no one's going to complain about more computing. Ian, thanks for coming on. >> Thank you. Appreciate it. >> I'm John Furrier, host of theCUBE. You're watching theCUBE's coverage of AWS re:Invent 2021. Thanks for watching.
SUMMARY :
Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA, discusses how AI adoption is driving accelerated computing in the cloud. NVIDIA productizes research advances (convolutional, recurrent, transformer, and graph neural networks) as a horizontal platform plus vertical stacks for fields such as medical imaging and robotics. With AWS, NVIDIA announced the G5 instance (A10G GPU with RTX for real-time ray tracing and game streaming) and the G5g instance (Graviton, Arm-based, aimed at Android gaming and ML inference). NVIDIA's software, including the Triton inference server with SageMaker integration and dynamic batching, supports x86 and Arm, CPUs and GPUs, and frameworks from PyTorch to TensorFlow. The company ships over 150 SDKs and recently released about 65 updates, including large language model offerings. Success metrics are time-to-solution for training and latency plus scale for inference. Customer examples include Netflix (G4 instances for video effects), Siemens Energy (predictive maintenance), and Oxford University (cataloging over 20 million artifacts with GPU-accelerated image recognition).
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ian buck | PERSON | 0.99+ |
John Farrell | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Ian Buck | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ian buck | PERSON | 0.99+ |
Greg | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Ford | PERSON | 0.99+ |
James Hamilton | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
G five | COMMERCIAL_ITEM | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
G 5g | COMMERCIAL_ITEM | 0.99+ |
first | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Android | TITLE | 0.99+ |
Oxford university | ORGANIZATION | 0.99+ |
2013 | DATE | 0.98+ |
amazon.com | ORGANIZATION | 0.98+ |
over two | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
first time | QUANTITY | 0.97+ |
single service | QUANTITY | 0.97+ |
2021 | DATE | 0.97+ |
two fronts | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
over 20 million artifacts | QUANTITY | 0.96+ |
each | QUANTITY | 0.95+ |
about 65 new updates | QUANTITY | 0.93+ |
Siemens energy | ORGANIZATION | 0.92+ |
over 150 different STKs | QUANTITY | 0.92+ |
single GPU | QUANTITY | 0.91+ |
two new instances | QUANTITY | 0.91+ |
first thing | QUANTITY | 0.9+ |
France | LOCATION | 0.87+ |
two particular field | QUANTITY | 0.85+ |
SageMaker | TITLE | 0.85+ |
Triton | TITLE | 0.82+ |
first cloud providers | QUANTITY | 0.81+ |
NGC | ORGANIZATION | 0.77+ |
80 of | QUANTITY | 0.74+ |
past month | DATE | 0.68+ |
x86 | COMMERCIAL_ITEM | 0.67+ |
late | DATE | 0.67+ |
two thousands | QUANTITY | 0.64+ |
pandemics | EVENT | 0.64+ |
past few years | DATE | 0.61+ |
G4 | ORGANIZATION | 0.6+ |
RA | COMMERCIAL_ITEM | 0.6+ |
Kuda | ORGANIZATION | 0.59+ |
ECS | ORGANIZATION | 0.55+ |
10 G | OTHER | 0.54+ |
SageMaker | ORGANIZATION | 0.49+ |
TensorFlow | OTHER | 0.48+ |
Ks | ORGANIZATION | 0.36+ |
PA3 Ian Buck
(bright music) >> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrrier, host of theCUBE. Ian, thanks for coming on. >> Oh, thanks for having me. >> So NVIDIA, obviously, great brand. Congratulations on all your continued success. Everyone who does anything in graphics knows that GPU's are hot, and you guys have a great brand, great success in the company. But AI and machine learning, we're seeing the trend significantly being powered by the GPU's and other systems. So it's a key part of everything. So what's the trends that you're seeing in ML and AI that's accelerating computing to the cloud? >> Yeah. I mean, AI is kind of driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up with things like credit card fraud prevention, and product and content recommendations. Really, it's the new engine behind search engines, is AI. People are applying AI to things like meeting transcriptions, virtual calls like this, using AI to actually capture what was said. And that gets applied in person-to-person interactions. We also see it in intelligence assistance for contact center automation, or chat bots, medical imaging, and intelligence stores, and warehouses, and everywhere. It's really amazing what AI has been demonstrating, what it can do, and its new use cases are showing up all the time. >> You know, Ian, I'd love to get your thoughts on how the world's evolved, just in the past few years alone, with cloud. And certainly, the pandemic's proven it. You had this whole kind of fullstack mindset, initially, and now you're seeing more of a horizontal scale, but yet, enabling this vertical specialization in applications. I mean, you mentioned some of those apps. The new enablers, this kind of, the horizontal play with enablement for, you know, specialization with data, this is a huge shift that's going on. It's been happening. What's your reaction to that? >> Yeah. The innovation's on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are just being invented by researchers and the community at large, including Amazon. You know, it started with these convolutional neural networks, which are great for image processing, but has expanded more recently into recurrent neural networks, transformer models, which are great for language and language and understanding, and then the new hot topic, graph neural networks, where the actual graph now is trained as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation. And then, take the next step and innovate vertically. Take it and apply it to a particular field, like medical, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them and highlight different parts of the scan that may be troublesome or worrying, or require some more investigation. Using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI being constantly trained and reinforced, and learn how to do certain activities and techniques. So that the first time it's ever downloaded into a real robot, it works right out of the box. 
To activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses, of those users. In medical imaging, it's processing medical data, which is obviously a very complicated, large format data, often three-dimensional voxels. In robotics, it's building, combining both our graphics and simulation technologies, along with the AI training capabilities and difference capabilities, in order to run in real time. Those are just two simple- >> Yeah, no. I mean, it's just so cutting-edge, it's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically, the graph neural networks, I mean, we saw, I mean, just go back to the late 2000s, how unstructured data, or object storage created, a lot of people realized a lot of value out of that. Now you got graph value, you got network effect, you got all kinds of new patterns. You guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and an AI perspective? >> Yeah. I mean, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of Amazon.com, you might have buyers, distributors, sellers, and all of them are buying, or recommending, or selling different products. And they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise, more deeply across a supply chain, or warehouse, or other buyers and sellers across the network. What's new right now is, that those connections now can be treated and trained like a neural network, understanding the relationship, how strong is that connection between that buyer and seller, or the distributor and supplier, and then build up a network to figure out and understand patterns across them. For example, what products I may like, 'cause I have this connection in my graph, what other products may meet those requirements? Or, also, identifying things like fraud, When patterns and buying patterns don't match what a graph neural networks should say would be the typical kind of graph connectivity, the different kind of weights and connections between the two, captured by the frequency of how often I buy things, or how I rate them or give them stars, or other such use cases. This application, graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is very exciting to a new application of applying AI to optimizing business, to reducing fraud, and letting us, you know, get access to the products that we want. They have our recommendations be things that excite us and want us to buy things, and buy more. >> That's a great setup for the real conversation that's going on here at re:Invent, which is new kinds of workloads are changing the game, people are refactoring their business with, not just re-platforming, but actually using this to identify value. And also, your cloud scale allows you to have the compute power to, you know, look at a note in an arc and actually code that. It's all science, it's all computer science, all at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS, specifically? >> Yeah, AWS have been a great partner, and one of the first cloud providers to ever provide GPUs to the cloud. 
More recently, we've announced two new instances, the G5 instance, which is based on our A10G GPU, which supports the NVIDIA RTX technology, our rendering technology, for real-time ray tracing in graphics and game streaming. This is our highest performance graphics enhanced application, allows for those high-performance graphics applications to be directly hosted in the cloud. And, of course, runs everything else as well. It has access to our AI technology and runs all of our AI stacks. We also announced, with AWS, the G5 G instance. This is exciting because it's the first Graviton or Arm-based processor connected to a GPU and successful in the cloud. The focus here is Android gaming and machine learning inference. And we're excited to see the advancements that Amazon is making and AWS is making, with Arm in the cloud. And we're glad to be part of that journey. >> Well, congratulations. I remember, I was just watching my interview with James Hamilton from AWS 2013 and 2014. He was teasing this out, that they're going to build their own, get in there, and build their own connections to take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces, and the new servers, new technology that you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, but also, now it's enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question? >> Well, so first off, I think Arm is here to stay. We can see the growth and explosion of Arm, led of course, by Graviton and AWS, but many others. And by bringing all of NVIDIA's rendering graphics, machine learning and AI technologies to Arm, we can help bring that innovation that Arm allows, that open innovation, because there's an open architecture, to the entire ecosystem. We can help bring it forward to the state of the art in AI machine learning and graphics. All of our software that we release is both supportive, both on x86 and on Arm equally, and including all of our AI stacks. So most notably, for inference, the deployment of AI models, we have the NVIDIA Triton inference server. This is our inference serving software, where after you've trained a model, you want to deploy it at scale on any CPU, or GPU instance, for that matter. So we support both CPUs and GPUs with Triton. It's natively integrated with SageMaker and provides the benefit of all those performance optimizations. Features like dynamic batching, it supports all the different AI frameworks, from PyTorch to TensorFlow, even a generalized Python code. We're activating, and help activating, the Arm ecosystem, as well as bringing all those new AI use cases, and all those different performance levels with our partnership with AWS and all the different cloud instances. >> And you guys are making it really easy for people to use use the technology. That brings up the next, kind of, question I wanted to ask you. I mean, a lot of people are really going in, jumping in big-time into this. They're adopting AI, either they're moving it from prototype to production. There's always some gaps, whether it's, you know, knowledge, skills gaps, or whatever. But people are accelerating into the AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible for people to move faster through the system, through the process? >> Yeah. It's one of the biggest challenges. 
You know, the promise of AI, all the publications that are coming out, all the great research, you know, how can you make it more accessible or easier to use by more people? Rather than just being an AI researcher, which is obviously a very challenging and interesting field, but not one that's directly connected to the business. NVIDIA is trying to provide a fullstack approach to AI. So as we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming, to design, to life sciences, to earth sciences. We even have stuff to help simulate quantum computing. And of course, all the work we're doing with AI, 5G, and robotics. So we actually just introduced about 65 new updates, just this past month, on all those SDKs. Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of, like, human understanding. These language models that are trained on literally the content of the internet to provide general purpose or open-domain chatbots, so the customer is going to have a new kind of experience with the computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, Ian, every time I do an interview with NVIDIA or talk about NVIDIA, my kids and friends, first thing they say is, "Can you get me a good graphics card?" They all want the best thing in their rig. Obviously the gaming market's hot and known for that. But there's a huge software team behind NVIDIA. This is well-known. Your CEO is always talking about it on his keynotes. You're in the software business. And you do have hardware, you are integrating with Graviton and other things. But it's a software practice. This is software. This is all about software. >> Right. >> Can you share, kind of, more about how NVIDIA culture and their cloud culture, and specifically around the scale, I mean, you hit every use case. So what's the software culture there at NVIDIA? >> Yeah, NVIDIA's actually a bigger, we have more software people than hardware people. But people don't often realize this. And in fact, that it's because of, it just starts with the chip, and obviously, building great silicon is necessary to provide that level of innovation. But it's expanded dramatically from there. Not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves, to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then, all that software that we build on top, we make it available, as I mentioned before, as containers on our NGC container store, container registry, which is accessible from AWS, to connect to those vertical markets. Instead of just opening up the hardware and letting the ecosystem develop on it, they can, with the low-level and programmatic stacks that we provide with CUDA. We believe that those vertical stacks are the ways we can help accelerate and advance AI. And that's why we make them so available. >> And programmable software is so much easier. I want to get that plug in for, I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. 
Getting back to the customers who are bridging that gap and getting out there, what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing? >> Yeah. For training, it's all about time-to-solution. It's not the hardware that's the cost, it's the opportunity that AI can provide to your business, and the productivity of those data scientists which are developing them, which are not easy to come by. So what we hear from customers is they need a fast time-to-solution to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course, move on to the next or continue to refine it. >> John Furrier: Often. >> So in training, it's time-to-solution. For inference, it's about your ability to deploy at scale. Often people need to have real-time requirements. They want to run in a certain amount of latency, in a certain amount of time. And typically, most companies don't have a single AI model. They have a collection of them they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure. Leveraging the Triton inference server, I mentioned before, can actually run multiple models on a single GPU saving costs, optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that our customers have a good interaction with the AI. >> Awesome. Great. Let's get into the customer examples. You guys have, obviously, great customers. Can you share some of the use cases examples with customers, notable customers? >> Yeah. One great part about working at NVIDIA is, as technology company, you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now, Netflix is using the G4 instances to do a video effects and animation content from anywhere in the world, in the cloud, as a cloud creation content platform. We work in the energy field. Siemens energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing, or optimizing, onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry. We have worked with Oxford University. Oxford University actually has over 20 million artifacts and specimens and collections, across its gardens and museums and libraries. They're actually using NVIDIA GPU's and Amazon to do enhanced image recognition to classify all these things, which would take literally years going through manually, each of these artifacts. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research, that it's just so exciting to see what people are doing with our technology, together with Amazon. >> Ian, thank you so much for coming on theCUBE. I really appreciate it. A lot of great content there. We probably could go another hour. All the great stuff going on at NVIDIA. Any closing remarks you want to share, as we wrap this last minute up? >> You know, really what NVIDIA's about, is accelerating cloud computing. Whether it be AI, machine learning, graphics, or high-performance computing and simulation. 
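(Editor's note: the earlier point in this exchange about deploying inference within a latency budget is commonly tracked with percentile latencies. The sketch below uses made-up timings and an assumed 50 ms target purely to illustrate the kind of check a team might run against its own service.)

```python
# Toy sketch: checking an inference service against a latency budget.
# The timings and the 50 ms target are invented numbers for illustration only.
import numpy as np

latencies_ms = np.random.gamma(shape=2.0, scale=10.0, size=10_000)  # fake request timings

p50, p95, p99 = np.percentile(latencies_ms, [50, 95, 99])
budget_ms = 50.0

print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  p99={p99:.1f} ms")
print("meets real-time budget" if p99 <= budget_ms else "needs batching or consolidation tuning")
```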
And AWS was one of the first with this, in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities. The integrations with EC2, with SageMaker, with EKS, and ECS. The new instances with G5 and G5 G. Very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, more faster, come on. More computing. No one's going to complain with more computing. Ian, thanks for coming on. >> Thank you. >> Appreciate it. I'm John Furrier, host of theCUBE. You're watching Amazon coverage re:Invent 2021. Thanks for watching. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Furrrier | PERSON | 0.99+ |
Ian Buck | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ian | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Oxford University | ORGANIZATION | 0.99+ |
James Hamilton | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Amazon.com | ORGANIZATION | 0.99+ |
G5 G | COMMERCIAL_ITEM | 0.99+ |
Python | TITLE | 0.99+ |
late 2000s | DATE | 0.99+ |
Graviton | ORGANIZATION | 0.99+ |
Android | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Accelerated Computing | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
two | QUANTITY | 0.98+ |
2013 | DATE | 0.98+ |
A10G | COMMERCIAL_ITEM | 0.98+ |
both | QUANTITY | 0.98+ |
two fronts | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
single service | QUANTITY | 0.98+ |
PyTorch | TITLE | 0.98+ |
over 20 million artifacts | QUANTITY | 0.97+ |
single | QUANTITY | 0.97+ |
TensorFlow | TITLE | 0.95+ |
EC2 | TITLE | 0.94+ |
G5 instance | COMMERCIAL_ITEM | 0.94+ |
over 150 different SDKs | QUANTITY | 0.93+ |
SageMaker | TITLE | 0.93+ |
G5 | COMMERCIAL_ITEM | 0.93+ |
Arm | ORGANIZATION | 0.91+ |
first thing | QUANTITY | 0.91+ |
single GPU | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.9+ |
about 65 new updates | QUANTITY | 0.89+ |
two new instances | QUANTITY | 0.89+ |
pandemic | EVENT | 0.88+ |
Triton | ORGANIZATION | 0.87+ |
PA3 | ORGANIZATION | 0.87+ |
Triton | TITLE | 0.84+ |
Invent | EVENT | 0.83+ |
G5 G. | COMMERCIAL_ITEM | 0.82+ |
two simple | QUANTITY | 0.8+ |
Simon Davies, Splunk | Splunk .conf21
>> Hey, welcome to theCUBE's coverage of Splunk .conf21. I'm Lisa Martin. I've got Simon Davies with me, a VP at Splunk for APAC. Simon, welcome to the program. >> So here we are, unfortunately, at another virtual conference, but there has been a tremendous amount of change, and that's an understatement, right? That we've seen in the last 18 months. We've seen this massive distribution of the workforce. We've seen huge increases in the threat landscape. We've seen things like SolarWinds and ransomware increasing significantly, and an acceleration of digital transformation as companies tried to do whatever they could to enable digital workspaces. I wanted to unpack with you this 2021 State of Security report that Splunk has. What are some of the key findings? And then we'll dig into some of the things that you're seeing in the APAC region. >> Yeah, look, we're excited about the report. It really highlighted, I think, what a lot of organizations are going through. One of the statistics that stood out for me was that 75% of infrastructure users are multi-cloud, and that's expected to increase to 87% of customers using multicloud environments. The reason that's important is the complexity it creates for cyber professionals. Trying to protect and defend becomes exponentially harder with every new iteration or generation of infrastructure that companies consume. Most interesting, we actually saw about a third of users are already using three cloud providers, and that is going to grow, with 50% of customers using three cloud providers or more within the next two years. So that trend is going to continue. Leveraging cloud infrastructures is a core way of businesses digitizing and modernizing, and as cyber professionals, we have to think about how we're going to address that. >> Definitely. One of the things that I've been seeing and hearing in the last 18 months from a security perspective is that organizations say it's really not a matter of if we get hit with ransomware, it's when. And I was really surprised to see that the State of Security report found that 70% of security and IT leaders worry they're going to be hit by a SolarWinds-style attack. So the security landscape is changing dramatically in the last 18 months. >> Yeah, absolutely. I think the research is feeding back what we were already hearing from the customers around how this is a critical motion. And one thing that we've seen as well is that risk and cyber are a board-level agenda now, and an organization's ability to react or recover when you have an event is becoming a high priority. We're seeing a lot of increased spending in cybersecurity as this becomes more and more a priority for organizations, for boards. So yeah, the on-the-ground experience is certainly matching what we're seeing in the research there. And all of that is a data problem, right? Security is a data problem. When something happens, how do I know? How do I know where, how do I know when, how do I know what, and then how do I know what actions to take based upon the data that we need to get? >> So security being a data problem, you talked about the complexity of the multi-cloud environments, the percentages of organizations that are adopting that now, and what that trend is moving towards.
Also complexity, I can imagine, with data volumes only increasing. What are some of the key challenges that APAC organizations specifically are seeing as they are accelerating digital transformation and doing what they can to enable this distributed workforce? >> Yeah. So the hybrid multicloud environment is, I guess, an indicator of increased complexity. I think we often overlook the fact that the hybrid world is here to stay as well. Nobody is a hundred percent cloud and nobody's a hundred percent on-prem anymore. It's very much an environment now where I need to protect and defend across that entire surface area, and increasingly with edge computing. As we're looking at organizations pushing processing out to the edge of their operations, whether that's a distributed workforce or sensor-based environments, that becomes critical as well. We've got organizations like Intel that use us to monitor not only the cyber infrastructure, but the entire customer infrastructure that they're providing the fabric for, sensor-based environments of course, where you can imagine that the security becomes even more important. >> So I think that complexity, and the data sources that are now being generated and the explosion of that, is kind of critical. For APAC specifically, we saw some interesting trends. We saw about 37% of organizations are using data to now support their compliance environments. About 36% are bringing in non-security data. And about 36% have really started to use AI or machine learning tools to help them with that large-scale data volume processing that they weren't able to do before. And then lastly, security analytics really is starting to become a critical tool in the arsenal of cyber professionals, with 34% of organizations saying they're already using some form of security analytics to help them address the threat actors. >> Is there a silver lining in terms of the IT folks and the security folks collaborating better? Anything that you've seen in this report? >> Well, in the report, but also in the way that we're seeing SOC organizations use tools. The orchestration, remediation, and automation is a big industry trend, particularly when you look at things like implementing zero trust and how you would use that for putting that additional layer of protection around an organization. And that's where the ability to identify trends or events using machine learning or AI, understand the actions that need to be taken, understand the data sources that help address and remediate those, and be able to automate that, frees up the time of cybersecurity professionals. And that's a critical step we're seeing, because there's a shortage of skills, and that's been an ongoing challenge, not only in Asia Pacific, but I think worldwide. >> Right. It has been a challenge worldwide. I was actually doing some cybersecurity work in the last month or so, and I read that this is the fifth consecutive year of that cybersecurity skills gap. So definitely a challenge there, but also, if you flip the coin, an opportunity. So in terms of some of those challenges that you mentioned, what are some of the key things that organizations in APAC can do to confront and combat those security challenges that are no doubt only going to grow? >> Yeah, so I think it's about visibility and getting control, and that's where, again, data becomes key to that.
So making sure you're capturing the right data, making sure that data is available to your professionals, or if you're using a service provider, making sure that data is captured and available to the service providers. Because that is increasingly what we see as the critical step: when something happens, how do you recover, and what is your mean time to remediation? That's the critical motion. And so, again, what we keep coming back to is that security is a data problem. >> Security is a data problem. Got it. I do want to unpack a little bit some of the visibility challenges. That is one of the things that was identified. You mentioned that with so much complexity, multi-cloud, as well as hybrid work, being something that's going to stay, what are some of the things that organizations can do, and how can Splunk help to remove and mitigate those visibility challenges? >> So we've got another interesting piece of research. It's called the State of Data Innovation report. It really looked at the way organizations categorize their data, and the organizations that actually build a data strategy are much more prepared to react, to engage, and then to leverage that data for competitive differentiation in their markets. Interestingly, 33% of APAC organizations in particular rated their usage of data as better than the industry average. And 54% of APAC organizations already said they're using technologies like observability, which really helps them innovate around the data, thinking about that next generation of service they're trying to provide. >> Those are great numbers. About a third are working on implementing technologies, and 54% were focused on that observability. Did you see any industries in particular that are really leading edge there? Of course, every industry is being affected by the pandemic, but I'm just curious if there were any ones that stood out. >> So many great customer examples that we've got, where we see organizations thinking differently about the way they engage their customers as a result of the digital transformation. For me, one of the ones that stands out is Lenovo, a 50 billion plus multinational company servicing 180 markets around the world. When they looked at their observability approach and tried to understand how they were going to approach troubleshooting when they had issues, if you think about the e-commerce experience for their consumers, they were able to reduce the downtime and improve the remediation time when there were incidents, even though they had a 300% increase in traffic. And so the ability for an organization to handle that kind of surge in digital interactions with their customers, and to do that with clear visibility, using metrics, traces, and logs to understand exactly what's going on across complex, siloed, multi-services environments, was critical to Lenovo's success. And not only from a cybersecurity point of view, but also having real-time visibility into their infrastructure became critical as they service their customers. >> Right. One of the things I think we learned, Simon, during the pandemic, one of the many things, is that access to real-time data, real-time visibility, is no longer a nice-to-have. It was something that in the beginning was sort of organizations needing it to survive.
Now organizations needing it to thrive it's that, that real-time visibility is really table stakes for organizations in any industry. >>W we, we kind of saw organizations go through three phases. There was the react phase. Then there was the adapt phase. So, you know, reacting was, first of all, kind of keep my people safe. The adapt phase was how am I going to work? And now we're seeing that next generation, which is really the evolve phase, right? Given the pandemic is still well COVID is still with us. Um, whether it's your, most of the countries, which are treating it more as an endemic or whether you're on the number of the countries still on that journey. And you're in Asia Pacific, we see different levels of, of vaccination status, different levels of, uh, companies starting to open up or countries starting to open up their borders and, um, life getting back to the, what is the new normal, um, all of that is still gonna evolve with a different way of working, moving forward, a different way of engaging our customers and our, our, uh, constituents, if you're a public sector, organization and data is underlying all of that. And for that, where we're kind of excited to be helping some of the largest organizations with that across, across the region, >>Did it is absolutely critical. You know, one of the things that we've also, I think observed in the last year and a half is the, the patients or the fuses of people getting smaller and smaller. So for organizations to have that visibility into data so that they can service their customers, whether it be healthcare or financial services or the tech sector for, for example, the access to that data is critical for brand reputation, reducing churn. And of course, ensuring that the customers are getting what they need to from that data. >>Yeah. A hundred percent. Um, gosh, so many examples across the region. One of the ones that jumps to mind is Flinders university, right? When, when they had to go remote, they had to go virtual, um, 25,000 students overnight, um, suddenly needing to be interacting by digital channels. How do you keep them secure? How do you keep them safe? How do you get insights, uh, in terms of the services that they need to, to protect that student population? >>So if you, if you kind of distill this down into data opportunities for organizations, we'll start with APAC, what do you think the top three data opportunities are of security as a data problem? What are the opportunities to combat that for an organization to be really successful? >>So I think, I think visibility is the first one. So making sure we're capturing the data, making sure we're capturing the right data. Um, and so the ability, uh, not only to capture the data, but to time sequence the data so I can actually understand what's happened. And when, um, the second then is, is, uh, control. Um, so ensuring that the right people have access to the right data, but we, we control that in a way that is specific to our organization. Um, and then lastly compliance. Um, and I think we're seeing a lot of new legislation starts coming around critical infrastructure, um, recognizing the importance of the digital infrastructure to the broader economy, um, and making sure that you're compliant with that critical infrastructure kind of requirements and environments as well as then the traditional regulated industries such as healthcare and financial services, um, become critical in that approach. 
So thinking about those three elements, and then thinking about how do I then use tools like automation and security analytics to really accelerate, um, the capabilities that we have as an organization. >>So observability control compliance, give me the 32nd pitch of how Splunk can help organizations achieve all three of those. >>So observability really is about getting insights into all of your environments. So, uh, it's all about metrics, traces and logs, which is about understanding exactly what's going on with every experience of every digital interaction I have with every customer and the ability to Splunk through that with zero, uh, zero sampling or full fidelity of that data is something we see our customers, particularly Navy, um, security, uh, look for me to it's all about orchestration and analytics. So how do I, how do I get that understanding that, that user behavior understanding the analytics around that, and then how machine learning becomes a critical part of that to help me scale my cyber infrastructure and defend. And then lastly resilience is really the core for all it systems in a digital world. Um, and being able to not only harden deliver resilient services like going over, I was able to do the 300% increase in their web traffic. Um, but also when something does go wrong and be able to remediate quickly become critical as well. >>Right? That quick remediation is because, like I was saying earlier, it's no longer a, if we get hit it's when organizations need to have that resilience baked in. Well, Simon, thank you for joining me, breaking down. Some of those reports what's going on in APAC, some of the trends and also some of the opportunities, security being a data problem, um, and organizations, what they can do to remediate that we appreciate your time. Thanks for having my pleasure for Simon Davies and Lisa Martin. You're watching the cubes coverage of splunk.com 21.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
Simon | PERSON | 0.99+ |
300% | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
180 markets | QUANTITY | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
75% | QUANTITY | 0.99+ |
87% | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
APAC | ORGANIZATION | 0.99+ |
25,000 students | QUANTITY | 0.99+ |
34% | QUANTITY | 0.99+ |
54% | QUANTITY | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
Simon Davies | PERSON | 0.99+ |
33% | QUANTITY | 0.99+ |
second | QUANTITY | 0.98+ |
Splunk Simon | ORGANIZATION | 0.98+ |
last month | DATE | 0.98+ |
three elements | QUANTITY | 0.98+ |
about 36% | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
zero | QUANTITY | 0.97+ |
hundred percent | QUANTITY | 0.97+ |
about 37% | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
32nd pitch | QUANTITY | 0.97+ |
pandemic | EVENT | 0.97+ |
Flinders university | ORGANIZATION | 0.97+ |
splunk.com | OTHER | 0.96+ |
last year and a half | DATE | 0.95+ |
three | QUANTITY | 0.94+ |
fifth consecutive year | QUANTITY | 0.93+ |
three cloud providers | QUANTITY | 0.93+ |
last 18 months | DATE | 0.93+ |
three phases | QUANTITY | 0.9+ |
Splunk .conf21 | OTHER | 0.86+ |
50 billion plus | QUANTITY | 0.83+ |
apex | ORGANIZATION | 0.82+ |
zero trust | QUANTITY | 0.82+ |
about | QUANTITY | 0.76+ |
one thing | QUANTITY | 0.75+ |
third | QUANTITY | 0.75+ |
COVID | EVENT | 0.72+ |
next two years | DATE | 0.7+ |
about a third | QUANTITY | 0.62+ |
users | QUANTITY | 0.56+ |
21 | TITLE | 0.51+ |
21 | OTHER | 0.47+ |
20 | QUANTITY | 0.44+ |
Keith Brooks, AWS | AWS Summit DC 2021
>> Yeah. Hello and welcome back to theCUBE's coverage of AWS Public Sector Summit here in Washington, D.C. We're live on the ground for two days, a face-to-face conference with an expo hall and everything, and we're here with Keith Brooks, who is the director and head of technical business development for AWS GovCloud, celebrating its 10th birthday. Congratulations. Welcome to theCUBE. >> Thank you, John. Happy to be here. >> EC2 is 15, S3 is 9.5, or no, maybe they're 10, because that's the same day as SQS. So GovCloud, 10 years, 20 years. Time flies. >> 10 years. >> Big milestone. Congratulations. A lot of history involved in GovCloud. Take us through, what's the current situation? >> Yeah. So let's start with what it is, just for the viewers that may not be familiar. AWS GovCloud is isolated AWS cloud infrastructure and services that were purposely built for our U.S. government customers that had highly sensitive data or highly regulated data, or applications and workloads that they wanted to move to the cloud. So we gave customers the ability to do that with AWS GovCloud. It is subject to the FedRAMP High and DoD SRG IL4 and IL5 baselines. It gives customers the ability to address ITAR requirements as well as CJIS, NIST, CMMC, and FIPS requirements, and gives customers a multi-region architecture that allows them to also design for disaster recovery and high availability. In terms of why we built it, it starts with our customers. It was pretty clear from the government that they needed a highly secure and highly compliant cloud infrastructure to innovate ahead of demand, and that's what we delivered. So back in August of 2011 we launched AWS GovCloud, which gave customers the best of breed in terms of high technology, high security, high compliance in the cloud, to allow them to innovate for their mission critical workloads. >> Who were some of the early customers when you guys launched? After the CIA deal, the intelligence community is a big one, but who were some of the early customers? >> So the Department of Health and Human Services, the Department of Veterans Affairs, the Department of Justice, and the Department of Defense were all early users of AWS GovCloud. But one of our earliest lighthouse customers was the NASA Jet Propulsion Laboratory, and NASA JPL used AWS GovCloud to procure resources ahead of demand, which allowed them to save money and also take advantage of being efficient and only paying for what they needed. But they went beyond just IT operations. They also looked at how they could use the cloud, and specifically GovCloud, for their mission programs. So if you think back all the way to 2012 with the Mars Curiosity rover, NASA JPL actually streamed and processed and stored that data from the Curiosity rover on AWS GovCloud. They actually streamed over 150 terabytes of data, responded to over 80,000 requests per second, and took it beyond just imagery. They actually did high performance compute and data analytics on the data as well. That led to additional efficiencies for future missions. >> So they weren't just kicking the tires, they were actually... >> Hardcore, mission into it. Mission critical workloads that also adhere to ITAR compliance, which is why they used AWS GovCloud. >> All these compliance levels. So there's also these levels. I remember when I was working on the JEDI stories that were out there, it was always like, levels for those different classifications. What does all that mean?
And then this highly available data and high availability, all these words mean something in these top secret clouds. Can you take us through the meanings of those? >> Yeah, absolutely. So it starts with the federal compliance programs, and the two most popular programs are FedRAMP and DoD SRG. FedRAMP is more general for federal government agencies. There are three levels, low, moderate, and high, and the short and skinny of those levels is how they align to the FISMA requirements of the government. So there's FISMA low, FISMA moderate, FISMA high. Depending on the sensitivity of the government data, you will have to align to those levels of FedRAMP to run workloads and store data in the cloud. Similar story for DoD with SRG impact levels two, four, five, and six. Impact levels two, four, and five are all for unclassified data. Level two is for less sensitive, public defense data. Levels four and five cover more sensitive defense data, to include mission critical national security systems, and impact level six is for classified information. So those form the basis of security and compliance. Luckily, with AWS GovCloud celebrating our 10th anniversary, we address FedRAMP High for our customers that require that, and DoD impact levels four and five for sensitive defense data. >> And that was a real nuanced point, and a lot of the competition can't do that. That's real. People don't understand, you know, this company versus that company, and all the lobbying and all the mudslinging that goes on. We've seen that in the industry. It's unfortunate, but it happens. I do want to ask you about FedRAMP, because what I'm seeing on the commercial side in the cloud ecosystem is that a lot of companies that aren't, quote, targeting public sector are coming in on FedRAMP. So there's some good traction there. You guys have done a lot of work to accelerate that. Any new information to share there? >> Yes. So we've been committed to supporting the federal government compliance requirements effectively since the launch of GovCloud. And we've demonstrated our commitment to FedRAMP over the last number of years, and in GovCloud specifically we've taken dozens of services through FedRAMP High, and we're 100% committed to it because we have great relationships with the FedRAMP JAB, the Joint Authorization Board. We work with individual government agencies to secure agency ATOs, and in fact we actually have more agency ATOs with AWS GovCloud than any other cloud provider. And the short and skinny is that represents the baseline for cloud security to address sensitive government workloads and sensitive government data. And what we're seeing from industry, and specifically highly regulated industries, is that the standard the U.S. government set means they have the assurance to run controlled unclassified information, or other levels of highly sensitive data, on the cloud as well. So FedRAMP set that standard. >> It's interesting that the cloud, this is the ecosystem within an ecosystem again, with a crossover section. So for instance, the impact of not getting FedRAMP certified is basically money, right? If you're a supplier, vendor, software developer, or whatever, it used to be that no one would know, right? With FedRAMP, I'm going to have to hire a whole department right now. You guys have made it really easy. This is a key value proposition, isn't it? >> Correct. And you see it with a number of ISVs and software-as-a-service providers.
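(Editor's note: the compliance tiers walked through above can be summarized as a simple lookup table, paraphrasing the interview. This is not an authoritative statement of FedRAMP or DoD SRG policy; the official program documents define the levels.)

```python
# Rough lookup of the compliance tiers as described in the interview above.
# Paraphrased from the conversation, not from the FedRAMP or DoD SRG documents.
FEDRAMP_LEVELS = {
    "low": "aligned to FISMA Low data",
    "moderate": "aligned to FISMA Moderate data",
    "high": "aligned to FISMA High data",
}

DOD_IMPACT_LEVELS = {
    2: "less sensitive, public defense data (unclassified)",
    4: "more sensitive defense data (unclassified)",
    5: "mission critical national security systems (unclassified)",
    6: "classified information",
}

def describe_impact_level(il: int) -> str:
    return DOD_IMPACT_LEVELS.get(il, "not covered in this conversation")

print(describe_impact_level(5))
```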
If you visit the federal marketplace website, you'll see dozens of providers that have FedRAMP authorized third-party SaaS products running on GovCloud, industry leading SaaS companies like Salesforce.com, Splunk, and SAP NS2. Effectively they're bringing their best of breed capabilities, building on top of AWS GovCloud, and offering those highly compliant FedRAMP Moderate and FedRAMP High capabilities to customers both in government and private industry that need that level of compliance. >> Just as an aside, I saw there was a nice tweet from Teresa Carlson about Splunk on GovCloud yesterday. That was a nice little positive gesture for you guys at GovCloud. What other areas are you guys moving the needle on? Because architecturally this is a big deal. What are some areas that you're moving the needle on for GovCloud? >> Well, when I look back across the last 10 years, there were some pretty important developments that stand out. The first is us launching the second GovCloud infrastructure region in 2018. And that gave customers that use GovCloud, specifically customers that have highly sensitive data and high levels of compliance, the ability to build fault tolerant, highly available, and mission critical workloads in the cloud, in a region that also gives them an additional three availability zones. So the launch of GovCloud East, which is named AWS GovCloud (US-East), gave customers two regions, a total of six availability zones, that allowed them to accelerate and build more scalable solutions in the cloud. More recently, there is the emergence of another DoD program called the Cybersecurity Maturity Model Certification, CMMC, and CMMC is something where we looked around the corner and said we need to innovate to help our customers, particularly defense customers and the defense industrial base customers, address CMMC requirements in the cloud. So with GovCloud, back in December of 2020, we actually launched the AWS compliant framework for federal defense workloads, which gives customers a turnkey capability and tooling and resources to spin up environments that are configured to meet CMMC controls and DoD SRG controls. So those things represent some of the evolution. >> Keith, I'm interested also in your thoughts on how you see the progression of GovCloud outside the United States. Tactical edge, you've got Wavelength coming on board. How do you guys look at that? Obviously AWS is global, and it's not just the JEDI thing, I think it's more in general. Edge deployments, sovereignty, the world's flat, right? I mean, so how does that work? >> So it starts back with customer requirements, and I tie it back to the first question. Effectively we built GovCloud to respond to our U.S. government customers and our highly regulated industry customers that had highly sensitive data and a high bar to meet in terms of regulatory compliance, and that's the foundation of it. So as we look to other customers, to include those outside of the US, it starts with those requirements. You mentioned things like edge and hybrid, and a good example of how we marry the two is when we launched AWS Outposts in GovCloud last year. Outposts brings the power of the AWS cloud to the on-premises environments of our customers, whether it's their data centers or colo environments, by bringing AWS services, APIs, and service endpoints to the customer's on-premises facilities.
Customers also have availability to use outpost. It's just for us customers, it's focused on outpost availability, geography >>right now us. Right. But other governments gonna want their Govcloud too. Right, Right, that's what you're getting at, >>Right? And it starts with the data. Right? So we we we spent a lot of time working with government agencies across the globe to understand their regulations and their requirements and we use that to drive our decisions. And again, just like we started with govcloud 10 years ago, it starts with our customer requirements and we innovate from there. Well, >>I've been, I love the D. O. D. S vision on this. I know jet I didn't come through and kind of went scuttled, got thrown under the bus or whatever however you want to call it. But that whole idea of a tactical edge, it was pretty brilliant idea. Um so I'm looking forward to seeing more of that. That's where I was supposed to come in, get snowball, snowmobile, little snow snow products as well, how are they doing? And because they're all part of the family to, >>they are and they're available in Govcloud and they're also authorized that fed ramp and Gov srg levels and it's really, it's really fascinating to see D. O. D innovate with the cloud. Right. So you mentioned tactical edge. So whether it's snowball devices or using outposts in the future, I think the D. O. D. And our defense customers are going to continue to innovate. And quite frankly for us, it represents our commitment to the space we want to make sure our defense customers and the defense industrial base defense contractors have access to the best debris capabilities like those edge devices and edge capable. I >>think about the impact of certification, which is good because I just thought of a clean crows. We've got aerospace coming in now you've got D O. D, a little bit of a cross colonization if you will. So nice to have that flexibility. I got to ask you about just how you view just in general, the intelligence community a lot of uptake since the CIA deal with amazon Just overall good health for eight of his gum cloud. >>Absolutely. And again, it starts with our commitment to our customers. We want to make sure that our national security customers are defense customers and all of the customers and the federal government that have a responsibility for securing the country have access to the best of breed capability. So whether it's the intelligence community, the Department of Defense are the federal agencies and quite frankly we see them innovating and driving things forward to include with their sensitive workloads that run in Govcloud, >>what's your strategy for partnerships as you work on the ecosystem? You do a lot with strategy. Go to market partnerships. Um, it's got its public sector pretty much people all know each other. Our new firms popping up new brands. What's the, what's the ecosystem looks like? >>Yeah, it's pretty diverse. So for Govcloud specifically, if you look at partners in the defense community, we work with aerospace companies like Lockheed martin and Raytheon Technologies to help them build I tar compliant E. R. P. Application, software development environments etcetera. We work with software companies I mentioned salesforce dot com. Splunk and S. A. P. And S. To uh and then even at the state and local government level, there's a company called Pay It that actually worked with the state of Kansas to develop the Icann app, which is pretty fascinating. 
It's a app that is the official app of the state of Kansas that allow citizens to interact with citizens services. That's all through a partner. So we continue to work with our partner uh broad the AWS partner network to bring those type of people >>You got a lot of MST is that are doing good work here. I saw someone out here uh 10 years. Congratulations. What's the coolest thing uh you've done or seen. >>Oh wow, it's hard to name anything in particular. I just think for us it's just seeing the customers and the federal government innovate right? And, and tie that innovation to mission critical workloads that are highly important. Again, it reflects our commitment to give these government customers and the government contractors the best of breed capabilities and some of the innovation we just see coming from the federal government leveraging the count now. It's just super cool. So hard to pinpoint one specific thing. But I love the innovation and it's hard to pick a favorite >>Child that we always say. It's kind of a trick question I do have to ask you about just in general, the just in 10 years. Just look at the agility. Yeah, I mean if you told me 10 years ago the government would be moving at any, any agile anything. They were a glacier in terms of change, right? Procure Man, you name it. It's just like, it's a racket. It's a racket. So, so, but they weren't, they were slow and money now. Pandemic hits this year. Last year, everything's up for grabs. The script has been flipped >>exactly. And you know what, what's interesting is there were actually a few federal government agencies that really paved the way for what you're seeing today. I'll give you some examples. So the Department of Veterans Affairs, they were an early Govcloud user and way back in 2015 they launched vets dot gov on gov cloud, which is an online platform that gave veterans the ability to apply for manage and track their benefits. Those type of initiatives paved the way for what you're seeing today, even as soon as last year with the U. S. Census, right? They brought the decennial count online for the first time in history last year, during 2020 during the pandemic and the Census Bureau was able to use Govcloud to launch and run 2020 census dot gov in the cloud at scale to secure that data. So those are examples of federal agencies that really kind of paved the way and leading to what you're saying is it's kind >>of an awakening. It is and I think one of the things that no one's reporting is kind of a cultural revolution is the talent underneath that way, the younger people like finally like and so it's cooler. It is when you go fast and you can make things change, skeptics turned into naysayers turned into like out of a job or they don't transform so like that whole blocker mentality gets exposed just like shelf where software you don't know what it does until the cloud is not performing, its not good. Right, right. >>Right. Into that point. That's why we spend a lot of time focused on education programs and up skilling the workforce to, because we want to ensure that as our customers mature and as they innovate, we're providing the right training and resources to help them along their journey, >>keith brooks great conversation, great insight and historian to taking us to the early days of Govcloud. Thanks for coming on the cube. Thanks thanks for having me cubes coverage here and address public sector summit. We'll be back with more coverage after this short break. Mhm. Mhm mm.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
august of 2011 | DATE | 0.99+ |
December of 2020 | DATE | 0.99+ |
Teresa Carlson | PERSON | 0.99+ |
Department of Veterans Affairs | ORGANIZATION | 0.99+ |
two days | QUANTITY | 0.99+ |
Department of Health and Human Services | ORGANIZATION | 0.99+ |
Lockheed martin | ORGANIZATION | 0.99+ |
keith brooks | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Washington D. C. | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Department of Justice | ORGANIZATION | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
last year | DATE | 0.99+ |
US | LOCATION | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
Census Bureau | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
Department of Defense | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
U. S. | LOCATION | 0.99+ |
U. S. Government | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.99+ |
over 150 terabytes | QUANTITY | 0.99+ |
Keith Brooks | PERSON | 0.99+ |
10 years ago | DATE | 0.99+ |
2015 | DATE | 0.99+ |
six availability zones | QUANTITY | 0.99+ |
Raytheon Technologies | ORGANIZATION | 0.99+ |
10th anniversary | QUANTITY | 0.99+ |
Govcloud | ORGANIZATION | 0.99+ |
second | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
2012 | DATE | 0.98+ |
9.5 | QUANTITY | 0.98+ |
first question | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
45 | QUANTITY | 0.98+ |
yesterday | DATE | 0.98+ |
10 years ago | DATE | 0.98+ |
Kansas | LOCATION | 0.98+ |
D. O. D. | LOCATION | 0.97+ |
three levels | QUANTITY | 0.97+ |
10th birthday | QUANTITY | 0.97+ |
Splunk | ORGANIZATION | 0.97+ |
GovCloud | ORGANIZATION | 0.97+ |
GovCloud East | TITLE | 0.97+ |
three availability zones | QUANTITY | 0.97+ |
2020 | DATE | 0.96+ |
U. S. Census | ORGANIZATION | 0.96+ |
over 80,000 requests per second | QUANTITY | 0.96+ |
four | QUANTITY | 0.96+ |
D. O. D | LOCATION | 0.96+ |
govcloud | ORGANIZATION | 0.96+ |
john | PERSON | 0.96+ |
eight | QUANTITY | 0.96+ |
one | QUANTITY | 0.95+ |
Four | QUANTITY | 0.95+ |
Nasa Jpl | ORGANIZATION | 0.95+ |
today | DATE | 0.94+ |
W. S. | LOCATION | 0.94+ |
GovCloud | TITLE | 0.94+ |
Fed ramp | TITLE | 0.94+ |
Kevin L. Jackson, GC GlobalNet | CUBE Conversation, September 2021
(upbeat music) >> Hello and welcome to this special CUBE conversation. I'm John Furrier, host of theCUBE here, remote in Washington, DC, not in Palo Alto, but we're all around the world with theCUBE as we are virtual. We're here recapping the Citrix Launchpad: Cloud (accelerating IT modernization) announcements with CUBE alumni Kevin Jackson, Kevin L. Jackson, CEO of GC Global Net. Kevin, great to see you. Thanks for coming on. >> No, thank you very much, John. It's always a pleasure to be on theCUBE. >> It's great to have. You always have great insights. But here, we're recapping the event, Citrix Launchpad: Cloud (accelerator IT modernization). And again, we're seeing this theme constantly now, IT modernization, application modernization. People are now seeing clearly what the pandemic has shown us all that there's a lot of projects that need to be up-leveled or kill. There's a lot of things happening and going on. What's your take of what you heard? >> Well, you know, from a general point of view, organizations can no longer put off this digitalization and the modernization of their IT. Many of these projects have been on a shelf waiting for the right time or, you know, the budget to get right. But when the pandemic hit, everyone found themselves in the virtual world. And one of the most difficult things was how do you make decisions in the virtual world when you can't physically be with someone? How do you have a meeting when you can't shake someone's hand? And they all sort of, you know, stared at each other and virtually, of course, to try to figure this out. And they dusted off all of the technologies they had on the shelf that they were, you know, they were told to use years ago, but just didn't feel that it was right. And now it became necessary. It became the way of life. And the thing that really jumped at me yesterday, well, jumped at me with Launchpad, the Launchpad of the cloud is that Citrix honed in on the key issues with this virtual world. I mean, delivering applications, knowing what the internet state is so that you could select the right sources for information and data. And making security holistic. So you didn't have to, it was no longer sort of this bolted on thing. So, I mean, we are in the virtual world to stay. >> You know, good call out there. Honing in was a good way to put it. One quote I heard from Tim (Minahan) was, you know, he said one thing that's become painfully evident is a lot of companies are going through the pandemic and they're experiencing the criticality of the application experience. And he says, "Application experience is the new currency." Okay, so the pandemic, we all kind of know what's going on there. It's highlighting all the needs. But this idea of an application experience is the new currency is a very interesting comment because, I mean, you nailed it. Everyone's working from home. The whole work is shifting. And the applications, they kind of weren't designed to be this way 100%. >> Right, right. You know, the thing about the old IT was that you would build something and you would deploy it and you would use it for a period of time. You know, a year, two years, three years, and then there would be an upgrade. You would upgrade your hardware, you would upgrade your applications, and then you go through the process again, you know? What was it referred to as, it wasn't modernization, but it was refresh. You know, you would refresh everything. Well today, refresh occurs every day. Sometimes two or three times a day. 
And you don't even know it's occurring. Especially in the application world, right? I think I was looking at something about Chrome, and I think we're at like Chrome 95. It's like Chrome is updated constantly as a regular course of business. So you have to deploy this, understand when it's going to be deployed, and the customers and users, you can't stop their work. So this whole application delivery and security aspect is completely different than before. That's why this, you know, this intent driven solution that Citrix has come up with is so revolutionary. I mean, by being able to know the real business needs and requirements, and then translating them to real policies that can be enforced, you can really, I guess, project the needs, requirement of the organization anywhere in the world immediately with the applications and with this security platform. >> I want to get your reactions to something because that's right on point there, because when we look at the security piece and the applications you see, okay, your mind goes okay, old IT, new IT. Now with cloud, with the pandemic showing that cloud scale matters, a couple themes have come from that used to be inside the ropes concepts. Virtualization, virtual, and automation. Those two concepts are going mainstream because now automation with data and virtual, virtual work, virtual CUBE, I mean, we're doing virtual interviews. Virtualization is coming here. So building on those things. New things are happening around those two concepts. Automation is becoming much more programmable, much more real time, not just repetitive tasks. Virtual is not just doing virtual work from home. It's integrating that virtual experience into other applications. This requires a whole new organizational structure mindset. What's your thoughts on that? >> Well, one of the things is the whole concept of automation. It used to be a nice to have. Something that you could do maybe to improve your particular process, not all of the processes. And then it became the only way of reacting to reality. Humans, it was no longer possible for humans to recognize a need to change and then execute on that change within the allotted time. So that's why automation became a critical element of every business process. And then it expanded that this automated process needed to be connect and interact with that automated process and the age of the API. And then the organization grew from only relying on itself to relying on its ecosystem. Now an organization had to automate their communications, their integration, the transfer of data and information. So automation is key to business and globalization creates that requirement, or magnifies that requirement. >> One of the things we heard in the event was, obviously Citrix has the experience with virtual apps, virtual desktop, all that stuff, we know that. But as the cloud grows in, they're making a direct statement around Citrix is going to add value on top of the cloud services. Because that's the reality of the hybrid, and now soon to be multi-cloud workflows or architectures. How do you see that evolve? Is that something that's being driven by the cloud or the app experience or both? What's your take on that focus of Citrix taking their concepts and leadership to add value on top of the cloud? >> To be honest, I don't like referring to the cloud. It gives an impression that there's only a single cloud and it's the same no matter what. That couldn't be further from the truth. 
A typical organization will consume services from three to five cloud service providers. And these providers aren't working with each other. Their services are unique, independent. And it's up to the enterprise to determine which applications and how those applications are presented to their employees. So it's the enterprise that's responsible for the employee experience. Integrating data from one cloud service provider to another cloud service provider within this automated business process or multiple business processes. So I see Citrix is really helping the enterprise to continually monitor performance from these independent cloud service provider and to optimize that experience. You know, the things like, where is the application being consumed for? What is the latency today on the internet? What type of throughput do I need from cloud service provider A versus cloud service provider B? All of this is continually changing. So the it's the enterprise that needs to constantly monitor the performance degradation and look at outages and all of that. So I think, you know, Citrix is on point by understanding that there's no single cloud. Hybrid and multi-cloud is the cloud. It's the real world. >> You know, that's a great call. And I think it's naive for enterprises to think that, you know, Microsoft is sitting there saying hmm, let's figure out a way to really work well with AWS. And vice versa, right? I mean, and you got Google, right? They all have their own specialties. I mean, Amazon web service has got great compliance action going on there. Much back stronger than Microsoft. Microsoft's got much deeper legacy and integration to their base, and Google's doing great with developers. So they're all kind of picking their lanes, but they all exist. So the question in the enterprise is what? Do I, how do I deal with that? And again, this is an opportunity for Citrix, right? So this kind of comes down to the single pane of glass (indistinct) always talks about, or how do I manage this new environment that I need to operate in? Because I will want to take advantage of some of the Google goodness and the Azure and the AWS. But now I got my own on premises. Bare metals grow. You're seeing more bare metal deals going down now because the cloud operations has come on premises. >> Yeah, and in fact, that's hybrid IT, right? I always see that there are an enterprise, when enterprise thinks about modernizing or digitally transforming a business process, you have three options, right? You could put it in your own data center. In fact, building a data center and optimizing a data center for a particular process is the cheapest and most efficient way of executing a business process. But it's only way cheaper and efficient if that process is also stable and consistent. I'll say, but some are like that. But you can also do a managed service provider. But that is a distinctly different approach. And the third option is a cloud service provider. So this is a hybrid IT environment. It's not just cloud. It's sort of, you know, it's not smart to think everything's going to go into the cloud. >> It's distributed computing. We see (indistinct). >> Yeah, yeah, absolutely. I mean, in today's paperless world, don't you still use a pen and paper and pencil? Yes. The right tool for the right job. So it's hybrid IT. Cloud is not always a perfect thing. And that's something that I believe Citrix has looked at. 
That interface between the enterprise and all of these choices when it comes to delivering applications, delivering the data, integrating that data, and making it secure. >> And I think that's a winning positioning to have this app experience, the currency narrative, because that ultimately is an outcome that you need to win on. And with the cloud and the cloud scale that goes on with all the multiple services now available, the company's business model is app driven, right? That's their application. So I love that, and I love that narrative. Also like this idea of app delivering security. It's kind of in the weeds a little bit, but it highlights this hybrid IT concept you were saying. So I got to ask you as the expert in the industry in this area, you know, as you have intent, what do they call it? Intent driven solution for app delivering security. Self healing, continuous optimization, et cetera, et cetera. The KPIs are changing, right? So I want to get your thoughts on that. Because now, as IT shifts to be much faster, whether it's security teams or IT teams to service that DevOps speed, shifting left everyone talks about, what's the KPIs that are changing? What is the new KPIs that the managers and people can work through as a north star or just tactically? What's your thoughts? >> Well, actually, every KPI has to relate to either the customer experience or the employee experience, and sometimes even more important, your business partner experience. That's the integration of these business processes. And one of the most important aspects that people really don't think about is the API, the application programming interface. You know, you think about software applications and you think about hardware, but how is this hardware deployed? How do you deploy and expand the number of servers based upon more usage from your customer? It's via the API. You manage the customer experience via APIs. You manage your ability to interact with your business partners through the API, their experience. You manage how efficient and effective your employees are through their experience with the IT and the applications through the API. So it's all about that, you know, that experience. Everybody yells customer experience, but it's also your employee experience and your partner experience. So that depends upon this integrated holistic approach to applications and the API security. The web app, the management of bots, and the protection of your APIs. >> Yeah, that really nailed it. I think the position is good. You know, if you can get faster app delivery, keep the security in line, and not bolt it on after the fact and reduce costs, that's a winning formula. And obviously, stitching together the service layer of app and software for all the cloud services is really key. I got to ask you though, Kevin, since you and I have riffed on theCUBE about this before, more importantly now than ever with the pandemic, look at the work edge. People working at home and what's causing the office spaces changing. The entire network architecture. I mean, I was talking to a big enterprise that said, oh yeah, we had, you know, the network for the commercial and the network for dial up now 100% provisioned for everyone at home. The radical change to the structural interface has completely changed the game. What is your view on this? I mean, give us your, where does it go? What happens next? >> So it's not what's next, it's where we are right now. 
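Kevin's two threads above, continuously measuring latency and throughput per cloud provider and deriving customer, employee, and partner experience KPIs from the APIs, can be illustrated with a rough monitoring sketch. The provider names, endpoints, and SLO threshold below are hypothetical; this only shows the shape of the loop, not any particular product.

```python
import time
import statistics
from urllib.request import urlopen

# Hypothetical health endpoints exposed by each provider-hosted deployment.
PROVIDERS = {
    "provider_a": "https://app.provider-a.example.com/health",
    "provider_b": "https://app.provider-b.example.com/health",
}

def probe_latency_ms(url: str, samples: int = 3) -> float:
    """Measure median round-trip time to an endpoint, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urlopen(url, timeout=5).read()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

def pick_best_source(slo_ms: float = 200.0) -> str:
    """Return the provider currently giving the best experience within the SLO."""
    results = {name: probe_latency_ms(url) for name, url in PROVIDERS.items()}
    within_slo = {n: ms for n, ms in results.items() if ms <= slo_ms}
    # Prefer anything meeting the SLO; otherwise degrade gracefully to the fastest.
    candidates = within_slo or results
    return min(candidates, key=candidates.get)

if __name__ == "__main__":
    print("Routing users to:", pick_best_source())
```

In practice the same probe loop would also collect API error rates and throughput per audience, which is where the experience KPIs Kevin describes would come from.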
And you need to be able to be, work from anywhere at any time across multiple devices. And on top of that, you have to be able to adapt to constant change in both the devices, the applications, the environment, and a business model. I did a interview with Citrix, actually, from an RV in the middle of a park, right? And it's like, we did video, we did it live. I think it was through LinkedIn live. But I mean, you need to be able to do anything from anywhere. And the enterprise needs to support that business imperative. So I think that's key. It's it's not the future, it's the today. >> I mean, the final question I have for you is, okay, is the frog in the boiling water? At what point does the CIO and the IT leaders, I mean, their minds are probably blown. I can only imagine. The conversations I've been having, it's been, you know, be agile, do it in the cloud, do it at speed, fix the security, programmable infrastructure. What? How fast can I run? This is the management challenge. How are people dealing with this when you talk to them? >> First of all, the IT professional needs to focus on the business needs, the business requirements, the business key performance indicators, not technology, and a business ROI. The CIO has to be right there in the C sweep of understanding what's needed by the business. And there also has to be an expert in being able to translate these business KPIs into IT requirements, all right? And understanding that all of this is going to be within a realm of constant change. So the CIO, the CTO, and the IT professional needs to realize their key deliverable is business performance. >> Kevin, great insight. Loved having you on theCUBE. Thanks for coming on. I really appreciate your time highlighting and recapping the Citrix Launchpad: Cloud announcements. Accelerating IT modernization can't go fast enough. People, they want to go faster. >> Faster, faster, yes. >> So great stuff. Thanks for coming, I appreciate it. >> Thank you, John. I really enjoyed it. >> Okay, it's theCUBE conversation. I'm John Furrier, host of theCUBE. Thanks for watching. (upbeat music)
AWS Startup Showcase Opening
>> Hello and welcome to today's CUBE presentation of the AWS Startup Showcase. I'm John Furrier, your host, highlighting the hottest companies in DevOps, data analytics, and cloud management. Lisa Martin and Dave Vellante are here to kick it off. We've got a great program for you again. This is our new community event model that we're doing every quarter, with a new episode each time; this is quarter three this year, or episode three, season one, of the hottest cloud startups that are gonna be featured. Then we're gonna do a keynote package, and then 15 companies will present their story. Go check them out, and then we have a closing keynote with a practitioner, and we've got some great lineups. Lisa, Dave, great to see you. Thanks for joining me. >> Hey guys, >> great to be here. So Dave, got to ask you, you know, we're back in events. Last night we were at the Fortinet event where they had the golf PGA championship with theCUBE. Now we got the hybrid model. This is the new normal we're in. We got these great companies, we're showcasing them. What's your take? >> Well, you're right. I mean, I think there's a combination of things. We're seeing some live shows. We saw what we did at Mobile World Congress. We did the show with AWS Storage Day where we were at the Spheres; there was a live audience, but they weren't there physically. It was just virtual. And yeah, so, I just got pinged about re:Invent. Hey Dave, you gotta make your flights. So I'm making my flights. >> We're gonna be at the Amazon Web Services Public Sector Summit next week. Lisa, a lot of cloud convergence going on here. We got many companies being featured here; we spoke with their CEOs and their top people. Cloud management, DevOps, data, and security. Really cutting edge companies. >> Yes, cutting edge companies who are all focused on acceleration. We've talked about the acceleration of digital transformation the last 18 months, and we've seen a tremendous amount of acceleration in innovation with what these startups are doing. We've talked to, like you said, the C-suite, and we've also talked to their customers about how they are innovating so quickly with this hybrid environment, this remote work. And we've talked a lot about security in the last week or so. You mentioned that we were at Fortinet: the cybersecurity skills gap, and what some of these companies are doing with automation, for example, to help shorten that gap, which is a big opportunity for the job market. >> Great stuff. Dave, so the format of this event: you're going to have a fireside chat with a practitioner. We like to end these programs with a great, experienced practitioner, cutting edge in data. For the beginning, Lisa and I are gonna be kicking off with, of course, Jeff Barr to give us the update on what's going on at AWS, and then a special presentation from Emily Freeman, who is the author of DevOps For Dummies. She's introducing new content, the revolution in DevOps, DevOps 2.0, and of course Jerry Chen from Greylock, CUBE alumni, is going to come on and talk about his new thesis, Castles in the Cloud: creating moats at cloud scale. We've got a great lineup of people, and so the front end's gonna be great. Dave, give us a little preview of what people can expect at the end with the fireside chat. >> Well, at the highest level, John, I've always said we're entering that sort of third great wave of cloud. First wave was experimentation. The second big wave was migration.
The third wave of integration, Deep business integration and what you're >>going to hear from >>Hello Fresh today is how they like many companies that started early last decade. They started with an on prem Hadoop system and then of course we all know what happened is S three essentially took the knees out from, from the on prem Hadoop market lowered costs, brought things into the cloud and what Hello Fresh is doing is they're transforming from that legacy Hadoop system into its running on AWS but into a data mess, you know, it's a passionate topic of mine. Hello Fresh was scaling they realized that they couldn't keep up so they had to rethink their entire data architecture and they built it around data mesh Clements key and christoph Soewandi gonna explain how they actually did that are on a journey or decentralized data >>measure it and your posts have been awesome on data measure. We get a lot of traction. Certainly you're breaking analysis for the folks watching check out David Landes, Breaking analysis every week, highlighting the cutting edge trends in tech Dave. We're gonna see you later, lisa and I are gonna be here in the morning talking about with Emily. We got Jeff Barr teed up. Dave. Thanks for coming on. Looking forward to fireside chat lisa. We'll see you when Emily comes back on. But we're gonna go to Jeff bar right now for Dave and I are gonna interview Jeff. Mm >>Hey Jeff, >>here he is. Hey, how are you? How's it going really well. So I gotta ask you, the reinvent is on, everyone wants to know that's happening right. We're good with Reinvent. >>Reinvent is happening. I've got my hotel and actually listening today, if I just remembered, I still need to actually book my flights. I've got my to do list on my desk and I do need to get my >>flights. Uh, >>really looking forward >>to it. I can't wait to see the all the announcements and blog posts. We're gonna, we're gonna hear from jerry Chen later. I love the after on our next event. Get your reaction to this castle and castles in the cloud where competitive advantages can be built in the cloud. We're seeing examples of that. But first I gotta ask you give us an update of what's going on. The ap and ecosystem has been an incredible uh, celebration these past couple weeks, >>so, so a lot of different things happening and the interesting thing to me is that as part of my job, I often think that I'm effectively living in the future because I get to see all this really cool stuff that we're building just a little bit before our customers get to, and so I'm always thinking okay, here I am now, and what's the world going to be like in a couple of weeks to a month or two when these launches? I'm working on actually get out the door and that, that's always really, really fun, just kind of getting that, that little edge into where we're going, but this year was a little interesting because we had to really significant birthdays, we had the 15 year anniversary of both EC two and S three and we're so focused on innovating and moving forward, that it's actually pretty rare for us at Aws to look back and say, wow, we've actually done all these amazing things in in the last 15 years, >>you know, it's kind of cool Jeff, if I may is is, you know, of course in the early days everybody said, well, a place for startup is a W. 
S and now the great thing about the startup showcases, we're seeing the startups that >>are >>very near, or some of them have even reached escape velocity, so they're not, they're not tiny little companies anymore, they're in their transforming their respective industries, >>they really are and I think that as they start ups grow, they really start to lean into the power of the cloud. They as they start to think, okay, we've we've got our basic infrastructure in place, we've got, we were serving data, we're serving up a few customers, everything is actually working pretty well for us. We've got our fundamental model proven out now, we can invest in publicity and marketing and scaling and but they don't have to think about what's happening behind the scenes. They just if they've got their auto scaling or if they're survivalists, the infrastructure simply grows to meet their demand and it's it's just a lot less things that they have to worry about. They can focus on the fun part of their business which is actually listening to customers and building up an awesome business >>Jeff as you guys are putting together all the big pre reinvented, knows a lot of stuff that goes on prior as well and they say all the big good stuff to reinvent. But you start to see some themes emerged this year. One of them is modernization of applications, the speed of application development in the cloud with the cloud scale devops personas, whatever persona you want to talk about but basically speed the speed of of the app developers where other departments have been slowing things down, I won't say name names, but security group and I t I mean I shouldn't have said that but only kidding but no but seriously people want in minutes and seconds now not days or weeks. You know whether it's policy. What are some of the trends that you're seeing around this this year as we get into some of the new stuff coming out >>So Dave customers really do want speed and for we've actually encapsulate this for a long time in amazon in what we call the bias for action leadership principle >>where >>we just need to jump in and move forward and and make things happen. A lot of customers look at that and they say yes this is great. We need to have the same bias fraction. Some do. Some are still trying to figure out exactly how to put it into play. And they absolutely for sure need to pay attention to security. They need to respect the past and make sure that whatever they're doing is in line with I. T. But they do want to move forward. And the interesting thing that I see time and time again is it's not simply about let's adopt a new technology. It's how do we >>how do we keep our workforce >>engaged? How do we make sure that they've got the right training? How do we bring our our I. T. Team along for this. Hopefully new and fun and exciting journey where they get to learn some interesting new technologies they've got all this very much accumulated business knowledge they still want to put to use, maybe they're a little bit apprehensive about something brand new and they hear about the cloud, but there by and large, they really want to move forward. They just need a little bit of >>help to make it happen >>real good guys. One of the things you're gonna hear today, we're talking about speed traditionally going fast. 
Oftentimes you meant you have to sacrifice some things on quality and what you're going to hear from some of the startups today is how they're addressing that to automation and modern devoPS technologies and sort of rethinking that whole application development approach. That's something I'm really excited to see organization is beginning to adopt so they don't have to make that tradeoff anymore. >>Yeah, I would >>never want to see someone >>sacrifice quality, >>but I do think that iterating very quickly and using the best of devoPS principles to be able to iterate incredibly quickly and get that first launch out there and then listen with both ears just >>as much >>as you can, Everything. You hear iterate really quickly to meet those needs in, in hours and days, not months, quarters or years. >>Great stuff. Chef and a lot of the companies were featuring here in the startup showcase represent that new kind of thinking, um, systems thinking as well as you know, the cloud scale and again and it's finally here, the revolution of deVOps is going to the next generation and uh, we're excited to have Emily Freeman who's going to come on and give a little preview for her new talk on this revolution. So Jeff, thank you for coming on, appreciate you sharing the update here on the cube. Happy >>to be. I'm actually really looking forward to hearing from Emily. >>Yeah, it's great. Great. Looking forward to the talk. Brand new Premier, Okay, uh, lisa martin, Emily Freeman is here. She's ready to come in and we're going to preview her lightning talk Emily. Um, thanks for coming on, we really appreciate you coming on really, this is about to talk around deVOPS next gen and I think lisa this is one of those things we've been, we've been discussing with all the companies. It's a new kind of thinking it's a revolution, it's a systems mindset, you're starting to see the connections there she is. Emily, Thanks for coming. I appreciate it. >>Thank you for having me. So your teaser video >>was amazing. Um, you know, that little secret radical idea, something completely different. Um, you gotta talk coming up, what's the premise behind this revolution, you know, these tying together architecture, development, automation deployment, operating altogether. >>Yes, well, we have traditionally always used the sclc, which is the software delivery life cycle. Um, and it is a straight linear process that has actually been around since the sixties, which is wild to me um, and really originated in manufacturing. Um, and as much as I love the Toyota production system and how much it has shown up in devops as a sort of inspiration on how to run things better. We are not making cars, we are making software and I think we have to use different approaches and create a sort of model that better reflects our modern software development process. >>It's a bold idea and looking forward to the talk and as motivation. I went into my basement and dusted off all my books from college in the 80s and the sea estimates it was waterfall. It was software development life cycle. They trained us to think this way and it came from the mainframe people. It was like, it's old school, like really, really old and it really hasn't been updated. Where's the motivation? I actually cloud is kind of converging everything together. We see that, but you kind of hit on this persona thing. Where did that come from this persona? Because you know, people want to put people in buckets release engineer. I mean, where's that motivation coming from? 
>>Yes, you're absolutely right that it came from the mainframes. I think, you know, waterfall is necessary when you're using a punch card or mag tape to load things onto a mainframe, but we don't exist in that world anymore. Thank goodness. And um, yes, so we, we use personas all the time in tech, you know, even to register, well not actually to register for this event, but a lot events. A lot of events, you have to click that drop down. Right. Are you a developer? Are you a manager, whatever? And the thing is personas are immutable in my opinion. I was a developer. I will always identify as a developer despite playing a lot of different roles and doing a lot of different jobs. Uh, and this can vary throughout the day. Right. You might have someone who has a title of software architect who ends up helping someone pair program or develop or test or deploy. Um, and so we wear a lot of hats day to day and I think our discussions around roles would be a better, um, certainly a better approach than personas >>lease. And I've been discussing with many of these companies around the roles and we're hearing from them directly and they're finding out that people have, they're mixing and matching on teams. So you're, you're an S R E on one team and you're doing something on another team where the workflows and the workloads defined the team formation. So this is a cultural discussion. >>It absolutely is. Yes. I think it is a cultural discussion and it really comes to the heart of devops, right? It's people process. And then tools deVOps has always been about culture and making sure that developers have all the tools they need to be productive and honestly happy. What good is all of this? If developing software isn't a joyful experience. Well, >>I got to ask you, I got you here obviously with server list and functions just starting to see this kind of this next gen. And we're gonna hear from jerry Chen, who's a Greylock VC who's going to talk about castles in the clouds, where he's discussing the moats that could be created with a competitive advantage in cloud scale. And I think he points to the snowflakes of the world. You're starting to see this new thing happening. This is devops 2.0, this is the revolution. Is this kind of where you see the same vision of your talk? >>Yes, so DeVOps created 2000 and 8, 2000 and nine, totally different ecosystem in the world we were living in, you know, we didn't have things like surveillance and containers, we didn't have this sort of default distributed nature, certainly not the cloud. Uh and so I'm very excited for jerry's talk. I'm curious to hear more about these moz. I think it's fascinating. Um but yeah, you're seeing different companies use different tools and processes to accelerate their delivery and that is the competitive advantage. How can we figure out how to utilize these tools in the most efficient way possible. >>Thank you for coming and giving us a preview. Let's now go to your lightning keynote talk. Fresh content. Premier of this revolution in Devops and the Freemans Talk, we'll go there now. >>Hi, I'm Emily Freeman, I'm the author of devops for dummies and the curator of 97 things every cloud engineer should know. I am thrilled to be here with you all today. I am really excited to share with you a kind of a wild idea, a complete re imagining of the S DLC and I want to be clear, I need your feedback. I want to know what you think of this. You can always find me on twitter at editing. 
Emily, most of my work centers around deVOps and I really can't overstate what an impact the concept of deVOPS has had on this industry in many ways it built on the foundation of Agile to become a default a standard we all reach for in our everyday work. When devops surfaced as an idea in 2008, the tech industry was in a vastly different space. AWS was an infancy offering only a handful of services. Azure and G C P didn't exist yet. The majority's majority of companies maintained their own infrastructure. Developers wrote code and relied on sys admins to deploy new code at scheduled intervals. Sometimes months apart, container technology hadn't been invented applications adhered to a monolithic architecture, databases were almost exclusively relational and serverless wasn't even a concept. Everything from the application to the engineers was centralized. Our current ecosystem couldn't be more different. Software is still hard, don't get me wrong, but we continue to find novel solutions to consistently difficult, persistent problems. Now, some of these end up being a sort of rebranding of old ideas, but others are a unique and clever take to abstracting complexity or automating toil or perhaps most important, rethinking challenging the very premises we have accepted as Cannon for years, if not decades. In the years since deVOps attempted to answer the critical conflict between developers and operations, engineers, deVOps has become a catch all term and there have been a number of derivative works. Devops has come to mean 5000 different things to 5000 different people. For some, it can be distilled to continuous integration and continuous delivery or C I C D. For others, it's simply deploying code more frequently, perhaps adding a smattering of tests for others. Still, its organizational, they've added a platform team, perhaps even a questionably named DEVOPS team or have created an engineering structure that focuses on a separation of concerns. Leaving feature teams to manage the development, deployment, security and maintenance of their siloed services, say, whatever the interpretation, what's important is that there isn't a universally accepted standard. Well, what deVOPS is or what it looks like an execution, it's a philosophy more than anything else. A framework people can utilize to configure and customize their specific circumstances to modern development practices. The characteristic of deVOPS that I think we can all agree on though, is that an attempted to capture the challenges of the entire software development process. It's that broad umbrella, that holistic view that I think we need to breathe life into again, The challenge we face is that DeVOps isn't increasingly outmoded solution to a previous problem developers now face. Cultural and technical challenge is far greater than how to more quickly deploy a monolithic application. Cloud native is the future the next collection of default development decisions and one the deVOPS story can't absorb in its current form. I believe the era of deVOPS is waning and in this moment as the sun sets on deVOPS, we have a unique opportunity to rethink rebuild free platform. Even now, I don't have a crystal ball. That would be very handy. I'm not completely certain with the next decade of tech looks like and I can't write this story alone. I need you but I have some ideas that can get the conversation started, I believe to build on what was we have to throw away assumptions that we've taken for granted all this time in order to move forward. 
We must first step back. Mhm. The software or systems development life cycle, what we call the S. D. L. C. has been in use since the 1960s and it's remained more or less the same since before color television and the touch tone phone. Over the last 60 or so odd years we've made tweaks, slight adjustments, massaged it. The stages or steps are always a little different with agile and deVOps we sort of looped it into a circle and then an infinity loop we've added pretty colors. But the sclc is more or less the same and it has become an assumption. We don't even think about it anymore, universally adopted constructs like the sclc have an unspoken permanence. They feel as if they have always been and always will be. I think the impact of that is even more potent. If you were born after a construct was popularized. Nearly everything around us is a construct, a model, an artifact of a human idea. The chair you're sitting in the desk, you work at the mug from which you drink coffee or sometimes wine, buildings, toilets, plumbing, roads, cars, art, computers, everything. The sclc is a remnant an artifact of a previous era and I think we should throw it away or perhaps more accurately replace it, replace it with something that better reflects the actual nature of our work. A linear, single threaded model designed for the manufacturer of material goods cannot possibly capture the distributed complexity of modern socio technical systems. It just can't. Mhm. And these two ideas aren't mutually exclusive that the sclc was industry changing, valuable and extraordinarily impactful and that it's time for something new. I believe we are strong enough to hold these two ideas at the same time, showing respect for the past while envisioning the future. Now, I don't know about you, I've never had a software project goes smoothly in one go. No matter how small. Even if I'm the only person working on it and committing directly to master software development is chaos. It's a study and entropy and it is not getting any more simple. The model with which we think and talk about software development must capture the multithreaded, non sequential nature of our work. It should embody the roles engineers take on and the considerations they make along the way. It should build on the foundations of agile and devops and represent the iterative nature of continuous innovation. Now, when I was thinking about this, I was inspired by ideas like extreme programming and the spiral model. I I wanted something that would have layers, threads, even a way of visually representing multiple processes happening in parallel. And what I settled on is the revolution model. I believe the visualization of revolution is capable of capturing the pivotal moments of any software scenario. And I'm going to dive into all the discrete elements. But I want to give you a moment to have a first impression, to absorb my idea. I call it revolution because well for one it revolves, it's circular shape reflects the continuous and iterative nature of our work, but also because it is revolutionary. I am challenging a 60 year old model that is embedded into our daily language. I don't expect Gartner to build a magic quadrant around this tomorrow, but that would be super cool. And you should call me my mission with. This is to challenge the status quo to create a model that I think more accurately reflects the complexity of modern cloud native software development. 
The revolution model is constructed of five concentric circles describing the critical roles of software development: architecting, developing, automating, deploying, and operating. Intersecting each loop are six spokes that describe the production considerations every engineer has to consider throughout any engineering work, and that's testability, securability, reliability, observability, flexibility, and scalability. The considerations listed are not all-encompassing. There are of course things not explicitly included. I figured if I put 20 spokes, some of us, including myself, might feel a little overwhelmed. So let's dive into each element in this model. We have long used personas as the default way to divide audiences and tailor messages to groups of people. Every company in the world right now is repeating the mantra of developers, developers, developers, but personas have always bugged me a bit because this approach typically either oversimplifies someone's career or is needlessly complicated. Few people fit cleanly and completely into persona-based buckets like developers and operations anymore. The lines have gotten fuzzy. On the other hand, I don't think we need to tailor messages so specifically as to call out the difference between a DevOps engineer and a release engineer, or a security administrator versus a security engineer. But perhaps most critically, I believe personas are immutable. A persona is wholly dependent on how someone identifies themselves. It's intrinsic, not extrinsic. Their titles may change, their jobs may differ, but they're probably still selecting the same persona on that ubiquitous drop-down we all have to choose from when registering for an event. Probably this one too. I was a developer and I will always identify as a developer, despite doing a ton of work in areas like DevOps and AIOps and DevRel. In my heart, I'm a developer. I think about problems from that perspective first. It influences my thinking and my approach. Roles are very different. Roles are temporary, inconsistent, constantly fluctuating. If I were an actress, the parts I would play would be lengthy and varied, but the persona I would identify as would remain an actress and artist. Your work isn't confined to a single set of skills. It may have been a decade ago, but it is not today. In any given week or sprint, you may play the role of an architect, thinking about how to design a feature or service; a developer, building out code or fixing a bug; an automation engineer, looking at how to improve the manual processes we often refer to as toil; a release engineer, deploying code to different environments or releasing it to customers; or an operations engineer, ensuring an application functions in consistent, expected ways. And no matter what role we play, we have to consider a number of issues. The first is testability. All software systems require testing to assure architects that designs work, developers that the code works, operators that infrastructure is running as expected, and engineers of all disciplines that code changes won't bring down the whole system. Testing in its many forms is what enables systems to be durable and have longevity. It's what reassures engineers that changes won't impact current functionality. A system without tests is a disaster waiting to happen, which is why testability is first among equals at this particular roundtable. Security is everyone's responsibility.
But if you understand how to design and execute secure systems, I struggle with this security incidents for the most part are high impact, low probability events. The really big disasters, the one that the ones that end up on the news and get us all free credit reporting for a year. They don't happen super frequently and then goodness because you know that there are endless small vulnerabilities lurking in our systems. Security is something we all know we should dedicate time to but often don't make time for. And let's be honest, it's hard and complicated and a little scary def sec apps. The first derivative of deVOPS asked engineers to move security left this approach. Mint security was a consideration early in the process, not something that would block release at the last moment. This is also the consideration under which I'm putting compliance and governance well not perfectly aligned. I figure all the things you have to call lawyers for should just live together. I'm kidding. But in all seriousness, these three concepts are really about risk management, identity, data, authorization. It doesn't really matter what specific issue you're speaking about, the question is who has access to what win and how and that is everyone's responsibility at every stage site reliability engineering or sorry, is a discipline job and approach for good reason. It is absolutely critical that applications and services work as expected. Most of the time. That said, availability is often mistakenly treated as a synonym for reliability. Instead, it's a single aspect of the concept if a system is available but customer data is inaccurate or out of sync. The system is not reliable, reliability has five key components, availability, latency, throughput. Fidelity and durability, reliability is the end result. But resiliency for me is the journey the action engineers can take to improve reliability, observe ability is the ability to have insight into an application or system. It's the combination of telemetry and monitoring and alerting available to engineers and leadership. There's an aspect of observe ability that overlaps with reliability, but the purpose of observe ability isn't just to maintain a reliable system though, that is of course important. It is the capacity for engineers working on a system to have visibility into the inner workings of that system. The concept of observe ability actually originates and linear dynamic systems. It's defined as how well internal states of a system can be understood based on information about its external outputs. If it is critical when companies move systems to the cloud or utilize managed services that they don't lose visibility and confidence in their systems. The shared responsibility model of cloud storage compute and managed services require that engineering teams be able to quickly be alerted to identify and remediate issues as they arise. Flexible systems are capable of adapting to meet the ever changing needs of the customer and the market segment, flexible code bases absorb new code smoothly. Embody a clean separation of concerns. Are partitioned into small components or classes and architected to enable the now as well as the next inflexible systems. Change dependencies are reduced or eliminated. Database schemas accommodate change well components, communicate via a standardized and well documented A. P. I. 
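As an editorial aside before the talk turns to the final considerations: the overall shape of the model, five role loops intersected by six production considerations, is easy to capture in a small sketch. This is only an illustration of the structure described in the talk, not an official artifact of it; the checklist question is made up for the example.

```python
from itertools import product

# The five concentric role loops of the revolution model.
ROLES = ["architecting", "developing", "automating", "deploying", "operating"]

# The six spokes: production considerations that intersect every loop.
CONSIDERATIONS = [
    "testability", "securability", "reliability",
    "observability", "flexibility", "scalability",
]

def revolution_checklist():
    """Yield every role/consideration intersection as a review question.

    The point of the model is that each consideration applies to each role,
    so the checklist is simply the cross product of the two lists.
    """
    for role, concern in product(ROLES, CONSIDERATIONS):
        yield f"While {role}, how are we addressing {concern}?"

if __name__ == "__main__":
    checklist = list(revolution_checklist())
    print(len(checklist), "intersections")  # 5 roles x 6 spokes = 30
    print(checklist[0])  # "While architecting, how are we addressing testability?"
```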
The only thing constant in our industry is change and every role we play, creating flexibility and solutions that can be flexible that will grow as the applications grow is absolutely critical. Finally, scalability scalability refers to more than a system's ability to scale for additional load. It implies growth scalability and the revolution model carries the continuous innovation of a team and the byproducts of that growth within a system. For me, scalability is the most human of the considerations. It requires each of us in our various roles to consider everyone around us, our customers who use the system or rely on its services, our colleagues current and future with whom we collaborate and even our future selves. Mhm. Software development isn't a straight line, nor is it a perfect loop. It is an ever changing complex dance. There are twirls and pivots and difficult spins forward and backward. Engineers move in parallel, creating truly magnificent pieces of art. We need a modern model for this modern era and I believe this is just the revolution to get us started. Thank you so much for having me. >>Hey, we're back here. Live in the keynote studio. I'm john for your host here with lisa martin. David lot is getting ready for the fireside chat ending keynote with the practitioner. Hello! Fresh without data mesh lisa Emily is amazing. The funky artwork there. She's amazing with the talk. I was mesmerized. It was impressive. >>The revolution of devops and the creative element was a really nice surprise there. But I love what she's doing. She's challenging the status quo. If we've learned nothing in the last year and a half, We need to challenge the status quo. A model from the 1960s that is no longer linear. What she's doing is revolutionary. >>And we hear this all the time. All the cube interviews we do is that you're seeing the leaders, the SVP's of engineering or these departments where there's new new people coming in that are engineering or developers, they're playing multiple roles. It's almost a multidisciplinary aspect where you know, it's like going into in and out burger in the fryer later and then you're doing the grill, you're doing the cashier, people are changing roles or an architect, their test release all in one no longer departmental, slow siloed groups. >>She brought up a great point about persona is that we no longer fit into these buckets. That the changing roles. It's really the driver of how we should be looking at this. >>I think I'm really impressed, really bold idea, no brainer as far as I'm concerned, I think one of the things and then the comments were off the charts in a lot of young people come from discord servers. We had a good traction over there but they're all like learning. Then you have the experience, people saying this is definitely has happened and happening. The dominoes are falling and they're falling in the direction of modernization. That's the key trend speed. >>Absolutely with speed. But the way that Emily is presenting it is not in a brash bold, but it's in a way that makes great sense. The way that she creatively visually lined out what she was talking about Is amenable to the folks that have been doing this for since the 60s and the new folks now to really look at this from a different >>lens and I think she's a great setup on that lightning top of the 15 companies we got because you think about sis dig harness. I white sourced flamingo hacker one send out, I oh, okay. 
Thought spot rock set Sarah Ops ramp and Ops Monte cloud apps, sani all are doing modern stuff and we talked to them and they're all on this new wave, this monster wave coming. What's your observation when you talk to these companies? >>They are, it was great. I got to talk with eight of the 15 and the amount of acceleration of innovation that they've done in the last 18 months is phenomenal obviously with the power and the fuel and the brand reputation of aws but really what they're all facilitating cultural shift when we think of devoPS and the security folks. Um, there's a lot of work going on with ai to an automation to really kind of enabled to develop the develops folks to be in control of the process and not have to be security experts but ensuring that the security is baked in shifting >>left. We saw that the chat room was really active on the security side and one of the things I noticed was not just shift left but the other groups, the security groups and the theme of cultural, I won't say war but collision cultural shift that's happening between the groups is interesting because you have this new devops persona has been around Emily put it out for a while. But now it's going to the next level. There's new revolutions about a mindset, a systems mindset. It's a thinking and you start to see the new young companies coming out being funded by the gray locks of the world who are now like not going to be given the we lost the top three clouds one, everything. there's new business models and new technical architecture in the cloud and that's gonna be jerry Chen talk coming up next is going to be castles in the clouds because jerry chant always talked about moats, competitive advantage and how moats are key to success to guard the castle. And then we always joke, there's no more moz because the cloud has killed all the boats. But now the motor in the cloud, the castles are in the cloud, not on the ground. So very interesting thought provoking. But he's got data and if you look at the successful companies like the snowflakes of the world, you're starting to see these new formations of this new layer of innovation where companies are growing rapidly, 98 unicorns now in the cloud. Unbelievable, >>wow, that's a lot. One of the things you mentioned, there's competitive advantage and these startups are all fueled by that they know that there are other companies in the rear view mirror right behind them. If they're not able to work as quickly and as flexibly as a competitor, they have to have that speed that time to market that time to value. It was absolutely critical. And that's one of the things I think thematically that I saw along the eighth sort of that I talked to is that time to value is absolutely table stakes. >>Well, I'm looking forward to talking to jerry chan because we've talked on the queue before about this whole idea of What happens when winner takes most would mean the top 3, 4 cloud players. What happens? And we were talking about that and saying, if you have a model where an ecosystem can develop, what does that look like and back in 2013, 2014, 2015, no one really had an answer. Jerry was the only BC. He really nailed it with this castles in the cloud. He nailed the idea that this is going to happen. And so I think, you know, we'll look back at the tape or the videos from the cube, we'll find those cuts. 
But we were talking about this then we were pontificating and riffing on the fact that there's going to be new winners and they're gonna look different as Andy Jassy always says in the cube you have to be misunderstood if you're really going to make something happen. Most of the most successful companies are misunderstood. Not anymore. The cloud scales there. And that's what's exciting about all this. >>It is exciting that the scale is there, the appetite is there the appetite to challenge the status quo, which is right now in this economic and dynamic market that we're living in is there's nothing better. >>One of the things that's come up and and that's just real quick before we bring jerry in is automation has been insecurity, absolutely security's been in every conversation, but automation is now so hot in the sense of it's real and it's becoming part of all the design decisions. How can we automate can we automate faster where the keys to automation? Is that having the right data, What data is available? So I think the idea of automation and Ai are driving all the change and that's to me is what these new companies represent this modern error where AI is built into the outcome and the apps and all that infrastructure. So it's super exciting. Um, let's check in, we got jerry Chen line at least a great. We're gonna come back after jerry and then kick off the day. Let's bring in jerry Chen from Greylock is he here? Let's bring him in there. He is. >>Hey john good to see you. >>Hey, congratulations on an amazing talk and thesis on the castles on the cloud. Thanks for coming on. >>All right, Well thanks for reading it. Um, always were being put a piece of workout out either. Not sure what the responses, but it seemed to resonate with a bunch of developers, founders, investors and folks like yourself. So smart people seem to gravitate to us. So thank you very much. >>Well, one of the benefits of doing the Cube for 11 years, Jerry's we have videotape of many, many people talking about what the future will hold. You kind of are on this early, it wasn't called castles in the cloud, but you were all I was, we had many conversations were kind of connecting the dots in real time. But you've been on this for a while. It's great to see the work. I really think you nailed this. I think you're absolutely on point here. So let's get into it. What is castles in the cloud? New research to come out from Greylock that you spearheaded? It's collaborative effort, but you've got data behind it. Give a quick overview of what is castle the cloud, the new modes of competitive advantage for companies. >>Yeah, it's as a group project that our team put together but basically john the question is, how do you win in the cloud? Remember the conversation we had eight years ago when amazon re event was holy cow, Like can you compete with them? Like is it a winner? Take all? Winner take most And if it is winner take most, where are the white spaces for Some starts to to emerge and clearly the past eight years in the cloud this journey, we've seen big companies, data breaks, snowflakes, elastic Mongo data robot. And so um they spotted the question is, you know, why are the castles in the cloud? The big three cloud providers, Amazon google and Azure winning. You know, what advantage do they have? And then given their modes of scale network effects, how can you as a startup win? 
And so look, there are 500-plus services between all three cloud vendors, but there are like 500-plus startups competing against the cloud vendors, and there's like almost 100 unicorns, private companies, competing successfully against the cloud vendors, including public companies. So like Elastic, Mongo, Snowflake. And Databricks, not public yet. HashiCorp, not public yet. These are some examples of the names that I think are winning, and watch this space because you'll see more of these guys storm the castle, if you will. >> Yeah. And you know, one of the things, that's a funny metaphor because it has many different implications. One, as we talk about security, the perimeter, the gates, the moats being on land. But now you're in the cloud, you also have a different security paradigm. You have different, new kinds of services that are coming on board faster than ever before, not just from the cloud players but from companies contributing into the ecosystem. So the combination of the big three making the market, the main markets, you, I think, call 31 markets that we know of, and probably maybe more. And then you have this notion of a submarket, which means that there's, like, what we used to call white space back in the day. Remember, where's the white space? I mean, if you're in the cloud, there's like a zillion white spaces. So talk about this submarket dynamic between the markets that are being enabled by the cloud players and how these submarkets play into it. >> Sure. So first, the first problem was, what we did, we downloaded all the services for the big three clouds, right? And you know, what AWS calls a database or database service, like DocumentDB in Amazon, is like Cosmos DB in Azure. So first things first, we had to look at all three cloud providers and recategorize all the services, almost 500, apples to apples to apples. Number one. Number two is, you look at all these markets or submarkets and say, okay, how can we cluster these services into things that you know you and I can grok, right? The way Amazon, Azure, and Google think about it is very different, and the beauty of the cloud is this kind of fat long tail of services for developers. So instead of, like, Oracle as a single database for all your needs, there are like 20 or 30 different databases, from time series to analytics databases (we're talking to Rockset later today, right), document databases like Mongo, search databases like Elastic. And so what happens is there's not one giant market like databases; there's a database market and 30, 40 submarkets that serve the needs of developers. So the great news is cloud has reduced the cost and created something new for developers. Also the good news is, for a startup, you can find plenty of white space solving a pain point very specific to a different type of problem.
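Jerry's methodology above (pull the service catalogs from the big three, normalize them so equivalent offerings line up apples to apples, then cluster them into markets and submarkets) can be sketched roughly like this. The catalog entries and category names here are made-up stand-ins for illustration, not the actual Greylock dataset.

```python
from collections import defaultdict

# Hypothetical sample of catalog entries: (provider, service, market, submarket).
# In the real exercise this would be hundreds of services per provider.
CATALOG = [
    ("aws",   "DocumentDB",  "database",      "document"),
    ("azure", "Cosmos DB",   "database",      "document"),
    ("gcp",   "Firestore",   "database",      "document"),
    ("aws",   "Timestream",  "database",      "time-series"),
    ("aws",   "OpenSearch",  "database",      "search"),
    ("azure", "Monitor",     "observability", "metrics"),
]

def cluster_by_submarket(catalog):
    """Group normalized services into market -> submarket -> providers."""
    markets = defaultdict(lambda: defaultdict(set))
    for provider, service, market, submarket in catalog:
        markets[market][submarket].add(provider)
    return markets

def find_white_space(markets, providers=("aws", "azure", "gcp")):
    """Submarkets where at least one big cloud has no first-party offering:
    a rough proxy for where a startup might find room."""
    gaps = {}
    for market, subs in markets.items():
        for submarket, present in subs.items():
            missing = [p for p in providers if p not in present]
            if missing:
                gaps[(market, submarket)] = missing
    return gaps

markets = cluster_by_submarket(CATALOG)
print(find_white_space(markets))  # e.g. {('database', 'time-series'): ['azure', 'gcp'], ...}
```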
But now as you're pointing out this expansion of the fat tail of services, but also there's big tam's and markets available at the top of the power law where you see coming like snowflake essentially take on the data warehousing market by basically sitting on amazon re factoring with new services and then getting a flywheel completely changing the economic unit economics completely changing the consumption model completely changing the value proposition >>literally you >>get Snowflake has created like a storm, create a hole, that mode or that castle wall against red shift. Then companies like rock set do your real time analytics is Russian right behind snowflakes saying, hey snowflake is great for data warehouse but it's not fast enough for real time analytics. Let me give you something new to your, to your parallel argument. Even the big optic snowflake have created kind of a wake behind them that created even more white space for Gaza rock set. So that's exciting for guys like me and >>you. And then also as we were talking about our last episode two or quarter two of our showcase. Um, from a VC came on, it's like the old shelf where you didn't know if a company's successful until they had to return the inventory now with cloud you if you're not successful, you know it right away. It's like there's no debate. Like, I mean you're either winning or not. This is like that's so instrumented so a company can have a good better mousetrap and win and fill the white space and then move up. >>It goes both ways. The cloud vendor, the big three amazon google and Azure for sure. They instrument their own class. They know john which ecosystem partners doing well in which ecosystems doing poorly and they hear from the customers exactly what they want. So it goes both ways they can weaponize that. And just as well as you started to weaponize that info >>and that's the big argument of do that snowflake still pays the amazon bills. They're still there. So again, repatriation comes back, That's a big conversation that's come up. What's your quick take on that? Because if you're gonna have a castle in the cloud, then you're gonna bring it back to land. I mean, what's that dynamic? Where do you see that compete? Because on one hand is innovation. The other ones maybe cost efficiency. Is that a growth indicator slow down? What's your view on the movement from and to the cloud? >>I think there's probably three forces you're finding here. One is the cost advantage in the scale advantage of cloud so that I think has been going for the past eight years, there's a repatriation movement for a certain subset of customers, I think for cost purposes makes sense. I think that's a tiny handful that believe they can actually run things better than a cloud. The third thing we're seeing around repatriation is not necessary against cloud, but you're gonna see more decentralized clouds and things pushed to the edge. Right? So you look at companies like Cloudflare Fastly or a company that we're investing in Cato networks. All ideas focus on secure access at the edge. And so I think that's not the repatriation of my own data center, which is kind of a disaggregated of cloud from one giant monolithic cloud, like AWS east or like a google region in europe to multiple smaller clouds for governance purposes, security purposes or legacy purposes. >>So I'm looking at my notes here, looking down on the screen here for this to read this because it's uh to cut and paste from your thesis on the cloud. The excellent cloud. 
Of the $38 billion invested this quarter: AI and ML number one, analytics number two, security number three. Actually, security number one, but you can see the bubbles here. So all of those are data problems. I need to ask you: I see data is hot, data as intellectual property. How do you look at that? Because we've been reporting on this, and we just started the CUBE conversation around workflows as intellectual property. If you have scale and your moat is in the cloud, you could argue that data, and the workflows around those data streams, are intellectual property. It's a protocol. >> I believe both are, and they just kind of go hand in hand, like peanut butter and jelly, right? So data, for sure, is IP. You know, people talk about data as the oil, the new resource. That's largely true, because it powers a bunch of things. But the workflow, to your point, John, is sticky, because every company is a unique snowflake, right? Like, the processes used to run theCUBE and your business are different from how we run our business. So if you can build a workflow that leverages the data, that's super sticky. So in terms of switching costs, if my workflow is very bespoke to your business, then I think that's competitive advantage. >> Well, certainly your workflow is a lot different than theCUBE's. You guys have a lot of billions of dollars in capital. We're talking to all the people out here, Jerry. Great to have you on. Final thought on your thesis: where does it go from here? What's been the reaction? You put it out there, and it's great. Love the research. I think you're on point on this one. Where do we go from here? >> We have two follow-up pieces in the near term: one around, you know, a deep dive on open source, so look out for that pretty soon, and how that's been a powerful strategy. A second is this kind of disaggregation of the cloud, be it blockchain and, you know, decentralized apps, be it edge applications. So that's in the near term, two more pieces of deep dive we're doing. And then the goal here is to update this on a quarterly and annual basis. So we're getting submissions from founders that want to say, hey, you missed us, or hey, you screwed up here. We've got the big cloud vendors saying, hey, Jerry, we just launched these new things. So our goal here is to update this every single year, and then probably do a look back saying, okay, where were we wrong and where were we right? And then, let's say, Castles in the Cloud 2022: we'll see the difference. Were there more unicorns? Were there more services? Were the IPOs happening? So look for some short-term work from us on analytics, like around open source and clouds, and then next year we hope to fast forward all of this and say, hey, year over year, what's happening? What's changing?
He's done great deals, but I think he's hitting the next wave big. This is, this is huge. >> I was listening to you guys talking and thinking, did he have a crystal ball back in 2013? Some of the things Jerry is saying now, his narrative now... did he have a crystal ball? >> He did. I mean, he could be a CUBE host and I could be a venture capitalist. We were both right, I think, so we could have been, you know, doing that together. Now, in all seriousness, he was right. I mean, we talked off camera about who's the next Amazon, who's going to challenge Amazon, and Andy Jassy was quoted many times in theCUBE saying, you know, he was surprised that it took so long for people to figure out what they were doing. Okay, Jerry was at VMware, where he had visibility into the cloud. He saw Amazon right away, like we did, like, this is a winning formula, and so he was really out front on this one. >> Well, the investments that they're making in these unicorns are exciting. They have this lens that lets them see the opportunities almost before anybody else can, and find more white space where we didn't even know there was any. >> Yeah. And what's interesting about the report, which I'm gonna dig into, and I want to get to him while he's on camera because it's a great report, is that he says it's like 500 services; I think Amazon has 5,000. So how you define services is an interesting thing, and a lot of Amazon services that they have, Azure doesn't have, and vice versa; they do call that out. So I find the report interesting. It's gonna be a feature game in the future between the clouds, the big three. They're gonna say, we do this. You're starting to see the formation: Google's much more developer oriented; Amazon is much stronger in the governance area with data, and obviously, as he pointed out, they have such experience; Microsoft, not so much a developer cloud, more Office, and not so much on the governance side. So that's an indicator of my opinion of kind of where they rank. So number one is still Amazon Web Services; Azure, a long second place, way behind; and Google right behind Azure. So we'll see how the horses come in. >> Right. And it also kind of speaks to the hybrid world in which we're living, the hybrid multicloud world in which many companies are living, because companies, to not just survive the last year and a half but to thrive, really have to become data companies and leverage that data as a competitive advantage, to be able to unlock the value of it. And a lot of these startups that we talked to in the showcase are talking about how they're helping organizations unlock that data value. As Jerry said, it is the new oil, it's the new gold. But not unless you can unlock that value faster than your competition. >> Yeah, well, I'm just super excited. We've got a great day ahead of us with all the hot startups. And then at the end of the day, Dave Vellante is gonna interview HelloFresh practitioners; we're gonna close out every episode now with a closing practitioner. We tried to get JPMorgan Chase. Data mesh is the hottest area right now in the enterprise; data is the new competitive advantage. We know that data workflows are now intellectual property. You're starting to see data really factoring into these applications now as a key aspect of the competitive advantage and the value creation. So companies that are smart are investing heavily in that, and the ones that are kind of slow on the uptake are lagging the market and just trying to figure it out.
So you start to see that transition, and you're starting to see people fall away now, the ones that are not gonna make it, right? You can look at any app and ask how much AI is really in there, real AI, and what's their data strategy, and you can almost squint through that and go, okay, that's gonna be a losing application. >> Well, the winners are making it a board-level conversation. >> And security isn't built in. Great to have you on this morning kicking it off. Thanks, John. Okay, we're going to go into the next set of the program. At 10:00 we're going to move into the breakouts. Check out the companies; there are three tracks in there. We have an awesome track on DevOps, pure DevOps. We've got data and analytics, and we've got cloud management. Just to run it down real quick: check out Sysdig and Harness. Sysdig is doing great securing DevOps; Harness.io is a modern software delivery platform. WhiteSource, they're preventing and remediating open source risk for companies; that's really interesting. And Lumigo, effortless monitoring of serverless functions; serverless is super hot. And of course HackerOne is always great, doing a lot of great missions and bounties, and you see that success continue. Ascend.io, there in Palo Alto, changing the game on data engineering and data pipelining, another new platform, horizontally scalable. And of course ThoughtSpot, an AI-driven kind of search paradigm, and of course Rockset, Jerry Chen's company, is here; they're all doing great in analytics. And then on the cloud management side: cost, AI operations, day-two operations. OpsRamp and the multicloud players are all there, all going to present. So check them out. This is theCUBE's AWS Startup Showcase, episode three.
SUMMARY :
John Furrier and Lisa Martin kick off theCUBE's AWS Startup Showcase, episode three, covering the hottest companies in DevOps, data analytics, and cloud management, with Dave Vellante closing out the day in a practitioner interview. The hybrid event format features a keynote from Emily Freeman on the revolution in DevOps and the rise of a new DevOps persona, lightning talks from the 15 featured companies, and breakout tracks. Greylock partner Jerry Chen returns to discuss his Castles in the Cloud thesis: the big three cloud providers have created roughly 31 markets and dozens of sub-markets of white space, hundreds of startups and nearly 100 unicorns are competing successfully on top of them, data and workflows are becoming intellectual property, and repatriation is less a retreat from cloud than a disaggregation toward the edge and smaller clouds.
It is exciting that the scale is there, the appetite is there the appetite to challenge and Ai are driving all the change and that's to me is what these new companies represent Thanks for coming on. So smart people seem to gravitate to us. Well, one of the benefits of doing the Cube for 11 years, Jerry's we have videotape of many, Remember the conversation we had eight years ago when amazon re event So the combination of the big three making the market the main markets you, of the cloud is this kind of fat long tail of services for developers. I love the power of a metaphor, Even the big optic snowflake have created kind of a wake behind them that created even more Um, from a VC came on, it's like the old shelf where you didn't know if a company's successful And just as well as you started to weaponize that info and that's the big argument of do that snowflake still pays the amazon bills. One is the cost advantage in the So I'm looking at my notes here, looking down on the screen here for this to read this because it's uh to cut and paste But the workflow to your point Great to have you on final thought on your thesis. We got the big cloud vendors saying, Hey jerry, we just lost his new things. Great luck on the team. I think Greylock is lucky to have him as a general partner. into the cloud. Well in the investments that they're making in these unicorns is exciting. Amazon is much more stronger in the governance area with data And it's also kind of speaks to the hybrid world in which we're living the hybrid multi So companies that are smart are investing heavily in that and the ones that are kind of slow We've got the data and analytics and we got the cloud management and just to run down real quick
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Emily Freeman | PERSON | 0.99+ |
Emily | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
David | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2015 | DATE | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
2014 | DATE | 0.99+ |
John | PERSON | 0.99+ |
20 spokes | QUANTITY | 0.99+ |
lisa martin | PERSON | 0.99+ |
jerry Chen | PERSON | 0.99+ |
20 | QUANTITY | 0.99+ |
11 years | QUANTITY | 0.99+ |
$38 billion | QUANTITY | 0.99+ |
Jerry | PERSON | 0.99+ |
Jeff Barr | PERSON | 0.99+ |
Toyota | ORGANIZATION | 0.99+ |
lisa Dave | PERSON | 0.99+ |
500 services | QUANTITY | 0.99+ |
jpmorgan | ORGANIZATION | 0.99+ |
lisa | PERSON | 0.99+ |
31 markets | QUANTITY | 0.99+ |
europe | LOCATION | 0.99+ |
two ideas | QUANTITY | 0.99+ |
15 companies | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
15 countries | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
each element | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
first impression | QUANTITY | 0.99+ |
5000 | QUANTITY | 0.99+ |
eight years ago | DATE | 0.99+ |
both ways | QUANTITY | 0.99+ |
february | DATE | 0.99+ |
two year | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
David Landes | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
eight | QUANTITY | 0.99+ |
Gaza | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
97 things | QUANTITY | 0.98+ |
Monica Kumar & Tarkan Maner, Nutanix | CUBEconversation
(upbeat music) >> The cloud is evolving. You know, it's no longer a set of remote services somewhere off in the cloud, in the distance. It's expanding. It's moving to on-prem. On-prem workloads are connecting to the cloud. They're spanning clouds in a way that hides the plumbing and simplifies deployment, management, security, and governance. So hybrid multicloud is the next big thing in infrastructure, and at the recent Nutanix .NEXT conference, we got a major dose of that theme, and with me to talk about what we heard at that event, what we learned, why it matters, and what it means to customers are Monica Kumar, who's the senior vice president of marketing and cloud go-to-market at Nutanix, and Tarkan Maner, who's the chief commercial officer at Nutanix. Guys, great to see you again. Welcome to the theCUBE. >> Great to be back here. >> Great to see you, Dave. >> Okay, so you just completed another .NEXT. As an analyst, I like to evaluate the messaging at an event like this, drill into the technical details to try to understand if you're actually investing in the things that you're promoting in your keynotes, and then talk to customers to see how real it is. So with that as a warning, you guys are all in on hybrid multicloud, and I have my takeaways that I'd be happy to share, but, Tarkan, what were your impressions, coming out of the event? >> Look, you had a great entry. Our goal, as Monica is going to outline, too, cloud is not a destination. It's an operating model. Our customers are basically using cloud as a business model, as an operating model. It's not just a bunch of techno mumbo-jumbo, as, kind of, you outlined. We want to make sure we make cloud invisible to the customer so they can focus on what they need to focus on as a business. So as part of that, we want to make sure the workloads, the apps, they can run anywhere the way the customer wants. So in that context, you know, our entire story was bringing customer workloads, use-cases, partner ecosystem with ISVs and cloud providers and service providers and ISPs we're working with like Citrix on end user computing, like Red Hat on cloud native, and also bringing the right products, both in terms of infrastructure capability and management capability for both operators and application developers. So bringing all these pieces together and make it simple for the customer to use the cloud as an operating model. That was the biggest goal here. >> Great, thank you. Monica, anything you'd add in terms of your takeaways? >> Well, I think Tarkan said it right. We are here to make cloud complexity invisible. This was our big event to get thousands of our customers, partners, our supporters together and unveil our product portfolio, which is much more simplified, now. It's a cloud platform. And really have a chance to show them how we are building an ecosystem around it, and really bringing to life the whole notion of hybrid multicloud computing. >> So, Monica, could you just, for our audience, just summarize the big news that came out of .NEXT? >> Yeah, we actually made four different announcements, and most of them were focused around, obviously, our product portfolio. So the first one was around enhancements to our cloud platform to help customers build modern, software-defined data centers to speed their hybrid multicloud deployments while supporting their business-critical applications, and that was really about the next version of our flagship, AOS six, availability. 
We announced the general availability of that, and key features really included things like built-in virtual networking, disaster recovery enhancements, security enhancements that otherwise would need a lot of specialized hardware, software, and skills are now built into our platform. And, most importantly, all of this functionality being managed through a single interface, right? Which significantly decreases the operational overhead. So that was one announcement. The second announcement was focused around data services and really making it easy for customers to simplify data management, also optimize big data and database workloads. We announced capability that now improves performances of database workloads by 2x, big data workloads by 3x, so lots of great stuff there. We also announced a new service called Nutanix Data Lens, which is a new unstructured data governance service. So, again, I don't want to go into a lot of details here. Maybe we can do it later. That was our second big announcement. The third announcement, which is really around partnerships, and we'll talk more about that, is with Microsoft. We announced the preview of Nutanix Clusters and Azure, and that's really taking our entire flagship Nutanix platform and running it on Azure. And so, now, we are in preview on that one, and we're super excited about that. And then, last but not least, and I know Tarkan is going to go into a lot more detail, is we announced a strategic partnership with Citrix around the whole future of hybrid work. So lots of big news coming out of it. I just gave you a quick summary. There's a lot more around this, as well. >> Okay. Now, I'd like to give you my honest take, if you guys don't mind, and, Tarkan, I'll steal one of your lines. Don't hate me, okay? So the first thing I'm going to say is I think, Nutanix, you have the absolute right vision. There's no question in my mind. But what you're doing is not trivial, and I think it's going to play out. It's going to take a number of years. To actually build an abstraction layer, which is where you're going, as I take it, as a platform that can exploit all the respective cloud native primitives and run virtually any workload in any cloud. And then what you're doing, as I see it, is abstracting that underlying technology complexity and bringing that same experience on-prem, across clouds, and as I say, that's hard. I will say this: the deep dives that I got at the analyst event, it convinced me that you're committed to this vision. You're spending real dollars on focused research and development on this effort, and, very importantly, you're sticking to your true heritage of making this simple. Now, you're not alone. All the non-hyperscalers are going after the multicloud opportunity, which, again, is really challenging, but my assessment is you're ahead of the game. You're certainly focused on your markets, but, from what I've seen, I believe it's one of the best examples of a true hybrid multicloud-- you're on that journey-- that I've seen to date. So I would give you high marks there. And I like the ecosystem-building piece of it. So, Tarkan, you could course-correct anything that I've said, and I'd love for you to pick up on your comments. It takes a village, you know, you're sort of invoking Hillary Clinton, to bring the right solution to customers. So maybe you could talk about some of that, as well. >> Look, actually, you hit all the right points, and I don't hate you for that. I love you for that, as you know. 
Look, at the end of the day, we started this journey about 10 years ago. The last two years with Monica, with the great executive team, and overall team as a whole, big push to what you just suggested. We're not necessarily, you know, passionate about cloud. Again, it's a business model. We're passionate about customer outcomes, and some of those outcomes sometimes are going to also be on-prem. That's why we focus on this terminology, hybrid multicloud. It is not multicloud, it's not just private cloud or on-prem and non-cloud. We want to make sure customers have the right outcomes. So based on that, whether those are cloud partners or platform partners like HPE, Dell, Supermicro. We just announced a partnership with Supermicro, now, we're selling our software. HPE, we run on GreenLake. Lenovo, we run on TruScale. Big support for Lenovo. Dell's still a great partner to us. On cloud partnerships, as Monica mentioned, obviously Azure. We had a big session with AWS. Lots of new work going on with Red Hat as an ISV partner. Tying that also to IBM Cloud, as we move forward, as Red Hat and IBM Cloud go hand in hand, and also tons of workarounds, as Monica mentioned. So it takes a village. We want to make sure customer outcomes deliver value. So anywhere, for any app, on any infrastructure, any cloud, regardless standards or protocols, we want to make sure we have an open system coverage, not only for operators, but also for application developers, develop those applications securely and for operators, run and manage those applications securely anywhere. So from that perspective, tons of interest, obviously, on the Citrix or the UC side, as Monica mentioned earlier, we also just announced the Red Hat partnership for cloud services. Right before that, next we highlighted that, and we are super excited about those two partnerships. >> Yeah, so, when I talked to some of your product folks and got into the technology a little bit, it's clear to me you're not wrapping your stack in containers and shoving it into the cloud and hosting it like some do. You're actually going much deeper. And, again, that's why it's hard. You could take advantage of those things, but-- So, Monica, you were on the stage at .NEXT with Eric Lockhart of Microsoft. Maybe you can share some details around the focus on Azure and what it means for customers. >> Absolutely. First of all, I'm so grateful that Eric actually flew out to the Bay Area to be live on stage with us. So very super grateful for Eric and Azure partnership there. As I said earlier, we announced the preview of Nutanix Clusters and Azure. It's a big deal. We've been working on it for a while. What this means is that a select few organizations will have an opportunity to get early access and also help shape the roadmap of our offering. And, obviously, we're looking forward to then announcing general availability soon after that. So that's number one. We're already seeing tremendous interest. We have a large number of customers who want to get their hands on early access. We are already working with them to get them set up. The second piece that Eric and I talked about really was, you know, the reason why the work that we're doing together is so important is because we do know that hybrid cloud is the preferred IT model. You know, we've heard that in spades from all different industries' research, by talking to customers, by talking to people like yourselves. However, when customers actually start deploying it, there's lots of issues that come up. 
There's limited skill sets, limited resources, and, most importantly, there's a disparity between on-premises networking, security, and management and cloud networking, security, and management. And that's what we are focused on together as partners: removing that barrier, the friction, between on-prem and the Azure cloud, so our customers can easily migrate their workloads to the Azure cloud, do cloud disaster recovery, burst into the cloud for elasticity if they need to, or even use Azure as an on-ramp to modernize applications by using the Azure cloud services. So that's one big piece. The second piece is our partnership around Kubernetes and cloud native, and that's something we've already provided to the market. It's GA, with Azure and the Nutanix cloud platform working together to build Kubernetes-based, container-based applications and run them and manage them. So there's a lot more information on nutanix.com/azure. And I would say, for those of our listeners who want to give it a try and want to get their hands on it, we also have a test drive available. You can actually experience the product by going to nutanix.com/azure and taking the test drive. >> Excellent. Now, Tarkan, we saw recently that you announced services. You've got HPE GreenLake; Lenovo, their as-a-service offering, which is called TruScale. We saw you with Keith White at HPE Discover. I was just with Keith White this week, by the way, face to face. Awesome guy. So that's exciting. You've got some investments going on there. What can you tell us about those partnerships? >> So, look, as we talked through this a little bit, the HPE relationship is a very critical relationship, one of our fastest growing partnerships. You know, our customers now can run Nutanix software on any HPE platform; we call it DX, that's the platform. But beyond that, now, if the customers want to use HPE as-a-service, Nutanix software, the entire stack, not only the hybrid multicloud platform but the database capability, EUC capability, storage capability, can run on HPE's GreenLake service. Same thing, by the way, available on Lenovo. Again, we're doing similar work with Dell and Supermicro, again giving our customers choice. If they want to go to a public cloud partner like Azure or AWS, they have that choice. And also, as you know, and I know, Monica, you're going to talk about this, with our GSI partnerships and new service provider program, we're giving options to customers, because in some regions HPE might not be their choice, or Azure might not be the choice, and a local telco might be the choice in a country like Japan or India. So we give options and capability to the customers to run Nutanix software anywhere they like. >> I think that's a really important point you're making because, as I see all these infrastructure providers, who are traditionally on-prem players, introduce as-a-service, one of the things I'm looking for is, sure, they've got to have their own services, their own products available, but what other ecosystem partners are they offering? Are they truly giving the customers choice? Because that's really the hallmark of a cloud provider. You know, if we think about Amazon, you don't always have to use the Amazon product. You can actually use a competitive product, and that's the way it is. They let the customers choose. Of course, they want to sell their own, but if you innovate fast enough, which, of course, Nutanix is all about innovation, a lot of customers are going to choose you.
So that's key to these as-a-service models. So, Monica, Tarkan mentioned the GSIs. What can you tell us about the big partners there? >> Yeah, definitely. Actually, before I talk about GSIs, I do want to make sure our listeners understand we already support AWS in a public cloud, right? So Nutanix totally is available in general, generally available on AWS to use and build a hybrid cloud offering. And the reason I say that is because our philosophy from day one, even on the infrastructure side, has been freedom of choice for our customers and supporting as large a number of platforms and substrates as we can. And that's the notion that we are continuing, here, forward with. So to talk about GSIs a bit more, obviously, when you say one platform, any app, any cloud, any cloud includes on-prem, it includes hyperscalers, it includes the regional service providers, as well. So as an example, TCS is a really great partner of ours. We have a long history of working together with TCS, in global 2000 accounts across many different industries, retail, financial services, energy, and we are really focused, for example, with them, on expanding our joint business around mission critical applications deployment in our customer accounts, and specifically our databases with Nutanix Era, for example. Another great partner for us is HCL. In fact, HCL's solution SKALE DB, we showcased at .NEXT just yesterday. And SKALE DB is a fully managed database service that HCL offers which includes a Nutanix platform, including Nutanix Era, which is our database service, along with HCL services, as well as the hardware/software that customers need to actually run their business applications on it. And then, moving on to service providers, you know, we have great partnerships like with Cyxtera, who, in fact, was the service provider partner of the year. That's the award they just got. And many other service providers, including working with, you know, all of the edge cloud, Equinix. So, I can go on. We have a long list of partnerships, but what I want to say is that these are very important partnerships to us. All the way from, as Tarkan said, OEMs, hyperscalers, ISVs, you know, like Red Hat, Citrix, and, of course, our service provider, GSI partnerships. And then, last but not least, I think, Tarkan, I'd love for you to maybe comment on our channel partnerships as well, right? That's a very important part of our ecosystem. >> No, absolutely. You're absolutely right. Monica. As you suggested, our GSI program is one of the best programs in the industry in number of GSIs we support, new SP program, enterprise solution providers, service provider program, covering telcos and regional service providers, like you suggested, OVH in France, NTT in Japan, Yotta group in India, Cyxtera in the US. We have over 50 new service providers signed up in the last few months since the announcement, but tying all these things, obviously, to our overall channel ecosystem with our distributors and resellers, which is moving very nicely. We have Christian Alvarez, who is running our channel programs globally. And one last piece, Dave, I think this was important point that Monica brought up. Again, give choice to our customers. It's not about cloud by itself. It's outcomes, but cloud is an enabler to get there, especially in a hybrid multicloud fashion. And last point I would add to this is help customers regardless of the stage they're in in their cloud migration. 
From rehosting to replatforming, repurchasing or refactoring, rearchitecting applications or retaining applications or retiring applications, they will have different needs. And what we're trying to do, with Monica's help, with the entire team: choice. Choice in stage, choice in maturity to migrate to cloud, and choice on platform. >> So I want to close. First of all, I want to give some of my impressions. So we've been watching Nutanix since the early days. I remember vividly standing around the conference call with my colleague at the time, Stu Miniman. The state-of-the-art was converged infrastructure, at the time, bolting together storage, networking, and compute, very hardware centric. And the founding team at Nutanix told us, "We're going to have a software-led version of that." And you popularized, you kind of created the hyperconverged infrastructure market. You created what we called at the time true private cloud, scaled up as a company, and now you're really going after that multicloud, hybrid cloud opportunity. Jerry Chen and Greylock, they just wrote a piece called Castles on the Cloud, and the whole concept was, and I say this all the time, the hyperscalers, last year, just spent a hundred billion dollars on CapEx. That's a gift to companies that can add value on top of that. And that's exactly the strategy that you're taking, so I like it. You've got to move fast, and you are. So, guys, thanks for coming on, but I want you to both-- maybe, Tarkan, you can start, and Monica, you can bring us home. Give us your wrap up, your summary, and any final thoughts. >> All right, look, I'm going to go back to where I started this. Again, I know I go back. This is like a broken record, but it's so important we hear from the customers. Again, cloud is not a destination. It's a business model. We are here to support those outcomes, regardless of platform, regardless of hypervisor, cloud type or app, making sure from legacy apps to cloud native apps, we are there for the customers regardless of their stage in their migration. >> Dave: Right, thank you. Monica? >> Yeah. And I, again, you know, just the whole conversation we've been having is around this but I'll remind everybody that why we started out. Our journey was to make infrastructure invisible. We are now very well poised to helping our customers, making the cloud complexity invisible. So our customers can focus on business outcomes and innovation. And, as you can see, coming out of .NEXT, we've been firing on all cylinders to deliver this differentiated, unified hybrid multicloud platform so our customers can really run any app, anywhere, on any cloud. And with the simplicity that we are known for because, you know, our customers love us. NPS 90 plus seven years in a row. But, again, the guiding principle is simplicity, portability, choice. And, really, our compass is our customers. So that's what we are focused on. >> Well, I love not having to get on planes every Sunday and coming back every Friday, but I do miss going to events like .NEXT, where I meet a lot of those customers. And I, again, we've been following you guys since the early days. I can attest to the customer delight. I've spent a lot of time with them, driven in taxis, hung out at parties, on buses. And so, guys, listen, good luck in the next chapter of Nutanix. We'll be there reporting and really appreciate your time. >> Thank you so much. >> Thank you so much, Dave. >> All right, and thank you for watching, everybody. 
This is Dave Vellante for theCUBE, and, as always, we'll see you next time. (light music)
SUMMARY :
Dave Vellante sits down with Monica Kumar and Tarkan Maner of Nutanix to talk about hybrid multicloud coming out of the .NEXT conference. They cover the four big announcements, AOS 6 general availability, new data services including Nutanix Data Lens, the preview of Nutanix Clusters on Azure with Microsoft, and a strategic partnership with Citrix around hybrid work, and the philosophy behind them: cloud as an operating model rather than a destination, with the complexity made invisible. The conversation also ranges across the partner ecosystem, from HPE GreenLake, Lenovo TruScale, Dell, and Supermicro to AWS, Red Hat, GSIs like TCS and HCL, and regional service providers, all in service of giving customers choice of platform, migration stage, and consumption model.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Monica | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Monica Kumar | PERSON | 0.99+ |
Eric | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Tarkan | PERSON | 0.99+ |
Supermicro | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
France | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Hillary Clinton | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Eric Lockhart | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Keith White | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Tarkan Maner | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Christian Alvarez | PERSON | 0.99+ |
HCL | ORGANIZATION | 0.99+ |
Citrix | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
second piece | QUANTITY | 0.99+ |
Japan | LOCATION | 0.99+ |
second | QUANTITY | 0.99+ |
Keith White | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
Cyxtera | ORGANIZATION | 0.99+ |
HPE | TITLE | 0.99+ |
3x | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
seven years | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
second announcement | QUANTITY | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
TCS | ORGANIZATION | 0.99+ |
Azure | ORGANIZATION | 0.99+ |
Bay Area | LOCATION | 0.99+ |
two partnerships | QUANTITY | 0.99+ |
Nutanix Clusters | ORGANIZATION | 0.99+ |
UC | ORGANIZATION | 0.98+ |
one announcement | QUANTITY | 0.98+ |
over 50 new service providers | QUANTITY | 0.98+ |
Glenn Katz, Comcast | Fortinet Security Summit 2021
>> It's The Cube covering Fortinet Security Summit brought to you by Fortinet. >> Hey and welcome back to the cubes coverage of Fortinets championship series. Cybersecurity summit here in Napa valley Fortinet is sponsoring the PGA tour event, kicking off the season here, and the cubes here as part of the coverage. And today is cybersecurity day where they bring their top customers in. We got Glenn Katz SVP, general manager, Comcast Enterprise Solutions. Glenn, thanks for coming on The Cube. Thanks for taking time out of your day. - Thank you no This is great. This is great. >> Interviewer: Tell me to explain what you guys do in the Comcast business enterprise group. >> That's our Comcast business. We're a part of Comcast overall. I always like to explain what Comcast really is. If you look at Comcast, it's a technology innovation company by itself that happens to focus on communications and media type of, of markets, right? And if you look at the Comcast side there on the communication side, it's really everything residential with customers. Then there's the us Comcast business and we're the fastest growing entity over the last 15 years within Comcast. And we started in small business, voice, video, and data to small businesses. Then we moved up to provide fiber ethernet type of a transport to mid-market. And then my group started in 2014. And what we do is focus on managed services. It doesn't matter who the transport layer is for enterprise Fortune 1000 type companies. And then when you layer in all these managed wider network services. So that's my business unit. >> Interviewer: Well, we appreciate it we're a customer by the way in Palo Alto >> Glen: Oh great >> So give a shout out to you guys. Let's get into the talk you're giving here about cybersecurity, because I mean, right now with the pandemic, people are working at home. Obviously everyone knows the future of work is hybrid now you're going to see more decentralized defy and or virtual spaces where people are going to want to work anywhere and businesses want to have that extension, right? What people are talking about, and it's not new, but it's kind of new in the sense of reality, right? You've got to execute. This is a big challenge. >> Glen: It is - What's your thoughts on that, >> Well it's a big challenge. And one of the things that I'll try to, I'll speak to this afternoon here, which is at least from the enterprise perspective, which includes the headquarters, the enterprise, the branch locations, the digital commerce, everywhere else, commerce is being done. It's not just at a store anymore. It's everywhere. Even if you only have a store and then you have the remote worker aspect. I mean, they do that to your point earlier. We're not in that fortress sort of security mentality anymore. There's no more DMZs it's done. And so you've got to get down to the zero trust type of network architecture. And how do you put that together? And how does that work? Not just for remote workers that have to access the enterprise applications, but also for simple, you know, consumers or the business customers of these, of these enterprises that have to do business from over the phone or in the store. >> Interviewer: What are the some of the challenges you hear from your customers, obviously, business of the defend themselves now the, the, the attacks are there. There's no parameters. You mentioned no fortress. There's more edge happening, right? Like I said, people at home, what are the top challenges that you're hearing from customers? 
So the biggest challenge, and this is, I would think, mostly focused on the enterprise side of it, is that there are two interesting phenomena going on. This sort of started before the pandemic, and then, of course, with the pandemic, the role of the CIO has been elevated; now they have a real seat at the table. Budgets are increasing, to a point, but the expertise needed in these IT departments for these large enterprises... it's impossible to do what you were just talking about, which is create a staff of people that can do everything from enterprise applications, e-commerce, and analytics to the network and how you secure that network all the way down to the end users, right? So it's that middle portion. That's the biggest challenge, because that takes a lot of work and a lot of effort. And that's where folks like Comcast can come in and help them out. That's their biggest challenge. They can handle the enterprise, they can handle the remote workers, they can handle their own applications, which continually have to be competitive out there. It's that middle area, that communications layer, that they're challenged with. >> Interviewer: Yeah. And John Maddison, Fortinet's EVP and CMO, is always talking about negative unemployment in cybersecurity, never mind just the staff that do cyber. >> Glen: That's exactly right, that's a given. If you're a business, you can't hire people fast enough, and you might not have the budget, so you want a managed service. So how do you get cyber as a service? >> Glen: Well, so it's even bigger than that. It's not just cyber as a service, because it's now a big package. That's what SASE really is; SASE is Secure Access Service Edge. But the way I think of it is: you've got remote users, remote workers, and mobile apps on one side; you've got applications, enterprise or commercial, that have now moved into different cloud locations; and in the middle, you've got two real fundamental layers. The network, and that includes, uh, the actual transport, the software-defined wide area networking components, everything that goes with that; that's the network as a service. And then you've got the secure web gateway portion, which includes everything to secure all the data going back and forth between your remote laptop, the point of sale, and, let's say, the cloud-based applications, right? So that's really the center stage right there. >> Interviewer: And the cloud has brought more services to the top of the stack. I mean, people talk about down stack, up stack; it's kind of a geeky term. You're talking about innovation. If you're down stack with network and transport, those are problems that you have to solve on behalf of your customers and make almost invisible. And that's your job. >> That's our job. That's our job as the service provider. What's interesting, though, is back in the day, and back in the day could have been 10 years ago or 20, you really, you know, had stable networks. They were ubiquitous, they were expensive, and they were slow. That's kind of the MPLS, legacy TDM era. So you just put them in and you walked away, and you still ran all of your enterprise, all of your applications, but you had your own private data centers. Everything was nicer. It was that fortress mentality. Right now, it's different. Now everybody needs broadband. Well, guess what? Comcast is a big company, but we don't have broadband everywhere. AT&T doesn't have it.
Verizon doesn't have it. Charter doesn't have it. Right? So now think about that from the enterprise side. I'll give you an example. For all of our customers, to fulfill a nationwide network just for the broadband infrastructure, one that's, you know, redundant, if you want to think of it that way, we source probably 200 to 300 different providers to provide a ubiquitous network nationwide for broadband. Then we wrap a layer of the SD-WAN infrastructure, as an example, over the top of that, right? You can't do that by yourself. I mean, people try, and they fail. And that's the role of a managed service provider like us: to pull all that together and take that away. We have that expertise. >> Interviewer: I think this is a really interesting point. Let's just unpack that for a second. In the old days, if you wanted to do an interconnect, you had an agreement, you had your own stuff, and you did an interconnect. >> Glen: Yep. >> Now, with all this mishmash, you've got to traverse multiple hops, different networks. >> Glen: That's right. >> Different owners, and you don't know what's on them. So you guys have to basically stitch this together, hang it together, and make it work, and you guys put software on top and make sure it's cool. Is that how it works? >> Glen: Yeah. Software and different technology components for the SD-WAN, and then we deliver, assure, and manage all of that. And that's where I really like what's happening in the industry, at least in terminology, which is that you have to try to simplify this, because it's very, very complicated. I'm going to give you the network as a service, meaning I'm going to give you all the transport and you don't have to worry about it. I'm going to rent you the SD-WAN technology. And then I'm going to have in my gateways all these security components: firewall as a service, zero trust network access, cloud brokerage services. So I will secure all of your data as you go to the cloud and do all of that for you. That's what we bring to the table, and that's what is really, really hard for enterprises to do today, just because the expertise needed to do that is just not there. >> Interviewer: Well, what's interesting is that, first, you have to do it, because the reality of your business now is that if you don't do it, you won't have customers. But you're making it easier for them, so they don't have to think about it. - [Glen] That's right. >> But now you bring in hybrid networking, hybrid cloud they call it, or multi-cloud, right? It's essentially distributed computing, essentially what you're doing, but with multiple topologies. >> Glen: That's right. >> Interviewer: I've got an edge device. - [Glen] That's right. >> If I'm a business. - [Glen] That's right. >> That could be someone working at home. >> Glen: That's right. - Or it could be my retail. >> Or whatever it could be. So edge is just an extension of what you guys already do. Is that right? Am I getting that right? >> Glen: Yeah, that's exactly right. But the point is to make it economic and to make it really work for the end user. If you're a branch, you may have an application that's still being run via VPN, but you also need wifi internet for your customers, because they want to use their mobile devices. They've entered your store, and you want to be able to track that, right, and push something to them.
And then you've got the actual store applications; it could be point of sale, it could be back-of-house computing, and that's going up to AWS, Azure, whatever, right? And all of that has to come from one particular branch, and someone has to be able to manage that capability. >> Interviewer: It's funny... - It's so different. >> Interviewer: Just as you're talking, I'm thinking, okay, facial recognition: high, high bandwidth requirements. >> Glen: Huge high bandwidth requirements. >> Processing at the edge becomes huge. >> Glen: It does. >> So that becomes a new dynamic. >> Glen: It does. It's got to be more dynamic. It's not a static IP endpoint. >> Glen: Well, I'll give you another example. It seems silly, but it's so important from a business perspective: your quick service restaurant. The amount of digital sales from applications is just skyrocketing. And if you yourself, particularly in the pandemic, order something, that order goes up to the cloud, comes back through, and goes to the point of sale and then the back-of-house network in a particular restaurant. If that doesn't get there, because you only have one internet connection and it's down, which sometimes happens, right, you lose business, you lose that customer. It's so important. So what's being pushed down to the edge is, you know, reliable broadband and hybrid networks, where you have a primary wireline and a secondary wireline, maybe a tertiary wireless or whatever, and then a box, a device, that can manage between those so that you can keep that 99.99% availability at your branch, just for those simple types of applications. >> Interviewer: You know, Glenn, as you're talking, most people, when we talk tech like this, it's mostly inside the ropes; hey, I can get it. But most people can relate, with the pandemic, because they've ordered with their phone. - [Glen] Exactly right. >> With the QR code. - [Glen] That's exactly right. >> They see the menu. - [Glen] That's right. >> They get now what's happening, - [Glen] That's right. - that their phone is now connected to the service. >> This is not going away. The new normal. >> Glen: No, it's absolutely here. And what I've seen is there are many, many companies that already knew this and understood this pre-pandemic, and they had already changed their infrastructure to really fit what I was calling that network as a service and the SASE model, in different ways. Then there were a bunch that didn't, and I'm not going to name names, but you can look at those companies and you can see how they're struggling terribly. But then there was this. Now there's a much bigger push, and a prioritization again; the CIO is saying, hey, I asked for this before. It's not like the CIO didn't know, but management said, well, maybe it wasn't important. Now it is. And so you're seeing this actual amazing surge in business requests and requirements to go to the model that we're all talking about here, which is that SASE type of implementation with high-speed broadband. That's not going away, for the same reason. And you need a resilient network, right? Yes. >> Interesting. Best practice: let's just take that advice to the audience. I want to get your thoughts, because people who didn't do any R&D or experimentation prior to the pandemic, who didn't have cloud, who weren't thinking about this new architecture, got caught flat-footed. - Exactly. >> And they're hurting, and/or out of business. >> Correct.
>> If people were on the right side of that and took advantage, it was a tailwind and they got lift. >> That's exactly right. >> So what is the best practice? How should a business think about putting their toe in the water a little bit, or jumping in and getting immersed in the new architecture? What advice would you give? Because people don't want to be on the wrong side of history. >> No, they don't. >> What's your guys' best practice? >> I may sound biased, but I'm really not trying to be biased, and this will be some of what I'll speak about here later today. You have to try it. You, as the end user, the enterprise customer, to fulfill these types of needs, have got to really probe your managed service providers. You've got to understand which ones can not just give you a nice technology presentation and maybe a POC, but which ones are going to be there for the long term, which ones have the economic wherewithal to give the resources needed to do what I was talking about, because you're going to outsource your entire network, and a good portion of your security for the network, to a service provider. That service provider has to be able to provide all of that, has to have the financial capabilities, and has to be able to provide you with an operating type of model, not one where you have to keep buying equipment all the time. That service provider has to have teams that can deliver all of that, aggregate those 200 to 300 different types of providers, and then be there for day two. Simple thing: you know, at most companies, if you're not a really large location, you can't afford, you know, dual routers that are connected so that if one fails you have failover. Most of them will have one router, but they'll have two backup paths. Well, what happens when that router or switch, a single switch, fails? You need to have a mean time to repair of four hours. I mean, that's kind of basic. Well, how do you do that? You've got to have depots around the entire country. These are the types of questions that any enterprise customer should be probing their managed service provider on, right? It's not just about the technology. It's about how you can deliver this and assure this going forward. >> And agility too, because if things do change rapidly, being agile... >> Exactly. >> ...means shifting and being flexible with your business. >> That's exactly right. And that's important. That's a really important question. And the agility comes from financial agility, right? Like, new threat, new box; I don't want this old one, I'm going to upgrade to a different type of service. The service provider should be able to do that without forcing you to go get some more CapEx and buy some more stuff. That's number one. But the other agility is that every enterprise is different. Every enterprise believes that its network is the only network in the world, and they have opinions, and they've tested different technologies, and you're going to have to adapt a little bit to that. And if you don't, you're not going to get out of this. >> It's funny. In the old days, non-disruptive operations was like a benefit; now it's table stakes. You can't disrupt businesses. - You can't. You can't at the branch, at the remote worker. If you're on a Zoom call or a Teams call, we've all been there, we're still doing it, and if it breaks in the middle of a presentation to a customer, that's a problem.
>> Glenn, thanks for coming on theCUBE with great insight. >> Oh, great. This was fun. >> Are you excited? Do you play golf? Are you going to get out there on the range? >> I played golf a lot when I was younger, but I haven't lately, and I have a few other things I do. But I guess I'm going to have to learn, now that we're also a sponsor of the PGA, so yeah, for sure. >> Great. Well, great to have you on. - All right, thank you, and great talk. Thanks for coming on and sharing your insight. >> This was great. I appreciate it. Okay. >> CUBE coverage here in Napa Valley with Fortinet's Cybersecurity Summit, as part of their PGA Tour event that's happening this weekend. I'm John Furrier for theCUBE. Thanks for watching.
SUMMARY :
From Fortinet's cybersecurity summit at the PGA Tour championship event in Napa Valley, John Furrier talks with Glenn Katz, SVP and general manager of Comcast Enterprise Solutions. Katz describes how Comcast Business evolved from small-business voice, video, and data into managed services for Fortune 1000 enterprises, aggregating 200 to 300 broadband providers under an SD-WAN and SASE layer so customers get the network and much of their security as a service. They discuss the move from fortress-style perimeters to zero trust, the bandwidth and resiliency that branch and edge applications such as quick service restaurants now demand, and Katz's advice that enterprises probe managed service providers on financial staying power, day-two operations, and agility, not just on technology.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Comcast | ORGANIZATION | 0.99+ |
Glenn | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Glen | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Glenn Katz | PERSON | 0.99+ |
Glenn Katz | PERSON | 0.99+ |
John Madison | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
200 | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
ATT | ORGANIZATION | 0.99+ |
Comcast Enterprise Solutions | ORGANIZATION | 0.99+ |
99.9, 9% | QUANTITY | 0.99+ |
Napa valley | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
Charter | ORGANIZATION | 0.99+ |
10 years ago | DATE | 0.98+ |
two interesting phenomenons | QUANTITY | 0.98+ |
four hours | QUANTITY | 0.98+ |
Fortinet Security Summit | EVENT | 0.98+ |
pandemic | EVENT | 0.97+ |
one line | QUANTITY | 0.97+ |
SASE | TITLE | 0.97+ |
first | QUANTITY | 0.97+ |
one side | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
CapEx | ORGANIZATION | 0.96+ |
one router | QUANTITY | 0.96+ |
today | DATE | 0.96+ |
300 different providers | QUANTITY | 0.95+ |
PGA | EVENT | 0.94+ |
two real fundamental layers | QUANTITY | 0.94+ |
Fortinet | ORGANIZATION | 0.94+ |
day two | QUANTITY | 0.94+ |
one internet connection | QUANTITY | 0.93+ |
PGA | ORGANIZATION | 0.92+ |
this afternoon | DATE | 0.92+ |
zero | QUANTITY | 0.91+ |
Fortinet Security Summit 2021 | EVENT | 0.89+ |
Azura | ORGANIZATION | 0.88+ |
Fortinet | EVENT | 0.87+ |
later today | DATE | 0.86+ |
20 | QUANTITY | 0.85+ |
one particular branch | QUANTITY | 0.85+ |
Cybersecurity Summit | EVENT | 0.84+ |
SVP | PERSON | 0.84+ |
two backup | QUANTITY | 0.84+ |
last 15 years | DATE | 0.82+ |
a second | QUANTITY | 0.82+ |
300 different types | QUANTITY | 0.76+ |
single switch | QUANTITY | 0.75+ |