Jack Greenfield, Walmart | A Dive into Walmart's Retail Supercloud


 

>> Welcome back to SuperCloud2. This is Dave Vellante, and we're here with Jack Greenfield. He's the Vice President of Enterprise Architecture and the Chief Architect for the global technology platform at Walmart. Jack, I want to thank you for coming on the program. Really appreciate your time. >> Glad to be here, Dave. Thanks for inviting me and appreciate the opportunity to chat with you. >> Yeah, it's our pleasure. Now we call what you've built a SuperCloud. That's our term, not yours, but how would you describe the Walmart Cloud Native Platform? >> So WCNP, as the acronym goes, is essentially an implementation of Kubernetes for the Walmart ecosystem. And what that means is that we've taken Kubernetes off the shelf as open source, and we have integrated it with a number of foundational services that provide other aspects of our computational environment. So Kubernetes off the shelf doesn't do everything. It does a lot. In particular the orchestration of containers, but it delegates through API a lot of key functions. So for example, secret management, traffic management, there's a need for telemetry and observability at a scale beyond what you get from raw Kubernetes. That is to say, harvesting the metrics that are coming out of Kubernetes and processing them, storing them in time series databases, dashboarding them, and so on. There's also an angle to Kubernetes that gets a lot of attention in the daily DevOps routine, that's not really part of the open source deliverable itself, and that is the DevOps sort of CICD pipeline-oriented lifecycle. And that is something else that we've added and integrated nicely. And then one more piece of this picture is that within a Kubernetes cluster, there's a function that is critical to allowing services to discover each other and integrate with each other securely and with proper configuration provided by the concept of a service mesh. So Istio, Linkerd, these are examples of service mesh technologies. And we have gone ahead and integrated actually those two. There's more than those two, but we've integrated those two with Kubernetes. So the net effect is that when a developer within Walmart is going to build an application, they don't have to think about all those other capabilities where they come from or how they're provided. Those are already present, and the way the CICD pipelines are set up, it's already sort of in the picture, and there are configuration points that they can take advantage of in the primary YAML and a couple of other pieces of config that we supply where they can tune it. But at the end of the day, it offloads an awful lot of work for them, having to stand up and operate those services, fail them over properly, and make them robust. All of that's provided for. >> Yeah, you know, developers often complain they spend too much time wrangling and doing things that aren't productive. So I wonder if you could talk about the high level business goals of the initiative in terms of the hardcore benefits. Was the real impetus to tap into best of breed cloud services? Were you trying to cut costs? Maybe gain negotiating leverage with the cloud guys? Resiliency, you know, I know was a major theme. Maybe you could give us a sense of kind of the anatomy of the decision making process that went in. >> Sure, and in the course of answering your question, I think I'm going to introduce the concept of our triplet architecture which we haven't yet touched on in the interview here. 
First off, just to sort of wrap up the motivation for WCNP itself which is kind of orthogonal to the triplet architecture. It can exist with or without it. Currently does exist with it, which is key, and I'll get to that in a moment. The key drivers, business drivers for WCNP were developer productivity by offloading the kinds of concerns that we've just discussed. Number two, improving resiliency, that is to say reducing opportunity for human error. One of the challenges you tend to run into in a large enterprise is what we call snowflakes, lots of gratuitously different workloads, projects, configurations to the extent that by developing and using WCNP and continuing to evolve it as we have, we end up with cookie cutter like consistency across our workloads which is super valuable when it comes to building tools or building services to automate operations that would otherwise be manual. When everything is pretty much done the same way, that becomes much simpler. Another key motivation for WCNP was the ability to abstract from the underlying cloud provider. And this is going to lead to a discussion of our triplet architecture. At the end of the day, when one works directly with an underlying cloud provider, one ends up taking a lot of dependencies on that particular cloud provider. Those dependencies can be valuable. For example, there are best of breed services like say Cloud Spanner offered by Google or say Cosmos DB offered by Microsoft that one wants to use and one is willing to take the dependency on the cloud provider to get that functionality because it's unique and valuable. On the other hand, one doesn't want to take dependencies on a cloud provider that don't add a lot of value. And with Kubernetes, we have the opportunity, and this is a large part of how Kubernetes was designed and why it is the way it is, we have the opportunity to sort of abstract from the underlying cloud provider for stateless workloads on compute. And so what this lets us do is build container-based applications that can run without change on different cloud provider infrastructure. So the same applications can run on WCNP over Azure, WCNP over GCP, or WCNP over the Walmart private cloud. And we have a private cloud. Our private cloud is OpenStack based and it gives us some significant cost advantages as well as control advantages. So to your point, in terms of business motivation, there's a key cost driver here, which is that we can use our own private cloud when it's advantageous and then use the public cloud provider capabilities when we need to. A key place with this comes into play is with elasticity. So while the private cloud is much more cost effective for us to run and use, it isn't as elastic as what the cloud providers offer, right? We don't have essentially unlimited scale. We have large scale, but the public cloud providers are elastic in the extreme which is a very powerful capability. So what we're able to do is burst, and we use this term bursting workloads into the public cloud from the private cloud to take advantage of the elasticity they offer and then fall back into the private cloud when the traffic load diminishes to the point where we don't need that elastic capability, elastic capacity at low cost. And this is a very important paradigm that I think is going to be very commonplace ultimately as the industry evolves. Private cloud is easier to operate and less expensive, and yet the public cloud provider capabilities are difficult to match. 
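To make the bursting idea concrete, here is a minimal sketch of the kind of placement decision being described, assuming a simple utilization threshold; the thresholds, labels, and scheduler function are hypothetical and not Walmart's actual control plane.

```python
# Illustrative sketch only: a simplified burst/fall-back decision in the spirit
# of what Greenfield describes. Thresholds and provider labels are hypothetical.

BURST_THRESHOLD = 0.80   # burst to public cloud above 80% private capacity
RETURN_THRESHOLD = 0.50  # fall back once private utilization drops below 50%

def choose_placement(private_utilization: float, currently_bursting: bool) -> str:
    """Return where new workload replicas should be scheduled."""
    if not currently_bursting and private_utilization >= BURST_THRESHOLD:
        return "public"   # private capacity is saturated, use elastic capacity
    if currently_bursting and private_utilization <= RETURN_THRESHOLD:
        return "private"  # traffic has diminished, return to the cheaper cloud
    return "public" if currently_bursting else "private"

# Example: a traffic spike pushes private utilization to 92%, so we burst.
print(choose_placement(0.92, currently_bursting=False))  # -> "public"
print(choose_placement(0.40, currently_bursting=True))   # -> "private"
```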
>> And the triplet, the tri is your on-prem private cloud and the two public clouds that you mentioned, is that right? >> That is correct. And we actually have an architecture in which we operate all three of those cloud platforms in close proximity with one another in three different major regions in the US. So we have east, west, and central. And in each of those regions, we have all three cloud providers. And the way it's configured, those data centers are within 10 milliseconds of each other, meaning that it's of negligible cost to interact between them. And this allows us to be fairly agnostic to where a particular workload is running. >> Does a human make that decision, Jack or is there some intelligence in the system that determines that? >> That's a really great question, Dave. And it's a great question because we're at the cusp of that transition. So currently humans make that decision. Humans choose to deploy workloads into a particular region and a particular provider within that region. That said, we're actively developing patterns and practices that will allow us to automate the placement of the workloads for a variety of criteria. For example, if in a particular region, a particular provider is heavily overloaded and is unable to provide the level of service that's expected through our SLAs, we could choose to fail workloads over from that cloud provider to a different one within the same region. But that's manual today. We do that, but people do it. Okay, we'd like to get to where that happens automatically. In the same way, we'd like to be able to automate the failovers, both for high availability and sort of the heavier disaster recovery model between, within a region between providers and even within a provider between the availability zones that are there, but also between regions for the sort of heavier disaster recovery or maintenance driven realignment of workload placement. Today, that's all manual. So we have people moving workloads from region A to region B or data center A to data center B. It's clean because of the abstraction. The workloads don't have to know or care, but there are latency considerations that come into play, and the humans have to be cognizant of those. And automating that can help ensure that we get the best performance and the best reliability. >> But you're developing the dataset to actually, I would imagine, be able to make those decisions in an automated fashion over time anyway. Is that a fair assumption? >> It is, and that's what we're actively developing right now. So if you were to look at us today, we have these nice abstractions and APIs in place, but people run that machine, if you will, moving toward a world where that machine is fully automated. >> What exactly are you abstracting? Is it sort of the deployment model or, you know, are you able to abstract, I'm just making this up like Azure functions and GCP functions so that you can sort of run them, you know, with a consistent experience. What exactly are you abstracting and how difficult was it to achieve that objective technically? >> that's a good question. What we're abstracting is the Kubernetes node construct. That is to say a cluster of Kubernetes nodes which are typically VMs, although they can run bare metal in certain contexts, is something that typically to stand up requires knowledge of the underlying cloud provider. So for example, with GCP, you would use GKE to set up a Kubernetes cluster, and in Azure, you'd use AKS. 
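As a rough illustration of the provider-specific detail being abstracted away here, a thin wrapper of this kind hides whether GKE or AKS sits underneath. The wrapper, resource group, and region names below are made-up assumptions; the gcloud and az invocations are the standard public CLIs, not Walmart's internal tooling.

```python
# Hypothetical sketch of the provider-specific detail such an abstraction hides.
import subprocess

def create_cluster(provider: str, name: str, nodes: int = 3) -> None:
    """Stand up a Kubernetes cluster without the caller knowing the provider API."""
    if provider == "gcp":
        cmd = ["gcloud", "container", "clusters", "create", name,
               "--region", "us-east1", "--num-nodes", str(nodes)]
    elif provider == "azure":
        cmd = ["az", "aks", "create", "--resource-group", "my-rg",
               "--name", name, "--node-count", str(nodes)]
    else:
        raise ValueError(f"unsupported provider: {provider}")
    subprocess.run(cmd, check=True)

# A platform like WCNP sits above this layer: an application team just asks for
# capacity, and the platform decides which provider (or the private cloud)
# satisfies the request.
```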
We are actually abstracting that aspect of things so that the developers standing up applications don't have to know what the underlying cluster management provider is. They don't have to know if it's GCP, AKS, or our own Walmart private cloud. Now, in terms of functions like Azure functions that you've mentioned there, we haven't done that yet. That's another piece that we have sort of on our radar screen that we'd like to get to, a serverless approach, and the Knative work from Google and the Azure functions, those are things that we see good opportunity to use for a whole variety of use cases. But right now we're not doing much with that. We're strictly container based right now, and we do have some VMs that are running in sort of more of a traditional model. So our stateful workloads are primarily VM based, but for serverless, that's an opportunity for us to take some of these stateless workloads and turn them into cloud functions. >> Well, and that's another cost lever that you can pull down the road that's going to drop right to the bottom line. Do you see a day, or maybe you're doing it today, but I'd be surprised, where you build applications that actually span multiple clouds, or is there, in your view, always going to be a direct one-to-one mapping between where an application runs and the specific cloud platform? >> That's a really great question. Well, yes and no. So today, application development teams choose a cloud provider to deploy to and a location to deploy to, and they have to get involved in moving an application like we talked about today. That said, the bursting capability that I mentioned previously is something that is a step in the direction of automatic migration. That is to say, we're migrating workloads to different locations automatically. Currently, the prototypes we've been developing, and that we think are going to eventually make their way into production, are leveraging Istio to assess the load incoming on a particular cluster and start shedding that load into a different location. Right now, the configuration of that is still manual, but there's another opportunity for automation there. And I think a key piece of this is that down the road, well, that's sort of a small step in the direction of an application being multi-provider. We expect to see really an abstraction of the fact that there is a triplet even. So the workloads are moving around according to whatever the control plane decides is necessary based on a whole variety of inputs. And at that point, you will have true multi-cloud applications, applications that are distributed across the different providers in a way that application developers don't have to think about. >> So Walmart's been a leader, Jack, in using data for competitive advantage for decades. It's kind of been a poster child for that. You've got a mountain of IP in the form of data, tools, applications, best practices that until the cloud came out was all on-prem. But I'm really interested in this idea of building a Walmart ecosystem, which obviously you have. Do you see a day, or maybe you're even doing it today, where you take what we call the Walmart SuperCloud, WCNP in your words, and point or turn that toward an external world or your ecosystem, you know, supporting those partners or customers that could drive new revenue streams, you know, directly from the platform? >> Great questions, Dave. So there's really two things to say here. The first is that, with respect to data, our data workloads are primarily VM based.
I've mentioned before some VMware, some straight OpenStack. But the key here is that WCNP and Kubernetes are very powerful for stateless workloads, but for stateful workloads they tend to be still climbing a bit of a growth curve in the industry. So our data workloads are not primarily based on WCNP. They're VM based. Now that said, there is opportunity to make some progress there, and we are looking at ways to move things into containers that are currently running in VMs which are stateful. The other question you asked is related to how we expose data to third parties, and also functionality. Right now we do have in-house, for our own use, a very robust data architecture, and we have followed the sort of domain-oriented data architecture guidance from Martin Fowler. And we have data lakes in which we collect data from all the transactional systems, which we can then use, and do use, to build models which are then used in our applications. But right now we're not exposing the data directly to customers as a product. That's an interesting direction that's been talked about and may happen at some point, but right now that's internal. What we are exposing to customers is applications. So we're offering our global integrated fulfillment capabilities, our order picking and curbside pickup capabilities, and our cloud-powered checkout capabilities to third parties. And this means we're standing up our own internal applications as externally facing SaaS applications which can serve our partners' customers. >> Yeah, of course, Martin Fowler really first introduced to the world Zhamak Dehghani's data mesh concept and this whole idea of data products and domain-oriented thinking. Zhamak Dehghani, by the way, is a speaker at our event as well. Last question I had is edge, and how you think about the edge. You know, the stores are an edge. Are you putting resources there that sort of mirror this triplet model? Or is it better to consolidate things in the cloud? I know there are trade-offs in terms of latency. How are you thinking about that? >> All really good questions. It's a challenging area, as you can imagine, because edges are subject to disconnection, right? Or reduced connection. So we do place the same architecture at the edge. So WCNP runs at the edge, and an application that's designed to run at WCNP can run at the edge. That said, there are a number of very specific considerations that come up when running at the edge, such as the possibility of disconnection or degraded connectivity. And so one of the challenges we have faced and have grappled with, and done a good job of I think, is dealing with the fact that applications go offline and come back online and have to reconnect and resynchronize; the sort of online/offline capability is something that can be quite challenging. And we have a couple of application architectures that sort of form the two core sets of patterns that we use. One is an offline/online synchronization architecture where we discover that we've come back online, and we understand the differences between the online dataset and the offline dataset and how they have to be reconciled. The other is a message-based architecture. And here in our health and wellness domain, we've developed applications that are queue based. So they're essentially business processes that consist of multiple steps where each step has its own queue.
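A minimal sketch of that queue-per-step pattern follows; the step names and the priority ordering are assumptions for illustration, not Walmart's health and wellness implementation.

```python
# Minimal sketch of the queue-per-step pattern described above. Step names and
# the priority ordering are hypothetical.
from collections import deque

# Each business-process step gets its own queue; earlier entries in this dict
# are treated as more latency sensitive.
steps = {
    "checkout": deque(),          # most latency sensitive
    "inventory_update": deque(),
    "analytics_export": deque(),  # least latency sensitive; can lag when offline
}

def enqueue(step: str, message: dict) -> None:
    steps[step].append(message)

def drain(bandwidth_budget: int) -> None:
    """Spend a limited bandwidth budget on the most latency-sensitive queues first."""
    for name, queue in steps.items():  # dict preserves insertion (priority) order
        while queue and bandwidth_budget > 0:
            message = queue.popleft()
            # send(message) over 5G/Starlink would go here
            bandwidth_budget -= 1
        # remaining queues simply grow until bandwidth is restored
```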
And what that allows us to do is devote whatever bandwidth we do have to those pieces of the process that are most latency sensitive and allow the queue lengths to increase in parts of the process that are not latency sensitive, knowing that they will eventually catch up when the bandwidth is restored. And to put that in a little bit of context, we have fiber links to all of our locations, and we have, I'll just use a round number, 10-ish thousand locations. It's larger than that, but that's the ballpark, and we have fiber to all of them, but the fiber does get disconnected on a regular basis. In fact, I forget the exact number, but several dozen locations get disconnected daily, just by virtue of the fact that there's construction going on and things are happening in the real world. When the disconnection happens, we're able to fall back to 5G and to Starlink. Starlink is preferred; it's higher bandwidth. 5G if that fails. But in each of those cases, the bandwidth drops significantly. And so the applications have to be intelligent about throttling back the traffic that isn't essential, so that they can push the essential traffic in those lower bandwidth scenarios. >> So much technology to support this amazing business which started in the early 1960s. Jack, unfortunately, we're out of time. I would love to have you back, or some members of your team, and drill into how you're using open source, but really thank you so much for explaining the approach that you've taken and participating in SuperCloud2. >> You're very welcome, Dave, and we're happy to come back and talk about other aspects of what we do. For example, we could talk more about the data lakes and the data mesh that we have in place. We could talk more about the directions we might go with serverless. So please look us up again. Happy to chat. >> I'm going to take you up on that, Jack. All right. This is Dave Vellante for John Furrier and the CUBE community. Keep it right there for more action from SuperCloud2. (upbeat music)

Published Date : Feb 17 2023


Jim Walker, Cockroach Labs | DockerCon 2021


 

(bright upbeat music) >> Hello, and welcome back to the DockerCon 2021 virtual coverage. I'm John Furrier, host of theCUBE, here in Palo Alto with a remote interview with a great guest, CUBE alumni, Jim Walker, VP of Product Marketing at Cockroach Labs. Jim, great to see you remotely coming into theCUBE. Normally we're in person; soon we'll be back in real life. Great to see you. >> Great to see you as well John, I miss you. I miss seeing you live and in person. So this will have to do, I guess, right? >> We had the first multi-cloud event in New York City. You guys had, I think, one of the last events that was going on towards the end of the year before the pandemic hit. So a lot's happened with Cockroach Labs over the past few years, accelerated growth, funding, amazing stuff here at DockerCon, containerization of the world, containers everywhere and all places, hybrid, pure cloud, edge everywhere. Give us the update on what's going on with Cockroach Labs and then we'll get into what's going on at DockerCon. >> Yeah, Cockroach Labs, this has been a pretty fun ride. I mean, I think about two and a half years now, and John, it's been phenomenal as the world kind of wakes up to distributed systems and the containerization of everything. I'm happy we're at DockerCon talking about containerization 'cause I think it has radically changed the way we think about software, but more importantly it's starting to take hold. I think a lot of people would say, oh, it's already taken hold, but if you start to think about like just these kind of modern applications that are depending on data, what does containerization mean for the database? Well, Cockroach has got a pretty good story. I mean, gosh, before Escape I think the last time I talked to you, I was at CoreOS and we were playing the whole Kubernetes game, and I remember Alex Polvi talking about GIFEE, Google infrastructure for everyone, or for everyone else I should say. And I think that's what we've seen kind of happen with the infrastructure layer, but I think that last layer of infrastructure is the database. Like I really feel like the database is that dividing line between the business logic and infrastructure. And it's really exciting to see just massive huge customers come to Cockroach to rethink what the database means in cloud, right? What does the database mean when we move to distributed systems and that sort of thing, and so, momentum has been building here. We are upwards of, oh gosh, over 300 paying customers now, thousands of Cockroach customers in the wild out there, but we're seeing this huge massive attraction to CockroachCloud, which is a great name. Come on, Johnny, you got to say, right? And our database as a service. So getting that out there and seeing the uptake there has just been, it's been phenomenal over the past couple of years. >> Yeah and you've got to love the Cockroach name, love it, survive nuclear war and winter, all that good stuff as they say, but really the reality is that it's kind of an interesting play on words because one of the trends that we've been talking about, I mean, you and I've been telling this for years with our CUBE coverage around Amazon Web Services, early on it was very clear about a decade ago that there wasn't going to be one database to rule the world. There are going to be many, many databases. And as you started getting into these cloud native deployments at scale, use your database of choice was the developer ethos, just whatever it takes to get the job done.
Now you start integrating this in a horizontally scalable way with the cloud, you have now new kinds of scale, cloud scale. And it kind of changed the game on the always-on availability question, which is how do I get high availability? How do I keep things running? And that is the number one developer challenge. Whether it's infrastructure as code, whether it's security shifting left, it all comes down to making sure stuff's running at scale and secure. Talk about that. >> Yeah, absolutely, and it's interesting. It's been, like I said, this journey in this arc towards distributed systems and truly like delivery of what people want in the cloud. It's been a long arc and it's been a long journey, and I think we're getting to the point where people are starting to kind of bake resilience and scale into their applications, and I think that's kind of this modern approach. Look, we're taking legacy databases today. There are people kind of doing lift and shift, move them into the cloud, try to run them there, but they just aren't built for that infrastructure. There's a fundamentally different approach to infrastructure when you talk about cloud. It's one of the reasons why, John, early on in your conversations with the AWS team, what they did, it's like, yeah, how do we give resilient and ubiquitous and always-on, scalable kind of infrastructure to people. Well, that's great for those layers, but when you start to get into the software that's running on these things, it isn't lift and shift and it's not even move and improve. You can't just take a legacy system and change one piece of it to make it kind of take advantage of the scale and the resilience and the ubiquity of the cloud, because there are very, very explicit challenges. For us, it's about re-architect and rebuild. Let's tear the database down and let's rethink it and build from the ground up to be cloud native. And I think the technologies that have done that, that have kind of built from scratch to be cloud native, are the ones that, I believe, three years from now that's what we're going to be talking about. I mean, this comes back to, again, like the genesis of what we did is Google Cloud Spanner, the Spanner white paper, and what Google did. They didn't use an existing database because they needed something for a transactional relational database. They hired a bunch of really incredible engineers, right? They've got like Jeff Dean and Sanjay Ghemawat over there, like designing and doing all these cool things, and they built it, and I think that's what we're seeing, and I think that's, to me, the exciting part about data in the cloud as we move forward. >> Yeah, and I think the Google cloud infrastructure, everyone, I think that's the same mindset for Amazon, is that I want all the scale, but I don't want to do it like over 10 years, I want to do it now, which I love, I want to get back to in a second. But I want to ask you specifically about this definition of containerization of the database. I've heard that kicked around, love the concept. I kind of understand what it means, but I want you to define it for us. What does it mean when someone says containerizing the database? >> Yeah, I mean, simply put, the database in a container, and run it, and that's all. I think that's like maybe step one. I think that's kind of lift and shift. Let's put it in a container and run it somewhere. And that's not that hard to do. I think I could do that. I mean, I haven't coded in a long time, but I think I could figure that out.
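As a concrete version of that "step one," a single CockroachDB node can be started in a container using the Docker SDK for Python and the public cockroachdb/cockroach image; the image tag and ports below are assumed defaults for a local, insecure development setup, not a production configuration.

```python
# A sketch of "step one": run a single database node in a container with the
# Docker SDK for Python. Image tag and ports are assumed dev-only defaults.
import docker

client = docker.from_env()
container = client.containers.run(
    "cockroachdb/cockroach:latest",   # assumed tag; pin a release in practice
    "start-single-node --insecure",   # dev-only mode, no TLS
    ports={"26257/tcp": 26257,        # SQL wire protocol
           "8080/tcp": 8080},         # admin UI
    detach=True,
)
print(container.short_id)
```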
It's when you start to actually have multiple instances of a container, right? And that's where things get really, really tricky. Now we're talking about true distributed systems. We're talking about how do you coordinate data? How do you balance data across multiple instances of a database, right? How do you actually have failover so that if one node goes down, a bunch of them are still available? How do you guarantee transactional consistency? You can't just have four instances of a database, all with the same information in it, John, without any sort of coordination, right? Like you hit one node and you hit another one in the same account, which transaction wins? And so there are concepts in distributed systems around this; there's this thing called the CAP theorem: consistency, availability, and partition tolerance. And actually understanding how these things work, especially for data in distributed systems, to make sure that it's going to be consistent and available and you're going to scale, those things are not simple to solve. And again, it comes back to this. I don't think you can do it with a legacy database. You kind of have to re-architect, and it comes down to where data is stored, it comes down to how it's replicated, it comes down to really, ultimately, where it's physically located. I think when you deploy a database you think about the logical model, right? You think about tables, and normalization, and referential integrity. The physical location is extremely important as we kind of move to containerized and distributed systems, especially around data. >> Well, you guys are here at DockerCon 2021, Cockroach Labs, good success, love the architectural flexibility that you guys offer. And again, bringing that scale, like you mentioned, it's an awesome value proposition, especially if people want to just program the infrastructure. What's going on with DockerCon specifically, a lot of talk about developer productivity, a lot of talk about collaboration and trust with containers, big story around security. What's your angle here at DockerCon this year? What's the big reveal? What's the discussion? What's the top conversation? >> Yeah, I mean, look at where we are, a containerized database, and we are an incredibly great choice for developers. For us, it's, look, there are certain developer communities that are important on this planet, John, and this is one of them, right? I don't know a developer that doesn't have that little whale up in their status bar, right? And for us, you know me, man, I believe in this tech and I believe that this is something that's going to greatly simplify our lives over the next two to three to 10 to 15 years. And for us, it's about awareness. And I think once people see Cockroach, they're like, oh my God, how did I ever even think differently? And so for us, it's kind of moving in that direction. But ultimately our vision, where we want to be, is we want to abstract the database to a SQL API in the cloud. We want to make it so simple that I just have this REST interface, there are endpoints all over the planet. And as a developer, I never have to worry about scale. I never have to worry about DR, right? It's always going to be on. And most importantly, I don't have to worry about low latency access to data no matter where I'm at on the planet, right? I can give every user this kind of sub-50 millisecond access to data, or sub-20 millisecond access to data. And that is the true delivery of the cloud, right?
Like I think that's what the developer wants out of the cloud. They want to code against a service, and it's got to be consumption-based and secure, and I don't want to have to pay for stuff I'm not using, and all those things. And so, for us, that's what we're building to, and interacting in this environment is critical for us because I think that's where our audience is. >> I want to get your thoughts on this. You guys do have success with a couple of different personas and developer groups out there: classic developers, software developers, which is this show, DockerCon, full of developers; KubeCon, a lot of operators, cool, and some devs, but mostly cloud native operations. Here, it's a developer show. So you guys got to hit the developers, who really care about building fast, building to scale, and lasting, with security. Architects you've had success with, which is the classic cloud architecture, which is now distributed computing, we get that. But the third area I would call kind of the role that both the architects and the developers had to take on, which is being the DevOps person, or then becomes the SRE in the group, right? So most startups have the DevOps team developers. They do DevOps natively and within every role. So they're the same people provisioning. But as you get larger in an enterprise, the DevOps role, whether it's in a team or a group, takes on this SRE, site reliability engineer, function. This is a new dynamic that brings engineering and coding together. It's not so much an ops person. It's much more of like an engineering developer. Why is that role so important? And we're seeing more of it in dev teams, right? Seeing an SRE person or a DevOps person inside teams, not a department. >> Yeah, look, John, I mean, we employ an army of SREs that manage and maintain our CockroachCloud, which is CockroachDB as a service, right? How do you deliver kind of a world-class experience for somebody to adopt a managed service, a database such as ours, right? And so for us, yeah, I mean, SREs are extremely important. So we have kind of a personal opinion on this, but more importantly, I think, look, if you look at Cockroach and the architecture of what we built, I think Kelsey Hightower at one point said, and I am going to probably mess this up, but there was a tweet that he wrote. It's something like, CockroachDB is to Spanner as Kubernetes is to Borg. And if you think about that, I mean, that's exactly what this is, and we built a database that was actually amenable to the SRE, right? This is exactly what they want. They want it to scale up and down. They want it to just survive things. They want to be able to script this thing and basically script the world. That's how they want to manage and maintain. And so for us, I think our initial audience was definitely architects and operators, and it's the KubeCon crowd, and they're like, wow, this is cool. This is architected just like Kubernetes. In fact, like etcd, which is a key piece of Kubernetes, we contribute back to etcd with our Raft implementation. So there's a lot of the same tech here. What we've realized though, John, with databases is interesting. The architect is choosing a database sometimes, but more often than not, a developer is choosing that database. And it's like they go out, they find a database, they just start building, and that's what happens.
So, for us, we made a very critical decision early on: this database is wire compatible with Postgres and it speaks SQL syntax, which, if you look at some of the other solutions that are trying to do these things, those things are really difficult to do at the end. So that was a critical decision, to make sure that it's amenable so that now we can build the ORMs and all the tools that people would use and expect of Postgres from a developer point of view, but let's simplify and automate and give the right kind of platform that the SREs need as well. And so for us the last year and a half is really about how do we actually build the right tooling for the developer crowd too. And we've really pushed really far in that world as well. >> Talk about the aspect of the scale of, like, say a startup for instance, 'cause you made this a great example, Borg to Kubernetes, 'cause Borg was Google's internal Kubernetes-like thing. So you guys have Spanner, which everyone knows is a great product Google had, and you guys are almost the commercial version of that for the world. I mean, some people will say, and I just want to challenge you on this and we'll get your thoughts: I'm not Google, I'll never be Google, I don't need that scale. So how do you address that point, because some people might dismiss the notion of using it on that basis. How do you respond to that? >> Yeah, John, we get this all the time. Like, I'm not global. My application's not global. I don't need this. I don't need a tank, right? I just need, like, I just need to walk down the road. You know what I mean? And so, the funny thing is, even if you're in a single region and you're building a simple application, does it need to be always on, does it need to be available? Can it survive the failure of a server or a rack or an AZ? It doesn't have to survive the failure of a region, but I tell you what, if you're successful, you're going to want to start actually deploying this thing across multiple regions. So you can survive a backhoe hitting a cable and the entire east coast going out, right? Like, and so with Cockroach, it's real easy to do that. So it's four little SQL commands and I have a database that's going to span all those regions, right? And I think that's important, but more importantly, think about scale. When a developer wants to scale, typically it's like, okay, I'm going to spin up Postgres and I'm going to keep increasing my instance size. So I'm going to scale vertically until I run out of room. And then I'm going to have to start sharding this database. And when you start doing that, it adds this kind of application complexity that nobody really wants to deal with. And so forget it, just let the database deal with all that. So we find this thing extremely useful for the single developer in a very small application, but the beautiful thing is, if you want to go global, great, just keep adding nodes. Like when that application does take off and it's the next breakthrough thing, this database is going to grow with you. So it's good enough to kind of start small, but it'll scale fast, it'll go global if you want to, you have that option, I guess, right? >> I mean, why wouldn't you want optionality on this at all? So clearly a good point.
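For a sense of what that looks like in practice, here is a sketch that leans on the Postgres wire compatibility mentioned above: a stock psycopg2 connection against a local insecure node, followed by multi-region statements in the style of newer CockroachDB releases (the "four little SQL commands"). The database name, region names, and connection string are placeholder assumptions.

```python
# Because CockroachDB speaks the Postgres wire protocol, a stock Postgres driver
# works. Connection string assumes the local insecure dev node from the earlier
# sketch; region names are placeholders and must match the cluster's localities.
import psycopg2

conn = psycopg2.connect("postgresql://root@localhost:26257/defaultdb?sslmode=disable")
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE DATABASE IF NOT EXISTS bank")
cur.execute('ALTER DATABASE bank SET PRIMARY REGION "us-east1"')  # requires node localities
cur.execute('ALTER DATABASE bank ADD REGION "us-west1"')
cur.execute('ALTER DATABASE bank ADD REGION "europe-west1"')

cur.close()
conn.close()
```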
Let me ask you a question, take me through a use case where, with Cockroach, some scenario develops nicely, where you can point to the visibility of the use case for the developer and then kind of how it played out, and then compare and contrast that to a scenario that doesn't go well, like where what we do plays out well, for an example, and then where, if they didn't deploy it right, they got hung up and it went sideways. >> Yeah, like, Cockroach was built for transactional workloads. That's what we are, we are optimized for the speed of light and consistent transactions. That's what we do, and we do it very well. At least I think so, right. But I think, like, my favorite customer of all of ours is DoorDash, and about a year ago DoorDash came to us and said, look, we have a transactional database that can't handle the write volume that we're getting and it falls over. And they had significant challenges, and if you think about DoorDash and DoorDash's business, they're looking at an IPO in the summer, and going through that, you can't have any issues. So like the system's got to be up and running, right? And so for them, it was like, we need something that's reliable. We need something that's not going to come down. We need something that's going to scale and handle burst and these sorts of things, and their business is big. Their business is not just let me deliver food all the time. It's deliver anything, like be that intermediary between a good and somebody's front door. That's what DoorDash wants to be. And for us, yeah, their transactions and that backend transactional system is built on Cockroach. And that was one year ago; they needed to get experience. And once they did, they started to see that this was like very, very valuable in lots of different workloads they had. So anywhere there's any sort of transactional workload, be it metadata, be it any sort of like inventory, or transaction stuff that we see in companies, that's where people are coming to us. And it's these traditional relational workloads that have been wrapped up in these transactional relational databases that weren't built for the cloud. So I think what you're seeing is that's the other shoe to drop. We've seen this happen, you're watching Databricks, you're watching Snowflake kind of do this whole data cloud, and then the analytical side, John, that's been around for a long time and there's that move to the cloud. That same thing that happened for OLAP has got to happen for OLTP. Where we don't do well is when somebody thinks that we're an analytic database. That's not what we're built for, right? We're optimized for transactions, and I think you're going to continue to see these two sides of the world, especially in cloud, especially because I think that, the way that our global systems are going to work, you don't want to do analytics across multiple regions, it doesn't make sense, right? And so that's why you're going to see this, the continued kind of two markets, OLAP and OLTP, going on, and we're just squarely on that OLTP side of the world. >> Yeah, talking about the transaction processing side of it, when you start to change to a distributed architecture that goes from core to edge, core on premises to edge, edge being intelligent edge, industrial edge, whatever, you're going to have more action happening. And you're seeing Kubernetes already kind of talking about this, and with the containers you've got, so you've got kind of two dynamics. How does that change the nature of, and the level of volume of, transactions?
>> Well, it's interesting, John. I mean, if you look at something like Kubernetes, it's still really difficult to do multi-region or multicloud Kubernetes, right? This is one of those things where, as you start to move Kubernetes to the edge, you're still kind of managing all these different things. And I think it's not the volumes, it's the operational nightmare of that. For us, the answer is to federate at the data layer. Like, I could deploy Cockroach across multiple Kubernetes clusters today and you're going to have one single logical database running across those. In fact, you can deploy Cockroach today on top of three public cloud providers: I can have nodes in AWS, I could have nodes in GCP, I could have nodes running on VMs in my data center. Any one of those nodes can service requests and it's going to look like a single logical database. Now that to me, when we talked about multicloud a year and a half ago or whatever that was, John, that's an actual multicloud application, and delivering data so that you don't have to actually deal with that in your application layer, right? You can do that down in the guts of the database itself. And so I think it's going to be interesting, the way that these things get consumed and the way that we think about where data lives and where our compute lives. I think that's part of what you're thinking about too. >> Yeah, so let me, well, I got you here. One of the things on my mind that I think people want to maybe get clarification on, real quick while you're here: take a minute to explain the difference between CockroachDB and CockroachCloud. They are different products; you've brought them both up. What's the difference for the developers watching? What's the difference between the two and when do I need to know the difference between the two? >> So to me, they're really one, because CockroachCloud is CockroachDB as a service. It's our offering that makes it a world-class, easy to consume experience of working with CockroachDB, where we take on all the hardware, we take on the SRE role, we make sure it's up and running, right? You're getting a connection string and coding against it. And I think that side of our world is really all about this kind of highly evolved database and delivering that as a service, and what you're actually using is CockroachDB. Where I think it just gets really interesting, John, is the next generation of what we're building: this serverless version of our database, where this is just an API in the cloud. We're going to have one instance of Cockroach with a multi-tenant database in there, and any developer can actually spin up on that. And to me, that gets to be a really interesting world when the world turns serverless, and we're running our compute in Lambda and we're doing all these great things, right? Or we're using Cloud Run on Google, right? But what's the corresponding database to actually deal with that? And that to me is a fundamentally different database, 'cause what is scale in the serverless world? It's autonomous, right? What's scale in the current, like, Cockroach world? You kind of keep adding nodes to it, you manage it, you deal with that, right? What does resilience mean in a serverless world? It's just, yeah, it's there all the time. What's important is latency: when you get to kind of serverless, where are these things deployed?
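A small sketch of the "single logical database" idea described above: the addresses below are hypothetical stand-ins for CockroachDB nodes running in AWS, GCP, and an on-prem data center, and the only point is that a client can point a standard Postgres connection at any of them and see the same data.

# Sketch: the same query against different CockroachDB nodes returns the same
# logical data, whichever cloud or data center each node happens to run in.
# All hostnames and credentials are hypothetical placeholders.
import psycopg2

NODE_ADDRESSES = [
    "crdb-aws-us-east.example.com",     # node running in AWS
    "crdb-gcp-us-central.example.com",  # node running in GCP
    "crdb-onprem-dc1.example.com",      # node on a VM in our own data center
]

for host in NODE_ADDRESSES:
    conn = psycopg2.connect(
        host=host, port=26257, dbname="orders",
        user="app_user", password="app_pass", sslmode="require",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM orders")
        print(f"{host}: {cur.fetchone()[0]} orders")
    conn.close()

In other words, the federation happens in the database layer, so the application code does not change depending on which cloud is serving the request.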
And I think to me, the interesting part of the two sides of our world is what we're doing with serverless and how we actually expose the core value of CockroachDB in that way. >> Yeah, and I think that's one of the things that is the Nirvana or the holy grail of infrastructure as code: making it, I won't say irrelevant, but invisible. If you're really dealing with a database thing, hey, I'm just scaling and coding and the database stuff is just working with the compute, whatever, and that's serverless. You mentioned Lambda, that's the action, because you don't want to be naming files and deciding what the database is; just having it happen is more productivity for the developers, and that kind of circles back to the whole productivity message for the developers. So I totally get that, I think that's a great vision. The question I have for you, Jim, is the big story here is developer simplicity. How are you guys making it easier to just deploy? >> John, it's just an extension of the last part of the conversation. I don't want a developer to ever have to worry about a database. That's what Spencer and Peter and Ben have in their vision. It's how do I make the database so simple? It's simple, it's a SQL API in the cloud. Like, it's a REST interface, I code against it, I run queries against it, I never have to worry about scaling the thing. I never have to worry about creating active and passive, and primary and secondary. All of these, like, the DevOps side of it, all this operations stuff, it's just kind of done in the background, dude. And if we can build it, and it's actually there now where we have it in beta, what's the role of the cost-based optimizer in this new world that we've had in databases? How are you actually ensuring data is located close to users? And we're automating that, so that when John's in Australia doing a show, his data is going to follow him there. So he has fast access to that, right? And that's the kind of stuff, we're talking about the next generation of infrastructure, John, we're not building for today. Like, look, Cockroach Labs is not building for, like, 2021. Sure, we have something that's great. We're building something that's for '22 and '23 and '24, right? Like, what do we need to be as an extremely productive set of engineers? And that's what we think about all day. How do we make data easy for the developer? >> Well, Jim, great to have you on, VP of Product Marketing at Cockroach Labs, we've known each other for a long time. I've got to ask you while I have you here, final question: you and I have chatted about the many waves in open source and in the computer industry. What's your take on where we are now? And I see you're looking at it from the Cockroach Labs perspective, which is large scale distributed computing, kind of, you're on the new side of history, the right side of history, cloud native. Where are we right now? Compare and contrast for the folks watching who are trying to understand the importance of where we are in the industry. Where are we, and what's your take? >> Yeah, John, I feel fortunate to be in a company such as this one, and the past couple that I've been around, and I feel like we are in the middle of a transformation. And it's just like the early days of this next generation. And I think we're seeing it in a lot of ways in infrastructure, for sure, but we're starting to see it creep up into the application layer.
And for me, it is so incredibly exciting to see. The cloud, remember when cloud was like this thing that people were like, oh boy, maybe I'll do it? Now it's like, anything net new is going to be on cloud, right? Like we don't even think twice about it, and the coming wave of cloud native and these technologies that are coming are going to be really interesting. I think the other piece that's really interesting, John, is the changing role of open source in this whole game, because I think of open source as code, consumption, and community, right? I think about those, and then there's licensing, of course; a lot of people get wrapped around the licensing. Consumption has changed, John. Back when we were talking Hadoop, consumption was like, oh, it's free, I get this thing, I can just download it and use it. Well, consumption over the past three years, everybody wants everything as a service. And so we're ready to pay. For us, how do we bring free back to the service? And that's what we're doing. That's why I am so incredibly excited to go through this kind of bringing free beer back to open source. I think that's going to be great, 'cause if I can give you a database free up to five gig or 10 gig, man, and it's available all over the planet and it's fully featured, that's coming, that's bringing our community and our code, which is all open source, and this consumption model back. And I'm super excited about that. >> Yeah, free beer, who doesn't like free beer? Of course, developers love free beer, and a great t-shirt too, that's soft. Make sure you get that, get the soft one. >> You just don't want a free puppy, you know what I mean? It was just like, yeah, that sounds painful. >> Well, Jim, great to see you remotely. Can't wait to see you in person at the next event. And we've got the fall window coming up. We'll see some events. I think KubeCon in LA is going to be in-person, re:Invent for sure we'll be in person. I know that for a fact we'll be there. So we'll see you in person, and congratulations on the work at Cockroach Labs. >> Thanks, John, great to see you again. All right, this is theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host of theCUBE. Thanks for watching.

Published Date : May 19 2021

Day One Wrap | Google Cloud Next 2018


 

(upbeat music) >> Live from San Francisco, it's theCUBE covering Google Cloud Next 2018, brought to you by Google Cloud and its Ecosystem Partners. >> Hello everyone, and welcome back to theCUBE's live coverage, here in San Francisco at Moscone South. I'm John Furrier with SiliconANGLE on theCUBE, with my cohost Dave Vellante, for the next three days. Day one wrap-up of Google Next here, Google Cloud's premiere event. This is a different Google. It's a world-changing event for Google, in my opinion. Dave, I want to analyze day one as we put it in the books. Let's analyze and let's look at it, and critique and observe the moves that Google's making vis-à-vis the competition. And Diane Greene, who was on theCUBE earlier, great guest. Kind of in her comfort zone here on theCUBE because when she talks, she's an engineer, she's super smart. She thinks free thoughts, but she really has a good chessboard view of the landscape. My big takeaway today is that she's got full command of what she wants to do, but she's in an uncomfortable position that I think she's not used to. And that is, at VMworld, at VMware, she didn't have competition. First mover, changes the market. Certainly winning on all fronts when VMware was starting. And they morphed over, and then you know the history of VMware: sold to EMC and now the rest is history. But they really changed the category. They created a category. And were very successful in IT with virtual machines. She's got competition in Cloud. She's playing from behind. She's got the big guns. She's going to bring out the howitzers, you know? I mean she's got Spanner, BigQuery, all the scale, Kubernetes, whose internal name is Borg, which has been running on the Google infrastructure, provisioning services on all their applications with billions and billions of users. If she can translate that, that's key. So that's one observation. And the second one is that Google is taking a data centric view. Their competitive advantage is dealing with data. And if you look at everything that they're doing, from TensorFlow for AI to all the themes here, they are positioning Google as the place to bring your data. Okay, that is clear to me as a stake in the ground. Along with the large scale technical infrastructure they're going to roll out with SREs. Those two things to me are the front and center major power moves that they're making. The rest wrapping around it is Kubernetes, Istio, a service oriented architecture managing services not products, and providing large scale value to their customers that don't want to be Google. They want to be like Google in the benefits of scale, which comes in automation. And I think the headroom for Google Cloud is IT operations. So that's kind of my take. I think day one, the people we've had on from Google, sharp as nails, know enterprise tech. Jennifer Lin, Deepti, Diane Greene. The list goes on and on. What's your take? >> Well so, first of all, with what's goin' on here and Diane Greene, the game she's playing now is completely different obviously than VMware, where it was all about cutting costs. VMware, when you think about it, sold for $635 million to EMC way back when. So, it was just a little scratch compared to what we're talkin' about now. She didn't have the resources. The IT business, you remember Nick Carr's famous piece in HBR, 'Does IT Matter?' That was the sentiment back then. IT, waste of time, undifferentiated. Just cut costs. Cut, cut, cut. Perfect for VMware. The game they're playing now is totally different.
As you said, they were late to the enterprise. Ironically, late to the "enterprise cloud." >> They've got competition. >> They've got competition. Obviously the two big ones, Microsoft and, of course, AWS. But so my take away here is: the differentiation. So they're not panicking. They're obviously playing the open source card. Kubernetes, TensorFlow, etc. Giving back to the community. Data, they're definitely going to lead in AI and machine intelligence. No question about it. So they're going to play that card. The database, we had the folks from Cloud Spanner on today. Amazing technology. Whereas you think about it, they're talkin' about a transaction-oriented database. We heard a customer today talking about how they replaced Oracle. Right? We got rid of Oracle, now-- >> When was the last time you heard that? Not many times. >> It's not often. No, and they're only a $120 million company. But her point was it's game changing for us. It's a 10-X value proposition. And we're getting the same quality that we're getting out of our Oracle databases. They're leading with apps on Google Cloud. Twitter is there. Spotify. They obviously have a lot of history. So that's part of it, part of the focus. On SiliconANGLE.com, there's a great article by Mark Albertson. He talked about the-- he compared the partner Ecosystem. Google's only about 13,000 partners. Amazon 100,000. Azure 70,000. So a long way to go there. Serverless, they're catching up on serverless. But they're still behind. Kind of still in Beta, right? But serverless, John, I'd love your take on this. It can be as profound as virtualization was. Last, developer love. They've got juice with developers. And then the technology. Massive scale. We heard things about Spanner, the relational semantics. BigQuery, Kubernetes, TensorFlow. They have this automate-or-die culture. You talked about this in your article. That's a bottoms-up engineering culture. Much different than the traditional enterprise top-down "Go take that hill! You're going to get shot at, but take that hill by midnight." >> It's true. Well I mean, first of all, I think developers are in charge. I think one of the things that's happening is that it's clear that every company, whether you're a startup or large enterprise, has to come to grips with whether they're going to be a software company. And that's easy to say: "Oh, that's easy. You just hire some software developers." No, it's not that easy. One, there are software developers coming out. But the way IT was built and the way people were buying IT, it's just not compatible with what software developers want to do. They want to work in a company that's actually building software. They don't want to be servicing infrastructure. So, saying that everyone's going to be a software company is one thing. That's true. And so that's the challenge. And I think Google has an opportunity, just like Amazon has been dominating with a service-oriented approach, managing services, by creating building blocks that create large scale and allow people to write software easily. And I think that's the keyword. How do I make things a common interface? You asked Diane Greene about common primitives. They're going to do the foundational work needed. It might be slower. But at a core primitive level, they'll do that work. Because it'll make everything faster. This is a different mind shift. So again, you also asked one of the guests, I forget who it was, IT moves at very slow speeds.
It's like a caravan-- >> You said glacial >> But yeah, well that used to be. But they have to move faster. So the challenge is: how do you blend the speed of technology, specifically on how modern software is being written, when you have Cloud Scale opportunities? Because this is not a cost cutting environment. People want to press the gas, not the brake. So you have a flywheel developing in technology, where if you are right on a business model observation, where you can create differentiation for a business, this is now the Cloud's customers. You know, you're a bank, you're a financial institution, you're manufacturing, you're a media company. If you can see an opportunity to create a competitive advantage, the Cloud is going to get you there really fast. So, I'm not too hung up on who has the better serverless. I look at it like a car. I want to drive the car. I always want to make sure the engine doesn't fall out or tires don't break. But so you got to look at it, this is a whole 'nother world. If you're not in the Cloud, you're basically on horse and buggy. So yeah, you're not going to have to buy hay. You don't have to deal with horses and clean up all the horse crap on the street. I mean all of that goes away. So IT, buying IT, is like horse and buggy. Cloud is like the sports car. And the question is 'Do I need air-conditioning?' 'Do I need power windows?' This is a whole new view. And people just want to get the job done. So this is about business. Future work. Making money. >> So-- >> And technology is going to facilitate that. So I think the Cloud game is going to get different very fast. >> Well I want to pick up on a couple things you said. Software, every company's becoming a software company. Take Andreessen, said 'Software is eating the world' If software's eating the world, data is eating software. So you've got to become a data company, as well as, a software company. And data has to be at the core of your business in order to compete. And data is not at the core of most company's businesses. So how do they close that gap? >> Yeah >> You've talked about the innovation sandwich. Cloud, data, and AI are sort of the cocktail that's going to drive innovation in the future. So if data is not at the core of your company, how are you going to close that AI gap? Well the way you're going to close is you're going to buy AI from companies like Google and Amazon and others. So that's one point. >> Yeah, and if you don't have an innovation sandwich, if you don't have the data, it's a wish sandwich. You wish you had some meat. >> You wish you had it right (Laughing) Wish I had some meat. You know the other thing is, you mentioned Diane Greene in her keynotes said "We provide consistency "with a common core set of primitives" And I asked her about that because it's really different than what Amazon does. So Amazon, if you think about Amazon data pipeline, and we know because were customers. We use DynamoDB, we use S3, we use all these different services in the data pipeline. Well, each of those has a different API. And you got to learn that world. What Google's doing, they're just simplifying that with a common set of primitives. Now, Diane mentioned, she said there's a trade off. It takes us longer to get to market if-- >> Yeah, but the problem is, here's the problem. Multicloud is a real dynamic. So even though they have a common set of primitives, if you go to Azure or AWS you still have different primitives over there. 
So the world of Multicloud isn't as simple as saying 'moving workloads' yet. So although you're startin' to see good signs within Google to say 'Oh, that's on prim, that's in the Cloud' 'Okay that's hybrid' within Google. The question is when I don't have to hire an IT staff to manage my deployments on Azure or my deployments on AWS. That's a whole different world. You still got to learn skill sets on those other-- >> That's true >> On other Clouds >> But as your pipeline, as your data pipeline grows and gets more and more complex, you've got to have skill sets that grow. And that's fine. But then it's really hard to predict where I should put data sometimes and what. Until you get the bill at the end of the month and you go "Oh I should've put that in S3 instead of Aurora" Or whatever it is. And so Google is trying to simplify that and solve that problem. Just a different philosophy. Stu Miniman asked Andy Jassy about this, and his answer on theCUBE was 'Look we want to have fine grain control over those primitives in case the market changes. We can make the change and it doesn't affect all the other APIs we have' So that was the trade off that they made. Number one. Number two is that we can get to market faster. And Diane admitted it slows us down but it simplifies things. Different philosophy. Which comes back to differentiation. If you're going to win in the enterprise you have to believe. I get the sense that these guys believe. >> Well and I think there's a belief but as an architectural decision, Amazon and Google are completely different animals. If you look at Amazon and you look at some of the decisions they make. Their client base is significantly larger. They've been in business longer. The sets of services they have dwarf Google. Google is like on the bar chart Andy Jassy puts up, it's like here, and then everyone else is down here, and Google's down here. >> Yeah and the customer references, I mean, it's just off the charts >> So Google is doing, they're picking their spots to compete in. But they're doing it in a very smart engineering way. They can bring out the big guns. And this is what I would do. I love this strategy. You got hardened large scale technology that's been used internally and you're not trying to peddle that to customers. You're tweaking it and making it consumable. Bigtable, BigQuery, Spanner. This is tech. Kubernetes. This is Google essentially being smart. Consuming the tech is not necessarily shoving it down someone's throat. Amazon, on the other hand, has more of a composability side. And some people will use some services on Amazon and not others. I wouldn't judge that right now. It's too early to tell. But these are philosophy decisions. We'll see how the bet pans out. That's a little bit longer term. >> I want to ask you about the Cisco deal. It seems like a match made in heaven. And I want to talk specifically about some of the enterprise guys, particularly Dell, Cisco, and HPE. So you got Dell, with VMware, in bed with Amazon in a big way. We were just down at DC last month, we heard all about that. And we're going to hear more about it this fall at re:Invent. Cisco today does a deal with Google. Perfect match, right? Cisco needs a cloud, Google needs an enterprise partner. Boom. Where's that leave HP? HP's got no cloud. All right, and are they trying to align? I guess Azure, right? >> Google's ascension-- >> Is that where they go? They fall to Azure? >> Well that's what habit is. That's the relationship. The Wintel. 
>> Right >> But back up with HP for a second. The ascension of Google Cloud into the upper echelon of players will hurt a few people. One of them's obviously Oracle, right? And they've mentioned Oracle and the Cloud Spanner thing. So I think Oracle will be flat-footed by, if Google Cloud continues the ascension. HPE has to rethink, and they kind of look bad on this, because they should be partnering with Google Cloud because they have no Cloud themselves. And the same with Dell. If I'm Dell and HP, I got to get out of the ITOps decimation that's coming. Because IT operations and the manageability piece is going to absolutely be decimated in the next five years. If you're in the ITOps business or IT management, ITOM, ITIL, it's going to get crushed. It's going to get absolutely decimated. It's going to get vaporized. The value is going to be shifted to another part of the stack. And if you're not looking at that if your HPE, you could essentially get flat-footed and get crushed. So HP's got to be thinking differently. But what Google and Amazon have, in my opinion, and you could even stretch and say Alibaba if you want a gateway to China, is that what the Wintel relationship of Windows and Intel back in the 80s and 90s that created massive innovations So I see a similar dynamic going on now, where the Cloud players, we call them Cloud native, Amazon and Google for instance, are creating that new dynamic. I didn't mention Microsoft because I don't consider them yet in the formal position to be truly enabling the kind of value that Google and Amazon will value because-- >> Really? Why not? >> Because of the tech. Well and I think Amazon is more, I mean Microsoft is more of a compatibility mode (Talking over each Other) I run Microsoft. I've got a single server. I've got Office. Azure's got good enough, I'm not really looking for 10-X improvement. So I think a lot of Microsoft's success is just holding the line. And the growth and the stock has been a function of the operating model of Cloud. And we'll see what they do at their show. But I think Microsoft has got to up their game a bit. Now they're not mailing it in. They're doing a good job. But I just think that Google and Amazon are stronger Cloud native players straight up on paper, right? And if you look up their capability. So the HPEs and the Ecosystems have to figure out who's the new partner that's going to make the market. And rising tide will float all boats. So to me, if I am at HP I'm thinking to myself "Okay, I got to manage services. "I better get out in front of the next wave "or I'm driftwood" >> Well Oracle is an interesting case too. You mentioned Oracle. And somebody said to me today 'Oracle they're really hurting' And I'm like most companies would love to be hurting that badly but-- >> Oracles not hurting >> Their strategy of same-same but it's the same Oracle stack brought into the Cloud. They're sending a message to the customers 'Look you don't have to go to another Cloud. 'We've got you covered. We're investing in R&D', which they do by the way. But it was really interesting to hear from the Cloud Spanner customer today that they got a 10-X value, 10-X reduction in costs, and a 10-X capability of scaling relative to Oracle that was powerful to hear that. >> There's no doubt in my mind. Oracle's not hurting. Oracle's got thousands and thousands of customers that do hundreds of millions of dollars in revenue. And categories that people would love to have. 
The question on Oracle is the price pressure. It's an innovator's dilemma, because there's no doubt that Oracle could just snap a few fingers and replicate the kind of deliverables that people are offering. The question is can they get the premium that they're used to getting. One. Number two, if everyone's a software company, are they truly delivering the value that's expected? To be a software company, to be competitive, not just to keep the lights on-- >> To enable >> To enable competitive-- (Talking over each other) Competitive advantage at a level, that's to me, going to be the real test of how Cloud morphs. And I question that; you've got to be agile and have real top line revenue numbers where you're using technology at a cost benefit ratio that drives value-- >> But with Oracle-- >> If Oracle can get there then that's what we'll see. >> The reason why they'll continue to win is because they move at the speed of the CIO. The CIO, and they'll say all the right things: AI-infused, blockchain, and machine learning, and all that stuff. And the CIOs will eat it up because it's a safe bet. >> Well, I want to get your thoughts because I talked about this a couple years ago. Last year we started harping on it. We got it more into theCUBE conversation around Cloud being horizontally scalable, yet at the top of the stack you've got vertical differentiation. That's great for data. Diane Greene in her keynote said that the vertical focus with engineering resources tied to it is a key part of their strategy. Highlighted healthcare as their first vertical. Talked about the National Institutes of Health deal-- >> Retail >> NGOs, financial services, manufacturing, transportation, gaming and media. You've got Fortnite on there, a customer in both Clouds. Start ups and retail. >> Yeah, he had the target cities >> Vertical strategy is kind of an old enterprise playbook. Is that a viable one? Because now with the kind of data, if you've got the data sandwich, maybe specialism and verticals can scale. Your thoughts? >> I'll tell you why it is. I'll tell you why it's viable. Because of digital. So for years, these vertical stacks have been hardened. And the expertise and the business process and the knowledge within that vertical industry, retail, transportation, financial services, etc., has been hardened. But with digital, you're seeing it all over the place. Amazon getting into content. Apple getting into content. Amazon getting into groceries. Google getting into healthcare. So digital allows you to not only disrupt horizontally at the technology layer, but also vertically within industries. I think it's a very powerful disruption agenda. >> Analytics seems to be the killer app. That's the theme here: data. Maybe take it to the next step. That's where the specialism is. That's where the value's created. Why not have vertical specialty? >> No and >> Makes a lot of sense >> And it's a different spin. It's not the traditional-- >> Stack >> Sort of hire a bunch of people with that knowledge in that stack. No, it's really innovate and change the game and change the business model. I love it. >> That was a great surprise to me. Dave, great kicking off day one here this morning. Ending day one here with this wrap-up. We've got three days of wall-to-wall coverage. Go to SiliconANGLE.com. We've got a great Cloud special; Rob Hof, veteran chief of the team, Mark Albertson, and the rest of the crew put some great stories together. Go to theCUBE.net and check out the video coverage there. That's where we're going to be live.
And of course Wikibon.com for the analyst coverage from Peter Burris and his team. Check that out. Of course theCUBE is here. Day one. Thanks for watching. See you tomorrow.

Published Date : Jul 25 2018

Craig McLuckie, Heptio - Google Next 2017 - #GoogleNext17 - #theCUBE


 

(upbeat music) >> Announcer: Live from Silicon Valley, it's theCUBE, covering Google Cloud Next '17. >> Welcome back to theCUBE's coverage of Google Next 2017. 10,000 people are in San Francisco, SiliconANGLE Media, we've got reporters there, as well as the Wikibon analysts. I've been up there for the analyst event, some of the keynotes, and we're getting thought leaders, partners, really getting lots of viewpoints as to what's happening, not just in the Google Cloud, but really the multi-Cloud world. And that's why I'm really excited to bring back a guest that we've had on the program before, Craig McLuckie, who, four months ago, was with Google, but he's now the CEO of Heptio, and he's also one of the co-creators of Kubernetes, which anybody that's watching the event definitely has been hearing plenty about Kubernetes, so, welcome back to the program. >> Thanks for having me back. >> Yeah, absolutely, I know you were part of a little event that kind of went before the Google Cloud event, brought in some people in the Cloud ecosystem, talked about a lot that was going on. Maybe start us off with, what led you to kind of pop out of Google, what is Heptio, and how does that kind of extend what you were doing with Kubernetes when you were at Google? >> Certainly. So Heptio is a company that has been created, by my co-founder Joe and myself, to bring Kubernetes-- >> Stu: That's Joe Beda. >> Joe Beda. >> Stu: Yeah. To bring Kubernetes to enterprises, and the thing that really motivated me to start this company was the sense that there was not an unfettered Kubernetes company in existence. I spoke to a lot of organizations that were having tremendous success with Kubernetes. It was transforming the way they approached infrastructure management. It created new levels of portability for their workloads. But they wanted to use Kubernetes on their own terms, in ways that made sense to them. And most every other organization that is creating a Kubernetes distro has attached it to other technologies. It's either attached to an opinionated operating system, or it's attached to a specific cloud environment, or it's attached to a PaaS, and it just didn't meet the way that most of the customers I saw wanted to use the technology. I felt that a key missing part of this ecosystem was a company that would meet the open source community where it is and help customers that just needed a little bit more help. A little more help with training, a bit of documentation support, and the tools they needed to make themselves successful in the environments that they wanted to operate in. And that's what motivated Joe and I to start this company. >> Yeah, and it's interesting, 'cause you look at the biggest contributors, Google's there, you've got Red Hat, you've got, as you said, people that have their viewpoint as to where that fits. I think that that helps the development overall, but maybe you can help us unpack there. Why do you want, is it separate? Is there that opinionated-ness? What's inherently sub-optimal about that? (laughing) >> I think part of the key value in Kubernetes is the fact that it supports a common framework in a highly heterogeneous world. Meaning you can mix together a broad variety of things to your needs. So you could mix together the right operating system, in the right hosting environment, with the right networking stack. And you could run general applications that are then managed and performed in a very efficient and easy to use way.
And, one of the things that I think is really important is this idea that customers should have choice; they should be picking the infrastructure based on the merits of the infrastructure. They should pick the OS that works for them, and they should be able to put together a system that operates tremendously well. And I think it's particularly critical, at this juncture, that a layer emerges that allows customers, and service providers, to mix together the sort of things that they want to use and consume, in a way that's agnostic to the infrastructure and the operating environment. I see the mainstream cloud providers taking us in some ways back to the world of the mainframe. If you think about what we're starting to see, with companies like Amazon, who are spectacularly successful in the market, it's this world where you have this deeply vertically integrated service provider that provides not only the compute, but also the set of core services, and almost everything else that you need to run. And, at the end of the day, it's getting to a point where a customer has to kind of pick their service provider. And, you know, it was like that for those using IBM back then; it was also sub-optimal from an ecosystem perspective. It inhibited innovation in many ways. And it was the emergence of Wintel, that sort of Windows and Intel ecosystem, that really opened up the vendor ecosystem and drove a tremendous amount of innovation and advancement. And, you know, when I think about what enterprise customers want and need today, they want that abstraction. They want a safe way to separate out the set of services that run their business, the set of technologies that they build and maintain, from the underlying infrastructure. And I think that's what's driving a lot of the popularity of Kubernetes, this idea that it is a logical infrastructure abstraction that lets you pick the environment that you operate in purely based on the merits of the environment. >> Yeah, it's been a struggle, I mean, I know through my entire career in IT, we've had that discussion of "do I just standardize on what we have?" 'Cause, the enterprise today, absolutely, every time I put a new technology in, it doesn't displace, it adds to it. So, I talk to lots of customers still using mainframe. They're using the Wintel stuff, they're using public cloud, they're using, you know, yes, and, and, and, and therefore, managing it, orchestrating it, doing all those pieces, that's difficult. The challenge when I put an abstraction layer in, and one of the big challenges is, how do I really get the full value out of the pieces that I had? Sam Ramji said that, when he was at Cloud Foundry, they were trying to make it so that you really don't care which cloud, whether it's on premises or public cloud environments. And he said one of the reasons he joined Google was because he felt you couldn't do that; if you went least common denominator or something, there were things Google was doing that nobody else can do. So there's always that balance of "can I put an abstraction layer or virtualize something, and take advantage of it?" or "do I just go all in with one vendor?" I mean, IBM back in the day did lots of great things to make it simple, and cloud is trying to make it simple, lots of things, Amazon of course, no doubt that they're trying to vertically integrate everything they would like to do. You know, all your services. So, where do you see that balance?
And, it's interesting, does it serve customers best to be able to say "okay, you can take that mess that you have," and therefore, is this a silver bullet to help them solve it? >> I think it's a really good point. And, consistently, as I look through history, a lot of the platforms that people have pursued, that created this sort of complete decoupling, introduced this lowest common denominator problem, where you had to trade off a set of things that you really wanted against the capabilities of the platform. And, you know, I think that absolutely, in some cases, it makes a tremendous amount of sense to invest in a vendor specific technology. So let's take an example out of Google, Cloud Spanner. Cloud Spanner is literally the only globally consistent, well, right now it's regionally consistent, but it's literally the only globally consistent relational store available. There is nothing like it. The CockroachDB folks are building something that emulates some of the behavior, but without the TrueTime API, that sort of atomic clock, you know, crazy infrastructure that Google's built, it adds very little utility. And so, in certain applications and certain workloads, if what you really want is a globally replicated, highly consistent relational data store, there is literally only one provider on the planet that would deliver it, which is Google. However, you might look at, you know, something that Amazon provides, and they may have some other service. Perhaps you've already built something on Redshift, and you want to be able to use that. Or Microsoft might offer up some other technologies that make sense to you. And I think it's really important for enterprises to have the option. There are times when, for a given workload, it makes a tremendous amount of sense to bet on a vendor: if you're looking to run something that has deep machine learning hooks, or needs some other science fiction technology that Google's bringing to the world, it makes sense to run that on Google. For applications that are potentially integrated into a productivity suite, if you're an Office 365 user, it probably makes sense to host it on Microsoft. And then, perhaps there's some other pieces that you run on Amazon. And I don't think it's going to be pick one cloud provider and live in that static world forever. I think the landscape is constantly evolving and shifting. And one of the things technologies like Kubernetes provide is an option. An option to move, an option to decide which specific services you want to pull through and use in which application. Recognizing that those are going to bind you to that cloud provider in perpetuity, but not necessarily pulling the entirety of your IT structure through. >> Yeah, Craig, I'm curious. When I look out at kind of the people that commentate on this space, one of the things they say is "Kubernetes is interesting, but this whole hybrid cloud thing, kill all the on premises stuff, public cloud's really where it's at." I know when I talk to most companies, they've got plenty of on premises stuff; most infrastructure that is bought is still, there's a lot of it going on premises. So companies are sorting out what applications go where, what data goes where. Diane Greene said only 5% of the world's data really is in the public cloud today. What's your view on kind of that on premises, public cloud piece, and Kubernetes' role there? >> Yeah, I think it's a great question. And I have had some really interesting conversations with CIOs in the past.
I remember in my very earliest days, pooh-poohing the idea of the private cloud, and having a really intense CIO look across the table, and he was like "you will pry my data centers from my cold, dead hands." (Stu laughing) He literally said that to me. And so, there's certainly a lot of passion in this space, and I think, at the end of the day, one has to be pragmatic. You know, first of all, one has to recognize that if you're an organization that has bought a significant data center footprint, you're probably going to want to continue to use that asset that you've acquired, and you're going to want to use that in perpetuity. If you're a company, and most large companies are also naturally heterogeneous, meaning as you go through an acquisition, the acquired portion of your company may have a profoundly different IT portfolio. You know, it may have a different set of environments. And so, I think the world certainly benefits from an abstraction layer that allows you to train your engineers with a certain set of skills, and then be highly decoupled from the infrastructure environment you run in. And I think, again, Kubernetes is delivering some of that promise in a way that I think really resonates with customers. >> Absolutely, and even though we've been telling people for years "stop building data centers," you know, there's very few companies that want to build data centers; yes, Google talks about their data centers, but Amazon? Gets their data center space from lots of other players there. But, if I stop building data centers today, I'm going to have them for another 25, 30 years, and even then, what am I going to owe myself? I talk to plenty of the big financial guys; they're not going to move all of their information. They want to have it under their control, whether it's their own data center or a hosted managed environment. So, we're going to be living with this multi-cloud thing for a long time. >> There is another thing that I don't think people have fully internalized yet, which is, in many ways, the way that cloud provider data centers are structured is around power sources. At the end of the day, it's around cheap power and cooling. As you start looking at the dynamics of what's happening to our energy grid, it's no longer quite as centralized as it was. And it starts to beg the question: "does it make sense to think about smaller units that are more distributed? Does it make sense to start really thinking about Edge compute capacity?" The option to deploy something really close to your customers if you need to attain low latency in certain scenarios. Or, the option to push a lot of capacity into your distribution center, if you're running heavy IoT workloads, where you just don't want to put all that data on the network. And so I think that, again, certainly, I think that people underestimate the power of Amazon, Microsoft and Google. People that are still building data centers today don't realize quite how remarkable the vendors at that scale are, in terms of their ability to build and run these things. But I do think that there are some interesting options, in terms of regional locality, data sovereignty, Edge latency, that legitimize other types of deployment. >> Yeah, and you talked about IoT; Edge computing absolutely is something that comes up a lot there.
At AWS re:Invent last year, Amazon put their serverless solution, using Greengrass, out at the Edge, because there are tons of scenarios where I might not have the networking, or I can't have the latency I need, to do the compute there. How do things like serverless at the Edge, and IoT, play into the discussion of Kubernetes? >> I think it plays really well, insofar as Kubernetes, it's not intrinsically magic. What it has done is created a relatively simple, and it turns out, pretty reusable abstraction that lets you run a broad array of workloads. I wouldn't say it's exactly cracked the serverless paradigm in terms of event-driven, low cost of activation computing, but that's something that can certainly be built on top of it. The thing that it does do is it provides you the ability to manage an application as if it were software as a service, in a location that is remote from you, by providing you a very principled, automated framework for operations. >> Alright, Craig, last thing I want you to do is give us an update on Heptio. How many people do you have? How are you engaging with customers? What does the business model look like for that? What can you share? >> So, we're currently 13 people. We've been in business for four months, and we've been able to hire some really amazing folks out of the distributed systems communities. We are at a point where we're starting to provide our first supported configurations of Kubernetes. We don't position ourselves as a distribution provider; we rather like to think of ourselves as an organization that's invested in helping users get the most out of the upstream community. Right now, our focus is on training, support, and services, and over time, if we do that really well, we do aspire to provide a more robust set of product capabilities that help organizations succeed. For now, the thing that we focus most relentlessly on is helping customers manage down the cost of supporting a cluster. How do we create a better way for folks to understand what a configuration should look like? When are they likely to encounter issues? And if they do encounter those issues, helping them resolve them in the lowest friction and least painful way possible. >> Alright, and any relationships with the public cloud guys? What do you work with when you talk about OpenStack, Amazon, Google, Microsoft? What's the relationship and how do those work? >> So we announced the first joint quick start for Kubernetes with the Amazon folks last Tuesday. And that's been going pretty well. We're getting a lot of positive feedback around that. And we're now starting to think more broadly in terms of providing supported configurations on premises and then on Microsoft. So Amazon, for us, was the obvious starting point. It felt like an under-supported community from a Kubernetes perspective, insofar as Microsoft has our friend Brendan Burns, who helped us build Kubernetes in the first place. And he's been doing some great work to bring Kubernetes to the Azure container service. What we really wanted to do was to make sure that Kubernetes runs well on Amazon, and that it is naturally integrated into the Amazon operating model, so CloudFormation templates, and we have a really principled way to manage, maintain, upgrade and support those clusters. >> Alright, Craig McLuckie, co-creator of Kubernetes, and CEO of Heptio. Really appreciate you coming here to our Palo Alto studio, helping us as we get towards the end of two days of live coverage of Google Cloud Next 2017. You're watching theCUBE.
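As a rough illustration of the point Craig keeps returning to, that Kubernetes gives you one principled, automated management surface no matter whose infrastructure sits underneath, the sketch below uses the official Kubernetes Python client to ask two different clusters the same question. The kubeconfig context names are hypothetical (say, a GKE cluster and an on-prem or edge cluster); the API calls themselves are standard Kubernetes list operations.

# Sketch: one client library, one API, several clusters. The context names are
# hypothetical entries in a local kubeconfig; the calls are ordinary Kubernetes
# list operations that behave the same way regardless of where each cluster runs.
from kubernetes import client, config

for context in ["gke-prod-cluster", "edge-site-cluster"]:
    api_client = config.new_client_from_config(context=context)
    core = client.CoreV1Api(api_client=api_client)

    nodes = core.list_node()
    pods = core.list_pod_for_all_namespaces()
    print(f"{context}: {len(nodes.items)} nodes, {len(pods.items)} pods")

That uniformity is the "logical infrastructure abstraction" discussed earlier: operational tooling is written once against the Kubernetes API and reused across clouds, data centers, and edge sites.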
(upbeat music)

Published Date : Mar 10 2017

Tarun Thakur, Datos IO - Google Next 2017 - #GoogleNext17 - #theCUBE


 

(The Cube Theme) >> Voiceover: Live from Silicon Valley, it's the Cube, covering Google Cloud Next '17. >> Hey, welcome back, and we're here live in Palo Alto for a special two days of coverage of Google Next 2017. I'm John Furrier here in The Cube. We have reporters and analysts on the ground who are calling in, getting reaction on all the great news, and of course, Google's march to the enterprise cloud really is the big story. They have their cloud that they've been powering with their own infrastructure, and it had a great presence powering their own stuff, just like Amazon.com had Amazon Web Services; Google Cloud is now powering Google and others. Diane Greene, the new CEO, taking the reins, making things happen, we covered that news, and for an entrepreneurial perspective we have Tarun Thakur, who is co-founder and CEO of Datos.io, formerly at Data Domain, been in the business, and newly funded, a Series A entrepreneur backed by True Ventures and Lightspeed. >> That is correct, John, thank you. >> Thanks for coming on. Tell us what you guys do first. Explain what you guys as a company are doing. >> Absolutely. I'd love to first thank you for the opportunity. It's a pleasure to be here. About Datos, I'll sort of zoom out a little bit, and if you see what's really happening out in the industry, our founding premise, me and my co-founder, Prasenjit, our founding principle is very simple. There are some transformative changes happening in the application era. I was just listening to Akash from SAP talk, and enterprise workloads are moving to the cloud. That was our founding premise: not only do you have those IoT workloads, these SaaS workloads, the real-time analytics workloads, being born in the cloud, but you have all these traditional workloads that are moving as fast as they can to the cloud. So if you really look at that transformative change, we have a very simple founding premise: applications define the choice of the IT stack underneath them. What do we mean by that? The choice of the database, the choice of the storage, the choice of all the data management tooling around it, starting with protection, starting with governance, compliance, and so on and so forth, right? So if the application workloads are under disruption, and they're moving to the cloud, the impact that has on the IT stack underneath is phenomenal. >> So Tarun, you guys had a great write-up in the Register. Chris Miller, who is well known in the story, 'cause we all follow him, he's a great guy, and very fair, but he can be critical, too, he's very snarky. We like his columns. He called you guys the Tesla of the backup world. What does he mean by that? Does he mean it like you have all the bells and whistles of a modern thing, or is there a specific nuance to why he's calling you the Tesla of the backup world? >> No, this is excellent, John. You know, we are fortunate and we're honored. >> Electric backup? I mean, what's happening here? (laughing) I mean, what does he mean by that? What's the meaning? >> Couldn't have given us a better privilege than what he gave. Had a chance to host him in the office, a small office, much smaller than what you have here, in December, and a 45-minute session became a two-hour session, and really he dug into why the Tesla, and essentially it goes back to, John, you had the traditional workloads running on your traditional databases, classical scale-up operational databases like Oracle and SQL.
Now, you're dealing with these next-generation, hyperscale distributed applications. IoT and real-time analytics are building on that theme, and those are being deployed fundamentally on distributed architectures. Your Apache Cassandra, your Amazon DynamoDB, your Google Spanner, now that we're talking in the context of Google Cloud Next, right? When you look at those distributed architectures, there's so much fundamental shift. You don't run them on shared storage, you don't have media servers anymore in the cloud- >> You have the edge. You have the edge out there. >> You have the edge computing. Given all those changes, you have to fundamentally rethink backup, and that's essentially what we did. Just going back to Tesla, Tesla was started with a fundamentally seminal architecture. >> So you thought this through from the ground up. That's essentially one point, and the other one is that it's modern in the sense that it's really taken advantage of the new architecture. >> That's absolutely right, you know, when we started, again, back in June of 2014, we really started with the end in mind, ten years, the next ten years ahead of us, and the end in mind was, "Look, it's going to be distributed architectures, it's going to be your hyperscale applications, the webscale applications, and you need to be able to understand data and protect it and recover it and manage your data at that scale." >> Okay, so you guys are also Google partners, so you have an interesting perspective. You're on the front lines, a Series A entrepreneur, you haven't cleared the runway yet. You still have to prove yourself. The game is just starting; you don't end it with the financing. That's just validation for the vision and the mission, and you've had some good press so far from Chris. Now as you execute, you have a partner in Google. What's your analysis of Google, as someone who's close to them? Certainly as an entrepreneur, you're nimble, you're fast, you understand the tech, you mentioned Spanner, great horizontal scale of opportunity, but some of the enterprises might be a little slower, and they have a different orientation, so help us understand: what's Google doing? What's their main focus? >> I'll give you an answer in a three-part series. Number one, we are, again, a start-up; seriously, as you said, we have a lot ahead of us, and even though we've been out here for three years, it feels like yesterday. (laughing) >> John: It's a grind. >> It is a grind, but on partnering with Google Cloud, one of our key marquee customers is a Fortune 100 home improvement retailer, under NDA, so I cannot take their name, out of respect. >> John: Well the Register says Home Depot. (laughing) >> Okay. >> Okay, so- >> I'll let Chris do the honor, but it's a Fortune 100 home improvement retailer, John, and their line of business, their entire e-commerce platform, from the CIO down, has moved, migrated from DB2 to Google Cloud. It's not running DB2 on Google Cloud Platform, it's running all on a distributed, massive scale- >> So did they sunset DB2 or did they completely- >> Tarun: Completely migrated away from DB2. >> Okay. >> It's part of the digital transformation journey Home Depot is on. They are three years in, they have two more years to go, and as part of the digital transformation journey they're on, they are now running their e-commerce website, which, think of you and me going at Thanksgiving and buying your home tools, and that application runs on a highly scalable Apache Cassandra database on Google Cloud.
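Rethinking backup for a distributed database like Cassandra usually starts with getting a versioned copy of a table out of the live cluster without any shared storage or media server in the path. The sketch below is only an illustration of that idea, not Datos IO's actual mechanism: it uses the open-source Python cassandra-driver to page through a table and write a timestamped export, and the contact points, keyspace, and table names are invented for the e-commerce example.

    import json
    import time
    from cassandra.cluster import Cluster

    # Hypothetical contact points and keyspace for the e-commerce example.
    cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    session = cluster.connect("ecommerce")

    # Tag the export with a timestamp so each run is an independent "version"
    # that can be recovered later, independently of the live cluster.
    version_tag = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())

    with open(f"orders-{version_tag}.jsonl", "w") as out:
        # The driver pages through the result set automatically.
        for row in session.execute("SELECT * FROM orders"):
            out.write(json.dumps(row._asdict(), default=str) + "\n")

    cluster.shutdown()
    print(f"Exported version {version_tag} of ecommerce.orders")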
Now, second part, going back to large-scale enterprises: Home Depot, being as progressive as they are, understood that cloud does not mean recoverability. Cloud gives me the scale, cloud gives me the economics, cloud gives me the availability, but it doesn't give me the point in time, and I need to be covered against that "what if" moment. We have all the Delta moments, we have all the GitLab moments, Salesforce.com down with that human error, right? You don't want to be in that position as a Home Depot. >> You mean Amazon went down? >> Tarun: And Amazon. >> Yeah, Amazon went down. >> And if you read the analysis, the analysis was, "We're sorry guys, there was a human error. Somebody was meant to change this directory; he changed that directory." >> So this is a whole new game. One of the fears that the enterprises have with a new architecture, besides security, which is a huge issue, and we'll have another segment on that shortly, is that I want to leverage the capabilities of the partner in the cloud, because manageability, certain things, I don't want to build on my own, and so I can see you guys being a new, modern piece, because the data piece is so important, because I'm storing at the edge, I'm not moving data around, so there's not as much data in motion as there is on premise. Is that a big part of this? >> It is. I'll zoom out again: from a CIO perspective, we've pitched this to about 100+ CIOs so far. From there it is truly, and I hate to use this word, but it's truly a multi-cloud world, John. They have invested in private clouds and an on-prem infrastructure that ain't going anywhere anytime soon. They are moving some of their SAP instances to a CenturyLink, to MSPs, the managed service providers, but they know, as a CIO, I have my application developers and I have my lines of businesses- >> John: And they have their operations guys, too. >> Who want to go as fast as they can. I'll come back to the operations in a second, because you'll be very surprised to hear this, but again, from the CIO down, he wants to make his application developers go as fast as they can, and he wants the lines of business just to go open up the next applications- >> John: Because that's top-line revenue right there. >> That's top-line revenue right there. So they want scale, they want agility, but they don't want to sacrifice that insurance piece. Going back to the IT ops and the dev-ops and the classical ops, you'll be surprised: we've been working with this team, our lead-in to the Fortune 100 home improvement retailer was a line of business, but right now it's all about their core IT team. Their IT ops team, the database admins, the database ops people, they are the ones who are really running this product day to day, day in and day out, scaling it, and using it at the pace they need to. >> What's the big misconception, if you could point to one, about Google? Because one of the things we're trying to surface is that Amazon and Google, it's not an apples-to-apples comparison, they're different clouds, and it is multi-cloud, I want to get you to that question today, but we can get to that in a second, what your definition of that means, but for now, what is the big misconception in your mind that people might have about Google? >> That's a great question, John, and I was hearing your previous interview with Akash, and again, I'll give you our partner-centric view: a young start-up built something disruptive for that platform. We got Amazon as the first platform.
We have a good set of customers running on Amazon, and of course, this home improvement retailer took us to Google Cloud: "Hey guys, if you want to work with us, you have to support Google Cloud." We went to Google Cloud, and the amount of pull that we got from the Google Cloud folks to make it happen in less than three months was phenomenal. They didn't stop at that. They brought their solution architect team; Google Cloud, their team, wrote a paper about Datos and posted it on their website: "How to use Datos on Google Cloud." Fascinating. Amazon has never done that. It, again, speaks to, if you see all the announcements that came out yesterday, Google Cloud has been a significant- >> Well Google's partnering, Google's partnering. One of the things that came out of today's news that has been teased out is Diane Greene said in the keynote, "I like partnering." She used the words, "I like partnering," meaning Google, and she has that DNA. She's from VMware, she knows the valley game, she understands ecosystems. She also likes to work on some cool stuff, which could be a double-edged sword. She's always been innovating. But Google has the tech, and she knows enterprise, so they're marching down that road. What areas would you say Google needs to sharpen up a little bit to kind of move faster on? I mean, obviously there's no critique on them; they're pedaling as fast as they can, but in the areas you think they should work on, is it security, is it the data side? What are the things that you think they've got to pedal a little faster on? >> I would definitely start with the enterprise touch. I think they need to really amp up the game around enterprise. >> John: You mean the people, the process? >> The people, the processes, the onboarding, the deployment, giving them the blueprint templates, giving them reference architectures, hand-holding them a little bit, and I think that'll go a long ways- >> John: The basic enterprise motions. >> Yes, you need that. You're a cloud; that doesn't mean my database guy is not going to need the help of a Google Cloud admin to help me onboard. They need that wrap-up. From their side, they build phenomenal, scalable services. Snap invested two billion dollars in Google Cloud. They understand- >> And Amazon got the other half, but- >> The underlying infrastructure is there. >> Yeah, but this is the thing. The problem is that there are two perspectives of what we see. One is people want to run like Google in the sense of how they're scaling, but not everyone has Google-like infrastructure, so I think Google has to kind of, they want the developers, and in my mind, they get an A+ there, with open source, what they do with Kubernetes and whatnot, but the operational orientation is something they've got to work on; SLAs are more important than price. >> Managing the orchestration piece, giving them the visibility, letting them come on and come off, and going back to multi-cloud, I'll tell you again, the same customer took us to a use case which is so fascinating, John. They want on-prem backup and recovery. Remember, protection is the Trojan horse. Protection, it all starts with protection. >> It's always one of those things that's always been front and center. You saw that. It used to be kind of a throw-away thing. "Oh, what about backup? Oh, we didn't factor that in." Now it's front and center, and certainly cloud is going to be impacted because data's everywhere. Data's going to be highly frictionless.
Okay, question, and final question on this piece, where we talk about what you guys are doing: what does multi-cloud mean, or two questions: what is the definition of multi-cloud, and what does cloud-native mean to you? Define those terms. >> Absolutely. Those two terms are very, very close to us. So multi-cloud, I'll begin with that. I'll give you a customer use case that will hopefully ground the conversation. Multi-cloud essentially means, from a customer perspective: I'm going to run on-prem infrastructure, I want to be able to recover or manage that data in the cloud, I don't want to make multiple copies, I don't want to duplicate data, I want to recover a version of that data in the cloud. Why? Because I have my application developers who want to test stuff. I want my DR to be in a different cloud. I do not want to put all my eggs in one basket. So again, it is truly- >> John: It's a diversity issue. >> It is, and they want multiple use cases to be spread across clouds. Some clouds have strength in DR, some clouds, like Amazon, have strength in orchestration and onboarding, and some cloud platforms, like Google Cloud, have strengths in, hey, you can bring your application developers and you don't have to worry about retail. Some of the retailers, like Gap, like Safeway, like eBay, those guys will hesitate to go to Amazon because they know Amazon, at the heart, is a retail business. >> So conflict there. Now, cloud-native. Define cloud-native. >> Cloud-native, to us, is you have Oracle running that database natively within the services of the cloud. For example, take Amazon DynamoDB. It's a beautiful example of a cloud-native service. You don't run DynamoDB on-prem. It was built ultimately for the cloud. Cloud Spanner, another example of cloud-native. It is built for that infrastructure, from the ground up, and has been nurtured for the last ten years for the elastic infrastructure. >> Alright, Tarun, great to have you on. Quick plug for what you guys are doing. What's next? You got the Series A, you're getting customers, you got a big customer you can't talk about, but it's in the Register article, Home Depot. What other things are you working on? What are the key priorities? Hiring? You've got some new announcements coming up, I hear. Rumor mill, I won't say who they are, but you're partnering. What's the key focus? What are your key objectives? >> No, we only stay focused on building, and as you said early on, it's still early for us. We want to stay focused on getting customer acquisition, customer momentum, deploying those customers, making them happy customers, having them become referenceable customers for us, and of course, the next big focus for me personally is going to be bringing in some of the people to the team, some of the people who can help me scale the company- >> John: Engineering- >> Engineering, marketing, business development, sales, go to market, so that's going to be the second area we're going to focus on, and third, and again, you'll hear the announcement coming very quickly, we're going to be partnering with some of the leading enterprise infrastructure companies, both the traditional enterprise storage companies and some of the leading, I'm just going to leave it at that. >> And True Ventures is the seed investor, and Lightspeed on the Series A, with True in on the Series A with them. 'Cause they tend to follow, they don't leave you hanging. >> Yeah, Puneet is excellent. I love him. >> Yeah, Jon Callaghan's company's got great stuff.
And they had some great exits, they had Fitbit, and they've got a lot of great stuff going on. >> Well, they're excellent, excellent pro-entrepreneur people. Great to work with as well. >> High integrity, great people. Tarun, thanks for coming on and sharing the entrepreneurial perspective, the innovation perspective, certainly as a Google partner; good to have your reaction and analysis. >> Thank you, John. >> It's The Cube, bringing you all the action from Google Next here in our studio. More Google Next coverage after this short break. (The Cube Theme)
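The multi-cloud pattern Tarun describes above, recovering a specific backup version in a cloud other than the one it was taken in, comes down to moving one versioned object between providers rather than mirroring everything. Below is a minimal sketch of that cross-cloud copy using boto3 and the google-cloud-storage client; the bucket names and object key are hypothetical, and this is only an illustration of the idea, not Datos IO's product.

    import boto3
    from google.cloud import storage

    # Hypothetical locations: backup versions land in S3, DR copies live in GCS.
    SOURCE_BUCKET = "onprem-backup-landing"
    TARGET_BUCKET = "dr-recovery-versions"
    BACKUP_KEY = "cassandra/orders/2017-03-08T00-00Z.tar.gz"

    s3 = boto3.client("s3")
    gcs = storage.Client()

    # Pull one specific backup version out of the primary cloud...
    local_copy = "/tmp/orders-backup.tar.gz"
    s3.download_file(SOURCE_BUCKET, BACKUP_KEY, local_copy)

    # ...and land the same version in a second cloud, so recovery does not
    # depend on the provider that had the outage.
    gcs.bucket(TARGET_BUCKET).blob(BACKUP_KEY).upload_from_filename(local_copy)
    print("Copied", BACKUP_KEY, "to", TARGET_BUCKET)

Keeping the object key identical in both stores is one simple way to let either cloud serve as the recovery source for a given version without maintaining a separate catalog.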

Published Date : Mar 8 2017

