Can We Beat the AKS Sorting Network?
>> Thank you so much for inviting me. I'd like to talk about some new understanding of sorting circuits. We've been working on this line of work since 2017, so I'll mention the results from a few papers to paint, essentially, the frontier of our understanding. This is joint work with many wonderful collaborators. Some of these studies were also partly motivated by a somewhat different quest, that is: how to construct optimal oblivious RAM. Although in this talk I'll mostly focus on the circuit model rather than the ORAM, at the end of the talk I'll quickly mention how the circuit results are related to optimal ORAM, and how they are not related, in other words, how the circuit techniques actually depart from the techniques we use to construct optimal ORAM. I'll only have time to mention the results, and you'll have to read our papers for the details.

Sorting circuits have been studied for many decades. A long-standing open question in the complexity and algorithms literature is the following: does there exist a sorting circuit of o(n log n) size? So what do we know about this question? First, we know that to get anything better than n log n, the circuit cannot be comparison-based: comparison-based sorting has a well-known Ω(n log n) lower bound, and this lower bound applies whether we are in the RAM model or the circuit model. If we forgo the comparison-based requirement, however, we know that on the RAM, sorting can be accomplished in nearly linear time. Unfortunately, these RAM algorithms critically rely on dynamic memory accesses and cannot be converted to the circuit model in a way that preserves efficiency.

Okay, so we've been stuck on this question for several decades. As I mentioned, this is one of the well-known long-standing open questions in the complexity and algorithms literature. Somehow we cannot make progress on either the upper bound or the lower bound front. In some sense, it's almost surprising that after so many years, we still don't understand sorting circuits. So let's see why we are so stuck. On the upper bound front, we are stuck at the AKS sorting network from 1983. The AKS sorting network is comparison-based, so it is actually optimal in the comparison-based model. A long-standing question is: can we beat AKS if we forgo the comparison-based restriction? It turns out that we haven't made any progress at all along this front. On the lower bound side, we also seem to be pretty stuck. Not only do we not know how to prove an Ω(n log n) lower bound for sorting circuits, we don't even know how to prove a superlinear lower bound. And in fact, proving a superlinear circuit lower bound for any explicit problem is beyond the reach of current techniques.

Despite all these long-standing barriers, we were able to make a little progress in terms of understanding sorting circuit complexity, both on the upper bound and the lower bound fronts. On the upper bound side, somewhat imprecisely speaking, we showed that sorting n elements, each tagged with a k-bit key, can be accomplished with a circuit of size on the order of n times k. So if, for example, k is asymptotically smaller than log n, we can actually defeat the AKS sorting network. Our result can also be viewed as a generalization of the AKS sorting circuit. And note that I'm ignoring poly(log* n) terms in the bound.
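To make the notion of a comparison-based sorting circuit concrete, here is a small illustrative sketch in Python (a toy example, not the AKS construction itself, which is far more involved): a fixed sequence of compare-exchange gates that sorts four inputs. The gate sequence is chosen in advance and does not depend on the data, which is exactly what lets such a procedure be laid out as a circuit.

```python
# A tiny comparison-based sorting network on 4 wires: a fixed sequence of
# compare-exchange gates, chosen in advance and independent of the data.
from itertools import product

COMPARATORS_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]  # a standard 5-comparator network

def apply_network(values, comparators):
    """Run a comparator network over a list; each comparator is a
    compare-exchange gate between two fixed wires."""
    wires = list(values)
    for i, j in comparators:
        if wires[i] > wires[j]:
            wires[i], wires[j] = wires[j], wires[i]
    return wires

print(apply_network([3, 1, 4, 2], COMPARATORS_4))  # [1, 2, 3, 4]

# Check it on every 0/1 input; by the zero-one principle (discussed later in
# the talk), a comparator network that sorts all 0/1 inputs sorts everything.
assert all(apply_network(bits, COMPARATORS_4) == sorted(bits)
           for bits in product([0, 1], repeat=4))
```

AKS achieves O(n log n) comparators for general n, and the question in the talk is whether anything smaller is possible once the comparison-gate restriction is dropped.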
On the lower bound side, we showed that, essentially, the above upper bound is tight for every choice of k, either assuming the indivisibility model or assuming the Li-Li network coding conjecture. So let me explain. The indivisibility model assumes that the elements' payload strings are opaque, and the circuit does not perform any encoding or computation on the elements' payload strings. And indeed, almost all of the algorithms we know are in the indivisibility model. Now, the Li-Li conjecture is a well-known conjecture in the area of network coding. It posits that network coding cannot help anything beyond the standard multi-commodity flow rates in undirected graphs. So while no one knows how to prove unconditional superlinear circuit lower bounds, we were able to prove a conditional lower bound. The lower bound also implies that if the Li-Li network coding conjecture is true, then one cannot build a sorting circuit of o(n log n) size for the case of general keys.

So for the rest of the talk, let me say something about this upper bound and why it turns out to be very much nontrivial. In fact, even for the 1-bit key special case, the result is very much nontrivial, and there are many natural barriers towards achieving it. Essentially, for the 1-bit key special case, the result says we can sort 1-bit keys with a linear-sized circuit. I also want to mention that in the problem formulation, besides the 1-bit key, every element also has a payload string. And when you sort, you have to carry the payload string around, because otherwise, had there been no payloads, you could just count how many ones there are and then write down the answer (a small illustration of this point appears below).

Before I talk about why even the 1-bit case has many barriers, let me quickly mention that the 1-bit case has a very cool implication: it implies that the median can be computed with a linear-sized circuit as well. Remember, in your undergrad algorithms class, we learned the textbook algorithm of Blum et al. for computing the median on the RAM, and we know that the median can be computed in linear time deterministically on the RAM. In fact, this is one of my favorites when I teach undergrad algorithms. So you would almost expect that, naturally, you should be able to do the same with a linear-sized circuit. But it turns out to be much harder than you might expect, and no one really knew how until our work. In some sense, the natural barriers for sorting 1-bit keys apply to median too. And for both of these problems, sorting 1-bit keys and median, in the circuit model, believe it or not, the prior best-known solution was the AKS sorting circuit itself, and nothing better was known.

So to help you understand why, for something so natural, so natural that if I didn't tell you it's hard you'd almost take it for granted, let me explain why there are natural barriers. The first barrier was actually described even in Knuth's textbook from the 1970s. The textbook said, essentially, that such a result would not be possible in the comparison-based model. The reason is the zero-one principle: any comparison-based sorting circuit that can sort zero-one keys can also sort general keys. Therefore, the Ω(n log n) lower bound for comparison-based sorting applies even to the 1-bit key case.
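Here is a small sketch of that counting point (the function names are made up for illustration): with bare 1-bit keys and no payloads, sorting collapses to counting the ones, which is easy even as a circuit; it is the payloads, which must actually be routed to their final positions, that make a linear-sized circuit hard. The second function below only pins down the desired input/output behavior using an ordinary RAM sort; it says nothing about how a small circuit would achieve it.

```python
def sort_one_bit_keys_no_payload(keys):
    """Bare 1-bit keys, no payloads: 'sorting' collapses to counting the ones
    and writing the answer down."""
    ones = sum(keys)
    return [0] * (len(keys) - ones) + [1] * ones

def sort_one_bit_keys_with_payload(items):
    """(key, payload) pairs: counting is no longer enough, because each payload
    must be carried to its final position. (An ordinary RAM sort is used here
    only to pin down the input/output behavior.)"""
    return sorted(items, key=lambda kv: kv[0])

print(sort_one_bit_keys_no_payload([1, 0, 1, 1, 0]))                   # [0, 0, 1, 1, 1]
print(sort_one_bit_keys_with_payload([(1, "a"), (0, "b"), (1, "c")]))  # [(0, 'b'), (1, 'a'), (1, 'c')]
```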
Okay, well, to the best of our knowledge, all existing sorting circuit constructions are indeed in the comparison-based model. It's been a natural question whether we can achieve anything better using non-comparison-based techniques. Nothing is known, and this is unlike the RAM model, where we do know how to use non-comparison-based techniques to get interesting results. The second barrier was actually shown recently in our own work, as well as in a work by [00:09:07] and others. It turns out that for the 1-bit key special case, if you require the sorting to be stable, there is again an Ω(n log n) barrier. And again, the barrier holds either assuming the indivisibility model or the Li-Li network coding conjecture. Here, stability means that for elements with the same key, the order in the output array must respect their relative order in the input array.

So, to get a linear-sized sorting circuit for 1-bit keys, not only do we have to forgo the comparison-based restriction, we also have to forgo stability. Finally, once we overcome these barriers and are eventually able to construct a sorting circuit for 1-bit keys, the next question is how to upgrade it to a sorting circuit for k-bit keys. And here we encounter another challenge, and the challenge is exactly because the 1-bit sorting circuit is not stable. Had it been stable, a natural idea would be to use radix sort, but radix sort expects the 1-bit sorting building block to be stable (a small radix sort sketch appears below). So, to do this upgrade, we came up with a new technique, a clever two-parameter recursion technique.

Okay, I won't have time to go into details, so let me quickly comment on the techniques at a very high level. Essentially, we start with Pippenger's self-routing superconcentrator. Imprecisely speaking, if we directly converted his superconcentrator construction to the circuit model, we would incur n log n size. But we can rely on a cool observation that was actually made in earlier work on constructing small-depth, perfect oblivious parallel RAM. In that work, we observed that Pippenger's superconcentrator construction has an online phase and an offline phase. Interestingly, the offline phase depends only on metadata and never looks at the elements' payload strings; and interestingly, it's also the offline phase that's causing the n log n, whereas the online phase is easier to implement with linear-sized circuits. So, by exploiting the fact that the offline phase operates only on metadata, we are able to use a recursive bootstrapping technique to essentially squash the n log n down to something like n times poly(log* n).

So, last but not least, let me say a few words about how this is related to optimal ORAM. For optimal ORAM, we essentially need an oblivious algorithm that sorts 1-bit keys in linear time, and this is on the RAM. To get that, we also rely on Pippenger's superconcentrator construction, and we rely on the same offline/online insight. But to get rid of the n log n in the oblivious RAM model actually requires, in some sense, almost the opposite techniques from the circuit model. In the RAM model, we know that every word has log n bits, and we can simultaneously perform log n bitwise operations at unit cost, because this is an operation on a single word.
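As a reminder of why stability matters for the k-bit upgrade, here is a small sketch of least-significant-digit radix sort built from a stable 1-bit pass (plain Python, purely illustrative): if the 1-bit building block were not stable, the ordering established by earlier passes would be scrambled, which is exactly why a non-stable 1-bit sorting circuit cannot be upgraded this way and a different recursion is needed.

```python
def stable_sort_by_bit(items, bit):
    """Stable 1-bit partition: zeros first, then ones, preserving the input
    order within each group."""
    zeros = [it for it in items if not (it[0] >> bit) & 1]
    ones = [it for it in items if (it[0] >> bit) & 1]
    return zeros + ones

def radix_sort(items, k):
    """LSD radix sort of (key, payload) pairs with k-bit keys, built from k
    stable 1-bit passes. Each pass must preserve the order produced by the
    earlier passes; a non-stable 1-bit pass would break this invariant and
    give a wrong final order."""
    for bit in range(k):  # least-significant bit first
        items = stable_sort_by_bit(items, bit)
    return items

data = [(3, "a"), (1, "b"), (2, "c"), (0, "d"), (2, "e")]
print(radix_sort(data, k=2))  # [(0, 'd'), (1, 'b'), (2, 'c'), (2, 'e'), (3, 'a')]
```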
So, because a single word operation touches all log n bits of a word at unit cost, one of the core techniques we use on the oblivious RAM is packing (a small illustration of this idea follows below). But there is no free packing in the circuit model: in a circuit, every wire and every gate counts. Therefore, the algorithmic tricks we use in the circuit model are actually rather different. Okay, I guess this is about as much as I can say about this line of work. To summarize, it's almost surprising that after so many years, we still don't understand sorting circuits. It looks like we've been pretty stuck since the 1980s, but we were able to push the frontier of our understanding a little bit, both in terms of upper and lower bounds. Thank you so much for your attention.
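To illustrate the packing idea on the word-RAM (a generic sketch, not the actual ORAM construction): several short fields are packed into one machine word, and a constant number of word operations then act on all of them at once. In the circuit model there is no such free lunch, since every one of these bit positions costs its own gates.

```python
# Eight 8-bit lanes packed into one 64-bit word, added lane-by-lane with a
# constant number of word operations (the classic SWAR carry-masking trick).
W, LANE = 64, 8
MASK = (1 << W) - 1
HI = int("80" * (W // LANE), 16)  # the top bit of every lane
LO = HI ^ MASK                    # everything except the top bit of every lane

def pack(lanes):
    word = 0
    for i, v in enumerate(lanes):
        word |= (v & 0xFF) << (i * LANE)
    return word

def unpack(word):
    return [(word >> (i * LANE)) & 0xFF for i in range(W // LANE)]

def lanewise_add(x, y):
    """Add corresponding 8-bit lanes of x and y (mod 256) without letting
    carries leak across lane boundaries."""
    return (((x & LO) + (y & LO)) ^ ((x ^ y) & HI)) & MASK

a = pack([1, 2, 3, 4, 250, 6, 7, 8])
b = pack([9] * 8)
print(unpack(lanewise_add(a, b)))  # [10, 11, 12, 13, 3, 15, 16, 17]  (250 + 9 wraps mod 256)
```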
Jack Greenfield, Walmart | A Dive into Walmart's Retail Supercloud
>> Welcome back to SuperCloud2. This is Dave Vellante, and we're here with Jack Greenfield. He's the Vice President of Enterprise Architecture and the Chief Architect for the global technology platform at Walmart. Jack, I want to thank you for coming on the program. Really appreciate your time. >> Glad to be here, Dave. Thanks for inviting me and appreciate the opportunity to chat with you. >> Yeah, it's our pleasure. Now we call what you've built a SuperCloud. That's our term, not yours, but how would you describe the Walmart Cloud Native Platform? >> So WCNP, as the acronym goes, is essentially an implementation of Kubernetes for the Walmart ecosystem. And what that means is that we've taken Kubernetes off the shelf as open source, and we have integrated it with a number of foundational services that provide other aspects of our computational environment. So Kubernetes off the shelf doesn't do everything. It does a lot. In particular the orchestration of containers, but it delegates through API a lot of key functions. So for example, secret management, traffic management, there's a need for telemetry and observability at a scale beyond what you get from raw Kubernetes. That is to say, harvesting the metrics that are coming out of Kubernetes and processing them, storing them in time series databases, dashboarding them, and so on. There's also an angle to Kubernetes that gets a lot of attention in the daily DevOps routine, that's not really part of the open source deliverable itself, and that is the DevOps sort of CICD pipeline-oriented lifecycle. And that is something else that we've added and integrated nicely. And then one more piece of this picture is that within a Kubernetes cluster, there's a function that is critical to allowing services to discover each other and integrate with each other securely and with proper configuration provided by the concept of a service mesh. So Istio, Linkerd, these are examples of service mesh technologies. And we have gone ahead and integrated actually those two. There's more than those two, but we've integrated those two with Kubernetes. So the net effect is that when a developer within Walmart is going to build an application, they don't have to think about all those other capabilities where they come from or how they're provided. Those are already present, and the way the CICD pipelines are set up, it's already sort of in the picture, and there are configuration points that they can take advantage of in the primary YAML and a couple of other pieces of config that we supply where they can tune it. But at the end of the day, it offloads an awful lot of work for them, having to stand up and operate those services, fail them over properly, and make them robust. All of that's provided for. >> Yeah, you know, developers often complain they spend too much time wrangling and doing things that aren't productive. So I wonder if you could talk about the high level business goals of the initiative in terms of the hardcore benefits. Was the real impetus to tap into best of breed cloud services? Were you trying to cut costs? Maybe gain negotiating leverage with the cloud guys? Resiliency, you know, I know was a major theme. Maybe you could give us a sense of kind of the anatomy of the decision making process that went in. >> Sure, and in the course of answering your question, I think I'm going to introduce the concept of our triplet architecture which we haven't yet touched on in the interview here. 
First off, just to sort of wrap up the motivation for WCNP itself which is kind of orthogonal to the triplet architecture. It can exist with or without it. Currently does exist with it, which is key, and I'll get to that in a moment. The key drivers, business drivers for WCNP were developer productivity by offloading the kinds of concerns that we've just discussed. Number two, improving resiliency, that is to say reducing opportunity for human error. One of the challenges you tend to run into in a large enterprise is what we call snowflakes, lots of gratuitously different workloads, projects, configurations to the extent that by developing and using WCNP and continuing to evolve it as we have, we end up with cookie cutter like consistency across our workloads which is super valuable when it comes to building tools or building services to automate operations that would otherwise be manual. When everything is pretty much done the same way, that becomes much simpler. Another key motivation for WCNP was the ability to abstract from the underlying cloud provider. And this is going to lead to a discussion of our triplet architecture. At the end of the day, when one works directly with an underlying cloud provider, one ends up taking a lot of dependencies on that particular cloud provider. Those dependencies can be valuable. For example, there are best of breed services like say Cloud Spanner offered by Google or say Cosmos DB offered by Microsoft that one wants to use and one is willing to take the dependency on the cloud provider to get that functionality because it's unique and valuable. On the other hand, one doesn't want to take dependencies on a cloud provider that don't add a lot of value. And with Kubernetes, we have the opportunity, and this is a large part of how Kubernetes was designed and why it is the way it is, we have the opportunity to sort of abstract from the underlying cloud provider for stateless workloads on compute. And so what this lets us do is build container-based applications that can run without change on different cloud provider infrastructure. So the same applications can run on WCNP over Azure, WCNP over GCP, or WCNP over the Walmart private cloud. And we have a private cloud. Our private cloud is OpenStack based and it gives us some significant cost advantages as well as control advantages. So to your point, in terms of business motivation, there's a key cost driver here, which is that we can use our own private cloud when it's advantageous and then use the public cloud provider capabilities when we need to. A key place with this comes into play is with elasticity. So while the private cloud is much more cost effective for us to run and use, it isn't as elastic as what the cloud providers offer, right? We don't have essentially unlimited scale. We have large scale, but the public cloud providers are elastic in the extreme which is a very powerful capability. So what we're able to do is burst, and we use this term bursting workloads into the public cloud from the private cloud to take advantage of the elasticity they offer and then fall back into the private cloud when the traffic load diminishes to the point where we don't need that elastic capability, elastic capacity at low cost. And this is a very important paradigm that I think is going to be very commonplace ultimately as the industry evolves. Private cloud is easier to operate and less expensive, and yet the public cloud provider capabilities are difficult to match. 
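As a rough illustration of the bursting idea Jack describes, here is a minimal sketch of a threshold-based placement policy. The pool names, capacities, cost figures, and thresholds are hypothetical and purely for illustration; they are not Walmart's actual control plane or numbers.

```python
from dataclasses import dataclass

@dataclass
class ClusterPool:
    name: str
    capacity_rps: int     # requests per second the pool can absorb
    cost_per_unit: float  # relative cost of serving traffic here

# Hypothetical pools: a cheaper private cloud first, an elastic public cloud for overflow.
PRIVATE = ClusterPool("private-openstack", capacity_rps=80_000, cost_per_unit=1.0)
PUBLIC = ClusterPool("public-cloud", capacity_rps=10**9, cost_per_unit=3.0)

def place_traffic(incoming_rps: int, burst_headroom: float = 0.85):
    """Keep traffic on the private cloud up to a headroom threshold, burst the
    overflow to the public cloud, and fall back automatically as load drops."""
    private_limit = int(PRIVATE.capacity_rps * burst_headroom)
    to_private = min(incoming_rps, private_limit)
    to_public = incoming_rps - to_private
    return {PRIVATE.name: to_private, PUBLIC.name: to_public}

print(place_traffic(50_000))   # everything stays on the private cloud
print(place_traffic(120_000))  # the overflow above the headroom bursts to the public cloud
```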
>> And the triplet, the tri is your on-prem private cloud and the two public clouds that you mentioned, is that right? >> That is correct. And we actually have an architecture in which we operate all three of those cloud platforms in close proximity with one another in three different major regions in the US. So we have east, west, and central. And in each of those regions, we have all three cloud providers. And the way it's configured, those data centers are within 10 milliseconds of each other, meaning that it's of negligible cost to interact between them. And this allows us to be fairly agnostic to where a particular workload is running. >> Does a human make that decision, Jack or is there some intelligence in the system that determines that? >> That's a really great question, Dave. And it's a great question because we're at the cusp of that transition. So currently humans make that decision. Humans choose to deploy workloads into a particular region and a particular provider within that region. That said, we're actively developing patterns and practices that will allow us to automate the placement of the workloads for a variety of criteria. For example, if in a particular region, a particular provider is heavily overloaded and is unable to provide the level of service that's expected through our SLAs, we could choose to fail workloads over from that cloud provider to a different one within the same region. But that's manual today. We do that, but people do it. Okay, we'd like to get to where that happens automatically. In the same way, we'd like to be able to automate the failovers, both for high availability and sort of the heavier disaster recovery model between, within a region between providers and even within a provider between the availability zones that are there, but also between regions for the sort of heavier disaster recovery or maintenance driven realignment of workload placement. Today, that's all manual. So we have people moving workloads from region A to region B or data center A to data center B. It's clean because of the abstraction. The workloads don't have to know or care, but there are latency considerations that come into play, and the humans have to be cognizant of those. And automating that can help ensure that we get the best performance and the best reliability. >> But you're developing the dataset to actually, I would imagine, be able to make those decisions in an automated fashion over time anyway. Is that a fair assumption? >> It is, and that's what we're actively developing right now. So if you were to look at us today, we have these nice abstractions and APIs in place, but people run that machine, if you will, moving toward a world where that machine is fully automated. >> What exactly are you abstracting? Is it sort of the deployment model or, you know, are you able to abstract, I'm just making this up like Azure functions and GCP functions so that you can sort of run them, you know, with a consistent experience. What exactly are you abstracting and how difficult was it to achieve that objective technically? >> that's a good question. What we're abstracting is the Kubernetes node construct. That is to say a cluster of Kubernetes nodes which are typically VMs, although they can run bare metal in certain contexts, is something that typically to stand up requires knowledge of the underlying cloud provider. So for example, with GCP, you would use GKE to set up a Kubernetes cluster, and in Azure, you'd use AKS. 
We are actually abstracting that aspect of things so that the developers standing up applications don't have to know what the underlying cluster management provider is. They don't have to know if it's GCP, AKS, or our own Walmart private cloud. Now, in terms of functions like the Azure functions that you've mentioned there, we haven't done that yet. That's another piece that we have sort of on our radar screen that we'd like to get to, a serverless approach; the Knative work from Google and the Azure functions, those are things that we see good opportunity to use for a whole variety of use cases. But right now we're not doing much with that. We're strictly container based right now, and we do have some VMs that are running in sort of more of a traditional model. So our stateful workloads are primarily VM based, but for serverless, that's an opportunity for us to take some of these stateless workloads and turn them into cloud functions. >> Well, and that's another cost lever that you can pull down the road that's going to drop right to the bottom line. Do you see a day, or maybe you're doing it today, but I'd be surprised, where you build applications that actually span multiple clouds, or is there, in your view, always going to be a direct one-to-one mapping between where an application runs and the specific cloud platform? >> That's a really great question. Well, yes and no. So today, application development teams choose a cloud provider to deploy to and a location to deploy to, and they have to get involved in moving an application like we talked about today. That said, the bursting capability that I mentioned previously is something that is a step in the direction of automatic migration. That is to say, we're migrating workloads to different locations automatically. Currently, the prototypes we've been developing, and that we think are going to eventually make their way into production, are leveraging Istio to assess the load incoming on a particular cluster and start shedding that load into a different location. Right now, the configuration of that is still manual, but there's another opportunity for automation there. And I think a key piece of this is that down the road, well, that's a sort of small step in the direction of an application being multi-provider. We expect to see really an abstraction of the fact that there is a triplet even. So the workloads are moving around according to whatever the control plane decides is necessary based on a whole variety of inputs. And at that point, you will have true multi-cloud applications, applications that are distributed across the different providers in a way that application developers don't have to think about. >> So Walmart's been a leader, Jack, in using data for competitive advantages for decades. It's kind of been a poster child for that. You've got a mountain of IP in the form of data, tools, applications, best practices that until the cloud came out was all on-prem. But I'm really interested in this idea of building a Walmart ecosystem, which obviously you have. Do you see a day, or maybe you're even doing it today, where you take what we call the Walmart SuperCloud, WCNP in your words, and point or turn that toward an external world or your ecosystem, you know, supporting those partners or customers that could drive new revenue streams, you know, directly from the platform? >> Great questions, Dave. So there's really two things to say here. The first is that with respect to data, our data workloads are primarily VM based.
I've mentioned before some VMware, some straight open stack. But the key here is that WCNP and Kubernetes are very powerful for stateless workloads, but for stateful workloads tend to be still climbing a bit of a growth curve in the industry. So our data workloads are not primarily based on WCNP. They're VM based. Now that said, there is opportunity to make some progress there, and we are looking at ways to move things into containers that are currently running in VMs which are stateful. The other question you asked is related to how we expose data to third parties and also functionality. Right now we do have in-house, for our own use, a very robust data architecture, and we have followed the sort of domain-oriented data architecture guidance from Martin Fowler. And we have data lakes in which we collect data from all the transactional systems and which we can then use and do use to build models which are then used in our applications. But right now we're not exposing the data directly to customers as a product. That's an interesting direction that's been talked about and may happen at some point, but right now that's internal. What we are exposing to customers is applications. So we're offering our global integrated fulfillment capabilities, our order picking and curbside pickup capabilities, and our cloud powered checkout capabilities to third parties. And this means we're standing up our own internal applications as externally facing SaaS applications which can serve our partners' customers. >> Yeah, of course, Martin Fowler really first introduced to the world Zhamak Dehghani's data mesh concept and this whole idea of data products and domain oriented thinking. Zhamak Dehghani, by the way, is a speaker at our event as well. Last question I had is edge, and how you think about the edge? You know, the stores are an edge. Are you putting resources there that sort of mirror this this triplet model? Or is it better to consolidate things in the cloud? I know there are trade-offs in terms of latency. How are you thinking about that? >> All really good questions. It's a challenging area as you can imagine because edges are subject to disconnection, right? Or reduced connection. So we do place the same architecture at the edge. So WCNP runs at the edge, and an application that's designed to run at WCNP can run at the edge. That said, there are a number of very specific considerations that come up when running at the edge, such as the possibility of disconnection or degraded connectivity. And so one of the challenges we have faced and have grappled with and done a good job of I think is dealing with the fact that applications go offline and come back online and have to reconnect and resynchronize, the sort of online offline capability is something that can be quite challenging. And we have a couple of application architectures that sort of form the two core sets of patterns that we use. One is an offline/online synchronization architecture where we discover that we've come back online, and we understand the differences between the online dataset and the offline dataset and how they have to be reconciled. The other is a message-based architecture. And here in our health and wellness domain, we've developed applications that are queue based. So they're essentially business processes that consist of multiple steps where each step has its own queue. 
And what that allows us to do is devote whatever bandwidth we do have to those pieces of the process that are most latency sensitive, and allow the queue lengths to increase in parts of the process that are not latency sensitive, knowing that they will eventually catch up when the bandwidth is restored (a rough sketch of this prioritization idea appears at the end of this segment). And to put that in a little bit of context, we have fiber links to all of our locations, and we have, I'll just use a round number, 10-ish thousand locations. It's larger than that, but that's the ballpark, and we have fiber to all of them. But the fiber does get disconnected on a regular basis. In fact, I forget the exact number, but some several dozen locations get disconnected daily, just by virtue of the fact that there's construction going on and things are happening in the real world. When the disconnection happens, we're able to fall back to 5G and to Starlink. Starlink is preferred; it's higher bandwidth. 5G if that fails. But in each of those cases, the bandwidth drops significantly. And so the applications have to be intelligent about throttling back the traffic that isn't essential, so that they can push the essential traffic in those lower-bandwidth scenarios. >> So much technology to support this amazing business which started in the early 1960s. Jack, unfortunately, we're out of time. I would love to have you back, or some members of your team, and drill into how you're using open source, but really thank you so much for explaining the approach that you've taken and participating in SuperCloud2. >> You're very welcome, Dave, and we're happy to come back and talk about other aspects of what we do. For example, we could talk more about the data lakes and the data mesh that we have in place. We could talk more about the directions we might go with serverless. So please look us up again. Happy to chat. >> I'm going to take you up on that, Jack. All right. This is Dave Vellante for John Furrier and the Cube community. Keep it right there for more action from SuperCloud2. (upbeat music)
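As a footnote to the queue-based pattern described above, here is a minimal sketch of the prioritization idea: spend a constrained bandwidth budget on the latency-sensitive queues first and let the others grow until connectivity is restored. The queue names, sizes, and budget are hypothetical, for illustration only.

```python
from collections import deque

# Hypothetical per-step queues for a multi-step business process; the names,
# sizes, and budget below are made up for illustration.
queues = {
    "checkout-auth":    {"latency_sensitive": True,  "q": deque(range(40))},
    "inventory-sync":   {"latency_sensitive": False, "q": deque(range(500))},
    "analytics-events": {"latency_sensitive": False, "q": deque(range(2000))},
}

def drain(budget: int):
    """Spend a constrained per-tick message budget on latency-sensitive queues
    first; the other queues simply grow and catch up once bandwidth returns."""
    order = sorted(queues.items(), key=lambda kv: not kv[1]["latency_sensitive"])
    sent = {}
    for name, info in order:
        n = min(budget, len(info["q"]))
        for _ in range(n):
            info["q"].popleft()  # stand-in for actually transmitting the message
        sent[name] = n
        budget -= n
    return sent

print(drain(budget=100))  # e.g. while degraded to 5G or Starlink bandwidth
print({name: len(info["q"]) for name, info in queues.items()})
```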
Madhura Maskasky & Sirish Raghuram | KubeCon + CloudNativeCon NA 2022
(upbeat synth intro music) >> Hey everyone and welcome to Detroit, Michigan. theCUBE is live at KubeCon CloudNativeCon, North America 2022. Lisa Martin here with John Furrier. John, this event, the keynote that we got out of a little while ago was standing room only. The Solutions hall is packed. There's so much buzz. The community is continuing to mature. They're continuing to contribute. One of the big topics is Cloud Native at Scale. >> Yeah, I mean, this is a revolution happening. The developers are coming on board. They will be running companies. Developers, structurally, will be transforming companies with just, they got to get powered somewhere. And, I think, Cloud Native at Scale speaks to getting everything under the covers, scaling up to support developers. In this next segment, we have two Kube alumni. We're going to talk about Cloud Native at Scale. Some of the things that need to be there in a unified architecture, should be great. >> All right, it's going to be fantastic. Let's go under the covers here, as John mentioned, two alumni with us, Madhura Maskasky joins us, co-founder of Platform9. Sirish Raghuram, also co-founder of Platform9, joins us. Welcome back to theCUBE. Great to have you guys here at KubeCon on the floor in Detroit. >> Thank you for having us. >> Thank you for having us. >> Excited to be here. >> So, talk to us. You guys have some news, Madhura, give us the sneak peek. What's going on? >> Definitely, we are very excited. So, John, not too long ago we spoke about our very new open source project called Arlon, and we were talking about the launch of Arlon in terms of its first release and etcetera. And, just hot off the press, we, Platform9, had our 5.6 release, which is the most recent release of our product, and there's a number of key interesting announcements that we'd like to share as part of that. I think the prominent one is, Platform9 added support for EKS Kubernetes cluster management. And, so, this is part of our vision of being able to add value no matter where you run your Kubernetes clusters, because Kubernetes, or cluster management, is increasingly becoming commodity. And, so, I think the companies that succeed are going to add value on top, and are going to add value in a way that helps end users, developers, DevOps solve problems that they encounter as they start running these environments, with a lot of scale and a lot of diversity. So, towards that, key features in the 5.6 release: first is the very first packaged release of the project Arlon, which is the open source project that we've kicked off to do cluster and application, entire cluster management, at scale. And then there's a few other very interesting capabilities coming out of that. >> I want to just highlight something and then get your thoughts on this next, this release 5.6. First of all, 5.6, it's been around for a while, five reps, but, now, more than ever, you mentioned the application in Ops. You're seeing WebAssembly trends, you're seeing developers getting more and more advanced capability. It's going to accelerate their ability to write code and compose applications. So, you're seeing an application tsunami coming. So, the pressure is okay, they're going to need infrastructure to run all that stuff. And, so, you're seeing more clusters being spun up, more intelligence trying to automate. So you got the automation, so you got the dynamic, the power dynamic of developers and then under the covers.
What does 5.6 do to push the mission forward for developers? How would you guys summarize that for people watching? What's in it for them right now? >> So it's, I think going back to what you just said, right, the breadth of applications that people are developing on top of something like Kubernetes and Cloud Native, is always growing. So, it's not just a number of clusters, but also the fact that different applications and different development groups need these clusters to be composed differently. So, a certain version of the application may require some set of build components, add-ons, and operators, and extensions. Whereas, a different application may require something entirely different. And, now, you take this in an enterprise context, right. Like, we had a major media company that worked with us. They have more than 10,000 pods being used by thousands of developers. And, you now think about the breadth of applications, the hundreds of different applications being built. How do you consistently build, and compose, and manage, a large number of Kubernetes clusters with a large variety of extensions that these companies are trying to manage? That's really what I think 5.6 is bringing to the table. >> Scott Johnston was just on here earlier as the CEO of Docker. He said there's more applications being pushed now than in the history of application development combined. There's more and more apps coming, more and more pressure on the system. >> And, that's where, if you go, there's this famous landscape chart of the CNCF ecosystem technologies. And, the problem that people here have is, how do they put it all together? How do they make sense of it? And, what 5.6 and Arlon and what Platform9 is doing is, it's helping you declaratively capture blueprints of these clusters, using templates, and be able to manage a small number of blueprints that helps you make order out of the chaos of these hundreds of different projects, that are all very interesting and powerful. >> So Project Arlon is really helping developers reduce the configuration and deployment complexities of Kubernetes at scale. >> That's exactly right. >> Talk about the impact on the business side. Ease of use, what's the benefits for 5.6? What does it turn into from a benefit standpoint? >> Yeah, I think the biggest benefit, right, is being able to do Cloud Native at Scale faster, and while still keeping a very lean Ops team that is able to spend, let's say 70 plus percent of their time, caring for your actual business bread and butter applications, and not for the infrastructure that serves it, right. If you take the analogy of a restaurant, you don't want to spend 70% of your time in building the appliances or setting up your stoves etcetera. You want to spend 90 plus percent of your time cooking your own meal, because, that is your core key ingredient. But, what happens today in most enterprises is, because of the level of automation, the level of hands-on available tooling, being there or not being there, majority of the ops time, I would say 50, 70% plus, gets spent in making that kitchen set up and ready, right. And, that is exactly what we are looking to solve with Arlon. >> What would a customer look like, or prospect environment look like, that would be really ready for Platform9? What, is it more apps being pushed, big push on application development, or is it the toil of like really inefficient infrastructure, or gaps in skills of people? What does an environment look like? 
So, someone needs to look at their environment and say, okay, maybe I should call Platform9. What's it look like? >> So, we generally see customers fall into two ends of the barbell, I would say. One, is the advanced Kubernetes users that are running, I would say, typically, 30 or more clusters already. These are the people that already know containers. They know, they've containerized... >> Savvy teams. >> They're savvy teams, a lot of them are out here. And for them, the problem is, how do I manage the complexity at scale? Because, now, the problem is how do I scale us? So, that's one end of the barbell. The other end of the barbell, is, how do we help make Kubernetes accessible to companies that are what I would call the mainstream enterprise. We're in Detroit, in Motown, right? And, we're outside of the echo chamber of the Silicon Valley. Here's the biggest truth, right. For all the progress that we made as a community, less than 20% of applications in the enterprise today are running on Kubernetes. So, what does it take? I would say it's probably less than 10%, okay. And, what does it take, to grow that by an order of magnitude? That's the other kind of customer that we really serve, is, because, we have technologies like KubeVirt, which helps them take their existing applications and start adopting Kubernetes as a directional roadmap, but, while using the existing applications that they have, without refactoring it. So, I would say those are the two ends of the barbell. The early adopters that are looking for an easier way to adopt Kubernetes as an architectural pattern. And, the advanced savvy users, for whom the problem is, how do they operationally solve the complexity of managing at scale. >> And, what is your differentiation message to both of those different user groups, as you talked about in terms of the number of users of Kubernetes so far? The community groundswell is tremendous, but, there's a lot of opportunity there. You talked about some of the barriers. What's your differentiation? What do you come in saying, this is why Platform9 is the right one for you, for both of these groups? >> And it's actually a very simple message. We are the simplest and easiest way for a new user that is adopting Kubernetes as an architectural pattern, to get started with existing applications that they have, on the infrastructure that they have. Number one. And, for the savvy teams, our technology helps you operate with greater scale, with constrained operations teams. Especially, with the economy being the way it is, people are not going to get a lot more budget to go hire a lot more people, right. So, all of them are being asked to do more with less. And, our team, our technology, and our teams, help you do more with less. >> I was talking with Phil Estes last night from AWS. He's here, he is one of their open source engineering advocates. He's always on the ground pumping up AWS. They've had great success, Amazon Web Services, with their EKS. A lot of people adopting clusters on the cloud and on-premises. But Amazon's doing well. You guys have, I think, a relationship with AWS. What's that? If I'm an Amazon customer, how do I get involved with Platform9? What's the hook? Where's the value? What's the product look like? >> Yeah, so, and it kind of goes back towards the point we spoke about, which is, Kubernetes is going to increasingly get commoditized. 
So, customers are going to find the right home, whether it's hyperscalers, EKS, AKS, GKE, or their own infrastructure, to run Kubernetes. And, so, where we want to be at, is, with a project like Arlon, Sirish spoke about the barbell strategy, on one end there are these advanced Kubernetes users, majority of them are running Kubernetes on EKS, right? Because, that was the easiest platform that they found to get started with. So, now, they have a challenge of running these 50 to 100 clusters across various regions of Amazon, across their DevTest, their staging, their production. And, that results in a level of chaos that these DevOps or platform... >> So you come in and solve that. >> That is where we come in and we solve that. And, you know, Amazon or EKS doesn't give you tooling to solve that, right. It makes it very easy for you to create that number of clusters. >> Well, even in one hyperscaler, let's say AWS, you got regions and locations... >> Exactly. >> ...that's kind of a super cloud problem, we're seeing, opportunity problem, and opportunity is that, on Amazon, availability zones is one thing, but, now, also, you got regions. >> That is absolutely right. You're on point, John. And the way we solve it is by using infrastructure as code, by using GitOps principles, right? Where you define it once, you define it in a yaml file, you define exactly how you want your entire infrastructure for your DevTest environment to look, including EKS. And then you stamp it out. >> So let me, here's an analogy, I'll throw out this. You guys are like, someone learns how to drive a car, Kubernetes clusters, that's got a couple clusters. Then once they know how to drive a car, you give 'em the sports car. You allow them to stay on Amazon and all of a sudden go completely distributed, Edge, Global. >> I would say that a lot of people that we meet, we feel like they're figuring out how to build a car with the kit tools that they have. And we give them a car that's ready to go and doesn't require them to be trying to... ...they can focus on driving the car, rather than trying to build the car. >> You don't want people to stop, once they get the progressions, they hit that level up on Kubernetes, you guys give them the ability to go much bigger and stronger. >> That's right. >> To accelerate those applications. >> Building a car gets old for people at a certain point in time, and what they really want to focus on is driving it and enjoying it. >> And we got four right behind us, so, we'll get them involved. So that's... >> But, you're not reinventing the wheel. >> We're not at all, because, what we are building is two very, very differentiated solutions, right. One, is, we're the simplest and easiest way to build and run Cloud Native private clouds. And, this is where the operational complexity of trying to do it yourself comes in. You really have to be a car builder to be able to do this. With our Platform9, this is what we do uniquely that nobody else does well. And, the other end is, we help you operate at scale, in the hyperscalers, right. Those are the two problems that I feel, whether you're on-prem, or in the cloud, these are the two problems people face. How do you run a private cloud more easily, more efficiently? And, how do you govern at scale, especially in the public clouds? >> I want to get to two more points before we run out of time. Arlon and Argo CD as a service. 
We previously mentioned it coming into KubeCon, but, here, you guys couldn't be more relevant, 'cause Intuit was on stage on the keynote, getting an award for their work. You know, Argo, it comes from Intuit. That ArgoCon was in Mountain View. You guys were involved in that. You guys were at the center of all this super cloud action, if you will, or open source. How does Arlon fit into the Argo extension? What is Argo CD as a service? Who's going to take that one? I want to get that out there, because, Arlon has been talked about a lot. What's the update? >> I can talk about it. So, one of the things that Arlon uses behind the scenes, is it uses Argo CD, open source Argo CD as a service, as its key component to do the continuous deployment portion of its entire infrastructure management story, right. So, we have been very strongly partnering with Argo CD. We really know and respect the Intuit team a lot. We, as part of this effort, in the 5.6 release, we've also put out Argo CD as a service, in its GA version, right. Because, the power of running Arlon along with Argo CD as a service, in our mind, is enabling you to run, on one end, your infrastructure at scale, through GitOps and infrastructure as code practices. And on the other end, your entire application fleet, at scale, right. And, just marrying the two, really gives you the ability to perform that automation that we spoke about. >> And, you avoid the problem of sprawl when you have distributed teams, you have now things being bolted on, more apps coming out. So, this really solves that problem, mainly. >> That is exactly right. And if you think of it, the way those problems are solved today, is, kind of in a disconnected fashion, which is on one end you have your CI/CD tools, like Argo CD is an excellent one. There's some other choices, which are managed by a separate team to automate your application delivery. But, that team, is disconnected from the team that does the infrastructure management. And the infrastructure management is typically done through a bunch of Terraform scripts, or a bunch of ad hoc homegrown scripts, which are very difficult to manage. >> So, Arlon changes that, sure, the complexity and also the sprawl. But, that's also how companies can die. They're growing fast, they're adding more capability. That's when trouble starts, right? >> I think in two ways, right. Like one is, as Madhura said, I think one of the common long-standing problems we've had, is, how do infrastructure and application teams communicate and work together, right. And, you've seen Argo really get adopted by the application teams, but, it's now something that we are making accessible for the infrastructure teams to also bring the best practices of how application teams are managing applications. You can now use that to manage infrastructure, right. And, what that's going to do is, help you ultimately reduce waste, reduce inefficiency, and improve the developer experience. Because, that's what it's all about, ultimately. >> And, I know that you just released 5.6 today, congratulations on that. Any customer feedback yet? Any customers that you've been able to talk to, or have early access? >> Yeah, one of our large customers is a large SaaS retail company that is B2C SaaS. And, their feedback has been that this, basically, helps them bring exactly what I said in terms of bringing some of the best practices that they wanted to adopt in the application space, down to the infrastructure management teams, right. 
And, we are also hearing a lot of customers, that I would say, large scale public cloud users, saying, they're really struggling with the complexity of how to tame the complexity of navigating that landscape and making it consumable for organizations that have thousands of developers or more. And that's been the feedback, is that this is the first open source standard mechanism that allows them to kind of reuse something, as opposed to everybody feels like they've had to build ad hoc solutions to solve this problem so far. >> Having a unified infrastructure is great. My final question, for me, before I end up, for Lisa to ask her last question is, if you had to explain Platform9, why you're relevant and cool today, what would you say? >> If I take that? I would say that the reason why Platform9, the reason why we exist, is, putting together a cloud, a hybrid cloud strategy for an enterprise today, historically, has required a lot of DIY, a lot of building your own car. Before you can drive a car, or you can enjoy the car, you really learn to build and operate the car. And that's great for maybe a 100 tech companies of the world, but, for the next 10,000 or 50,000 enterprises, they want to be able to consume a car. And that's why Platform9 exists, is, we are the only company that makes this delightfully simple and easy for companies that have a hybrid cloud strategy. >> Why you cool and relevant? How would you say it? >> Yeah, I think as Kubernetes becomes mainstream, as containers have become mainstream, I think automation at scale with ease, is going to be the key. And that's exactly what we help solve. Automation at scale and with ease. >> With ease and that differentiation. Guys, thank you so much for joining me. Last question, I guess, Madhura, for you, is, where can Devs go to learn more about 5.6 and get their hands on it? >> Absolutely. Go to platform9.com. There is info about 5.6 release, there's a press release, there's a link to it right on the website. And, if they want to learn about Arlon, it's an open source GitHub project. Go to GitHub and find out more about it. >> Excellent guys, thanks again for sharing what you're doing to really deliver Cloud Native at Scale in a differentiated way that adds ostensible value to your customers. John, and I, appreciate your insights and your time. >> Thank you for having us. >> Thanks so much >> Our pleasure. For our guests and John Furrier, I'm Lisa Martin. You're watching theCUBE Live from Detroit, Michigan at KubeCon CloudNativeCon 2022. Stick around, John and I will be back with our next guest. Just a minute. (light synth outro music)
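To make the declarative, GitOps-driven approach described in this segment more concrete, here is a minimal sketch of what a cluster "blueprint" kept in Git might look like: the composition of a DevTest cluster (provider, node sizing, add-ons, policies) captured once and then stamped out across environments. The API group, resource kind, and field names below are hypothetical illustrations of the general idea, not Arlon's or Platform9's actual schema.

```yaml
# Hypothetical cluster blueprint kept in a Git repository.
# Kind, group, and fields are illustrative only, not a real Arlon API.
apiVersion: blueprints.example.io/v1alpha1
kind: ClusterBlueprint
metadata:
  name: devtest-eks
spec:
  infrastructure:
    provider: eks              # could equally be aks, gke, or on-prem
    region: us-east-1
    nodeGroups:
      - name: general
        instanceType: m5.large
        minSize: 3
        maxSize: 6
  addons:                      # how the cluster is "composed"
    - name: cert-manager
      version: v1.9.x
    - name: prometheus-stack
      version: 45.x
    - name: argo-cd
      version: v2.4.x
  policies:
    networkPolicy: enabled
    podSecurityStandard: baseline
```

Under GitOps, a change to this one definition propagates to every cluster stamped out from it, which is how a small number of blueprints can keep tens or hundreds of clusters consistent instead of maintaining per-cluster Terraform or ad hoc scripts.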
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Madhura Maskasky | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sirish Raghuram | PERSON | 0.99+ |
Madhura | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Detroit | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Scott Johnston | PERSON | 0.99+ |
30 | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
Sirish | PERSON | 0.99+ |
50 | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Platform9 | ORGANIZATION | 0.99+ |
two problems | QUANTITY | 0.99+ |
Phil Estes | PERSON | 0.99+ |
100 tech companies | QUANTITY | 0.99+ |
less than 20% | QUANTITY | 0.99+ |
less than 10% | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
both | QUANTITY | 0.99+ |
Motown | LOCATION | 0.99+ |
first release | QUANTITY | 0.99+ |
more than 10,000 pods | QUANTITY | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
two alumni | QUANTITY | 0.99+ |
two ways | QUANTITY | 0.99+ |
Arlon | ORGANIZATION | 0.99+ |
5.6 | QUANTITY | 0.98+ |
Mountain View | LOCATION | 0.98+ |
One | QUANTITY | 0.98+ |
two more points | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
EKS | ORGANIZATION | 0.98+ |
last night | DATE | 0.98+ |
Cloud Native | TITLE | 0.98+ |
70 plus percent | QUANTITY | 0.97+ |
one end | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
90 plus percent | QUANTITY | 0.97+ |
DevTest | TITLE | 0.97+ |
Argo | ORGANIZATION | 0.97+ |
50,000 enterprises | QUANTITY | 0.96+ |
Kube | ORGANIZATION | 0.96+ |
two ends | QUANTITY | 0.96+ |
Intuit | ORGANIZATION | 0.96+ |
five reps | QUANTITY | 0.96+ |
today | DATE | 0.96+ |
Kubernetes | TITLE | 0.95+ |
GitOps | TITLE | 0.95+ |
Cloud Native | TITLE | 0.95+ |
platform9.com | OTHER | 0.95+ |
hundreds of different applications | QUANTITY | 0.95+ |
Michael Foster & Doron Caspin, Red Hat | KubeCon + CloudNativeCon NA 2022
(upbeat music) >> Hey guys, welcome back to the show floor of KubeCon + CloudNativeCon '22 North America from Detroit, Michigan. Lisa Martin here with John Furrier. This is day one, John, at theCUBE's coverage. >> CUBE's coverage. >> theCUBE's coverage of KubeCon. Try saying that five times fast. Day one, we have three wall-to-wall days. We've been talking about Kubernetes, containers, adoption, cloud adoption, app modernization all morning. We can't talk about those things without addressing security. >> Yeah, this segment we're going to hear about container and Kubernetes security for modern applications 'cause the enterprises are moving there. And this segment with Red Hat's going to be important because they are the leader in the enterprise when it comes to open source and Linux. So this is going to be a very fun segment. >> Very fun segment. Two guests from Red Hat join us. Please welcome Doron Caspin, Senior Principal Product Manager at Red Hat. Michael Foster joins us as well, Principal Product Marketing Manager and StackRox Community Lead at Red Hat. Guys, great to have you on the program. >> Thanks for having us. >> Thank you for having us. >> It's awesome. So Michael, the StackRox acquisition's been about a year. You got some news? >> Yeah, 18 months. >> Unpack that for us. >> It's been 18 months, yeah. So StackRox in 2017, originally we shifted to be the Kubernetes-native security platform. That was our goal, that was our vision. Red Hat obviously saw a lot of powerful, let's say, mission statement in that, and they bought us in 2021. Pre-acquisition we were looking to create a cloud service. Originally we ran on Kubernetes platforms, we had an operator and things like that. Now we are looking to basically bring customers into our service preview for ACS as a cloud service. That's very exciting. >> Security conversation is top notch right now. It's an all time high. You can't go anywhere without talking about security. And specifically in the code, we were talking before we came on camera, the software supply chain is real. It's not just about verification. Where do you guys see the challenges right now? Containers, even scanning them is not good enough. First of all, you got to scan them and that may not be good enough. Where's the security challenges and where's the opportunity? >> I think a little bit of it is a new way of thinking. The speed of security actually does make you secure. We want to keep our images up and fresh and updated and we also want to make sure that we're keeping the open source and the different images that we're bringing in secure. Doron, I know you have some things to say about that too. He's been working tirelessly on the cloud service. >> Yeah, I think that one thing, you need to trust your sources. Even in the open source world, you don't want to copy paste libraries from the web. And most of our customers are using third party vendors and getting images from different locations, so we need to trust our sources, and even if you have a really good scanning solution, you can't always trust it. You need to have a good solution for that. >> And you guys have news, you're announcing the Red Hat Advanced Cluster Security Cloud Service. >> Yes. >> What is that? >> So we took StackRox and we took the opportunity to make it a cloud service, so customers can consume the product as a cloud service as a starting offering, and customers can buy it through the Amazon Marketplace and, in the future, the Azure Marketplace. 
So customers can use it for EKS and AKS and also, of course, OpenShift. So we are not specifically for OpenShift. We're not just OpenShift. We also provide support for EKS and AKS. So we provide the capability to secure the whole cloud posture. We know customers are not only OpenShift or not only EKS. We have both. We have free cloud or full cloud. So we are open. >> So it's not just OpenShift, it's Kubernetes environments, all together. >> Doron: All together, yeah. >> Lisa: Meeting customers where they are. >> Yeah, exactly. And we focus on, we are not trying to boil the ocean or solve the whole cloud security posture. We try to solve Kubernetes cluster security. It's very unique and needs a unique solution. It's not just added value in our cloud security solution. We think it's something special for Kubernetes and this is what Red Hat is aiming to solve. >> And the ACS platform really doesn't change at all. It's just how they're consuming it. It's a lot quicker in the cloud. Time to value is right there. As soon as you start up a Kubernetes cluster, you can get started with ACS cloud service and get going really quickly. >> I'm going to ask you guys a very simple question, but I heard it in the bar in the lobby last night. Practitioners talking and they were excited about the Red Hat opportunity. They actually asked a question, where do I go and get some free Red Hat to test some Kubernetes out and run helm or whatever. They want to play around. And do you guys have a program for someone to get started for free? >> Yeah, so the cloud service specifically, we're going to service preview. So if people sign up, they'll be able to test it out and give us feedback. That's what we're looking for. >> John: Is that a Sandbox or is that going to be in the cloud? >> They can run it in their own environment. So they can sign up. >> John: Free. >> Doron: Yeah, free. >> For the service preview. All we're asking for is customer feedback. And I know it's actually getting busy there. It's starting in December. So the quicker people are, the better. >> So my friend in the lobby I was talking to, I told you it was free. I gave you the sandbox, but check out your cloud too. >> And we also have the open source version so you can download it and use it. >> Yeah, people want to know how to get involved. I'm getting a lot more folks coming to Red Hat from the open source side that want to get their feet wet. There's been a lot of people really interested. That's a real testament to the product leadership. Congratulations. >> Yeah, thank you. >> So what are the key challenges that you have on your roadmap right now? You got the products out there, what's the current state? Can you scope the adoption? Can you share where we're at? What people are doing specifically and the real challenges? >> I think one of the biggest challenges is talking with customers with a slightly, I don't want to say outdated, but an older approach to security. You hear things like malware pop up and it's like, well, really what we should be doing is keeping things to low and medium vulnerabilities, looking at the configuration, managing risk accordingly. Having disparate security tools or different teams doing various things, it's really hard to get a security picture of what's going on in the cluster. That's some of the biggest challenges that we talk with customers about. >> And in terms of resolving those challenges, you mentioned malware, we talk about ransomware. 
It's a household word these days. It's no longer, are we going to get hit? It's when? It's what's the severity? It's how often? How are you guys helping customers to dial down some of the risk that's inherent and only growing these days? >> Yeah, risk, it's a tough word to generalize, but our whole goal is to give you as much security information in a way that's consumable so that you can evaluate your risk, set policies, and then enforce them early on in the cluster or early on in the development pipeline so that your developers get the security information they need, hopefully asynchronously. That's the best way to do it. It's nice and quick, but yeah. I don't know if Doron you want to add to that? >> Yeah, so I think, yeah, we know that ransomware, again, it's a big word for everyone and we understand the area of the boundaries, where we want to, what we want to protect. And we think it's about policies and where we enforce it. And, as we discussed before, you can scan the image, but we never know what is in it until you really run it. So one of the things that we provide is runtime scanning. So you can scan and you can have policies at runtime. So enforce things at runtime. But even if one image gets in some way and gets to your cluster and runs somewhere, we can stop it at runtime. >> Yeah. And even with the runtime enforcement, the biggest thing we have to educate customers on is that's the last-ditch effort. We want to get these security controls as early as possible. That's where the value's going to be. So we don't want to be blocking things from getting to staging six weeks after developers have been working on a project. >> I want to get you guys' thoughts on developer productivity. Had the Docker CEO on earlier and since then I had a couple people messaging me. Love the vision of Docker, but Docker Hub has some legacy and it might not have the kind of adoption that some people think it does. Are people moving 'cause at times they want to have their own places? No one place, or maybe there is one, or how do you guys see the movement from, say, Docker Hub to just using containers? I don't need to be Docker Hub. What's the vis-a-vis competition? >> I mean working with open source with Red Hat, you have to meet the developers where they are. If your tool isn't cutting it for developers, they're going to find a new tool and really they're the engine, the growth engine of a lot of these technologies. So again, if Docker, I don't want to speak about Docker or what they're doing specifically, but I know that they pretty much kicked off the container revolution and got this whole thing started. >> A lot of people are using your environment too. We're hearing a lot of uptake on the Red Hat side too. So, this is open source, it all sorts stuff out in the end, like you said, but you guys are getting a lot of traction there. Can you share what's happening there? >> I think one of the biggest things from a developer experience that I've seen is the universal base image that people are using. I can speak from a security standpoint, it's awesome that you have a base image where you can make one change or one issue and it can impact a lot of different applications. That's one of the big benefits that I see in adoption. >> What are some of the business, I'm curious what some of the business outcomes are. You talked about faster time to value, obviously being able to get security shifted left and from a control perspective. 
but what are some of the, if I'm a business, if I'm a telco or a healthcare organization or a financial organization, what are some of the top line benefits that this can bubble up to impact? >> I mean for me, with those two providers, compliance is a massive one. And just having an overall look at what's going on in your clusters, in your environments so that when audit time comes, you're prepared. You can get through that extremely quickly. And then as well, when something inevitably does happen, you can get a good image of all of, like, let's say a Log4Shell happens, you know exactly what clusters are affected. The triage time is a lot quicker. Developers can get back to developing and then, yeah, you can get through it. >> One thing that we see is that for customers, compliance is huge. >> Yes. And we don't want to, the old way was that, okay, I will provision a cluster and I will do scans and find things, but I need to do it for PCI DSS, for example. Today the customer wants to provision, in advance, a PCI DSS cluster. So you need to do the compliance before you provision the cluster and make all the configuration already baked for PCI DSS or HIPAA compliance or FedRAMP. And this is where we try to use our compliance, we have tools for compliance today on OpenShift and other clusters and other distributions, but you can do this in advance before you even provision the cluster. And we also have tools to enforce it after that, after you provision, but you have to do it again before and after to make it more feasible. >> Advanced cluster management and the compliance operator really help with that. That's why OpenShift Platform Plus as a bundle is so popular. Just being able to know that when a cluster gets provisioned, it's going to be in compliance with whatever the healthcare provider is using. And then you can automatically have ACS as well pop up so you know exactly what applications are running, you know it's in compliance. I mean that's the speed. >> You mentioned the word operator, it's a triggering word now for me because the operator role is changing significantly on this next wave coming because of the automation. They're operating, but they're also devs too. They're developing and composing. It's almost like a dashboard, Lego blocks. The operator's not just manually racking and stacking like the old days, I'm oversimplifying it, but the new operators are running stuff, they got observability, they got coding, they're servicing policy. There's a lot going on. There's a lot of knobs. Is it going to get simpler? How do you guys see the org structures changing to fill the gap on what should be a very simple, turn some knobs, operate at scale? >> Well, when StackRox originally got acquired, one of the first things we did was put ACS into an operator and it actually made the application life cycle so much easier. It was very easy in the console to go and say, Hey yeah, I want ACS on my cluster, click it. It would get provisioned. New clusters would get provisioned automatically. So underneath it might get more complicated. But in terms of the application lifecycle, operators make things so much easier. >> And of course I saw, I was lucky enough with Lisa to see Project Wisdom at AnsibleFest. You're going to say, Hey, Red Hat, spin up the clusters, and it just magically will be voice activated. Starting to see AI come in. So again, the operations operator has got a dev vibe and an SRE vibe, but it's not that direct. Something's happening there. We're trying to put our finger on it. What do you guys think is happening? 
What's the real? What's the action? What's transforming? >> That's a good question. I think in general, things just move to the developers all the time. I mean, we talk about shift left security, everything's always going that way. Developers, how they're handling everything. I'm not sure exactly. Doron, do you have any thoughts on that? >> Doron, what's your reaction? You can just, it's okay, say what you want. >> So I spoke with one of our customers yesterday and they said that in the last years, they developed tons of code just to operate their infrastructure. So five or six years ago, when a developer wanted a VM, it would take him a week to get a VM because they needed all the approvals and someone needed to actually provision this VM on VMware. And today they automate all the way end-to-end and it takes two minutes to get a VM for a developer. So operators are becoming developers, as you said, and they develop code and they make the infrastructure as code and infrastructure as operator to make it easier for the business to run. >> And then also if you add in DataOps, AIOps, DataOps, Security Ops, that's the new IT. It seems to be the new IT is the stuff that's scaling, a lot of data's coming in, you got security. So all that's got to be brought in. How do you guys view that in the equation? >> Oh, I mean you become big generalists. I think there's a reason why those cloud security or cloud professional certificates are becoming so popular. You have to know a lot about all the different applications, be able to code it, automate it, like you said, hopefully everything as code. And then it also makes it easy for security tools to come in and look and examine where the vulnerabilities are when those things are as code. So because you're going and developing all this automation, you do become, let's say, a generalist. >> We've been hearing on theCUBE here and we've been hearing in the industry about burnout associated with security professionals and some DataOps, because of the tsunami of data, tsunami of breaches, a lot of engineers getting called in the middle of the night. So that's not automated. So this has got to get solved quickly, scaled up quickly. >> Yes. There's a two-part question there. I think in terms of the burnout aspect, you better send some love to your security team because they only get called when things get broken and when they're doing a great job you never hear about them. So I think that's one of the things, it's a thankless profession. From the second part, if you have the right tools in place so that when something does hit the fan and does break, then you can make an automated or a specific decision upstream to change that, then things become easy. It's when the tools aren't in place and you have disparate environments, so that when a Log4Shell or something like that comes in, you're scrambling trying to figure out what clusters are where and where you're impacted. >> Point of attack, remediate fast. That seems to be the new move. >> Yeah. And you do need to know exactly what's going on in your clusters and how to remediate it quickly, how to get the most impact with one change. >> And that makes sense. The surface area is expanding. More things are being pushed. So things will, whether it's a zero day vulnerability or just an attack. >> It's a mix, yeah. Customers automate all of their things, but it's good and bad. 
Some customers told us, I think Spotify lost a whole full zone because of one mistake of a customer, because they automate everything and you make one mistake. >> It scales the failure, really. >> Exactly. Scaled the failure really fast. >> That was actually, I think, four years ago. They talked about it. It was a great learning experience. >> It's a double-edged sword there. >> Yeah. So definitely we need to, again, scale automation, test automation too, and you need to run the drills around data. >> Yeah, you have to know the impact. There's a lot of talk in the security space about what you can and can't automate. And by default when you install ACS, everything is non-enforced. You have to have an admission control. >> How are you guys seeing your customers? Obviously Red Hat's got a great customer base. How are they adapting to the managed service wave that's coming? People are liking the managed services now because they maybe have skills gap issues. So managed service is becoming a big part of the portfolio. What's your guys' take on the managed services piece? >> It's just time to value. You're developing a new application, you need to get it out there quick. If somebody, your competitor, gets out there a month before you do, that's a huge market advantage. >> So you care how you got there. >> Exactly. And so we've had so much Kubernetes expertise over the last 10 or so, 10 plus years, well, Kubernetes for seven plus years at Red Hat, that why wouldn't you leverage that knowledge internally so you can get your application out. >> Why change your toolchain? Your workflows go faster and you take advantage of the managed service because it's just about getting from point A to point B. >> Exactly. >> Well, time to value, you mentioned that, it's not a trivial term, it's not a marketing term. There's a lot of impact that can be made. Organizations that can move faster, that can iterate faster, develop what their customers are looking for so that they have that competitive advantage. It's definitely not something that's trivial. >> Yeah. And working in marketing, whenever you get that new feature out and I can go and chat about it online, it's always awesome. You always get customers' interest. >> Pushing new code, being secure. What's next for you guys? What's on the agenda? What's around the corner? We'll see a lot of Red Hat at re:Invent. Obviously your relationship with AWS is strong as a company. Multi-cloud is here. Supercloud, as we've been saying. Supercloud is a thing. What's next for you guys? >> So we launched the cloud services, and the idea is that we will get feedback from customers. We are not going GA. We're not going to sell it for now. We want to get customers, we want to get feedback to make the product the best we can sell and the best we can give our customers, and get feedback. And when we go GA and we start selling this product, we will have the best product in the market. So this is our goal. We want to get the customer in the loop and get as much feedback as we can. And also we're working very closely with our customers, our existing customers, to enhance the product, to add more and more features that the customer needs. It's all about supply chain. I don't like it, but we have to say, it's all about making things more automated and making things easier for our customers to use, to have security in the Kubernetes environment. >> So where can your customers go? Clearly, you've made a big impact on our viewers with your conversation today. 
Where are they going to be able to go to get their hands on the release? >> So you can find it online. We have a website to sign up for this program. It's on my blog. We have a blog out there for ACS cloud services. You can just go there, sign up, and we will contact the customer. >> Yeah. And there's another way, if you ever want to get your hands on it and you can do it for free, Open Source StackRox. The product is open source completely. And I would love feedback in the Slack channel. It's one of the, we also get a ton of feedback from people who aren't actually paying customers and they contribute upstream. So that's an awesome way to get started. But like you said, you can go to it if you search ACS cloud service and service preview. You don't have to be a Red Hat customer. If you're running a CNCF compliant Kubernetes version, we'd love to hear from you. >> All open source, all out in the open. >> Yep. >> Getting it available to the customers, the non-customers, the hopefully pending customers. Guys, thank you so much for joining John and me talking about the new release, the evolution of StackRox over the last 18 months. Lot of good stuff here. I think you've done a great job of getting the audience excited about what you're releasing. Thank you for your time. >> Thank you. >> Thank you. >> For our guests and for John Furrier, Lisa Martin here in Detroit, KubeCon + CloudNativeCon North America. Coming to you live, we'll be back with our next guest in just a minute. (gentle music)
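The "enforce early, block at admission only as a last resort" model discussed in this segment can be illustrated with plain Kubernetes admission control. The sketch below uses the upstream ValidatingAdmissionPolicy API (available as v1 in recent Kubernetes releases, v1beta1 on slightly older clusters) rather than ACS's own policy format, which is not shown here; the specific rule, names, and the opt-in namespace label are assumptions chosen for illustration.

```yaml
# Illustrative admission-time guardrail using upstream Kubernetes APIs,
# not ACS's policy schema: reject Deployments whose images use the :latest tag.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: disallow-latest-images
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.template.spec.containers.all(c, !c.image.endsWith(':latest'))"
      message: "Container images must be pinned to a specific tag, not :latest."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: disallow-latest-images-binding
spec:
  policyName: disallow-latest-images
  validationActions: ["Deny"]
  matchResources:
    namespaceSelector:
      matchLabels:
        policy.example.io/enforce: "true"   # hypothetical label for opting namespaces in
```

In practice the same kind of rule is surfaced much earlier, in CI scans and build-time checks, so that an admission denial stays the last-ditch control rather than the first time a developer hears about the problem.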
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Michael Foster | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Doron | PERSON | 0.99+ |
Doron Caspin | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
2021 | DATE | 0.99+ |
December | DATE | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two minutes | QUANTITY | 0.99+ |
seven plus years | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
five | DATE | 0.99+ |
one mistake | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
Supercloud | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
a week | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
two providers | QUANTITY | 0.99+ |
Two guests | QUANTITY | 0.99+ |
18 months | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Michael | PERSON | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Linux | TITLE | 0.99+ |
four years ago | DATE | 0.98+ |
five times | QUANTITY | 0.98+ |
one issue | QUANTITY | 0.98+ |
six years ago | DATE | 0.98+ |
zero day | QUANTITY | 0.98+ |
six weeks | QUANTITY | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
OpenShift | TITLE | 0.98+ |
last night | DATE | 0.98+ |
CUBE | ORGANIZATION | 0.98+ |
one image | QUANTITY | 0.97+ |
last years | DATE | 0.97+ |
First | QUANTITY | 0.97+ |
Azure Marketplace | TITLE | 0.97+ |
One thing | QUANTITY | 0.97+ |
telco | ORGANIZATION | 0.97+ |
Day one | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.96+ |
Docker Hub | TITLE | 0.96+ |
Docker Hub | ORGANIZATION | 0.96+ |
10 plus year | QUANTITY | 0.96+ |
Doron | ORGANIZATION | 0.96+ |
Project Wisdom | TITLE | 0.96+ |
day one | QUANTITY | 0.95+ |
Lego | ORGANIZATION | 0.95+ |
one change | QUANTITY | 0.95+ |
a minute | QUANTITY | 0.95+ |
ACS | TITLE | 0.95+ |
CloudNativeCon '22 | EVENT | 0.94+ |
Kubernetes | TITLE | 0.94+ |
Matt LeBlanc & Tom Leyden, Kasten by Veeam | VMware Explore 2022
(upbeat music) >> Hey everyone and welcome back to The Cube. We are covering VMware Explore live in San Francisco. This is our third day of wall to wall coverage. And John Furrier is here with me, Lisa Martin. We are excited to welcome two guests from Kasten by Veeam, please welcome Tom Leyden, VP of Marketing, and Matt LeBlanc, not Joey from Friends, Matt LeBlanc, the systems engineer from North America at Kasten by Veeam. Welcome guys, great to have you. >> Thank you. >> Thank you for having us. >> Tom-- >> Great, go ahead. >> Oh, I was going to say, Tom, talk to us about some of the key challenges customers are coming to you with. >> Key challenges that they have at this point is getting up to speed with Kubernetes. So everybody has it on their list. We want to do Kubernetes, but where are they going to start? Back when VMware came on the market, I was switching from Windows to Mac and I needed to run a Windows application on my Mac and someone told me, "Run a VM." Went to the internet, I downloaded it. And in a half hour I was done. That's not how it works with Kubernetes. So that's a bit of a challenge. >> I mean, Kubernetes, Lisa, remember the early days of The Cube, OpenStack was kind of transitioning, Cloud was booming and then Kubernetes was the paper that became the thing that pulled everybody together. It's now de facto in my mind. So that's clear, but there's a lot of different versions of it and you hear VMware, they call it the dial tone. Usually, remember, Pat Gelsinger, it's a dial tone. Turns out that came from Kit Colbert, or no, I think AJ kind of coined the term here, but it's since been there, it's been adopted by everyone. There's different versions. It's open source. AWS is involved. How do you guys look at the relationship with Kubernetes here at VMware Explore, with Kubernetes and the customers, because they have choices. They can go do it on their own. They can add a little bit with Lambda, Serverless. They can do more here. It's not easy. It's not as easy as people think it is. And then this is a skills gap problem too. We're seeing a lot of these problems out there. What's your take? >> I'll let Matt talk to that. But what I want to say first is this is also the power of the cloud native ecosystem. The days are gone where companies were selecting one enterprise application and they were building their stack with that. Today they're building applications using dozens, if not hundreds of different components from different vendors or open source platforms. And that is really what creates opportunities for those cloud native developers. So maybe you want to... >> Yeah, we're seeing a lot of hybrid solutions out there. So it's not just choosing one vendor, AKS, EKS, or Tanzu. We're seeing all of the above. I had a call this morning with a large healthcare provider and they have a hundred clusters and that's spread across AKS, EKS and GKE. So it is covering everything. Plus the need to have an on-prem solution to manage it all. >> I got a stat, I got to share that, I want to get your reactions and you can laugh or comment, whatever you want to say. Talked to a big CSO, CXO, executive, big company, I won't say the name. We got a thousand developers, a hundred of them have heard of Kubernetes, okay. 10 have touched it and used it and one's good at it. And so his point is that there's a lot of Kubernetes need that people are getting aware of. 
So it's clear it's happening and I'm over exaggerating the ratio probably. But the point is the numbers kind of make sense as a thousand developers. You start to see people getting adoption to it. They're aware of the value, but being good at it is what we're hearing is one of those things. Can you guys share your reaction to that? Is that, I mean, it's hyperbole at some level, but it does point to the fact of adoption trends. You got to get good at it, you got to know how to use it. >> It's very accurate, actually. It's what we're seeing in the market. We've been doing some research of our own, and we have some interesting numbers that we're going to be sharing soon. Analysts don't have a whole lot of numbers these days. So where we're trying to run our own surveys to get a grasp of the market. One simple survey or research element that I've done myself is I used Google trends. And in Google trends, if you go back to 2004 and you compare VMware against Kubernetes, you get a very interesting graph. What you're going to see is that VMware, the adoption curve is practically complete and Kubernetes is clearly taking off. And the volume of searches for Kubernetes today is almost as big as VMware. So that's a big sign that this is starting to happen. But in this process, we have to get those companies to have all of their engineers to be up to speed on Kubernetes. And that's one of the community efforts that we're helping with. We built a website called learning.kasten.io We're going to rebrand it soon at CubeCon, so stay tuned, but we're offering hands on labs there for people to actually come learn Kubernetes with us. Because for us, the faster the adoption goes, the better for our business. >> I was just going to ask you about the learning. So there's a big focus here on educating customers to help dial down the complexity and really get them, these numbers up as John was mentioning. >> And we're really breaking it down to the very beginning. So at this point we have almost 10 labs as we call them up and they start really from install a Kubernetes Cluster and people really hands on are going to install a Kubernetes Cluster. They learn to build an application. They learn obviously to back up the application in the safest way. And then there is how to tune storage, how to implement security, and we're really building it up so that people can step by step in a hands on way learn Kubernetes. >> It's interesting, this VMware Explore, their first new name change, but VMWorld prior, big community, a lot of customers, loyal customers, but they're classic and they're foundational in enterprises and let's face it. Some of 'em aren't going to rip out VMware anytime soon because the workloads are running on it. So in Broadcom we'll have some good action to maybe increase prices or whatnot. So we'll see how that goes. But the personas here are definitely going cloud native. They did with Tanzu, was a great thing. Some stuff was coming off, the fruit's coming off the tree now, you're starting to see it. CNCF has been on this for a long, long time, CubeCon's coming up in Detroit. And so that's just always been great, 'cause you had the day zero event and you got all kinds of community activity, tons of developer action. So here they're talking, let's connect to the developer. There the developers are at CubeCon. So the personas are kind of connecting or overlapping. I'd love to get your thoughts, Matt on? 
>> So from the personas that we're talking to, there really is a split between the traditional IT ops and a lot of the people that are here today at VMware Explore, but we're also talking with the SREs and the dev ops folks. What really needs to happen is we need to get a little bit more experience, some more training, and we need to get these two groups to really start to coordinate and work together, 'cause you're basically moving a lot of these traditional workloads from that traditional on-prem environment, and the only way to get that experience is to get your hands dirty. >> Right. >> So how would you describe the persona specifically here versus, say, CubeCon? IT ops? >> Very, very different, well-- >> They still, go ahead, explain. >> Well, I mean, from this perspective, this is all about VMware and everything that they have to offer. So we're dealing with a lot of administrators from that regard. On the Kubernetes side, we have site reliability engineers and their goal is exactly as their title describes. They want to architect applications that are very resilient and reliable and it is a different way of working. >> I was on a Twitter Spaces about SREs and dev ops and there were people saying their title's called dev ops. Like, no, no, you do dev ops, you don't really, you're not the dev ops person-- >> Right, right. >> But they become the dev ops person because you're the developer running operations. So it's been weird how dev ops has been co-opted as a position. >> And that is really interesting. One person told me earlier, when I started at Kasten, we have this new persona. It's the dev ops person. That is the person that we're going after. But then talking to a few other people who were like, "They're not falling from space." It's people who used to do other jobs who now have a more dev ops approach to what they're doing. It's not a new-- >> And then the SRE conversation, the site reliability engineer, comes from Google, from one person managing multiple clusters to how that's evolved into being the dev ops. So it's been interesting and this is really the growth of scale, the 10X developer going to more of the cloud native, which is okay, you got to run ops and make the developer go faster. If you look at the stuff we've been covering on The Cube, the trends have been cloud native developers, which I call dev ops like developers. They want to go faster. They want self-service and they don't want to slow down. They don't want to deal with BS, which is go checking security code, wait for the ops team to do something. So data and security seem to be the new ops. Not so much IT ops 'cause that's now cloud. So how do you guys see that in, because Kubernetes is rationalizing this, certainly on the compute side, not so much on storage yet, but it seems to be making things better in that grinding area between dev and these complicated ops areas like security and data, where it's constantly changing. What do you think about that? >> Well there are still a lot of specialty folks in that area in regards to security operations. The whole idea is to be able to script and automate as much as possible and not have to create a ticket to request a VM to be built or an operating system or an application deployed. They're really empowered to automatically deploy those applications and keep them up. >> And that was the old dev ops role or person. That was what dev ops was called. So again, that is standard. I think at CubeCon, that is something that's expected. >> Yes. >> You would agree with that. >> Yeah. 
>> Okay. So now translating VM World, VMware Explore to KubeCon, what do you guys see as happening between now and then? Obviously got re:Invent right at the end in that first week of December coming. So that's going to be two major shows coming in now back to back that are going to be super interesting for this ecosystem. >> Quite frankly, if you compare the persona, maybe you have to step away from comparing the personas, but really compare the conversations that we're having. The conversations that you're having at a KubeCon are really deep dives. We will have people coming into our booth and taking 45 minutes, one hour of the time of the people who are supposed to do 10 minute demos because they're asking more and more questions 'cause they want to know every little detail, how things work. The conversations here are more like, why should I learn Kubernetes? Why should I start using Kubernetes? So it's really early days. Now, I'm not saying that in a bad way. This is really exciting 'cause when you hear CNCF say that 97% of enterprises are using Kubernetes, that's obviously that small part of their world. Those are their members. We now want to see that grow to the entire ecosystem, the larger ecosystem. >> Well, it's actually a great thing, actually. It's not a bad thing, but I will counter that by saying I am hearing the conversation here, you guys'll like this on the Veeam side, the other side of the Veeam, there's deep dives on ransomware and air gap and configuration errors on backup and recovery and it's all about Veeam on the other side. Those are the guys here talking deep dive on, making sure that they don't get screwed up on ransomware, not Kubernetes, but they're going to Kube, but they're now leaning into Kubernetes. They're crossing into the new era because that's where the apps'll end up writing the code for that. >> So the funny part is all of those concepts, ransomware and recovery, they're all, there are similar concepts in the world of Kubernetes and both on the Veeam side as well as the Kasten side, we are supporting a lot of those air gap solutions and providing a ransomware recovery solution and from an air gap perspective, there are many use cases where you do need to live disconnected. It's not just the government entity, but we have customers that are cruise lines in Europe, for example, and they're disconnected. So they need to live in that disconnected world or military as well. >> Well, let's talk about the adoption of customers. I mean this is the customer side. What's accelerating their, what's the conversation with the customer base, not just here but in the industry with Kubernetes, how would you guys categorize that? And how does that get accelerated? What's the customer situation? >> A big drive to Kubernetes is really about the automation, self-service and reliability. We're seeing the drive to a reduction of resources, being able to do more with less, right? This is ongoing the way it's always been. But I was talking to a large university in Western Canada and they're a huge Veeam customer with 7000 VMs and three months ago, they said, "Over the next few years, we plan on moving all those workloads to Kubernetes." And the reason for it is really to reduce their workload, both from the administration side, cost perspective as well as on-prem resources as well. So there's a lot of good business reasons to do that in addition to the technical reliability concerns. >> So what are those specific reasons?
This is where now you start to see the rubber hit the road on acceleration. >> So I would say scale and flexibility, that ecosystem, that opportunity to choose any application from that or any tool from that cloud native ecosystem is a big driver. I wanted to add to the adoption. Another area where I see a lot of interest is everything AI, machine learning. One example is also a customer coming from Veeam. We're seeing a lot of that and that's a great thing. It's an AI company that is doing software for automated driving. They decided that VMs alone were not going to be good enough for all of their workloads. And then for select workloads, the more scalable ones where scalability was more of a topic, would move to Kubernetes. I think at this point they have like 20% of their workloads on Kubernetes and they're not planning to do away with VMs. VMs are always going to be there just like mainframes still exist. >> Yeah, oh yeah. They're accelerating actually. >> We're projecting over the next few years that we're going to go to a 50/50 and eventually lean towards more Kubernetes than VMs, but it is going to be a mix. >> Do you have a favorite customer example, Tom, that you think really articulates the value of what Kubernetes can deliver to customers where you guys are really coming in and helping to demystify it? >> I would think Sopra Steria is a really great example and you know the details about it. >> I love the Sopra Steria story. They were an AWS customer and they were running OpenShift version three and they needed to move to OpenShift version four. There is no upgrade in place. You have to migrate all your apps. Now Sopra Steria is a large French IT firm. They have over 700 developers in their environment and it was by their estimation that this was going to take a few months to get that migration done. We were able to go in there and help them with the automation of that migration and Kasten was able to help them architect that migration and we did it in the course of a weekend with two people. >> A weekend? >> A weekend. >> That's a hackathon. I mean, that's not real, come on. >> Compared to thousands of man hours and a few months, not to mention, since they were able to retire that old OpenShift cluster, the OpenShift three, they were able to stop paying Jeff Bezos for a couple of those months, which is tens of thousands of dollars per month. >> Don't tell anyone, keep that down low. You're going to get shot when you leave this place. No, seriously. This is why I think the multi-cloud hybrid is interesting because these kinds of examples are going to be more than less coming down the road. You're going to see, you're going to hear more of these stories than not hear them because what containerization, now Kubernetes, is doing, what Docker's doing now and the role of containers not being such a land grab is allowing Kubernetes to be more versatile in its approach. So I got to ask you, you can almost apply that concept to agility, to other scenarios like spanning data across clouds. >> Yes, and that is what we're seeing. So the call I had this morning with a large insurance provider, you may have that insurance provider, healthcare provider, they're across three of the major hyperscaler clouds and they do that for reliability. Last year, AWS went down, I think three times in Q4 and to have a plan of being able to recover somewhere else, you can actually plan your, it's DR, it's a planned migration. You can do that in a few hours. >> It's interesting, just the sidebar here for a second.
We had a couple chats earlier today. We had the influencers on and all the super cloud conversations and trying to get more data to share with the audience across multiple areas. One of them was Amazon and that super, the hyper clouds like Amazon, Azure, Google and the rest are out there, Oracle, IBM and everyone else. There's almost a consensus that maybe there's time for some peace amongst the cloud vendors. Like, "Hey, you've already won." (Tom laughs) Everyone's won, now let's just like, we know where everyone is. Let's go peace time and everyone, then, 'cause the relationship's not going to change between public cloud and the new world. So there's a consensus, like what does peace look like? I mean, first of all, the pie's getting bigger. You're seeing ecosystems forming around all the big new areas and that's a good thing. The tide's rising and the pie's getting bigger, there's a bigger market out there now so people can share and share. >> I've never worked for any of these big players. So I would have to agree with you, but peace would not drive innovation. And my heart is with tech innovation. I love it when vendors come up with new solutions that will make things better for customers and if that means that we're moving from on-prem to cloud and back to on-prem, I'm fine with that. >> What excites me is really having the flexibility of being able to choose any provider you want because you do have open standards, being cloud native in the world of Kubernetes. I've recently discovered that the Canadian federal government had mandated to their financial institutions that, "Yes, you may have started all of your cloud presence in Azure, you need to have an option to be elsewhere." So it's not like-- >> Well, the sovereign cloud is one of those big initiatives, but also going back to Java, we heard another guest earlier, we were thinking about Java, write once, run anywhere, right? So you can't do that today in a cloud, but now with containers-- >> You can. >> Again, this is, again, this is the point that's happening. Explain. >> So when you have, Kubernetes is a strict standard and all of the applications are written to that. So whether you are deploying MongoDB or Postgres or Cassandra or any of the other cloud native apps, you can deploy them pretty much the same, whether they're in AKS, EKS or on Tanzu and it makes it much easier. The world became just a lot less proprietary. >> So that's the story that everybody wants to hear. How does that happen in a way that is, doesn't stall the innovation and the developer growth 'cause the developers are driving a lot of change. I mean, for all the talk in the industry, the developers are doing pretty good right now. They've got a lot of open source, plentiful, open source growing like crazy. You got shifting left in the CICD pipeline. You got tools coming out with Kubernetes. Infrastructure as code is almost a 100% reality right now. So there's a lot of good things going on for developers. That's not an issue. The issue is just underneath. >> It's a skillset and that is really one of the biggest challenges I see in our deployments, a lack of experience. And it's not everyone. There are some folks that have been playing around for the last couple of years with it and they do have that experience, but there are many people that are still young at this. >> Okay, let's do, as we wrap up, let's do a lead into KubeCon, it's coming up and obviously re:Invent's right behind it. Lisa, we're going to have a lot of pre-KubeCon interviews.
We'll interview all the committee chairs, program chairs. We'll get the scoop on that, we do that every year. But while we got you guys here, let's do a little pre-pre-preview of KubeCon. What can we expect? What do you guys think is going to happen this year? What does KubeCon look like? You guys are a big sponsor of KubeCon. You guys do a great job there. Thanks for doing that. The community really recognizes that. But as Kubernetes comes in now for this year, you're looking at probably, what, the third year now that I would say Kubernetes has been on the front burner, where do you see it on the hockey stick growth? Have we kicked the curve yet? What's going to be the level of intensity for Kubernetes this year? How's that going to impact KubeCon in a way that people may or may not think it will? >> So I think first of all, KubeCon is going to be back at the level where it was before the pandemic, because the show, as many other shows, has been suffering from, I mean, virtual events are not like the in-person events. KubeCon LA was super exciting for all the vendors last year, but the attendees were not really there yet. Valencia was a huge bump already and I think Detroit, it's a very exciting city I heard. So it's going to be a blast and there's going to be huge attendance, that's what I'm expecting. Second, this is going to be, personally, my third in-person KubeCon, comparing how vendors evolved between the previous two. There's going to be a lot of interesting stories from vendors, a lot of new innovation coming onto the market. And I think the conversations that we're going to be having will yet, again, be much more about live applications and people using Kubernetes in production rather than those at the first in-person KubeCon for me in LA where it was a lot about learning still, we're going to continue to help people learn 'cause it's really important for us but the exciting part about KubeCon is you're talking to people who are using Kubernetes in production and that's really cool. >> And users contributing projects too. >> Also. >> I mean Lyft is a poster child there and you've got a lot more. Of course you got the stealth recruiting going on there, Apple, all the big guys are there. They have a booth and no one's attending, you're like, "Oh come on." Matt, what's your take on KubeCon? Going in, what do you see? And obviously a lot of dynamic new projects. >> I'm going to see much, much deeper tech conversations. As experience increases, the more you learn, the more you realize you have to learn more. >> And the sharing's going to increase too. >> And the sharing, yeah. So I see a lot of deep conversations. It's no longer the, "Why do I need Kubernetes?" It's more, "How do I architect this for my solution or for my environment?" And yeah, I think there's a lot more depth involved and the size of KubeCon is going to be much larger than we've seen in the past. >> And to finish off, what I think from the vendors' point of view, what we're going to see is a lot of applications that will be a lot more enterprise-ready because that is the part that was missing so far. It was a lot about the what's new and enabling Kubernetes. But now that adoption is going up, a lot of features for different components still need to be added to have them enterprise-ready. >> And what can the audience expect from you guys at KubeCon? Any teasers you can give us from a marketing perspective? >> Yes. We have a rebranding sitting ready for the learning website. It's going to be bigger and better.
So we're no longer going to call it learning.kasten.io, but I'll be happy to come back with you guys and present a new name at KubeCon. >> All right. >> All right. That sounds like a deal. Guys, thank you so much for joining John and me breaking down all things Kubernetes, talking about customer adoption, the challenges, but also what you're doing to demystify it. We appreciate your insights and your time. >> Thank you so much. >> Thank you very much. >> Our pleasure. >> Thanks, Matt. >> For our guests and John Furrier, I'm Lisa Martin. You've been watching The Cube's live coverage of VMware Explore 2022. Thanks for joining us. Stay safe. (gentle music)
Shreyans Mehta, Cequence Security | AWS re:Inforce 2022
(gentle upbeat music) >> Okay, welcome back everyone to theCUBE's live coverage here in Boston, Massachusetts for AWS RE:INFORCE 22. I'm John Furrier, your host with Dave Vellante co-host of theCUBE, and Shreyans Mehta, CTO and founder of Cequence Security. CUBE alumni, great to see you. Thanks for coming on theCUBE. >> Yeah. Thanks for having me here. >> So when we chatted you were part of the startup showcase. You guys are doing great. Congratulations on your business success. I mean, you guys got a good product in a hot market. >> Yeah. >> You're here, before we get into it, I want to get your perspective on the keynote and the talk tracks here and the show. But for the folks that don't know you guys, explain what you guys, take a minute to explain what you guys do and, and key product. >> Yeah, so we are the unified API protection place, but I mean a lot of people don't know what unified API protection is but before I get into that, just talking about Cequence, we've been around since 2014. But we are protecting close to 6 billion API transactions every day. We are protecting close to 2 billion customer accounts, more than 2 trillion dollars in customer assets and a hundred million plus sort of, data points that we look at across our customer base. That's who we are. >> I mean, of course we all know APIs are the basis of cloud computing and you got successful companies like Stripe, for instance, you know, you put an API and you got a financial gateway, billions of transactions. What are the learnings? And now we're in a mode now where single point of failure is a problem. You got more automation, you got more reasoning coming, a lot more computer science, next-gen ML, AI there too. More connections, no perimeter. Right? More and more use cases, more in the cloud. >> Yeah. So what, what we are seeing today is, I mean from six years ago to now, when we started, right? Like the monolith apps are breaking down into microservices, right? What effectively, what that means is like every such microservice is talking APIs, right? So what used to be a few million web applications have now become billions of APIs that are communicating with each other. I mean, if you look at the, I mean, you spoke about IOT earlier, I call, I call like a Tesla is an application on four wheels that is communicating to its cloud over APIs. So everything is API. Yesterday, 80% of traffic on the internet was APIs. >> Now that's data in transit right there. (laughing) Couldn't resist. >> Yeah. >> Fully encrypted too. >> Yeah. >> Yeah, well hopefully. >> Maybe, maybe, maybe. (laughing) We dunno yet, but seriously everything is talking to an API. >> Yeah. >> Every application. >> Yeah. And, and there is no single choke point, right? Like you spoke about it. Like everybody is hosting their application in the cloud environments of their choice, AWS being one of them. But it's not the only one. Right? The, the, your APIs are hosted behind a CDN. Your APIs are hosted behind an API gateway, behind a load balancer, ingress controllers. There is no single-- >> So what's the problem? What's the problem now that you're solving? Because one was probably I can imagine connecting people, connecting the APIs. Now you've got more operational data. >> Yeah. >> Potential security hacks? More surface area? What's the, what are you facing? >> Well, I can speak about some of the, our, some of the well known sort of exploits that have been well published, right. Everybody gets exploited, but I mean some of the well knowns.
Now, if you, if you heard about Experian last year, there was a third party API that was exposing your credit scores without proper authentication. Like Facebook had a BOLA vulnerability some time ago, where people could actually edit somebody else's videos online. Peloton again, a well known one. So like everybody is exposed, right. But that is the, the end result. All right? But it all starts with, people don't even know where their APIs are and then you have to secure it all the way. So, I mean, ultimately APIs are prone to business logic attacks, fraud, and that's what, what you need to go ahead and protect. >> So is that the first question is, okay, what APIs do I need to protect? I got to take an API portfolio inventory. Is that? >> Yeah, so I think the starting point is, where? Where are my APIs? Right, so we spoke about there's no single choke point. Right, so APIs could be in, in your cloud environment, APIs could be behind your CloudFront, like we have here at RE:INFORCE today. So APIs could be behind your AKS, ingress controllers, API gateways. And it's not limited to AWS alone, right. So, so knowing the unknown is, is the number one problem. >> So how do I find them? I ask Fred, "Hey, where are our APIs?" No, you must have some automated tooling to help me. >> Yeah, so, I, Cequence provides an option without any integration, what we call the API Spider. Where, like, we give you visibility into your entire API attack surface without any integration into any of these services. Where are your APIs? What's your API attack surface about? And then sort of more details around that as well. But that is the number one. >> Is that agentless or is that an agent? >> There's no agent. So that means you can just sign up on our portal and then, then, then fire it away. And within a few minutes to an hour, we'll give you complete visibility into where your APIs are. >> So is it a full audit or is it more of a discovery? >> Or both? >> So, so number one, it's discovery, but we are also uncovering some of the potential vulnerabilities through zero knowledge. Right? So. (laughing) So, we've seen a ton of Log4j exposed servers still. Like recently, there was an article that Log4j is going to be endemic. That is going to be here. >> Long time. >> (laughs) For, for a very long time. >> Where's your mask on that one? That's the Covid of security. >> Yeah. Absolutely absolutely. So, you need to know where your assets are, what are they exposing? So, so that is the first step effectively, discovering your attack surface. Yeah. >> I'm sure it's an efficiency issue too, with developers. The, having the spider allows you to at least see what's connecting out there versus having a meeting and going through code reviews. >> Yeah. Right? Is that another big part of it? >> So, it is actually the last step, but you have, you actually go through a journey. So, so effectively, once you're discovering your assets, you actually need to catalog them. Right. So, so I know where they're hosted, but what are developers actually rolling out? Right. So they are updating your, the API endpoints on a daily basis, if not hourly basis. They have the CI/CD pipelines. >> It's DevOps. (laughing) >> Welcome to DevOps. It's actually why we'll do it. >> Yeah, and people have actually in the past created manual ways to catalog their APIs. And that doesn't really work in this new world. >> Humans are terrible at manual catalogization. >> Exactly. So, cataloging is really the next step for them.
>> So you have tools for that, that automate that using math, presumably. >> Exactly. And then we can, we can integrate with all these different choke points that we spoke about. There's no single choke point. So in any cloud or any on-prem environment where we actually integrate and give you that catalog of your APIs, that becomes your second step really. >> Yeah. >> Okay, so. >> What's the third step? There's the third step and then compliance. >> Compliance is the next one. So basically catalog >> There's four steps. >> Actually, six. So I'll go. >> Discovery, catalog, then compliance. >> Yeah. Compliance is the next one. So compliance is all about, okay, I've cataloged them, but what are they really exposing? Right. So there could be PII information. There could be credit card information, health information. So, I will treat every API differently based on the information that they're actually exposing. >> So that gives you a risk assessment essentially. >> Exactly. So you can, you can then start looking into, okay, I might have a few thousand API endpoints, like, where do I prioritize? So based on the risk exposure associated with it, then I can start my journey of protecting so. >> That, that's the remediation, that's fixing it. >> Okay. Keep going. So that's, what's four. >> Four. That was that one, fixing. >> Yeah. >> Four is the risk assessment? >> So number four is detecting abuse. >> Okay. >> So now that I know my APIs and each API is exposing different business logic. So based on the business you are in, you might have login endpoints, you might have new account creation endpoints. You might have things around shopping, right? So pricing information, all exposed through APIs. So every business has a business logic that they end up exposing. And then the bad guys are abusing them. In terms of scraping pricing information, it could be competitors scraping pricing, we are seeing account takeovers. So detecting abuse is the first step, right? The fifth one is about preventing that because just getting visibility into abuse is not enough. I should be able to, to detect and prevent, natively on the platform. Because if you send signals to third party platforms like your labs, it's already too late and it's too coarse-grained to be able to act on it. And the last step is around what you actually spoke about, developers, right? Like, can I shift security towards the left, but it's not just about shifting left. Obviously, you want to bring in security to your CICD pipelines, to your developers, so that you have a full spectrum of API security. >> Sure enough. Dave and I were talking earlier about like how cloud operations needs to look the same. >> Yeah. >> On cloud, premise and edge. >> Yes. Absolutely. >> Edge is a wild card. 'Cause it's growing really fast. It's changing. How do you do that? 'Cause these APIs will be everywhere. >> Yeah. >> How are you guys going to rein that in? What's the customer's journey with you as they need to architect, not just deploy, but how do you engage with the customer who says, "I have my environment. I'm going to have something on premise and edge. I'll use some other clouds too. But I got to have an operating environment." >> Yeah. "That's pure cloud."
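To make the catalog-and-compliance steps described above a bit more concrete, here is a minimal sketch of how one might score and prioritize discovered API endpoints by the sensitivity of the data they expose. The endpoint paths, field names, and weights are invented for illustration and are not Cequence's actual product logic.

```python
# Hypothetical sketch: classify discovered API endpoints by the sensitivity
# of the data they expose so protection work can be prioritized.
# Endpoint paths, field names, and scoring weights are invented for illustration.

SENSITIVE_FIELDS = {
    "ssn": 10, "credit_card": 10, "health_record": 9,
    "email": 5, "phone": 4, "username": 2,
}

def risk_score(exposed_fields):
    """Sum the weights of the sensitive fields an endpoint returns."""
    return sum(SENSITIVE_FIELDS.get(f, 0) for f in exposed_fields)

def prioritize(catalog):
    """Order a discovered API catalog by descending risk."""
    scored = [(risk_score(e["fields"]), e["path"]) for e in catalog]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    catalog = [
        {"path": "/v1/login", "fields": ["username", "email"]},
        {"path": "/v1/accounts/{id}", "fields": ["ssn", "credit_card"]},
        {"path": "/v1/pricing", "fields": []},
    ]
    for score, path in prioritize(catalog):
        print(f"{score:3d}  {path}")
```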
So you need a unified view because every gateway will have a different protection place and you can't deal with 5 or 15 different tools across your various different environments. So you, what we provide is a unified view, number one and the unified way to protect those applications. So think of it like you have a data plane that is sprinkled around wherever your edges and gateways and risk controllers are and you have a central brains to actually manage it, in one place in a unified way. >> I have a computer science or computer architecture question for you guys. So Steven Schmidt again said single controls or binary states will fail. Obviously he's talking from a security standpoint but I remember the days where you wanted a single point of control for recovery, you talked about microservices. So what's the philosophy today from a recovery standpoint not necessarily security, but recovery like something goes wrong? >> Yeah. >> If I don't have a single point of control, how do I ensure consistency? So do I, do I recover at the microservice level? What's the philosophy today? >> Yeah. So the philosophy really is, and it's very much driven by your developers and how you want to roll out applications. So number one is applications will be more rapidly developed and rolled out than in the past. What that means is you have to empower your developers to use any cloud and serverless environments of their choice and it will be distributed. So there's not going to be a single choke point. What you want is an ability to integrate into that life cycle and centrally manage that. So there's not going to be a single choke point but there is going to be a single control plane to manage them off, right. >> Okay. >> So you want that unified, unified visibility and protection in place to be able to protect these. >> So there's your single point of control? What about the company? You're in series C you've raised, I think, over a hundred million dollars, right? So are you, where are you at? Are you scaling now? Are you hiring sales people or you still trying to sort of be careful about that? Can you help us understand where you're at? >> Yeah. So we are absolutely scaling. So, we've built a product that is getting, that is deployed already in all these different verticals like ranging from finance, to detail, to social, to telecom. Anybody who has exposure to the outside world, right. So product that can scale up to those demands, right? I mean, it's not easy to scale up to 6 billion requests a day. So we've built a solid platform. We've rolled out new products to complete the vision. In terms of the API spider, I spoke about earlier. >> The unified, >> The unified API protection covers three aspects or all aspects of API life cycle. We are scaling our teams from go to market motion. We brought in recently our chief marketing officer our chief revenue officer as well. >> So putting all the new, the new pieces in place. >> Yeah. >> So you guys are like API observability on steroids. In a way, right? >> Yeah, absolutely. >> Cause you're doing the observability. >> Yes. >> You're getting the data analysis for risk. You're having opportunities and recommendations around how to manage the stealthy attacks. >> From a full protection perspective. >> You're the API store. >> Yeah. >> So you guys are what we call best of breed. This is a trend we're seeing, pick something that you're best in breed in. >> Absolutely. >> And nail it. So you're not like an observability platform for everything. >> No. 
>> You guys pick the focus. >> Specifically, APIs. And, so basically your, you can have your existing tools in place. You will have your CDN, you will have your WAFs in place. So, but for API protection, you need something specialized and that's us. >> Explain why I can't just rely on CDN infrastructure for this. >> So, CDNs are, are good for content delivery. They do your basic TLS, and things like that. But APIs are all about your applications and business that you're exposing. >> Okay, so you, >> You have no context around that. >> So, yeah, 'cause this is, this is a super cloud vision that we're seeing of structural change in the industry, a new thing that's happening in real time. Companies like yours are keeping a focus and nailing it. And now the customers can assemble these services and company-- >> Yeah. >> Capabilities, that's happening. And it's happening like right now, structural change has happened. That's called the cloud. >> Yes. >> Cloud scale. Now this new change, best of breed, what are the gaps? Because I'm a customer. I got you for APIs, done. You take the complexity away at scale. I trust you. Where are the other gaps in my architecture? What's new? 'Cause I want to run cloud operations across all environments and across clouds when appropriate. >> Yeah. >> So I need to have a full op. Where are the other gaps? Where are the other best of breed components that need to be developed? >> So it's about layered, the layers that you built. Right? So, the thing is, you're bringing in different cloud environments. That is your infrastructure, right? You either rely on the cloud provider for your security around that, for roll outs and operations. Right? So then there is going to be the next layer, which is about, is it serverless? Is it Kubernetes? What about it? So you'll think about like a service mesh type environment. Ultimately it's all about applications, right? That's, then you're going to roll out those applications. And that's where we actually come in. Wherever you're rolling out your applications, we come in baked into that environment, and give you that visibility and control, protection around that. >> Wow, great. First of all, APIs are what cloud is based on. So can't go wrong there. It's not a, not a headwind for you guys. >> Absolutely. >> Great. Give a quick plug for the company. What are you guys looking to do, hire? Get customers? Who's, uh, when, what, what's the pitch? >> So like I said earlier, Cequence is around unified API protection, protecting around the full life cycle of your APIs, ranging from discovery all the way to, to testing. So, helping you throughout the, the life cycle of APIs, wherever those APIs are, in any cloud environment, on-prem or in the cloud, in your serverless environments. That's what Cequence is about. >> And you're doing billions of transactions. >> We're doing 6 billion requests every day. (laughing) >> Which is uh, which is, >> A lot. >> Unheard of for a lot of companies here on the floor today. >> Sure is. Thanks for coming on theCUBE, sure appreciate it. >> Yeah. >> Good, congratulations on your success. >> Thank you. >> Cequence Security here on theCUBE at RE:INFORCE. I'm chatting with Dave Vellante, more coverage after this short break. (upbeat, gentle music)
Haseeb Budhani, Rafay & Kevin Coleman, AWS | AWS Summit New York 2022
(gentle music) (upbeat music) (crowd chattering) >> Welcome back to The City That Never Sleeps. Lisa Martin and John Furrier in New York City for AWS Summit '22 with about 10 to 12,000 of our friends. And we've got two more friends joining us here today. We're going to be talking with Haseeb Budhani, one of our alumni, co-founder and CEO of Rafay Systems, and Kevin Coleman, senior manager for Go-to Market for EKS at AWS. Guys, thank you so much for joining us today. >> Thank you very much for having us. Excited to be here. >> Isn't it great to be back at an in-person event with 10, 12,000 people? >> Yes. There are a lot of people here. This is packed. >> A lot of energy here. So, Haseeb, we've got to start with you. Your T-shirt says it all. Don't hate k8s. (Kevin giggles) Talk to us about some of the trends, from a Kubernetes perspective, that you're seeing, and then Kevin will give your follow-up. >> Yeah. >> Yeah, absolutely. So, I think the biggest trend I'm seeing on the enterprise side is that enterprises are forming platform organizations to make Kubernetes a practice across the enterprise. So it used to be that a BU would say, "I need Kubernetes. I have some DevOps engineers, let me just do this myself." And the next one would do the same, and then next one would do the same. And that's not practical, long term, for an enterprise. And this is now becoming a consolidated effort, which is, I think it's great. It speaks to the power of Kubernetes, because it's becoming so important to the enterprise. But that also puts a pressure because what the platform team has to solve for now is they have to find this fine line between automation and governance, right? I mean, the developers, you know, they don't really care about governance. Just give me stuff, I need to compute, I'm going to go. But then the platform organization has to think about, how is this going to play for the enterprise across the board? So that combination of automation and governance is where we are finding, frankly, a lot of success in making enterprise platform team successful. I think, that's a really new thing to me. It's something that's changed in the last six months, I would say, in the industry. I don't know if, Kevin, if you agree with that or not, but that's what I'm seeing. >> Yeah, definitely agree with that. We see a ton of customers in EKS who are building these new platforms using Kubernetes. The term that we hear a lot of customers use is standardization. So they've got various ways that they're deploying applications, whether it's on-prem or in the cloud and region. And they're really trying to standardize the way they deploy applications. And Kubernetes is really that compute substrate that they're standardizing on. >> Kevin, talk about the relationship with Rafay Systems that you have and why you're here together. And two, second part of that question, why is EKS kicking ass so much? (Haseeb and Kevin laughing) All right, go ahead. First one, your relationship. Second one, EKS is doing pretty well. >> Yep, yep, yep. (Lisa laughing) So yeah, we work closely with Rafay, Rafay, excuse me. A lot of joint customer wins with Haseeb and Co, so they're doing great work with EKS customers and, yeah, love the partnership there. In terms of why EKS is doing so well, a number of reasons, I think. Number one, EKS is vanilla, upstream, open-source Kubernetes. So customers want to use that open-source technology, that open-source Kubernetes, and they come to AWS to get it in a managed offering, right? 
Kubernetes isn't the easiest thing to self-manage. And so customers, you know, back before EKS launched, they were banging down the door at AWS for us to have a managed Kubernetes offering. And, you know, we launched EKS and there's been a ton of customer adoption since then. >> You know, Lisa, when we, theCUBE 12 years, now everyone knows we started in 2010, we used to cover a show called OpenStack. >> I remember that. >> OpenStack Summit. >> What's that now? >> And at the time, at that time, Kubernetes wasn't there. So theCUBE was present at creation. We've been to every KubeCon ever, CNCF then took it over. So we've been watching it from the beginning. >> Right. And it reminds me of the same trend we saw with MapReduce and Hadoop. Very big promise, everyone loved it, but it was hard, very difficult. And Hadoop's case, big data, it ended up becoming a data lake. Now you got Spark, or Snowflake, and Databricks, and Redshift. Here, Kubernetes has not yet been taken over. But, instead, it's being abstracted away and or managed services are emerging. 'Cause general enterprises can't hire enough Kubernetes people. >> Yep. >> They're not that many out there yet. So there's the training issue. But there's been the rise of managed services. >> Yep. >> Can you guys comment on what your thoughts are relative to that trend of hard to use, abstracting away the complexity, and, specifically, the managed services? >> Yeah, absolutely. You want to go? >> Yeah, absolutely. I think, look, it's important to not kid ourselves. It is hard. (Johns laughs) But that doesn't mean it's not practical, right. When Kubernetes is done well, it's a thing of beauty. I mean, we have enough customer to scale, like, you know, it's like a, forget a hockey stick, it's a straight line up, because they just are moving so fast when they have the right platform in place. I think that the mistake that many of us make, and I've made this mistake when we started this company, was trivializing the platform aspect of Kubernetes, right. And a lot of my customers, you know, when they start, they kind of feel like, well, this is not that hard. I can bring this up and running. I just need two people. It'll be fine. And it's hard to hire, but then, I need two, then I need two more, then I need two, it's a lot, right. I think, the one thing I keep telling, like, when I talk to analysts, I say, "Look, somebody needs to write a book that says, 'Yes, it's hard, but, yes, it can be done, and here's how.'" Let's just be open about what it takes to get there, right. And, I mean, you mentioned OpenStack. I think the beauty of Kubernetes is that because it's such an open system, right, even with the managed offering, companies like Rafay can build really productive businesses on top of this Kubernetes platform because it's an open system. I think that is something that was not true with OpenStack. I've spent time with OpenStack also, I remember how it is. >> Well, Amazon had a lot to do with stalling the momentum of OpenStack, but your point about difficulty. Hadoop was always difficult to maintain and hiring against. There were no managed services and no one yet saw that value of big data yet. Here at Kubernetes, people are living a problem called, I'm scaling up. >> Yep. And so it sounds like it's a foundational challenge. The ongoing stuff sounds easier or manageable. >> Once you have the right tooling. >> Is that true? >> Yeah, no, I mean, once you have the right tooling, it's great. 
I think, look, I mean, you and I have talked about this before, I mean, the thesis behind Rafay is that, you know, there's like 8, 12 things that need to be done right for Kubernetes to work well, right. And my whole thesis was, I don't want my customer to buy 10, 12, 15 products. I want them to buy one platform, right. And I truly believe that, in our market, similar to what vCenter, like what VMware's vCenter did for VMs, I want to do that for Kubernetes, right. And that the reason why I say that is because, see, vCenter is not about hypervisors, right? vCenter is about hypervisor, access, networking, storage, all of the things, like multitenancy, all the things that you need to run an enterprise-grade VM environment. What is that equivalent for the Kubernetes world, right? So what we are doing at Rafay is truly building a vCenter, but for Kubernetes, like a kCenter. I've tried getting the domain. I couldn't get it. (Kevin laughs) >> Well, after the Broadcom view, you don't know what's going to happen. >> Ehh. (John laughs) >> I won't go there! >> Yeah. Yeah, let's not go there today. >> Kevin, EKS, I've heard people say to me, "Love EKS. Just add serverless, that's a home run." There's been a relationship with EKS and some of the other Amazon tools. Can you comment on what you're seeing as the most popular interactions among the services at AWS? >> Yeah, and was your comment there, add serverless? >> Add serverless with AKS at the edge- >> Yeah. >> and things are kind of interesting. >> I mean, so, one of the serverless offerings we have today is actually Fargate. So you can use Fargate, which is our serverless compute offering, or one of our serverless compute offerings with EKS. And so customers love that. Effectively, they get the beauty of EKS and the Kubernetes API but they don't have to manage nodes. So that's, you know, a good amount of adoption with Fargate as well. But then, we also have other ways that they can manage their nodes. We have managed node groups as well, in addition to self-managed nodes also. So there's a variety of options that customers can use from a compute perspective with EKS. And you'll continue to see us evolve the portfolio as well. >> Can you share, Haseeb, can you share a customer example, a joint customer example that you think really articulates the value of what Rafay and AWS are doing together? >> Yeah, absolutely. In fact, we announced a customer very recently on this very show, which is MoneyGram, which is a joint AWS and Rafay customer. Look, we have enough, you know, the thing about these massive customers is that, you know, not everybody's going to give us their logo to use. >> Right. >> But MoneyGram has been a Rafay plus EKS customer for a very, very long time. You know, at this point, I think we've earned their trust, and they've allowed us to, kind of say this publicly. But there's enough of these financial services companies who have, you know, standardized on EKS. So it's EKS first, Rafay second, right. They standardized on EKS. And then they looked around and said, "Who can help me platform EKS across my enterprise?" And we've been very lucky. We have some very large financial services, some very large healthcare companies now, who, A, EKS, B, Rafay. I'm not just saying that because my friend Kevin's here, (Lisa laughs) it's actually true. Look, EKS is a brilliant platform. It scales so well, right. I mean, people try it out, relative to other platforms, and it's just a no-brainer, it just scales. 
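For readers curious what the Fargate-with-EKS option mentioned above looks like in practice, here is a minimal boto3 sketch that attaches a Fargate profile to an existing EKS cluster so pods in a chosen namespace run without managed or self-managed nodes. The cluster name, IAM role ARN, subnets, and namespace are placeholder assumptions, not values from the conversation.

```python
# Hedged sketch: attach a Fargate profile to an existing EKS cluster so that
# pods in a chosen namespace run serverless (no nodes to manage). The cluster
# name, IAM role ARN, subnets, and namespace below are placeholder assumptions.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_fargate_profile(
    fargateProfileName="demo-serverless",
    clusterName="demo-cluster",                      # existing EKS cluster
    podExecutionRoleArn="arn:aws:iam::123456789012:role/DemoFargatePodRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],  # private subnets
    selectors=[{"namespace": "serverless-apps"}],    # pods here land on Fargate
)

print(response["fargateProfile"]["status"])  # typically "CREATING"
```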
You want to build a big enterprise on the backs of a Kubernetes platform. And I'm not saying that's because I'm biased. Like EKS is really, really good. There's a reason why so many companies are choosing it over many other options in the market. >> You're doing a great job of articulating why the theme (Kevin laughs) of the New York City Summit is scale anything. >> Oh, yeah. >> There you go. >> Oh, yeah. >> I did not even know that but I'm speaking the language, right? >> You are. (John laughs) >> Yeah, absolutely. >> One of the things that we're seeing, also, I want to get your thoughts on, guys, is the app modernization trend, right? >> Yep. >> Because unlike other standards that were hard, that didn't have any benefit downstream 'cause they were too hard to get to, here, Kubernetes is feeding into real app for app developer pressure. They got to get cloud-native apps out. It's fairly new in the mainstream enterprise and a lot of hyperscalers have experience. So I'm going to ask you guys, what is the key thing that you're enabling with Kubernetes in the cloud-native apps? What is the key value? >> Yeah. >> I think, there's a bifurcation happening in the market. One is the Kubernetes Engine market, which is like EKS, AKS, GKE, right. And then there's the, you know, what, back in the day, we used to call operations and management, right. So the OAM layer for Kubernetes is where there's need, right. People are learning, right. Because, as you said before, the skill isn't there, you know, there's not enough talent available to the market. And that's the opportunity we're seeing. Because to solve for the standardization, the governance, and automation that we talked about earlier, you know, you have to solve for, okay, how do I manage my network? How do I manage my service mesh? How do I do chargebacks? What's my, you know, policy around actual Kubernetes policies? What's my blueprinting strategy? How do I do add-on management? How do I do pipelines for updates of add-ons? How do I upgrade my clusters? And we're not done yet, there's a longer list, right? This is a lot, right? >> Yeah. >> And this is what happens, right. It's just a lot. And really, the companies who understand that plethora of problems that need to be solved and build easy-to-use solutions that enterprises can consume with the right governance automation, I think they're going to be very, very successful here. >> Yeah. >> Because this is a train, right? I mean, this is happening whether, it's not us, it's happening, right? Enterprises are going to keep doing this. >> And open-source is a big driver in all of this. >> Absolutely. >> Absolutely. >> And I'll tag onto that. I mean, you talked about platform engineering earlier. Part of the point of building these platforms on top of Kubernetes is giving developers an easier way to get applications into the cloud. So building unique developer experiences that really make it easy for you, as a software developer, to take the code from your laptop, get it out of production as quickly as possible. The question is- >> So is that what you mean, does that tie your point earlier about that vertical, straight-up value once you've set up it, right? >> Yep. >> Because it's taking the burden off the developers for stopping their productivity. >> Absolutely. >> To go check in, is it configured properly? Is the supply chain software going to be there? Who's managing the services? Who's orchestrating the nodes? >> Yep. >> Is that automated, is that where you guys see the value? 
>> That's a lot of what we see, yeah. In terms of how these companies are building these platforms, it's taking all the component pieces that Haseeb was talking about and really putting it into a cohesive whole. And then, you, as a software developer, you don't have to worry about configuring all of those things. You don't have to worry about security policy, governance, how your app is going to be exposed to the internet. >> It sounds like infrastructure as code. >> (laughs) Yeah. >> Come on, like. >> (laughs) Infrastructure as code is a big piece of it, for sure, for sure. >> Yeah, look, infrastructure as code actually- >> Infrastructure sec as code too, the security. >> Yeah. >> Huge. >> Well, it all goes together. Like, we talk about developer self-service, right? The way we enable developer self-service is by teaching developers, here's a snippet of code that you write and you check it in and your infrastructure will just magically be created. >> Yep. >> But not automatically. It's going to go through a check, like a check through the platform team. These are the workflows that if you get them right, developers don't care, right. All developers want is, I want to compute. But then all these 20 things need to happen in the back. That's what, if you nail it, right, I mean, I keep trying to kind of pitch the company, I don't want to do that today. But if you nail that, >> I'll give you a plug at the end. >> you have a good story. >> But I got to, I just have a tangent question 'cause you reminded me. There's two types of developers that have emerged, right. You have the software developer that wants infrastructure as code. I just want to write my code, I don't want to stop. I want to build in shift-left for security, shift-right for data. All that's in there. >> Right. >> I'm coding away, I love coding. Then you've got the under-the-hood person. >> Yes. >> I've been to the engines. >> Certainly. >> So that's more of an SRE, data engineer, I'm wiring services together. >> Yeah. >> A lot of people are like, they don't know who they are yet. They're in college or they're transforming from an IT job. They're trying to figure out who they are. So question is, how do you tell a person that's watching, like, who am I? Like, should I be just coding? But I love the tech. Would you guys have any advice there? >> You know, I don't know if I have any guidance in terms of telling people who they are. (all laughing) I mean, I think about it in terms of a spectrum and this is what we hear from customers, is some customers want to shift as much responsibility onto the software teams to manage their infrastructure as well. And then some want to shift it all the way over to the very centralized model. And, you know, we see everything in between as well with our EKS customer base. But, yeah, I'm not sure if I have any direct guidance for people. >> Let's see, any wisdom? >> Aside from experiment. >> If you're coding more, you're a coder. If you like to play with the hardware, >> Yeah. >> or the gears. >> Look, I think it's really important for managers to understand that developers, yes, they have a job, you have to write code, right. But they also want to learn new things. It's only fair, right. >> Oh, yeah. >> So what we see is, developers want to learn. And we enable them to understand Kubernetes in small pieces, like small steps, right. And that is really, really important because if we completely abstract things away, like Kubernetes, from them, it's not good for them, right.
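A small sketch of the "developer self-service plus a platform-team check" workflow described above: a CI step that rejects a Kubernetes manifest unless it carries an owning-team label and CPU/memory limits. The specific rules are invented examples of platform policy, not any particular team's or vendor's standard.

```python
# Illustrative CI gate for developer self-service: reject a Kubernetes manifest
# unless it carries an owning-team label and CPU/memory limits. The rules here
# are invented examples of platform policy, not any particular team's standard.
import sys
import yaml  # pip install pyyaml

REQUIRED_LABEL = "team"

def violations(manifest):
    problems = []
    labels = manifest.get("metadata", {}).get("labels", {})
    if REQUIRED_LABEL not in labels:
        problems.append(f"missing metadata.labels.{REQUIRED_LABEL}")
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for c in containers:
        limits = c.get("resources", {}).get("limits", {})
        if "cpu" not in limits or "memory" not in limits:
            problems.append(f"container '{c.get('name')}' lacks cpu/memory limits")
    return problems

if __name__ == "__main__":
    failed = False
    with open(sys.argv[1]) as f:
        for doc in yaml.safe_load_all(f):
            for p in violations(doc or {}):
                print("POLICY VIOLATION:", p)
                failed = True
    sys.exit(1 if failed else 0)
```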
It's good for their careers also, right. It's good for them to learn these things. This is going to be with us for the next 15, 20 years. Everybody should learn it. But I want to learn it because I want to learn, not because this is part of my job, and that's the distinction, right. I don't want this to become my job because I want, I want to write my code. >> Do what you love. If you're more attracted to understanding how automation works, and robotics, or making things scale, you might be under-the-hood. >> Yeah. >> Yeah, look under the hood all day long. But then, in terms of, like, who keeps the lights on for the cluster, for example. >> All right, see- >> That's the job. >> He makes a lot of value. Now you know who you are. Ask these guys. (Lisa laughing) Congratulations on your success on EKS 2. >> Yeah, thank you. >> Quick, give a plug for the company. I know you guys are growing. I want to give you a minute to share to the audience a plug that's going to be, what are you guys doing? You're hiring? How many employees? Funding? Customer new wins? Take a minute to give a plug. >> Absolutely. And look, I come see, John, I think, every show you guys are doing a summit or a KubeCon, I'm here. (John laughing) And every time we come, we talk about new customers. Look, platform teams at enterprises seem to love Rafay because it helps them build that, well, Kubernetes platform that we've talked about on the show today. I think, many large enterprises on the financial service side, healthcare side, digital native side seem to have recognized that running Kubernetes at scale, or even starting with Kubernetes in the early days, getting it right with the right standards, that takes time, that takes effort. And that's where Rafay is a great partner. We provide a great SaaS offering, which you can have up and running very, very quickly. Of course, we love EKS. We work with our friends at AWS. But also works with Azure, we have enough customers in Azure. It also runs in Google. We have enough customers at Google. And it runs on-premises with OpenShift or with EKS A, right, whichever option you want to take. But in terms of that standardization and governance and automation for your developers to move fast, there's no better product in the market right now when it comes to Kubernetes platforms than Rafay. >> Kevin, while we're here, why don't you plug EKS too, come on. >> Yeah, absolutely, why not? (group laughing) So yes, of course. EKS is AWS's managed Kubernetes offering. It's the largest managed Kubernetes service in the world. We help customers who want to adopt Kubernetes and adopt it wherever they want to run Kubernetes, whether it's in region or whether it's on the edge with EKS A or running Kubernetes on Outposts and the evolving portfolio of EKS services as well. We see customers running extremely high-scale Kubernetes clusters, excuse me, and we're here to support them as well. So yeah, that's the managed Kubernetes offering. >> And I'll give the plug for theCUBE, we'll be at KubeCon in Detroit this year. (Lisa laughing) Lisa, look, we're giving a plug to everybody. Come on. >> We're plugging everybody. Well, as we get to plugs, I think, Haseeb, you have a book to write, I think, on Kubernetes. And I think you're wearing the title. >> Well, I do have a book to write, but I'm one of those people who does everything at the very end, so I will never get it right. (group laughing) So if you want to work on it with me, I have some great ideas. >> Ghostwriter. >> Sure! >> But I'm lazy. 
(Kevin chuckles) >> Ooh. >> So we got to figure something out. >> Somehow I doubt you're lazy. (group laughs) >> No entrepreneur's lazy, I know that. >> Right? >> You're being humble. >> He is. So Haseeb, Kevin, thank you so much for joining John and me today, >> Thank you. >> talking about what you guys are doing at Rafay with EKS, the power, why you shouldn't hate k8s. We appreciate your insights and your time. >> Thank you as well. >> Yeah, thank you very much for having us. >> Our pleasure. >> Thank you. >> We appreciate it. With John Furrier, I'm Lisa Martin. You're watching theCUBE live from New York City at the AWS NYC Summit. John and I will be right back with our next guest, so stick around. (upbeat music) (gentle music)
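As a rough illustration of the developer self-service pattern Haseeb describes above, where a developer checks in a small snippet and the platform's guardrails (quotas, labels, approvals) are generated and reviewed before any infrastructure is created, here is a minimal Python sketch. The team names, quota values, and generated objects are invented for illustration and the example assumes the PyYAML package; this is not Rafay's or EKS's actual workflow, just the general shape of it.

```python
# Minimal sketch of a developer self-service request: the developer commits a
# tiny declarative request; a platform pipeline (not shown) reviews the rendered
# manifests and applies them. All names and values here are illustrative.
import yaml  # PyYAML

# What the developer writes and checks in; intentionally small.
request = {
    "team": "payments",
    "app": "checkout-api",
    "environment": "dev",
    "cpu_limit": "2",
    "memory_limit": "4Gi",
}

def render_manifests(req: dict) -> str:
    """Expand the short request into the Kubernetes objects the platform owns."""
    namespace_name = f"{req['team']}-{req['app']}-{req['environment']}"
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": namespace_name,
            "labels": {"team": req["team"], "managed-by": "platform"},
        },
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "default-quota", "namespace": namespace_name},
        "spec": {"hard": {"limits.cpu": req["cpu_limit"],
                          "limits.memory": req["memory_limit"]}},
    }
    # Guardrails (quota, labels) are generated by the platform, so developers
    # never hand-write this YAML themselves.
    return yaml.safe_dump_all([namespace, quota], sort_keys=False)

if __name__ == "__main__":
    print(render_manifests(request))
```

The output could feed a pull-request check and then `kubectl apply -f -` in a pipeline, which is one way the "check through the platform team" step mentioned above can be enforced without slowing developers down.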
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Kevin Coleman | PERSON | 0.99+ |
Kevin | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Rafay | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Haseeb | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
EKS | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
New York City | LOCATION | 0.99+ |
Haseeb Budhani | PERSON | 0.99+ |
2010 | DATE | 0.99+ |
Rafay Systems | ORGANIZATION | 0.99+ |
20 things | QUANTITY | 0.99+ |
12 | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
two people | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
one platform | QUANTITY | 0.99+ |
two types | QUANTITY | 0.99+ |
MoneyGram | ORGANIZATION | 0.99+ |
15 products | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
Rafay | ORGANIZATION | 0.99+ |
12 things | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Second one | QUANTITY | 0.98+ |
8 | QUANTITY | 0.98+ |
10, 12,000 people | QUANTITY | 0.98+ |
vCenter | TITLE | 0.98+ |
Detroit | LOCATION | 0.98+ |
12 years | QUANTITY | 0.98+ |
New York City Summit | EVENT | 0.97+ |
EKS A | TITLE | 0.97+ |
Kubernetes | TITLE | 0.97+ |
Dave Cope, Spectro Cloud | Kubecon + Cloudnativecon Europe 2022
(upbeat music) >> theCUBE presents KubeCon and CloudNativeCon Europe 22, brought to you by the Cloud Native Computing Foundation. >> Valencia, Spain, at KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend along with Paul Gillon, Senior Editor Enterprise Architecture for Silicon Angle. Welcome, Paul. >> Thank you Keith, pleasure to work with you. >> We're going to have some amazing people this week. I think I saw a stat this morning: 65% of the attendees, 7,500 folks, are first-time KubeCon attendees. Is this your first conference? >> It is my first KubeCon and it is amazing to see how many people are here, and to think that just a couple of years ago, three years ago, we were still talking about what the Cloud was, what the Cloud was going to do and how we were going to integrate multiple Clouds. And now we have this whole new framework for computing that has just rifled out of nowhere. And as we can see by the number of people who are here, this has become the dominant trend in Enterprise Architecture right now: how to adopt Kubernetes and containers, build microservices-based applications, and really get to that transparent Cloud that has been so elusive. >> It has been elusive. And we are seeing vendors from startups with just a few dozen people, to some of the traditional players we see in the enterprise space with thousands of employees, looking to capture kind of lightning in a bottle so to speak, this elusive concept of multicloud. >> And what we're seeing here is very typical of an early stage conference. I've seen many times over the years where the floor is really dominated by companies, frankly, I've never heard of. Many of them are only two or three years old, and you don't see the big dominant computing players with the presence here that these smaller companies have. That's very typical. We saw that in the PC age, we saw it in the early days of Unix, and it's happening again. And what will happen over time is that a lot of these companies will be acquired, there'll be some consolidation. And the nature of this show will change, I think dramatically, over the next couple or three years, but there is an excitement and an energy in this auditorium today that is really a lot of fun and very reminiscent of other new technologies just as they were emerging. >> Well, speaking of new technologies, we have Dave Cope, CRO, Chief Revenue Officer. >> That's right. >> Chief Marketing Officer of Spectro Cloud. Welcome to the show. >> Thank you. It's great to be here. >> So let's talk about this big ecosystem, Kubernetes. >> Yes. >> Solved problem? >> Well the dream is... Well, first of all, applications are really the lifeblood of a company, whether it's our phone or whether it's a big company trying to connect with its customers, it's about applications. And so the whole idea today is how do I build these applications to build that tight relationship with my customers? And how do I reinvent these applications rapidly? In comes containerization, which helps you innovate more quickly. And certainly a dominant technology there is Kubernetes. And the question is, how do you get Kubernetes to help you build applications that can be born anywhere and live anywhere and take advantage of the places that it's running? Because everywhere has pluses and minuses. >> So you know what, the promise of Kubernetes from when I first read about it years ago is, it runs on my laptop? >> Yeah. >> I can push it to any Cloud, any platform. >> That's right, that's right. >> Where's the gap? Where are we in that phase?
Like, talk to me about scale. Is it that simple? >> Well, that actually is the problem: today, while the technology is the dominant containerization and orchestration technology, it really still takes a power user; it really hasn't been very approachable to the masses. And so it was these very expensive, highly skilled resources that sit in a dark corner that have focused on Kubernetes, but that now is trying to evolve to make it more accessible to the masses. It's not about sort of hand-wiring together what is a typical 20-layer stack to really manage Kubernetes and then have your engineers manually reconfigure it and make sure everything works together. Now it's about how do I create these stacks, make it easy to deploy and manage at scale? So we've gone from sort of DIY, developer-centric, to, all right, now how do I manage this at scale? >> Now this is a point that is important, I think, that is often overlooked. This is not just about Kubernetes. This is about a whole stack of Cloud Native Technologies. And who is going to integrate all that stuff, piece that stuff together? Obviously, you have a role in that. But in the enterprise, what is the awareness level of how complex this stack is and how difficult it is to assemble? >> We see a recognition of that. We've had developers working on Kubernetes and applications, but now when we say, how do we weave it into our production environments? How do we ensure things like scalability and governance? How do we have this sort of interesting mix of innovation, flexibility, but with control? And that's sort of an interesting combination where you want developers to be able to run fast and use the latest tools, but you need to create these guardrails to deploy it at scale. >> So where do the developers fit in that operation stack then? Is Kubernetes an AIOps or an ops task, or is it sort of a shared task across the development spectrum? >> Well, I think there's a desire to allow application developers to just focus on the application and have a Kubernetes-related technology that ensures that all of the infrastructure and related application services are just there to support them. And because the typical stack from the operating system to the application can be up to 20 different layers, components, you just want all those components to work together; you don't want application developers to worry about those things. And the latest technologies, like Spectro Cloud (there are others), are making that easy: application engineers focus on their apps, all of the infrastructure and the services are taken care of. And those apps can then live natively on any environment. >> So help paint this picture for us. I've got AKS, EKS, Anthos, all of these distributions, OpenShift, Tanzu. Where's Spectro Cloud helping me to kind of cobble together all these different distros? I thought the distro was the thing, just like Linux has different distros. >> That actually is the irony, is that sort of the age of debating the distros largely is over. There are a lot of distros, and if you look at them, there are largely shades of gray in being different from each other. But the Kubernetes distribution is just one element of like 20 elements that all have to work together. So right now what's happening is that it's not about the distribution, it's now, how do I, again, sorry to repeat myself, move this into scale?
How do I move it into deploy at scale, to be able to manage ongoing at scale, to be able to innovate at scale, to allow engineers, as I said, to use the coolest tools, but still have technical guardrails that the enterprise knows they'll be in control of. >> What does at-scale mean to the enterprise customers you're talking to now? What do they mean when they say that? >> Well, I think it's interesting because we think scale's different, because we've all been in the industry and it's frankly sort of a boring old word. But today it means different things, like how do I automate the deployment at scale? How do I make it really easy to provision resources for applications on any environment, from either a virtualized or bare metal data center, Cloud, or, today, Edge is really big, where people are trying to push applications out to be closer to the source of the data. And so you want to be able to deploy at scale, you want to manage at scale, you want to make it easy to, as I said earlier, allow application developers to build their applications, but ITOps wants the ability to ensure security and governance and all of that. And then finally, innovate at scale. If you look at this show, it's interesting, three years ago when we started Spectro Cloud, there were about 1400 businesses or technologies in the Kubernetes ecosystem; today there's over 1800, and all of these technologies, made up of open source and commercial, are all versioning at different rates. It becomes an insurmountable problem unless you can set those guardrails, sort of that balance between flexibility and control: let developers access the technologies, but again, manage it as a part of your normal processes of a scaled operation. >> So Dave, I'm a little challenged here, because I'm hearing two of what I typically consider conflicting terms. Flexibility, control. >> Yes. >> In order to achieve control, I need complexity; in order to choose flexibility, I need a t-shirt, one t-shirt fits all, and I get simplicity. How can I get both? That just doesn't compute. >> Well, that's the opportunity and the challenge at the same time. So you're right. So developers want choice, good developers want the ability to choose the latest technology so they can innovate rapidly. And yet ITOps wants to be able to make sure that there are guardrails. And so with some of today's technologies, like Spectro Cloud, you have the ability to get both. We actually worked with Dimensional Research, and we sponsor an annual state of Kubernetes survey. We found this last summer that two out of three IT executives said you could not have both flexibility and control together, but in fact they want it. And so it is this interesting balance: how do I give engineers the ability to get anything they want, but ITOps the ability to establish control? And that's why Kubernetes is really at its next inflection point. Where, as I mentioned, it's not debates about the distro or DIY projects. It's not big incumbents creating siloed Kubernetes solutions, but in fact it's about allowing all these technologies to work together and being able to establish these controls. And that's really where the industry is today. >> Enterprise CIOs do not typically like to take chances. Now, we were talking about the growth in the market that you described, from 1400 to 1800 vendors, and most of these companies are very small startups. Are you seeing enterprises willing to take a leap with these unproven companies?
Or are they holding back and waiting for the IBMs, the HPs, the Microsofts to come in, with the VMwares, with whatever solution they have? >> I think so. I mean, we sell to the global 2000. Yesterday, as a part of Edge day here at the event, we had GE Healthcare as one of our customers telling their story, and they're a market share leader in medical imaging equipment, X-rays, MRIs, CAT scans, and they're starting to treat those as Edge devices. And so here is a very large established company, a leader in their industry, working with people like Spectro Cloud, realizing that Kubernetes is interesting technology, the Edge is an interesting thought, but how do I marry the two together? So we are seeing large corporations seeing so much of an opportunity that they're working with the smaller companies, the latest technology. >> So let's talk about the Edge a little, you kind of opened it up there. How should customers think about the Edge versus the Cloud, data center, or even bare metal? >> Actually... well, bare metal is fairly easy: many people are looking to reduce some of the overhead or inefficiencies of the virtualized environment. But we've had really sort of parallel little white tornadoes: we've had bare metal as infrastructure that's been developing, and then we've had orchestration technology developing, but they haven't really come together very well. Lately, we're finally starting to see that come together. Spectro Cloud contributed to open source a metal-as-a-service technology that finally brings these two worlds together, making bare metal much more approachable to the enterprise. Edge is interesting, because it seems pretty obvious: you want to push your application out closer to your source of data, whether it's AI inferencing, or IoT, or anything like that, and you don't want to worry about intermittent connectivity or latency or anything like that. But people have wanted to be able to treat the Edge as if it's almost like a cloud, where all I worry about is the app. So really, the Edge to us is just the next extension in a multi-cloud sort of motif, where I want these Edge devices to require low IT resources, to automate the provisioning, automate the ongoing version management, patch management, really act like a cloud. And we're seeing this as very popular now. And I just used the GE Healthcare example of that: imagine a CAT scan machine, I'm making this part up, in China, and that's just an edge device, and it's doing medical imagery, which is very intense in terms of data. You want to be able to process it quickly and accurately, as close to the endpoint, the healthcare provider, as possible. >> So let's talk about that in some level of detail. We think about kind of Edge and these fixed devices such as an imaging device: are we putting agents on there, or are we looking at something talking back to the cloud? Where does Spectro Cloud inject and help make that problem of just having dispersed endpoints all over the world simpler? >> Sure. Well, we announced our Edge Kubernetes, Edge solution at a big medical conference called HIMSS, months ago.
And what we allow you to do is we allow the application engineers to develop their application, and then you can design this declarative model, this cluster API and, beyond it, a cluster profile, which determines which additional application services you need on the Edge device. All the person at the endpoint has to do is plug in the power, plug in the communications; it registers the Edge device, it automates the deployment of the full stack, and then it does the ongoing versioning and patch management, sort of a self-driving Edge device running Kubernetes. And we make it just very easy. No IT resources required at the endpoint, no expensive field engineering resources to go to these endpoints twice a year to apply new patches and things like that, all automated. >> But there's so many different types of Edge devices with different capabilities, different operating systems, some have no operating system. I mean, that seems like a much more complex environment. Just calling it the Edge is simple, but what you're really talking about is thousands of different devices that you have to run your applications on. How are you dealing with that? >> So one of the ways is that we're really unbiased. In other words, we're OS and distro agnostic. So we don't want to debate about which distribution you like, we don't want to debate about which OS you want to use. The truth is, you're right. There's different environments and different choices that you'll want to make. And so the key is, how do you incorporate those and also recognize everything beyond those, OS and Kubernetes and all of that, and manage that full stack. So that's what we do: we allow you to choose which tools you want to use and let it be deployed and managed on any environment. >> And who's... >> So... >> I'm sorry Keith, who's responsible for making Kubernetes run on the Edge device? >> We do. We provision the entire stack. I mean, of course the company does, using our product, but we provision the entire Kubernetes infrastructure stack, all the application services, and the application itself on that device. >> So I would love to dig into like where pods happen and all that. But provisioning is getting to the point that it's a solved problem. Day two. >> Yes. >> Like you just mentioned HIMSS, highly regulated environments. How is Spectro Cloud helping with configuration management, change control, audit, compliance, et cetera, the hard stuff? >> Yep. And one of the things we do, you bring up a good point, is we manage the full life cycle from day zero, which is sort of create, deploy, all the way to day two, which is about access control, security, it's about ongoing versioning and patch management. It's all of that built into the platform. But you're right, like the medical industry has a lot of regulations. And so you need to be able to make sure that everything works, is always up to the latest level, has the highest level of security. And so all that's built into the platform. It's not just fire and forget, it really is about that full life cycle of deploying and managing on an ongoing basis. >> Well, Dave, I'd love to go into a great deal of detail with you about kind of this day two ops, and I think we'll be covering a lot more of that topic, Paul, throughout the week, as we talk about, just as we've gotten past how do I deploy a Kubernetes pod, to how do I actually operate it? >> Absolutely, absolutely. The devil is in the details as they say.
>> Well, and also too, you have to recognize that the Edge has some very unique requirements: you want very small form factors, typically, you want low IT resources, it has to be sort of zero touch or low touch, because if you're a large food provider with 20,000 store locations, you don't want to send out field engineers two or three times a year to update them. So it really is an interesting beast, and we have some exciting technology, and people like GE are using that. >> Well, Dave, thanks a lot for coming on theCUBE. You're now at KubeCon, you've not been on before? >> I have actually, yes. But I always enjoy it. >> Great conversation. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillon, and you're watching theCUBE, the leader in high tech coverage. (upbeat music)
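To make the "declarative, full-stack" idea above a bit more concrete, here is a small Python sketch of what a layered cluster profile might look like: the OS, Kubernetes, and add-on layers are described as data, validated, and emitted as YAML for whatever management plane consumes it. The field names, layer list, and versions are invented for illustration; this is an assumption about the general pattern, not Spectro Cloud's actual schema or API, and it assumes the PyYAML package.

```python
# Sketch of a layered, declarative "cluster profile": one document describes the
# full stack, and the same profile can target a cloud, a data center, or an
# edge site. Schema and values are hypothetical.
import yaml  # PyYAML

cluster_profile = {
    "name": "edge-imaging-site",
    "target": "edge",  # could also be "eks", "aks", or "bare-metal"
    "layers": [
        {"layer": "os",         "pack": "ubuntu",       "version": "22.04"},
        {"layer": "kubernetes", "pack": "kubernetes",   "version": "1.26.x"},
        {"layer": "cni",        "pack": "calico",       "version": "3.26.x"},
        {"layer": "addon",      "pack": "monitoring",   "version": "latest"},
        {"layer": "addon",      "pack": "app-services", "version": "latest"},
    ],
}

REQUIRED_LAYERS = {"os", "kubernetes", "cni"}

def validate(profile: dict) -> None:
    """Fail fast if a layer the platform treats as mandatory is missing."""
    present = {entry["layer"] for entry in profile["layers"]}
    missing = REQUIRED_LAYERS - present
    if missing:
        raise ValueError(
            f"profile {profile['name']!r} is missing layers: {sorted(missing)}")

if __name__ == "__main__":
    validate(cluster_profile)
    # In practice this document would be handed to a management plane that
    # provisions and then patches every layer; here we just print it.
    print(yaml.safe_dump(cluster_profile, sort_keys=False))
```

The point of keeping the whole stack in one reviewed document is the "day zero to day two" life cycle discussed above: the same profile that provisions a site is also what gets versioned and patched later.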
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Cole | PERSON | 0.99+ |
Paul Gillon | PERSON | 0.99+ |
Dave Cope | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Randy | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
65% | QUANTITY | 0.99+ |
20 layer | QUANTITY | 0.99+ |
Keith Towns | PERSON | 0.99+ |
KubeCon | EVENT | 0.99+ |
first | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
20 elements | QUANTITY | 0.99+ |
Spectro Cloud | ORGANIZATION | 0.99+ |
GE | ORGANIZATION | 0.99+ |
7,500 folks | QUANTITY | 0.99+ |
Spectrum Cloud | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Valencia, Spain | LOCATION | 0.99+ |
Spectra Cloud | TITLE | 0.99+ |
three years ago | DATE | 0.99+ |
first conference | QUANTITY | 0.98+ |
Edge | TITLE | 0.98+ |
1400 | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
one element | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
IBMs | ORGANIZATION | 0.98+ |
First time | QUANTITY | 0.98+ |
Day two | QUANTITY | 0.98+ |
months ago | DATE | 0.97+ |
last summer | DATE | 0.97+ |
over 1800 | QUANTITY | 0.97+ |
CloudNativeCon Europe 2022 | EVENT | 0.97+ |
about 1400 businesses | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
Kubecon | ORGANIZATION | 0.96+ |
CloudNativeCon Europe 22 | EVENT | 0.96+ |
twice a year | QUANTITY | 0.96+ |
Edge | ORGANIZATION | 0.95+ |
two worlds | QUANTITY | 0.95+ |
Centric | ORGANIZATION | 0.94+ |
Linux | TITLE | 0.93+ |
couple of years ago | DATE | 0.93+ |
Cloudnativecon | ORGANIZATION | 0.93+ |
up to 20 different layers | QUANTITY | 0.92+ |
day zero | QUANTITY | 0.92+ |
Anthos | TITLE | 0.91+ |
AKS | TITLE | 0.91+ |
OpenShift | TITLE | 0.9+ |
Unix | TITLE | 0.9+ |
this morning | DATE | 0.9+ |
Silicon Angle | ORGANIZATION | 0.89+ |
Haseeb Budhani, Rafay & Adnan Khan, MoneyGram | Kubecon + Cloudnativecon Europe 2022
>> Announcer: theCUBE presents "Kubecon and Cloudnativecon Europe 2022" brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to theCUBE coverage of Kubecon 2022, E.U. I'm here with my cohost, Paul Gillin. >> Pleased to work with you, Keith. >> Nice to work with you, Paul. And we have our first two guests. "theCUBE" is hot. I'm telling you, we are having interviews before the start of even the show floor. I have with me, we got to start with the customers first, Enterprise Architect Adnan Khan, welcome to the show. >> Thank you so much. >> Keith: First CUBE time, now you're a CUBE-alumni. >> Yup. >> And Haseeb Budhani, CEO of Rafay, welcome back. >> Nice to talk to you again today. >> So, we're talking all things Kubernetes and we're super excited to talk to MoneyGram about their journey to Kubernetes. First question I have for Adnan. Talk to us about what your pre-Kubernetes landscape looked like. >> Yeah. Certainly, Keith. So, we had a traditional mix of legacy applications and modern applications. A few years ago we made the decision to move to a microservices architecture, and this was all happening while we were still on-prem. So, your traditional VMs. And we started with 20, 30 microservices, but with microservices you quickly expand to hundreds of microservices. And we started getting to that stage where managing them without sort of an orchestration platform, and just as traditional VMs, was getting to be really challenging, especially from a day-two operational standpoint. You can manage 10, 15 microservices, but when you start having 50, and so forth, you get all those concerns around high availability and operational performance. So, we started looking at some open-source projects. Spring Cloud, we are predominantly a Java shop, so we looked at the Spring Cloud projects. They give you a number of capabilities for doing some of that management. And what we realized again, to manage those components without sort of a platform was really challenging. So, that kind of led us to Kubernetes, where, along with our journey to cloud, it was the platform that could help us with a lot of those management and operational concerns. >> So, as you talk about some of those challenges, pre-Kubernetes, what were some of the operational issues that you folks experienced? >> Yeah, certain things like auto scaling is number one. I mean, that's a fundamental concept of cloud native, right? How do you auto scale VMs, right? You can put in some old methods and stuff, but it was really hard to do that automatically. So, Kubernetes with HPA gives you those out of the box. Provided you set the right policies, you can have auto scaling where it can scale up and scale back; before, we were doing that manually. So, you know, at MoneyGram, obviously, holiday season, people are sending more money, Mother's Day. Our Ops team would go and basically manually scale VMs. So, we'd go from four instances to maybe eight instances, but that entailed outages. And just to plan around doing that manually, and then sort of scale them back, was a lot of overhead, a lot of administration overhead. So, we wanted something that could help us do that automatically in an efficient and unintrusive way. That was one of the things. Monitoring and management operations, just kind of visibility into how those applications were doing, what the status of your workloads was, was also a challenge. >> So, Haseeb, I got to ask the question.
If someone would've come to me with that problem, I'd just say, "You know what? Go to the public cloud." How does your group help solve some of these challenges? What do you guys do? >> Yeah. What do we do? Here's my perspective on the market as it's playing out. So, I see a bifurcation happening in the Kubernetes space. There's the Kubernetes runtime, so Amazon has EKS, Azure has AKS. There's enough of these available, they're now managed services, they're actually really good, frankly. In fact, we tell customers, if you're in Amazon, why would you spin up your own? Just use EKS, it's awesome. But then, there's an operational layer that is needed to run Kubernetes. My perspective is that 50,000 enterprises are adopting Kubernetes over the next 5 to 10 years. And they're all going to go through the same exact journey, and they're all going to end up potentially making the same mistake, which is, they're going to assume that Kubernetes is easy. They're going to say, "Well, this is not hard. I got this up and running on my laptop. This is so easy, no worries. I can do EKS." But then, okay, can you consistently spin up these things? Can you scale them consistently? Do you have the right blueprints in place? Do you have the right access management in place? Do you have the right policies in place? Can you deploy applications consistently? Do you have monitoring and visibility into those things? Do your developers have access when they need it? Do you have the right networking layer in place? Do you have the right chargebacks in place? Remember you have multiple teams. And by the way, nobody has a single cluster, so you got to do this across multiple clusters. And some of them have multiple clouds. Not because they want to be multiple clouds, but sometimes you buy a company, and they happen to be in Azure. How many dashboards do you have now across all the open-source technologies that you have identified to solve these problems? This is where pain lies. So, I think that Kubernetes is fundamentally a solved problem. Like our friends at AWS and Azure, they've solved this problem. AKS, EKS, et cetera, GKE for that matter. They're great, and you should use them, and don't even think about spinning up your own Kubernetes clusters. Don't do it, use the platforms that exist. And commensurately on-premises, OpenShift is pretty awesome. If you like it, use it. But then when it comes to the operations layer, that's where, today, we end up investing in a DevOps team, and then an SRE organization, that need to become experts in Kubernetes, and that is not tenable. Can you, let's say you have unlimited capital, unlimited budgets, can you hire 20 people to do Kubernetes today? >> If you could find them. >> If you can find 'em, right? So, even if you could, the point is that, see, five years ago when your competitors were not doing Kubernetes, it was a competitive advantage to go build a team to do Kubernetes so you could move faster. Today, you know, there's a high chance that your competitors are already buying from a Rafay or somebody like Rafay. So, now, it's better to take these really, really sharp engineers and have them work on things that make the company money. Writing operations for Kubernetes, this is a commodity now. >> How confident are you that the cloud providers won't get in and do what you do and put you out of business? >> Yeah, I mean, absolutely. In fact, I had a conversation with somebody from HBS this morning and I was telling them, I don't think you have a choice, you have to do this.
Competition is not a bad thing. If we are the only company in a space, this is not a space, right? The bet we are making is that every enterprise, they have an on-prem strategy, they have at least a handful of, everybody's got at least two clouds that they're thinking about. Everybody starts with one cloud, and then they have some other cloud that they're also thinking about. For them to only rely on one cloud's tools to solve for on-prem, plus that second cloud they potentially may have, that's a tough thing to do. And at the same time, we as a vendor, I mean, the only real reason why startups survive is because you have technology that is truly a differentiator. Otherwise, I mean, you got to build something that is materially interesting, right? We seem to have- >> Keith: Now. Sorry, go ahead. >> No, I was going to, you actually have me thinking about something. Adnan? >> Yes. >> MoneyGram, big, well-known company; Rafay, a startup, working in a space with Google, VMware, all the biggest names. What brought you to Rafay to solve this operational challenge? >> Yeah. A good question. So, when we started out sort of on our Kubernetes journey, we had heard about EKS, and we are an AWS shop, so that was the most natural path. And we looked at EKS and used that to create our clusters. But then we realized very quickly that, yes, to Haseeb's point, AWS manages the control plane for you, it gives you the high availability. So, you're not managing those components, which is some really heavy lifting. But then what about all the other things, like a centralized dashboard? What about, we need to provision Kubernetes clusters on multicloud, right? We have other clouds that we use, or also on-prem, right? How do you do some of that stuff? We also, at that time, were looking at other tools also. And I remember coming up with an MVP list that we needed to have in place for day-one or day-two operations before we even launched any single application into production. And my Ops team looked at that list and literally, there were only one or two items that they could check off with EKS. They've got the control plane, they've got the cluster provisioning, but what about all those other components? And some of that kind of led us down the path of, you know, looking at, "Hey, what's out there in this space?" And we realized pretty quickly that there weren't too many. There were some large providers and capabilities like Anthos, but we felt that it was a little too much for what we were trying to do at that point in time. We wanted to scale slowly. We wanted to minimize our footprint, and Rafay sort of seemed to be a nice mix from all those different angles. >> How was the situation affecting your developer experience? >> So, that's a really good question also. So, operations was one aspect to it. The other part is the application development. MoneyGram, like a lot of organizations, has a plethora of technologies, from Java to .NET to Node.js, what have you, right? Now, as you start saying, okay, now we're going cloud native and we're going to start deploying to Kubernetes, there's a fair amount of overhead, because a tech stack all of a sudden goes from just being Java or just being .NET to things like Docker. All these container orchestration and deployment concerns, Kubernetes deployment artifacts, (chuckles) I got to write all this YAML, as my developers say, "YAML hell." (panel laughing) I got to learn Dockerfiles.
I need to figure out a package manager like Helm on top of learning all the Kubernetes artifacts. So, initially, we went with sort of, okay, you know, we can just train our developers. And that was wrong. I mean, you can't assume that everyone is going to sort of learn all these deployment concerns and will adopt them. There's a lot of stuff that's outside of their sort of core dev domain that you're putting all this burden on them with. So, we could not rely on them to be sort of kubectl experts, right? That's a fair amount of overhead and learning curve there. So, Rafay again, from their dashboard perspective, with the managed kubectl, gives you that easy access for devs, where they can go and monitor the status of their workloads. They don't have to figure out configuring all these tools locally just to get it to work. We did some things from a DevOps perspective to basically streamline and automate that process. But then also Rafay came in and helped us out on kind of providing that dashboard. They don't have to do any of that; they can basically get on through single sign-on and have visibility into the status of their deployment. They can do troubleshooting, diagnostics, all through a single pane of glass, which was a key item. Initially, before Rafay, we were doing that through the command line. And again, just getting some of the tools configured was huge, it took us days just to get that. And then the learning curve for development teams: "Oh, now you got the tools, now you got to figure out how to use them." >> So, Haseeb, talk to me about the cloud-native infrastructure. When I look at that entire landscape, the number of projects, I'm just overwhelmed by it. As a customer, I look at it, I'm like, "I don't know where to start." I'm sure, Adnan, you folks looked at it and said, "Wow, there's so many solutions." How do you engage with the ecosystem? You have to be at some level opinionated but flexible enough to meet every customer's needs. How do you approach that? >> So, it's a really tough problem to solve because... So, the thing about abstraction layers, we all know how that plays out, right? So, abstraction layers are fundamentally never the right answer, because they will never catch up, because you're trying to write a layer on top. So, then we had to solve the problem, which was, well, we can't be an abstraction layer, but then at the same time, we need to provide some sort of centralization, standardization. So, we sort of have the following dissonance in our platform, which is actually really important to solve the problem. So, we think of a stack as four things. There's the Kubernetes layer, the infrastructure layer, and EKS is different from AKS, and it's okay. If we try to now bring them all together and make them behave as one, our customers are going to suffer. Because there are features in EKS that I really want, but then if you write an abstraction, then I'm not going to get 'em, so not okay. So, treat them as individual things that we now curate. So, every time EKS, for example, goes from 1.22 to 1.23, we write a new product, just so my customer can press a button and upgrade these clusters. Similarly, we do this for AKS, we do this for GKE. It's a really, really hard job, but that's the job, we got to do it. On top of that, you have these things called add-ons, like my network policy, my access management policy, my et cetera. These things are all actually the same. So, whether I'm EKS or AKS, I want the same access for Keith versus Adnan, right?
So, then those components are sort of the same across, doesn't matter how many clusters, doesn't matter how many clouds. On top of that, you have applications. And when it comes to the developer, in fact, I do the following demo a lot of times, because people ask the question. People say things like, "I want to run the same Kubernetes distribution everywhere because this is like Linux." Actually, it's not. So, I do a demo where I spin up access to an OpenShift cluster, and an EKS cluster, and then an AKS cluster. And I say, "Log in, show me which one is which." They're all the same. >> So, Adnan, make that real for me. I'm sure after this amount of time, developer groups have come to you with things that are snowflakes. And as an enterprise architect, you have to make it work within your framework. How has working with Rafay made that possible? >> Yeah, so I think one of the very common concerns is the whole deployment, to Haseeb's point: from a deployment perspective, it's still using Helm, it's still using some of the same tooling. How do you? Rafay gives us some tools. You know, they have a command-line API that essentially we use. We wanted parity across all our different environments, different clusters, it doesn't matter where you're running. So, that gives us basically a consistent API for deployment. We've also had challenges with just some of the tooling in general, and we worked with Rafay to actually extend their command-line API for us so that we have a better deployment experience for our developers. >> Haseeb, how long does this opportunity exist for you? At some point, do the cloud providers figure this out, or does the open-source community figure out how to do what you've done and this opportunity is gone? >> So, I think back to a platform that I think very highly of, which has been around a long time and continues to live: vCenter. I think vCenter is awesome. And it's beautiful, VMware did an incredible job. What is the job? Its job is to manage VMs, right? But then it's also access, it's also storage, it's also networking and security, right? All these things got done because to solve a real problem, you have to think about all the things that come together to help you solve that problem from an operations perspective. My view is that this market needs essentially a vCenter, but for Kubernetes, right? And that is a very broad problem. And it's going to span, it's not about a cloud. I mean, every cloud should build this. I mean, why would they not? It makes sense. Anthos exists, right? Everybody should have one. But then, the clarity in thinking that the Rafay team seems to have exhibited, till date, seems to merit an independent company, in my opinion. I think, like, I mean, from a technical perspective, this product's awesome, right? I mean, we seem to have no real competition when it comes to this broad breadth of capabilities. Will it last? We'll see, right? I mean, I keep doing "CUBE" shows, right? So, every year you can ask me that question again, and we'll see. >> You make a good point though. I mean, you're up against VMware, you're up against Google. They're both trying to do sort of the same thing you're doing. Why are you succeeding? >> Maybe it's focus. Maybe it's because of the right experience. I think with startups, only in hindsight can one tell why a startup was successful. In all honesty, I've been in one or two startups in the past, and there's a lot of luck to this, there's a lot of timing to this.
I think the timing for a product like this is perfect. Like, three, four years ago, nobody would've cared. Like, honestly, nobody would've cared. This is the right time to have a product like this in the market, because so many enterprises are now thinking of modernization. And because everybody's doing this, this is like the bootstrapping problem in HCI. Everybody's doing it, but there's only so many people in the industry who actually understand this problem, so they can't even hire the people. And the CTO says, "I got to go. I don't have the people, I can't fill the seats." And then they look for solutions, and via that solution, that's where we're going to get embedded. And when you have infrastructure software like this embedded in your solution, we're going to be around with the... Assuming, obviously, we don't screw up, right? We're going to be around with these companies for some time. We're going to have strong partners for the long term. >> Well, vCenter for Kubernetes, I love to end on that note. Intriguing conversation, we could go on forever on this topic, 'cause there's a lot of work to do. I don't think this will ever be a fully solved problem for Kubernetes and cloud native solutions, so I think there's a lot of opportunities in that space. Haseeb Budhani, thank you for rejoining "theCUBE." Adnan Khan, welcome to becoming a CUBE-alum. >> (laughs) Awesome. Thank you so much. >> Check out your own profile on the site, it's really cool. From Valencia, Spain, I'm Keith Townsend, along with my host Paul Gillin. And you're watching "theCUBE," the leader in high tech coverage. (bright upbeat music)
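As a concrete footnote to Adnan's scaling story earlier in this segment, going from four to eight instances by hand before a holiday peak, a standard Kubernetes HorizontalPodAutoscaler expresses that policy declaratively. The sketch below builds an autoscaling/v2 HPA manifest in Python and prints it as YAML; the deployment name, namespace, and CPU threshold are invented for illustration and are not MoneyGram's actual configuration, and the example assumes the PyYAML package.

```python
# Sketch of the HPA policy described above: instead of manually scaling from
# four to eight instances for a holiday peak, the autoscaler does it based on
# CPU utilization. Names and thresholds are hypothetical.
import yaml  # PyYAML

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "transfer-service", "namespace": "payments"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "transfer-service",
        },
        "minReplicas": 4,   # everyday traffic
        "maxReplicas": 8,   # Mother's Day / holiday peaks
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}

if __name__ == "__main__":
    # A pipeline step could pipe this output into `kubectl apply -f -`.
    print(yaml.safe_dump(hpa, sort_keys=False))
```

Scaling back down after the peak, the part that used to require planned outages, is handled by the same policy, which is the "efficient and unintrusive" behavior Adnan calls out.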
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith Townsend | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Haseeb Budhani | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
20 | QUANTITY | 0.99+ |
Adnan | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Adnan Khan | PERSON | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
20 people | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
50 | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Adnan Khan | PERSON | 0.99+ |
HBS | ORGANIZATION | 0.99+ |
Rafay | PERSON | 0.99+ |
50,000 enterprises | QUANTITY | 0.99+ |
node.js | TITLE | 0.99+ |
Valencia, Spain | LOCATION | 0.99+ |
two items | QUANTITY | 0.98+ |
second cloud | QUANTITY | 0.98+ |
vCenter | TITLE | 0.98+ |
HPA | ORGANIZATION | 0.98+ |
first two guests | QUANTITY | 0.98+ |
eight instances | QUANTITY | 0.98+ |
one cloud | QUANTITY | 0.98+ |
Haseeb | PERSON | 0.98+ |
today | DATE | 0.98+ |
five years ago | DATE | 0.98+ |
hundreds of microservices | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
Linux | TITLE | 0.98+ |
EKS | ORGANIZATION | 0.98+ |
Mother's Day | EVENT | 0.98+ |
Arathi | PERSON | 0.97+ |
Haseeb | ORGANIZATION | 0.97+ |
Docker | TITLE | 0.97+ |
First question | QUANTITY | 0.97+ |
VMware | ORGANIZATION | 0.97+ |
four years ago | DATE | 0.97+ |
MoneyGram | ORGANIZATION | 0.97+ |
both | QUANTITY | 0.97+ |
15 microservices | QUANTITY | 0.97+ |
single cluster | QUANTITY | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
30 microservices | QUANTITY | 0.95+ |
single | QUANTITY | 0.95+ |
one aspect | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
theCUBE | ORGANIZATION | 0.95+ |
Rafay | ORGANIZATION | 0.94+ |
EKS | TITLE | 0.94+ |
Cloudnativecon | ORGANIZATION | 0.94+ |
Azure | ORGANIZATION | 0.94+ |
two startups | QUANTITY | 0.94+ |
theCUBE | TITLE | 0.94+ |
AKS | ORGANIZATION | 0.94+ |
Dave Cope, Spectro Cloud | Kubecon + Cloudnativecon Europe 2022
>>The cube presents, Coon and cloud native con Europe 22 brought to you by the cloud native computing foundation. >>Lisia Spain, a cuon cloud native con Europe 2022. I'm Keith towns, along with Paul Gillon, senior editor, enterprise architecture for Silicon angle. Welcome Paul, >>Thank you, Keith pleasure to work >>With you. You know, we're gonna have some amazing people this week. I think I saw stat this morning, 65% of the attendees, 7,500 folks. First time Q con attendees. This is your first conference. >>It is my first cubic con and it is amazing to see how many people are here and to think of, you know, just a couple of years ago, three years ago, we were still talking about what the cloud was and what the cloud was gonna do and how we were gonna integrate multiple clouds. And now we have this whole new framework for computing that is just rifled out of, out of nowhere. And as we can see by the number of people who are here, this has become a, a, this is the dominant trend in enterprise architecture right now, how to adopt Kubernetes and containers, build microservices based applications, and really get to that, that transparent cloud that has been so elusive. >>It has been elusive. And we are seeing vendors from startups with just a, a few dozen people to some of the traditional players we see in the enterprise space with thousands of employees looking to capture kind of lightning in a bottle, so to speak this elusive concept of multi-cloud. >>And what we're seeing here is very typical of an early stage conference. I've seen many times over the years where the, the floor is really dominated by companies, frankly, I've never heard of that. Many of them are only two or three years old, and you don't see the big, the big dominant computing players with, with the presence here that these smaller companies have. That's very typical. We saw that in the PC age, we saw it in the early days of Unix and, and it's happening again. And what will happen over time is that a lot of these companies will be acquired. There'll be some consolidation. And the nature of this show will change, I think, dramatically over the next couple or three years, but there is an excitement and an energy in this auditorium today that is, is really a lot of fun and very reminiscent of other new technologies just as they press it. >>Well, speaking of new technologies, we have Dave Cole, CR O chief revenue officer that's right. Chief marketing officer that's right of spec cloud. Welcome to the show. Thank >>You. It's great to be here. >>So let's talk about this big ecosystem. Okay. Kubernetes. Yes. Solve problem. >>Well, you know, the, the dream is, well, first of all, applications are really the lifeblood of a company, whether it's our phone or whether it's a big company trying to connect with its customer, it's about applications. And so the whole idea today is how do I build these applications to build that tight relationship with my customers? And how do I reinvent these applications rapidly in, along comes containerization, which helps you innovate more quickly. And certainly a dominant technology. There is Kubernetes. And the, the question is how do you get Kubernetes to help you build applications that can be born anywhere and live anywhere and take advantage of the places that it's running, cuz everywhere has pluses and minuses. >>So you know what the promise of Kubernetes from when I first read about it years ago is runs on my laptop. Yep. I can push it to any cloud, any platform that's that's right. 
Where's the gap. Where are we in that, in that phase? Like talk to me about scale. Is that, is that, is it that simple? >>Well, that act is actually the problem is that date while the technology is the dominant containerization technology and orchestration technology, it really still takes a power user. It really hasn't been very approachable to the masses. And so it was these very expensive, highly skilled resources that sit in a dark corner that have focused on Kubernetes, but that, that now is trying to evolve to make it more accessible to the masses. It's not about sort of hand wiring together. What is a typical 20 layer stack to really manage Kubernetes and then have your engineers manually can reconfigure it and make sure everything works together. Now it's about how do I create these stacks, make it easy to deploy and manage at scale. So we've gone from sort of DIY developer centric to all right, now, how do I manage this at scale? >>Now this is a point that is important, I think is often overlooked. This is not just about Kubernetes. This is about a whole stack of cloud native technologies. Yes. And you who is going to, who is going to integrate that, all that stuff, piece that stuff together, right? Obviously you have a, a role in that. Yes. But in the enterprise, what is the awareness level of how complex this stack is and how difficult it is to assemble? >>We, we see a recognition of that, that we've had developers working on Kubernetes and applications, but now when we say, how do we weave it into our production environments? How do we ensure things like scalability and governance? How do we have this sort of interesting mix of innovation, flexibility, but with control. And that's sort of an interesting combination where you want developers to be able to run fast and use the latest tools, but you need to create these guardrails to deploy it at scale. >>So where do the developers fit in that operation stack then? Is this, is Kubernetes an AI ops or an ops a task, or is it sort of a shared task across the development spectrum? >>Well, I think there's a desire to allow application developers, to just focus on the application and have a Kubernetes related technology that ensures that all of the infrastructure and related application services are just there to support them. And because the typical stack from the operating system to the application can be up to 20 different layers components. You just want all those components to work together. You don't want application developers to worry about those things. And the latest technologies like spectra cloud there's others are making that easy application engineers focus on their apps, all of the infrastructure and the services are taken care of. And those apps can then live natively on any environment. >>So help paint this picture for us. You know, I get got AKs ETS and those, all of these distributions OpenShift, the tan zoo, where is spec cloud helping me to kind of cobble together all these different distros I thought distro was the, was the thing like, just like Lennox has different distros, you know, right. Randy said different distros >>That actually is the irony. Is that sort of the age of debating, the distros largely is over. There are a lot of distros and if you look at them, there are largely shades of gray in being different from each other. But the Kubernetes distribution is just one element of like 20 elements that all have to work together. 
So right now what's happening is that it's not about the distribution. It's now, how do I, again, sorry to repeat myself, but move this into scale? How do I deploy at scale, manage ongoing at scale, innovate at scale, and allow engineers, as I said, to use the coolest tools, but still have technical guardrails that the enterprise knows they'll be in control of? >>What does at scale mean to the enterprise customers you're talking to now? What do they mean when they say that? >>Well, I think it's interesting, cuz we all think scale's different, cuz we've all been in the industry and it's frankly sort of a boring old word, but today it means different things. Like, how do I automate the deployment at scale? How do I make it really easy to provision resources for applications on any environment, from either a virtualized or bare metal data center, cloud, or, today, edge, which is really big, where people are trying to push applications out to be closer to the source of the data. And so you want to be able to deploy at scale, you wanna manage at scale, you wanna make it easy to, as I said earlier, allow application developers to build their applications, but IT ops wants the ability to ensure security and governance and all of that. And then finally, innovate at scale. If you look at this show, it's interesting: three years ago, when we started Spectro Cloud, there were about 1,400 businesses or technologies in the Kubernetes ecosystem; today there are over 1,800. And all of these technologies, made up of open source and commercial, are all versioning at different rates. It becomes an insurmountable problem unless you can set those guardrails, sort of that balance between flexibility and control: let developers access the technologies, but, again, manage it as a part of your normal processes of an at-scale operation. >>So, Dave, I'm a little challenged here, cuz I'm hearing two terms I typically consider conflicting: flexibility and control. In order to achieve control, I need complexity; in order to get simplicity, I need one t-shirt that fits all. How can I get both? That just doesn't, you know, compute. >>Well, thus the opportunity and the challenge at the same time. So you're right. Developers want choice; good developers want the ability to choose the latest technology so they can innovate rapidly. And yet IT ops wants to be able to make sure that there are guardrails. And so with some of today's technologies, like Spectro Cloud, you have the ability to get both. We actually worked with Dimensional Research, and we sponsor an annual State of Kubernetes survey. We found just last summer that two out of three IT executives said you could not have both flexibility and control together, but in fact they want it. And so it is this interesting balance: how do I give engineers the ability to get anything they want, but IT ops the ability to establish control? And that's why Kubernetes is really at its next inflection point. Where, as I mentioned, it's not debates about the distro or DIY projects, it's not big incumbents creating siloed Kubernetes solutions. In fact it's about allowing all these technologies to work together and being able to establish these controls. And that's really where the industry is today. >>Enterprise CIOs do not typically like to take chances. Now, we were talking about the growth in the market that you described, from 1,400 to 1,800 vendors.
Most of these companies are very small startups. Are you seeing enterprises willing to take a leap with these unproven companies, or are they holding back and waiting for the IBMs, the HPs, the Microsofts, the VMwares to come in with whatever solution they have? >>I think so. I mean, we sell to the Global 2000. Yesterday, as a part of Edge Day here at the event, we had GE Healthcare, one of our customers, telling their story. And they're a market share leader in medical imaging equipment: X-rays, MRIs, CAT scans. And they're starting to treat those as edge devices. And so here is a very large established company, a leader in their industry, working with people like Spectro Cloud, realizing that Kubernetes is interesting technology, the edge is an interesting thought, but how do I marry the two together? So we are seeing large corporations seeing so much of an opportunity that they're working with the smaller companies, the latest technology. >>So let's talk about the edge a little. You kind of opened it up there. Yeah. How should customers think about the edge versus the cloud, the data center, or even bare metal? >>Actually, well, bare metal is fairly easy: many people are looking to reduce some of the overhead or inefficiencies of the virtualized environment. But we've had these really sort of parallel little white tornadoes: we've had bare metal as infrastructure that's been developing, and then we've had orchestration technologies developing, but they haven't really come together very well. Lately we're finally starting to see that come together. Spectro Cloud contributed to open source a metal-as-a-service technology that finally brings these two worlds together, making bare metal much more approachable to the enterprise. Edge is interesting because it seems pretty obvious: you wanna push your application out closer to your source of data, whether it's AI inferencing or OT or anything like that, and you don't wanna worry about intermittent connectivity or latency or anything like that. But people have wanted to be able to treat the edge as if it's almost like a cloud, where all I worry about is the app. >>So really the edge to us is just the next extension in a multi-cloud sort of motif, where I want these edge devices to require low IT resources, to automate the provisioning, automate the ongoing version management and patch management, and really act like a cloud. And we're seeing this as very, very popular now. And I just used the GE Healthcare example of that. Imagine a CAT scan machine, I'm making this part up, in China, and that's just an edge device. And it's doing medical imagery, which is very intense in terms of data. You want to be able to process it quickly and accurately, as close to the endpoint, the healthcare provider, as possible. >>So let's talk about that in some level of detail. As we think about kind of edge and, you know, these fixed devices such as an imaging device, are we putting agents on there? Are we looking at something talking back to the cloud? Where does Spectro Cloud inject and help make that problem of just having dispersed endpoints all over the world simpler? >>Sure. Well, we announced our Kubernetes edge solution at a big medical conference called HIMSS months ago. And what we allow you to do is we allow the application engineers to develop their application.
And then you can design, with this declarative model, the Cluster API and, beyond that, a cluster profile, which determines which additional application services you need on the edge device. All the person has to do at the endpoint is plug in the power, plug in the communications. It registers the edge device, it automates the deployment of the full stack, and then it does the ongoing versioning and patch management: sort of a self-driving edge device running Kubernetes. And we make it just very, very easy. No IT resources required at the endpoint, no expensive field engineering resources going to these endpoints twice a year to apply new patches and things like that, all >>automated. But there are so many different types of edge devices with different capabilities, different operating systems, and some have no operating system. Yeah. I mean, that seems like a much more complex environment. Just calling it the edge is simple, but what you're really talking about is thousands of different devices, right, that you have to run your applications on. How are you dealing with that? >>So one of the ways is that we're really unbiased. In other words, we're OS and distro agnostic. We don't want to debate about which distribution you like, and we don't want to debate about, you know, which OS you want to use. The truth is, you're right, there are different environments and different choices that you'll wanna make. And so the key is how do you incorporate those, and also recognize everything beyond those, you know, the OS and Kubernetes and all of that, and manage that full stack. So that's what we do: we allow you to choose which tools you want to use and let it be deployed and managed on any environment. >>And who's respo, I'm sorry, Keith. Who's responsible for making Kubernetes run on the edge device? >>We do. We provision the entire stack. I mean, of course the company does, using our product, but we provision the entire Kubernetes infrastructure stack, all the application services, and the application itself on that device. >>So I would love to dig into, like, where pods happen and all that, but provisioning is getting to the point that it's a solved problem. Day two, yes. Like, you know, you just mentioned HIMSS, highly regulated environments. How does Spectro Cloud help with configuration management, change control, audit, compliance, et cetera, the hard stuff? >>Yep. And one of the things we do, you bring up a good point, is we manage the full life cycle from day zero, which is sort of create and deploy, all the way to day two, which is about, you know, access control, security, ongoing versioning and patch management. It's all of that built into the platform. But you're right, the medical industry has a lot of regulations. And so you need to be able to make sure that everything works, that it's always up to the latest level, and that it has the highest level of security. And so all that's built into the platform. It's not just fire and forget; it really is about that full life cycle of deploying and managing on an ongoing basis. >>Well, Dave, I'd love to go into a great deal of detail with you about kind of this day-two operations piece, and I think we'll be covering a lot more of that topic, Paul, throughout the week, as we talk about how we've gotten past "how do I deploy a Kubernetes pod" to "how do I actually operate it." >>Absolutely, absolutely. The devil is in the details, as they say. >>Well, and also too, you have to recognize that the edge has some very unique requirements.
You want very small form factors. Typically you want low IT resources. It has to be sort of zero touch or low touch, because if you're a large food provider with 20,000 store locations, you don't wanna send out field engineers two or three times a year to update them. So it really is an interesting beast, and we have some exciting technology, and people like GE are using that. >>Well, Dave, thanks a lot for coming on theCUBE. You're now a CUBE alum. You've not been on before? >>I have, actually. Yes. >>Oh. >>But I always enjoy it. >>It's a great conversation. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high tech coverage.
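To make the declarative, cluster-profile idea from the interview above concrete, here is a minimal sketch, not Spectro Cloud's actual product API, of what "describe the edge cluster as data and let controllers reconcile it" can look like. It uses the upstream Cluster API custom resources and the standard Kubernetes Python client; the profile contents, device name, and namespace are illustrative assumptions.

```python
# Hypothetical sketch: declare an edge cluster as data and submit it to a
# management cluster running Cluster API controllers. The "profile" layers,
# names, and namespace are illustrative, not Spectro Cloud's actual schema.
from kubernetes import client, config

# A cluster-profile idea: the full stack expressed as data, OS to app services.
edge_profile = {
    "os": "ubuntu-22.04",
    "kubernetes": "v1.24",
    "addons": ["cni", "monitoring", "medical-imaging-app"],
}

cluster_manifest = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",
    "kind": "Cluster",
    "metadata": {
        "name": "imaging-edge-001",              # illustrative device name
        "labels": {"profile": "medical-edge"},   # ties the device to a profile
    },
    "spec": {
        "clusterNetwork": {"pods": {"cidrBlocks": ["192.168.0.0/16"]}},
        # A real Cluster also references infrastructure and control-plane
        # objects; they are omitted here to keep the sketch short.
    },
}

def register_edge_cluster():
    """Submit the declared cluster; controllers reconcile the rest."""
    config.load_kube_config()          # credentials for the management cluster
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(
        group="cluster.x-k8s.io",
        version="v1beta1",
        namespace="default",
        plural="clusters",
        body=cluster_manifest,
    )

if __name__ == "__main__":
    register_edge_cluster()
    print("declared", cluster_manifest["metadata"]["name"], "with profile", edge_profile)
```

The point of the "plug in the power, plug in the communications" claim is that everything below this declaration, the OS, Kubernetes itself, the add-ons, and day-two patching, is converged by controllers rather than hand-wired by field engineers.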
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Gillon | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Cope | PERSON | 0.99+ |
Dave Cole | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
Randy | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
20 layer | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
65% | QUANTITY | 0.99+ |
Spectro Cloud | ORGANIZATION | 0.99+ |
GE | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
20 elements | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
7,500 folks | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
first conference | QUANTITY | 0.99+ |
three years ago | DATE | 0.99+ |
Microsofts | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
last summer | DATE | 0.98+ |
one element | QUANTITY | 0.98+ |
IBMs | ORGANIZATION | 0.98+ |
First time | QUANTITY | 0.98+ |
Cloudnativecon | ORGANIZATION | 0.97+ |
Kubernetes | TITLE | 0.97+ |
Kubecon | ORGANIZATION | 0.97+ |
over 1800 | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
1400 | QUANTITY | 0.96+ |
20,000 store | QUANTITY | 0.96+ |
about 1400 businesses | QUANTITY | 0.96+ |
this week | DATE | 0.95+ |
twice a year | QUANTITY | 0.95+ |
two worlds | QUANTITY | 0.95+ |
first cubic con | QUANTITY | 0.94+ |
couple of years ago | DATE | 0.94+ |
Cub Alon | PERSON | 0.93+ |
Day two | QUANTITY | 0.93+ |
this morning | DATE | 0.92+ |
Unix | TITLE | 0.91+ |
zero | QUANTITY | 0.91+ |
months ago | DATE | 0.91+ |
years | DATE | 0.9+ |
day two | QUANTITY | 0.89+ |
Kubernetes | ORGANIZATION | 0.88+ |
day zero | QUANTITY | 0.86+ |
Lisia Spain | PERSON | 0.85+ |
three times a year | QUANTITY | 0.82+ |
Keith | LOCATION | 0.82+ |
2022 | EVENT | 0.82+ |
thousands of employees | QUANTITY | 0.81+ |
up to 20 different layers | QUANTITY | 0.81+ |
Foria | LOCATION | 0.8+ |
1800 vendors | QUANTITY | 0.8+ |
two option | QUANTITY | 0.78+ |
2022 | DATE | 0.77+ |
Tracie Zenti & Thomas Anderson | Red Hat Summit 2022
(gentle music) >> We're back at the Seaport in Boston. I'm Dave Vellante with my co-host, Paul Gillin. Tracie Zenti is here. She's the Director of Global Partner Management at Microsoft, and Tom Anderson is the Vice President of Ansible at Red Hat. Guys, welcome to theCUBE. >> Hi, thank you. >> Yep. >> Ansible on Azure, we're going to talk about that. Why do I need Ansible? Why do I need that kind of automation in Azure? What's the problem you're solving there? >> Yeah, so automation itself is connecting customers' infrastructure to their end resources, whether that infrastructure's in the cloud, in the data center, or at the edge. Ansible is the common automation platform that allows customers to reuse automation across all of those platforms. >> And so, Tracie, I mean, Microsoft does everything. Why do you need Red Hat to do Ansible? >> We want that automation, right? We want our customers to have that ease of use so they can be innovative and bring their workloads to Azure. So that's exactly why we want Ansible. >> Yeah, so kind of a loaded question here, right, as we were sort of talking offline. The nature of partnerships is changing. It's about co-creating, adding value together, getting those effects of momentum, but maybe talk about how the relationship started and how it's evolving, and I'd love to have your perspective on the evolving nature of ecosystems. >> Yeah, I think the partnership with Red Hat has been strong for a number of years. I think my predecessor was in the role for five years. There was a person in there for a couple years before that. So I think seven or eight years we've been working together and co-engineering. Red Hat Enterprise Linux is co-engineered. Ansible was co-engineered. We work together, right? So we want it to run perfectly on our platform. We want it to be a good customer experience. I think the evolution that we're seeing is in how customers buy, right? They want us to be one company, right? They want it to be easy. They want to be able to buy their software where they run it, on the cloud. They don't want to have to call Red Hat to buy and then call us to buy and then deploy. And we can do all that now. Ansible's the first one we're doing this with together, and we'll grow that on our marketplace so that it's easy to buy, easy to deploy, easy to keep track of. >> This is not just Ansible in the marketplace. This is actually a fully managed service. >> That's right. >> What is the value you've added on top of that? >> So it runs in the customer account, but it acts kind of like SaaS. Red Hat gets to manage it, right? And it runs in the customer's own tenant, right? So with a service principal, Red Hat's able to do that management. Tom, do you want to add anything to that? >> Yeah, the customers don't have to worry about managing Ansible. They just worry about using Ansible to automate their infrastructure. So it's kind of a win-win situation for us and for our customers. We manage the infrastructure for them and the customer's resources themselves, and they get to just focus on automating their business. >> Now, if they want to do cross-cloud automation or automation to their hybrid cloud, will you support that as well? >> 100%. >> Absolutely. >> Yeah. >> We're totally fine with that, right? I mean, it's unrealistic to think customers run everything in one place. That isn't enterprise. That's not reality. So yeah, I'm fine with that. >> Well, that's not every cloud provider.
>> No (laughing), that's true. >> You go over to Amazon, you can't even say multicloud or you'll get thrown off the stage. >> Of course we'd love it to all run on Azure, but we want our customers to be happy and have choice, yeah. >> You guys have, I mean, you've been around a long time. So you had a huge on-prem estate, brought that to the cloud, and Azure Stack, I mean, it's been around forever and it's evolved. So you've always believed in, whatever you call it, hybrid IT, and of course, you guys, that's your core mission. >> Yeah, exactly. >> So how do you each see hybrid? Where are the points of agreement? It sounds like there's more overlap than gaps, but maybe you could talk about your perspective. >> Yeah, I don't think there are any points of disagreement. I think for us, it's meeting our customers where their center of gravity is, where they see their center of management gravity. If it's on Azure, great. If it's in their data center, that's okay, too. So they can manage to or from. If Azure is their center of gravity, they can use automation, Ansible automation, to manage all the things on Azure, things on other cloud providers, things in their data center, all the way out to their edge. So they have the choice of what makes the most sense to them. >> And Azure Arc, obviously, that's how Azure Stack is evolving, right? >> Yeah, and we have Azure Arc integration with Ansible. >> Yeah. >> So yeah, absolutely. And I mean, we also have RHEL on our marketplace, right? So you can buy the basement and you can buy the roof and everything in between. We're growing the estate on the marketplace as well, to all the other products that we have in common. So absolutely. >> How much of an opportunity, if we go inside, give us a little peek inside Microsoft. How much of an opportunity does Microsoft think about multi-cloud specifically? I'm not crazy about the term multicloud, 'cause to me multicloud means it runs on Azure, runs on AWS, runs on Google, maybe runs somewhere else. But multicloud meaning that common experience, your version of hybrid, if you will. How serious is Microsoft about that as a business opportunity? A lot of people would say, well, Microsoft really doesn't want that, they want everything in their cloud. But I'd love to hear from you if that is so. >> Well, we have Azure Red Hat OpenShift, which is a Microsoft-branded version of OpenShift. We have Ansible now on our marketplace. We also, of course, have AKS. So I mean, our container strategy runs anywhere. But we also obviously have services that enhance all these things. So I think, our marketplace is a third party marketplace. It is designed to let customers buy and run easily on Azure, and we want to make that experience good. So I don't know that it's... I can't speak to our strategy on multicloud, but what I can speak to is, when businesses need to do innovation, we want it to be easy to do that, right? We want it to be easy to find, buy, deploy, and manage, and that's what we're trying to accomplish. >> Fair to say, you're not trying to stop it. >> No, yeah, yeah. >> Whether or not it evolves into something that you heavily lean into or see. >> When we were talking before the cameras turned on, you said that you think marketplaces are the future. Why do you say that? And how will marketplaces be differentiated from each other in the future? >> Well, our marketplace is really, first of all, I think, as you said off camera, they're now. You can buy now, right? There's nothing that stops you.
But to me, it's an extension of the consumerization of IT. I've been in IT and manageability for about 23 years, and full automation is what we in IT used to always talk about, that single pane of glass. How do you keep track of everything? How do you make it easy? How do you support it? And IT is always eking out that last little bit of funding to do innovation, right? So what we can do with consumerization of IT is make it easier to innovate, make it cheaper to innovate, right? So I think marketplaces do that, right? They've got gold images you can deploy. You're also able to deploy custom images. So I think the future is, particularly with ours, like we support, I don't remember the exact number, but tax calculation in over a hundred countries. We've got like 17 currencies. So as we progress, and customers can run from anywhere in the world and buy from anywhere in the world, and we make it simple to do those things that used to take maybe two months, to spin up services for innovation, and Ansible helps with that, that's going to help enterprises innovate faster. And I think that's what marketplaces are really going to bring to the forefront: that innovation. >> Tom, why did Ansible, I'm going to say win, I mean, you're never done, but it was unclear a few years ago which automation platform was going to win in the marketplace, and clearly Ansible has taken a leading position. Why? What were the factors that led to that? >> Honestly, it was the strength of the community, right? And Red Hat leaning into that community to support that community. When you look out at the upstream community for Ansible, the number of participants, active participants that are contributing to the community, just increases its value to everybody. So the number of integrations, the number of things that you can automate with Ansible, is in the thousands and thousands, and that's not because a group of Red Hat engineers wrote it. That's because our community partners, like Microsoft, wrote the Azure integrations for Ansible. F5 does theirs. Customers take those and expand on them. So the number of use cases that we can address through the community and through our partners is immense. >> But that doesn't just happen. I mean, what have you done to cultivate that community? >> Well, it's in Red Hat's DNA, right? To be the catalyst in a community, to bring partners and users together, to share their knowledge and their expertise and their skills, and to make the code open. So anybody can go grab Ansible from upstream and start doing stuff with it, if they want. If they want maturity on it and management for it and support and all the other things that Red Hat provides, then they come to us for a subscription. So it's really been about sort of catalyzing and supporting that community, and Red Hat is a good steward of these upstream communities. >> Is Azure putting Ansible to use actually within your own platform, as opposed to it being a managed service? Are you adopting Ansible for automation of the Azure platform? >> I'll let you answer that. >> So two years ago, Microsoft presented at AnsibleFest, our fall conference. Budd Warrack, I'm butchering his last name, but he came on and told how the networking team at Microsoft supports about 35,000 access points across hundreds of buildings, all the Microsoft campuses, using Ansible to do that. Fantastic story, if you want to go on YouTube and look up that use case. So Microsoft is an avid user of the Ansible technology in their environment.
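As a rough illustration of the kind of automation being described, Ansible content executed programmatically rather than by hand, here is a small sketch using the open source ansible-runner library. The directory layout, playbook name, and inventory are assumptions, and this is not the managed service's actual interface; it only shows the shape of running a playbook from code and keeping the results for an audit trail.

```python
# Hypothetical sketch: run an Ansible playbook from code and inspect events.
# The project directory and playbook name are placeholders, not a product API.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="./azure-automation",    # contains project/, inventory/, env/
    playbook="configure_access_points.yml",   # e.g. network config rolled out at scale
)

print("status:", result.status)    # "successful" or "failed"
print("return code:", result.rc)

# Each task event can be kept for auditing and compliance trails.
for event in result.events:
    if event.get("event") == "runner_on_failed":
        print("failed task:", event["event_data"].get("task"))
```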
>> Azure is kind of this really, I mean, incredible strategic platform for Microsoft. I wonder if you could talk about Azure as a honeypot for partners. I mean, the momentum is unbelievable. I pay attention to the earnings calls every quarter for Azure growth, even though I don't know what the exact number is, 'cause they won't give it to me, but they give me the growth rates, and it's actually accelerating. >> No lie. (Tracie laughing) >> I've got my number. It's in the tens of billions. I mean, I'm at north of 35 billion, growing in the high 30-percent range. I mean, it's remarkable. So talk about the importance of that to the ecosystem, as a honeypot. >> Satya has said it right many times: partners are essential to our strategy. But if you think about it, software solves problems. We have software that solves problems. They have software that solves problems, right? So when IT and customers are thinking of solving a problem, they're thinking software, right? And we want that software to run on Azure. So partners have to be essential to our strategy. Absolutely. And again, we're one team to the customer. They want to see that as working together seamlessly. They don't want it to be hardware plus Azure plus software. So that's absolutely critical to our success. >> And if I could add, for us, the partners are super important. Some of our launch partners are F5 and CyberArk, who have certified Ansible content for Ansible on Azure. We have service provider partners like Accenture and Kyndryl that are launching with us and providing our joint customers with help to get up to speed. So it really is a partner play. >> Absolutely.
>> You'll have to bring my son on for that interview. >> Yeah. >> My son will interview. >> He knows more than all of us, I'm sure. What about Ansible? What's ahead for Ansible? >> Edge, so part of the Red Hat play at the Edge. We've getting a lot of customer pull for both industrial Edge use cases in the energy sector. We've had a joint customer with Azure that has a combined Edge platform. Certainly, the cloud stuff that we're announcing today is a huge growth area. And then just general enterprise automation. There's lots of room to run there for Ansible. >> And lots of industries, right? >> Yeah. >> Telco, manufacturing. >> Retail. >> Retail. >> Yeah. >> Yeah. There's so many places to go, yeah, that need the help. >> The market's just, how you going to count it anymore? It's just enormous. >> Yeah. >> It's the entire GDP the world. But guys, thanks for coming to theCUBE. >> Yeah. >> Great story. Congratulations on the partnership and the announcements and look forward to speaking with you in the future. >> Yeah, thanks for having us. >> Thanks for having us. >> You're very welcome. And keep it right there. This is Dave Vellante for Paul Gillin. This is theCUBE's coverage of Red Hat Summit 2022. We'll be right back at Seaport in Boston. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tracie | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Tracie Zenti | PERSON | 0.99+ |
Tom Anderson | PERSON | 0.99+ |
Paul Satia | PERSON | 0.99+ |
seven | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Tom | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
17 currencies | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
CyberArk | ORGANIZATION | 0.99+ |
Kindra | ORGANIZATION | 0.99+ |
eight years | QUANTITY | 0.99+ |
Seaport | LOCATION | 0.99+ |
Thomas Anderson | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two months | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Red Hat Summit 2022 | EVENT | 0.99+ |
F5 | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
YouTube | ORGANIZATION | 0.98+ |
one team | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
about 23 years | QUANTITY | 0.98+ |
Red H | ORGANIZATION | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
Azure Arc | TITLE | 0.98+ |
tens of billions | QUANTITY | 0.98+ |
two years ago | DATE | 0.97+ |
Azure | TITLE | 0.97+ |
one company | QUANTITY | 0.97+ |
ORGANIZATION | 0.97+ | |
Azure Arc | TITLE | 0.97+ |
Edge | ORGANIZATION | 0.97+ |
OpenShift | TITLE | 0.97+ |
30% | QUANTITY | 0.97+ |
about 35,000 access points | QUANTITY | 0.97+ |
first one | QUANTITY | 0.96+ |
Red Hat | TITLE | 0.96+ |
Linux | TITLE | 0.95+ |
Azure Stack | TITLE | 0.95+ |
each | QUANTITY | 0.94+ |
Budd Warrack | PERSON | 0.94+ |
Breaking Analysis: The Improbable Rise of Kubernetes
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The rise of Kubernetes came about through a combination of forces that were, in hindsight, quite a long shot. Amazon's dominance created momentum for cloud native application development, and the need for newer and simpler experiences beyond just easily spinning up compute as a service. This wave crashed into innovations from a startup named Docker, and a reluctant competitor in Google that needed a way to change the game on Amazon and the cloud. Now, add in the effort of Red Hat, which needed a new path beyond Enterprise Linux, and, oh, by the way, it was just about to commit to a path of a Kubernetes alternative for OpenShift, and figure out a governance structure to herd all the cats in the ecosystem, and you get the remarkable ascendancy of Kubernetes. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we tapped the back stories of a new documentary that explains the improbable events that led to the creation of Kubernetes. We'll share some new survey data from ETR and commentary from the many early innovators who came on theCUBE during the exciting period since the founding of Docker in 2013, which marked a new era in computing. Because we're talking about Kubernetes and developers today, the hoodie is on. And there's a new two-part documentary that I just referenced. It's out, and it was produced by Honeypot; part one and part two tell the story of how Kubernetes came to prominence and many of the players that made it happen. Now, a lot of these players, including Tim Hockin, Kelsey Hightower, Craig McLuckie, Joe Beda, Brian Grant, Solomon Hykes, Jerry Chen, and others, came on theCUBE during the formative years of containers going mainstream and the rise of Kubernetes. John Furrier and Stu Miniman were at the many shows we covered back then, and they unpacked what was happening at the time. We'll share the commentary from the guests that they interviewed and try to add some context. Now let's start with the concept of developer defined infrastructure, DDI. Jerry Chen was at VMware and he could see the trends that were evolving. He left VMware to become a venture capitalist at Greylock. Docker was his first investment. And he saw the future this way. >> What happens is when you define infrastructure in software, you can program it. You make it portable. And that's the beauty of this cloud wave, what I call DDIs. Now, to your point, every piece of infrastructure, from storage, networking, to compute, has an API, right? And in AWS there was an early trend where S3, EBS, EC2 had APIs. >> As building blocks too. >> As building blocks, exactly. >> Not monolithic. >> Not monolithic; building blocks, where every little building block has its own API. And just like that, Docker really is the API for this unit of the cloud. It enables developers to define how they want to build their applications, how to network them, you know, as was talked about, and how you want to secure them and how you want to store them. And so the beauty of this generation is now developers are determining how apps are built, not just at the, you know, end-user iPhone app layer, but the data layer, the storage layer, the networking layer. So every single level is being disrupted by this concept of a DDI, and how you build, use, and actually purchase IT has changed.
And you're seeing the incumbent vendors like Oracle, VMware, Microsoft try to react, but you're seeing a whole new generation of startups. >> Now, what Jerry was explaining is that a new abstraction layer was being built. Here's some ETR data that quantifies that and shows where we are today. The chart shows net score, or spending momentum, on the vertical axis, and market share, which represents the pervasiveness in the survey set. So as Jerry and the innovators who created Docker saw, the cloud was becoming prominent, and you can see it still has spending velocity that's elevated above that 40% red line, which is kind of a magic mark of momentum. And of course, it's very prominent on the X axis as well. And you see the low-level infrastructure, virtualization, and that even floats above servers and storage and networking, right. Back in 2013, the conversation with VMware, and by the way, I remember having this conversation deeply at the time with Chad Sakac, was: we're going to make this low-level infrastructure invisible, and we intend to make virtualization invisible, i.e. simplified. And so, you see above the two arrows there related to containers, container orchestration and container platforms, which are abstraction layers and services above the underlying VMs and hardware. And you can see the momentum that they have right there with the cloud and AI and RPA. So you had these forces that Jerry described that were taking shape, and this picture kind of summarizes how they came together to form Kubernetes. In the upper left, of course, you see AWS, and we inserted a picture from a post we did right after the first re:Invent in 2012. It was obvious to us at the time that the cloud gorilla was AWS, and it had all this momentum. Now, Solomon Hykes, the founder of Docker, you see there in the upper right. He saw the need to simplify the packaging of applications for cloud developers. Here's how he described it back in 2014 on theCUBE with John Furrier. >> A container is a unit of deployment, right? It's the format in which you package your application, all the files, all the executables, libraries, all the dependencies, in one thing that you can move to any server and deploy in a repeatable way. So it's similar to how you would run an iOS app on an iPhone, for example. >> Docker at the time was a 30-person company and it had just changed its name from dotCloud. And back to the diagram, you have Google with a red question mark. So why would you need more than what Docker had created? Craig McLuckie, who was a product manager at Google back then, explains the need for yet another abstraction. >> We created the strong separation between infrastructure operations and application operations. And so, Docker has created a portable framework to take basically a binary and run it anywhere, which is an amazing capability, but that's not enough. You also need to be able to manage that with a framework that can run anywhere. And so, the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment. You could run on another major cloud provider. >> Now, Google had this huge cloud infrastructure but no commercial cloud business to compete with AWS, at least not one that was taken seriously at the time. So it needed a way to change the game.
And it had this thing called Google Borg, which is a container management system and scheduler, and Google looked at what was happening with virtualization and said, you know, we obviously could do better. Joe Beda, who was with Google at the time, explains their mindset going back to the beginning. >> Craig and I started up Google Compute Engine, VM as a service. And the odd thing to recognize is that nobody who had been in Google for a long time thought that there was anything to this VM stuff, right? Cause Google had been on containers for so long. That was their mindset; Borg was the way that stuff was actually deployed. So, you know, my boss at the time, who's now at Cloudera, booted up a VM for the first time, and anybody in the outside world would be like, hey, that's really cool. And his response was like, well, now what? Right? You're sitting at a prompt. Like, that's not super interesting. How do I run my app? Right? Which is what everybody's been struggling with with cloud: not how do I get a VM up, but how do I actually run my code? >> Okay. So Google never really did virtualization. They were looking at the market and said, okay, what can we do to make Google relevant in cloud? Here's Eric Brewer from Google, talking on theCUBE about Google's thought process at the time. >> One interesting thing about Google is it essentially makes no use of virtual machines internally. And that's because Google started in 1998, which is the same year that VMware started and kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic Unix processes and communication. And so scaling that up, you get a system that works a lot with just processes and containers. So kind of when I saw containers come along with Docker, we said, well, that's a good model for us. And we can take what we know internally, which was called Borg, a big scheduler, and we can turn that into Kubernetes and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. >> Now, Eric Brewer gave us the bumper sticker version of the story there. What he reveals in the documentary that I referenced earlier is that initially Google was like, why would we open source our secret sauce to help competitors? So folks like Tim Hockin and Brian Grant, who were on the original Kubernetes team, went to management and pressed hard to convince them to bless open sourcing Kubernetes. Here's Hockin's explanation. >> When Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on, we realized we know where this is going. We know once you embrace the Docker mindset that you very quickly need something to manage all of your Docker nodes once you get beyond two or three of them, and we know how to build that, right? We've got a ton of experience here. Like, we went to our leadership and said, you know, please, this is going to happen with us or without us, and I think the world would be better if we helped. >> So the open source strategy became more compelling as they studied the problem, because it gave Google a way to neutralize AWS's advantage: with containers you could develop on AWS, for example, and then run the application anywhere, like Google's cloud. So it not only gave developers a path off of AWS; if Google could develop a strong service on GCP, it could monetize that play.
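For readers who have not seen it, here is a minimal sketch of the contrast Beda describes: instead of booting a VM and sitting at a prompt, you declare the containerized app you want and the cluster's scheduler runs it and keeps it running. It uses the standard Kubernetes Python client; the image name and replica count are placeholders, and error handling is omitted.

```python
# Minimal sketch: declare a Deployment (desired state) and let Kubernetes
# schedule, run, and restart the containers. Image and counts are placeholders.
from kubernetes import client, config

def run_app(name="hello-app", image="nginx:1.25", replicas=3):
    config.load_kube_config()            # use the current kubeconfig context
    apps = client.AppsV1Api()
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    run_app()
```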
Now, focus your attention back to the diagram, which shows a smiling Alex Polvi from CoreOS, which was acquired by Red Hat in 2018. And he saw the need to bring Linux into the cloud. I mean, after all, Linux was powering the internet; it was the OS for enterprise apps. And he saw the need to extend its path into the cloud. Now here's how he described it at an OpenStack event in 2015. >> Similar to what happened with Linux. Like, yes, there is still a need for Linux and Windows and other OSs out there, but by and large, on production web infrastructure, it's all Linux now. And you were able to get onto one stack. And how were you able to do that? It was by having a truly open, consistent API and a commitment to not breaking APIs, and so on. That allowed Linux to really become ubiquitous in the data center. Yes, there are other OSs, but Linux, by and large, for production infrastructure is what is being used. And I think you'll see a similar phenomenon happen for this next level up, cause we're treating the whole data center as a computer instead of treating one individual instance as the computer. And that's the stuff that, to me, Kubernetes and others are doing. And I think there will be one that shakes out over time, and we believe that'll be Kubernetes. >> So Alex saw the need for a dominant container orchestration platform. And you heard him, they made the right bet. It would be Kubernetes. Now Red Hat, Red Hat has been around since 1993, so it has a lot of on-prem, and it needed a future path to the cloud. So they rang up Google and said, hey, what do you guys have going on in this space? Google was kind of non-committal, but it did expose that they were thinking about doing something that was, you know, pre-Kubernetes. It was before it was called Kubernetes. But hey, we have this thing and we're thinking about open sourcing it. But Google's internal debates, and, you know, some of the arm twisting from the engineers, it was taking too long. So Red Hat said, well, screw it. We've got to move forward with OpenShift. So we'll do what Apple and Airbnb and Heroku are doing and we'll build on an alternative. And so they were ready to go with Mesos, which was very much more sophisticated than Kubernetes at the time and much more mature, but then Google at the last minute said, hey, let's do this. So Clayton Coleman with Red Hat, he was an architect, and he leaned in right away. He was one of the first committers from outside of Google. But you still had these competing forces in the market. And internally there were debates: do we go with simplicity or do we go with system scale? And Chen Goldberg from Google explains why they focused first on simplicity and getting that right. >> We had to defend why we were only supporting 100 nodes in the first release of Kubernetes. And we explained that we know how to build for scale. We've done that. We know how to do it, but realistically most users don't need large clusters. So why create this complexity? >> So Goldberg explains that rather than competing right away with, say, Mesos or Docker Swarm, which were far more baked, they made the bet to keep it simple and go for adoption and ubiquity, which obviously turned out to be the right choice. But the last piece of the puzzle was governance. Now, Google promised to open source Kubernetes, but when it started to open up to contributors outside of Google, the code was still controlled by Google, and developers had to sign Google paperwork that said Google could still do whatever it wanted.
It could sublicense, et cetera. So Google had to pass the baton to an independent entity, and that's how the CNCF was started. Kubernetes was its first project. Let's listen to Chris Aniszczyk of the CNCF explain. >> CNCF is all about providing a neutral home for cloud native technology. And, you know, it's been almost two years since our first board meeting. And the idea was, you know, there's a certain set of technologies out there that are essentially microservice based, that live in containers, that are essentially orchestrated by some process, right? That's essentially what we mean when we say cloud native, right? And CNCF was seeded with Kubernetes as its first project. And you know, as we've seen over the last couple years, Kubernetes has grown, you know, quite well. They have a large community, a diverse contributor base, and have done, you know, kind of extremely well. They're actually one of the fastest, highest velocity open source projects out there, maybe. >> Okay. So this is how we got to where we are today. This ETR data shows container orchestration offerings. It's the same XY graph that we showed earlier. And you can see where Kubernetes lands, notwithstanding that Kubernetes is not a company; but respondents, you know, are doing Kubernetes. They maybe don't know whose platform, and it's hard because the ETR taxonomy is fuzzy and it's survey data, and because Kubernetes is increasingly becoming embedded into cloud platforms, IT pros may not even know which one specifically. And so the reason we've linked these two platforms, Kubernetes and Red Hat OpenShift, is because OpenShift right now is a dominant revenue player in the space and is an increasingly popular PaaS layer. Yeah, you could download Kubernetes and do what you want with it, but if you're really building enterprise apps, you're going to need support, and that's where OpenShift comes in. And there's not much data on this, but we did find this chart from AMDA which showed the container software market, whatever that really is, and Red Hat has got 50% of it. This is revenue. And, you know, we know the muscle of IBM is behind OpenShift, so that's really not hard to believe. Now we've got some other data points that show how Kubernetes is becoming less visible and more embedded under the hood, if you will. As this chart shows, this is data from CNCF's annual survey. They had 1800 respondents here, and the data showed that 79% of respondents use certified Kubernetes hosted platforms. Amazon Elastic Container Service for Kubernetes was the most prominent at 39%, followed by Azure Kubernetes Service at 23% and Azure AKS Engine at 17%, with Google's GKE, Google Kubernetes Engine, behind those three. Now, you have to ask: okay, Google. Google's management initially had concerns, you know, why are we open sourcing such a key technology? And the premise was it would level the playing field. And for sure it has, but you have to ask, has it driven the monetization Google was after? And I would have to say no, it probably didn't. But think about where Google would've been if it hadn't open sourced Kubernetes: how relevant would it be in the cloud discussion? Despite its distant third position behind AWS and Microsoft, or even fourth if you include Alibaba, without Kubernetes Google probably would be much less prominent or possibly even irrelevant in cloud, enterprise cloud. Okay.
Let's wrap up with some comments on the state of Kubernetes and maybe a thought or two about, you know, where we're headed. So look, no shocker: Kubernetes, for all its improbable beginnings, has gone mainstream in the past year or so. We're seeing much more maturity and support for stateful workloads, and big ecosystem support with respect to better security and continued simplification. But you know, it's still pretty complex. It's getting better, but it's not VMware level of maturity, for example, of course. Now, adoption has always been strong for Kubernetes for cloud native companies who start with containers on day one, but we're seeing many more IT organizations adopting Kubernetes as it matures. It's interesting, you know, Docker set out to be the system of the cloud, and Kubernetes has really kind of become that. Docker Desktop is where Docker's action really is; that's where Docker is thriving. It sold off Docker Swarm to Mirantis and has made some tweaks; Docker has made some tweaks to its licensing model to be able to continue to evolve its business. We'll hear more about that at DockerCon. And as we said years ago, we expected Kubernetes to become less visible, Stu Miniman and I talked about this in one of our predictions posts, and really become more embedded into other platforms. And that's exactly what's happening here, but it's still complicated. Remember, go back to the early and mid cycle of VMware: to understand things like application performance, you needed folks in lab coats to really remediate problems and dig in and peel the onion and scale the system. And in some ways you're seeing that dynamic repeated with Kubernetes: security, performance, scale, recovery when something goes wrong, all are made more difficult by the rapid pace at which the ecosystem is evolving Kubernetes. But it's definitely headed in the right direction. So what's next for Kubernetes? We would expect further simplification, and you're going to see more abstractions. We live in this world of almost perpetual abstractions. Now, as Kubernetes improves support for multi-cluster, it will begin to treat those clusters as a unified group, so kind of abstracting multiple clusters and treating them as one to be managed together. And this is going to create a lot of ecosystem focus on scaling globally. Okay, once you do that, you're going to have to worry about latency, and then you're going to have to keep pace with security as you expand the threat area. And then of course recovery: what happens when something goes wrong? The more complexity, the harder it is to recover, and that's going to require new services to share resources across clusters. So look for that. You also should expect more automation. It's going to be driven by the host cloud providers: as Kubernetes supports more stateful applications and begins to extend its cluster management, cloud providers will inject as much automation as possible into the system. And finally, as these capabilities mature, we would expect to see better support for data-intensive workloads like AI, machine learning, and inference. Scheduling these workloads becomes harder because they're so resource intensive, and performance management becomes more complex. So that's going to have to evolve.
I mean, frankly, many of the things that the Kubernetes team, way back when, you know, back-burnered early on, things you saw, for example, in Docker Swarm or Mesos, are going to start to enter the scene now with Kubernetes, as they start to sort of prioritize some of those more complex functions. Now, the last thing I'll ask you to think about is what's next beyond Kubernetes. You know this isn't it, right? With serverless and IoT and the edge and new data-heavy workloads, there's something that's going to disrupt Kubernetes. And, by the way, in that CNCF survey, nearly 40% of respondents were using serverless, and that's going to keep growing. So how is that going to change the development model? You know, Andy Jassy once famously said that if they had to start over with Amazon retail, they'd start with serverless. So let's keep an eye on the horizon to see what's coming next. All right, that's it for now. I want to thank my colleagues: Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team, who also manages the Breaking Analysis podcast. Kristin Martin and Cheryl Knight help get the word out on socials, so thanks to all of you. Remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast. Don't forget to check out the ETR website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me: email me directly at david.vellante@siliconangle.com or DM me @dvellante. You can comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, everybody. Thanks for watching. Stay safe, be well, and we'll see you next time. (upbeat music)
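As a rough sketch of the multi-cluster direction mentioned in the wrap-up, treating a fleet of clusters as one group to be managed together, the snippet below pushes the same desired state to every context in a kubeconfig. The namespace and contexts are assumptions; real fleet managers add placement policy, latency awareness, and failure handling on top of this loop.

```python
# Hypothetical sketch: apply one piece of desired state to every cluster
# context in a kubeconfig, the simplest form of "manage the group as one."
from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE_NAME = "fleet-demo"   # illustrative desired state: a namespace everywhere

def ensure_namespace(api):
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=NAMESPACE_NAME))
    try:
        api.create_namespace(ns)
    except ApiException as err:
        if err.status != 409:   # 409 means it already exists, which is fine
            raise

def reconcile_fleet():
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
        ensure_namespace(api)
        print("ensured", NAMESPACE_NAME, "on cluster context", name)

if __name__ == "__main__":
    reconcile_fleet()
```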
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephanie Chan | PERSON | 0.99+ |
Chris Aniszczyk | PERSON | 0.99+ |
Hockin | PERSON | 0.99+ |
Dave Vollante | PERSON | 0.99+ |
Solomon Hykes | PERSON | 0.99+ |
Craig McLuckie | PERSON | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
Kristin Martin | PERSON | 0.99+ |
Brian Grant | PERSON | 0.99+ |
Eric Brewer | PERSON | 0.99+ |
1998 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Tim Hockin | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Alex Polvi | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Craig McLuckie | PERSON | 0.99+ |
Clayton Coleman | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
2014 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
Jerry | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
2012 | DATE | 0.99+ |
Joe Beda | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Stu Miniman | PERSON | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
17% | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
30% | QUANTITY | 0.99+ |
40% | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
23% | QUANTITY | 0.99+ |
iOS | TITLE | 0.99+ |
1800 respondents | QUANTITY | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
2015 | DATE | 0.99+ |
39% | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Airbnb | ORGANIZATION | 0.99+ |
Hen Goldberg | PERSON | 0.99+ |
fourth | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Chad Sakac | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
david.villane@Siliconangle.com | OTHER | 0.99+ |
first project | QUANTITY | 0.99+ |
Craig | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
ETR | ORGANIZATION | 0.99+ |
Alexis Richardson, Weaveworks | CUBE Conversation
(bright upbeat music) >> Hey everyone, welcome to theCUBE's AWS startup showcase. This is season two of the startup showcase, episode one. I'm your host, Lisa Martin. Pleased to be welcoming back one of our alumni, Alexis Richardson, the founder >> Hey. >> and CEO of Weaveworks. Alexis, welcome back to the program. >> Thank you so much, Lisa, I'm really happy to be here. Good to see you again. >> Likewise. So it's been a while since we've had Weaveworks on the program. Give the audience an overview of Weaveworks. You were founded in 2014, pioneering GitOps, automating Kubernetes across all industries, but help us understand, unpack that a bit. >> Well, so my previous role was at Pivotal, where I was head of application platform and I was responsible for Spring and vFabric, and some pieces of Cloud Foundry. And you may remember back in those days, everybody wanted to build like a Heroku, but for the enterprise. And so they were asking, how can we build more cloud services? And my team was involved in building out cloud services, but we were running into trouble with the technology that we had. And then when containers appeared, we thought this is the technology for us to roll out cloud services. So with some of my team, we decided to start a new company, Weaveworks, really intending to focus on developers. Because these new containers were pretty cool, but they were really complex, operationally centric tools, and enterprise developers need simplicity. That's what we'd learned from things like Spring. They want simplicity, productivity, velocity, all of that stuff, they don't want operational complexity. So Weaveworks' mission is to make applications easy for developers with containers. >> Talk to me about how you've accomplished that over the last seven years, and some of the things that you're doing to facilitate a DevOps practice within organizations across any industry? >> Yeah, well, our story is pretty interesting because of course in 2014, all of this was incredibly new. You couldn't even take two containers and put them together into a single application. So forget about enterprise. What we did was we built a network, which gave the company its name, Weave. But then we spent several years building out more and more pieces of the stack. We decided that we should go to market commercially because we're an open source company with a commercial SaaS. And we thought we would be like New Relic, that there'll be lots of customers in the cloud. And, therefore, they would need monitoring and management. And Weave started writing a SaaS based on Kubernetes, which was what we chose as our platform, back in the day, very, very, very early. We were one of the very first companies to start running Kubernetes in production other than Google. And so what we learned was customers didn't want to have management and monitoring for applications in the cloud, based on Kubernetes. Because they were all still struggling to get Docker working, to get basic Kubernetes clusters set up. And they kept saying to us "this is great, we love your tool, but we really need simpler things right now." So what we had done was we'd learned how to operate Kubernetes. And we discovered that we were doing it in this specific way, a way that meant that we could be reliable, we could set things up remotely, we could move things between zones. And so we called this approach GitOps. So we've named the practice of GitOps, which is really DevOps for Kubernetes. 
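To make the GitOps idea concrete, here is a minimal sketch of the pattern being described: the desired state of the cluster lives in Git, and a small agent keeps reconciling the live cluster against it. This is not Weaveworks' Flux code; the repository URL, directory layout, and simple polling loop are assumptions made purely for illustration, and a real controller does far more (pruning, health checks, multi-tenancy, event-driven syncs).

```python
"""Minimal GitOps reconciliation loop (illustrative sketch only).

Assumes the desired state is plain Kubernetes YAML in a Git repo, and
that `git` and `kubectl` are on the PATH with a kubeconfig already
pointing at the target cluster. Repo URL and paths are hypothetical.
"""
import subprocess
import time

REPO_URL = "https://example.com/platform-config.git"  # hypothetical repo
WORKDIR = "/tmp/platform-config"
MANIFEST_DIR = f"{WORKDIR}/clusters/production"       # hypothetical layout


def sync_repo() -> None:
    """Clone the config repo on first run, fast-forward it afterwards."""
    probe = subprocess.run(["git", "-C", WORKDIR, "rev-parse"], capture_output=True)
    if probe.returncode != 0:
        subprocess.run(["git", "clone", REPO_URL, WORKDIR], check=True)
    else:
        subprocess.run(["git", "-C", WORKDIR, "pull", "--ff-only"], check=True)


def drift_detected() -> bool:
    """`kubectl diff` exits non-zero when live state differs from Git."""
    result = subprocess.run(["kubectl", "diff", "-f", MANIFEST_DIR], capture_output=True)
    return result.returncode != 0


def reconcile() -> None:
    """Apply the declared state so the cluster converges back to Git."""
    subprocess.run(["kubectl", "apply", "-f", MANIFEST_DIR], check=True)


if __name__ == "__main__":
    while True:  # a production operator would watch events rather than poll
        sync_repo()
        if drift_detected():
            reconcile()
        time.sleep(60)
```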
We decided that it was exciting after we had an outage and made a very quick recovery. Told people about it and they said, "well, we can't even get Kubernetes started, let alone recover it from a crash." So we started evangelizing GitOps and saying to people that we knew how to set up and run Kubernetes as operators for developers of apps, based on this experience. And people said, "well, why don't you help us do that?" So we pivoted the company away from a SaaS business, doing management, and straight back into enterprise software, providing a solution for people to run Kubernetes stacks, deploy applications, detect drift, and operate them at scale. And we've never looked back. And since then we've built, very successfully, a big business out of telco customers, banks, car companies, really Global 2000s. Starting from that open source base, continuing to respect that, but always keeping in mind helping developers build applications at scale. >> So in terms of that pivot that you've made, it sounds like you made that in conjunction with developers across industries to really understand what the right direction is here. What's the approach, what's their appetite? Talk to me about a customer example or two that really you think articulate the value and the right decision that that pivot was and how you're helping customers to really further their DevOps practice. >> Well, one of our first customers was actually Fidelity in this new world. Fidelity has a very advanced technology organization, a very forward thinking CTO, who I seem to recall is, or CEO, who I think is female. Really is into technology as a source of, you know, velocity and business strength. And we were brought to Fidelity by our partner, Amazon. And they said, "look, Fidelity have been using your open source tools, they want to run on Kubernetes, the early EKS service on AWS, but they need help, because what they want is a shared application platform that people can use across Fidelity to deploy and manage apps." So the idea Fidelity had was they're going to split their IT into a platform team, that was going to provide this platform, and a bunch of app teams that were going to write business apps like risk management, other financial processing. PaaS, basically. And we came in to help Fidelity. And what we did was help Fidelity roll out, using GitOps, an Amazon-wide application platform. We also helped them to build, this was very early days for us post pivot, we really helped them to build an add-on layer. So you could take any Kubernetes cluster and add other components to it, and then you'd have your platform right there. And the whole stack would be managed by GitOps, which nobody had done before. Nobody had come up with a way of managing the whole stack, so you could start and stop stacks wherever you wanted, at will, correctly. I mean, if you talk to people about what's hard in IT, they'll tell you shutting down Kubernetes is hard, 'cause I know I'm never going to know how to start it again. So being able to start and stop things, move them around is really crucial. What Fidelity also wanted, which made I think the whole thing even more exciting, was to duplicate this environment on Azure and actually also on-premise later on. So where Fidelity are today is the whole Fidelity platform runs on Microsoft and on Amazon and on-premise, using three different implementations of Kubernetes. But using this platform technology and GitOps that we helped Fidelity roll out. 
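A rough sketch of what "one operating model across Amazon, Azure and on-premise" can look like mechanically: the same declarative manifests are applied to every cluster context, so each environment converges on the same platform definition. The context names and manifest path below are invented for illustration, and this is not the actual Fidelity or Weaveworks tooling.

```python
"""Apply one declarative platform definition to several clusters (sketch).

Assumes `kubectl` is on the PATH and kubeconfig contexts with the names
below already exist; names and the manifest path are illustrative only.
"""
import subprocess

CONTEXTS = ["eks-prod", "aks-prod", "onprem-prod"]  # hypothetical contexts
MANIFEST_DIR = "platform/base"                      # hypothetical layout

for ctx in CONTEXTS:
    print(f"reconciling {ctx} ...")
    # The same manifests go to every cluster; only the target context changes.
    subprocess.run(
        ["kubectl", "--context", ctx, "apply", "-f", MANIFEST_DIR],
        check=True,
    )
```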
And if you want to know a bit about the story, type FIDEKS, F I D E K S into Google and you'll find a video of me three or four years ago on stage at KubeCon talking with a Fidelity chief architect about this story. It's pretty exciting and these are early days for these new Kubernetes platforms. >> Early days, but so transformative. And I can't imagine the events of the last few years without having this capability and this technology to facilitate such pivots and transformation, where we would all be. I want to kind of dig into some use cases, 'cause one of the things that you just mentioned with the Fidelity example got me thinking use case of hybrid, multi-cloud, but also continuous app development. Talk to me about some of the key use cases that you work with customers on. >> Well you just named two. So hybrid and multi-cloud is absolutely critical, and also sovereign, which is when you're actually offline and you only update your cloud periodically. That's one of the major use cases for us. And what customers want there is they want consistency. They want a single operating model, across all of these different locations, so that all of their teams can get trained on one set of technologies and then move from place to place. They're not looking for magic, where apps move with the sun or any of that stuff. They just want to know they can base everything on a single, homogeneous skillset and have scale across their teams. Maybe tens of thousands of developers, all who know how to do the same thing. That's a really important use case. You also mentioned continuous delivery. That's probably the second really critical use case for us. People say, "I've got Kubernetes set up now, and I have Jenkins." JP Morgan once told me they had 40,000 Jenkins servers, or something like that, you know, Jenkins at scale. And they're like, "okay, how do I push changes from Jenkins into the cloud?" So GitOps provides a bridge between the world of CI and the runtime of Kubernetes. So one group of our customers is saying, help me put in that middle piece of CD that gets you CI/CD to Kubernetes, that's a classic. And then what they're looking for is an increase in velocity. And what we typically see is people go from deploying once every six months to deploying once a week, to deploying once a day, to deploying several times a day. And then they split things up into teams and suddenly, wow, that vision of microservices has come and everybody's excited 'cause IT velocity has gone up by two X. Another really >> So, >> Sorry, carry on. >> Go ahead, I was just going to say in terms of IT velocity it sounds like that's a major business outcome that you're enabling, whether it's telco, financial services, or whatnot. That velocity, as you just described, is rapidly accelerating. >> Yeah, if you go to our website, you'll find a bunch of these use cases. And one that I really like is NatWest Mettle, which is another financial example. They're not all financial by the way. But there's some metrics in there. We're getting people up to two X productivity, which at scale is huge, really makes a difference. Also, mean time to recovery. If you know the metric space, you'll know these are all DORA metrics. And DORA, which was acquired by Google a couple of years ago, is a really fantastic analyst in the space that came up with a bunch of ways of thinking about how to measure your performance as a business and IT organization. Recovery time and things like this that you really need to focus on if you're in this world. 
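For readers unfamiliar with the DORA metrics mentioned here, a small sketch of how two of them are typically computed from raw events; the timestamps below are invented purely for illustration, and a real pipeline would pull this data from CI/CD and incident tooling.

```python
"""Compute two DORA metrics from raw events (illustrative data only)."""
from datetime import datetime, timedelta

# Hypothetical event log: when deployments happened, and incident windows.
deployments = [
    datetime(2022, 1, 3, 10, 0),
    datetime(2022, 1, 4, 16, 30),
    datetime(2022, 1, 7, 9, 15),
]
incidents = [
    (datetime(2022, 1, 4, 17, 0), datetime(2022, 1, 4, 17, 40)),  # (start, restored)
]

# Deployment frequency: deployments per week over the observed window.
window_weeks = max((max(deployments) - min(deployments)) / timedelta(weeks=1), 1e-9)
deploy_frequency = len(deployments) / window_weeks

# Mean time to recovery: average duration from incident start to restoration.
mttr = sum(((end - start) for start, end in incidents), timedelta()) / len(incidents)

print(f"deployment frequency: {deploy_frequency:.1f} per week")
print(f"mean time to recovery: {mttr}")
```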
>> Well, from an IT velocity perspective, if I translate that to business outcomes, especially given the dynamics in the market over the last two years, this is transformative and probably helped a lot of organizations to pivot multiple times during the last couple of years. To get to that survival mode and into that thriving mode, enabling organizations to meet customer demand that was changing faster, et cetera. That's a really big imperative that this technology can deliver to the business. >> Yeah, I mean, that's been huge for us. So when the pandemic first began, obviously, we had some road bumps and there were some challenges, but what we found out very quickly was that people were moving into digital much faster. And we've been mostly enabling them, not just in finance, as I said, but also car companies, utilities, et cetera. The other one, of course, is modern operations. So, everyone's excited about the potential for automation. If I have thousands and thousands of developers and thousands of applications, do I need thousands of operations staff? And the answer is, with Kubernetes in this new era, you can reduce your operational load. So that actually very few people are needed to keep systems up, to do basic monitoring, to do redeployments and so on, which are all boring infrastructure tasks that no developer wants to do. If we can automate all of that, we can modernize the whole IT space. And that's, I think, the promise of Kubernetes that we're also seeing as well. So application speed first and then operational competence second. >> So you guys had a launch, here we are in early calendar year 2022, you guys had a launch just about six or eight weeks ago in November of 2021, where you were announcing the GA of Weave GitOps Enterprise, which is a licensed product building on the free open source Weave GitOps core. Talk to me about that and what the significance of that is. >> Well, this is an enterprise solution that helps customers build these critical use cases, like shared service platform or secure DevOps or multi-cloud, using GitOps, which gives them higher security, lower cost of management, better operations, and higher velocity. And all of it is taking all the best practices that we've learned, starting from those days of running our own Kubernetes stack and then through those early customers like Fidelity, into the modern era where we have an at-scale platform for these people. And the crucial properties are it provides you with a platform, it provides you with trusted delivery, and it provides you with what we call release orchestration, which is when you deploy things at scale into production, using tools like canaries and other modern practices. So, all of it is enabling what we call the cloud native enterprise, application delivery, modern operations. >> So what's the upgrade path for customers that are using the free open-source tier to the enterprise package, what does that look like? >> The good news is it's an add-on. So, I have been in the industry a while and I strongly believe it's really important that if you have an open source product, you shouldn't ask people to delete it or uninstall it to install your enterprise product, unless you really, really, really have to. And I'm not trying to be picky here. Maybe there are cases where it's important, but actually in our case, it's very simple. 
If you're already using one of our upstream tools, like Flux, for example, then going from Flux to Weave GitOps Enterprise is an add-on installation. So you don't have to change or take out what you're doing. You might be using Flux without knowing it. You may not be aware of this, but it's also inside Azure AKS and Arc, it's inside the Amazon EKS Anywhere bundle. It's available on Alibaba, VMware have used it in Cartographer and Tanzu Application Platform. And even Red Hat use it too in some cases. So you may be using it already, from one of the big vendors who are partners of ours, as a precursor to buying Weave GitOps Enterprise. So, you know, don't be scared. Get in touch is what I would say to people. >> Get in touch. And of course, folks can go to weave.works to learn more about that. And, also we want to watch the Weave.works space, 'cause you have some news coming out relatively soon that sounds pretty exciting, Alexis. >> Well, I mentioned trusted delivery. And I think one of the things with that is no CIO wants to go faster, unless they also have the safety wheels on, let's face it. And the big question we get asked is "I love this GitOps stuff, but how can I bring my team with me? How can I introduce change? I have all of these approval mechanisms in place, can I move into the world of GitOps?" And the answer is yes, yes you can, because we now support policy engines baked into our enterprise product. Now, if you don't know what policy is, it's really a way of applying rules to what you're seeing in IT. And you can detect whether something passes or fails conditions, which means that we can detect if something bad is about to happen in a deployment and stop it from happening, this is really critical. It also goes hand in hand with things like supply chain and security, which I'm sure we read about in the news far too much. >> Yeah, pretty much daily supply chain and security >> Pretty much daily. >> is one of those things that we're all in every generation concerned about. Well, Alexis, it's been a pleasure having you back on the program, talking to us about what's new at Weaveworks, the direction that you're going, how you're helping organizations across industries really advance their DevOps practice. And we will check weave.works in the next couple of weeks for more on that news that you started to break a little bit with us today. We appreciate your time, Alexis. >> Thank you very much, indeed, take care. >> Likewise. For Alexis Richardson, I'm Lisa Martin. Keep it right here on theCUBE, your leader in hybrid tech event coverage. (bright music) (music fades)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Alexis Richardson | PERSON | 0.99+ |
thousands | QUANTITY | 0.99+ |
Fidelity | ORGANIZATION | 0.99+ |
Alexis | PERSON | 0.99+ |
November of 2021 | DATE | 0.99+ |
JP Morgan | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Weaveworks | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
NatWest | ORGANIZATION | 0.99+ |
once a day | QUANTITY | 0.99+ |
40,000 | QUANTITY | 0.99+ |
three | DATE | 0.98+ |
early calendar year 2022 | DATE | 0.98+ |
today | DATE | 0.98+ |
once a week | QUANTITY | 0.98+ |
one set | QUANTITY | 0.98+ |
Alibaba | ORGANIZATION | 0.98+ |
two thousands | QUANTITY | 0.98+ |
Pivotal | ORGANIZATION | 0.98+ |
Weave | ORGANIZATION | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
two containers | QUANTITY | 0.97+ |
Jenkins | TITLE | 0.97+ |
Weaveworks' | ORGANIZATION | 0.97+ |
Flux | TITLE | 0.97+ |
Kubernetes | TITLE | 0.96+ |
single application | QUANTITY | 0.96+ |
weave.works | ORGANIZATION | 0.96+ |
four years ago | DATE | 0.96+ |
Azure | TITLE | 0.95+ |
eight weeks ago | DATE | 0.95+ |
DORA | ORGANIZATION | 0.94+ |
first customers | QUANTITY | 0.93+ |
Relic | ORGANIZATION | 0.92+ |
single | QUANTITY | 0.91+ |
about six | DATE | 0.9+ |
first companies | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.89+ |
telco | ORGANIZATION | 0.89+ |
Heroku | ORGANIZATION | 0.89+ |
James Watters, VMware | AWS re:Invent 2021
(upbeat music) >> Welcome back everyone to theCUBE's continuous coverage of AWS re:Invent 2021. I'm John Furrier, your host of theCUBE. We're here with James Watters, CTO of Modern Applications at VMware, here to talk about the big Tanzu cloud native application wave, the modernization's here. James, great to see you. Thanks for coming on. >> Hey John, great to have you back on. And really excited about re:Invent this year. And I've been watching your coverage of it. There's lots of exciting stuff going on in this space. >> Awesome. Well, James, you've been riding the wave of, I would call cloud 1.0, 2.0, what do you want to call it, the initial wave of cloud where the advent of replatforming is there. You know all these benefits and things are moving fast. Things are being developed. A lot of endeavors, things are tracking. Some are kicking, Kubernetes kicks in, and now the big story is over the past year and a half. Certainly the pandemic highlighted it, this big wave that's hitting now, which is the real, the modernization of the enterprise, the modernization of software development. And even Amazon was saying that in one of our talks that the sovereign life cycle's over, it should be completely put to bed. And that DevOps is truly here. And you add security, you got DevSecOps. So an entirely new, large scale, heavy use of data, new methodologies are all hitting right now. And if you're not on that wave you're driftwood, what's your take? >> Oh, I think you're dead right, John, and you know, kind of the first 10 years of working on this was sort of proving that the microservices, the containers, the declarative automation, the DevOps patterns were the future. And I think everyone's agreed now. And I think DevSecOps and the trends around app modernization are really around bringing that to scale for enterprises. So the conversations I tend to be having are, Hey, you've done a little Kubernetes. You've done some modern apps and APIs, but how do you really scale this across your enterprise? That's what I think is exciting today. And that's what we're talking about. Some of the tools we're bringing to Amazon to help people achieve faster consumption, better scale, more security. >> You know, one of the things about VMware that's been impressive over the years is that on the wave of IT, they already had a great operational install base. They did a deal with Amazon, Raghu did that, I think 2016, that kind of cleared the air. They're not going to do their own cloud, or their cloud efforts, it kind of solidified that. And then in comes Kubernetes, and then you saw a completely different cloud native wave coming in with the Tanzu, the Heptio acquisition. And since then a lot's been done. Can you just take us through the Tanzu evolution, because I think this is a cornerstone of what's happening right now. >> Yeah, that's a great question, John. I think that the emergence of Kubernetes as a common set of APIs that every cloud and almost every infrastructure agrees on was a huge one. And the way I talk to our clients about it is that VMware is doing a couple of things in this space. The first is that we're recognizing that, as an infrastructure, we're baking Kubernetes into every vSphere, be it vSphere on-prem, be it VMC on Amazon. You're just going to find Kubernetes is a big part of each year. So that's kind of a big step one, but it's in some ways the same way that Amazon is doing with EKS and Azure is doing with AKS, but like every infrastructure provider is bringing Kubernetes everywhere.
And then that kind of unleashes this really exciting moment where you've got this global control plane that you can program to be your DevSecOps platform. And Kubernetes has this incredible model of extensibility where you can add CRDs and program right against the Kubernetes APIs with the additional features and functions you want in your DevSecOps pipeline. And so it's created this opportunity for Tanzu to kind of have then a global control plane, which we call Tanzu Mission Control, to bring all of those Kubernetes clusters running on different clouds together. And then the last thing that we'll talk about a little bit more is this Tanzu Application Platform, which is bringing a developer experience to Kubernetes. So that you're not always starting with what I like to say, like, oh, I have Git, I have Kubernetes, am I done? There's a lot more to the story than that. >> I want to get to this Tanzu Application Platform on EKS. I think that's a big story at VMware. We've seen that, but before we do that, for the folks out there watching who are like, I'm now seeing this, whether they're young, new to the industry, or enterprises who are replatforming or refactoring, trying to understand what is a modern application. So give us the definition in your words, what is a modern application? >> You know, John, it's a great question. And I tend to start with why and like, hey, how did we get here? And you, you and I both, I think, used to work for the bigger iron vendors back in the day. And we've seen the age of the big box in Silicon Valley. I don't know, I worked at Sun just across the aisle here, and basically we'd sell you a big box and then once or twice a year, you'd change the software on it. And so in a sense, like there was no chance to do user-oriented design or any of these things. Like you kind of got what you got and you hoped to scale it. And then modern applications have been much more of the age of, like what you might say, like Instagram or some of these modern apps that are very user-oriented, and how you're changing that user interface, that user design, might change every week based on user feedback. And you're constantly using big data to adjust that modern app experience. And so modern apps to me are inherently iterative and inherently scalable and amenable to change. And that's where the 12-factor application manifesto was written, a blog written a decade ago, basically saying here's how you can start to design apps to be constantly upgradable. So to me, modern apps: 12-factor is one part of it, Kubernetes compatible is another, but the real point is that they should be flexible, to be constantly iterated on, maybe at least once a week at a minimum, and designed and engineered to do that. And that takes them away from the old vertically scaled apps that kind of ran on 172 processors that you would infrequently update in the past. Those are what you might call like cloud apps. Is that helpful? >> Yeah, totally helpful. And by the way, those old iron vendors, they're now called the on-premise vendors and, you know, HPE, Dell and whatnot, IBM. But the thing about the cloud is that you have the true infrastructure as code happening. It's happened, it's happening, but faster and better, and greater goodness there. So you got DevSecOps, which is just DevOps with security. So DevSecOps is the standard now that everyone's shooting for. So what that means is I'm a developer, I just want to write code, the infrastructure's got to work for me. So things like Lambda functions are all great things. 
So assuming that there's going to be this now programmable layer for developers just to do stuff. What, in the context of that need, is the Tanzu Application Platform about, and how does it work? >> Yeah, that's a great question, John. So once you have Kubernetes, you have this abundance of programmable infrastructure resources. You can do almost anything with it, right? Like you can run machine learning workflows, you can run microservices, you can build APIs, you can import legacy apps to it, but it doesn't come out of the box with a set of application patterns and a set of controllers that are built for just, you know, modern apps. It comes with sort of a lot of flexibility and it expects you to understand a pretty broad surface area of APIs. So what we're doing is we're following in the footsteps of companies like Netflix and Uber, et cetera, all of which built kind of a developer platform on top of their Kubernetes infrastructure, to say, here's your more templatized path to production. So you don't have to configure everything. You're just changing the right parts of the application. And we kind of go through three steps. The first is an application template that says, here's how to build a streaming app on Kubernetes, click here, and you'll get it in your version control and we'll build a Kubernetes manifest for it. Two is an automated containerization, which is we'll take your app and auto create a container for it so that we know it's secure and you can't make a mistake. And then three is that it will auto detect your application and build a Kubernetes deployment for it so that you can deploy it to Kubernetes in a reliable way. We're basically trying to reduce the burden on the developer from having to understand everything about Kubernetes, to really understanding their domain of the application. Does that make sense? >> Yeah, and this kind of is in line with, you mentioned Netflix early on. They were one of the pioneers inside AWS, but they had the full hyperscaler developers. They had those early hardcore devs that are like unicorns. No, you can't hire these people. There just aren't enough of them in the world. So the world's becoming, I won't say democratization, that's an overused word, but what we're getting to is, if I get this right, you're saying you're going to eliminate the heavy lifting, the boring mundane stuff. >> Yeah, even at Netflix, as great as the developers they have are, they still built kind of a microservices or an application platform on top of AWS. And I think that's true of Kubernetes today, which, if you go to a Kubernetes conference, you'll often see, don't expose Kubernetes to developers. So Tanzu Application Platform starts to really solve that question. What do you expose to a developer when they want to consume Kubernetes? >> So let's ask you, I know you do a lot of customer visits, that's one of the jobs that makes you go out in the field, which you like doing, and working backwards from the customers has been in the DNA of VMware for years. What is the big narrative with the customers? What's their pain point? How has the pandemic shown them which projects are working and not working, and they want to come out of it with a growth strategy. VMware is now an independent company. You guys got the platform, what are the customers doing with it? >> Well, I'll give you one example. You know, I went out and I was chatting with a retailer, had seen their online sales go from one billion to like three billion during the pandemic. 
And they had been using kind of packaged shopping cart software before, like a basic online store that they bought and configured. And they realized they needed to get great at modern apps to keep up with customer demand. And so I would say in general, we've seen the drive, the need for modern apps and digital transformation is just really skyrocketing and everyone's paying attention to it. And then I think they're looking for a trusted partner and they're debating, do we build it all in-house or do we turn to a partner that can help us build this above the cloud? And I think for the people that want an enterprise trusted brand that'll have a lot of engineering talent behind it, there's been strong interest in Tanzu. And I think the big message we're trying to get out is that Tanzu can not only help you in your on-prem infrastructure, but it can also really help you on public cloud. And I think people are surprised by just how much. >> It's just the common thread I see, and that point is right on: these companies that don't digitize their business and build an application for their customer are going to get taken away by a startup. I mean, we've seen, it's so easy, if you don't have an app for that, you're out of business. I mean, this is like, no, no, it's not like maybe we should do the cloud, let's get proactive. Pretty much it's critical path now for companies. So I'm sure you agree with that, but what's the progress of most of the enterprises? What percentage do you think are having this realization? >> I would say at least 70, 80%, if not more, are there now, and 10 years ago, I used to kind of have to tell stories, like, you know, some startup's going to come along and they might disrupt you, and people kind of give you that like, yeah, yeah, yeah. You know, I get it. And now it's sort of like, hey, someone's already in our market with an API. Tell me how to build API-first apps, we need to compete. And that's the difference in the strategic conversation kind of post pandemic and post, you know, the last 10 years. >> All right, final question for you 'cause this is a really great thread. I've seen that having a web interface is not good enough, to your point. You got to have an application that they're engaging with, with all the modern capabilities, because the need's there, the expectation from the customers is there. What new things are you seeing beyond mobile that are coming down the pike for enterprises, obviously web to mobile, mobile to what? What's next? >> I think the thing that's interesting is there is a bigger push to say more and more of what we do should be an API, both internally, like, hey, other teams might want to consume some of these services as a well-formed API. I call it kind of like Stripe envy. Like you look at all these companies, they're like, Hey, Stripe's worth a hundred billion dollars now because they built a great API. What about us? And so I've seen a lot of industries, from automotive to of course financial services and others, that are saying, what if we gave our developers internally great APIs? And what if we also exposed those APIs externally? We could get a lot more rapid, fast-moving business than the traditional model we might've had in the past. >> It's interesting, you know, commoditizing and automating away infrastructure or software or capable workflows is actually normal. And if you can unify that in a way that's just better, I mean, you have a lower cost structure, but the value doesn't go away, right? 
So I think a lot of this comes down to, beauty's in the eye of the beholder. I mean, that's how DevSecOps works. I mean, it's agile, it's faster, but you still have to achieve the value, and the net is lower cost. What's your take on that? >> Well, I think you're dead right, John. And I think this is what was surprising about Stripe, is it was possible before Stripe to go out as a developer and kind of pull together a backend that did payments, but boy, it was hard. And I think that's the same thing with kind of this Tanzu Application Platform and the developer experience focus: people are realizing they can't hire enough developers. So this is the other thing that's happened during the pandemic and the great resignation, if you will, the war for talent is on. And you know, when I talk to a customer, like, we might be able to help you even 30% with your developer productivity, that's like one out of four developers you might not have to recruit. They're all in. And so I think that API-first model and the developer experience model are the same thing, which is like, it doesn't have to just be possible. It should be excellent. >> Well, great insight, learning a lot. Of course, we should move to theCube API and we'll plug into your applications. We're here in the studio with our API, James. Great to have you on. Final word, what's your take on this, the big story for re:Invent? If you had to summarize this year's re:Invent going into 2022, what would you say is happening in this industry right now? >> You know, I'm just super excited about the EKS market and how fast it's growing. We're seeing EKS in a lot of places. We're super excited about helping EKS customers scale. And I think it's great to see Amazon adopting that standard API from Kubernetes. And I think that's going to be just awesome, to watch the creativity the industry is going to have around it. >> Well, great insight, thanks for coming on. And again, we'll work on that Cube API for you. The virtualization of theCUBE is here. We're virtual, wish we could be in-person, and hope to see you in-person soon. Thanks for coming on. >> You too John, thank you. >> Okay, theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host. Thanks for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
James | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
James Watters | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
one billion | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Tanzu | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
three billion | QUANTITY | 0.99+ |
DevSecOps | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
Stripe | ORGANIZATION | 0.99+ |
Lambda | TITLE | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Sun | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
once | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.98+ |
first 10 years | QUANTITY | 0.98+ |
172 processors | QUANTITY | 0.98+ |
one example | QUANTITY | 0.98+ |
three | QUANTITY | 0.98+ |
pandemic | EVENT | 0.98+ |
AKS | ORGANIZATION | 0.98+ |
each year | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
HPE | ORGANIZATION | 0.97+ |
Heptio | ORGANIZATION | 0.97+ |
a decade ago | DATE | 0.97+ |
12 of factors | QUANTITY | 0.96+ |
three steps | QUANTITY | 0.96+ |
vSphere | TITLE | 0.96+ |
pandemics | EVENT | 0.96+ |
Two | QUANTITY | 0.96+ |
Amazon Ragu | ORGANIZATION | 0.96+ |
DevOps | TITLE | 0.95+ |
10 years ago | DATE | 0.95+ |
EKS | ORGANIZATION | 0.94+ |
first model | QUANTITY | 0.94+ |
today | DATE | 0.93+ |
re:Invent | EVENT | 0.93+ |
12 factor | QUANTITY | 0.93+ |
past year and a half | DATE | 0.91+ |
2021 | TITLE | 0.9+ |
Mathew Ericson, Commvault and David Ngo, Metallic | KubeCon + CloudNativeCon NA 2020
>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon North America 2020 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hi, and welcome back to theCUBE. I'm Joep Piscaer, I'm covering KubeCon CloudNativeCon here remotely from the Netherlands. And I'm joined by Commvault's Mathew Ericson, he's a Senior Product Manager, as well as David Ngo, Vice President of Metallic Products and Engineering, to talk about the cloud native space and data protection in that space. So both, welcome to the show. And I want to start off with kind of the why question, right? Why are we here obviously, but also why are we talking about data protection? I thought we had that figured out. So David, can you shed some light on how data protection is totally different in the cloud native container space? >> Sure, absolutely, thank you. I think the thing to keep in mind is that containers are an evolution and a revolution, actually, in the virtualization space and the cloud space. What we're seeing is that customers are turning more and more to SaaS based applications and infrastructure in order to modernize their data centers and their data estate in their compute environments. And when they do that, they're looking for solutions that match how they deploy their applications. And SaaS for us is an important area of that space. So, Metallic is Commvault's portfolio of SaaS-delivered and SaaS-native data protection capabilities and offerings, to allow customers to take advantage of the best of SaaS, that is easy to try, easy to buy, easy to deploy, no infrastructure required, and combine that with the technology and experience Commvault has built over the last 20 years, to deliver an enterprise-grade data protection solution delivered as SaaS. And so, with Kubernetes and deploying in the cloud and modernizing applications, I think that's very appealing to customers, to also be able to modernize their data protection. >> Yeah, so I get the SaaS part. I mean, SaaS is an important way of delivering services. It is, especially in the mid-market, something customers prefer, they want to have that simplicity, that easy onboarding, as well as the OPEX of paying a subscription fee instead of longer term fees. So the delivery model makes sense, it fits into the paradigm of making it simple, getting started easily. I get that, but Metallic isn't a traditional backup solution in that sense, right? It's not backing up necessarily just physical machines or just virtual machines. It has a relevance in the cloud native space. And the way I understand it, and please, if you can shed some light on that, Matt, is how is it different? What does it do that kind of makes it stand apart? >> Yeah, look, what we've found is the application developers can be in control now. So it's not like a traditional backup, that's what's changed. At this point, the application developer is free to create the infrastructure that he or she needs. And that freedom has meant that a bunch of stateful applications, the apps that we didn't think were going to live in Kubernetes, have made their way to Kubernetes, and they're making their way fast. So why is Metallic different? Because it's taking its lead from the developer. So it's using things like namespaces and label selectors. So basically take input from the developer on what information is important and needs to be protected, and then protecting it. 
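As a concrete illustration of taking the developer's namespace and label as the unit of protection, here is a small sketch using the official Kubernetes Python client: it inventories the Deployments and PersistentVolumeClaims carrying a given label, which is the kind of resource mapping a backup tool would then act on. The namespace and label are hypothetical, and this is not Metallic's actual implementation.

```python
"""List the resources a label selector pulls into a protection set (sketch).

Uses the official `kubernetes` Python client; the namespace and label
are illustrative, and this only inventories resources rather than
actually protecting any data.
"""
from kubernetes import client, config

NAMESPACE = "production"        # hypothetical namespace
SELECTOR = "app=checkout"       # hypothetical developer-supplied label


def main() -> None:
    config.load_kube_config()   # talks to whatever cluster kubectl points at
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    deployments = apps.list_namespaced_deployment(NAMESPACE, label_selector=SELECTOR)
    claims = core.list_namespaced_persistent_volume_claim(NAMESPACE, label_selector=SELECTOR)

    print("workloads in the protection set:")
    for d in deployments.items:
        print(f"  deployment/{d.metadata.name}")

    print("volumes whose data needs to be captured:")
    for pvc in claims.items:
        print(f"  pvc/{pvc.metadata.name} -> volume {pvc.spec.volume_name}")


if __name__ == "__main__":
    main()
```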
So it's your easy button to keep that Kubernetes development protected while you keep pace with the innovation within the organization. >> So you raise a valid point, cloud native has many advantages. It also has an extra challenge to account for, which is fragmentation, right? In the olden days, let's call it that, we had a virtual machine, maybe a couple dozen, that made up an application. And it was fairly easy to pinpoint the kind of, the sort of confines of an application. This is my application. But now with cloud native, application data can basically live anywhere. In a single cloud vendor, in many different cloud accounts, across different services, even across the public clouds themselves, like in a true multi-cloud scenario. And figuring out what is part of an application in that enormous fragmentation is a challenge I think is understated and underestimated in a lot of operational environments, with customers, with their applications in production. And that's where I think a product needs to figure out how to make sure an application is still backed up, is still protected in the way that is necessary for that given application. So I wonder how that works with Metallic. How do you kind of figure out what part of that enormous fragmentation is part of a single application? >> Yeah, so Metallic effectively integrates and speaks natively with the kube-apiserver. So it's taking its lead from the system of truth, which is the orchestrator, which is Kubernetes itself. So for example, if you say everything in your production namespace needs protection, every night or every four hours, whatever that may be, it steps out and asks Kubernetes what applications exist there. It then maps all of the associated API resources associated with that application, including the persistent volumes and persistent volume claims, and grabs the data from them as well. And that allows us to then reapply or reschedule that application either back to that original cluster or to another one for application mobility, where they are. >> So how do you make sure, it kind of, what's the central point where everything comes together for that given application? Is that something the developer does as part of their release process or as part of their CICD? How do you figure out what components are part of an application? >> That is definitely a big challenge in the industry today. So, today we use label selectors predominantly. We find developers have been educating us on what works for them. And they've said, "Our CICD system is going to label everything associated with this app, as namespaced, and then non-namespaced resources. So just here, take my label, grab everything under that, and you will be good." The reality is that doesn't work for every business. Some businesses drop things into a specific namespace. And then you've got the added challenge that all of your data doesn't actually just live in Kubernetes. What about your image registries? What about etcd? What about your source code control and CICD systems? So we're finding that even VMs as well are playing a part in this ecosystem right now, until applications can fully migrate. >> Yeah, and then let's zoom out on that a little bit. I mean, I think it's great that developers now kind of have flipped the paradigm where backup and data protection used to be something squarely in the ops domain. 
It's now made its way into the .dev domain, where it's become fairly easy to tag resources as application X, application Y, and then it automatically gets pulled into the backup based on policies. I mean, that's great, but let's zoom out a little bit and figure out, why is this happening? Why are developers even being put in a position of backing up their applications? So David, do you want to shed some light on that for me? >> Sure, I think data protection is always going to be a requirement and you'll have persistent data, right? There are other elements of applications that will always need to be protected, and data protection is often something that is an afterthought, but it's something that needs to be considered from the beginning. And Metallic, in being able to support deployments not just in the cloud but on-premises as well (we support any number of certified distributions of Kubernetes), gives you the flexibility to make sure that those apps and that data are protected no matter where they live. Being able to do that from a single pane of glass, being able to manage your Kubernetes deployments in different environments, is very important there. >> So let's dive into that a little bit. I hear you say, Certified Kubernetes Distributions. So what's kind of the common denominator we need to use Metallic in an environment? Because I hear On-Prem, I hear public cloud. So it seems to me like this is a pretty broad product in terms of what it supports in its scope. But what's the lowest common denominator, for instance, in the On-Prem environment? >> Sure, so we support all CNCF certified distributions of Kubernetes today. And in the cloud, we support Azure with AKS and AWS with EKS. So you can really use the one Metallic environment, the one interface, to be able to manage all of those environments. >> And so what about that storage underneath? Is that all through CSI? >> Yes. So we support CSI on the backend of the Kubernetes applications, and we can then protect all the data stored there. >> And so how does this, I mean, you acquired Hedvig about a year ago, I want to say. Not sure on the exact date, but you acquired Hedvig a little while ago. So how does that come into play in the Metallic offering? >> Sure, the Hedvig distributed storage platform is a fantastic platform on which to provision and scale Kubernetes applications and clusters. And having full integration with Kubernetes on the storage side, we support that natively, and it really builds on the value that Commvault can bring as a whole, with all of its offerings, as a platform to Kubernetes. >> All right. So, zooming out just a little more, I want to get a feel for the coverage of the portfolio of Commvault, as we're ushering into this cloud native era, as we're helping customers make that move and make that transition. What's the positioning of Metallic basically in the transformation customers are going through from On-Prem kind of lift and shift cloud into the cloud native space? >> Yeah, so with today's announcements, our hybrid cloud support and our hybrid cloud initiatives really help customers manage data wherever it lives, as I've mentioned earlier. Customers can start with workloads On-Prem and start protecting workloads that they either have migrated or are starting to build in the cloud natively, and really cover the gamut of infrastructure and hypervisors and file systems and storage locations amongst all of these locations. So from our perspective, we think that hybrid is here to stay, right? 
There are very few customers who are either going to be all on-premises or all in the cloud. Most customers have some requirement that keeps them in a hybrid configuration, and we see that being prevalent for quite some time. So supporting customers in their transformation, right? Where they are moving applications from on-premises to the cloud, either refactoring or lift and shift, or what have you. It's very important to them, it's very important for us to be able to support that motion. And we look forward to helping them along the way. >> Awesome, so one last question for Matt. I mean, Metallic is a set of servers, right? That means you run it, you operate it, you build it. So I wonder, is Metallic itself cloud native? How does it scale? What are kind of the big components that Metallic is made up of? >> So Metallic itself is absolutely cloud native. It is sitting inside Azure today. I won't go into all the details. In fact, David could probably provide far more detail there. But I think Metallic is cloud native with respect to the fact that it's speaking natively to your applications, your cloud instances, your VMs. And then it's giving you the agility and the ability to move them where you need them to be. And that's assisting people in that migration. So in the past, we helped people get from P to V. Now that they're virtualized, applications like Metallic can protect you wherever you are and get you to wherever you need to be, especially into your next cloud of choice. And there's always another cloud. What I'm interested to see, and what I'm hoping to see out of KubeCon, is how are we doing with KubeVirt and Kubernetes becoming the orchestrator of the data center. And how are we doing with some of these other projects, like application CRDs and hierarchical namespaces, that are truly going to build a multi-tenanted, software defined, distributed application ecosystem that Metallic can speak natively to via Kubernetes. >> Awesome. Well, thank you both for being with me here today. I certainly learned a ton about Metallic. I learned a lot about the challenges in cloud native, and that'll certainly be an area of development in the next couple of years. As you know, the CNCF will continue to support projects in this space, and vendors will continue to work in that space as well. So that's it for now. I'm Joep Piscaer, I'm covering KubeCon here remotely from the Netherlands. I will see you next time, thanks. (bright upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Joep Piscaer | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
David Ngo | PERSON | 0.99+ |
Metallic | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Netherlands | LOCATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
AKS | ORGANIZATION | 0.99+ |
Mathew Pearson | PERSON | 0.99+ |
today | DATE | 0.99+ |
CloudNativeCon | EVENT | 0.98+ |
both | QUANTITY | 0.98+ |
Metallic Products and Engineering | ORGANIZATION | 0.98+ |
CNCF | ORGANIZATION | 0.98+ |
Commvault | ORGANIZATION | 0.96+ |
Kubernetes | TITLE | 0.96+ |
.dev | OTHER | 0.95+ |
EKS | ORGANIZATION | 0.94+ |
Hedvig | ORGANIZATION | 0.94+ |
CloudNativeCon North America 2020 | EVENT | 0.93+ |
one last question | QUANTITY | 0.91+ |
single application | QUANTITY | 0.91+ |
single pane | QUANTITY | 0.9+ |
Kubernetes | ORGANIZATION | 0.89+ |
every four hours | QUANTITY | 0.88+ |
Mathew Ericson | PERSON | 0.87+ |
Azure | TITLE | 0.87+ |
NA 2020 | EVENT | 0.87+ |
KubeCon CloudNativeCon | EVENT | 0.85+ |
a couple dozen | QUANTITY | 0.81+ |
Kubernates | TITLE | 0.77+ |
next couple of years | DATE | 0.72+ |
about a year ago | DATE | 0.72+ |
single cloud | QUANTITY | 0.7+ |
one | QUANTITY | 0.69+ |
Vice President | PERSON | 0.69+ |
night | QUANTITY | 0.65+ |
KubeVirt | ORGANIZATION | 0.64+ |
one interface | QUANTITY | 0.64+ |
theCUBE | ORGANIZATION | 0.62+ |
Cloud Native | LOCATION | 0.59+ |
Commvault | PERSON | 0.59+ |
last 20 years | QUANTITY | 0.54+ |
ton | QUANTITY | 0.52+ |
Miska Kaipiainen, Mirantis | Mirantis Launchpad 2020
>> Announcer: From around the globe, it's theCUBE, with digital coverage at Mirantis Launchpad 2020, brought to you by Mirantis. >> Welcome back. And I'm Stu Miniman, and this is theCUBE's coverage of Mirantis Launchpad 2020. Of course we're spending a lot of time talking about Kubernetes. We're going to be digging in talking about some of the important developer tooling that Mirantis is helping to proliferate in the market, solve some real important challenges in the space. So happy to welcome to the program Miska Kaipiainen. He is the senior director of engineering with Mirantis. Miska, thanks so much for joining us. Welcome to theCUBE. >> Thank you so much. >> All right, so Miska, I notice you've got on the Kontena sweatshirt. You were the founder of the company, did some tools. One of the tools that you and your team helped create was Lens. You and your team joined Mirantis, and recently Lens was pulled in. So maybe if you could just give us a little bit about your background. You do some coding yourself, the team that you have there, and let's tee up the conversation, 'cause it's that Lens piece that we're going to spend a bunch of time talking about. >> Yeah, so the background of what we did, basically Kontena, we started back in 2015, and we a the focus on creating technologies around the container orchestration technologies to basically to make developer tooling that are very easy to use for the developers. So during the years at Kontena, we did many different types of products, and maybe the most interesting product that we created was Lens. And now really when we joined Mirantis in January this year, so we have been able to work on Lens, and actually, since the Lens was made open source, fully open source in March this year, so it's been really kind of picking up, and now Mirantis acquired the whole technology, so we can really start investing even more in the development. >> All right, so let's talk specifically about Lens. As I teed up at the beginning, we're talking about managing multiple clusters. Gosh, and I think back to 2015. It was early on. Most people were still learning about Docker, Docker swarms, Kubernetes, Mesos. There were a lot of fights over how orchestration would be done. A little bit different discussion about what developers were doing, how they scaled out configurations, how they manage those. So help us understand kind of that core, what Lens does, and how the product has matured and expanded over those last five years. >> Yeah, so over the last five years, so originally Lens was developed for our internal product. So like Mesosphere and Docker, and they all have their own orchestration technologies even before Kubernetes. And we also started working on our own orchestration technology. And I'm a huge believer in when we are dealing with very complex technologies, so if you can visualize it and make it kind of more interesting to look at, so it will kind of help with the adoption, and it's kind of more acceptable to the market. And that's why we started doing Lens. And over the years, we turned Lens to work with Kubernetes environments, and nowadays really Lens is very much loved by the Kubernetes developers, who are those people who need to deal with the Kubernetes clusters on a daily basis. So they are not necessarily those ops people who are creating those clusters , but they are the people who actually use those clusters. >> Well, of course that that general adoption is something that, you know, super important. 
You have some stats you can share on, you talk about the love of developers. You said it's open source, it's available on GitHub, but how many people are using it? What are some of those usage stats? >> Yeah, so it was interesting. So when we released Lens open source under the MIT license in March, since then we have been getting, in half a year, we have been getting 8,000 stargazers on GitHub. That is kind of mind-blowing, because we tried to create projects and tried to create anything that would get a lot of traction in the past, but truly, it totally happened just now after years of trying. So over the last six months, the adoption has been just amazing, and we have more than 50,000 users using Lens and the retention is great. People keep on coming back. So yeah, the numbers look very, very good for Lens, and we are just getting started. >> Yeah, well, it's something where this community definitely has huge growth, and anybody in this space remembers just the huge adoption of Docker, and of course the enterprise piece of Docker is now part of Mirantis. Inside those developers, help us understand a little bit more, what is it that has them really not only looking at the GitHubs, starring it, as you said, they're the stargazers. It's like a favorite, for those that aren't in the system. I've had a chance to look at some of the demos, and it seems rather straightforward. But if you could, just in your words, explain what it is that it solves for developers that otherwise they either had to do themselves or they had to cobble together a lot of different tools. We know developers out there. The wonderful thing is there's no shortage of tools to choose from. It's about the right tool that can do the right thing. >> Absolutely, absolutely. So Lens, we are calling it an IDE for a reason. So we are talking about an IDE for Kubernetes developers. And what it means actually is that we are taking all those necessary tools and technologies and packaging them, integrating them seamlessly together, for the purpose of making it easier for developers to deploy, operate, observe, and inspect their workloads that are running on Kubernetes clusters. And I think the main benefit that Lens will provide for these developers is that if you're a newcomer in the Kubernetes ecosystem, Lens gives you a very easy way to learn Kubernetes because it's so visual. And for more experienced users, it just radically improves, let's say, the speed of business and the way you can perform things with your clusters. >> So one of the pieces that Lens does is that multi-cluster management. So first of all, I believe, as you said, it's open source and can work with, is it any certified Kubernetes out there, whether it be from the public cloud, companies like VMware and Red Hat that have Kubernetes, of course, Mirantis has Kubernetes, too. And secondly, I think you teased out a little bit, but help us understand a little bit. Multi-cluster management is something that the big players, you hear Azure and Google Cloud talking about how they look at managing not only other environments, but oh yeah, we can have other clusters and we can help you manage it. I think that's more on the ops side of things, as opposed to, as you said, this is really a developer tool set.
>> Yeah, so of course, all the organizations, they want to most likely have some sort of centralized system where they can manage multiple clusters, and some companies provide systems for on-premises, and some public cloud vendors, they provide systems for provisioning those clusters on their own own systems. And then we have also the kind of multicloud management systems. Most of these technologies, they are really designed for the operations side, so how the IT administrations can manage these multiple clusters. So now if you look at the situation from the developer's perspective, they are now given access to certain number of clusters from different environments. And by the way, some of these clusters are also running on their local development environments on their laptops. So what Lens is doing is basically provides a unified user experience across all these clusters no matter what is the flavor of the Kubernetes. It can be the Minikube. It can be from AKS. It can be Mirantis Enterprise, Docker Enterprise offering, or whatever. So it kind of brings them all together and makes it very easy to navigate and go around and do your work. >> Yeah, well, that's, the promise of Kubernetes isn't that it just levels the playing field amongst everything. As I've talked to the founders of Kubernetes, people like Joe Beda said it's not a silver bullet. It's a thin layer. But that skillset is what's so important because there is a lot of difference between every platform they deal with. So as a developer, it's nice to have some tools that I can work across those environments. From a developer standpoint, I think it's on Windows, Linux, Mac, works across those environment. What do you hear from your customers? How are they using it? Is this something that they're like, oh hey, I can go make an adjustment on my mobile when I'm not necessarily in the office? Are we not quite there yet? >> Actually, it's kind of funny, because sometimes we hear these type of requests that we would like to have a mobile app version of Lens. I don't know how that would actually work in practice. So we haven't been doing anything on that front yet. I think still the most common use case is that developers, they are given access to clusters from somewhere and they are just desperately trying to find a kind of convenient way how to navigate around these different clusters and how to manage their workloads. And I think Lens is hitting the sweet spot in there with the ease of use. >> All right, so let me understand. It's been open sourced, yet Mirantis owns it. Is there a service or support? Does this tie into other products in the Mirantis portfolio? How do people get it? What do they need to, if anything, pay for it? And help us understand how this fits into the broader Mirantis story. >> Yes, so it's still kind of early days, so we just kind of announced that Lens is now part of Mirantis, let's say portfolio. So I must say that still the kind of main focus for us is around improving Lens and making it better for developers. So that's much more important than trying to think about the ways how potentially we could monetize this. So, but there are plans going ahead, going around for different ways how we can better support bigger enterprises who want to start using Lens in a big scale. >> Well, yeah, that's so important. Of course, developers, we need to lower the friction, help them adopt things fast. Miska, just get your general viewpoint, though. 
One of the big value propositions that Mirantis has is of course allowing enterprises to take advantage of these new types of solutions, especially today around Kubernetes. So help us understand from your standpoint the philosophy of what your team's helping to build and the customer engagements that you're having. >> Yes, so Mirantis, of course, has a broad portfolio of products, and many of those products, of course, are related to Kubernetes. And so we have many products which I'm also one of the leading development efforts around those. So some of the products are related to how to manage image repositories and registries. Some of them are related to how to handle the helm charts, which has basically become the defacto packaging format for Kubernetes applications. And we are kind of trying to bring all these different products and technologies together in a way that make it even more easy for developers then to access through Lens. So it's still a little bit work in progress, of course, since the Lens ecosystem is quite new, but we are on track there trying to make a beautiful one kind of experience for our customers. >> All right, well, final question I have for you. As you said, it's new there, but it gives a little taste as to feedback you're getting from the community. Anything we should be looking at on kind of the near to mid-term road map when it comes to Lens. >> Oh yeah, so we are just barely scratching the surface of the potential on what we can do with Lens. So one of the big features that we will be releasing still during this year in a couple of months time is going to be the extension API, which will allow all these cloud-native technology ecosystem vendors to bring their own technologies easily available and accessible through Lens. So it is possible for third parties to extend the user interface with their own kind of unique features and visualizations. And we are already actively working with certain partners to integrate their technologies through this extension API. So that's going to be huge. It's going to be game-changer. >> Well, the great thing about an open source project is people can go out, they can grab it now, they can give feedback, participate in the community. Miska, thank you so much for joining us and great to chat. >> Thank you for having me. Thank you. >> All right, stay with us for more coverage of Mirantis Launchpad 2020. I'm Stu Miniman and thank you for watching theCUBE. (bright music)
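For readers who want to see the mechanics behind the multi-cluster experience Miska describes above, here is a minimal sketch in Go using the standard Kubernetes client libraries. It is not Lens code; it simply walks whatever contexts happen to be in your local kubeconfig and counts the pods in each cluster, which is the kubeconfig-context plumbing that a tool like Lens builds its unified view on.

```go
// Minimal sketch: enumerate kubeconfig contexts and list workloads per cluster.
// Assumes only a local kubeconfig (KUBECONFIG or ~/.kube/config); the context
// names are whatever you have configured, e.g. Minikube, AKS, EKS, on-prem.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	kubeconfig, err := rules.Load()
	if err != nil {
		panic(err)
	}

	for name := range kubeconfig.Contexts {
		// Build a REST config for this specific context, regardless of the
		// Kubernetes flavor it points at.
		restCfg, err := clientcmd.NewNonInteractiveClientConfig(
			*kubeconfig, name, &clientcmd.ConfigOverrides{}, rules).ClientConfig()
		if err != nil {
			fmt.Printf("%s: cannot build client config: %v\n", name, err)
			continue
		}
		cs, err := kubernetes.NewForConfig(restCfg)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Printf("%s: cannot list pods: %v\n", name, err)
			continue
		}
		fmt.Printf("context %s: %d pods across all namespaces\n", name, len(pods.Items))
	}
}
```

The same loop treats a laptop Minikube context and a managed-cloud context identically, which is the sense in which the flavor of Kubernetes stops mattering to the developer.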
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Miska | PERSON | 0.99+ |
Miska Kaipiainen | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
March | DATE | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Joe Beda | PERSON | 0.99+ |
Mirantis | ORGANIZATION | 0.99+ |
Kontena | ORGANIZATION | 0.99+ |
more than 50,000 users | QUANTITY | 0.99+ |
Lens | TITLE | 0.98+ |
Linux | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
January this year | DATE | 0.98+ |
Windows | TITLE | 0.98+ |
March this year | DATE | 0.98+ |
Kubernetes | TITLE | 0.98+ |
today | DATE | 0.97+ |
half a year | QUANTITY | 0.97+ |
GitHub | ORGANIZATION | 0.96+ |
One | QUANTITY | 0.96+ |
Red Hat | ORGANIZATION | 0.96+ |
AKS | ORGANIZATION | 0.95+ |
theCUBE | ORGANIZATION | 0.94+ |
secondly | QUANTITY | 0.93+ |
Docker | TITLE | 0.92+ |
Mirantis Launchpad 2020 | TITLE | 0.92+ |
Mesosphere | TITLE | 0.91+ |
VMware | ORGANIZATION | 0.91+ |
GitHubs | ORGANIZATION | 0.9+ |
Azure | TITLE | 0.9+ |
8,000 stargazers | QUANTITY | 0.86+ |
last six months | DATE | 0.82+ |
this year | DATE | 0.82+ |
last five years | DATE | 0.8+ |
Lens | ORGANIZATION | 0.77+ |
MIT | ORGANIZATION | 0.77+ |
Kubernetes | ORGANIZATION | 0.76+ |
Mac | TITLE | 0.74+ |
first | QUANTITY | 0.71+ |
Google Cloud | TITLE | 0.71+ |
Shaun O'Meara, Mirantis | Mirantis Launchpad 2020
>> Narrator: From around the globe, its theCUBE with digital coverage of Mirantis Launchpad 2020 brought to you by Mirantis . >> Welcome back, I'm Stu Miniman and this is theCUBE coverage of Mirantis Launchpad 2020, really looking at how Mirantis Docker Enterprise are coming together, changes happening in the field and to help us dig into that customer and product discussion. Happy to welcome to the program, Shaun O'Meara. He is the global Field Chief Technology Officer with Mirantis coming to us from Germany. Shaun thanks so much for joining us. >> Thanks for having me. >> All right, so let's start with the customers. I always love talking to the Field CTOs you're out there. You're talking strategy, you're getting into some of the architecture, lots of customers, probably still, trying to figure out that whole cloud native containerization, Kubernetes and modernization piece. So when you talk to your customers, what are some of their biggest challenges they're facing and those main discussion points that bring them to talk to Mirantis. >> Very good question, I think you've just laid it out yourself in many ways. It's complexity our customers are dealing with more and more change, more and more options, and it's driving complexity in their environments, and they're looking for ways to deal with that complexity and to allow more and more access and reduce barriers to getting applications and getting tools to market. And if we look at it and we look at the way the world is going today, we have multiple cloud environments. We have every single developer on the face of the planet wants to use different tools, different ways to build applications that don't want to be dictated to. Now, if you turn that around and you look at what operators have to deal with, it's just more and more complexity. Ultimately, that complexity is growing and we're looking for ways to make it easier, simpler, and subsequently increase the speed of getting applications to market for our customers. >> Yeah, You know we talk a bit about some of the macro challenges that customers have. What talk you kind of teed up a little bit, the operators and the developers. I remember a couple of years ago, I had the opportunity to interview Solomon Hykes and of course the founder from Docker. And there was that talk of well, containerization, it's this wonderful thing for developers. And he's like, hold on Stu we actually, really started looking at this or the operators we want that unit of operation to be closer to the application. So it should be simpler, it used to be okay, how many different applications do they have on a server or VMs all over the place and containers I could really have this microservice or this application is a container. So there is some operational simplicity there, but how is that dynamic inside the customer? Of course, we've seen the growth and the importance and the embracing of developers, but there's still the DevOps adoption and we'd love to be able to say one of these years that, oh, we don't have silos anymore and everybody works together and we're all on the same page. >> Oh yeah, the reality is in the big enterprise companies and the companies that are building applications for market today, your big financial services companies, there's still a very clear separation between operators and developers. 
A lot of that is driven by legislation, a lot of that's driven by just old fashioned thinking in many ways, but developers are starting to have a lot more influence on what applications are used and the infrastructure. We just see it with the rise of AWS, and all the contenders to AWS in the form of Azure and Google. Developers are starting to have a lot more power over that decision, but they're still highly dependent on operators to deliver those platforms that they use, and to make sure that the platforms that they're running their applications on top of are stable and run well in production situations. There's a big difference between building something on your laptop in one or two instances, and then trying to push it out to a massive scalable cloud platform. And I think those are the areas where we can have a lot of impact, and that's what we are building our tools for at the moment. >> Well, great. Let's dig into those tools a little bit, as I said, at the beginning, we're familiar, Mirantis had the Mirantis Cloud Platform for a few years, big embrace of Kubernetes, and then Docker Enterprise, it comes into the mix. So help me understand a little bit, what is kind of the solution set or portfolio? How does Mirantis present that today? >> Yeah, well, it's been an interesting eight, nine months now of the whole process since we came together with the Docker Enterprise business, a couple of key areas. So if we look at what MCP was, and MCP is still here today, apparently, it focuses on delivering all the components necessary to have an effective cloud platform. So lifecycle management, lifecycle management of all those underlying components, which in their own right are an extremely complex set of software. What we focused on there was understanding, in enterprise infrastructure, the right way to do that. What we bring in from the Docker Enterprise business is that they have a scalable, large, well deployed container platform, and many thousands of users across the world at all sorts of different scales and in production systems. We are merging that knowledge that we have around infrastructure, infrastructure management, and simplifying access to infrastructure with this platform that provides for all that application hosting, provides for all the control of containers, plus all the security components around the container lifecycle. And delivering it in such a way that you can choose your underlying preference. So we're no longer looking to lock you in to say, you have to go on-prem, you have to go into cloud. We're saying, we'll give you the choice, but we'll also give you a standardized platform for your developers across all of those potential infrastructure environments, so I'll use it again, public cloud, private cloud, bare metal on-premise, or your options like the VMwares of this world. By consolidating all of that into one platform, we're giving you, as a developer, the ability to write applications that'll run anywhere and, sorry, go on. >> No, please finish up, I've just got to follow, yeah. >> But that simplicity, and that simple choice across all those platforms, essentially drives speed. It takes away the typical barriers that we're seeing in our customers. We hear it all the time: we love Docker Enterprise because it solves the problem of getting containers, it solves the problem of securing containers, but it takes four teams to deploy it. Same for MCP. What we're saying is, we can now do that in a day and provide it as self-service.
So you can deploy a brand new container cloud in minutes rather than days or weeks. And that's one of the biggest changes that we're bringing to the product. >> Yeah, so absolutely, what we hear from customers is that agility, that speed that you talk about is the imperative, especially talking about 2020, everybody has had to readjust, often accelerating some of the plans they had, to meet the realities of what we have today. What I want to understand is, when you talk about that single platform being able to be in any environment, oftentimes there's a misnomer that it's about portability. Most customers we talked to, they're not moving things lots of places. They do want that operational consistency wherever they go. At the same time, you mentioned the rise of AWS and the hyperscalers, and often they're now going to have to manage multiple clusters, it's not that I choose one Kubernetes and I use it anywhere, but I might be using AKS, Azure had an early version of it, of course, Amazon has a couple of options now for enterprises. So help us understand how the Mirantis solutions fit with the clouds, leverage cloud services, and if I have multiple clusters, you even mentioned VMware, I might have a VMware cluster, have something from Mirantis, have something from one of the hyperscalers. Is that what you're seeing from your customers today? And how do they want to manage that going forward? Because we understand this is still a maturing space. >> So I mean, that's exactly the point. What we're seeing from our customers is that they have policies to go cloud first. They still have a lot of infrastructure on-premise. The question is which cloud, which cloud suits their needs in which region. Now, all of a sudden you've got a risk management policy from an organization that says, well, I have to go to Azure and I have to go to AWS, that's using them as examples. The deployment and management of those two platforms is completely different. Just the learning curve for a developer who wants to focus on writing code, to build a platform on top of AWS, is fairly extensive. Yes, it's easy to get started, but if you really want to deal with the fine print of how to run something in production, it's not that simple. There are potentially a thousand different buttons you can click when deploying an instance on Amazon. So what we're saying is, instead of you having to deal with that, we're going to abstract that pain from you. We're going to say we'll deploy Docker Enterprise on top of Amazon, on top of Google, on top of Azure, on top of your VMware cluster, give you a consistent interface to that, a consistent set of tools across all those platforms, still consuming those platforms as you would, but solving all those dependency problems. To set up a Kube cluster on top of Amazon, I'm not talking about an AKS or something like that right now, but the sort of Kube cluster where I have to set up load balancers, I have to set up networks, I have to set up monitors, I have to set up the instances, I have to deploy Kubernetes, and then I'm only getting started. I still haven't integrated that with my corporate identity management. We're saying we'll bring all that together for them, we are bringing all that together in the form of Docker Enterprise Container Cloud. >> Yeah, definitely, as you said, we need more simplicity here. The promise of cloud is supposed to be simplicity, and now of course we have the paradox of choice when it comes there. >> Yeah.
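To ground what Shaun means by a consistent interface across Amazon, Google, Azure, VMware, and bare metal, here is a hedged sketch in Go: one Deployment spec pushed unchanged to several clusters selected by kubeconfig context. The context names and image are placeholder assumptions, not anything shipped by Mirantis, and a product like Docker Enterprise Container Cloud layers provisioning, identity, and lifecycle management on top of this, but the portability of the spec itself is the foundation being described.

```go
// Sketch: apply one identical Deployment to several clusters, whatever cloud
// or datacenter they run in. Context names ("aws-prod", "azure-prod",
// "onprem-lab") and the nginx image are illustrative placeholders.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

// deployment returns the single spec shared by every target cluster.
func deployment() *appsv1.Deployment {
	labels := map[string]string{"app": "web"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "web", Image: "nginx:1.19"}},
				},
			},
		},
	}
}

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	kubeconfig, err := rules.Load()
	if err != nil {
		panic(err)
	}
	for _, ctxName := range []string{"aws-prod", "azure-prod", "onprem-lab"} {
		restCfg, err := clientcmd.NewNonInteractiveClientConfig(
			*kubeconfig, ctxName, &clientcmd.ConfigOverrides{}, rules).ClientConfig()
		if err != nil {
			fmt.Printf("%s: %v\n", ctxName, err)
			continue
		}
		cs, err := kubernetes.NewForConfig(restCfg)
		if err != nil {
			fmt.Printf("%s: %v\n", ctxName, err)
			continue
		}
		_, err = cs.AppsV1().Deployments("default").
			Create(context.TODO(), deployment(), metav1.CreateOptions{})
		fmt.Printf("%s: create deployment: %v\n", ctxName, err)
	}
}
```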
>> One of the other things we've seen rapid change in over the last year or so is that many of the offerings out there are now managed services. So as you said, I don't want to have to build all of those pieces. I want to just be able to go to somebody. What are you hearing from your customers? How do managed services fit into what Mirantis is doing? >> Great, well, what we're hearing from customers is they want the pain to go away. The answer to that could be delivered through software that's really easy to use and doesn't set up any barriers and gets them started fast, which is where we focus from a product perspective. Mirantis also has a strong managed services arm, so we've been doing managed services for some of the biggest enterprises in the world for MCP products for many years. We've brought those teams forward and we're now offering those same managed services on top of all of our platforms. So Docker Enterprise Container Cloud, we'll deploy it for you, we'll manage it for you. We'll handle all the dependencies around getting container cloud up and running within your organization, and then offer you that hands-on service. So when you build clusters, when you want clusters that are much longer lived, we can handle all the extra detail that goes around those. Short term, if you just want quick clusters for your developers, easy access, you still have that as part of the service. So we're focusing on how fast can we get you started, how fast can you get applications to market, not putting any infrastructure barriers in the way, or where there are traditional infrastructure barriers, finding ways around them. And that's still acceptable to those enterprise operators who still have a list as long as my arm, probably twice as long as my arm, of fine print that they have to comply with for everything under the sun, the regulators, et cetera. >> Yeah, Shaun, since you are based in Europe, I'm wondering if you can give us a little bit of the perspective on cloud adoption there. Here in North America, the discussion point has been for many years just that massive movement to public cloud, of course governance is a key issue in Europe, and on top of that also the COVID impact, anecdotally there's lots of discussion of acceleration of public cloud. So what's the reality on the ground? How do your enterprise customers look at public cloud? How fast or slow are they moving, and what is the 2020 impact? >> So interestingly, if you'd asked me this question six months ago, seven months ago, pre-COVID, I would have said public cloud is growing. People are still building some small private clouds for very unique use cases. Looking at where our customers are now, all of a sudden there's a risk balance. So they're driving into public cloud, but they want those public clouds to be with European companies and European operators, or at least to have some level of security. You know, recently the European community canceled the Privacy Shield legislation that was in place between the US and Europe, which meant all of a sudden a lot of companies in Europe had to look for other places to store their data, or had to deal with different rules around storing the data that they may have put in the US previously. What we're seeing customers saying is, we have to go multicloud. The drive is, we can no longer accept one-vendor risk. We want to remove that risk, we will still have equipment on-premise.
So on-prem equipment is still important to us, but as a backup to the public cloud, and as a way to secure our data and the mechanisms that we own and can touch and control. That's the operator's view. If we talk to developers, people writing applications, if they are not forced to, they will go public cloud almost every time. It's just easier for them. And that's really what we're, that's really the challenge that we're also trying to focus on here. >> Yeah, I'm curious, are there any European cloud providers that are rising to the top the big three have such a large megaphone that they kind of drown out a lot of discussion and understand that there's pockets and many local suppliers, and of course thousands of kind of cloud service providers out there, but any ones that are good partners of Mirantis or ones that you're hearing. >> There are a couple. >> Yeah. >> Sorry, there are a couple, I dunno if I can mention them here, but there's some great ones providing very unique businesses, places like the Netherlands, very unique, very focused business where they're taking advantage of specific laws within, well, the Netherlands and Germany, there's another company that we're working very closely with that feels that they can do a much more affordable, much more hands on service or cloud. So their cloud experience provide everything developers want, but at the same time handle those operator requirements and those enterprise requirements within Germany. So focusing on the GDPR laws, focusing on German technology laws, which are very complex, very much focused on privacy. And there are a few unique companies like that across Europe, I know of one in Italy, there's a company that focuses on providing cloud services to the EU government themselves, who we've worked with in the past. So yeah, but as you say, it's the big three, they're growing, they're dealing with those challenges. We see them as resources, we see them as partners to what we're trying to achieve. We certainly not trying to compete with them at that level. >> Absolutely, all right, Shaun final question I have for you, tell us what your customers see as the real differentiation, what draws them to Mirantis and what we should expect to see over the coming months? >> So I think choice is a key differentiator. We're offering choice, we're not trying to tell you you can only use one cloud platform or one cloud provider. And that's extremely important as one of the key differentiators. I've mentioned this many times, simplicity, driving simplicity at all levels, from the operator through to the developer, to the consumer of the cloud, let's make it easy. Let's truly reduce the friction to getting started, all right that's one of the really key focus areas for us and that's something we talk about all the time in every meeting and we question ourselves constantly is, does this make it easier? And then security is a major component for us. We really focus on security as part of our tool sets, providing that standardized platform and that standardized security across all of these environments, and ultimately reducing the complexity. >> Shaun O'Meara, thank you so much. Great to hear that the real customer interaction and what they're dealing with today. >> Sure, thank you very much. >> Be sure to check out the tracks for developers, for infrastructure as well as all the rest theCUBE interviews on the Mirantis Launchpad site of course powered by CUBE365. I'm Stu Miniman and thank you for watching theCUBE. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Shaun | PERSON | 0.99+ |
Shaun O'Meara | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
Germany | LOCATION | 0.99+ |
North America | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Netherlands | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
2020 | DATE | 0.99+ |
Mirantis | ORGANIZATION | 0.99+ |
two platforms | QUANTITY | 0.99+ |
Solomon Hykes | PERSON | 0.99+ |
eight | QUANTITY | 0.99+ |
six months ago | DATE | 0.99+ |
seven months ago | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
one platform | QUANTITY | 0.99+ |
twice | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
GDPR | TITLE | 0.98+ |
ORGANIZATION | 0.98+ | |
today | DATE | 0.97+ |
four teams | QUANTITY | 0.97+ |
CUBE365 | ORGANIZATION | 0.97+ |
one cloud platform | QUANTITY | 0.97+ |
two instances | QUANTITY | 0.97+ |
EU government | ORGANIZATION | 0.96+ |
nine months | QUANTITY | 0.96+ |
European | OTHER | 0.96+ |
Mirantis Docker Enterprise | ORGANIZATION | 0.96+ |
single platform | QUANTITY | 0.96+ |
a day | QUANTITY | 0.96+ |
Docker Enterprise | ORGANIZATION | 0.94+ |
One | QUANTITY | 0.94+ |
one cloud provider | QUANTITY | 0.92+ |
Azure | TITLE | 0.91+ |
Docker | ORGANIZATION | 0.9+ |
couple of years ago | DATE | 0.88+ |
theCUBE | ORGANIZATION | 0.88+ |
Kubernetes | ORGANIZATION | 0.85+ |
first | QUANTITY | 0.83+ |
a couple | QUANTITY | 0.83+ |
a lot of companies | QUANTITY | 0.82+ |
thousands of users | QUANTITY | 0.77+ |
MCP | TITLE | 0.77+ |
Sheng Liang, Rancher Labs | CUBE Conversation, July 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman coming to you from our Boston area studio and this is a special CUBE Conversation, we always love talking to startups around the industry, understanding how they're creating innovation, doing new things out there, and oftentimes one of the exits for those companies is they do get acquired, and happy to welcome back to the program one of our CUBE alumni, Sheng Liang, he is the cofounder and CEO of Rancher, today there was an announcement for a definitive acquisition of SUSE, who our audience will know well, we were at SUSECON, so Sheng, first of all, thank you for joining us, and congratulations to you and the team on joining SUSE here in the near future. >> Thank you, Stu, I'm glad to be here. >> All right, so Sheng, why don't you give our audience a little bit of context, so I've known Rancher since the very early days, I knew Rancher before most people had heard the word Kubernetes, it was about containerization, it was about helping customers, there was that cattles versus pets, so that Rancher analogy was, hey, we're going to be your rancher and help you deal with that sprawl and all of those pieces out there, where you don't want to know them by name and the like, so help us understand how what was announced today is meeting along the journey that you set out for with Rancher. >> Absolutely, so SUSE is the largest independent opensource software company in the world, and they're a leader in enterprise Linux. Today they announced they have signed a definitive agreement to acquire Rancher, so we started Rancher about six years ago, as Stu said, to really build the next generation enterprise compute platform. And in the beginning, we thought we're going to just base our technology based on Docker containers, but pretty soon Kubernetes was just clearly becoming an industry standard, so Rancher actually became the most widely used enterprise Kubernetes platform, so really with the combination of Rancher and SUSE going forward, we're going to be able to supply the enterprise container platform of choice for lots and lots of customers out there. >> Yeah, just for our audience that might not be as familiar with Rancher, why don't you give us your position in where we are with the Kubernetes landscape, I've talked about many times on theCUBE, a few years ago it was all about "Hey, are we going to have some distribution war?" Rancher has an option in that space, but today it's multicloud, Rancher works with all of the cloud Kubernetes versions, so what is it that Rancher does uniquely, and of course as you mentioned, opensource is a key piece of what you're doing. >> Exactly, Stu, thanks for the question. So this is really a good lead-up into describing what Rancher does, and some of the industry dynamics, and the great opportunity we see with SUSE. 
So many of you, I'm sure, have heard about Kubernetes. Kubernetes is this container orchestration platform that basically works everywhere, and you can deploy all kinds of applications and run these applications through Kubernetes, and it doesn't really matter, fundamentally, what infrastructure you use anymore, so the great thing about Kubernetes is whether you deploy your apps on AWS or on Azure, or on on-premise bare metal, or vSphere clusters, or out there in IoT gateways and 5G base stations and surveillance cameras, literally everywhere, Kubernetes will run. So in our world I like to think about Kubernetes as the standard for compute. If you kind of make the analogy, what's the standard of networking, that's TCP/IP, so networking used to be very different, decades ago, there used to be different kinds of networking and at best you had a local area network for a small number of computers to talk to each other, but today with TCP/IP as a standard, we have the internet, we have Cisco, we have Google, we have Amazon. So I really think, as successful as cloud computing has been, and as much impact as it has had to actually push digital transformation and app modernization forward, a lot of organizations are kind of stuck between their desire to take advantage of a cloud provider, one specific cloud provider, all the bells and whistles, versus any cloud provider, because not a single cloud provider can actually supply infrastructure for everything that a large enterprise would need. You may be in a country, you may be in some remote locations, you may be in your own private data center, so the market really, really demands a standard form of compute infrastructure, and that turned out to be Kubernetes. It's true, Kubernetes started as a way Google internally ran their containers, but where it really hit its stride was a couple years ago, when people started realizing that for once, compute could be standardized, and that's where Rancher came in. Rancher is a Kubernetes management platform. We help organizations tie together all of their Kubernetes clusters, regardless of where they are, and you can see this is a very natural evolution for organizations who embark on this Kubernetes journey, and by definition Rancher has to be open, because, this is such a strategic piece of software, who would want their single point of control for all compute to be actually closed and proprietary? Rancher is 100% opensource, and not only that, Rancher works with everyone. It really doesn't matter who implements Kubernetes for you, I mean Rancher could implement Kubernetes for you, we have a Kubernetes distro as well, we're actually particularly well-known for a Kubernetes distro designed for resource-constrained deployments on the edge, called K3s, some of you might have heard about it, but really, we don't care, I mean we work with the upstream Kubernetes distro, any CNCF-compliant Kubernetes distro, or one of many, many other popular cloud hosted Kubernetes services like EKS, GKE, AKS, and with Rancher, enterprises can start to treat all of these Kubernetes clusters as fungible resources, as cattle, so that is basically our vision, and they can focus on modernizing their applications, running their applications reliably, and that's really what Rancher's about. >> Okay, so Sheng, being acquired by SUSE, I'd love to hear a little bit, what does this mean for the product, what does it mean for your customers, what does it mean for you personally?
According to Crunchbase, you'd raised 95 million dollars, as you said, over the six years. It's reported by CNBC that the acquisition's in the ballpark of 600 to 700 million, so that would be about a 6X multiple over what was invested, not sure if you can comment on the finances, and would love to hear what this means going forward for Rancher and its ecosystem. >> Yeah, actually, I know there's tons of rumors going around, but on the acquisition price, SUSE's decided not to disclose the acquisition price, so I'm not going to comment on that. Rancher's been a very cash-efficient business, there's been no shortage of funding, but even of the 95 million dollars that we raised, we really haven't spent the majority of it, we probably spent just about a third of the money we raised, in fact our last round of fundraising was just three, four months ago, it was a 40 million dollar series D, and we didn't even need that, I mean we could've just continued with the series C money that we raised a couple years ago, which we'd barely started spending either. So the great thing about Rancher's business is, because we're such a product-driven company, with opensource software, you develop a unique product that actually solves a real problem, and then there's just no barrier to adoption, so this stuff just spreads organically, people download and install, and then they put it in mission-critical production. Then they seek us out for a commercial subscription, and the main value they're getting out of the commercial subscription is really the confidence that they can actually rely on the software to power their mission-critical workloads, so once they really start using Rancher, they recognize the value that Rancher as an organization provides, so this business model's worked out really well for us. The vast majority of our deals are based on inbound leads, and that's why we've been so efficient, and that's, I think, one of the things that really attracted SUSE as well. It's just, these days you don't want a business where you have to do heavyweight, heavy duty, old fashioned enterprise (indistinct), because that's really expensive, and when so much of that value is built through some kind of bundling or lock-in, sooner or later customers know better, right? They want to get away. So we really wanted to provide an opensource, and open, more important than opensource is actually open, a lot of people don't realize there's actually lots of opensource software in the market that's not really quite open, that might seem like a contradiction, but you can have opensource software which you eventually package in a way that you don't even make the source code available easily, you don't make it easy to rebuild the stuff, so Rancher is truly open and opensource, people just download the opensource software and run it; the day they need it, our enterprise subscription will support them, and the day they don't need it, they will actually continue to run the same piece of software, and we'd be happy to continue to provide them with patches and security fixes, so as an organization we really have to provide that continuous value, and it's worked out really well, because this is such an important piece of software.
SUSE has this model that I saw on their website, and it really appeals to us, it's called the power of many, so SUSE, turns out they not only completely understand and buy into our commitment to open and opensource, but they're completely open in terms of supporting the whole ecosystem, the software stack, that not only they produce, but their partners produce, in many cases even their competitors produce, so that kind of mentality really resonated with us. >> Yeah, so Sheng, you wrote in the article announcing the acquisition that when the deal closes, you'll be running engineering and innovation inside of SUSE, if I remember right, Thomas Di Giacomo has a similar title to that right now in SUSE, course Melissa Di Donato is the CEO of SUSE. Of course the comparison that everyone will have is you are now the OpenShift to SUSE. You're no stranger to OpenShift, Rancher competes against RedHat OpenShift out on the market. I wonder if you could share a little bit, what do you see in your customer base for people out there that says "Hey, how should I think of Rancher "compared to what RedHat's been doing with OpenShift?" >> Yeah, I mean I think RedHat did a lot of good things for opensource, for Linux, for Kubernetes, and for the community, OpenShift being primarily a Kubernetes distro and on top of that, RedHat built a number of enhanced capabilities, but at the end of the day, we don't believe OpenShift by itself actually solves the kind of problem we're seeing with customers today, and that's why as much investment has gone into OpenShift, we just see no slowdown, in fact an acceleration of demand of Rancher, so we don't, Rancher always thrived by being different, and the nice thing about SUSE being a independent company, as opposed to a part of a much larger organization like RedHat, is where we're going to be as an organization 100% focused on bringing the best experience to customers, and solve customers' business problems, as they transform their legacy application suite into cloud-native infrastructure. So I think the opportunity is so large, and there's going to be enough market there for multiple players, but we measure our success by how many people, how much adoption we're actually getting out of our software, and I said in the beginning, Rancher is the most widely used enterprise Kubernetes platform, and out of that, what real value we're delivering to our customers, and I think we solve those problems, we'll be able to build a fantastic business with SUSE. >> Excellent. Sheng, I'm wondering if we could just look back a little bit, you're no stranger to acquisitions, remember back when Cloud.com was acquired by Citrix, back when we had the stack wars between CloudStack and OpenStack and the like, I'm curious what lessons you learned having gone through that, that you took away, and prepared you for what you're doing here, and how you might do things a little bit differently, with the SUSE acquisition. >> Yeah, my experience with Cloud.com acquired by Citrix was very good, in fact, and a lot of times, you really got to figure out a way to adapt to actually make sure that Rancher as a standalone business, or back then, Cloud.com was a standalone business, how are they actually fitting to the acquirer's business as a whole? 
So when Cloud.com was acquired, it was pretty clear, as attractive as the CloudStack business was, really the bigger prize for Citrix was to actually modernize and cloudify their desktop business, which absolutely was like a two billion dollar business, growing to three billion dollars back then, I think it's even bigger now, with now everyone working remote. So we at Citrix, we not only continued to grow the CloudStack business, but more importantly, one of the things I'm the most proud of is we really played a crucial role in modernizing and cloudifying the Citrix mainline business. So this time around, I think the alignment between what Rancher does and what SUSE does is even more apparent, obviously, until the deal actually closes, we're not really allowed to actually plan or execute on some of the integration synergies, but at a higher level, I don't see any difficulty for SUSE to be able to effectively market, and service their global base of customers, using the Rancher technology, and it's just that the synergy between Kubernetes and Linux is so much stronger, and in some sense, I think I've used this term before, Kubernetes is almost like the new Linux, so it just seems like a very natural place for SUSE to evolve into anyway, so I'm very, very bullish about the potential synergy with the acquisition, I just can't wait to roll up my sleeves and get going as soon as the deal closes. >> All right, well Sheng, thank you so much for joining us, absolutely from our standpoint, we look at it, it's a natural fit of what Rancher does into SUSE, as you stated. The opensource vision, the community, and the customer focus absolutely align, so best of luck with the integration, looking forward to seeing you when you have your new role and hearing more about Rancher's journey, now part of SUSE. Thanks for joining us. >> Thank you Stu, it's always great talking to you. >> All right, and be sure, we'll definitely catch up with Rancher's team at the KubeCon + CloudNativeCon European show, which is of course virtual, as well as many other events down the road. I'm Stu Miniman, and thank you for watching theCUBE.
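Sheng's point that it doesn't matter who implements Kubernetes for you, whether that's K3s on the edge, an upstream distro, or EKS, GKE, and AKS, comes down to API conformance. As a small, hedged illustration, assuming only a local kubeconfig, the Go sketch below asks whatever cluster the current context points at for its server version and node details; the same calls work unchanged against any CNCF-conformant distro, which is what lets a management layer like Rancher treat clusters as interchangeable. No Rancher code is involved here.

```go
// Sketch: the same discovery calls work against any conformant Kubernetes,
// whether KUBECONFIG points at K3s, an upstream cluster, or a hosted service.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the current context of the local kubeconfig, wherever it points.
	restCfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(restCfg)
	if err != nil {
		panic(err)
	}

	version, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("server version %s, %d nodes\n", version.GitVersion, len(nodes.Items))
	for _, n := range nodes.Items {
		// OS image and kubelet version hint at the underlying platform, but the
		// API surface queried above is identical everywhere.
		fmt.Printf("  %s: %s, kubelet %s\n",
			n.Name, n.Status.NodeInfo.OSImage, n.Status.NodeInfo.KubeletVersion)
	}
}
```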
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Citrix | ORGANIZATION | 0.99+ |
Melissa Di Donato | PERSON | 0.99+ |
Thomas Di Giacomo | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Sheng Liang | PERSON | 0.99+ |
SUSE | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
CNBC | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
three billion dollars | QUANTITY | 0.99+ |
Rancher | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Sheng | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Sheng Liang | PERSON | 0.99+ |
600 | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
95 million dollars | QUANTITY | 0.99+ |
July 2020 | DATE | 0.99+ |
Stu | PERSON | 0.99+ |
KubeCon | EVENT | 0.99+ |
Today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
two billion dollar | QUANTITY | 0.99+ |
Crunchbase | ORGANIZATION | 0.98+ |
700 million | QUANTITY | 0.98+ |
Rancher Labs | ORGANIZATION | 0.98+ |
RedHat | ORGANIZATION | 0.98+ |
Kubernetes | TITLE | 0.98+ |
OpenShift | TITLE | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
Linux | TITLE | 0.97+ |
SUSECON | ORGANIZATION | 0.97+ |
CloudStack | TITLE | 0.96+ |
today | DATE | 0.96+ |
four month ago | DATE | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
decades ago | DATE | 0.96+ |
Ashesh Badani, Red Hat | Red Hat Summit 2020
>> Announcer: From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit, happening digitally, interviewing practitioners, executives, and thought leaders from around the world. Happy to welcome back to our program one of our CUBE alumni, Ashesh Badani, who's the Senior Vice President of Cloud Platforms with Red Hat. Ashesh, thank you so much for joining us, and great to see you. >> Yeah, likewise, thanks for having me on, Stu. Good to see you again. >> All right, so, Ashesh, since the last time we had you on theCUBE a few things have changed. One of them is that IBM has now finished the acquisition of Red Hat, and I've heard from you for a really long time, you know, OpenShift, it's anywhere and it's everywhere, but with the acquisition of Red Hat, it just means this only runs on IBM mainframes and IBM Cloud, and all things blue, correct? >> Well, that's true for sure, right? So, Stu, you and I have talked many, many times. As you know, we've been committed to hybrid multi-cloud from the very get-go, right? So, OpenShift is supported to run on bare metal, on virtualization platforms, whether they come from us, or VMware, or Microsoft Hyper-V, on private clouds like OpenStack, as well as AWS, Google Cloud, as well as on Azure. Now, with the completion of the IBM acquisition of Red Hat, we obviously always partnered with IBM before, but given, if you will, a little bit of a closer relationship here, you know, IBM's been very keen to make sure that they promote OpenShift on all their platforms. So as you can probably see, OpenShift on IBM Cloud, as well as OpenShift on Z on mainframe, so regardless of how you like OpenShift, wherever you like OpenShift, you will get it. >> Yeah, so great clarification. It's not only on IBM, but of course, all of the IBM environments are supported, as you said, as well as AWS, Google, Azure, and the like. Yeah, I remember years ago, before IBM created their single, condensed conference of THINK, I attended the conference that would do Z, and Power, and Storage, and people would be like, you know, "What are they doing with that mainframe?" I'm like, "Well, you do know that it can run Linux." "Wait, it can run Linux?" I'm like, "Oh my god, Z's been able to run Linux for a really long time." So you want your latest Container, Docker, OpenShift stuff on there? Yeah, that can sit on a mainframe. I've talked to some very large, global companies where that is absolutely a part of their overall story. So, OpenShift-- >> Interesting you say that, because we already have customers who've been procuring OpenShift on mainframe, so if you've made the investment in mainframe, and it's running machine learning applications for you, and you're looking to modernize some of the applications and services that run on top, OpenShift on mainframe is now an available option, which customers are already taking advantage of. So exactly right to your point, we're seeing that in the market today. >> Yeah, and Ashesh, maybe it's good to kind of, you know, you've got a great viewpoint as to customers deploying across all sorts of environments, so you mentioned VMware environments, the public cloud environment. It was our premise a few years ago on theCUBE that Kubernetes gets baked into all the platforms, and absolutely, it's going to just be a layer underneath.
I actually think we won't be talking a lot about Kubernetes if you fast-forward a couple of years, just because it's in there. I'm using it in all of my environments. So what are you seeing from your customers? Where are we in that general adoption, and any specifics you can give us about, you know, kind of the breadth and the depth of what you're seeing from your customer base? >> Yeah, so, you're exactly right. We're seeing that adoption continue on the path it's been on. So we've got now over 1700 customers for OpenShift, running in all of these environments that you mentioned, so public, private, a combination of the two, running on traditional virtualization environments, as well as ensuring that they run in public cloud at scale. In some cases managed by customers, in other cases managed by us on their behalf in a public cloud. So, we're seeing all permutations, if you will, of that in play today. We're also seeing a huge variety of workloads, and to me, that's actually really interesting and fascinating. So, in the earliest days, as you'd expect, people were trying to play with microservices, trying to build and run new services, cloud native, what have you. Then we ensured that we're supporting stateful applications, right. Now you're starting to see legacy applications move on, and we're ensuring that we can run them and support them at scale within the platform, because we're looking to modernize applications. We'll maybe talk in a few minutes also about lift-and-shift, which we've got in play as well. But now we're also starting to see new workloads come on. So just most recently we announced some of the work that we're doing with a series of partners, from NVIDIA to emerging AI and ML, artificial intelligence and machine learning, frameworks or ISVs, looking to bring those to market. We've been ensuring that those are supported and can run with OpenShift. Right, our partnership with NVIDIA is ensuring OpenShift is supported on GPU-based environments for specific workloads, whether they be performance sensitive or specific workloads that take advantage of underlying hardware. So now starting to see a wide variety, if you will, of application types is also something that we're seeing, right, so the number of customers is increasing, the types of workloads coming on are increasing, and then the diversity of underlying deployment environments where they're running those services. >> Ashesh, such an important piece and I'm so glad you talked about it there. 'Cause you know my background's infrastructure, and we tend to look at things as, "Oh well, I moved from a VM to a container, to cloud or all these other things," but the only reason infrastructure exists is to run my application, it's my data and my application that are the most important things out there. So Ashesh, let me get into some of the news that you've got here, your team works on a lot of things, I believe one of them talks about some of those new ways that customers are building applications and how OpenShift fits into those environments.
We're starting to see obviously a lot of interest, right, we've seen the likes of AWS spawn that in the first instance, but more and more customers are interested in making sure that they can get a portable way to run serverless in any Kubernetes environment, to take advantage of open source projects as building blocks, if you will, so primitives in, within Kubernetes to allow for serverless capabilities, allow for scale down to zero, supporting serving and eventing by having portable functions run across those environments. So that's something that is important to us and we're starting to see support of in the marketplace. >> Yeah, so I'd love just, obviously I'm sure you've got lots of break outs in the OpenShift Serverless, but I've been talking to your team for a number of years, and people, it's like "Oh, well, just as cloud killed everything before it, "serverless obviates the need for everything else "that we were going to use before." Underlying OpenShift Serverless, my understanding, Knative either is the solution, or a piece of the solution. Help us understand what serverless environment this ties into, what this means for both your infrastructure team as well as your app dev team. >> Yeah, great, great question, so Knative is the basis of our serverless solution that we're introducing on OpenShift to the marketplace. The best way for me to talk about this is there's no one size fits all, so you're going to have specific applications or service that will take advantage of serverless capabilities, there will be some others that will take advantage of running within OpenShift, there'll be yet others, we talked about the AI ML frameworks, that will run with different characteristics, also within the platform. So now the platform is being built to help support a diversity, a multitude of different ways of interacting with it, so I think maybe Stu, you're starting to allude to this a little bit, right, so now we're starting to focus on, we've got a great set of building blocks, on the right compute network storage, a set of primitives that Kubernetes laid out, thinking of the notions of clustering and being able to scale, and we'll talk a little bit about management as well of those clusters. And then it changes to a, "What are the capabilities now, "that I need to build to make sure "that I'm most effective, most efficient, "regard to these workloads that I bring on?" You're probably hearing me say workloads now, several times, because we're increasingly focused on adoption, adoption, adoption, how can we ensure that when these 1700 plus, hopefully, hundreds if not thousands more customers come on, how they can get the most variety of applications onto this platform, so it can be a true abstraction over all the underlying physical resources that they have, across every deployment that they put out. >> All right, well Ashesh, I wish we could spend another hour talking about the serverless piece, I definitely am going to make sure I check out some of the breakouts that cover the piece that we talked to you, but, I know there's a lot more that the OpenShift update adds, so what other announcements, news, do you have to cover for us? >> Yeah, so a couple other things I want to make sure I highlight here, one is a capability called ACM, advanced cluster management, that we're introducing. 
So this was experimental work that was happening with the IBM team, working on cluster management capabilities; we'd been doing some of that work ourselves within Red Hat, and as part of IBM and Red Hat coming together, we've had several folks from IBM actually join Red Hat, and so we're now open sourcing and providing this cluster management capability. So this is the notion of being able to run and manage these different clusters from OpenShift, at scale, across multiple environments, be able to check on cluster health, be able to apply policy consistently, provide governance, ensure that appropriate applications are running in appropriate clusters, and so on, a series of capabilities, to really allow for multiple clusters to be run at scale and managed effectively, so that's one set of, go ahead, Stu. >> Yeah, if I could, when I hear about multicluster management, I think of some of the solutions that I've heard talked about in the industry, so Azure Arc from Microsoft, Tanzu from VMware, when they talk about multicluster management, it is not only the Kubernetes solutions that they're offering, but also, how do I at least monitor, if not even allow a little bit of control across these environments? So when you talk about cluster management, is that all the OpenShift pieces, or things like AKS, EKS, other options out there, how do those fit into the overall management story? >> Yeah, that's absolutely our goal, right, so we've got to get started somewhere, right? So we obviously want to make sure that we bring into effect the solution to manage OpenShift clusters at scale, and then of course, as we would expect, multiple other clusters exist, from Kubernetes, like the ones you mentioned, from the cloud providers as well as others from third parties, and we want the solution to manage those as well. But obviously we're going to sort of take steps to get to the endpoint of this journey, so yes, we will get there, we've got to get started somewhere. >> Yeah, and Ashesh, any guidance, when you look at people, some of the solutions I mentioned out there, when they start out it's "Here's the vision." So what guidance would you give to customers about where we are, how fast they can expect these things to mature, and I know anything that Red Hat does is going to be fully open source and everything, what's your guidance out there as to what customers should be looking for? >> Yeah, so we're at an interesting point, I think, in this Kubernetes journey right now, and so when we, if you will, started off, and Stu, you and I have been talking about this for at least five years if not longer, it was with this notion that we want to provide a platform that can be portable and successfully run in multiple deployment environments. And we've done that over these years. But all the while when we were doing that, we were always thinking about, what are the capabilities that are needed that are perhaps not developed upstream, but will be over time, but we can ensure that we can look ahead and bring that into the platform. And for a really long time, and I think we still do, right, we at Red Hat take a lot of stick, with folks saying, "Hey look, you fork the platform." Our comeback to that has always been, "Look, we're trying to help solve problems "that we believe enterprise customers have, "we want to ensure that they're available open source, "and we want to upstream those capabilities always, "back into the community."
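The cluster-management capability described in this exchange centers on checking health and applying policy across many clusters. The sketch below is not ACM's actual API; it is a generic illustration, assuming each managed cluster is reachable as a kubeconfig context, of what a minimal cross-cluster health check could look like.

```python
from kubernetes import client, config

def cluster_health(context_name):
    """Count Ready nodes in one cluster, addressed by kubeconfig context."""
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=context_name))
    nodes = api.list_node().items
    ready = sum(
        1
        for node in nodes
        for cond in (node.status.conditions or [])
        if cond.type == "Ready" and cond.status == "True"
    )
    return {"cluster": context_name, "nodes": len(nodes), "ready": ready}

contexts, _active = config.list_kube_config_contexts()
for ctx in contexts:  # e.g. an OpenShift cluster, an EKS cluster, an AKS cluster
    print(cluster_health(ctx["name"]))
```

A real multicluster manager layers policy, governance, and workload placement on top of this kind of inventory loop; the point of the sketch is only the shape of fanning one check out across every registered cluster.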
But, let's say, making available a platform without RBAC, role-based access control, well, it's going to be hard then for enterprises to adopt that, so we've got to make sure we introduce that capability, and then make sure that it's supported upstream as well. And there's a series of capabilities and features like that that we work through. We've always provided an abstraction within OpenShift to make it more productive for developers and administrators to use it. And we always also support working with kubectl, the command line interface from Kube, as well. And then we always hear back from folks saying, "Well, you've got your own abstraction, "that might make things incompatible." Nope, you can use both kubectl or oc commands, whichever one is better for you, have at it, we're just trying to be more productive. And now increasingly what we're seeing in the marketplace is this notion that we've got to make sure we work our way up from not just laying out a Kubernetes distribution, but thinking about the additional capabilities, the additional services that you can provide, that would be more valuable to customers, and I think Stu, you were making the point earlier, increasingly, the more popular and the more successful Kubernetes becomes, the less you will see and hear of it, which by the way is exactly the way it should be, because that becomes then the basis of your underlying infrastructure, you are confident that you've got a rock-solid bottom, and now you as a customer, you as a user, are focusing all of your energy and time on building productive applications and services on top. >> Yeah, great great points there Ashesh, the vision people always talked about is "If I'm leveraging cloud services, "I shouldn't have to worry "about what version they're running." Well, when it comes to Kubernetes, ultimately we should be able to get there, but I know there's always a little bit of a delta between the latest and newest version of Kubernetes that comes out, and what the managed services, and not only managed services, what customers are doing in their own environment. Even my understanding, even Google, which is where Kubernetes came out of, if you're looking at GKE, GKE is not on the latest, what are we on, 1.19, from the community, Ashesh, so what's Red Hat's position on this, what version are you up to, how do you think customers should think about managing across those environments, because boy, I've got too many scars from interoperability history, go back 10 or 15 years and everything, "Oh, my server BIOS doesn't work on that latest "kernel.org version of what we're doing for Linux." Red Hat is probably better prepared than any company in the industry to deal with that massive change happening from a code-base standpoint, I've heard you give presentations on the history of Linux and Kubernetes, and what's going forward, so when it comes to the release of Kubernetes, where are you with OpenShift, and how should people be thinking about upgrading from versions? >> Yeah, another excellent point, Stu, you've clearly been following us pretty closely over the years, so where we came at this was, we actually learned quite a bit from our experience in the company with OpenStack. And so what would happen with OpenStack is, you would have customers that are on a certain version of OpenStack, and then they kept saying, "Hey look, we want to consume close to trunk, "we want new features, we want to go faster."
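To make the RBAC point above concrete, here is a minimal sketch of the kind of role-based access control objects an enterprise platform has to ship with. The namespace, role, and user names are hypothetical; the objects themselves are standard Kubernetes rbac.authorization.k8s.io/v1 resources, usable with either kubectl or oc.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A namespaced Role that can only read pods.
pod_reader = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "demo"},
    "rules": [{"apiGroups": [""], "resources": ["pods"],
               "verbs": ["get", "list", "watch"]}],
}

# Bind that Role to a (hypothetical) developer user.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "read-pods", "namespace": "demo"},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "pod-reader"},
    "subjects": [{"apiGroup": "rbac.authorization.k8s.io",
                  "kind": "User", "name": "dev-user"}],
}

rbac.create_namespaced_role(namespace="demo", body=pod_reader)
rbac.create_namespaced_role_binding(namespace="demo", body=binding)
```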
And we'd obviously spent some time, from the release in community to actually shipping our distribution into customer's hand, there's going to be some amount of time for testing and QE to happen, and some integration points that need to be certified, before we make it available. We often found that customers lagged, so there'd be let's say a small subset if you will within every customer or several customers who want to be consuming close to trunk, a majority actually want stability. Especially as time wore on, they were more interested in stability. And you can understand that, because now if you've got mission critical applications running on it you don't necessarily want to go and put that at risk. So the challenge that we addressed when we actually started shipping OpenShift four last summer, so about a year ago, was to say, "How can we provide you basically a way "to help upgrade your clusters, "essentially remotely, so you can upgrade, "if you will, your clusters, or at least "be able to consume them at different speeds." So what we introduced with OpenShift four was this ability to give you over the air updates, so the best way to think about it is with regard to a phone. So you have your phone, your new OS upgrades show up, you get a notification, you turn it on, and you say "Hey, pull it down," or you say at a certain point of time, or you can go off and delay it, do it at a different point in time. That same notion now exists within OpenShift. Which is to say, we provide you three channels, so there's a stable channel where you say "Hey look, maybe this cluster in production, "no rush here, I'll stay at or even a little behind," there's a fast channel for "Hey, I want to be up latest and greatest," or there's a third channel which allows for essentially features that are being in developed, or are still in early stage of development to be pushed out to you. So now you can start consuming these upgrades based on "Hey, I've got a dev team, "on day one I get these quicker," "I've got these applications that are stable in production, "no rush here." And then you can start managing that better yourself. So now if you will, those are capabilities that we're introducing into a Kubernetes platform, a standard Kubernetes platform, but adding additional value, to be able to have that be managed much much, in a much better fashion that serves the different needs of different parts of an organization, allows for them to move at different speeds, but at the same time, gives you that same consistent platform regardless of where you are. >> All right, so Ashesh, we started out the conversation talking about OpenShift anywhere and everywhere, so in the cloud, you talked about sitting on top of VMware, VM Farms is very prevalent in the data centers, or bare metal. I believe since I saw, one of the updates for OpenShift is how Red Hat virtualization is working with OpenShift there, and a lot of people out there are kind of staring out what VMware did with VSphere seven, so maybe you can set it up with a little bit of a compare contrast as to how Red Hat's doing this rollout, versus what you're seeing your partner VMware doing, or how Kubernetes fits into the virtualization environment. 
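A rough sketch of what switching between the three update channels described above can look like against an OpenShift 4 cluster, in Python. It assumes the cluster-scoped ClusterVersion object named "version" in the config.openshift.io/v1 API; the channel string itself (for example stable-4.4 versus fast-4.4) is release-specific and shown only as an illustration.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# OpenShift 4 exposes upgrade state through a single cluster-scoped
# ClusterVersion object named "version".
cv = api.get_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="clusterversions", name="version")
print("current channel:", cv["spec"].get("channel"))

# Move this cluster from its current channel to the fast channel
# (channel names such as "fast-4.4" vary with the minor release).
cv["spec"]["channel"] = "fast-4.4"
api.replace_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="clusterversions", name="version", body=cv)
```

This is the "phone OS update" model in code form: a production cluster can stay on the stable channel, a dev cluster can ride the fast channel, and both consume the same platform.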
>> Yeah, I feel like we're both approaching it from different perspectives and the lenses that we come at it with, so if I can, the VMware perspective is likely, "Hey look, there's all these installations of vSphere "in the marketplace, how can we make sure "that we help bring containers there," and they've come up with a solution that you can argue is quite complicated in the way they're achieving it. Our approach is a different one, right, so we always looked at this problem from the get-go with regard to containers as a new paradigm shift; it's not necessarily a revolution, because most companies that we're looking at are working with existing application services, but it's an evolution in the way you're thinking about the world, and this is definitely the long-term future. And so how can we then think about introducing this application platform into the environment, and then be able to build new applications on it, but also bring existing applications to the fore? And so with this release of OpenShift, what we're introducing is something that we're calling OpenShift Virtualization, which is: if you have existing applications, certain VMs, how can we ensure that we bring those VMs into the platform? They've been certified, they have data security boundaries around them, or certain constraints or requirements have been put around them by your internal organization, and we can keep all of those, but then still encapsulate that VM as a container, have that be run natively within an environment orchestrated by OpenShift, with Kubernetes as the primary orchestrator of those VMs, just like it does with everything else that's cloud-native or is running directly as containers as well. We think that's extremely powerful, for us to really bring the promise of Kubernetes into a much wider market, so I talked about 1700 customers, you can argue that that 1700 is the early majority, or if you will, almost the scratching of the surface of the numbers that we believe will adopt this platform. To get to, if you will, the next set of, whatever, five, 10, 20,000 customers, we'll have to make sure we meet them where they are. And so introducing this notion of saying "We can help migrate," with a series of tools that Red Hat's providing, these VM-based applications, and then have them run within Kubernetes in a consistent fashion, is going to be extremely powerful, and we're really excited about those capabilities, bringing that to our customers. >> Well Ashesh, I think that puts a great exclamation point as to how we go from these early days to the vast majority of environments. Ashesh, one thing, congratulations to you and the team on the growth, the momentum, all the customer stories, I'd love the opportunity to talk to many of the Red Hat customers about their digital transformation and how your cloud platforms have been a piece of it, so once again, always a pleasure to catch up with you. >> Likewise, thanks a lot, Stuart, good chatting with you, and hope to see you in person soon sometime. >> Absolutely, we at theCUBE of course hope to see you at events later in 2020; for the time being, we are of course fully digital, always online, check out theCUBE.net for all of the archives as well as the events, including all the digital ones that we are doing. I'm Stu Miniman, and as always, thanks for watching theCUBE. (calm music)
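The OpenShift Virtualization capability Ashesh describes is built on the upstream KubeVirt project, which wraps a VM in a pod so Kubernetes can schedule it alongside containers. Below is a minimal sketch of the kind of object involved; the names, the disk source, and the API version (which varies by KubeVirt release) are illustrative assumptions, not a supported configuration.

```python
from kubernetes import client, config

# Hypothetical VirtualMachine wrapping an existing VM disk image so that
# Kubernetes (via KubeVirt) becomes the orchestrator for the VM.
legacy_vm = {
    "apiVersion": "kubevirt.io/v1alpha3",  # API version differs across releases
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-erp-vm", "namespace": "demo"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk",
                                           "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # Hypothetical DataVolume holding the imported disk image.
                    "dataVolume": {"name": "legacy-erp-disk"},
                }],
            }
        },
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace="demo", plural="virtualmachines", body=legacy_vm)
```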
Simon Taylor, HYCU | CUBE Conversation, March 2020
>> From the SiliconANGLE Media office in Boston massachusetts, it's theCUBE. (techno music) Now, here's your host Stu Miniman. >> Hi, and welcome to a special CUBE conversation here in our Boston area studio. One of the biggest topics we've been digging into as we head through 2020, has really been multi-cloud and as the customers as they're really going through their own transformations understanding what they're doing in their data center to modernize what's happening between all of the public clouds they use, and all the services that fit amongst them. Happy to bring back one of our CUBE alumni to dig into a specific topic. Simon Taylor, who's the CEO of HYCU. Of course data protection, a big piece. A big buzz in the industry for a number of years, in one of those areas, in multi-cloud, that's definitely of big importance. Simon, great to see you, thanks so much for joining us. >> Thank you so much for having me back on, it's exciting to be here. >> All right, so, Simon, first, give us the update. >> Sure. >> It's 2020. We've seen you at many of the conferences we go to. You're based in Boston, so not to far for you to come out to our Boston area studio here. You know a 40 minute drive without traffic so, >> Not bad at all. >> give us the latest on HYCU. >> Certainly well and Stu, thanks again for having me into your studio, it's gorgeous, everything looks great. It's a lot easier than traveling over to Europe to see you. So this is very very convenient actually. But since we last spoke, which I think was about six months ago now, HYCU has been growing fast and furiously, you know we started out with the world's first purpose built backup and recovery product for Nutanix Of course, we added VMware we added Google Cloud, we wrapped all the data together into multi-cloud data protection as a service, and we called that HYCU Protege. Well I am so thrilled to announce that in just the three months since we've launched Protege, we have seen hundreds of customers flocking to it. And what we're finding is that customers are calling us and they're saying things like, "let me get this straight, "I'm already backing up my data on-prem with you, "I can now migrate to the cloud, "bring it back again for disaster recovery as a service, "and it's all part of HYCU?" and we say yes, you know, and they say, "and this is all offered as a service?" Yes, "and it's natively integrated "into all the platforms that I'm using?" Yes. And I think so customers today, are more and more in need of the kind of expertise that HYCUs providing because they're looking now much more strategically than ever before, at what workloads to leave on-prem and which workloads to migrate to the cloud, and they want to make sure that, that entire data pathway is protected from beginning to end. >> Yeah, it's really interesting stuff, I think back to early in my career that you know that data protection layer was like, "well, this is what I'm running "and don't change it." Think about like when you've rolled out like virtual tape as a technology it was, you know, "I don't want to have to change my backup "because that is just something that runs "and I don't do it." For last five years or so it feels like customers. There's so much change in their environment that they are looking for things that are more flexible, you talked about some of the flexible adoption models for payment and the like that they're looking for. 
So, you know, what do you think customers are just more embracing of that change, is it just that changes their daily business and therefore data protection needs to come along with that. Well it's funny you asked because just a few years ago I was on theCUBE with you and you said to me, "you guys have a perpetual license model, "what are you doing about that?" and I said, "don't worry, it is shifting to as a service it's going subscription," which was super important for the market is, I've had conversations with folks who are selling cooking gear and they're trying to sell that as a service, I saw yesterday, somebody, I think Panera Bread, is offering a coffee as a service. You know, I think what we've started to realize is that the convenience of the as a service model, the flexibility, which I would argue was probably driven by cloud technology and cloud technology adoption, is something the market has truly embraced and I think anybody who's not moved in that direction at this point is probably very much being left behind. >> Okay, another technology that often goes hand in hand in discussion with data protection is security. Of course ransomware is a hot topic conversation the last few years, how does that fit into your conversations with customers, what are you saying? >> That's a great question. So you know one of our advisory board members, his name is Kevin Powers, and he runs the Boston College cyber security program. I had the privilege and the honor of attending the FBI Boston College cyber program recently at a large scale event at Boston College, and FBI Director Ray was actually on hand to talk about this problem, and it was incredible you know he said, "cyber crime as a service "is becoming a major issue," you're talking about the commoditization of hard to build malware, that's now just skyrocketing off the charts, the amount of cyber exploitation that's going on across the world. This is creating massive massive issues for the FBI because they've got so many thousands of cases, they've got to deal with. And while they're doing a fantastic job. We believe prevention is certainly the key. So one of the things that has been really really wonderful as a CEO to watch has been the way that some of our customers have actually been able to crack the code in terms of not having to give in to these bad actors. We've had actual customers who have had ransomware attacks had millions of dollars in data, literally stolen from them, and they've been told, "you've got to deposit, "$5 million on this Bitcoin account by midnight, "or we're deleting the data." Right? Because HYCU is Linux based because HYCU is not Windows Server based because HYCU is natively integrated into all the platforms that we support. We were able to help those customers get their data back without paying a penny. So I think that that's one of those moments where you really sort of say to yourself, "God I'm glad I'm in this business here," we've built a product that doesn't just do what we say it's going to do, it does a heck of a lot more. 
And I think it's it's absolutely a massive problem and data protection is really a key part of the answer, >> You know it's great to hear their success stories there, you know I think back to earlier days where it'd be like well you know what if I set up for disasters and data protection and things like that, well maybe I haven't thought about it or maybe I kind of implemented it but I've never really tested it, but there's more and more reasons why I might actually need to leverage these technologies that I've deployed, and it's nice to know that they're there. You know it's not just an insurance thing that I've never used. >> Oh absolutely. Yeah, absolutely. >> All right. So I started off our discussion time in talking about multi-cloud So you talked about earlier we first first met it was at the Nutanix shows in their environments, and some of that you've gone along with Nutanix as they've gone through hybrid and multi-cloud what they call enterprise Cloud Messaging. >> Sure. >> And play with those environments so bring us up to speed. What have your big customers doing with cloud where does HYCU fit in and what are the updates on your product. >> Yeah, sure. And I'll start off by saying that at this point about a third of all AHV customers are using a HYCU for backup AND recovery. >> And just for our audience that doesn't know, AHV of course is Nutanix's >> Yes. >> Acropolis Hypervisor >> Absolutely. >> That comes baked into their solution as an alternative to people like VMware. >> Perfectly said as always sir, yes very much, and you know we've been thrilled as the rise of AHV and Nutanix has sort of taken the market by storm. And when we started out, you know we use to came on the show with zero customers and a new product and said, "we believe in AHV and we think it's going to be great "and we're going to back it up." And that's really paid off in spades for us, which was wonderful, but we also recognize that customers needed that VMware backups. We built a VADP integration and then we started going after the public cloud. So we started with Google Cloud, and we said we're going to build the world's first purpose built backup and recovery as a service for GCP. We launched that last year and it was tremendous you know some of the world's largest companies and organizations and governments are actually now running HYCU specifically for Google Cloud. So we've been thrilled about that. I think the management team at GCP has done a terrific job of making sure that Google can be really competitive in the cloud wars, and we're thrilled to support them. >> Yeah, and I'm glad you've got some customer stories on Google because you know the industry watchers out there it's like, "well you know Google they're number three," and you know we know that Google has some really strong data products Where they're very well known but I'm curious when you're talking to your customers. Is there anything that's kind of commonalities to why customers are using Google and you know what feedback you're hearing from your customers out there. >> Sure I mean I'll start off by saying this, we've polled our customers and we've now got over 1,300 customers in 56 countries. So we polled all of them and we just said, "how many data silos do you have, "how many platforms, how many clouds?" The average was five. 
Right, so the first thing to say is that I think almost all of these large enterprise customers in public sector and private sector are really using all of them, the extent to which they may be using AWS versus Azure versus GCP, versus Nutanix versus VMware on-prem. we can argue and debate but I think all customers at this point of any size and scale are trying them all out. I think what Google's done really well is they've started to build a really strong partner program. I think where they were a little bit sort of late to the party in terms of AWS and Azure being there sort of first. But I think what Thomas Kurian did when he came in is he sort of tripled down on sort of building out that ecosystem and saying, "what's really important "to make cloud customers comfortable "that their data is going to be as safe on Google Cloud, "as it was on-prem," and I'm thrilled that they've elected to make data protection sort of one of the key pillars of that strategy, not just because we're a data protection company, but because I do think that that was one of the encumbrances in terms of that evolution to cloud. >> Yeah, absolutely, seen a huge growth in the ecosystem around Google. The other big cloud provider that has a very strong partner ecosystem is the one when I went to the show last year, their CEO Satya Nadella talked about trust, so of course talking about Microsoft and Azure, very large ecosystem there, trying to emphasize, maybe against others and by the way you saw this as much of a shot against Google >> Sure. >> you know, how do I trust Google with my data and information from the consumer side as AWS is I might be concerned that they might be competing against them. So, how about the Microsoft relationship? >> It's a great question. So again, so when we started on-prem, with our initial purpose built backup recovery products. We added Google Cloud. You know I'm now thrilled to announce that we're also going to be launching Azure backup and recovery. It's also native, it is purpose built into the Azure Marketplace. All the things you've come to expect from HYCU backup. The simplicity, the fact that it's SLO based. The fact that you can actually go in and decide how many times a day you want a different recovery point et cetera. All of those levels of configuration are now baked in to HYCUs own purpose built backup and recovery as a service for Azure. But I think the important thing to remember about this wonderful wonderful new addition to our portfolio. Is that, it is a critical component of HYCU Protege. So getting back to your question from before about multi-cloud data protection and what we're seeing, we call this the year of migration, because for all of these cloud platforms, what are they really trying to do they need to move massive amounts of data in a safe and resilient manner, to the cloud. So remember after we built out these purpose built backup recovery services, Azure is now one of those. We then pulled all that data together under a single pane of glass we called it HYCU Protege. We then said to customers, we're going to enable you to automatically migrate with the touch of a button an entire workload to the cloud, and then bring it back again for disaster recovery, and we will protect the data on-prem in the cloud and back again. 
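The SLO-based configuration Simon mentions boils down to stating an objective (how much data loss is acceptable, how long to keep restore points) and letting the product derive the schedule. The snippet below is a purely hypothetical toy, not HYCU's API, just to show the shape of that translation.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class BackupSLO:
    rpo: timedelta        # maximum acceptable data loss (recovery point objective)
    retention_days: int   # how long restore points are kept

def schedule_from_slo(slo: BackupSLO) -> dict:
    """Translate an RPO target into a naive daily backup cadence."""
    backups_per_day = max(1, int(timedelta(days=1) / slo.rpo))
    return {"backups_per_day": backups_per_day,
            "keep_days": slo.retention_days}

# Example: a 4-hour RPO with 30-day retention -> 6 restore points per day.
print(schedule_from_slo(BackupSLO(rpo=timedelta(hours=4), retention_days=30)))
```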
>> Yeah, it's interesting 'cause when we kind of look at what's happening in the marketplace, for many years it was a discussion of what's moving from the data center to the public cloud, some things are moving back from the environment edge, of course, pulls things even further. Often it's, I say it's not even migration anymore it's just mobility, because we are going to be moving things and spinning things up and building things in many more places, and it's going to change. As we started out that conversation, there's so much change going on that so you're giving customers some optionality there, so that this isn't just a one way, you know, let's stick it on a truck put it on this thing and get it to that environment but I need to be able to enable some of that optionality and know what I'm doing today but also knowing that you know six months a year from now, we know things are going to be different >> Yes, yes! >> And in each of these some of those environments. >> Absolutely. We call it the three Ds data assurance, data mobility, and disaster recovery. So I think the ability to not only protect your data, whether it's on-prem as it journeys to the cloud or whether it's in the cloud, the ability to actually assist the customer in the migration. And what I hear time and time again is, "oh but Azure has a tool," or "Google has a tool for migration." Of course they have tools for migration, but I think the challenge for customers is, how do I affect that data resiliency, how do I ensure that I can move the data as a complete workload. Moving an entire SAP HANA instance, for example, to the cloud. And it protected the entire time as it journeys up there, and then bring it back for the disaster recovery without professional services. Because again, you know HYCU it's about simplicity, we want to make sure that these customers can get the same level of readiness, the same ease of deployment that they get from their cloud vendor, when they're thinking about the data protection and the migration. >> All right, I want to click down one layer >> Please. >> in here. We're talking about multi-cloud, you talk about simplicity. >> Sure. >> Well, Kubernetes might not be the simplest thing out there but it absolutely is a fundamental piece of the infrastructure in a multi-cloud environment so you know your partners, Google with GKE, Azure with AKS and >> And Carbon. >> Carbon with a K from Nutanix everyone now, I say it's not about distributions it's really every platform that you're going to use is going to have Kubernetes built into it so what does that mean from a data protection standpoint? Do you just plug into all of these environments you've tested it got customers using it? >> It's a great question it comes up, as you can imagine, all the time. I think it's something that is becoming more and more ready for prime time. A lot of the major vendors are moving to it, making heavy investments in Kubernetes, we ourselves have over 100 customers that are actively using Kubernetes in one form or another and backing the data up using HYCU so there's no question in my mind that HYCU is Kubernetes ready. I think what's really exciting for us is some of the native integrations we're working on with Google and with Nutanix so whether it's Carbon whether it's GKE, we want to make sure that when we work with these platforms that we mimic, how the platform is supporting Kubernetes, so that our customers can get the same experience from HYCU that they're getting from the platform provider itself. 
>> All right, Simon want to give you the final word. Bring us inside your customers what they're doing with multi-cloud and where HYCU fits there, here in 2020. Sure, we talked about prime time. Cloud for many years has been something that I think large enterprises have talked a big game about, but have been really dipping their toe in the water with. What we've seen the last two years, is a massive massive at scale migration to the largest three public clouds, whether that's GCP, whether that's Azure or the other one. (laughing) We're thrilled to support GCP and Azure because GCP and Azure, we believe do provide the most value to our customers. But I think the name of the game here is not just supporting a customer in the cloud, it's understanding that every customer today is to is on a journey, whether they're on-prem, whether their journeying to cloud or they're in cloud those three Ds, data assurance, which is our backup, data mobility, which is the automated migration, or disaster recovery readiness. That's the name of the game and that's how HYCU wants to help. >> All right, Simon Taylor. Always a pleasure to catch up with you thank you so much for the HYCU updates, >> Stu thanks so much for having us on. >> All right, be sure to check out www.thecube.net for all of our inventory of the shows that we've been at the videos we've done, you can even search on keywords in companies, I'm Stu Miniman and thank you for watching theCUBE. (Techno Music)
Kaustubh Das, Cisco | Cisco Live EU Barcelona 2020
(upbeat music) >> Announcer: Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome back. This is theCUBE's live coverage of Cisco Live 2020 here in Barcelona, Spain. I'm Stu Miniman. My co-host for this segment is Dave Volante. John Furrier is also in the house. We're doing a little more than three days of wall-to-wall coverage. One of the big themes we're talking about this week is that in this complicated world (networking, containerization, applications going through transformation, the future of work), simplification is something that is very important. Helping us to really tease through and understand some of the integrations and some of the announcements where Cisco is helping to simplify the environment, I'm happy to welcome back to the program one of our CUBE alumni, Kaustubh Das, who is a Vice President of Product Management at Cisco. KD, thanks so much for joining us. >> Oh, I'm delighted to be here, it's great to be here. >> All right. So up on the main stage, they walked through a number of the announcements; listening to that, I was talking about some of the pieces, and two of the announcements from the main stage are under your purview. So why don't we start there, walk us through the news. >> Yeah, so there's two major announcements. The first one's called Cisco Intersight Workload Optimizer. And what it is, it's a way to have visibility into your data center, all the way from the applications, and in fact the user journeys within those applications, all the way down through the virtualization tier, through the app servers, through the container platforms, down into the servers, the networks, the storage LUNs. So you have a map of the data center. You have a common data set that the application owner and the infrastructure owner can both look at, and you finally have a common vocabulary, so it helps them to troubleshoot faster; in a reactive way, they're talking the same language, not pointing fingers at each other, or they can do things proactively to prevent problems from happening: when you see a server running hot, a virtual machine running hot, an application server running hot, you can diagnose it and have that conversation before it happens. >> My understanding is that it's Intersight, and there's also some integrations with AppDynamics there, AppD, which of course we know, we talk to that team at the Amazon cloud shows a lot. So that common vocabulary spans between my hybrid and multicloud environments. Am I getting that right? >> Correct, and there's two pieces even within that. So certainly there's integrations with AppD, so from AppD we get information about the application performance. We get information about the business metrics associated with the application performance. We get information about the journeys that users take within the application, and then we take that data and we stitch it together with infrastructure data to map how many applications are dependent on which application servers, how many VMs are those dependent on, what do those VMs run on, what hosts are they dependent on, what networks do they traverse, what LUNs do they run on? And each one of these is an API call into that element in the infrastructure stack. Each API call gives us a little bit of data, and then we piece together this data to create this map of the entire data center.
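The stitching KD describes, one API call per layer and then a join across the results, can be pictured with a small Python sketch. The inventory data here is hard-coded and hypothetical; in the product, each dictionary would be populated by a call into the application monitor, the hypervisor, the fabric, or the storage array.

```python
# Stand-ins for per-layer API responses (hypothetical data).
app_to_vms = {"checkout-service": ["vm-101", "vm-102"]}
vm_to_host = {"vm-101": "esxi-host-07", "vm-102": "esxi-host-09"}
host_to_datastore = {"esxi-host-07": "datastore-a", "esxi-host-09": "datastore-b"}

def stitch(app: str) -> list:
    """Join the layers into top-to-bottom dependency paths for one application."""
    paths = []
    for vm in app_to_vms.get(app, []):
        host = vm_to_host.get(vm)
        paths.append({
            "app": app,
            "vm": vm,
            "host": host,
            "datastore": host_to_datastore.get(host),
        })
    return paths

for edge in stitch("checkout-service"):
    print(edge)  # one row of the data-center "map" per dependency chain
```

Each joined row is what lets the application owner and the infrastructure owner look at the same record of who depends on what, which is the common-vocabulary point made above.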
There's a multi cloud aspect to it obviously and so we also make API calls into AWS and Azure and clouds out there and we get data about utilization of the various instance types. We get data about performance from the cloud as well. >> So two announcements. Insight Workload Optimizer and HyperFlex AppDynamics, is that right or they are separate? >> HyperFlex application platform. >> Okay. >> So if we look at the, let me just put these two in context. Every enterprise is doing two things. It's trying to run application that it already hosts and then it's writing some bespoke new applications. So the first announcement, the Cisco Intersight Workload Optimizer and the integration of the AppD, that helps us be more performant for applications we're running, to have troubleshoot faster, to have reduced cost in a multiply cloud environment. The second announcement Dave, the HyperFlex application platform, it's really targeted towards developers who are writing new applications on a container platform. And for those developers, IT needs to give them a simple appliance like easy to use container as a service platform. So what HX AP HyperFlex application platform is is a container as a service platform driven from the cloud so that the developer gets the same experience that they get when they go to an AWS and and request a pod. But they get it on-prem and it's fully 100% upstream Kubernetes compliant. It's curated by us so it's very simple appliance like feel for development environments on container. >> Okay. So Insight Workload Optimizer, it really attacks the problem of sort of the mystery of what goes on inside VMs and the application team, the infrastructure team, they're not talking to each other. You're bringing a common, like you said parlance together. >> Kaustubh: Correct. >> Really so they can solve problems and that that trickles down to cost optimization as well as performance. >> It does, aha. >> And I understand hyper HyperFlex app platform it's really bringing that cloud experience to on-prem for hybrid environments. >> For our new development. So if you're developing on containers, you're probably using Kubernetes but you're probably using this entire kind of ecosystem of open source tools. >> Yeah. >> And we make that simple. >> Okay. >> We make it simple for developers to use that and variety to provide that to developers. >> Okay. since underneath, there's HyperFlex. is there still virtualization involved in there and how does this tie in with the rest of the Kubernete solutions that we were talking about with your cloud partner? >> Great, great. Great question. So yes, there is HyperFlex underneath this. So to develop, you need a platform. The best platform we think is the elastic platform that is hyper-convergence. And with type of flex, we took storage networking and compute, packaged it together, made it super simple. We're doing the same thing with Kubernetes. So it's the same concept that how do you take complex things, package it together and make it almost appliance like. We said we're doing the same thing with Kubernetes. Now Stu, the point about virtualization is a good one. A lot of container deployments today are run in virtual machines. And they run in virtual machines for good reason, for isolation, for multi-tenancy, for all these kinds of ignition. However, the promise of containers was to sort of get rid of the tax that you pay when you deploy a virtualization environment. 
And what we're giving out right now is a no-tax, no-virtualization-tax virtualization environment. So we have a layer of virtualization in there, but it's designed for this use case, so it does give the isolation, it does give the multi-tenancy benefits, but you don't need to pay additionally for it if you're deploying on containers-- >> So just to be clear, it is a KVM-based type of solution? >> Kaustubh: Correct. >> Underneath. It makes a lot of sense if you look at the large virtualization player out there; it's been talking about how do I enable the infrastructure that's all virtualized and everything and bring them along on that journey >> Correct. >> For that bridge, if you will, to the environment? Sure, with containerization, sometimes I want to be able to spin it up super fast, it lives, it dies, but if I'm putting something in my data center, probably the characteristics I'm looking at are a little bit different. >> Correct, correct. The other thing it does, and you touched on it a little bit, is we have a homogeneous environment with the major clouds out there. So one of the things developers want to do is they want to develop in one place and they want to deploy in another place, so develop on Amazon and deploy on-prem, or on Azure. We've got an environment with very native integrations, so that it's natively integrated into EKS and AKS. And we facilitate that develop anywhere, deploy anywhere motion for developers who are trying to build on this.
And so it's co-designing software and infrastructure together. The second thing is, we said we're going to be 100% upstream Kubernetes compliant, right, so if you look at the major offerings out there in this space, they're often several months behind where the open source is, where the upstream source is, and developers don't want that. They want the latest and greatest, they want to be current, right. So we are far ahead of most of the other offerings out there in terms of how close they are to upstream Kubernetes. The final piece is Intersight. Intersight gives us immense ability to have scale, where especially if you're developing on containers and microservices, you're talking tens of thousands, many tens of thousands of nodes, maybe more. And being in the cloud, we have the scale and we have the reach, so a lot of our customers have distributed assets and branches, you know, hotel chains with hotels and so forth. Intersight allows us the ability to actually deploy across a distributed asset class with centralized kind of provisioning. >> You see a huge uptake right now in containers generally, Kubernetes specifically. It's sort of across the board, but I wonder if you could comment on how much of that demand and activity is coming from sort of the traditional IT roles versus the developers in hoodies? >> Yeah, that's a great question. So yes, on a hype cycle it's at the top of the hype cycle, but everybody's in actual adoption, and I think it's pretty good as well, right. So every company I talk to is doing something in containers, every company. But usually, it starts at the developers. It starts, like you described, with the folks in the hoodies, and that's great. I mean, they're experimenting, they're getting this thing. What hasn't happened is it hasn't gotten mainstream. And the thing that makes it mainstream is when IT picks it up, certifies, hey, this is resilient, this is enterprise-grade, I can stand behind it, I can manage the lifecycle of it. That's what we're enabling here. I'm giving IT a path to mainstream containers, to mainstream Kubernetes, so that the adoption kind of takes it from that hype cycle to mainstream adoption. >> Do you see, K.D., new sorts of data protection approaches or thinking as containers come into play? I mean they're ephemeral, you know, microservices sometimes aren't so micro. Like you say, they're running oftentimes inside a VM. So how are people thinking about protecting containers?
>> Where are you seeing Edge today? Where is that going? What should we be looking at in that space when it comes to Edge? >> Yeah, no, it's a big part of our customer demand. In fact, we haven't seen I think all flash was the other technology that took place so fast but Edge has been really phenomenal in its growth rate. Over the last year, we've seen I think probably up to 15% to 20% of my engagements are in this space on at least the hyper convert side. So we see that as a big growth area. More and more deployments are happening. They're being centrally managed, deployed at the edges and so the only solution that scales to something like that is something that's based on the cloud. But it's not just enough to be based in the cloud. You've got to maintain that entire lifecycle right? You've got to make sure you can do installs, upgrades, you know OS installs, health monitoring and so as we built that Intersight platform, we've added all those capabilities to it over time So we started with hey this is a SAS-based management platform and then we added telemetry and then we said if we can actually match signatures, now machines can manage machines. So a good amount of my support calls are now machines calling each other and then fixing themselves. So that's just path-breaking from an informant Edge environment. You don't have an IT person, add an Edge location. You want to drop, ship an appliance there, and you want to be able to see it remotely. So I think it's a completely new operating model. >> I know we got to go but I want to run your scenario by K.D.'s. Do share with me from one of my breaking analysis. Look Dave, you mentioned Flash, that's what triggered me. (laughing) So think of containers and Kubernetes, think of like Flash. Remember Flash used to be the separate thing which we used to think it was a separate market and now it's just everywhere, it's embedded in everything. >> Kaustubh: Yes. >> So the same thing is going to happen with Kubernetes. It's going to be embedded in solutions. This is exactly what it is. By 2023, we're probably not going to be talking about it as a separate thing, maybe that's sooner. It's really just going to be ubiquitous, yeah. >> No, I totally agree. I think the underpinnings that you need for that future, you need a common infrastructure platform and a common management platform. So you don't want to have a new Silo creator and this has been our philosophy even for hyperconvergence. We said hey, there's going to be converging infrastructure that will be hyper converted. But they need to be the same management system, they need to be the same fabric. And so if it's Silo is not going to work. Same thing for containers you know. It's got to be the same platform in this case, it's HyperFlex. Hyperflex runs virtualization, it runs containers with HXAP. You get all of those benefits that I've talked about. It's all management insights, it's a common management platform across both of those. At some point, these are all tools in somebody's tool kit and you pick the right one for the job. >> Kaustubh, it is wonderful to hear the company that has been dominant in one of the silos for so long of course helping to bring the silos together work across the domains. Congratulations on that good news, always great to have you. >> Yeah, always great to be here, thank you. >> Dave: Thank you. >> For Dave Folante, I'm Stu Miniman back from lunch where we hear more from Cisco live in Barcelona 2020. Thank you for watching theCUBE.
Balaji Siva, OpsMx | CUBE Conversation, January 2020
(funky music)
>> Everyone, welcome to theCUBE studios here in Palo Alto for a CUBE Conversation. I'm John Furrier, and we're here with a great guest, Balaji Sivasubramanian, did I get it right? Okay, okay, VP of product and business development at OpsMx, formerly with Cisco doing networking, now you're doing a lot of DevOps, and you guys have got a great little business there. Real-time, hardcore DevOps.
>> Absolutely, so we help large enterprises with their digital transformation, to help them achieve that transformation.
>> You know, Stu Miniman and I were talking about cloud-native, and one of the reasons I wanted to bring you in was, we've been talking about cloud-native going mainstream. And cloud-native is essentially a codeword for cloud, microservices, essentially DevOps 2.0, whatever you want to call it; it's the mainstream of DevOps. DevOps for the past 10 years has been kind of reserved for the pioneers who built out using open source, then the fast followers building large startups, and now larger companies. Now DevOps is turning into cloud-native, where you see in-the-cloud, born-in-the-cloud, and on-premises cloud operations, which is hybrid, and now the advent of multicloud, which really brings the edge conversation into view, really a disruption around networking and data, and this is impacting developers. And pioneers like Netflix used Spinnaker to kind of deploy, and that's what you guys do. The real thread for the next 10 years is that data and software are now part of everyday developer life. Now bring that into DevOps; that seems to be a real flashpoint.
>> Yeah, so if you look at some of the challenges enterprises have in getting the velocity they want, the technology was a barrier. With Docker adoption and with cloud adoption, the cloud basically made the infrastructure on-demand, and then Docker and the microservices architecture really allowed people to have velocity in development. Now their bottleneck has been, "I can develop faster, I can bring up infra faster, but how do I deploy things faster?" Because at the end of the day, that's the last mile, so to say, of solving the full puzzle. So I think that's where things like Spinnaker, or some of the new tools like Tekton that are coming up, allow these enterprises to take their container-based applications, and functions in some cases, and deploy them to various clouds, AWS or Google or Azure.
>> Balaji, tell me about your view on cloud-native. Just look at the basic data out there: you've got AWS, you've got KubeCon, which is really the Linux Foundation, the CNCF, I mean the vendors that are in there, and the commercialization is going crazy. Then you've got the cloud followers of Amazon: you've got Azure basically pivoting Office 365 and getting more cloud action, and there's heavy investment in Google GCP, Google Cloud Platform. All of 'em talk about microservices. What's your view of the state of cloud-native?
>> Yeah, I probably talked to hundreds of customers this last year, and these range from large Fortune 100 and 200 companies to smaller companies. 100% of them are doing containers, and 100% of them are doing Kubernetes in some fashion or form. If you look at larger enterprises, like the financial sector and the other, what do you call them, more Fortune 100 companies, they actually do OpenShift, Red Hat OpenShift, for their Kubernetes. Even though Kubernetes is free, whatever, they definitely look at OpenShift as a way to deploy container-based applications.
And many of them are obviously looking at AKS, EKS, and other cloud form factors of the same thing. And the one I've seen most is AWS; EKS is the most common one, Azure in some parts, and GKE somewhat, so, I mean, you know the market trend that's there. So essentially, AWS is where most of the development is happening.
>> What do you think about mainstream IT, the typical company that's driven by IT and is transforming? I'd say about a year ago, most takes were like, "Oh, the big cloud providers are not going to leave an opportunity for the Splunks of the world and other people," but now, with that shifting and mainstream companies going to the cloud, it's actually been good for those companies. So you're seeing that collision between pure cloud-native and the typical corporate enterprise that's moving to the cloud, or at least moving to hybrid. That's helping these Splunks of the world, the Datadogs, and all these other companies.
>> I think there are two attacks on those companies that you talk about. One is obviously the open source movement; it's attacking everything. So anything you have in IT is attacked by open source. Software is eating the world, but open source is eating software, because software is easy to open source. Hardware you can't eat; there's no open source there, nobody's doing free hardware for you. But open source software is eating software, in some sense. So for any software vendor, everybody's considering open source first. Many companies are doing open source first, so if I'm looking at Datadog or Prometheus, I may look at Prometheus. If I'm looking at IBM uDeploy or Spinnaker, I may look at Spinnaker. So everything is Kubernetes, or maybe some other form of open source community. So for these vendors that you talk about, one part is the open source piece of it; the other is that when you go to the cloud, the providers all provide the basic things already. If you look at Google Cloud, I was actually reading a lot about Google networking: a lot of the load balancers and all those things are built in as part of the fabric. Things that you typically use, a router or a firewall or those things, are all built in, so why would I use an F5 load balancer and things like that? So I would say that I don't think their life is that easy, but there's definitely--
>> All right, so here's the question: who's winning and who's losing with cloud-native? I mean, what is really going on in that marketplace, what's the top story, what's the biggest thing people should pay attention to, and who's winning and who's losing?
>> I think the commoditization of cloud-native technology is definitely helping vendors like AWS, and basically the cloud vendors, because you no longer have to go to VMware to get anything done. They had proprietary software, and you don't have to go there anymore; everybody can provide it. So the vendors, and I would say the customers, obviously, because now they have more choices, they're not vendor locked-in, they can go to EKS or AKS in a heartbeat and nothing happens. So customers and vendors are big winners, and then I would say the cloud providers are big winners. Open source is really hurting some of the vendors we talked about earlier; I would say the big guys are the--
>> Cloud's getting bigger, the cloud guys are getting bigger and bigger, more powerful.
What about VMware? You mentioned VMware. Anything to their proprietary stack? They also run on AWS natively, so they're still hanging around, and they've got the operators. But they're not hitting the devs, though they have this new movement with Kubernetes; they acquired a company to do that.
>> I would say that VMware on AWS is, I would say, almost a no-op for VMware in some sense. It's almost like another place to sell their wares. These used to be on-prem customers who already have the infrastructure, and VMware goes and sells to that customer A. Now the customer says, "I'm not using it on on-prem server A, I'm on AWS, can you provide me the same software?" So essentially, number one, by moving to the cloud, they're essentially selling the same stuff to the same customers. Number two is that once I'm in the cloud, I would obviously move my workload to native AWS or Google. So I think, in the long run, I would say it's a strategy to survive, but I don't think it's a long-term success.
>> Operators don't move that fast; devs move much faster. I've got to ask you, in the developer world, and cloud-native and DevOps 2.0, 3.0, what are the biggest challenges that are slowing it down? Why isn't it going faster, or is it going fast? What's your view on that?
>> Yeah, I would say that the biggest challenge is obviously, as I just said, the people. In some sense, people have to transform, and in large organizations there's a lot of inertia: people keep deploying existing services the way they've been deploying them. Some of them are custom-built, and the person who wrote them no longer exists there, they've moved on, so some of them are built like that. But I think the inertia is basically, "How do I transform them over to the new model?" If the application itself is getting broken into more microservices, then it's a great opportunity for me to migrate, but if it's not, then I'm not going to touch something that's already there. So I would say the technology is complex. Actually, every day we have people, there's a lot of interest, there's a lot of people learning, learning, learning new stuff, but I cannot hire one good Kubernetes engineer even if I try hard, not easily at least. Because it's hard.
>> 'Cause they're working somewhere else, right?
>> Well, they work somewhere else, or the technology is still early enough that people are learning in droves, don't get me wrong there, but I think it's still fairly complex for them to digest all of that. I think in five years, fast forward five years, you would see that the technology knowledge is more widespread, so it would be easier to hire those people, because if we want to transform internally, let's say I have my enterprise and I want to transform, I need to hire people to do that.
>> What are the use cases, the top use cases that you're seeing in your work and out in the field, that people are rallying around and can get some wins with? The top three use cases for end-to-end cloud-native development?
>> I would say the use cases are: if I'm doing any kind of container-based application, obviously, I would like to do it through the new model of doing things, because I don't want to build on legacy technology, for sure. I would say the other ones are the new-age companies; they're definitely adopting cloud first, and they're able to leverage the existing models, the new models, more quickly.
I mean, obviously there are two things. I think that if I'm doing something new, I take advantage of that.
>> Do you think microservices is overrated right now, or is it hyped up, or is it?
>> No, I think it's real, absurdly real.
>> And what's the big use case there?
>> The velocity that people get by adopting microservices. Before, I used to work at Cisco, and a software release would be planned for six months, because there are so many engineers developing so many features. They develop it over a period of time, and then when they actually integrate, there's two or three months of testing before it gets out, and the person who wrote the code has probably left the company already by the time the software actually sees the light of day.
>> Give us some data from your perspective. You don't have to name companies, but for the people that are successful with DevOps, at an operating level, what kind of frequency of updates are they doing per day? Just give us some order-of-magnitude numbers on what success looks like.
>> Yeah, I mean, the great examples are something like Netflix and all, 7000 deployments a day, but obviously that's the top of the pyramid, so to say. Many of the other customers are doing less; some are shipping one to two a week, and these are very good companies. This is at the service level; I'm not talking about the whole application. Because the application may have 10, 20, 50 services in some way, there's a lot of updates going on every week, so if you look at a week's timeframe, you may have 50 updates for that application, but at the individual service level, essentially it could be one or two a week, and obviously the frequency varies depending on--
>> Just a lot of software being updated all the time.
>> Absolutely, absolutely.
>> Well, Balaji, great to have you in, and I've got to say, we could use your commentary and your insight in some more CUBE interviews; love to invite you back. Thanks for coming in, appreciate it. I'm John Furrier, here in the CUBE Conversation, where we have thought leader conversations with experts from our expert network, theCUBE, and CUBE alumni. And again, it's all about bringing you the data here from theCUBE studios. I'm John Furrier, thanks for watching. (funky music)
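Balaji's earlier point about CD tools such as Spinnaker or Tekton taking the same container-based application to AWS, Google, or Azure rests on the deployment artifact itself being cluster-agnostic. Below is a minimal sketch of that idea, assuming the official `kubernetes` Python client is installed and a kubeconfig with one context per target cluster; the image name and context names are hypothetical placeholders, and a real CD tool would add gating, canarying, and rollback around this step.

```python
# A deliberately small sketch: the same Deployment spec rolled out to several
# clusters (e.g. EKS, GKE, AKS) just by switching kubeconfig contexts.
# The image name and context names below are hypothetical.
from kubernetes import client, config

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "payments", "labels": {"app": "payments"}},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "payments"}},
        "template": {
            "metadata": {"labels": {"app": "payments"}},
            "spec": {
                "containers": [{
                    "name": "payments",
                    "image": "registry.example.com/payments:1.4.2",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

# One artifact, many clouds: the CD tool (Spinnaker, Tekton, ...) is what
# sequences these rollouts, gates them, and rolls them back.
for context in ["eks-prod", "gke-prod", "aks-prod"]:
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default", body=DEPLOYMENT)
    print(f"deployed payments:1.4.2 to {context}")
```

The "last mile" Balaji describes is everything around this loop: promotion between environments, verification, and the decision of when to push the next version.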
Nicholas Klick, GitLab | GitLab Commit 2020
>> Presenter: From San Francisco, it's theCUBE. Covering GitLab Commit 2020. Brought to you by GitLab.
>> Hi, I'm Stu Miniman, and this is theCUBE's coverage of GitLab Commit 2020 here in San Francisco. You might notice some of our guests have jackets on; it is a little cooler than normal here in San Francisco, but the community and knowledge is keeping us all warm. Joining us for the first time on the program is Nicholas Klick, who is an engineering manager at GitLab. Thanks so much for joining us.
>> Thanks for inviting me.
>> Alright, so you had an interesting topic: the state of serverless in 2020 was the session that you gave. Definitely a topic we love covering on theCUBE, something I personally have been digging into, trying to understand. Definitely something that the developers, and especially the app devs that I speak with, are very bullish on. So what is the state of serverless in 2020?
>> That's actually a good question. So, my talk was actually broken into two parts. One was, like, initially I just wanted to help provide a clear definition of what serverless is. In my opinion, serverless is more than just functions. There are a lot of other technologies, like backends as a service, API gateways, and service integration proxies, that you can stitch together to create dynamic applications. So I created a more expanded definition of what serverless is from my perspective, and the other part was to really talk about three things that I'm finding exciting right now in the serverless space. The first was Knative, and the fact that Knative is likely going to go to GA pretty soon, so it'll be production-ready and we can finally build production workloads on it. The second is that running serverless at the edge I find to be an exciting topic. And then finally, talking more in depth on the service integrations: how you can actually create applications that don't include functions at all, so functionless serverless.
>> Yeah, so there are a lot of things I definitely want to tease out of that, but Nicholas, I guess maybe we should step back a second--
>> Nicholas: Okay.
>> And was there survey work, or was there something done, or is this kind of something related to your job that you put together as just an important topic?
>> Yeah, no, this is just me speaking as someone that works in the space and sees how the technology is evolving; just my opinions, I guess.
>> Okay. When I talk to the practitioners, when they say, "Oh, they're interested in it," chances are they're doing stuff on Amazon; that tends to be the first piece of it. There are lots of open source projects out there, but it's still kind of dominated by Amazon. Azure has some pieces, of course. Google has things they're doing. I liked how you teased out that serverless definitely isn't just one thing, and the definition, and even the term itself, gets people all riled up and things like that, so I hate getting into the ontological arguments, but the promise of it is that I can build applications in a different way, and I shouldn't have to think about some of the underlying components, hence the name serverless, kind of--
>> Right.
>> does that, but it definitely is a change in mindset as to how I build and consume environments.
>> Right. Right, and, like, another point that I made in the talk, that I believe pretty strongly, is that serverless is not something that's going to replace monoliths and microservices.
I believe it's another tool in the tool belt of the developer, of the operator, to solve problems, and that we should look at it like that. It shouldn't be, and it's not, the next progression in application architecture.
>> Yeah, I've met some companies that are 100%, they've built everything on serverless, but that's like saying I've met plenty of companies that are all-in on the cloud. It depends on what you do and what your business is.
>> Nicholas: Right.
>> When we look at the enterprise, it is a broad spectrum, and making changes along that path is something that typically takes a decade or more, and they have hundreds, if not thousands, of applications, and therefore we understand: I've got my stuff running from my mainframe through my latest microservice architecture, and everything in between.
>> Right, and, I mean, I'm speaking as an employee of GitLab, and we have a very well-known monolith that we deploy, so in my opinion, I don't believe that monoliths are going to die any time soon.
>> Alright, I'd love you to tease out some of those pieces that you talked about, the three items you talked about: Knative. You know, Knative is interesting. The thing I poked at when I go to KubeCon and CloudNativeCon is, as I mentioned, when I think about customers, most of them are using Amazon. The second choice is they're probably doing Azure, and today Knative doesn't directly work with EKS, AKS, or the like. I know there's a solution like TriggerMesh that actually will interact--
>> Right.
>> between Amazon and there, but don't you need the buy-in of Amazon and Microsoft for Knative to be taken seriously? And the other thing is, Google still hasn't opened up the--
>> Right.
>> the Google controls, the governance of both Istio and Knative, and there are some concerns in the ecosystem about that. So what makes you so bullish on Knative?
>> Yeah, so I'm definitely aware of some of the discussions around Knative. From my perspective, if someone is already operating a lot of Kubernetes infrastructure, if they already have that infrastructure running, then deploying Knative on top of it is not that much more of a lift; it doesn't require a lot of additional resources and expense. So it could be, and again it depends on their use case, but when I think about serverless, I try to remain pragmatic. If I'm already using Kubernetes and I want a simple serverless runtime, Knative would be a great option in that situation. If I want to be able to work cross-cloud, that's another opportunity Knative provides: the ability to deploy to any Kubernetes cluster anywhere, so, you know, there's not a vendor lock-in issue with Knative.
>> Yeah, and absolutely there was initially some concern that, could serverless actually be the ultimate lock-in?
>> Right.
>> I'm going to go deep on one provider and don't have a way out. There are open source groups like the CNCF trying to help along those lines--
>> Sure.
>> and Knative is absolutely looking at that environment along those lines. From a GitLab customer's standpoint, GitLab's not tied to whether you're doing containers or serverless or VMs, or whichever environment you're in. What does it mean for GitLab customers? If I want to look at serverless, how does that fit into my overall workflow?
>> Yeah, so initially at GitLab we focused on providing the ability to deploy to Knative.
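To make "deploy to Knative" concrete: a Knative Service is just another Kubernetes object, which is why the same artifact can go to any cluster that has Knative Serving installed. The snippet below is an illustration rather than GitLab's actual generated output; the service name, namespace, and image are hypothetical, and it assumes PyYAML is available.

```python
# Minimal sketch of the kind of Knative Service a CI job might deploy.
# The service name, namespace, and container image are hypothetical.
import yaml

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "resize-image", "namespace": "functions"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "image": "registry.gitlab.example/group/resize-image:latest",
                    "env": [{"name": "MAX_WIDTH", "value": "1024"}],
                }]
            }
        }
    },
}

# Knative Serving scales this to zero when idle and back up on request,
# which is what makes it usable as a simple serverless runtime on any
# Kubernetes cluster with Knative installed.
print(yaml.safe_dump(knative_service, sort_keys=False))
# A pipeline would typically pipe this output into `kubectl apply -f -`.
```

Because the object above is plain Kubernetes, the cross-cloud portability Nicholas describes comes for free: nothing in it names a specific provider.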
That was, well, we were very early in the Knative space, and I think that as it's matured, as those APIs have matured, our product has developed along with it. So right now we enable you to create Kubernetes clusters through our interface and then deploy your function runtimes directly from your GitLab repo. We've also been growing our examples and documentation of how to integrate GitLab CI/CD with Lambda. That's another big area that we're moving into as well.
>> Great. As you look forward to 2020, and we've got a whole new decade in front of us, what do you think people should be watching as this space matures?
>> Yeah, so I think that the point I touched on earlier, the service integrations, is something you're going to see more and more of: the providers themselves linking together their different services and enabling you to create these dynamic applications without a lot of glue that you have to manually create in between. I think we're going to see, you know, more open source frameworks, like, for example, the Serverless Framework or Terraform; people want that, and I know that a lot of people use, for example, AWS SAM. People want easier ways, and faster ways, to be able to deploy their serverless, so you have the bootstrapping of serverless. I guess another thing that I expect is that the serverless development life cycle will mature, whether going from bootstrapping to testing, deployment, monitoring, or security. I believe you're going to see companies start to really fill in that entire space, the same way that they do for monoliths and microservices.
>> Yeah, absolutely. Thank you so much, Nicholas. Definitely something we've been tracking over the last year or so. You start to see many in the tool chain of cloud native environments digging into serverless, helping to mature those solutions, and it's definitely an area to watch closely.
>> Great.
>> Alright. Lots more coverage. Check out theCUBE.net for all the events that we will be at through 2020 as well. If you go back, you can see we've actually done Serverlessconf for a couple of years, and many of the other cloud and cloud native shows; search in our index. I'm Stu Miniman, and thank you for watching theCUBE. (energetic electronic music)
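Nicholas mentions growing the examples for integrating GitLab CI/CD with Lambda, alongside tools like AWS SAM and the Serverless Framework. As a rough illustration of what the deploy step of such a pipeline boils down to, here is a hedged boto3 sketch; the function name, role ARN, and artifact path are hypothetical, and real pipelines usually delegate this to SAM, the Serverless Framework, or Terraform rather than calling the API directly.

```python
# Hedged sketch of a CI deploy step that pushes a zipped artifact to Lambda.
# Function name, role ARN, and artifact path are hypothetical placeholders.
import boto3
from botocore.exceptions import ClientError

FUNCTION_NAME = "resize-image"
ARTIFACT = "build/function.zip"

lambda_client = boto3.client("lambda")

with open(ARTIFACT, "rb") as f:
    zipped_code = f.read()

try:
    # Update the code if the function already exists...
    lambda_client.update_function_code(FunctionName=FUNCTION_NAME,
                                       ZipFile=zipped_code)
    print(f"updated {FUNCTION_NAME}")
except ClientError as err:
    if err.response["Error"]["Code"] != "ResourceNotFoundException":
        raise
    # ...otherwise create it on the first deploy.
    lambda_client.create_function(
        FunctionName=FUNCTION_NAME,
        Runtime="python3.9",
        Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # hypothetical
        Handler="app.handler",
        Code={"ZipFile": zipped_code},
    )
    print(f"created {FUNCTION_NAME}")
```

In a CI job this script would run after the build stage produces the zip; credentials would come from the runner's environment rather than anything in the repo.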
Deepak Singh, AWS & Abby Fuller, AWS | AWS re:Invent 2019
>> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners.
>> Welcome back. About 65,000 here in attendance at AWS re:Invent 2019. You're watching theCUBE, and I am Stu Miniman, the host for this segment, and happy to welcome back to our program two of our CUBE alumni. Sitting to my right is Abby Fuller, who is the principal technologist for containers and Linux with Amazon Web Services. Sitting to her right is Deepak Singh, Vice President of Compute Services, also with AWS. Thank you so much for joining us on the program.
>> Thanks for having us.
>> Thank you for having us.
>> Stu: All right, so as I said, both of you have been on the program, and boy, your team's been busy. I mean, one of the things I love, first of all, is that there is a roadmap for many of the things that are going on, so we do understand what's happening in the future. But, Deepak, maybe just tell us a little bit about your group and kind of the main focus, and let's start there.
>> Deepak: So, my group goes beyond containers. It includes things like Linux systems and our high performance computing organization. But for the purposes of re:Invent, let's stick to the containers org. The containers org owns all of AWS's containerized products, so that includes ECS, EKS, Fargate. We also own our service mesh offering, which is App Mesh. So the way I like to think about it is, it's the right-way-to-build-applications-in-the-modern-era group, and it's a team that stays quite busy, because this is such a hot space to be in.
>> Stu: All right, so we're going to talk mostly about containers, but your shirt is talking about the Linux piece. Tell us what your shirt says.
>> Deepak: Ahh, yes, this is the only right way to spell AMI. Unfortunately, in my previous interview, when I was in New York, Corey was at the table interviewing me, and I wore this just for him.
>> Stu: So, so, so, if it is AMI, then we're going to spend some time talking about EKS.
>> Yes. (Abby chuckling)
>> And Esses.
>> Yes, which one? (Deepak laughing) We will figure that out. AWS is AWS, I think, is how we will do it. So, absolutely, we're not going to get into ontological arguments there. But, Abby, there are a whole lot of new services in the container space. I want to put a pin in Fargate and set it aside for a second.
>> Abby: Sure.
>> 'Cause there are lots of things we want to dig into there. But a lot of other things have been announced in, like, the last month or so. Maybe give us a little bit of a view.
>> Yeah, I think a couple of big ones for us. So, Fargate Spot: run on spare Fargate capacity for up to a 70% discount off of standard Fargate pricing. (mumbling) Things like vulnerability scanning for images on ECR. We launched, over the last few days at re:Invent, capacity providers for ECS, which let you split your traffic between on-demand and spot instances in the same cluster. We also launched something called cluster auto scaling, so some finer-grained control over how your cluster scales in on ECS.
>> Stu: All right, I want to take a quick step back. So, Fargate, announced a couple of years ago.
>> Deepak: Yep.
>> It was first supported only on ECS. Definitely, I've talked to lots of customers very excited about it.
>> Deepak: Yep.
>> Maybe talk to us a little bit about how Fargate fits in the whole container discussion.
>> Deepak: Yeah.
>> And then we'll hit the news.
>> Yeah, and, actually, a good way to think about it is from a native AWS standpoint.
If you're a customer running containers, the way we think about our services is: you need a place to store those containers, and that's ECR. You could use your own registry, you could pick a third-party one, that's fine, but most of our customers just use ECR. Then you pick your container scheduler; that's either ECS or EKS, depending on your preferences. And then you need to figure out where you want to run your containers. And, of course, when we launched ECS five years ago at re:Invent, there was only one way to do it: on EC2 instances. And two years ago, we added what in our mind is the cloud-native, natural way to run containers, which is Fargate. So Fargate serves as a runtime compute engine for containers, and you can pick your scheduler on top of it and go make hay with your applications. So that's kind of how we think the hierarchy works, and it works pretty well for most customers. They'll often start off with EC2 and move to Fargate over time, or mix and match, and it's kind of fascinating to see how many customers of ours have decided they want to be all-in on Fargate. Which is a great place to be for us.
>> Stu: Okay, but the big news, which actually got a good cheer in the keynote yesterday, is Fargate for EKS. So what's the importance of this?
>> Yeah, I think (mumbling) it's something we've been talking to customers about for a while, and it's the ability to run your Kubernetes pods on Fargate capacity. I think it's really speaking to folks who love Kubernetes as a tool and as a community, but for whom it can be a pretty significant lift operationally. And with Fargate they can use the APIs that they want, or the open source tooling that they want, but they don't have to worry about provisioning and managing that EC2 capacity.
>> Stu: All right, so Deepak, I actually was having a conversation with a good AWS customer yesterday, and he said he actually started out on Kubernetes before EKS existed, on AKS, and migrated over to AWS when EKS became available. And he said Fargate really interests him, but one of the main reasons he does Kubernetes is that he wants to have some portability. He has some concerns: he knows what services he uses and how he'd move something if he needed to. So what do you say to a customer who says, Fargate's interesting me, but I'm concerned I'm going to get locked in if I buy into this model?
>> I would say that he shouldn't worry about it, because of two reasons, maybe more than two. One is: the unit in Fargate that you interact with and work on is the same unit that you interact and work with in Kubernetes in general, which is the Kubernetes pod. It's the pod spec; it's just a pod, no difference. You can take that same pod and run it on Timbuktu cloud and it will still run. So that's part one. The other one is that he's using the same tools; he's using kubectl. And in fact you can mix and match within your Kubernetes clusters. You can run 95% of the application on Fargate and five percent of it on EC2; all they are doing is changing the pod annotation, and if you decide you want to run none of it on Fargate, you just flip that and suddenly everything is running on EC2 capacity. So I actually don't think there's that much to worry about, because it's just the same pod. It's still the same tooling, and the operational model is a lot simpler.
>> So Abby, we've talked to you at DockerCon and KubeCon, and simplicity is not the word that we hear when we talk about this whole container space.
>> Abby: Sure.
>> Traditionally. How are we doing overall?
I mean, I'm watching the community here, and it's like, wait, Fargate sounds cool, but where are my persistent volumes? You know, give us a little bit of the road map as to where we are in making this, you know, simple, and managing more of my environment.
>> Yeah, I think the way that I like to look at it, right, is that we've spent, and it's not just us, a lot of time looking at things like patterns and abstractions that help make these workflows easier for developers. And I think one of the launches that's interesting in that vein is the ECS CLI version two, which we launched a few days ago. And that will help you deploy, like, a production-ready containerized application. It'll help you with the CI/CD angle, it'll help you with the monitoring and the observability. So I think it's about abstracting away, and adding patterns on top, to make some of these common operations and workflows really modular and repeatable and extendable. And then it's about having the ability to customize where I need to, so being able to run on Fargate, but also to use workloads running on EC2 where I need to, and being able to mix and match, and to focus my energy where I really get the benefit from customizing, rather than having to do the whole thing from the ground up.
>> Stu: You know, feedback I've gotten from my friends in the app dev community is that hybrid is more and more becoming a standard deployment model. Obviously things like Outposts and some of the other solutions from Amazon are extending the AWS model of doing things, but many of them also look at just Kubernetes,
>> Deepak: Yep
>> as a layer to do that. How should we be thinking of this from your solutions?
>> Deepak: Yeah, so we thought about both there. If you noticed, in Andy's announcement yesterday, among the list of services available on day one were ECS and EKS. And actually App Mesh, well, it wasn't on the list, but App Mesh is available on Outposts on day one as well. I think when we think about customers who want to run and stay in their own capacity and their own data centers, because EKS is built on upstream Kubernetes with no modifications, the same application, as long as it's running on upstream Kubernetes on their side, will just run on EKS. And there are a number of models that work there. A great model is the kind that Cisco is running, where they will manage it for you in both places. They become the first person you call, and on AWS it's just EKS, and on premises it's what Cisco has decided to build. Our ProServe team will also help you, for example. So I think there are a number of modes that work there, but the key part, and it's the reason why we have stayed with upstream Kubernetes, is we never want to make someone say, oh, we can't use EKS because they've (mumbling) somehow modified Kubernetes. And I think that is super important for us.
>> Stu: Yeah. I mean, Abby, I know you're an active participant in the community. What do you say to people that look at Amazon? Deepak, you talked a little bit about Fargate, that you don't need to be concerned, it's the same images. So speak a little bit, maybe if you could, to Amazon's community participation, and what you're generally hearing from your customers.
>> Abby: Yeah, so I think the root of it, right, is that we're all building with the same building blocks. I think something that Amazon has been really strong at is open sourcing primitives. So, Firecracker last year, I think, was a good example.
And we, I think we do really well with saying, we built this to solve a problem for us, but we think you might want it too. And in terms of community support, we have been open sourcing more over the last year; we open sourced our road maps in November last year. We run developer previews off the GitHub road map, App Mesh has a public preview channel as well, so we've been trying to involve the community earlier and earlier in our product development life cycle. That way, especially with things like service mesh, where it's really pretty new, we can make sure we have the voice of all our users and our customers in there as early as possible, and get their hands on keyboards to try it out as soon as they can.
>> Deepak: And actually a great example of that is the work that Weaveworks has done. Talking about people who can run Kubernetes on AWS and on premises: they have this project called "Weave Ignite" where they're basically running Kubernetes on Firecracker on premises, and then on AWS a customer just runs on EKS, as an example. And that part, not everybody realizes that this is possible, but the fact that people are doing it excites us a lot.
>> Stu: All right, I know you're both meeting with a lot of customers this week; maybe, Deepak, start with you. Any surprises or any misconceptions, other than, I know, there are a lot of people wearing teal shirts with a certain pronunciation? But bring us inside some of the mindset of your customers here.
>> Deepak: So actually, our conversations are very consistent. I think the community as a whole, our customer base as a whole, they all want to get to the same place: how can we move really quickly? How can we give our developers the ability to be more productive, without putting our company at risk, having the right level of governance, having the right controls in place? And I think that's a mainly consistent theme across the board. I guess the one thing we do have to remind people of a little bit is that a lot of people often think Fargate sits on top of ECS and EKS; it sits below that, and actually, the fact that now there is an EKS Fargate, people understand that more quickly. Before that it was a little trickier. But other than that, I think our customers almost all come from different places and have very similar problems: they want developers to move quickly and deliver business value, and the platform engineering teams that we speak to want to figure out how to get out of the way. And that's been great!
>> It's interesting, Abby, I'd love your viewpoint from the developer community. Andy talked on stage about how, to do true transformation, there needs to be leadership driving things down. I'm curious what you're seeing from the customers you've talked to, the people you've met, 'cause many of these tools we're talking about, you know, started in the developer world.
>> Yeah, I mean, there's been, like, an increasing amount of curiosity around the cultural side of it. So how can I get my team to work like that? How can I get my team to ship more safely, more quickly, while getting operations out of the way? And I think you see more and more interest in that. So how can we build the tools that work the way our developers do? So we get all the things that we want, so security and compliance and availability, and the developers get what they want, which is easy workflows that match the way they want to work. So you see a lot of curiosity around that.
So how do we get to the place where we can run everything on Fargate, and benefit from all the new serverless-style (mumbling)?
>> Stu: All right, real quick, just to give you the final word: any websites, or events, or things that people should know about when they want to learn more and get engaged?
>> Yeah, I think I'd send people first and foremost to the GitHub public road maps. It is the easiest, fastest way to let us hear your voice and what you want to see us build next. I think, especially these next couple of weeks coming out of re:Invent, as people start to get their hands on what we announced, I'm really curious for them to take that back and then be like, this is great, but here's what I want to see next. And I'd love to see that happen on the road maps.
>> Yeah, about a month or so ago, maybe a couple of months, we started a dedicated blog for containers on the AWS site. One of the nice things about it is that a lot of the contributors to that blog are principal engineers and engineers in our organization. For example, one of the principal engineers in my org, Malcolm Featonby, has a whole blog post on how to think about scaling and best practices. So I would encourage people who've now seen what we have, all the new services we're developing: that's where you'll get the details on how you can use them and how we built them, and I encourage everybody to go to that blog site and check out what we're doing.
>> Stu: All right, Deepak, Abby, congratulations to you and your team, great progress, and we really appreciate that we're able to look at the road map. We definitely hope to catch up with you both soon.
>> Abby: Thanks so much!
>> Thank you so much.
>> Stu: All right, I'm Stu Miniman, and we'll be back with much more in just a second. Thanks for watching theCUBE. (Techno music)
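To ground two of the launches Abby lists earlier, image scanning on ECR and ECS capacity providers that split a service between on-demand Fargate and Fargate Spot, here is a hedged boto3 sketch. The cluster, repository, service, and task definition names are hypothetical, the base/weight values are just one reasonable split, and the cluster is assumed to already have the FARGATE and FARGATE_SPOT capacity providers enabled.

```python
# Hedged sketch of two of the launches discussed above, using boto3.
# All names (cluster, repo, service, task definition, subnets) are hypothetical.
import boto3

ecr = boto3.client("ecr")
ecs = boto3.client("ecs")

# 1) Kick off a vulnerability scan for an image already pushed to ECR.
ecr.start_image_scan(
    repositoryName="web-frontend",
    imageId={"imageTag": "1.4.2"},
)

# 2) Create a service whose tasks are split across regular Fargate and
#    Fargate Spot via a capacity provider strategy: keep a baseline of two
#    tasks on on-demand Fargate, then place three Spot tasks for every
#    additional on-demand task.
ecs.create_service(
    cluster="prod",  # assumed to have FARGATE / FARGATE_SPOT enabled
    serviceName="web-frontend",
    taskDefinition="web-frontend:7",
    desiredCount=10,
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```

The base/weight split is the knob that trades cost for interruption risk per service, which is what makes the "up to 70% discount" practical for fault-tolerant workloads.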
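Deepak talks about flipping the same pods between Fargate and EC2 capacity on EKS. In practice that matching is usually expressed through a Fargate profile: pods whose namespace and labels match the profile's selectors are scheduled onto Fargate, and everything else lands on the cluster's EC2 nodes. A hedged sketch follows, assuming boto3; the cluster name, role ARN, subnets, and label are hypothetical.

```python
# Hedged sketch: create an EKS Fargate profile so that pods labeled
# compute=fargate in the "web" namespace run on Fargate, while unlabeled
# pods keep running on the cluster's EC2 nodes. Names and ARNs are hypothetical.
import boto3

eks = boto3.client("eks")

eks.create_fargate_profile(
    fargateProfileName="web-on-fargate",
    clusterName="prod-eks",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-exec",
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    selectors=[
        {"namespace": "web", "labels": {"compute": "fargate"}},
    ],
)

# Shifting the 95/5 split Deepak mentions is then a matter of which pods
# carry the matching label: label them and they move to Fargate, remove the
# label and they go back to EC2 capacity. The pod spec itself is unchanged.
```

That is the sense in which the unit of work stays "just a pod": the same spec and the same kubectl workflow apply, and only the placement selector changes.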