
Search Results for KubeCon:

Owen Garrett, Deepfence | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with my host Paul Gillin, senior editor, enterprise architecture, at SiliconANGLE. We are continuing the conversation here at KubeCon + CloudNativeCon around security and app defense. Paul, were you aware there were this many security challenges native to cloud native? >>Well, there are security challenges with every new technology. And as we heard today from some of our earlier guests, containers and Kubernetes naturally introduce new variables in the landscape, and that creates potential vulnerabilities. So there's a whole industry evolving around that. Yesterday we talked very much about managing Kubernetes; today we're talking about many of the nuances of building a Kubernetes-based environment, and security is clearly one of them. >>So welcome our guest, Owen Garrett, head of products and community at Deepfence. >>Thank you. >>I'm going to start out with a pretty interesting one: security at scale is one of your taglines. >>Absolutely. >>What does that mean, exactly? >>So Kubernetes is all about scale, and securing applications on Kubernetes is a completely different game from securing your traditional monolithic legacy enterprise applications. Kubernetes grows, it scales, it's elastic, and the perimeter around a Kubernetes application is very, very porous. There are lots of entry points. So you can't think about securing a cloud native application the way that you might have secured a monolith. Securing a monolith is like securing a castle: you build a wall around it, you put guards on the gate, you control who comes in and out, and the job is more or less done. Securing a cloud native application is like securing a city. People are roaming through the city without checks and balances, there are lots of services in the city that you've got to check and monitor, and it's extremely porous. So all of the security problems in Kubernetes and cloud native applications are amplified by scale: the size of the application, the number of nodes, and the complexity of the application and the way that it's built and delivered. >>That's kind of a chilling phrase, "the perimeter is porous." Companies are adopting Kubernetes right now, evidently bringing in all of these new vulnerability points. Do they know what they're getting into? >>Many don't. There's a huge amount of work around trying to help organizations make the transition from thinking about applications as single components to thinking about them as microservices with multiple little components. It's a really essential step, because that's what allows businesses to evolve, to digitize, to deliver services using APIs and mobile apps. So it's a necessary technical change, but it brings with it lots of challenges, and security is one of the biggest. >>So as I'm thinking about that porous nature, I can't help but think: my traditional IPS does a really great job of blocking access to that centralized data center. As I think about that city example you gave me, I'm thinking, you know what, I have intruders, or not even intruders, I have bad actors within my city.
>>How does Deepfence help protect me from those bad actors that are inside or roaming the city? >>So this is the wonderful, unique technology we have within Deepfence. We install little lightweight sensors on each host that's running your application: on Kubernetes nodes as a DaemonSet, against Fargate instances, on Docker hosts, on bare metal. Those sensors install little taps into the network using eBPF, and they monitor the workloads. So it's a little bit like having CCTV cameras throughout your city tracking what's happening. There are a lot of solutions that look at what happens on a workload, traditional XDR solutions that look for things like process changes or file system changes. We gather those signals, indicators of compromise, but those alone are too little, too late; they tell you that a breach has probably already happened. What Deepfence does is also look at the network. We gather network signals. We can see someone using a reconnaissance tool roaming through your application, sending probe traffic to try and find weak points. >>We can see them then elevating the level of attack and trying to weaponize a particular exploit or vulnerability that they find. We can see everything that comes into each of the components, not just at the perimeter but right inside your application. We see what happens in those components, process and file integrity changes, and we see what comes out, an attempt to exfiltrate something that looks like a database file or an /etc/passwd file. We put all of these subtle signals together, the indicators of attack, the network-based signals, and the indicators of compromise, and we build a picture of the threats against each of the workloads in your cloud native application. There's lots and lots of background recon traffic; we see it, and you generally don't need to worry about it, it's just noise. But as that elevates and you see evidence of exploits and lateral spread, we identify that and we'll let you know, or we can step in and proactively block the behavior that's causing those problems. So we can stop someone from accessing a component, or if a component is compromised, we can freeze it and restart it. This is a key part of the technology within our ThreatStryker security observability platform. >>False alerts are the bane of the security industry's existence. What do you do to protect against those? >>We use a range of heuristics and a small degree of machine learning to try and piece together what's happening. It's a complicated picture. Some of your viewers will have heard of the MITRE ATT&CK matrix, a dictionary of techniques, tactics, and procedures that attackers might use in order to attack an infrastructure. We gather the signals, those TTPs, and we then build a model to try and understand how those little signals piece together. So maybe there's a guy in a striped vest who is trying the doors in your city, a low-level criminal who isn't getting anywhere. We'll pick that up, and that's low risk. But then if we see that person infiltrate a building because they found an open door, that raises the level of risk. So we monitor the growing level of risk against each workload. >>And once it hits a level of concern, we let you know, and you can then forensically go back in time and look at all of the signals that surround it.
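To picture the deployment pattern Owen describes, one lightweight sensor per node shipped as a Kubernetes DaemonSet, here is a minimal sketch using the official Kubernetes Python client. The image name, namespace, and privilege settings are illustrative assumptions, not Deepfence's actual manifest.

```python
# Minimal sketch: roll out a hypothetical per-node security sensor as a DaemonSet.
# Image, namespace, and privilege settings are illustrative placeholders only.
from kubernetes import client, config

def deploy_sensor_daemonset(namespace: str = "security") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    container = client.V1Container(
        name="node-sensor",
        image="registry.example.com/sensor-agent:latest",            # placeholder image
        security_context=client.V1SecurityContext(privileged=True),  # eBPF taps typically need elevated rights
        resources=client.V1ResourceRequirements(
            requests={"cpu": "100m", "memory": "128Mi"},
            limits={"cpu": "250m", "memory": "256Mi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "node-sensor"}),
        spec=client.V1PodSpec(
            host_network=True,   # observe node-level traffic
            host_pid=True,       # observe host processes
            containers=[container],
        ),
    )
    daemonset = client.V1DaemonSet(
        api_version="apps/v1",
        kind="DaemonSet",
        metadata=client.V1ObjectMeta(name="node-sensor", namespace=namespace),
        spec=client.V1DaemonSetSpec(
            selector=client.V1LabelSelector(match_labels={"app": "node-sensor"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_daemon_set(namespace=namespace, body=daemonset)

if __name__ == "__main__":
    deploy_sensor_daemonset()
```

Because a DaemonSet schedules exactly one copy per node, adding nodes automatically adds coverage, which is the property that makes this pattern fit the "security at scale" discussion above.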
So we don't just tell you there was an alert and a file was compromised in your workload, do something about it. We tell you the file was compromised, and that prior to that there were these events, process failures; those could have been caused by network events that correlate to a vulnerability we know about, and those in turn could have been discovered by recon traffic. So we help you build that entire attack picture up. Every application is different. You need the context to understand and interpret the signals that a solution like ThreatStryker gives you, and we give you that context. >>So I would push back. If I'm a platform team, I'd say, you know what, I have a service mesh. I have trusted traffic going from trusted sources. I'm cutting off the problem even before it happens. Why should I use Deepfence? >>A service mesh won't cut off the problem; it'll just hide the problem, because a service mesh will just encrypt the traffic between each of the components. It doesn't stop the bad traffic flowing. If a component is compromised, people can still talk to another component, and the service mesh happily encrypts it and hides it. We love service meshes, because we can decrypt the traffic, or we can inspect the individual application components before they talk to the mesh sidecar. So we can pull out and see the plain-text traffic, and we can identify things that other tools wouldn't have a hope of identifying. >>You just triggered something. >>Yeah. >>A lot of companies do not like decrypting that traffic after it's been sent. They don't want anyone else, including security tools, to see it. How do you serve those clients? >>We serve those clients by having an architecture that sits entirely on premises, in their infrastructure. Their sensitive data never leaves their network, their VPCs, their boundary. They install a ThreatStryker console, the tool that does all of the analysis and makes the protection decisions, and they run that themselves. They deploy the ThreatStryker sensors in their production environment, and those talk over secure, authenticated links to the console. So everything sits within their purview, their degree of control. >>If they're building a cloud application, though, or a hybrid cloud application, how do you connect? How do you deal with the cloud side? >>Whether their production environments sit next to the ThreatStryker console or run on remote clouds, our sensors will run in all of those environments, and the console will manage a complex hybrid environment. It will show you traffic running in your Kubernetes cluster on AWS, traffic running on your VMs on Google, traffic running in your Fargate instances, again on AWS, and on your on-prem instances. It gathers that data securely from each of those remote places and sends it to the console that you own and operate. So you have full control over what is captured; it's encrypted, it's authenticated, it's streamed back, and it never leaves your level of control. >>Talk to me about the overhead. How is this deployed and managed in my environment? >>There are two components, as we've learned. We have the console; all of the work is done on the console, any necessary decryption, all the calculation. That runs on a Kubernetes cluster that you would deploy and you would scale, so that's fully in your control.
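The "sensors talk over secure, authenticated links to a console you own" idea can be pictured with a rough sketch of an agent shipping a batch of signals over mutually authenticated TLS. The endpoint, certificate paths, and payload shape are assumptions made for illustration, not Deepfence's wire protocol.

```python
# Rough sketch: ship collected signals to a self-hosted console over mutual TLS.
# URL, certificate paths, and payload fields are illustrative assumptions.
import json
import requests

CONSOLE_URL = "https://console.internal.example:8443/api/v1/signals"   # hypothetical endpoint
CLIENT_CERT = ("/etc/sensor/tls/client.crt", "/etc/sensor/tls/client.key")
CONSOLE_CA = "/etc/sensor/tls/console-ca.crt"

def ship_signals(signals: list[dict]) -> None:
    """POST a batch of indicator events. The console verifies our client cert,
    and we verify the console against its private CA, so nothing crosses the
    customer's trust boundary unauthenticated."""
    resp = requests.post(
        CONSOLE_URL,
        data=json.dumps({"events": signals}),
        headers={"Content-Type": "application/json"},
        cert=CLIENT_CERT,     # client certificate and key for mutual TLS
        verify=CONSOLE_CA,    # pin the console's certificate authority
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    ship_signals([{"type": "network_probe", "workload": "payments", "severity": "low"}])
```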
Then you need to install little sensors on each of your production environments to bring the data back to the console. >>Now, are those on pods, or are those running inside of containers themselves? >>They are container-based. They're typically deployed as a DaemonSet, so one instance per node in your Kubernetes cluster, and we have put a lot of engineering work into making those as lightweight as possible. They do very little analysis themselves; they do a little bit of pre-filtering of network traffic to reduce the bandwidth, and then they pass the packets back to the management console. Our goal is to have minimal impact on customers' production environments, so that they can scale and operate without an impact on the performance or availability of their applications. And we have customers who are monitoring services running on literally thousands of Kubernetes nodes, streaming the data back to their management console, and using that to analyze, from a single point of control, what's going on in their applications. >>So we hear time and again CIOs complaining that they have too many point security products, an average of 87 in the enterprise, according to one survey. Aren't you just another? >>And that is the big challenge with security. There is no silver-bullet product that will secure everything you have. What you're securing scales over space, from your infrastructure to the containers and the workloads and the application code. It scales over time: are you putting security measures in at shift-left development, when you deploy, or are you securing production? And it scales over environments. There is no silver bullet that will provide best-of-breed security across that entire set of dimensions. There are large organizations that will present you with holistic solutions, which are a bunch of different solutions with the same logo on them, bundled together under the same umbrella. Those don't necessarily solve the problem. You need to understand the risks your organization faces, and then what the best-of-breed solutions are for each of those risks and for the life cycle of your application. At Deepfence, we are about securing your production environment. >>Your developers have built applications. They've secured those applications using tools like Snyk, and they've ticked and signed off saying, with this list of documented vulnerabilities, my application is secure; it's now ready to go into production. But when I talk to application security people and to ops people and I ask, are the applications in your Kubernetes environment secure, they say, look, honestly, I don't know. The developers have signed off on something, but that's not what I'm running. I've had to inject things into the application, so it's different. There could have been issues discovered after the developers signed it off, and the developers made exceptions. But also, 60 to 80% of the code I'm running in production didn't come from my development team; it's infrastructure, it's third-party modules. So when you look at security as a whole, you realize there are so many axes that you have to consider. There are so many points along those axes, and you need to figure out, in a kind of Venn-diagram fashion, how you are going to address security issues at each of those points.
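The "a little bit of pre-filtering on the node to reduce bandwidth" point can be sketched as a simple severity gate: the agent drops background recon noise locally and only forwards events worth correlating. The event shape, categories, and threshold below are invented for illustration.

```python
# Toy sketch of node-side pre-filtering: drop low-value noise before it is
# shipped to the console. Severity scores and event fields are illustrative only.
from typing import Iterable, Iterator

NOISE_TYPES = {"background_recon", "benign_scan"}   # assumed categories
MIN_SEVERITY = 3                                    # assumed threshold on a 0-10 scale

def prefilter(events: Iterable[dict]) -> Iterator[dict]:
    """Yield only the events that are worth the bandwidth to ship upstream."""
    for event in events:
        if event.get("type") in NOISE_TYPES and event.get("severity", 0) < MIN_SEVERITY:
            continue   # keep it on the node; pure noise
        yield event

# Example: only the suspicious process execution survives the filter.
sample = [
    {"type": "background_recon", "severity": 1, "workload": "frontend"},
    {"type": "process_exec", "severity": 7, "workload": "frontend", "binary": "/tmp/xmrig"},
]
print(list(prefilter(sample)))
```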
So when it comes to production security, if you want a best-of-breed solution for finding vulnerabilities in your production environment, ThreatMapper, our open source project, will do that. And then for monitoring attack behavior, ThreatStryker, the enterprise product, will do that. Deepfence is a great set of solutions to look at. >>Owen, thanks for stopping by. Security in layers is a recurring theme we hear security experts talk about; no one solution will solve every problem when it comes to security. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high tech coverage.

Published Date : May 19 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townsend | PERSON | 0.99+
Paul Gillon | PERSON | 0.99+
Keith Townson | PERSON | 0.99+
yesterday | DATE | 0.99+
Paul | PERSON | 0.99+
Owen Garrett | PERSON | 0.99+
two components | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Kubernetes | TITLE | 0.98+
Europe | LOCATION | 0.98+
each | QUANTITY | 0.98+
Valencia Spain | LOCATION | 0.98+
Cloudnativecon | ORGANIZATION | 0.98+
each host | QUANTITY | 0.98+
today | DATE | 0.98+
Valencia Spain | LOCATION | 0.98+
Kubecon | ORGANIZATION | 0.97+
one | QUANTITY | 0.96+
2022 | DATE | 0.96+
one survey | QUANTITY | 0.96+
Deepfence | ORGANIZATION | 0.95+
one instance | QUANTITY | 0.94+
single point | QUANTITY | 0.93+
Garrett | PERSON | 0.93+
each workload | QUANTITY | 0.89+
Google | ORGANIZATION | 0.86+
87 in | QUANTITY | 0.8+
one solution | QUANTITY | 0.8+
80% | QUANTITY | 0.8+
Docker | TITLE | 0.76+
single components | QUANTITY | 0.73+
red hat | ORGANIZATION | 0.72+
Kubernetes | ORGANIZATION | 0.71+
60, | QUANTITY | 0.7+
Silicon | ORGANIZATION | 0.7+
Damon | TITLE | 0.67+
lots of services | QUANTITY | 0.65+
SNCC | ORGANIZATION | 0.64+
KU con | ORGANIZATION | 0.64+
con | ORGANIZATION | 0.64+
so many points | QUANTITY | 0.53+
Coon and cloud native con | ORGANIZATION | 0.51+
Fargate | TITLE | 0.49+
cloud native | EVENT | 0.49+
Coon | ORGANIZATION | 0.46+
cloud native con | EVENT | 0.43+
axis | COMMERCIAL_ITEM | 0.38+
axis | TITLE | 0.28+

Varun Talwar, Tetrate | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. It's near the end of the day, but that's okay, we have plenty of energy because we're bringing it. I'm Keith Townsend, along with my co-host Paul Gillin. Paul, this has been an amazing day thus far. We've talked to some incredible folks, and you got a chance to walk the show floor, so I'm really excited to hear: what's the vibe of the show floor? 7,500 people in Europe, following the protocols, but getting stuff done. >>Well, first I have to say that I haven't traveled for two years, so getting out to a show by itself is an amazing experience, but a show like this, with all of the energy and the crowd, it was enormously crowded at lunchtime today. It's hard to believe how many people have made it all the way here. Out on the floor the booths are crowded, and the demonstrations are what you would expect at a show like this: lots of code, lots of block diagrams, lots of architecture. I think the audience is eating it up. They're on their laptops, they're coding on their laptops, and this is very much symbolic of the crowd that comes to a KubeCon. It's just a delight to see them out here. So much fun. >>So speaking of lots of code, we have Varun Talwar, co-founder of Tetrate. We just saw the news; I didn't realize Istio becoming part of the CNCF was the latest. >>Yeah, Istio was always one of those service mesh projects that was very widely adopted, and it's great to see it going into the Cloud Native Computing Foundation. I think what happened with Kubernetes, which just became the de facto container orchestrator, a similar thing is happening with Istio and service mesh. >>So, I'm sorry, Keith: what's the process like of becoming adopted by and incubated by the CNCF? >>Yeah, it's pretty simple. It's an application process into the foundation where you say what the project is about, how diverse your contributor base is, and how many people are using it. It goes through a review with the TOC, a review of all the users and contributors, and if you see a good base of deployments in production and a diverse set of contributors, then you can basically be part of the CNCF. And as you know, CNCF is very flexible on governance, basically bring your own governance, and then the projects can seamlessly go in, get into incubation, and gradually graduate. >>Another project close and dear to you: Envoy. Now, I've always considered Envoy just as what it is; I've always used it as a load balancer type thing, so I've always considered it somewhat of a gateway proxy. But Envoy Gateway was announced last week. >>So Envoy has basically won the data plane war for cloud native workloads, and this happened over the last five years. Envoy was announced even way before Istio, and it is used in various deployment models. You can use it as a front load balancer, you can use it as an ingress in Kubernetes, you can use it as a sidecar in a service mesh like Istio. It's lightweight, dynamically programmable, and very open, with a wide community. But what we saw when we looked at the Envoy base was that it still wasn't very approachable for application developers.
When you see the nouns that it uses, clusters and so on, it's not what an application developer is used to. So Envoy Gateway is really an effort to make Envoy even stronger out of the box for an application developer to use it as an API gateway. >>Right, because if you think about it, ultimately developers start deploying workloads onto their Kubernetes clusters, they need some functionality like an API gateway to expose their services, and you want to make it really, really easy and simple. I often say: what NGINX was to static websites, Envoy Gateway will be to APIs. And it's really the community coming together. We are a big part, but also VMware, as well as end users, in this case Fidelity, who is investing heavily into Envoy and API gateway use cases, joining forces and saying, let's do this in upstream Envoy. >>I'd like to go back to Istio, because this is a major step in Istio's development. Where do you see Istio coming into the picture? Kubernetes is already broadly accepted. Is Istio generally adopted as an after-step to Kubernetes, or are they increasingly being adopted together? >>Yeah, usually it's adopted as a follow-on step, and the reason is primarily the learning curve. It just takes a while for people to get used to Kubernetes, understand the concepts, and get applications going. Istio was made to solve three big problems there: observability, traffic management, and security. As people deploy more services, they figure out, okay, how do I connect them? How do I secure all the connections, and how do I do more fine-grained routing? I'm doing more frequent deployments with Kubernetes, but I would like to do canary releases to make safer rollouts. Those are the problems that Istio solves. And I don't really want to know all the node-level and CPU-level metrics, it's good to know them, but really what I want to know is how my services are performing. Where is the latency? Where is the error rate? Those are the things that Istio gives you out of the box. So that's a very natural next step for people using Kubernetes. And Tetrate was really formed as a company to enable enterprises to adopt Istio, Envoy, and service mesh in their environment. We do everything from running an academy for courses and certifications on Envoy and Istio, to a distribution that is compliant, with various builds and tooling, as well as a whole platform on top of Istio to make it usable and deployable in a large enterprise. >>So paint the end to end for me, for Istio and Envoy. I know they can be used in similar fashions, like sidecars, but how do they work together to deliver value? >>Yeah. If you step back from technology a little bit and look at what customers are doing and facing, really it is about this: they have applications, they have some new workloads going into Kubernetes and cloud native, they have a lot of legacy workloads, a lot of workloads on VMs, and, with different teams in different clouds or due to acquisitions, they're very heterogeneous. Now, Tetrate's mission is to power the world's application traffic, but really the business value we are going after is consistency of application operations.
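As a concrete picture of the canary-release point Varun makes above, here is a toy sketch of weighted traffic splitting, the routing behavior a mesh like Istio applies declaratively between two versions of a service. It illustrates the idea only; it is not Istio's implementation or API, and the service names and weights are made up.

```python
# Toy illustration of weighted (canary) routing between two service versions.
# A mesh does this declaratively in the data plane; this only shows the idea.
import random

def pick_backend(canary_weight: int) -> str:
    """Route roughly canary_weight percent of requests to v2, the rest to v1."""
    return "reviews-v2" if random.randint(1, 100) <= canary_weight else "reviews-v1"

def simulate(requests: int = 10_000, canary_weight: int = 5) -> None:
    counts = {"reviews-v1": 0, "reviews-v2": 0}
    for _ in range(requests):
        counts[pick_backend(canary_weight)] += 1
    for backend, n in counts.items():
        print(f"{backend}: {n} requests ({100 * n / requests:.1f}%)")

if __name__ == "__main__":
    simulate()   # expect roughly 95% v1 / 5% v2; raise the weight as confidence grows
```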
And I'll tell you how powerful that is, because the more places you can deploy Envoy into, and the more places you can deploy Istio into, the more consistency you can get for the value pillars of observability, traffic management, and security. And really, if you think about the journey for an enterprise migrating workloads into Kubernetes, or from data centers into cloud, the challenges are around security and connectivity. Because if it's the Kubernetes fabric, the same Kubernetes app in the data center can be deployed exactly as-is in the cloud. So why is it hard to migrate to cloud? The challenges come in the security and networking layer. >>So let's talk about that with some granularity, and you can maybe give me some concrete examples. As I think about a hybrid infrastructure where I have VMs on premises and cloud native stuff running in the public cloud, or even cloud native next to VMs, I do security differently when I'm in the VM world. I say: this IP address can't talk to this Oracle database server. That's not how cloud native works. If I have a cloud native app talking to an Oracle database, there's no IP address. So how do I secure the communication between the two? >>Exactly, and I think you hit it straight on the head. With things like Kubernetes, an IP is no longer really a valid noun, because things will auto-scale, either from Kubernetes or because the cloud auto-scales. So the noun now is the service. I could have many instances of it, and they can scale up and down, but what I'm saying is: this service, some app server, some application, can talk to the Oracle service. And what we have done with Tetrate Service Bridge, which is why we call our platform Service Bridge, because it's all about bridging all the services, is that whatever you're running on the VM can be onboarded onto the mesh as if it were a Kubernetes service. Then my policy that this service can talk to this service is the same in Kubernetes, the same for Kubernetes talking to a VM, the same for VM to VM, both in terms of access control and in terms of encryption. Because the Envoy proxy goes everywhere and the traffic goes through it, we take care of distributing certs and encrypting everything, and that is what leads to consistent application operations. That's where the value is. >>We're seeing a lot of activity around observability right now, a lot of different tools, both open source and proprietary. Istio is certainly part of the OpenTelemetry project, I believe; are you part of that? >>Yes. >>But customers are still piecing together a lot of tools on their own. Do you see a more coherent framework forming around observability? >>I think very much so, and there are layers of observability. The thing is, if we tell you there is latency between these two services at the L7 layer, the first question is: is it the service, is it the Envoy, or is it the network? It sounds like a very simple question; it's actually not that easy to answer, and that is one of the questions we answer in platforms like ours. But even that is not the end of it: if it's neither of those three, it could be the node, it could be the hardware underneath.
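To make the shift from IP-based rules to service-based rules that Varun describes more concrete, here is a small sketch of an allow list keyed on service identities rather than addresses, which stays valid no matter how many replicas exist or where they run. The identities and rules are illustrative assumptions; this is not Tetrate's or Istio's actual policy engine, which enforces the same idea with mTLS-backed workload identities.

```python
# Sketch: a connectivity policy keyed on service identity instead of IP addresses.
# Service names, groupings, and protocols below are illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    service: str     # logical service name, e.g. "orders"
    location: str    # Kubernetes namespace or VM group onboarded onto the mesh

# (source service, destination service, protocol) tuples that are allowed.
ALLOW = {
    ("orders", "oracle-db", "tcp-sql"),
    ("orders", "payments", "http"),
}

def is_allowed(src: Workload, dst: Workload, protocol: str) -> bool:
    """The rule is identical whether src and dst run on Kubernetes or on a VM;
    replicas can scale up and down without the policy ever changing."""
    return (src.service, dst.service, protocol) in ALLOW

print(is_allowed(Workload("orders", "prod"), Workload("oracle-db", "vm-pool"), "tcp-sql"))   # True
print(is_allowed(Workload("frontend", "prod"), Workload("oracle-db", "vm-pool"), "tcp-sql")) # False
```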
And those, you realize, are different observability tools that work at each layer. So I think there's a lot of work to be done to enable end users to go from the app, from top to bottom, to reduce what is called MTTR, the mean time to resolution of an issue: where is the problem? >>But I think with the tools being built now, it is becoming easier. One of the things we have to realize is that with things like Kubernetes we made the development of microservices easier, and that's great, but as a result more things are getting broken down, so there is more network in between. It gets harder to troubleshoot, harder to secure everything, harder to get visibility from everywhere. So I often say: if you're embarking on a microservices journey, you'd better have a platform like this; otherwise you're taking on operational cost. >>Wow, Jevons paradox: the more accessible we make something, the more it gets used, and the more complex it becomes. That's been a theme here at KubeCon + CloudNativeCon Europe 2022 from Valencia, Spain. I'm Keith Townsend, along with my host Paul Gillin, and you're watching theCUBE, the leader in high tech coverage.

Published Date : May 18 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Paul Gillman | PERSON | 0.99+
Europe | LOCATION | 0.99+
Keith Townsend | PERSON | 0.99+
Keith | PERSON | 0.99+
Varun Talwar | PERSON | 0.99+
CNCF | ORGANIZATION | 0.99+
last week | DATE | 0.99+
two years | QUANTITY | 0.99+
each layer | QUANTITY | 0.99+
7,500 people | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
IIOS | TITLE | 0.99+
two services | QUANTITY | 0.99+
two | QUANTITY | 0.99+
three | QUANTITY | 0.98+
Isto | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
2022 | DATE | 0.98+
Kubernetes | TITLE | 0.98+
Oracle | ORGANIZATION | 0.98+
Coon | ORGANIZATION | 0.97+
Tetrad | ORGANIZATION | 0.97+
Envoy | TITLE | 0.97+
Spain | LOCATION | 0.97+
Envoy | ORGANIZATION | 0.97+
Kubernetes | ORGANIZATION | 0.97+
one | QUANTITY | 0.97+
today | DATE | 0.96+
Kubecon | ORGANIZATION | 0.96+
Paul Gillon Paul | PERSON | 0.96+
Cloudnativecon | ORGANIZATION | 0.92+
Tetra | ORGANIZATION | 0.92+
first | QUANTITY | 0.9+
IIO | TITLE | 0.88+
TC | ORGANIZATION | 0.88+
one of the questions | QUANTITY | 0.86+
three big problems | QUANTITY | 0.86+
Bome Toro | ORGANIZATION | 0.84+
SIO | TITLE | 0.83+
cloud native con Europe | ORGANIZATION | 0.83+
STO | TITLE | 0.82+
last five years | DATE | 0.82+
KU con cloud native con | ORGANIZATION | 0.8+
MTTR | TITLE | 0.79+
cloud native computing foundation | ORGANIZATION | 0.79+
lots of block diagrams | QUANTITY | 0.78+
22 | QUANTITY | 0.78+
Licia Spain | LOCATION | 0.7+
code | QUANTITY | 0.7+
lots | QUANTITY | 0.67+
cube con coup con cloud | ORGANIZATION | 0.56+
Rio | ORGANIZATION | 0.55+
L seven | OTHER | 0.41+
con | ORGANIZATION | 0.4+
2022 | EVENT | 0.39+
native | COMMERCIAL_ITEM | 0.37+
Europe | COMMERCIAL_ITEM | 0.37+

Christopher Voss, Microsoft | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, with my co-host Enrico Signoretti, senior IT analyst at GigaOm. Exactly 7,500 people, I'm told. Enrico, what's the flavor of the show so far? >>It's a fantastic mood. I found a lot of people wanting to talk about what they're doing with Kubernetes, sharing their stories, some war stories that were tough. And this is where you learn, actually, because we had a lot of Zoom calls and webinars, but it is when you talk in person, "oh, I did it this way and it didn't work out very well," that you start a conversation that is really different from learning over Zoom, where everybody talks about the things that worked well, that they did right. No, it's here that you learn from other people's experiences. >>So we're talking to amazing people the whole week here on theCUBE, and fresh on theCUBE for the first time: Chris Voss, senior software engineer at Microsoft Xbox. Chris, welcome to theCUBE. >>Thank you so much for having me. >>So first off, give us a high-level picture of the environment that you're running at Microsoft. >>Yeah, so we've got 20, well, probably close to 30 clusters at this point around the globe, 700 to a thousand pods per cluster, roughly, so about 22,000 pods total. It's a pretty sizable footprint. We've been running on Kubernetes since 2018, well, actually it might be 2017, but anyway, that's kind of our footprint. >>So with all of that, let's talk about the basics, which is security across multiple containers, workloads, microservices, et cetera. Why did you and the team settle on Linkerd? >>Yeah, so previously we had our own kind of solution for managing TLS certs and things like that, and we found it to be pretty painful pretty quickly. We knew we wanted something that was a little bit more abstracted away from the developers and allowed us to move quickly. So we began investigating solutions, and a few of our colleagues went to KubeCon in San Diego in 2019, CloudNativeCon as well, and basically they just sped it all up. Funny enough, my old manager was one of the people who was there, and he went to the Linkerd booth, and they had a thing going that was like, hey, get set up with mTLS in five minutes. He was like, this is something we want to do, why not check this out? And he was able to do it. So that put it on our radar. We investigated several others, and Linkerd just perfectly fit exactly what we needed. >>So in general we are talking about security at scale, how you manage security at scale, and also flexibility. You told us about the five minutes to start using Linkerd, but again, we are talking about war stories. What kind of challenges did you find at the beginning, when you started adopting this technology? >>So the biggest ones were around getting up and running with a new service, especially in the beginning, when we were adding a new service almost every day, it felt like.
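The pain Chris describes, hand-managing TLS certificates across hundreds of services and only noticing a problem when calls start failing, is easy to picture with a small sketch like the one below, which sweeps a list of endpoints and reports how close each certificate is to expiry. The hostnames are made up; this illustrates the manual toil, not Xbox's actual tooling.

```python
# Sketch: the kind of manual cert-expiry sweep teams resort to before adopting
# mesh-managed mTLS. Hostnames are illustrative placeholders.
import socket
import ssl
import time

SERVICES = ["payments.internal.example:443", "catalog.internal.example:443"]

def days_until_expiry(host: str, port: int) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400

if __name__ == "__main__":
    for svc in SERVICES:
        host, port = svc.rsplit(":", 1)
        try:
            days = days_until_expiry(host, int(port))
            flag = "  <-- renew soon!" if days < 30 else ""
            print(f"{svc}: {days:.0f} days left{flag}")
        except Exception as exc:  # expired or broken certs surface as handshake errors
            print(f"{svc}: check failed ({exc})")
```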
And so basically it took someone going through a whole bunch of different repos, getting approvals from everyone to get the certs minted, all that fun stuff, and getting them put into the right environments and the right clusters to make sure that everybody was talking appropriately. Just the amount of work that took alone was a huge headache and a huge barrier to entry for us to quickly grow the number of services we have. >>So I'm trying to wrap my head around the scale of the challenge. When I think about certificate management, I have to do it on a small scale, and every now and again, when a certificate expires, it's just a troubleshooting pain. So as I think about that: it's not just certificates across 22,000 pods, it's certificates across 22,000 pods in multiple applications. How were you doing that before Linkerd? What were the pain points? What happens when a certificate either fails, or expires and isn't updated? >>To be completely honest, the biggest thing is that we're simply unable to make the calls, out or in, depending on what is failing. We saw essentially an uptick in failures around a certain service, and pretty quickly we got used to the fact that it was, oh, it's probably a cert expiration issue. We tried a few things to make that a little bit more automated, but we never came to a solution that didn't require every engineer on the team to know quite a bit about this just to get into it, which was a huge issue. >>So talk about day two, after you've deployed Linkerd. How did this help software engineers, and what were the benefits of having this automated way of managing certs? >>The biggest thing is that there is no touch from developers. There are a lot of people on our team who are familiar with security and certs and all of that stuff, but no one has to know it; it's not a requirement. For instance, I knew nothing about it when I joined the team, and even when I was setting up our newer clusters I knew very little about it, yet I was still able to really quickly set up Linkerd, which was really nice. It's essentially been set it and don't think about it too much. Obviously there are parts of it you have to think about, we monitor it and all that fun stuff, but it's been pretty painless almost from day one. It took a long time for developers to trust it: any time there was a failure, it was, oh, could this be Linkerd? But after a while we don't have that immediate assumption anymore, because people have built up that trust. >>Also, you have this massive infrastructure, I mean, 30 clusters. I guess it's quite different to manage a single cluster versus 30. So what are the considerations you have to make to install this software on 30 different clusters and manage different versions, et cetera? >>So, I guess just to clarify, are you asking specifically about Linkerd, or are you asking more in general? >>Well, you can take the question in two ways, so, okay, yes:
Linkerd in particular, but the 30 clusters are also quite interesting. >>Yeah. So, more generally, on how we manage our clusters: we have a CLI tool that we use to change context very quickly and switch and communicate with whatever cluster we're trying to connect to, whether we're debugging or getting logs, whatever. And with Linkerd it's nice because, again, we aren't having to worry about how a cert is being inserted into the right node, or not even the right node but the right cluster, or things like that. When we spin up our clusters, essentially we get the root certificate and everything like that packaged up and passed along to Linkerd on installation, and then there's not much we have to do after that. >>So talk to me about your upcoming session here at KubeCon. What are the high-level talking points? What will attendees learn? >>Yeah, so it's a journey. Those are the sorts of talks that I find useful, having not been, well, I'm not a deep Kubernetes expert with decades or whatever of experience, but I think... >>Nobody is. >>Also true. That's another story; that's a job posting with decades of requirements. >>Of course. But so, it's a journey. It's really just: hey, what made us decide on a service mesh in the first place? What made us choose Linkerd, and what are the ways in which we use Linkerd? We use some of the extra plugins and things like that. And then, finally, a little bit about what we're going to do in the future. >>Let's talk about the future, not just two or three days from now, or two or three years from now, but the future after you immediately solve the low-level problems with Linkerd. What were some of the surprises? Because Linkerd, and service mesh in general, have side benefits. Did you experience any of those side benefits as well? >>Yeah, it's funny: writing the blog post, I hadn't really looked at a lot of the data in years, from when we did our investigations and things like that. We had seen that we had very low latency and low CPU utilization, and, looking at some of that, I found that we were actually saving time off of requests. I couldn't really think of why that was, and I was talking with someone else, and unfortunately all that source data is gone now, so I can't go back and verify this, but it makes sense: there's the availability-zone routing that Linkerd supports. I think that's actually doing it: essentially, if a node is closer to another node, it routes to those ones. So when one service is talking to another service, and maybe they're on the same node, it short-circuits that and allows us to gain some time. It's not huge, but it adds up after 10, 20 calls down the line. >>In general, you're saying that it smooths operations and really simplifies your life. >>And again, we didn't have to do anything for that. It handled it for us; it was just there. Yeah, exactly.
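The "no touch from developers" experience comes largely from the fact that meshing a workload is a metadata change rather than a code change. As a rough sketch, the snippet below uses the Kubernetes Python client to add Linkerd's proxy-injection annotation to a namespace; `linkerd.io/inject: enabled` is Linkerd's documented annotation, but the namespace name and the idea of applying it from a script are illustrative assumptions, not Xbox's workflow.

```python
# Sketch: opt a namespace into automatic Linkerd proxy injection.
# The annotation key is Linkerd's documented one; the namespace is a placeholder.
from kubernetes import client, config

def enable_mesh_injection(namespace: str = "payments") -> None:
    config.load_kube_config()
    patch = {"metadata": {"annotations": {"linkerd.io/inject": "enabled"}}}
    client.CoreV1Api().patch_namespace(name=namespace, body=patch)
    # From here on, new pods in the namespace get a sidecar proxy and mTLS
    # without developers touching certificates at all.

if __name__ == "__main__":
    enable_mesh_injection()
```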
>>So we know one thing: when I do it on my laptop, it works fine; when I do it across 22,000 pods, that's a different experience. What were some of the lessons learned coming out of KubeCon 2018 in San Diego? I wish I would've run to the microphone, folks. But what were some of the hard lessons learned scaling Linkerd across those 22,000 pods? >>So the first one, and this seems pretty obvious, but it was just not something I knew about, was the high-availability mode of Linkerd. Obviously it makes sense; you would want that in a large-scale environment. That's one of the big lessons we didn't get right away. One of the mistakes we made in one of our pre-production clusters was not turning that on, and we were kind of surprised: whoa, all of these pods are spinning up, but they're having issues actually getting injected and things like that. We found, oh, okay, you need to actually give it some more resources. But it's still very lightweight, considering; even in high-availability mode it's just a few instances. >>So even from a binary perspective, running Linkerd, how much overhead is it? >>That is a great question. I don't remember the numbers off the top of my head, but it's very lightweight. We evaluated a few different service meshes, and it was the lightest weight we encountered at that point. >>And then from a resource perspective, is it a team of Linkerd people? Is it a couple of people? >>To be completely honest, for a long time it was one person: Abraham, who is actually the person who proposed this talk. He couldn't make it to Valencia, but he essentially did probably 95% of the work to get it into production, and this was before we even had a team dedicated to our infrastructure. Now we have a dedicated team; we're all kind of Linkerd folks, if not Linkerd experts, and we at least can troubleshoot, basically. So it's a group of six people on our team, and then various people who've had experience with it on other teams, but no one is dedicated just to it. It's pretty light touch once it's up and running. It took a very long time for us to really understand it and to get, not getting started, but getting to where we really felt comfortable letting it go in production. But once it was there, it is very, very light touch.

Published Date : May 18 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townsend | PERSON | 0.99+
Chris | PERSON | 0.99+
Christopher Voss | PERSON | 0.99+
2017 | DATE | 0.99+
Chris Vos | PERSON | 0.99+
Abraham | PERSON | 0.99+
20 | QUANTITY | 0.99+
95% | QUANTITY | 0.99+
700 | QUANTITY | 0.99+
San Diego | LOCATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
five minutes | QUANTITY | 0.99+
2019 | DATE | 0.99+
22,000 pods | QUANTITY | 0.99+
six people | QUANTITY | 0.99+
Valencia | LOCATION | 0.99+
two | QUANTITY | 0.99+
2018 | DATE | 0.99+
two ways | QUANTITY | 0.99+
one | QUANTITY | 0.99+
20 calls | QUANTITY | 0.99+
7,500 people | QUANTITY | 0.99+
22,000 pods | QUANTITY | 0.99+
first time | QUANTITY | 0.98+
Cuban | LOCATION | 0.98+
first | QUANTITY | 0.98+
one service | QUANTITY | 0.98+
Valencia Spain | LOCATION | 0.98+
Europe | LOCATION | 0.98+
Linky | ORGANIZATION | 0.97+
three days | QUANTITY | 0.97+
2022 | DATE | 0.97+
one person | QUANTITY | 0.97+
first one | QUANTITY | 0.97+
link D | ORGANIZATION | 0.96+
Kubecon | ORGANIZATION | 0.96+
30 cluster | QUANTITY | 0.96+
22,000 nodes | QUANTITY | 0.96+
KU con 2018 | EVENT | 0.95+
Coon | ORGANIZATION | 0.94+
Licia Spain | PERSON | 0.94+
30 clusters | QUANTITY | 0.94+
day two | QUANTITY | 0.92+
link D | OTHER | 0.92+
Xbox | COMMERCIAL_ITEM | 0.91+
Rico | ORGANIZATION | 0.91+
Q con | ORGANIZATION | 0.91+
about 22,000 pods | QUANTITY | 0.91+
Kubernetes | PERSON | 0.9+
few years ago | DATE | 0.9+
three years | QUANTITY | 0.89+
link | ORGANIZATION | 0.86+
single cluster | QUANTITY | 0.85+
one thing | QUANTITY | 0.82+
Leer D | ORGANIZATION | 0.79+
a thousand pods | QUANTITY | 0.77+
Cloudnativecon | ORGANIZATION | 0.75+
last | DATE | 0.74+
cluster | QUANTITY | 0.74+
MTLS | ORGANIZATION | 0.72+
Etti | ORGANIZATION | 0.72+
Azure | TITLE | 0.71+
Rico | LOCATION | 0.69+
ATS | ORGANIZATION | 0.68+
years | DATE | 0.64+
cloud native con | ORGANIZATION | 0.61+
Cuban | PERSON | 0.6+
day one | QUANTITY | 0.59+
decades | QUANTITY | 0.56+
link | OTHER | 0.56+
Kubernetes | ORGANIZATION | 0.53+
link | TITLE | 0.52+
22 | EVENT | 0.5+

Greg Muscarella, SUSE | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside a new host, Enrico Signoretti, senior editor... I'm sorry, senior IT analyst at GigaOm. Enrico, welcome to the program. >>Thank you very much, and thank you for having me. It's exciting. >>So, high-level thoughts on KubeCon, first time in person again in a couple of years? >>Well, this is amazing for several reasons, and one of the reasons is that I had the chance to meet people like you again. I mean, we met several times over the internet, over Zoom calls, and I started to hate these Zoom calls, because they're very impersonal in the end. Last night we were together, a group of friends, industry folks; it's just amazing. And apart from that, the event is really cool. There are a lot of people, interviews, real people doing real stuff, not just impersonal calls where you don't even know if they're telling the truth. When you can look in their eyes at what they're doing, I think that makes a difference. >>So speaking about real people, meeting people for the first time, new jobs, new roles: Greg Muscarella, general manager of enterprise container management at SUSE. Welcome to the show, and welcome back, theCUBE alum. >>Thank you very much. It's awesome to be here, and it's awesome to be back in person. I completely agree with you: there's a certain fidelity to the conversation and a certain ability to get to know people a lot more, so it's absolutely fantastic to be here. >>So Greg, tell us about your new role and what SUSE has going on at KubeCon. >>Sure. I joined SUSE about three months ago to lead the Rancher business unit, our container management pieces, and it's a fantastic time. Because if you look at the transition from virtual machines to containers and on to microservices, right alongside the transition from on-prem to cloud, this is a very exciting time to be in this industry, and Rancher has been setting the stage. And again, going back to being here: Rancher is all about the community. It is a very open, independent, community-driven product and project, so this is kind of like being back with our people, and being able to reconnect here. Doing it digital is great, but being here changes the game for us. We feed off that community; we feed off the energy. And again, going back to the space and what's happening in it: it's a great time to be in this space. You've seen the transitions, you've seen just massive adoption of containers and Kubernetes overall, and Rancher has been right there with some amazing companies doing really interesting things that I'd never thought of before. I'm still learning, but it's been great so far. >>Yeah. And when we talk about Kubernetes strategy today, we are talking about very broad strategies. I mean, not just the data center or the cloud, with maybe smaller organizations adopting Kubernetes in the cloud, but actually large organizations thinking about it, and more and more the edge. So what's your opinion on this expansion of Kubernetes towards the edge?
I think you're exactly right, and that's actually what a lot of the meetings I've been having here are about: some of these interesting use cases. There are the ones that are easy to understand in the telco space, especially with the adoption of 5G: you have all these base stations and new towers, and they have not only the core radio functions or network functions that they're trying to run there, but other applications that want to run in that same environment. I spoke recently with some of our good friends at a major automotive manufacturer doing things in their factories that can't take the latency of being somewhere else. They have robots on the factory floor, and the latency they would experience if they tried to run things in the cloud means that robot would have moved 10 centimeters by the time the signal got back. That may not seem like a lot to you, but if you're an employee there, a big 2,000-pound robot being 10 centimeters closer to you may not be what you really want. There's also a tremendous amount of activity happening out there on the retail side. It's amazing how people are deploying containers in retail outlets, whether it be fast food and predicting how many French fries you need to have going at this time of day with this sort of weather, so you can make sure those queues are actually moving through. It's really exciting and interesting to look at all the different applications that are happening. So yes, on the edge for sure, in the public cloud for sure, in the data center, and what we're finding is that people want a common platform across all of those as well, for the management piece, but also for security and for policies around these things. So it really is going everywhere. >>So talk to me: as we think about pushing stuff out of the data center and the cloud, closer to the edge, security and lifecycle management become top-of-mind challenges. How are Rancher and SUSE addressing that? >>Yeah, so I think you're again spot on. It starts off with, think of it as simple, but it's not simple, the provisioning piece: how do we just get it installed and running? Then there's the management piece of it, everything from your firmware to your operating system, to the Kubernetes cluster that's running on that, and then the workloads on top of that. With Rancher, and with the rest of SUSE, we're actually tackling all those parts of the problem, from bare metal on up. We have lots of ways of deploying that operating system, we have operating systems that are optimized for the edge, very secure, and ephemeral container images that you can build on top of. And then we have Rancher itself, which is not only managing your Kubernetes cluster but can actually start to manage the operating system components as well as the workload components.
So again, whether I'm running it on whatever my favorite public cloud provider's managed Kubernetes is, or out at the edge, you still have to have security in there, and you want some consistency across it. If you have to have a different platform for each of your environments, that just ups the complexity and the opportunity for error. So we really want to eliminate that and simplify our operators' and developers' lives as much as possible. >>From this point of view, are you saying that you can now match self-managed clusters at the very edge with added security? Because those are the two big problems lately: having something autonomous that's easier to manage, especially if you're deploying hundreds of these micro-clusters, and on the other hand policy-based security that's strong enough, because if those huge robots move too close to you because somebody hacked the cluster that's managing them, that could be a huge problem. So are you approaching these kinds of problems? Is the technology you acquired ready to do this? >>It really is. There's still a lot of innovation happening, don't get me wrong; we're going to see a lot more, not just from SUSE and Rancher but from the community. But we've come a long way and we've solved a lot of problems. If I think about how you handle this distributed environment, some of it comes down to not just all the different environments but also the applications. With microservices, you have a very dynamic environment just in your application space as well. So when we think about security, we really have to evolve from a fairly static policy, where you might set an IP address and a port and some configuration around that, because your workloads are now dynamically moving. >>So not only do you have to have the security capability, the ability to look at a process or a network connection and stop it, you also have to have manageability. You can't expect an operator to go in and manually configure a YAML file, because things are changing too fast. It needs to be that combination of convenient and easy to manage with full function and the ability to protect your resources. And I think that's one of the key things NeuVector brings: because we have so much intelligence about what's going on, the configuration is pretty high level and then it just runs. It's built for this dynamic environment, and it can protect your workloads wherever they go, pod to pod. It's that combination, the manageability with the high functionality, that's making it so popular and that brings security to those edge locations, cloud locations, or your data center. >>Mm-hmm. So one of the challenges you're touching on is this abstraction upon abstraction. When I ran my data center, I could say this IP address can't talk to that IP address on this port. Then I got next-generation firewalls where I could actually do some analysis.
Where are you seeing the ball moving when it comes to customers thinking about all these layers of abstraction? An IP address doesn't mean anything anymore in cloud native; yes, I need one, but I'm not protecting based on IP address. How are customers approaching security from the namespace perspective? >>You're absolutely right. In fact, when you go to IPv6, I don't even recognize IP addresses anymore. <laugh> >>Yeah, they don't mean anything; they're just a bunch of numbers. >>And colons, right? I don't even know anymore. So it comes back to moving away from the static world, the pets-versus-cattle thing, this static thing I can know and love and touch and kind of protect, to an almost living, breathing thing that's moving all around, a swarm of pods moving all over the place. That's what Kubernetes has done for the workload side: getting away from that pet to a declarative approach to identifying your workload, the components of that workload, and what it should be doing. And if we go further on the security side, namespace actually isn't good enough. If we want to get to zero trust: just because you're running in my namespace doesn't mean I trust you. That's one of the really cool things about NeuVector, because we're looking at protocol-level traffic within the network. It's pod to pod, we can look at every single connection, and it's at the protocol layer. So if you say you're my database and I have a MySQL request going into it, I can confirm it's actually the MySQL protocol being spoken and that it's well formed. And I know that this endpoint, which is a container image or a pod name or a label, even if it's in the same namespace, is allowed to talk to this other pod in my namespace using this protocol. So I can allow or deny, and if I allow it, I can look into the content of the request and make sure it's well formed. I'll give you an example: do you remember the Log4j challenges from not too long ago? That was a huge deal. If my protections are IP- and port-based and namespace-based, what are my options for something with Log4j embedded in it? I either run the risk of it running or I shut the service down, and neither of those is very good. Because we're at the protocol layer, we can identify that Log4j traffic, look at whether it's well formed or malicious, block it if it's malicious, and let it through if it's well formed. So I can address those vulnerabilities without taking my service down; I can keep running and still be protected. That extra level, the ability to peek into things and to go pod to pod, not just to the namespace level, is one of the key differences. So that's how we're evolving with security: we've grown a lot, and we've got a lot more coming.
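To make that contrast concrete, here is a minimal sketch of the label- and port-scoped control that core Kubernetes gives you out of the box: a NetworkPolicy that only lets pods labeled app=api reach the database pods on the MySQL port. The namespace and label names are invented for illustration. Note that this is exactly the static, layer-3/4 style of rule described above as necessary but not sufficient; confirming that traffic on port 3306 is really well-formed MySQL, or blocking a malicious Log4j payload, needs a protocol-aware engine such as NeuVector sitting in the pod-to-pod path.

```python
# Hypothetical example: emit a namespace/label/port-scoped NetworkPolicy.
# It expresses "only app=api pods may talk to app=db pods on TCP/3306",
# but it cannot inspect whether the traffic is actually well-formed MySQL.
import yaml  # PyYAML

network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-allow-mysql-from-api", "namespace": "shop"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "db"}},  # pods this policy protects
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"app": "api"}}}],
                "ports": [{"protocol": "TCP", "port": 3306}],
            }
        ],
    },
}

if __name__ == "__main__":
    # Pipe the output to `kubectl apply -f -` to try it on a test cluster.
    print(yaml.safe_dump(network_policy, sort_keys=False))
```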
>>So let's talk about that "a lot more coming." What's in the pipeline for SUSE? >>Before I get to that, we just announced NeuVector 5, so maybe I can catch us up on what was released last week, and then we can talk a little bit about going forward. NeuVector 5 introduced several things, but one I can talk about in more detail is something called zero drift. I've been talking about network security, but we also have runtime security: any container running in your environment has processes running inside it. What we can do, and this comes back to manageability and configuration, is look at the root level of trust of any process that's running. As long as it has that inheritance, we let the process run without any extra configuration. If it doesn't have that root level of trust, if it didn't spawn from whatever the init process in that container was, we're not going to let it run. So the configuration you have to put in is a lot simpler. That's in NeuVector 5. The web application firewall, the layer-7 security inspection, has also gotten a lot more granular, so it's pod-to-pod security for ingress, egress, and internal traffic on the cluster. >>Before we get to what's in the pipeline, one question on NeuVector: how is it consumed and deployed? >>How is NeuVector consumed and deployed? With NeuVector 5 and also Rancher 2.6.5, which were just released, there's actually some nice integration between them. If I'm a Rancher customer using 2.6.5, I can deploy NeuVector with a couple of clicks of a button in our marketplace, and it's tied into our role-based access control, so an administrator with the rights can just click, they're now in the NeuVector interface, and they can start setting policies and deploying things very easily. Of course, if you aren't using Rancher and you're on some other container management platform, NeuVector still works; you can deploy it there in a few clicks, you just log into your NeuVector interface and use it from there. So that's how it's deployed, and it's very simple to use. What's also really exciting is that we've open sourced it, so it's available for anyone to download and try, and I'd encourage people to give it a go. I think there are some compelling reasons to do that now: Pod Security Policies are deprecated and going away pretty soon in Kubernetes, so there are a few things you might look at to make sure you can still run a secure environment. It's a great time to look at what's coming next for security within your Kubernetes. >>We appreciate you stopping by. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. Thank you, and you're watching theCUBE, the leader in high-tech coverage.
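As a footnote to the deployment discussion above, here is a rough sketch of what the non-Rancher install path for open source NeuVector might look like when driven from a script. The Helm repository URL, chart name ("neuvector/core"), and target namespace are assumptions from memory rather than facts from the interview, and they may not match your NeuVector version; check the project's Helm chart documentation before running anything like this.

```python
# Rough sketch (assumed chart/repo names): install NeuVector via Helm from Python.
import subprocess

def run(cmd: list[str]) -> None:
    # Print and execute a command, raising if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["helm", "repo", "add", "neuvector", "https://neuvector.github.io/neuvector-helm/"])
    run(["helm", "repo", "update"])
    run([
        "helm", "install", "neuvector", "neuvector/core",
        "--namespace", "neuvector",
        "--create-namespace",
    ])
```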

Published Date : May 18 2022


David Safaii | KubeCon + CloudNativeCon NA 2021


 

>>Welcome back to Los Angeles. Lisa Martin and Dave Nicholson here on day three of theCUBE's coverage of KubeCon + CloudNativeCon North America 2021. Dave, we've had a lot of great conversations; the last three days have been jam-packed. >>Yes, it has been, and it has been fantastic. And it's been live. Did we mention that it's live, in Los Angeles? >>And we're very pleased to welcome one of our alumni back to the program. David Safaii is here, the CEO of Trilio. David, welcome back. It's good to see you. >>Thanks for having me. It's good to be here. Isn't it great to be in person? It's been a reunion. >>It has been a reunion. Have you seen these wristbands they have? I actually asked for two, because I'm a big hugger. >>Excellent. So here we are, day three of KubeCon, probably day five, our third day of coverage. I'm losing track; it's Friday, I know that. You announced 2.5 a couple of weeks ago. Tell us what's in it, what's exciting, before we crack open Trilio. >>Sure. Well, it's been exciting to be here. The theme of resiliency realized is right up our wheelhouse; it signals that more people are getting into production-type environments, and more people require data protection for cloud native applications. The 2.5 release is an answer to what we're seeing in the market, and it's centered predominantly around ransomware protection. I've done a lot of work in cybersecurity in my career, and we took a hard look about a year ago at this area: how do we participate, how do we protect, and how do we help people recover? Because recovery is part of the security conversation. You can talk about all the other things, but recovery is just as important. So we look at everything from the zero-trust architecture we now provide to adhering to NIST standards and frameworks. That's everything from immutability, so you can't touch the backups, to encryption: we'll encrypt from the application all the way to the storage repository, and we'll leverage a KMS in that system. It's kind of like Bitcoin: you need a key to get your coin, and you as the end user have the only key to your data. All of these things become more and more important as we adopt more cloud native technology. >>And as the threat landscape changes dramatically. >>Oh yeah. Every time you publish an application into another cloud, it's a new vector. I'm now living in a multi-cloud world where multiple applications and my data live, and people are trying to attack backups through consoles and management consoles as well as the backups themselves. New vectors, new problems need new solutions. >>You asked the question, how do we participate? And we're here at KubeCon, with the Cloud Native Computing Foundation. So what's your connection to the open source community and efforts there? How do you participate? >>It's a really great question, because we are a closed source solution that focuses all of our efforts on the open source community and on protecting cloud native applications.
Our roots have been protecting cloud native applications since 2013, 2014, with a lot of very large logos. Over time, open source projects do emerge in this community. Velero, for example, is an open source data protection platform, and for all of its goodness as a community-based project there are also deficiencies: Velero focuses only on label-based applications, it doesn't really scale, and it doesn't have a UI; it's really CLI-driven, which is good for some people, and it's free. But if you need an enterprise-grade platform, that's where we pick up. In our last release we gave you the ability to capture your Velero-based backups, so you can move up to an enterprise-caliber backup solution, continue to protect your environment, and have your compliance and governance needs satisfied. That's where we really stand out.
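For readers who have not used it, the Velero workflow being contrasted here looks roughly like the sketch below: a namespace- and label-scoped Backup object (the declarative equivalent of a `velero backup create` CLI call) with a retention TTL. The namespace, label selector, and TTL values are invented for illustration, and field names follow the upstream velero.io/v1 API as best I recall it, so verify against the Velero documentation for your version; the point is simply that selection is by namespace and label, which is the granularity described above.

```python
# Illustrative only: a Velero Backup object selecting workloads by namespace
# and label, with a retention TTL. Names and values are made up.
import yaml  # PyYAML

backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {"name": "shop-nightly", "namespace": "velero"},
    "spec": {
        "includedNamespaces": ["shop"],                     # scope by namespace
        "labelSelector": {"matchLabels": {"app": "shop"}},  # and by label
        "snapshotVolumes": True,                            # include volume snapshots
        "ttl": "720h0m0s",                                  # keep for 30 days
    },
}

if __name__ == "__main__":
    print(yaml.safe_dump(backup, sort_keys=False))
```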
>>When you're talking to customers in any industry, how do you categorize the key differentiators that really make Trilio stand out above the competition? >>There are a bunch of great competitors out there, no doubt about it. A lot of the legacy folks you see on the show floor tuck Velero in under the covers so they can check a box or cover some customer needs, and some of the pure-play players out there have great solutions too. But where we shine is that we are the most flexible, agnostic solution in this market. We've had people like Red Hat, SUSE, Mirantis, DigitalOcean, and HPE, and the list goes on, certify Trilio as a solution of choice. So no matter where you are in this journey or whose platform you're using, we have your back. There's a lot of flexibility there: we are completely storage agnostic and cloud agnostic, going back to how you want to build your application architecture. People are in various phases of their journey. You may have started many moons ago with just a label-based application; then another department has a new technique and wants to use Helm, or you may be adopting OpenShift and using Operators. To us it doesn't matter; you have peace of mind. Whether you have to protect multiple departments, or you as a single tenant are using various techniques, we'll discover it, protect it, and move forward. >>If you look at it from a workload basis, what's the mix of what you'd call legacy virtualized things versus containerized things among the workloads you're protecting? And as a follow-on, are you seeing a lot of modernization and migration, or are people leaving the legacy things alone and developing net new in separate silos? >>That's a great question, and the honest answer is that it varies. You may have a CIO or CTO who says, we're moving to this new architecture, the water's great, bring your applications in, and so either it's lift and shift an application and then break it apart over time into microservices, or it's start net new. It really runs the gamut. For some of those people, there's peace of mind that they can bring their applications in and we can recover them. And for those who say they're starting brand new with stateless applications: we've seen this story before. I joke that it's kind of like the movie Groundhog Day. We started many moons ago in the OpenStack world with stateless, and state always finds a way. But even for the stateless people, when you start thinking about security, I've had conversations with CISOs around the world who say, I'm going to publish a stateless application, but what I'm concerned about is drift: what's happening at runtime may be completely different from what I intended. So now we give you the ability to capture that runtime state, compare the two, and identify what's changed. If you don't like what you see, you can take that point-in-time recovery into a sandbox and forensically take it apart. One of our superpowers, if you will, is that our point-in-time backups are all in an open format, while everyone else has proprietary schemas. The benefit of an open format is that you can leverage a lot of third-party tooling: take a point in time, run scanners across it, and, God forbid Trilio goes away, you still have access and can recreate that point in time. And when you start thinking about compliance-heavy environments, think about telcos or financial institutions that have to keep things for 15 years: technologies change, architectures change, and you can't have that lock-in. >>So you continue to thrive on that front. One of the marketing terms we hear a lot, and I want to get your opinion on it, is future-proofing. What does it mean to you and to Trilio, and how do you enable it for organizations like the FSIs that have to keep data for 15 years, or other industries that keep it even longer? >>Future-proofing is part of our mantra, actually. I talked about a superpower being as agnostic and flexible as can be: as long as you adhere to the standards that are out there, we have that agnostic play. And again, it's not just capturing an application's metadata and data but keeping that open format, giving you the open capability to unpack something, so there is no vendor lock-in with us at all. All of these things play a part in future-proofing yourself. And because we live and breathe cloud native applications, it's not just Kubernetes. Over time there will be other things; you're going to see mixed workloads, VM-based in the cloud, container-based in the cloud, and serverless as well. As long as you have that framework to continuously build off of, that's where we go. It shouldn't matter where your application lives; at the end of the day we will protect the application and its data, and it can live anywhere. So the conversations around multi-cloud change, and we start to think and talk about across-cloud:
the ability to move your application and your data wherever they need to be. >>Well, you talked about recoverability, and that is the whole point of backing up. You have to be able to recover, and something we've seen in the last 18, 19 months is that anyone can back up data. >>That's right. >>If you can't recover it, or can't recover it in time, we're talking about potentially going out of business. We've seen massive changes in the security landscape in the last 18, 19 months: ransomware. I was looking at cybersecurity data showing that just in the first half of this calendar year, January 1 to June 30, 2021, ransomware was up nearly 11x, and DDoS attacks are up. We've got this remote workforce that's probably going to persist for a while. So the ability to recover data from not if, but when, we get hit by ransomware is critical. >>You're absolutely right. Anyone can back up anything. At its highest form, we talk about point-in-time reorchestration. Backup is a use case, DR is a use case: how do you reorchestrate something that's complex? Containers and cloud native applications are amorphous, living things; the metadata is different from one day to the next, and the data itself is different from one day to the next. That's what's so great about Trilio: it's such an elegant solution, it lets you reorchestrate a point in time when and where you need it. So yes, you have to be able to recover; it's not a matter of if, but when, and that's why recovery is part of the security conversation. I've seen insurance companies that want to provide insurance for ransomware; well, there will be enough attacks that they won't want to provide that insurance anymore, it costs too much. The investment you make with Trilio will save you so much more money down the road. Our product manager actually gave a talk about that yesterday, and the economics were really interesting. >>So how has the recovery methodology, and who participates in it, changed over time, now that we're in this world of developer-operators who take on greater responsibility for infrastructure? Who's responsible for backup and recovery today, and how has that changed? >>Everyone. Everyone's responsible. Rewind however many years and it was predominantly a sysadmin in charge of backup: put a ticket in with your backup administrator. In the cloud native space, application lifecycle management is a team sport, and security is a team sport, a holistic approach. So when you think about the team you put on the field, whether it's DevOps, SRE, DevSecOps, or IT ops, they're all going to need point-in-time reorchestration for various things, and the term may not be backup: maybe it's for test and dev purposes, maybe for forensics, maybe for DR. So it's a team sport, and security is a holistic thing everyone has to get on board with. >>Reorchestration is exactly the right way to talk about these processes. It's not just recovery, you're rebuilding. >>Yeah, a complex environment. It's always changing. >>That's one of the guarantees.
It's always going to be changing. >>That much is certain. >>Can you leave us with a customer example that you think really articulates the value Trilio delivers? >>It's interesting. I won't say who the customer is, but it's a defense agency. They have developers all over the place, and they need self-service capabilities for tenants to mind their own backups, so you don't need to contact someone. They have one dashboard, a single pane of glass, to manage all their Kubernetes applications, and it gives them the infrastructure to progress whether it's DevOps or IT ops. This group has rolled it out across the nation, and they're using it in work with very sensitive environments. So now we have their back. >>And what are some of the big business outcomes they're already achieving? >>The big business outcomes? Operational efficiencies are first and foremost: empowering the end user with more tools. We've seen this shift left and people talking about DevOps, so how do I empower them to do more? So there's the operational efficiency, the recoverability aspect, God forbid something goes wrong, and the cost of that. And then there's being native to the environment: the Trilio solution is built for Kubernetes, it's built in Go, and it's a stateless Kubernetes application, so you have seamless integration into these environments. And then, going back to what I was saying before, there's peace of mind, the credibility aspect: it's blessed by Red Hat, SUSE, Mirantis, and all these other folks in the field, so you can be confident it's going to work. >>That helps give your customers confidence, and that confidence might sound trivial, but it's not, especially when we're talking about security. That's a big business outcome for you: when a customer says, I'm confident I have the right solution and we'll be able to recover when things happen, they fully trust the solution. >>And they'll bring more into production faster, which helps everyone out here too. You have that credibility, that assurance that I can move faster and move into different clouds faster, and we're going to continue to push the envelope there. Looking forward, we're going to come out with other capabilities that continue to differentiate us. In time we'll talk about the ability to propagate data across multiple clouds simultaneously, making RTOs look like split seconds and minutes. I hope we can have that conversation the next time we're together, because it's really exciting. >>Any call to action you want to give the audience, any upcoming or recent webinars you think they'd really benefit from? >>One thing I'd put out there is that I understand people need to continuously learn; there's a skill-set gap in this market, and we understand that. People look to us not just as a vendor but as a partner, and a lot of the questions we get are how do I do this, or how do I do that? Engage us, ask us; consuming our product is really, really easy.
You can download it from the website, go to Red Hat's OperatorHub, or go to the marketplace over at SUSE, and let's get started; we're here to help. So reach out. We want everyone to be successful. >>Awesome. trilio.io. David, thank you for joining us. This has been an exciting conversation. Good to see you. >>Likewise. Good to see you in person. Take care. >>We look forward to the next time we see you, and to unpacking what other great things are going on at Trilio. We appreciate your time. >>Thank you so much. Good to be here. >>For David Safaii and David Nicholson, the two Davids I'm sandwiched between, I'm Lisa Martin. We're coming to you live from Los Angeles. This is KubeCon + CloudNativeCon North America 2021. Stick around, our next guest joins us momentarily.

Published Date : Oct 26 2021


Matt Provo and Tom Ellery | KubeCon + CloudNativeCon NA 2021


 

>> Welcome back to Los Angeles. theCUBE is live. It feels so good to say that, so I'm going to say it again: theCUBE is live, in Los Angeles. We are at KubeCon + CloudNativeCon '21, Lisa Martin with Dave Nicholson. We're talking to StormForge next. Cool name, right? We're going to get to the bottom of that. Please welcome Matt Provo, the founder and CEO of StormForge, and Tom Ellery, the SVP of revenue at StormForge. Guys, welcome to the program. >> Thanks for having us. >> So, StormForge. You have to say it like that. I feel like, do you guys wear stormtrooper outfits on Halloween? >> Sometimes. Stormtroopers? The colors are black. You know, we hit anvils from time to time. >> I thought that's what I saw. >> There may or may not be a heavy metal band that might be infringing on our name. It's all good; that's where we come from. >> I see. So you started the company in 2015. Talk to me about the genesis of the company. What were the gaps in the market you saw that said, we've got to come in here and solve this? >> Yeah. When you start a company, sometimes you know exactly the set of problems you want to go after and why you might be uniquely set up to solve them. What we knew at the beginning was that we had a number of really talented data scientists. I was frustrated by the buzzwords around AI and machine learning when, under the hood, there's really a lot of vaporware. So at the outset the point was to build something real at the core and connect it to a set of problems where it could drive value. And when we looked at the beginnings of Kubernetes and containerization five, six years ago, at its genesis, we saw a bunch of opportunity for machine learning to play the right kind of role, if we could build it correctly. So at the outset it was: why are people moving workloads over to containers in the first place? Because of the flexibility and portability of Kubernetes. Then we quickly ran into its complexity, and within that complexity was the foundation for the company: a set of problems uniquely and most beneficially solved by machine learning. When we brought that together and designed out some ideas, we did what any founder with a product background would do: we went and talked to a bunch of potential users, tried to validate the problems themselves, and got a really positive response. >> So Tom, from a business perspective, what attracted you to this? >> Well, initially I wasn't attracted, I'll say that, just from a startup standpoint. I've been in the industry for 30 years and done six or seven pre-IPO companies, and I was exiting a private company; I did not want to do another startup. But being in the largest enterprise companies for the last 20 years, you see Kubernetes spreading like wildfire in these places, and you know there's a huge amount of complexity and sophistication when they deploy it. So I started talking to Matt early on, he explained what they were doing and how unique the offer was around machine learning, and I already knew the problems customers had at scale with Kubernetes. So I said, all right, I'm going to take one more run at this with Matt. I think we're in a great position to differentiate ourselves.
That was really the launchpad for me: the technology and the market space. Those two things in combination are very exciting for us as a business. >> And, you know, a couple of bottles of amazing wine and a number of dinners. >> That helps as well. >> That definitely helped twist his arm. Now tell us, let's really get into the technology. What does it do? How does it help facilitate the Kubernetes environment? >> Absolutely. When organizations start moving workloads over to Kubernetes and get their applications up and running, there are a number of amazing organizations, whether it's the cloud providers or otherwise, that solve that day-one problem, those challenges. And as I mentioned, they moved because of flexibility, so developers love it and it starts to create a great experience, but there's a set of expectations that comes with it. >> Where are these workloads typically moving from? What are the top environments they're moving out of? >> Non-containerized environments, generally. They could be coming from a bare metal environment, or from a VM-driven environment. >> Okay. >> And when you look back at the growth and genesis of VMs, you see a lot of parallels to what we're seeing now with containerization. As you move, it's exciting, and then you get smacked in the face by the complexity of all the knobs that can be turned in a Kubernetes environment. That gives developers a lot of flexibility, but as you turn those knobs you have no visibility into the impact on the application itself. So organizations become more agile and ship code more quickly, but then all of a sudden the cloud bill comes and they've over-provisioned by 80 or 90 percent; they didn't need nearly as many resources. What we do is help understand the unique goals and requirements for each of the applications running in Kubernetes, and we have machine learning capabilities that can predict very accurately what organizations will need from a resource standpoint to meet their goals, not just on cost but also on performance. We allow organizations to typically save between 40 and 60 percent off their cloud bill and usually increase performance between 30 and 50 percent. Historically, developers had to choose between cost and performance, and their worldview of the application environment was limited to a small set of what we'd call parameters or metrics they could choose from. Machine learning blows that world open, and not many humans are sophisticated at the kind of multidimensional math needed to make those predictions. You're talking about billions and billions of combinations, not just in a static environment but on an ongoing basis. So our technology sits in the middle of all that chaos and allows organizations to reap a whole lot of benefits they otherwise might never find.
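As a toy illustration of the gap described here, the sketch below compares the CPU and memory a workload requests with what it actually uses at peak and derives a naive right-sized request with a fixed headroom margin. The workload names, figures, and the 20% headroom rule are all invented, and StormForge's machine learning considers far more parameters and objectives than this single-metric heuristic, so treat this only as a picture of the over-provisioning problem, not of their method.

```python
# Toy over-provisioning check: compare requested resources with observed peaks
# and suggest a naive right-sized request. All figures are invented.
from dataclasses import dataclass

HEADROOM = 1.2  # keep 20% above observed peak; an arbitrary, simplistic rule

@dataclass
class Workload:
    name: str
    cpu_request_m: int   # requested CPU in millicores
    cpu_peak_m: int      # observed peak CPU in millicores
    mem_request_mi: int  # requested memory in MiB
    mem_peak_mi: int     # observed peak memory in MiB

    def suggestion(self) -> dict:
        cpu = int(self.cpu_peak_m * HEADROOM)
        mem = int(self.mem_peak_mi * HEADROOM)
        return {
            "cpu_request_m": cpu,
            "mem_request_mi": mem,
            "cpu_savings_pct": round(100 * (1 - cpu / self.cpu_request_m), 1),
            "mem_savings_pct": round(100 * (1 - mem / self.mem_request_mi), 1),
        }

if __name__ == "__main__":
    workloads = [
        Workload("checkout-api", cpu_request_m=2000, cpu_peak_m=350,
                 mem_request_mi=4096, mem_peak_mi=900),
        Workload("catalog-cache", cpu_request_m=1000, cpu_peak_m=600,
                 mem_request_mi=2048, mem_peak_mi=1500),
    ]
    for w in workloads:
        print(w.name, w.suggestion())
```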
>> Those numbers you mentioned are big, from a cost-savings perspective and a performance-increase perspective, which is so critical these days. In the last 18 months we've seen so much change; we've seen massive pivots from companies in every industry, first of all to survive, and then to be able to thrive and iterate quickly enough to develop new products and services, get them to market, and stay competitive. >> Yeah. The thing that's interesting: there was an article by Andreessen Horowitz on the cloud paradox. A great example would be some of these cloud companies growing at astronomical rates. Snowflake is phenomenal in what they're doing, but go look at their COGS and what it's doing: it's growing almost proportionately with the revenue. You need to be able to solve that problem in a way that's sophisticated enough, with machine learning algorithms, that people don't have to be in the loop, and the math can prove out the solution as you scale your environments. A lot of companies are transitioning to SaaS-based platforms, and they're going to run into these problems as they go to scale. Those are the areas we're really focused on as an organization. >> As the leader of sales, talk to me about the voice of the customer. You've been there six months or so, we heard, and we heard about the wine and the dinners, obviously. >> We haven't done a lot of that over the last 18 months. >> You'll have to make up for lost time, then. >> As soon as he closes more business. >> Oh, there we go, we got that on camera! >> There have been three market spaces where we've had some really good success. We talked about the SaaS marketplace: there's a company that does Drupal, that Matt knows very well, up in Boston, Acquia. Every one of their customers is a unique snowflake, so they need to optimize each customer's environment to ensure both cost and performance for that customer's site. That's one example of a SaaS company where we can go in and help them optimize, without humans doing the optimization, with the math and machine learning from StormForge doing it. The other area where we've seen really good traction is with GSIs. Part of our go-to-market model is with GSIs, because a lot of customers are struggling either with initially deploying Kubernetes, or they've had it in for 12 or 18 months, they're starting to scale, they hit all kinds of performance issues, and they ask how to solve it. A lot of these people go to the Accentures, the Cognizants, and others, who fly their ninjas in to solve the problem. We're getting a lot of traction with them because they're using our tool to help solve their customers' problems, and they're in the largest enterprise customers possible. >> So if I'm hearing you correctly, you're saying that when I deploy serverless applications, I may in fact get a bill for the servers being used? Is that what you're telling us? >> There may in fact be a bill for what was coined as serverless, and it's very difficult to understand, by the way. >> That's crazy talk, Matt. >> Yeah, but absolutely, we deal with that all the time. It's a painful process from time to time.
>> Have you seen the statistics on what's going on? There was huge inertia: every CIO had to have a cloud strategy, everyone ran out and put one in place, and then they started deploying on Kubernetes. Now they're realizing, wow, we can run it, but it's costing us more than it ever cost us on-prem, along with the operational complexity that comes with it. And there aren't enough people in the industry to solve that problem, especially at the grass roots. That's where you need sophisticated solutions like StormForge and machine learning, to solve this at-scale problem in a way humans never could. >> And I would just add that the same humans managing Kubernetes application environments today are likely the same humans who were managing them in a VM world, so there's a huge skills gap. I love what Kasten announced at KubeCon this year around their learning environment, where it's free: come learn Kubernetes. We need more of that. There's an enormous skills gap, and the problems are complex enough in and of themselves; when you add the skills gap on top, it presents a lot of challenges for organizations. >> What are some of the ways you think that gap can start to be made smaller? >> As more workloads get moved over, you see more and more people becoming comfortable in an environment where scale is part of what they have to manage. I love what the Linux Foundation and the CNCF are doing around Kubernetes certifications and more and more training, and I think you'll see training availability for developers and practitioners adopted more widely. And as the tool chain hardens in a CI/CD world, in a containerized world, you're going to see more individuals who are comfortable across all these different tools. If you look at the CNCF landscape today compared to four or five years ago, it's growing like crazy, but there's also consolidation taking place within the tools, and people have an opportunity to learn and gain expertise within it, which is very marketable, by the way. >> Absolutely. >> My employees often show me their LinkedIn profiles and remind me how much they're getting recruited, but they've been loyal, so it's been fantastic. >> There are so many parallels when you look at VMs and virtualization and what's happening with Kubernetes, obviously all the abstractions, but there was this whole concept of VM sprawl maybe ten years in. If you think about the Kubernetes environment, it's an exponentially bigger problem because of how many you're spinning up versus how many you spun up with VMs. Those things ultimately need to be solved, and not just with people; they need to be solved with sophisticated software. That's the only way you solve a problem at that scale, no matter how many people you have in the industry. >> So when you're in customer conversations, Tom, what would you say are the top three differentiators that really set StormForge apart? >> Well, the first one is that we're very focused on Kubernetes only. That's all we do, just the Kubernetes environment.
So we understand not just the applications that run in Kubernetes but the underlying architectures and techniques, which we think is really important from a solution standpoint. >> So you're specialists. >> We are absolutely specialists. The other areas are obviously machine learning and the sophistication of our machine learning. Matt said this really well earlier: the buzzwords have been out there, you could read them all over the place for the last five to seven years, AI and ML, and a lot of them are very hollow. But our whole foundation was based on machine learning and PhDs from Harvard; that's the technology background we came out of. So we weren't just solving Kubernetes problems, we were solving machine learning problems, and that's another big area of differentiation for us. And then there's the ability to actually scale, to deal not just with small problems but with very large ones, because our focus is Fortune 2000 companies, and most of them, like financial services, have been deploying Kubernetes for three, four, five years, so they have scale challenges they're trying to solve. >> Lisa and I talk about this concept of machine learning and looking under the covers to find out whether the machine is really learning, or whether people are telling the machine, if you see that, do this. Where is the machine actually making correlations and doing something intelligent? Can you give us an example of something that's actually happening that's intelligent? >> Well, the if-this-then-that problem is actually a huge source of my original frustration in starting the company, because you can tag AI as a buzzword onto a lot of stuff, and we see that growing like crazy. So I literally said at the beginning: if we can't build something real that solves problems, we're going to hang it up. And as Tom said, we came out of Harvard, and there was a challenge initially: were we just going to build a really amazing algorithm that's so heavy it can never be productized or commercialized and really should have stayed in academia? I'll say a couple of things. One: I do not believe that black-box AI is a thing. We believe in what we'd call human-augmented AI. We want to empower practitioners and developers in the process instead of automating them out; we want to give them the information, save them time, and make their lives easier. But there's a kill switch on the technology: they can intervene at any point, and they can direct the technology as they see fit. What's really interesting is that because their worldview of the application environment gets opened up by all the predictions and all the learning taking place, they get into a tinkering, experimental mindset with the technology. They start thinking about all these other scenarios they could never explore previously with the application. And the machine learning itself, on an ongoing basis, is understanding changes in traffic and changes in workloads or demand for the application. Think about surge pricing for Uber because of a big game that took place.
With those changes in peaks and valleys in demand, our technology not only understands them reactively, it starts to build models and predict proactively, in advance of the events that are going to take place, what kind of resources need to be allocated, and why. That's the other piece: solutions often give you a little bit of the what, but they certainly don't give you any explanation of the why. The holy grail in our world is truly explainable AI, which we're not at yet; nobody's there yet. But human-augmented AI, with actual intelligence taking place that's also relevant to business outcomes, is pretty exciting. That's where we try to operate. >> Very exciting, guys. Thanks for joining us and talking to us about StormForge. I feel like we need some StormForge t-shirts, what do you think? >> (unintelligible) >> See, I'm not even asking for the bottle of wine. I like that idea. Matt and Tom, thank you so much for joining us. Exciting company; congratulations on your success, and we look forward to seeing what great things are to come from StormForge. >> Thanks so much for the time. >> Our pleasure. For Dave Nicholson, I'm Lisa Martin. We are live in Los Angeles, theCUBE covering KubeCon + CloudNativeCon '21. Stick around, Dave and I will be right back with our next guest.

Published Date : Oct 15 2021


Clayton Coleman, Red Hat | KubeCon 2017


 

>> From Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017, brought to you by Red Hat, the Linux Foundation and theCUBE's ecosystem partners.

>> Welcome back to theCUBE, SiliconANGLE Media's two-day live production of KubeCon and CloudNativeCon. I'm Stu Miniman, my co-host for this segment is Matt Probert. Happy to welcome back to the program Clayton Coleman, who's the architect of containerized application infrastructure with Red Hat. Clayton, great to see you.

>> It's great to see you too.

>> All right, so first of all, 4,100 people, are you impressed?

>> I am, I'm hugely impressed. Every year this gets bigger and bigger, the community is out in force, people building on top of Kubernetes and the cloud native ecosystem around it, and for us it's really phenomenal.

>> So John Furrier interviewed you last year at the Seattle show. I think it's what, triple the size, and the number of projects has gone from four to 14, but at the core, I mean, it's Kubernetes, and you spend quite a lot of your time there. Tell us, what have you been working on the last year, what's important in your life?

>> Well, I think the biggest things that we've really tried to focus on are making Kubernetes a good foundation, both for a community as well as for a technology stack, because Kubernetes is about empowering developers, it's about empowering operations teams, and we always anticipated there to be many levels and many ways of building on top of Kubernetes to make it an ecosystem, so that people can build and deploy software but other people besides us can succeed. And I think that, more than anything else in the last year, it's about ensuring that everyone besides the Kubernetes community is successful, not just Kubernetes itself.

>> Yeah, it's interesting when we think back to, like, Linux. You know, Red Hat, you did quite well with Linux, also from the enterprise standpoint, from the company side. We appreciate what Red Hat did to make sure that Linux could be used by everyone. It seems like a lot of similar themes, but how would you compare what drives Linux versus Kubernetes today?

>> It's interesting, everyone is a lot more conscious of open source and the idea of building a platform because of the example of Linux, and so we've tried to actually be pretty conscious about that, which is we want there to be a strong community, we want there to be technical respect among not just the core of the project but also the different layers, and the Cloud Native Foundation has actually done a really good job of bringing together mutually complementary technologies but also helping and supporting those communities. From a Red Hat perspective, a lot of the things we work on are stability, security, reliability. We also work on extension, because extension to us allows us both to support customers and to help the open source ecosystems that we depend on.

>> I'm sorry, just for the audience, can you explain what extension is?

>> Sure. Extension is actually a number of things. In Kubernetes we really want it to be possible, if we're going to build in Kubernetes things that make running applications easier, for everyone else to build their own tools that make it easier to run applications, and we don't want to be opinionated, in kind of the same way as maybe some other ecosystems, about who gets to build what. Instead we want to open the doors for vendors, for partners, for deployers, for individual users to build their own extensions and points of contact with Kubernetes to really solve their own problems. We can't solve all those problems.

>> But that plays really nicely into it, right? The Cloud Native Foundation has gone from four projects to 14.

>> That's right.

>> Just in a year. And you're talking about the extensions; what do you want people to take away from that proliferation of projects that are all being supported and seen as essential to the ecosystem of Kubernetes?

>> It takes a spectrum. We want everyone to be able to use Kubernetes and to use the other projects either independently on their own, but I think a lot of us in the Kubernetes community, in the CNCF communities, believe that a lot of these tools work really well together, and we're finding new opportunities to make it easy for them to work together. So Prometheus is a great example. It's exploded across the ecosystem. I think at the last CloudNativeCon Prometheus was really the talk of the show, and what I've seen is that a lot of people around the ecosystem, not just in that core community on a very specific project, have taken the ideas that underlie that technology and tried to apply them to other things that they were doing. So you see people building integrations into Prometheus, you see InfluxDB working with Prometheus to share data. There's a lot of really exciting cross-collaboration, and the end goal really is to make building and running applications easier, which is something we really believe in as well.

>> All right, you used the word spectrum. When you talk about users out there, there's lots of them that are kind of in the 101 phase, and we know there's people doing things in production. What are you seeing? Help us walk through some of the spectrum as to where customers are, what you're seeing, some of the big challenges that they're facing.

>> Spectrum really is the word, there's no other word, because the range of people using Kubernetes in production and development is so incredibly diverse. I would say the two extremes are people who are today deploying microservices-based production applications on public cloud, and they're bringing, you know, three or four or ten or 100 applications. It might be a two or three developer team, and they're really finding a lot of value in that, because Kubernetes is taking on a lot of the heavy lifting and they can rely on it to keep their applications running and to rapidly deploy. On the complete other end are giant corporations, people with decades of investment in IT, finding ways to use Kubernetes and OpenShift, which is the product that Red Hat ships around Kubernetes, to empower tens of thousands or hundreds of thousands of applications, and in those models Kubernetes is just one small part of the larger whole, and this is where the ecosystem really comes into play. In the middle, I think we're starting to see a lot of really exciting things. As people get their one team working together, and as they start reaching out and bringing in other teams, as companies grow and find more reasons to use Kubernetes, they start asking questions like, well, how do I have all these teams working together without impacting the other teams? And that's where multi-tenancy comes in; that's a real specialty for Red Hat, and OpenShift is multi-tenancy, and we're actually really excited to work with people in the community to build out these technologies at many different levels, to have kind of that spectrum start to spread from the middle as well.

>> You know, one of the things coming into this show the last year or two was like, okay, who's going to win the orchestration battle? And it's like, okay, Kubernetes, here it is. Well, now there are like 42 different providers, OpenShift being one of them. Where does Red Hat look to add value to the customers? Is it just a piece of the platform? How does Red Hat look at it, and how do customers? When do they come to you, and when do they say, oh wait, I'm just going to go build all my own pieces and use some of the Red Hat pieces?

>> If working with open source and Linux has taught us anything, it's that one of the key components of a successful story is a distribution: the idea of curating, making a few choices, making it easy to bring things into that distribution. And we've actually started to really apply the distribution mindset to Kubernetes. So if you look at OpenShift, it is a platform, it has tools that help you run tens of thousands of applications together with tens of thousands of users, to bring operational control, but it's hard. It's about taking the best technologies in the community and bringing them together. And so I would actually expect, over the next year or two, to start seeing the idea of the distribution emerge in Kubernetes and in the cloud native ecosystem, where, you know, it's not ever going to just be one company dominating open source, that's not how open source works. I would expect to see an effort at thinking about Kubernetes as the kernel, if you will, and bringing together all of the successful technologies like the ones that we've seen at CloudNativeCon here today, and bringing even more of them, letting people mix and match to find the solutions that work for them.

>> I really like that view of it, because you're saying that the open source at its core is open and unopinionated, while distributions are an outlet to have opinions and refine them for business problems. So how do you see that playing out?

>> There's always going to be some trade-off when you make choices for people, and so I think the way that we look at it is we try to make choices that make sense when you're dealing at certain scales, when long-term support and lifecycle become really critical. You know, if you can't afford a production outage because you have 10,000 applications running together, then it becomes really important to focus on those. But at the same time, we actually expect there to be different choices and trade-offs to be made, and we want to encourage people to mix and match the different parts of the ecosystem.

>> And what patterns are you seeing in enterprise readiness, or any enterprise feature sets that are combining into what you hope to see out of the distribution?

>> At the heart of it, security tends to come up a lot. You know, everybody who's making the leap... we made the leap from bare metal to virtualization, and then a large number of management platforms grew to encompass it, and virtualization brought its own changes. Containers are starting to mature, and so is how we understand how the software lifecycle works with containers and how it works in large multi-tenant environments. I think the next step will be, as we become more mature, that a lot of these patterns will be baked in, and so you'll see standard solutions. We all kind of need to work together to make those standard solutions happen, and we're actually seeing that in a number of things even today. I'll talk about the CNCF conformance profile for Kubernetes. It's a new effort that intends to take the tests that we use in Kubernetes to make sure it's working correctly, and use that to say this is something that you can rely on every Kubernetes distribution also supporting, just like any other mechanism that we use to make sure that we're delivering something that is stable and predictable across a wide number of spaces. I would expect in the future to see things like conformance for multi-tenancy, conformance for security specifically in Kube, and to see vendors bring their own approaches, partners and ecosystem players integrating their solutions, and then new open source solutions fitting into that as well.

>> In the keynote this morning, there were a lot of these projects getting to the next rev. Kubernetes is going to be at 1.9, and many of the underlying supporting pieces are hitting 1.0 out there. You're a top contributor for Kubernetes. What's that experience like today? Lots of new people are still coming on; how's the balance between the few that are heavily involved and the majority?

>> When we started Kubernetes, there was an interesting mix. It was a lot of engineers working on very concrete ideas, things that we wanted to try to bring to fruition together in the community, and it's been a very deliberate goal over the last two years to broaden that into a successful and healthy open source ecosystem, which means a lot of mentoring, which means working to find the different ways that people can contribute in an ecosystem. Sarah Novotny from Google often uses the chop wood, carry water analogy: there are many different ways that people can work together, and everyone has a spot. So we've spent a lot of time being very deliberate about being open, trying to organize ways for new contributors to get oriented and to bring their value, but at the same time we actually want to mentor and grow the next level of technical leadership in Kubernetes. You know, I won't be here forever, and I don't want to be here forever. I want people to replace me in the open source community, because that's a healthy community.

>> Yeah, I think the stratification of contribution is one of the number-one signs of success from my perspective, and I see a distinct, different invitation for each type of user. So you have the user, you have the administrator, and then you also have the developer. Are there any things you've noticed changing in one of those patterns that really hits home for you?

>> I think the developer pattern is the most interesting. You know, there's a lot of focus on how you use Kubernetes in many different ways, and a lot of developers want to get their hands on it and dig in, and so there have actually been a lot of great community projects focused on making Kubernetes easy to consume at a small scale. All of that then ties back into, well, in Kubernetes we want to be pretty unopinionated; if we're going to be a kernel, there needs to be a space for things like the compiler and the programming language and distributions. I'm actually hoping that we can keep that focus on making sure there's a good set of projects in the ecosystem that meet developers where they are, so that they can start using Kubernetes, and then, I don't want to say trick, but trick them into becoming contributors and help us get that feedback about how we can make Kubernetes better.

>> Helping to paint the fence is very fun.

>> That's right.

>> All right, Clayton, last question I have for you. You're doing two keynotes this week. Give our audience that won't be there in person a taste of that, and especially we want to hear the outlook for the next 12 months, through 2018.

>> Sure. So my first keynote tomorrow is just a real quick one. I'm going to try and convince everybody that Kubernetes should be boring, and I'll leave it at that. You know, boring is good in very specific ways.

>> Boring equals mature, right?

>> I would certainly hope so. And on Friday I'm going to talk about what's coming up in the Kubernetes ecosystem in 2018. A lot of people have finally jumped on board the Kubernetes bandwagon, and what I'd like to do is help people find those exciting projects to get involved with. If we're going to have a vibrant ecosystem and community, helping people understand where they can get involved and find the things that match their interests is going to be really important.

>> Okay, anything specific that you're super excited about looking forward to next year, or any project?

>> I've got to say, and this is not a company line, but Istio is incredibly exciting, because one of our goals with Kubernetes was always about making it easier to run applications, and Istio and the idea of the service mesh are taking that to the next level. And I actually hope to see even more projects like that over the next few years in the ecosystem, projects that solve things like serverless and databases as a service, and I think we're actually starting to really see that develop.

>> Yeah, well, companies are all looking to move faster and get those applications up and running. Istio is definitely one of the ones we heard buzzing before the show. Clayton Coleman, thanks so much for joining us again, hope to catch up with you again soon. For Matt Probert, I'm Stu Miniman. We'll be back with lots more coverage here from theCUBE's coverage of KubeCon and CloudNativeCon 2017 in Austin, Texas. You're watching theCUBE.
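To make the "everyone can build their own tools" point in the interview concrete, here is a minimal sketch of the kind of small operational tool teams layer on top of the Kubernetes API using client-go. The kubeconfig location, the namespace, and the specific check (flagging containers without CPU limits) are assumptions chosen for illustration, not anything Red Hat or the Kubernetes project prescribes.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig from its default location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("creating clientset: %v", err)
	}

	// List pods in a namespace and report any container missing a CPU limit,
	// a toy example of custom tooling built on the same typed API the
	// platform itself uses.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("listing pods: %v", err)
	}
	for _, pod := range pods.Items {
		for _, c := range pod.Spec.Containers {
			if c.Resources.Limits.Cpu().IsZero() {
				fmt.Printf("%s/%s has no CPU limit\n", pod.Name, c.Name)
			}
		}
	}
}
```

The point is not this particular check, which is hypothetical, but that the extension model described above opens the same API to anyone building their own points of contact with Kubernetes.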
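On the same theme, the Prometheus integrations mentioned in the interview usually start with an application exposing its own metrics endpoint. Below is a minimal, hypothetical sketch using the prometheus/client_golang library; the metric name, label, and port are illustrative assumptions rather than anything from the conversation.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal counts HTTP requests by path; promauto registers the
// counter with the default Prometheus registry.
var requestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myapp_requests_total", // hypothetical metric name
		Help: "Total HTTP requests handled, labeled by path.",
	},
	[]string{"path"},
)

func handler(w http.ResponseWriter, r *http.Request) {
	requestsTotal.WithLabelValues(r.URL.Path).Inc()
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/", handler)
	// Prometheus scrapes this endpoint; other systems in the ecosystem can
	// then consume the same data downstream.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Exposing metrics this way is what makes the cross-collaboration described above possible: the application publishes a standard format, and the surrounding tools decide how to collect, store, and share it.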

Published Date : Dec 6 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Sarah Novotny | PERSON | 0.99+
John Fourier | PERSON | 0.99+
Prometheus | TITLE | 0.99+
Stu Mittleman | PERSON | 0.99+
three | QUANTITY | 0.99+
2018 | DATE | 0.99+
Clayton | PERSON | 0.99+
two | QUANTITY | 0.99+
10,000 applications | QUANTITY | 0.99+
tens of thousands | QUANTITY | 0.99+
Linux | TITLE | 0.99+
four | QUANTITY | 0.99+
ten | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
Hat | TITLE | 0.99+
first keynote | QUANTITY | 0.99+
matt Probert | PERSON | 0.99+
two-day | QUANTITY | 0.99+
next year | DATE | 0.99+
last year | DATE | 0.99+
100 applications | QUANTITY | 0.99+
tens of thousands of users | QUANTITY | 0.99+
Austin Texas | LOCATION | 0.99+
14 | QUANTITY | 0.98+
last year | DATE | 0.98+
Red Hat | TITLE | 0.98+
two keynotes | QUANTITY | 0.98+
Friday | DATE | 0.98+
KU con | EVENT | 0.98+
tomorrow | DATE | 0.98+
kernel | TITLE | 0.98+
tens of thousands of applications | QUANTITY | 0.98+
4100 people | QUANTITY | 0.98+
101 phase | QUANTITY | 0.97+
Clayton Coleman | PERSON | 0.97+
today | DATE | 0.97+
a year | QUANTITY | 0.97+
two extremes | QUANTITY | 0.97+
Lenox | ORGANIZATION | 0.97+
Google | ORGANIZATION | 0.97+
one | QUANTITY | 0.97+
one team | QUANTITY | 0.96+
hundreds of thousands of applications | QUANTITY | 0.96+
red hat clayton | ORGANIZATION | 0.96+
Austin Texas | LOCATION | 0.96+
RedHat | TITLE | 0.95+
this week | DATE | 0.95+
both | QUANTITY | 0.95+
42 different providers | QUANTITY | 0.93+
clayton Coleman | PERSON | 0.93+
OpenShift | ORGANIZATION | 0.93+
each type | QUANTITY | 0.93+
Rene | PERSON | 0.92+
four projects | QUANTITY | 0.91+
last year | DATE | 0.89+
one small | QUANTITY | 0.88+
Red Hat | TITLE | 0.87+
cloud native con 2017 | EVENT | 0.87+
next year | DATE | 0.85+
Silicon angle media | ORGANIZATION | 0.83+
last two years | DATE | 0.82+
KubeCon 2017 | EVENT | 0.82+
this morning | DATE | 0.81+
next 12 months | DATE | 0.8+
first | QUANTITY | 0.76+
OpenShift | TITLE | 0.76+
metroburg | LOCATION | 0.76+
kubernetes | TITLE | 0.74+
decades | QUANTITY | 0.73+