

Owen Garrett, Deepfence | KubeCon + CloudNativeCon Europe 2022


 

(bouncy string music) >> TheCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, at KubeCon and CloudNativeCon Europe 2022. I'm your host, Keith Townsend. And we're getting to the end of the day, but the energy level has not subsided on the show floor. Still plenty of activity, plenty of folks talking. I have, as a second-time guest this KubeCon, which is unusual, but not, I don't think, disappointing in any way; we're going to have plenty of content for you. Owen, you're the CPO, Owen Garrett, you're the CPO of... >> Of Deepfence. >> App Deepfence. >> Yeah. >> We're going to shift the conversation a little bit. Let's talk about open source availability, open source security availability for everybody. I drive a pretty nice SUV back home and it has all these cool safety features that warn me when I'm dozing off and let me know when I'm steering into another lane, and I'm thinking, why isn't this just a standard thing on every vehicle? Isn't safety important? Think about that for open source security. Why isn't open source security just this thing available to every project and product? >> Keith, I love that analogy. And thanks for having me back! We had a lot of fun yesterday. >> Yeah, we did. >> Yeah. We, at Deepfence, really believe security is something that everybody should benefit from. Because if applications aren't secure, if vulnerabilities find their way into production, then your mother, my aunt, my uncle, using the internet, using an app, their identity is stolen, through no fault of their own, because the developer of that application didn't have access to the tools that he or she needed to secure the application. Security is built around public knowledge. When there are vulnerabilities, they're shared with the community.
And we firmly believe that we should provide open source, accessible tools that take that public knowledge and make it easy for anybody to benefit from it. So at Deepfence, we've created a software platform, 100% open source, called ThreatMapper. And the job of this platform is to scan your applications as they're running and find, identify, are there security vulnerabilities that will find their way into production? So we'll look for these vulnerabilities, we'll use the wisdom of the community to inform that, and we'll help you find the vulnerabilities and identify which ones you've got to fix first. >> So when you say use the wisdom of the community, usually one of the hard things to crack is the definitions, what we called virus definitions in the past. >> Yes. >> How do we identify the latest threats? That's usually something that's locked behind value. How do you do that when it comes to open source? >> You're right. And it's worrying, 'cause some organizations will take that and they'll hide that extra value and they'll only make it available to paying customers. Ethically, I think that's really wrong. That value is out there. It's just about getting it into the hands of users, of developers. And what we will do is we'll take public feeds, like the CVEs from the NVD, the National Vulnerability Database, we'll take feeds from operating system vendors, for language packs, and then we help organizations understand the context so they can unlock the value. The problem with security scanning is you find hundreds of thousands of false positives. Like in your SUV: as you drive down the street there are hundreds of things that you could hit. >> You're right. >> But you don't hit any of them. They're false positives; you don't need to worry about them. It's the one that walks across the road, the one you've got to avoid, that you need to know about. We do the same with security vulnerabilities.
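The filtering Owen describes — thousands of findings, a handful that matter — comes down to ranking. Here is a small illustrative sketch of that idea in Python; the field names and weights are hypothetical, not ThreatMapper's actual scoring logic.

```python
# Illustrative sketch of ranking scan findings so developers fix the
# riskiest ones first. The fields and weighting are hypothetical, not
# ThreatMapper's actual algorithm.

def priority(finding):
    # A vulnerability matters more if it is severe (CVSS base score),
    # has a known exploit, and sits in an exposed workload.
    score = finding["cvss"]
    if finding["exploit_known"]:
        score *= 2
    if finding["internet_exposed"]:
        score *= 1.5
    return score

def rank_findings(findings):
    """Return findings sorted most-urgent-first."""
    return sorted(findings, key=priority, reverse=True)

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "exploit_known": False, "internet_exposed": False},
    {"cve": "CVE-B", "cvss": 7.5, "exploit_known": True,  "internet_exposed": True},
    {"cve": "CVE-C", "cvss": 5.0, "exploit_known": False, "internet_exposed": True},
]

for f in rank_findings(findings):
    print(f["cve"], priority(f))
```

Note how the highest raw CVSS score does not win: a moderate vulnerability with a known exploit on an exposed workload outranks it, which is exactly the "which one walks across the road" point from the SUV analogy.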
We help you understand, of these thousands of issues that might be present in your applications, which are the ones that are really important. 'Cause developers, they're short of time. They can't fix everything. So we help them focus on the things that are going to give the biggest bang for their time. Not for the buck, because we're not charging them for it, but for their time. So when they invest time in improving the security of the applications, we, with our open source, accessible projects, will help guide them to invest that as best as possible. >> So I'm a small developer. I lead a smaller project, just a couple of developers. I don't have a dedicated security person. What's my experience in adopting this open source solution? Am I biting off more than I can chew and creating too much overhead? >> We try and make it as easy as possible to consume. So you're a developer, you're building applications, you're here at KubeCon, so you're probably deploying them onto Kubernetes, and you've probably used tools already to check them and make sure that there aren't vulnerabilities. But, nevertheless, you've got to let some of those vulnerable packages into production, and there could be issues that were disclosed after you scanned. So with our tool, you place a little agent in your Kubernetes cluster, it's a DaemonSet, it's one Helm command to push it out, and that talks back to the console that you own. So everything stays with you. Nothing comes to us; we respect your privacy. And you can use that to then scan and inventory your applications anytime you want and say, is this application still secure, or are there new vulnerabilities disclosed recently that I didn't know about? And we make the user experience as easy as we can. We've had some fantastic chats on the demo booth here at KubeCon, and hey, if times were different, I'd love to have you across the booth, and we'd click and see. The user experience is as quick and as sweet and as enjoyable as we can make it.
>> All right. We've had a nice casual chat up to this point, but we're going to flip the switch a little bit. I'm going to change personalities. >> All right. >> It's almost like, if you're a comic book fan, the Incredible Hulk. Keith, the mild-mannered guy with a button-up shirt. Matter of fact, I'm going to unbutton my jacket. >> Okay. >> And we're going to get a little less formal. A little less formal, but a little bit more serious, and we're going to, in a second, start CUBE clock, and you're going to give me the spiel. You're going to go from open source to commercial and you're going to try and convince me- >> Okay. >> In 60 seconds, or less, you can leave five seconds on the table and say you're done, why you should do- >> Here's the challenge. >> Why I should listen to you. >> Owen: Why you should listen to Deepfence. >> Why should you listen to app Deepfence? So I'm going to put the shot clock in my ear. Again, people never start on time. You need to use your whole 60 seconds. Start, CUBE clock. >> Keith, (dramatic horn music) you build and deploy applications, on Kubernetes or in the cloud. Your developers have ticked it off and signed off- >> Zero from zero is still zero. >> Saying they're secure, but do you know if they're still secure when they're running in production? With Deepfence ThreatMapper, it's an open source tool. >> You've got to call- >> You can scan them. >> Before you ball. >> You can find the issues- >> Like you just thought out. >> In those applications running in your production environment and prioritize them so you know what to fix first. But, Keith, you can't always fix them straight away. >> Brands need to (indistinct). >> So deploy ThreatStryker, our enterprise platform, to then monitor those applications, see what's happening in real time. (dramatic horn music) Is someone attacking them? Are they gaining control? And if we see >> Success without, the exploits happening- success without passion- >> We will step in, >> Is nothing.
>> Tell you what's going on. >> You got to have passion! >> And we can put the thumb on the attacker. We can stop them reaching the application by firewalling just them. We can freeze the application (dramatic horn music) so it restarts, so you can go and investigate later. >> Keith: Five seconds. >> Be safe, shift left, (dramatic string music) but also, secure on the right-hand side. >> That's it. I think you hit it out of the park. Great job on- >> Cheers, Keith. >> Cheers. You did well under the pressure. TheCUBE, we bring the value. We're separating the signal from the noise. 60 seconds. That's a great explanation. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (bouncy percussive music)

Published Date : May 20 2022



Nick Van Wiggeren, PlanetScale | KubeCon + CloudNativeCon Europe 2022


 

>> Narrator: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend, your host. And we're continuing the conversations around the ecosystem, cloud native, 7,500 people here, 170-plus show floor sponsors. It is, for open source conferences, I think, the destination. I might even posit that this may eventually grow into the biggest tech conference in the industry, maybe outside of AWS re:Invent. My next guest is Nick van Wiggeren. >> Wiggeren. >> VP engineering of PlanetScale. Nick, I'm going to start off the conversation right off the bat: PlanetScale, cloud native database. Why do we need another database? >> Well, why don't you need another database? I mean, are you happy with yours? Is anyone happy with theirs? >> That's a good question. I don't think anyone is quite happy with, I don't know, I've never seen an excited database user, except for guys with really (murmurs) guys with great beards. >> Yeah. >> Keith: Or guys with gray hair maybe. >> Yeah. Outside of the dungeon I think... >> Keith: Right. >> No one is really happy with their database, and that's what we're here to change. We're not just building the database, we're actually building the whole kind of start-to-finish experience, so that people can get more done. >> So what do you mean by getting more done? Because MySQL has been the underpinnings of like massive cloud database deployments. >> 100% >> It has been the de-facto standard. >> Nick: Yep. >> For cloud databases. >> Nick: Yep. >> What is PlanetScale enabling us to do that I can't do with something like MySQL or SQL Server? >> Great question. So we are MySQL compatible. So under the hood it's a lot of the MySQL you know and love. But on top of that we've layered workflows, we've layered scalability, we've layered serverless.
So that you can get all of the parts of MySQL, that dependability, the thing that people have used for 20, 30 years, right? People don't even know a world before MySQL. But then you also get this ability to make schema changes faster, so you can kind of do your work quicker, get to the business objectives faster. You can scale farther. So when you get to your MySQL and you say, well, can we handle adding this one feature on top? Can we handle the user growth we've got? You don't have to worry about that either. So it's kind of the best of both worlds. We've got one foot in history and we've got one foot in the new kind of cloud native database world. We want to give everyone the best of both. >> So when I think of serverless, because that's the buzzy word. >> Yeah. >> But when I think of serverless I think about developers being able to write code. >> Yep. >> Deploy the code, not worry about VM sizes. >> Yep. >> Amount of disk space. >> Yep. >> CPU, et cetera. But we're talking about databases. >> Yep. >> I got to describe what type of disk I want to use. I got to describe the performance levels. >> Yep. >> I got all the descriptive stuff that I have to do about infrastructure. Databases are not... >> Yep. >> Keith: Serverless. >> Yep. >> They're the furthest thing from it. >> So despite what the name may say, I can guarantee you PlanetScale, your PlanetScale database, does run on at least one server, usually more than one. But the idea is exactly what you said. So especially when you're starting off, when you're first beginning your, let's say, database journey. That's a word I use a lot. The furthest thing from your mind is, how many CPUs do I need? How many disk IOPS do I need? How much memory do I need? What we want you to be able to do is get started on focusing on shipping your code, right? The same way that Lambda, the same way that Kubernetes, and all of these other cloud native technologies just help people get done what they want to get done.
PlanetScale is the same way: you want a database, you sign up, you click two buttons, you've got a database. We'll handle scaling the disk as you grow, we'll handle giving you more resources. And when you get to a spot where you're really starting to think about, my database has got hundreds of gigabytes, or terabytes, or petabytes, that's when we'll start to talk to you a little bit more about, hey, you know it really does run on a server, we're going to help you with the capacity planning, but there's no reason people should have to do that up front. I mean, that stinks. When you want to use a database, you want to use a database. You don't want to use a 747 with 27 different knobs. You just want to get going. >> So, also when I think of serverless and cloud native, I think of stateless. >> Yep. >> Now there's stateless with databases, help me reconcile like, when you say it's cloud native. >> Nick: Yep. >> How is it cloud native when I think of cloud native as stateless? >> Yeah. So it's cloud native because it exists where you want it in the cloud, right? No matter where you've deployed your application, on your own cloud, on a public cloud, or something like that, our job is to meet you and match the same level of velocity and the same level of change that you've got on your kind of cloud native setup. So there's a lot of state, right? We are your state, and that's a big responsibility. And so what we want to do is, we want to let you experiment with the rest of the stateless workloads, and be right there next to you so that you can kind of get done what you need to get done. >> All right. So this concept of clicking two buttons... >> Nick: Yeah. >> And deploying, it's a database. >> Nick: Yep. >> It has to run somewhere. So let's say that I'm in AWS. >> Nick: Yep. >> And I have an AWS VPC. What does it look like from a developer's perspective to consume the service? >> Yeah. So we've got a couple of different offerings, and AWS is a great example.
So at the very, kind of the most basic database unit, you click, you get an endpoint, a host name, a password, and a username. You feed that right into your application, and it's TLS secured and stuff like that, goes right into the database, no problem. As you grow larger and larger, we can use things like AWS PrivateLink and stuff like that to actually start to integrate more with your AWS environment, all the way over to what we call PlanetScale Managed, which is where we actually deploy your data plane in your AWS account. So you give us some permissions and we kind of create a sub-account and stuff like that. And we can actually start sending pods and whole clusters and stuff like that into your AWS account, give you a PrivateLink, so that everything looks like it's kind of wrapped up in your ownership, but you still get the same kind of PlanetScale cloud experience, cloud native experience. >> So how do I make calls to the database? I mean, do I have to install a new... >> Nick: Great question. >> Like agent, or do some weird SQL configuration on my end? Or like what's the experience? >> Nope, we just need MySQL. The same way you'd go install MySQL if you're on a Mac, or apt install MySQL on a Linux PC, it's just a username, password, database name, and stuff like that. You feed that into your app and it just works.
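What Nick lists at the top of that answer — an endpoint, a host name, a username, a password — is all an application needs. Here is a small sketch of composing those credentials into a MySQL connection URL; the endpoint and credentials below are placeholders, not real ones.

```python
# Sketch of wiring database credentials into an application as a
# connection URL. All values here are placeholders; a real database
# would hand you its own host, username, and password.
from urllib.parse import quote, urlparse

def build_dsn(host, user, password, database):
    """Compose a MySQL connection URL; the ssl flag reflects the
    TLS-only connectivity described above."""
    return f"mysql://{quote(user)}:{quote(password)}@{host}/{database}?ssl=true"

dsn = build_dsn(
    host="example.connect.psdb.cloud",  # placeholder endpoint
    user="app_user",
    password="s3cret",
    database="orders",
)
parsed = urlparse(dsn)
print(parsed.hostname, parsed.path)
```

From there, any MySQL client or driver that accepts a URL (or the same four fields individually) can open the connection; no special agent is involved, which is the point Nick is making.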
For example, the only way you can connect to a PlanetScale database, even if you're using PrivateLink, even if you're not touching the public internet at all, is over a TLS-secured endpoint, right? From the very first day, the very first beta that we had, we knew not a single byte goes over the internet that's not encrypted. It's encrypted at rest, we have audit logging, we do a ton internally as well to make sure that what's happening to your database is something you can find out. My favorite thing, though, is that all your schema changes are tracked on PlanetScale, because we provide an entire workflow for your schema changes. We actually have like a GitHub pull-request-style thing; your security folks can actually look and say what changes were made to the database day in and day out. They can go back and there's a full history of that log. So you actually have, I think, better security than a lot of other databases, where you've got to build all these tools and stuff like that; it's all built into PlanetScale. >> So, we started out the conversation with two clicks, but I'm a developer. >> Nick: Yeah. >> And I'm developing a service at scale. >> Yep. >> I want to have a SaaS offering. How do I automate the deployment of the database and the management of the database across multiple customers? >> Yeah, so everything is API driven. We've got an API that you can use to provision databases, to make schema changes, to make whatever changes you want to that database. We have an API that powers our website, the same API that customers can use to kind of automate any part of the workflow that they want. There's actually someone who did a talk earlier using, I think, crossplane.io, or they can use Kubernetes custom resource definitions to provision PlanetScale databases completely automatically. So you can even do it as part of your standard deployment workflow: create a PlanetScale database, create a password, inject it in your app, all automatically.
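The API-driven flow Nick describes — create a database, create a password, inject it — amounts to a couple of HTTP calls from your deployment pipeline. Here is a sketch of building the first of those requests; the endpoint and payload shape are hypothetical stand-ins, not PlanetScale's actual API.

```python
# Sketch of an API-driven "create a database" call as a deployment
# pipeline might build it. The URL and payload fields are hypothetical
# illustrations, not a real provisioning API.
import json

def provision_request(org, name, region):
    """Build the (hypothetical) HTTP request that would create a
    database; a pipeline would send this, then a second request to
    create a password to inject into the app."""
    return {
        "method": "POST",
        "url": f"https://api.example.com/v1/organizations/{org}/databases",
        "body": json.dumps({"name": name, "region": region}),
    }

req = provision_request("acme", "orders", "eu-west")
print(req["method"], req["url"])
```

The same shape works from a Kubernetes operator: a custom resource definition holds the `name` and `region`, and a controller translates it into exactly this kind of call.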
>> So Nick, as I'm thinking about scale. >> Yep. >> I'm thinking about multiple customers. >> Nick: Yep. >> I have a successful product. >> Nick: Yep. >> And now these customers are coming to me with different requirements. One customer wants to upgrade once every quarter, another one, it's like, you know what? Just bring it on. Like bring the schema changes on. >> Yep. >> I want the latest features, et cetera. >> Nick: Right. >> How do I manage that with PlanetScale? When I'm thinking about MySQL, that can be a little difficult. >> Nick: Yeah. >> But how does PlanetScale help me solve that problem? >> Yeah. So, again, I think it's that same workflow engine that we've built. So every database has its own kind of deploy queue, its own migration system. So you can automate all these processes and say, on this database I want to change this schema this way, on this database I'm going to hold off. You can use our API to drive a view into, well, what's the schema on this database? What's the schema on this database? What version am I running on this database? And you can actually bring all that in. And if you were really successful, you'd have this single pane of glass where you can see, what's the status of all my databases and how are they doing, all powered by kind of the PlanetScale API. >> So we can't talk about databases without talking about backup. >> Nick: Yep. >> And recovery. >> Yep. >> How do I back this thing up and make sure that I can fall back? If someone deleted a table. >> Nick: Yep. >> It happens all the time in production. >> Nick: Yeah, 100%. >> How do I recover from it? >> So there's two pieces to this, and I'm going to talk about two different ways that we can help you solve this problem. One of them is, every PlanetScale database comes with backups built in, and we test them fairly often, right? We use these backups. We actually give you a free daily backup on every database, 'cause it's important to us as well.
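The "single pane of glass" Nick mentions a moment earlier — one view of the schema status of every customer database — can be sketched with plain data. This is a toy model: the fleet below is hand-written, where a real setup would pull each database's schema version from the API.

```python
# Toy model of a fleet dashboard: which customer databases are behind
# on schema migrations? In practice each entry would come from an API
# call per database; here the data is hand-written for illustration.

fleet = {
    "customer-a": {"schema_version": 12},
    "customer-b": {"schema_version": 9},   # opted for quarterly upgrades
    "customer-c": {"schema_version": 12},
}

LATEST = 12  # the newest migration shipped

behind = sorted(
    name for name, db in fleet.items() if db["schema_version"] < LATEST
)
print(behind)
```

With a per-database deploy queue, the "bring it on" customers track `LATEST` automatically while the quarterly customers simply accumulate in `behind` until their upgrade window.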
We want to be able to restore from backup, we want to be able to do failovers and stuff like that; all of that is handled automatically. The other thing, though, is this feature that we launched in March called PlanetScale Rewind. And what Rewind is, is actually a schema migration undo button. So let's say you're a developer, you're dropping a table or a column. You mean to drop this, but you drop the other one on accident, or you thought this column was unused but it wasn't. You know when you do something wrong, you cause an incident, and you get that sick feeling in your stomach. >> Oh, I'm sorry. I've pulled a drive that was written not ready file and it was horrible. >> Exactly. And you kind of start to go, oh man, what am I going to do next? Everyone watching this right now is probably squirming in their seat a bit; you know the feeling. >> Yeah, I know the feeling. >> Well, PlanetScale gives you an undo button. So you can click, undo migration, for 30 minutes after you do the migration, and we'll revert your schema, with all the data in it, back to what your database looked like before you did that migration. Drop a column on accident, drop a table on accident, click the Rewind button, there's all the data there. And the new writes that you've taken while that's happened are there as well. So it's not just a restore to a point-in-time backup. It's actually that we've replicated your writes, sent them to both the old and the new schema, and we can get you right back to where you started, downtime solved. >> Both: So. >> Nick: Go ahead. >> DBAs are DBAs, whether they've become now reformed DBAs that are cloud architects, but they're DBAs. So there's a couple of things that they're going to want to know, one, how do I get my own backup in my hands? >> Yeah. >> I want my, it's MySQL data. >> Nick: Yeah. >> I want my MySQL backup. >> Yeah. So you can just take backups off the database yourself, the same way that you're doing today, right?
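The Rewind idea Nick just walked through — an undo button for schema changes — can be modeled as migrations that carry their own inverse. This toy sketch illustrates only the undo mechanics; it deliberately ignores the hard part Nick highlights, replaying the writes taken in between.

```python
# A toy model of an "undo button" for schema changes: every migration
# carries both an apply and a revert step, so a bad change can be
# rolled back. Illustration only; the real feature also replicates
# in-flight writes, which this sketch does not attempt.

class Migration:
    def __init__(self, apply, revert):
        self.apply, self.revert = apply, revert

schema = {"users": ["id", "name", "email"]}
history = []

def run(migration):
    migration.apply(schema)
    history.append(migration)

def rewind():
    """Undo the most recent migration."""
    history.pop().revert(schema)

drop_email = Migration(
    apply=lambda s: s["users"].remove("email"),
    revert=lambda s: s["users"].append("email"),
)

run(drop_email)         # oops, dropped the wrong column
print(schema["users"])  # the column is gone
rewind()                # the undo button
print(schema["users"])  # the column is back
```

The time window Nick mentions (30 minutes) maps naturally onto this model: after the window closes, the entry is dropped from `history` and the change becomes permanent.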
MySQL dump, MySQL backup, and all those kinds of things. If you don't trust PlanetScale, and look, I'm all about backups, right? You want them in two different data centers, on different mediums. You can just add on your own backup tools that you have right now and also use that. I'd like you to trust that PlanetScale has the backups as well. But if you want to keep doing that and run your own system, we're totally cool with that as well. In fact, I'd go as far as to say, I recommend it. You never have too many backups. >> So in a moment we're going to run Kube clock. So get your... >> Okay, all right. >> You know, stand tall. >> All right. >> I'll get ready. I'm going to... >> Nick: I'm tall, I'm tall. >> We're both tall. The last question before Kube clock. >> Nick: Yeah. >> It is, let's talk a little nerd knobs. >> Nick: Okay. >> The reformed DBA. >> Nick: Yeah. >> They want, they're like, oh, this query ran a little bit slow. I know I can squeeze a little bit more out of that. >> Nick: Yeah. >> Who do they talk to? >> Yeah. So that's a great question. So we provide you some insights in the product itself, right? So you can take a look and see how are my queries performing and stuff like that. Our goal, our job is to surface to you all the metrics that you need to make that decision. 'Cause at the end of the day, a reformed DBA or not, it is still a skill to analyze the performance of a MySQL query, run an EXPLAIN, kind of figure all that out. We can't do all of that for you. So we want to give you the information you need, either knowledge or, you know, stuff to learn, whatever it is, because some of it does have to come back to, what's my schema? What's my query? And how can I optimize it? I'm missing an index and stuff like that. >> All right. So, you're an early adopter of the Kube clock. >> Okay. >> I have to, people say they're ready. >> Nick: Ooh, okay. >> All the time people say they're ready. >> Nick: Woo. >> But I'm not quite sure that they're ready.
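Before the clock runs, the belt-and-suspenders backup Nick recommends at the top of that answer is worth making concrete. A sketch of composing your own mysqldump command: the host and credentials are placeholders, and the date is pinned so the example is reproducible.

```python
# Sketch of composing a do-it-yourself mysqldump backup command to run
# alongside a platform's built-in backups. Host and credentials are
# placeholders; the date is pinned for the example's sake.
import datetime
import shlex

def backup_command(host, user, database, out_dir="/backups"):
    stamp = datetime.date(2022, 5, 20).isoformat()  # a real script would use date.today()
    outfile = f"{out_dir}/{database}-{stamp}.sql"
    cmd = [
        "mysqldump",
        "--host", host,
        "--user", user,
        "--single-transaction",  # consistent snapshot without locking InnoDB tables
        database,
    ]
    return cmd, outfile

cmd, outfile = backup_command("example.connect.psdb.cloud", "backup_user", "orders")
print(shlex.join(cmd))
print(outfile)
```

Shipping the resulting file to a second data center, on a different medium, is exactly the "you never have too many backups" posture from the conversation.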
>> Nick: Well, now I'm nervous. >> So are you ready? >> Do I have any other choice? >> No, you don't. >> Nick: Then I am. >> But are you ready? >> Sure, let's go. >> All right. Start the Kube clock. (upbeat music) >> Nick: All right, what do you want me to do? >> Go. >> All right. >> You said you were ready. >> I'm ready, all right, I'm ready. All right. >> Okay, I'll reset. I'll give you, I'll give, see people say they're ready. >> All right. You're right. You're right. >> Start the Kube clock, go. >> Okay. Are you happy with how your database works? Are you happy with the velocity? Are you happy with what your engineers and what your teams can do with their database? >> Follow the dream not the... Well, follow the green... >> You got to be. >> Not the dream. >> You got to be able to deliver. At the end of the day you got to deliver what the business wants. It's not about performance. >> You got to crawl before you go. You got to crawl, you got to crawl. >> It's not just about is my query fast, it's not just about is my query right, it's about, are my customers getting what they want? >> You're here, you deserve a seat at the table. >> And that's what PlanetScale provides, right? PlanetScale... >> Keith: Ten more seconds. >> PlanetScale is a tool for getting done what you need to get done as a business. That's what we're here for. Ultimately, we want to be the best database for developing software. >> Keith: Two, one. >> That's it. End it there. >> Nick, you took a shot, I'm buying it. Great job. You know, this is fun. Our jobs are complex. >> Yep. >> Databases are hard. >> Yep. >> It is the, where your organization keeps the most valuable assets that you have. >> Nick: A 100%. >> And we are having these tough conversations. >> Nick: Yep. >> Here in Valencia, you're talking to the leader in tech coverage. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : May 20 2022



Marcel Hild, Red Hat & Kenneth Hoste, Ghent University | KubeCon + CloudNativeCon Europe 2022


 

(upbeat music) >> Announcer: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, in KubeCon CloudNativeCon Europe 2022. I'm your host Keith Townsend, along with Paul Gillon. And we're going to talk to some amazing folks. But first Paul, do you remember your college days? >> Vaguely. (Keith laughing) A lot of them are lost. >> I think a lot of mine are lost as well. Well, not really, I got my degree as an adult, so they're not that far past. I can remember 'cause I have the student debt to prove it. (both laughing) Along with us today is Kenneth Hoste, systems administrator at Ghent University, and Marcel Hild, senior manager software engineering at Red Hat. You're working in the office of the CTO? >> That's absolutely correct, yes. >> So first off, I'm going to start off with you Kenneth. Tell us a little bit about the research that the university does. Like what's the end result? >> Oh, wow, that's a good question. So the research we do at the university, again, is very broad. We have bioinformaticians, physicists, people looking at financial data, all kinds of stuff. And the end result can be very varied as well. Very often it's research papers, or spinoffs from the university. Yeah, it depends a lot on the domain, I would say. >> So that sounds like the perfect environment for cloud native. Like the infrastructure that's completely flexible, that researchers can come and have a standard way of interacting, each team just uses its resources as it would, the nirvana for cloud native. >> Yeah. >> But somehow, I'm going to guess HPC isn't quite there yet. >> Yeah, not really, no. So, HPC is a bit, let's say, slow in adopting new technologies. And we're definitely seeing some impact from cloud, especially things like containers and Kubernetes, so we're starting to hear these things in the HPC community as well.
But I haven't seen a lot of HPC clusters that are really fully cloud native. Not yet at least. Maybe this is coming. And walking around here at KubeCon, I'm definitely being convinced that it's coming. So whether we like it or not, we're probably going to have to start worrying about stuff like this. But still, let's say, the most prominent technologies are things like MPI, which has been there for 20, 30 years. The Fortran programming language is still the main language: if you're looking at compute time being spent on supercomputers, over half of the time spent is in Fortran code, essentially. >> Keith: Wow. >> So either the application itself where the simulations are being done is implemented in Fortran, or the libraries that we are talking to from Python, for example, for doing heavy-duty computations, that backend library is implemented in Fortran. So if you take all of that into account, easily over half of the time is spent in Fortran code. >> So is this because the libraries don't migrate easily to a distributed environment? >> Well, it's multiple things. So first of all, Fortran is very well suited for implementing these types of things. >> Paul: Right. >> We haven't really seen a better alternative, maybe. And also it'll be a huge effort to re-implement that same functionality in a newer language. So the use case has to be very convincing, there has to be a very good reason why you would move away from Fortran. And, at least, the HPC community hasn't seen that reason yet.
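As a concrete illustration of that split, here is a hedged sketch (not from the interview) of how a Python front end can hand the heavy numerics to a compiled Fortran BLAS routine via `ctypes`. The library lookup and the trailing-underscore symbol name are assumptions about a typical Linux toolchain; when no BLAS is installed, the function falls back to pure Python.

```python
import ctypes
import ctypes.util

def dot(x, y):
    """Dot product, preferring a compiled (Fortran) BLAS backend when one is present."""
    path = ctypes.util.find_library("blas")
    if path:
        try:
            blas = ctypes.CDLL(path)
            ddot = blas.ddot_  # Fortran symbols conventionally carry a trailing underscore
            ddot.restype = ctypes.c_double
            n = ctypes.c_int(len(x))
            inc = ctypes.c_int(1)
            xa = (ctypes.c_double * len(x))(*x)
            ya = (ctypes.c_double * len(y))(*y)
            # Fortran passes everything by reference, hence byref on the scalars
            return ddot(ctypes.byref(n), xa, ctypes.byref(inc),
                        ya, ctypes.byref(inc))
        except (OSError, AttributeError):
            pass  # library or symbol unavailable; fall back to pure Python
    return sum(a * b for a, b in zip(x, y))
```

Either path returns the same value; the point is that the Python caller never sees which language did the work, which is exactly the situation described above with heavy-duty backend libraries.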
For a lot of researchers, their compute needs are very peaky. So they're doing research, they have an idea, they want to run lots of simulations, get the results, but then they're silent for a long time, writing the paper or thinking about what they can learn from the results. So there's lots of peaks, and that's a very good fit for a cloud environment. I guess at the scale of a university you have enough diversity in end users that all those peaks never fall at the same time. So if you have your own big infrastructure you can still fill it up quite easily and keep your users happy. But this bursty thing, I guess we're seeing that more and more also. >> So Marcel, talk to us about Red Hat needing to service these types of end users. It can be on both ends: I'd imagine that you have some people still writing in Fortran, and you have some people asking you for object-based storage. Where's Fortran, I'm sorry, not Fortran, but where is Red Hat in providing the underlay and the capabilities for the HPC and AI community? >> Yeah. So, I think if you look at the user base that we're looking at, it's on this spectrum from development to production. So putting AI workloads into production, it's an interesting challenge, but it's easier to solve, and it has been solved to some extent, than the development cycle. So what we're looking at in Kenneth's domain is more like the end user, the data scientist, developing code and doing these experiments. Putting them into production, that's where containers live and thrive. You can containerize your model, you containerize your workload, you deploy it into your OpenShift Kubernetes cluster, done, you monitor it, done. So the software development and the SRE, the ops part, done. But how do I get the data scientist into this cloud native age, where he's not developing on his laptop or on a machine that he SSHes into and then does some stuff there?
And then some system admin comes and needs to tweak it because it's running out of memory or whatnot. How do we take him and provide him an environment that is good enough to work in, in the browser, with an IDE, where the workload of doing the computation and the experimentation is repeatable, so that the environment is always the same; it's reliable, so it's always up and running, but it doesn't consume resources while it's up and running; where the supply chain and the configuration of, well, the modules that are brought into the system are also reliable? So all these problems that we solved in the traditional software development world now have to transition into the data science and HPC world, where the problems are similar, but yeah, it's a different set. It's more or less also a huge educational problem, and transitioning the tools over into that is something... >> Well, is this mostly a technical issue or is this a cultural issue? I mean, are HPC workloads that different from more conventional OLTP workloads that they would not adapt well to a distributed containerized environment? >> I think it's both. So, on one hand it's the cultural issue, because you have two different communities, everybody is reinventing the wheel, everybody is somewhat siloed. So they think, okay, what we've done for 30 years works, there's no need to change it. And that's what thrives; but here at KubeCon, where you have different communities coming together, it's, okay, this is how you solved the problem, maybe this applies also to our problem. But it's also the, well, the tooling, which is bound to a machine, which is bound to an HPC computer, which is architecturally different than a distributed environment where you would treat your containers as cattle, as something that you can replace, right? And the HPC community usually builds up huge machines, and these are like the gray machines.
So it's also a technical bit of moving it to this age. >> So the massively parallel nature of HPC workloads, you're saying Kubernetes has not yet been adapted to that? >> Well, I think that parallelism works great. It's just a matter of moving that out from an HPC computer into the scale-out factor of a Kubernetes cloud that elastically scales out. Whereas the traditional HPC computer, I think, and Kenneth can correct me here, is more like, I have this massive computer with 1 million cores or whatnot, and now use it. And I can use my time slice, and book my time slice there. Whereas in this Kubernetes example, the concept is more like, I have 1000 cores and I declare something into it and scale it up and down based on the needs. >> So, Kenneth, this is where you talked about the culture part of the changes that need to be happening. And quite frankly, the computer is a tool, it's a tool to get to the answer. And if that tool is working, if I have 1000 cores on a single HPC thing, and you're telling me, well, I can't get to a system with 2000 cores, but if you containerized your process and moved it over then maybe I'll get to the answer 50% faster, maybe I'm not that... Someone has to make that decision. How important is it to get people involved in these types of communities from a researcher's perspective? 'Cause research is a very tight-knit community, to have these conversations and help that change happen. >> I think it's very important that those communities, let's say the cloud community and the HPC research community, should be talking a lot more; there should be way more cross-pollination than there is today. I'm actually happy that I've seen HPC mentioned at booths and talks quite often here at KubeCon, I wasn't really expecting that. And I'm not sure, it's my first KubeCon, so I don't know, but I think that's kind of new, it's pretty recent.
If you're going to the HPC community conferences, containers have been there for a couple of years now; something like Kubernetes is still a bit new. But just this morning there was a keynote by a guy from CERN, who was explaining they're basically slowly moving towards Kubernetes, even for their HPC clusters as well. And he's seeing that as the future, because of all the flexibility it gives you, and you can basically hide all that from the end user, from the researcher. They don't really have to know that they're running on top of Kubernetes. They shouldn't care. Like you said, to them it's just a tool, and they care about whether the tool works, so they can get their answers, and that's what they want to do. How that's actually being done in the background, they don't really care. >> So talk to me about the AI side of the equation, because when I talk to people doing AI, they're on the other end of the spectrum. What are some of the benefits they're seeing from containerization? >> I think it's the reproducibility of experiments. So, data scientists are, they're data scientists and they do research. So they care about their experiment. And maybe they also care about putting the model into production. But I think from a geeky perspective they are more interested in finding the next model, finding the next solution. So they do an experiment, and they're done with it, and then maybe it's going to production. So how do I repeat that experiment a year from now, so that I can build on top of it? And a container, I think, is the best solution to wrap something with its dependencies, like freeze it, maybe even with the data, store it away, and then come back to it later and redo the experiment, or share the experiment with some of my fellow researchers, so that they don't have to go through the process of setting up an equivalent environment on their machines, be it their laptop or their cloud environment.
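The repeat-it-a-year-later idea can be made concrete with a small, hypothetical sketch: fingerprint the code, parameters, and data that define one experiment run, so that a frozen container image can be tagged with the digest and a rerun can verify it is building on exactly the same inputs. The function name and inputs here are illustrative, not part of any real tool.

```python
import hashlib
import json

def experiment_fingerprint(code: str, params: dict, data: bytes) -> str:
    """Deterministic digest over everything that defines one experiment run."""
    h = hashlib.sha256()
    h.update(code.encode("utf-8"))
    # sort_keys makes the digest independent of parameter insertion order
    h.update(json.dumps(params, sort_keys=True).encode("utf-8"))
    h.update(data)
    return h.hexdigest()[:16]
```

Tagging the container image with this fingerprint is one way to "freeze it, maybe even with the data": any change to code, parameters, or data yields a different tag, while an identical rerun maps back to the same frozen environment.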
So you go to the internet, download something, it doesn't work; a container works. >> Well, you said something that really intrigues me. You know, in concept, I can have, let's say, a one-terabyte data set, have an experiment associated with that. Take a snapshot of that somehow, I don't know how, take a snapshot of that and then share it with the rest of the community and then continue my work. >> Marcel: Yeah. >> And then we can step back and compare notes. Where are we at on the maturity scale? Like, what are some of the pitfalls or challenges customers should be looking out for? >> I think you actually said it right there: how do I snapshot a terabyte of data? It's, that's... >> It's a terabyte of data. (both conversing) >> It's a bit of a challenge. And if you snapshot it, you have two terabytes of data, or you just snapshot the changes, like, okay, this is currently where we're at. So that's why the technology is evolving. How do we do source control management for data? How do we license data? How do we make sure that the data is unbiased, et cetera? So that's going more into the AI side of things. But dealing with data in a declarative way, in a containerized way, I think that's where currently a lot of innovation is happening. >> What do you mean by dealing with data in a declarative way? >> If I'm saying I run this experiment based on this data set and I'm running this other experiment based on this other data set, I as the researcher don't care where the data is stored, I care that the data is accessible. And so I might declare, this is the process that I put on my data, like a data processing pipeline. These are the steps that it's going through. And eventually it will have gone through this process and I can work with my data. Pretty much like applying the concept of pipelines to data. Like you have these data pipelines, and now you have Kubeflow Pipelines as one solution to apply the pipeline concept to, well, managing your data.
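The declarative pipeline idea can be sketched in a few lines: the researcher declares the steps once and never touches storage details; a runtime (for example Kubeflow Pipelines, mentioned here) decides where the data lives and where each step executes. This toy version runs the steps in-process and is purely illustrative.

```python
from functools import reduce

def run_pipeline(steps, data):
    """Apply each declared step in order; the caller never sees storage or placement."""
    return reduce(lambda acc, step: step(acc), steps, data)

# A declared processing pipeline: clean -> normalize -> aggregate.
pipeline = [
    lambda rows: [r for r in rows if r is not None],  # drop missing records
    lambda rows: [float(r) for r in rows],            # coerce to numbers
    lambda rows: sum(rows) / len(rows),               # reduce to a mean
]
```

The declaration (the `pipeline` list) is data, not control flow, which is what makes it portable: a real pipeline runtime can schedule each step next to the data rather than shipping the data to the code.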
>> Given the stateless nature of containers, is that an impediment to HPC adoption because of the very large data sets that are typically involved? >> I think it is if you have terabytes of data. You have to get it to the place where the computation will happen, right? And just uploading that into the cloud is already a challenge. If you have the data sitting there on a supercomputer and maybe it was sitting there for two years, you probably don't care. And typically at a lot of universities the researchers don't necessarily pay for the compute time they use. At least in Ghent that's the case, it's centrally funded, which means the researchers don't have to worry about the cost, they just get access to the supercomputer. If they need two terabytes of data, they get that space and they can park it on the system for years, no problem. If they need 200 terabytes of data, that's absolutely fine. >> But the university cares about the cost? >> The university cares about the cost, but they want to enable the researchers to do the research that they want to do. >> Right. >> And we always tell researchers, don't feel constrained about things like compute power, storage space. If you're doing smaller research because you're feeling constrained, you have to tell us, and we will just expand our storage system and buy a new cluster. >> Paul: Wonderful. >> So, to enable your research. >> It's a nice environment to be in. I think this might be a Jevons paradox problem: you give researchers this capability and you're going to see some amazing things. Now people are snapshotting one, two, three, four, five different versions of one terabyte of data. It's a good problem to have, and I hope to have you back on theCUBE, talking about how Red Hat and Ghent have solved those problems. Thank you so much for joining theCUBE. From Valencia, Spain, I'm Keith Townsend along with Paul Gillon.
And you're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : May 19 2022



Naina Singh & Roland Huß, Red Hat | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain and KubeCon and CloudNativeCon Europe 2022. I'm Keith Townsend, with my co-host, Paul Gillin, Senior Editor Enterprise Architecture for SiliconANGLE. We're going to talk, or continue to talk, to amazing people. The coverage has been amazing, but also the city of Valencia is beautiful. I have to eat a little crow: I landed and I saw the convention center. Paul, have you gotten out and explored the city at all? >> Absolutely, my first reaction to Valencia when we were out in this industrial section was, "This looks like Cincinnati." >> Yes. >> But then I got on the bus, second day here, 10 minutes to downtown, another world. It's almost a Middle Ages flavor down there, with these little winding streets, and just an absolutely gorgeous city. >> Beautiful city. I compared it to Charlotte, no disrespect to Charlotte, but this is an amazing city. Naina Singh, Principal Product Manager at Red Hat, and Roland Huß, also Principal Product Manager at Red Hat. We're going to talk a little serverless. I'm going to get this right off the bat. People get kind of feisty when we call things like Knative serverless. What's the difference between something like a Lambda and Knative? >> Okay, so I'll start. Lambda is like a function as a service, right? Which is one of the definitions of serverless. Serverless is a deployment platform now. When we introduced serverless to containers through Knative, that's when serverless got revolutionized; it democratized serverless. Lambda was proprietary-based: you write small snippets of code, they run for a short duration of time on demand, and done. And then Knative, which brought serverless to containers, where all those benefits of easy, practical, event-driven, running on demand, going up and down, all those came to containers.
So that's where Knative comes into the picture. >> Yeah, I would also say that Knative is based on containers from the very beginning, and so it really allows you to run arbitrary workloads in your container, whereas with Lambda you have only a limited set of languages that you can use, and you have a runtime contract there, so it is much easier with Knative to run your applications, for example, if it's coming in a language that is not supported by Lambda. And of course the most important benefit of Knative is that it runs on top of Kubernetes, which allows you- >> Yes. >> To run your serverless platform on any other Kubernetes installation, so I think this is one of the biggest things. >> I think we saw about three years ago there was a burst of interest around serverless computing, and really some very compelling cost arguments for using it, and then it seemed to die down; we haven't heard a lot about serverless, and maybe I'm just not listening to the right people, but what is it going to take for serverless to kind of break out and achieve its potential? >> Yeah, I would say that really the big advantage of course of Knative in that case is that you can scale down to zero. I think this is one of the big things that will really bring more people on board, because you really save a lot of money with that if your applications are not running when they're not used. Yeah, I think also that, because you don't have this vendor lock-in thing, when people realize that you can run really on every Kubernetes platform, then I think that the journey of serverless will continue. >> And I will add that with event-driven applications, there hasn't been enough buzz around them yet. There is, but serverless is going to bring a new lease on life to them, right? The other thing is the ease of use for developers. With Knative, we are introducing a new programming model, the functions, where you don't even have to create containers; it will create the containers for you.
>> So you create the servers, but not the containers? >> Right now, you create the containers and then you deploy them in a serverless fashion using Knative. But the container creation was on the developers, and functions is going to be the third component of Knative that we are developing upstream, and Red Hat donated that project; it is going to provide that code-to-cloud capability. So you bring your code and everything else will be taken care of, so. >> So, I'd call a function or, it's funny, we're kind of circular with this. What used to be, I'd write a function and put it into a container; this service will provide that function, not just call that function, as if I'm developing kind of a low code no code, not no code, but a low code effort. So if there's a repetitive thing that the community wants to do, you'll provide that as a predefined function or as a service. >> Yeah, exactly. So functions really helps the developer to bring their code into the container, so it's really kind of a new (indistinct) on top of Knative- >> On top. >> And of course, it's also a more opinionated approach. It's really coming closer to Lambda now, because it also comes with a programming model, which means that you have a certain signature that you have to implement, and other stuff. But you can also create your own templates, because at the end what matters is that you have a container at the end that you can run on Knative. >> What kind of applications is serverless really the ideal platform for? >> Yeah, of course the ideal application is an HTTP-based web application that has no state and that has a very non-uniform traffic shape, which means that, for example, if you have a business where you only have spikes at certain times, like maybe for the Super Bowl or Christmas, when selling some merchandise like that, then you can scale up from zero very quickly, to an arbitrary high, depending on the load.
And this is, I think, the big benefit over, for example, Kubernetes Horizontal Pod Autoscaling, where it's more like indirect measures of scaling based on CPU or memory, but here it directly relates one to one to the traffic that is coming in, to concurrent requests. Yeah, so this helps a lot for non-uniform traffic shapes, and I think this has become one of the ideal use cases. >> Yeah. But I think that is one of the most used or defined ones; I do believe that you can write almost all applications. There are some, of course, that would not be the right load, but as long as you are handling state through an external mechanism... Let's say, for example, you're using a database to save the state, or you're using physical volume mounts to save the state; it increases the density of your cluster, because when they're running, the containers pop up, and when your application is not running, the containers go down, and the resources can be used to run any other application that you want to run, right? >> So, when I'm thinking about Lambda, I kind of get the event-driven nature of Lambda. I have an S3 bucket, and if an S3 event is triggered, then my function as a service will start, and that's kind of the listening server. How does that work with Knative or a Kubernetes-based thing? 'Cause I don't have an event-driven thing that I can think of that kicks off; like, how can I do that in Kubernetes? >> So I'll start. So it is exactly the same thing. In the Knative world, it's the container that's going to come up, and your server is in the container; that will do the processing of that same event that you are talking about. So let's say the notification came from the S3 server when the object got dropped; that would trigger an application. And in the world of Kubernetes and Knative, it's the container that's going to come up with the server in it, do the processing, either find another server or whatever it needs to do.
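The one-to-one, request-driven scaling described here can be sketched as a toy calculation. This is an illustration of the idea behind concurrency-based autoscaling, not Knative's actual autoscaler code, and the target-concurrency numbers are made up.

```python
import math

def desired_replicas(concurrent_requests: int,
                     target_concurrency: int,
                     max_scale: int = 100) -> int:
    """Scale directly on in-flight requests, all the way down to zero."""
    if concurrent_requests <= 0:
        return 0  # no traffic -> no running containers, so no cost
    return min(max_scale, math.ceil(concurrent_requests / target_concurrency))
```

With a target of 10 concurrent requests per replica, a spike of 250 in-flight requests asks for 25 replicas, and silence asks for 0, which a CPU- or memory-driven Horizontal Pod Autoscaler cannot do, since it never scales a workload to zero.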
>> So Knative is listening for the event, and when the event happens, then Knative executes the container. >> Exactly. >> Basically. >> So there's the concept of a Knative source, which is kind of an adapter to the external world, for example, for the S3 bucket. And as soon as there is an event coming in, Knative will wake up that server and will transmit this event as a CloudEvent, which is another standard from the CNCF. And then when the server is done, the server spins down again to zero, so that the server is only running when there are events, which is very cost effective, and people really actually like to have this kind of dynamic scaling up from zero to one and even higher like that. >> Lambda has been sort of synonymous with serverless in the early going here; is Knative a competitor to Lambda, is it complementary? Would you use the two together? >> Yeah, I would say that Lambda is an offering from AWS, so it's a cloud service there. Knative itself is a platform, so you can run it in the cloud, and there are other cloud offerings like from IBM, but you can also run it on-premise, for example; that's the alternative. So you can also have hybrid scenarios where you really can put one part into the cloud, the other part on-prem, and I think there's a big difference in that you have much more flexibility and you can avoid this kind of vendor lock-in compared to AWS Lambda. >> Because Knative provides specifications and conformance tests, you can move from one vendor to another. If you are on an IBM offering that's using Knative, and if you go to a Google offering- >> A Google offering. >> That's on Knative, or a Red Hat offering on Knative, it should be seamless, because they're both conforming to the same specifications of Knative. Whereas if you are in Lambda, there are custom deployments, so you are only going to be able to run those workloads on AWS.
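The event flow described here can be sketched with a minimal parser for the CloudEvents "binary" HTTP content mode, where context attributes travel as `ce-` prefixed headers and the payload stays in the body. This is a simplified illustration of the CloudEvents spec, not the official SDK, and the header values below are invented.

```python
def parse_binary_cloudevent(headers: dict, body: bytes) -> dict:
    """Pull CloudEvents context attributes out of ce-* HTTP headers."""
    lower = {k.lower(): v for k, v in headers.items()}
    # The spec makes these four context attributes mandatory.
    required = {"ce-id", "ce-source", "ce-type", "ce-specversion"}
    missing = required - lower.keys()
    if missing:
        raise ValueError(f"not a CloudEvent; missing {sorted(missing)}")
    event = {k[3:]: v for k, v in lower.items() if k.startswith("ce-")}
    event["data"] = body  # payload stays in the HTTP body in binary mode
    return event
```

A hypothetical S3-style notification would arrive with a `ce-type` such as `object.created`; a Knative source produces exactly this kind of request, and the serving side spins a container up to handle it.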
>> So KnativeCon, co-located event as part of KubeCon, I'm curious as to the level of effort in the user interaction for deploying Knative. 'Cause when I think about Lambda or cloud-run or one of the other functions as a servers, there is no backend that I have to worry about. And I think this is where some of the debate becomes over serverless versus some other definition. What's the level of lifting that needs to be done to deploy Knative in my Kubernetes environment? >> So if you like... >> Is this something that comes as based part of the OpenShift install or do I have to like, you know, I have to... >> Go ahead, you answer first. >> Okay, so actually for OpenShift, it's a code layer product. So you have this catalog of operator that you can choose from, and OpenShift Serverless is one part of that. So it's really kind of a one click install where you have also get a default configuration, you can flexibly configure it as you like. Yeah, we think that's a good user experience and of course you can go to these cloud offerings like Google Cloud one or IBM Code Engine, they just have everything set up for you. And the idea of other different alternatives, you have (indistinct) charts, you can install Knative in different ways, you also have options for the backend systems. For example, we mentioned that when an event comes in, then there's a broker in the middle of something which dispatches all the events to the servers, and there you can have a different backend system like Kafka or AMQ. So you can have very production grade messaging system which really is responsible for delivering your events to your servers. >> Now, Knative has recently, I'm sorry, did I interrupt you? >> No, I was just going to say that Knative, when we talk about, we generally just talk about the serverless deployment model, right? And the Eventing gets eclipsed in. That Eventing which provides this infrastructure for producing and consuming event is inherent part of Knative, right? 
So you install Knative, you install Eventing, and then you are ready to connect all your disparate systems through events. With CloudEvents, that's the specification we use for consistent and portable events. >> So Knative was recently accepted by the Cloud Native Computing Foundation, incubating there. Congratulations, it's a big step. >> Thank you. >> Thanks. >> How does that change the outlook for Knative adoption? >> So we get a lot of support now from the CNCF, which is really great, so we could be part of this conference, for example, which was not so easy before that. And we see really a lot of interest, and we also heard before the move that many contributors had not started looking into Knative because of it not being part of a neutral foundation, so they were kind of afraid that the project would go away anytime, like that. And we see the adoption really increases, but slowly at the moment. So we are still ramping up there and we really hope for more contributors. Yeah, that's where we are. >> CNCF is almost synonymous with open source and trust. So, being in CNCF and then having this first KnativeCon event as part of KubeCon, and it's a recent addition to CNCF as well, right? So we are hoping that these events and these interviews will catapult more interest into serverless. So I'm really, really hopeful, and I only see positive from here on out for Knative. >> Well, I can sense the excitement. KnativeCon sold out, congratulations on that. >> Thank you. >> I can talk about serverless all day, it's a topic that I really love. It's a fascinating way to build applications and manage applications, but we have a lot more coverage to do today on "theCUBE" from Spain. From Valencia, Spain, I'm Keith Townsend along with Paul Gillin, and you're watching "theCUBE," the leader in high-tech coverage. (gentle upbeat music)

Published Date : May 19 2022



Dave Cope, Spectro Cloud | Kubecon + Cloudnativecon Europe 2022


 

(upbeat music) >> theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >> Valencia, Spain, a KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend along with Paul Gillin, Senior Editor Enterprise Architecture for SiliconANGLE. Welcome Paul. >> Thank you Keith, pleasure to work with you. >> We're going to have some amazing people this week. I think I saw a stat this morning, 65% of the attendees, 7,500 folks. First time KubeCon attendees, is this your first conference? >> It is my first KubeCon and it is amazing to see how many people are here, and to think of just a couple of years ago, three years ago, we were still talking about what the Cloud was, what the Cloud was going to do and how we were going to integrate multiple Clouds. And now we have this whole new framework for computing that has just rifled out of nowhere. And as we can see by the number of people who are here, this has become the dominant trend in Enterprise Architecture right now: how to adopt Kubernetes and containers, build microservices based applications, and really get to that transparent Cloud that has been so elusive. >> It has been elusive. And we are seeing vendors from startups with just a few dozen people, to some of the traditional players we see in the enterprise space with 1000s of employees, looking to capture kind of lightning in a bottle so to speak, this elusive concept of multicloud. >> And what we're seeing here is very typical of an early stage conference. I've seen many times over the years where the floor is really dominated by companies, frankly, I've never heard of. Many of them are only two or three years old, you don't see the big dominant computing players with the presence here that these smaller companies have. That's very typical. We saw that in the PC age, we saw it in the early days of Unix and it's happening again.
And what will happen over time is that a lot of these companies will be acquired, there'll be some consolidation. And the nature of this show will change, I think dramatically over the next couple or three years, but there is an excitement and an energy in this auditorium today that is really a lot of fun and very reminiscent of other new technologies just as they crested. >> Well, speaking of new technologies, we have Dave Cope, CRO, Chief Revenue Officer. >> That's right. >> Chief Marketing Officer of Spectro Cloud. Welcome to the show. >> Thank you. It's great to be here. >> So let's talk about this big ecosystem, Kubernetes. >> Yes. >> Solved problem? >> Well the dream is... Well, first of all applications are really the lifeblood of a company, whether it's our phone or whether it's a big company trying to connect with its customers through applications. And so the whole idea today is how do I build these applications to build that tight relationship with my customers? And how do I reinvent these applications rapidly? And along comes containerization, which helps you innovate more quickly. And certainly a dominant technology there is Kubernetes. And the question is, how do you get Kubernetes to help you build applications that can be born anywhere and live anywhere and take advantage of the places that it's running? Because everywhere has pluses and minuses. >> So you know what, the promise of Kubernetes from when I first read about it years ago is, runs on my laptop? >> Yeah. >> I can push it to any Cloud, any platforms. >> That's right, that's right. >> Where's the gap? Where are we in that phase? Like talk to me about scale? Is it that simple? >> Well, that is actually the problem: that today, while the technology is the dominant containerization technology and orchestration technology, it really still takes a power user, it really hasn't been very approachable to the masses.
And so it was these very expensive, highly skilled resources that sit in a dark corner and have focused on Kubernetes, but that now is trying to evolve to make it more accessible to the masses. It's not about sort of hand wiring together what is a typical 20 layer stack to really manage Kubernetes, and then have your engineers manually reconfigure it and make sure everything works together. Now it's about how do I create these stacks, make it easy to deploy and manage at scale? So we've gone from sort of DIY, developer-centric, to all right, now how do I manage this at scale?
And because the typical stack from the operating system to the application can be up to 20 different layers, components, you just want all those components to work together, you don't want application developers to worry about those things. And the latest technologies, like Spectro Cloud and others, are making that easy: application engineers focus on their apps, all of the infrastructure and the services are taken care of. And those apps can then live natively on any environment.
How do I be able to make it really easy to provision resources for applications on any environment, from either a virtualized or bare metal data center, Cloud, or today Edge is really big, where people are trying to push applications out to be closer to the source of the data. And so you want to be able to deploy at-scale, you want to manage at-scale, you want to make it easy to, as I said earlier, allow application developers to build their applications, but ITOps wants the ability to ensure security and governance and all of that. And then finally innovate at-scale. If you look at this show, it's interesting, three years ago when we started Spectro Cloud, there were about 1400 businesses or technologies in the Kubernetes ecosystem, today there's over 1800, and with all of these technologies, made up of open source and commercial, all versioning at different rates, it becomes an insurmountable problem unless you can set those guardrails, sort of that balance between flexibility and control, let developers access the technologies, but again, manage it as a part of your normal processes of a scaled operation. >> So Dave, I'm a little challenged here, because I'm hearing two terms I typically consider conflicting. Flexibility, control. >> Yes. >> In order to achieve control, I need complexity, in order to choose flexibility, I need t-shirt, one t-shirt fits all, and I get simplicity. How can I get both? That just doesn't compute. >> Well, that's the opportunity and the challenge at the same time. So you're right. So developers want choice, good developers want the ability to choose the latest technology so they can innovate rapidly. And yet ITOps wants to be able to make sure that there are guardrails. And so with some of today's technologies, like Spectro Cloud, you have the ability to get both. We actually worked with Dimensional Research, and we sponsor an annual State of Kubernetes survey.
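The flexibility-versus-control balance Dave describes can be pictured as a simple policy check: developers request whatever they want, and ITOps guardrails flag anything outside the approved envelope instead of silently blocking it. This is a sketch only; the guardrail fields below are invented for illustration and are not Spectro Cloud's actual schema:

```python
# Hypothetical org guardrails; field names are illustrative, not a real product API.
GUARDRAILS = {
    "allowed_k8s_versions": {"1.22", "1.23"},
    "required_addons": {"network-policy", "audit-logging"},
    "max_nodes": 50,
}

def violations(cluster_spec, guardrails=GUARDRAILS):
    """Return a list of guardrail violations for a requested cluster spec."""
    problems = []
    if cluster_spec["version"] not in guardrails["allowed_k8s_versions"]:
        problems.append(f"version {cluster_spec['version']} not approved")
    missing = guardrails["required_addons"] - set(cluster_spec.get("addons", []))
    if missing:
        problems.append(f"missing required add-ons: {sorted(missing)}")
    if cluster_spec.get("nodes", 0) > guardrails["max_nodes"]:
        problems.append("node count exceeds cap")
    return problems

# A developer asks for the newest version with no add-ons: flagged, not rejected opaquely.
print(violations({"version": "1.24", "addons": [], "nodes": 10}))
```

The point of the pattern is that the developer keeps free choice inside the envelope, while every request outside it produces an explainable, auditable answer.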
We found this last summer, that two out of three IT executives said you could not have both flexibility and control together, but in fact they want it. And so it is this interesting balance: how do I give engineers the ability to get anything they want, but ITOps the ability to establish control? And that's why Kubernetes is really at its next inflection point. Where, as I mentioned, it's not debates about the distro or DIY projects. It's not big incumbents creating siloed Kubernetes solutions, but in fact it's about allowing all these technologies to work together and be able to establish these controls. And that's really where the industry is today. >> Enterprise, enterprise CIOs, do not typically like to take chances. Now we were talking about the growth in the market that you described, from 1400 to 1800 vendors, most of these companies very small startups. Are you seeing enterprises willing to take a leap with these unproven companies? Or are they holding back and waiting for the IBMs, the HPs, the Microsofts to come in, with the VMwares, with whatever solution they have? >> I think so. I mean, we sell to the global 2000. We had yesterday, as a part of Edge day here at the event, we had GE Healthcare as one of our customers telling their story, and they're a market share leader in medical imaging equipment, X-rays, MRIs, CAT scans, and they're starting to treat those as Edge devices. And so here is a very large established company, a leader in their industry, working with people like Spectro Cloud, realizing that Kubernetes is interesting technology. The Edge is an interesting thought, but how do I marry the two together? So we are seeing large corporations seeing so much of an opportunity that they're working with the smaller companies, the latest technology. >> So let's talk about the Edge a little, you kind of opened it up there. How should customers think about the Edge versus the Cloud Data Center or even bare metal? >> Actually it's a...
Well, bare metal is fairly easy: many people are looking to reduce some of the overhead or inefficiencies of the virtualized environment. But we've had really sort of parallel little white tornadoes, we've had bare metal as infrastructure that's been developing, and then we've had orchestration developing, but they haven't really come together very well. Lately, we're finally starting to see that come together. Spectro Cloud contributed to open source a metal-as-a-service technology that finally brings these two worlds together, making bare metal much more approachable to the enterprise. Edge is interesting, because it seems pretty obvious: you want to push your application out closer to your source of data, whether it's AI inferencing, or IoT or anything like that, and you don't want to worry about intermittent connectivity or latency or anything like that. But people have wanted to be able to treat the Edge as if it's almost like a Cloud, where all I worry about is the app. So really, the Edge to us is just the next extension in a multi-Cloud sort of motif, where I want these Edge devices to require low IT resources, to automate the provisioning, automate the ongoing version management, patch management, really act like a Cloud. And we're seeing this as very popular now. And I just used the GE Healthcare example of that, imagine a CAT scan machine, I'm making this part up, in China, and that's just an Edge device, and it's doing medical imagery which is very intense in terms of data, and you want to be able to process it quickly and accurately, as close to the endpoint, the healthcare provider, as possible.
Well, we announced our Edge Kubernetes solution at a big medical conference called HIMSS, months ago. And what we allow you to do is we allow the application engineers to develop their application, and then you can design this declarative model, this cluster API, but beyond that a cluster profile, which determines which additional application services you need. And at the Edge device, all the person has to do at the endpoint is plug in the power, plug in the communications; it registers the Edge device, it automates the deployment of the full stack, and then it does the ongoing versioning and patch management, sort of a self-driving Edge device running Kubernetes. And we make it just very easy. No IT resources required at the endpoint, no expensive field engineering resources to go to these endpoints twice a year to apply new patches and things like that, all automated. >> But there's so many different types of Edge devices with different capabilities, different operating systems, some have no operating system. I mean that seems like a much more complex environment, just calling it the Edge is simple, but what you're really talking about is 1000s of different devices that you have to run your applications on. How are you dealing with that? >> So one of the ways is that we're really unbiased. In other words, we're OS and distro agnostic. So we don't want to debate about which distribution you like, we don't want to debate about which OS you want to use. The truth is, you're right. There's different environments and different choices that you'll want to make. And so the key is, how do you incorporate those and also recognize everything beyond those, OS and Kubernetes and all of that, and manage that full stack. So that's what we do, is we allow you to choose which tools you want to use and let it be deployed and managed on any environment. >> And who's... >> So... >> I'm sorry Keith, who's responsible for making Kubernetes run on the Edge device? >> We do.
We provision the entire stack. I mean, of course the company does, using our product, but we provision the entire Kubernetes infrastructure stack, all the application services and the application itself on that device. >> So I would love to dig into like where pods happen and all that. But provisioning is getting to the point that it's a solved problem. Day two. >> Yes. >> Like you just mentioned HIMSS, highly regulated environments. How is Spectro Cloud helping with configuration management, change control, audit, compliance, et cetera, the hard stuff? >> Yep. And one of the things we do, you bring up a good point, is we manage the full life cycle from day zero, which is sort of create, deploy, all the way to day two, which is about access control, security, it's about ongoing versioning and patch management. It's all of that built into the platform. But you're right, like the medical industry has a lot of regulations. And so you need to be able to make sure that everything works, is always up to the latest level and has the highest level of security. And so all that's built into the platform. It's not just fire and forget, it really is about that full life cycle of deploying, managing on an ongoing basis. >> Well, Dave, I'd love to go into a great deal of detail with you about kind of this day two ops, and I think we'll be covering a lot more of that topic, Paul, throughout the week, as we talk about, just as we've gotten past how do I deploy a Kubernetes pod, how do I actually operate IT? >> Absolutely, absolutely. The devil is in the details as they say. >> Well, and also too, you have to recognize that the Edge has some very unique requirements: you want very small form factors, typically, you want low IT resources, it has to be sort of zero touch or low touch, because if you're a large food provider with 20,000 store locations, you don't want to send out field engineers two or three times a year to update them.
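The "self-driving" day-two behavior described above amounts to continuously reconciling a declarative desired state against whatever the device actually reports. A toy sketch of that loop, with invented field names rather than any vendor's real cluster-profile format:

```python
# Illustrative only: a declarative desired state for an edge device, in the
# spirit of the "cluster profile" idea; the schema here is hypothetical.
desired = {
    "k8s_version": "1.23",
    "addons": {"monitoring": "2.1", "cert-manager": "1.8"},
}

def reconcile(desired, actual):
    """Return the ordered actions needed to bring a device to the desired state."""
    actions = []
    if actual.get("k8s_version") != desired["k8s_version"]:
        actions.append(("upgrade-k8s", desired["k8s_version"]))
    for name, version in desired["addons"].items():
        if actual.get("addons", {}).get(name) != version:
            actions.append(("install-addon", name, version))
    return actions

# A freshly plugged-in device reports an empty state; the full stack rollout
# falls out of the diff. Patch day is the same call with a newer `desired`.
print(reconcile(desired, {}))
```

Because the endpoint only ever diffs and converges, no field engineer has to visit it; shipping a new desired state is the whole upgrade.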
So it really is an interesting beast, and we have some exciting technology, and people like GE are using that. >> Well, Dave, thanks a lot for coming on theCUBE. You've not been on before? >> I have actually, yes... But I always enjoy it. >> Great conversation. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date : May 19 2022



Haseeb Budhani, Rafay & Adnan Khan, MoneyGram | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: theCUBE presents "Kubecon and Cloudnativecon Europe 2022" brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to theCUBE coverage of KubeCon 2022, E.U. I'm here with my cohost, Paul Gillin. >> Pleased to work with you, Keith. >> Nice to work with you, Paul. And we have our first two guests. "theCUBE" is hot. I'm telling you, we are having interviews before the start of even the show floor. I have with me, we got to start with the customers first, Enterprise Architect Adnan Khan, welcome to the show. >> Thank you so much. >> Keith: CUBE time first, now you're a CUBE alumni. >> Yup. >> And Haseeb Budhani, CEO of Rafay, welcome back. >> Nice to talk to you again today. >> So, we're talking all things Kubernetes and we're super excited to talk to MoneyGram about their journey to Kubernetes. First question I have for Adnan: talk to us about what your pre-Kubernetes landscape looked like. >> Yeah. Certainly, Keith. So, we had a traditional mix of legacy applications and modern applications. A few years ago we made the decision to move to a microservices architecture, and this was all happening while we were still on-prem. So, your traditional VMs. And we started with 20, 30 microservices, but with the microservices pattern you quickly expand to hundreds of microservices. And we started getting to that stage where managing them without sort of an orchestration platform, and just as traditional VMs, was getting to be really challenging, especially from a day two operational standpoint. You can manage 10, 15 microservices, but when you start having 50, and so forth, all those concerns around high availability, operational performance. So, we started looking at some open-source projects. Spring Cloud, we are predominantly a Java shop, so we looked at the Spring Cloud projects. They give you a number of primitives for doing some of that management.
And what we realized again, to manage those components without sort of a platform, was really challenging. So, that kind of led us to sort of Kubernetes, where, along with our journey to cloud, it was the platform that could help us with a lot of those management, operational concerns. >> So, as you talk about some of those challenges, pre-Kubernetes, what were some of the operational issues that you folks experienced? >> Yeah, certain things like auto scaling is number one. I mean, that's a fundamental concept of cloud native, right? How do you auto scale VMs, right? You can put in some old methods and stuff, but it was really hard to do that automatically. So, Kubernetes with HPA gives you those out of the box. Provided you set the right policies, you can have auto scaling where it can scale up and scale back; we were doing that manually. So, before, you know, at MoneyGram, obviously, holiday season, people are sending more money, Mother's Day. Our Ops team would go and basically manually scale VMs. So, we'd go from four instances to maybe eight instances, but that entailed outages. And just to plan around doing that manually, and then sort of scale them back, was a lot of overhead, a lot of administration overhead. So, we wanted something that could help us do that automatically, in an efficient and unintrusive way. That was one of the things. Monitoring and management operations, just kind of visibility into how those applications were doing, what the status of your workloads was, was also a challenge. >> So, Haseeb, I got to ask the question. If someone would've come to me with that problem, I'd just say, "You know what? Go to the public cloud." How does your group help solve some of these challenges? What do you guys do? >> Yeah. What do we do? Here's my perspective on the market as it's playing out. So, I see a bifurcation happening in the Kubernetes space. There's the Kubernetes runtime, so Amazon has EKS, Azure has AKS.
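For reference, the out-of-the-box scaling Adnan credits to Kubernetes' HorizontalPodAutoscaler follows a documented rule: desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue). A quick sketch of that arithmetic:

```python
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric):
    """The HPA scaling rule from the Kubernetes docs:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_metric / target_metric)

# Holiday-season spike: 4 pods running at 180% of the CPU target -> scale out.
print(desired_replicas(4, 180, 100))  # 8
# Quiet period: 8 pods at 40% of target -> scale back in, no Ops ticket needed.
print(desired_replicas(8, 40, 100))   # 4
```

Note how the first case reproduces exactly the four-to-eight-instance jump the Ops team used to perform by hand, and the second reverses it automatically.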
There's enough of these available, they're all managed services, and they're actually really good, frankly. In fact, I tell customers, if you're on Amazon, why would you spin up your own? Just use EKS, it's awesome. But then, there's an operational layer that is needed to run Kubernetes. My perspective is that 50,000 enterprises are adopting Kubernetes over the next 5 to 10 years. And they're all going to go through the same exact journey, and they're all going to end up potentially making the same mistake, which is, they're going to assume that Kubernetes is easy. They're going to say, "Well, this is not hard. I got this up and running on my laptop. This is so easy, no worries. I can do EKS." But then, okay, can you consistently spin up these things? Can you scale them consistently? Do you have the right blueprints in place? Do you have the right access management in place? Do you have the right policies in place? Can you deploy applications consistently? Do you have monitoring and visibility into those things? Do your developers have access when they need it? Do you have the right networking layer in place? Do you have the right chargebacks in place? Remember you have multiple teams. And by the way, nobody has a single cluster, so you got to do this across multiple clusters. And some of them have multiple clouds. Not because they want to be multi-cloud, but because sometimes you buy a company, and they happen to be in Azure. How many dashboards do you have now across all the open-source technologies that you have identified to solve these problems? This is where pain lies. So, I think that Kubernetes is fundamentally a solved problem. Like our friends at AWS and Azure, they've solved this problem. AKS, EKS, GKE for that matter. They're great, and you should use them, and don't even think about spinning up your own Kubernetes clusters. Don't do it, use the platforms that exist. And commensurately on-premises, OpenShift is pretty awesome.
If you like it, use it. But then when it comes to the operations layer, that's where today, we end up investing in a DevOps team, and then an SRE organization that need to become experts in Kubernetes, and that is not tenable. Can you, let's say unlimited capital, unlimited budgets. Can you hire 20 people to do Kubernetes today? >> If you could find them. >> If you can find 'em, right? So, even if you could, the point is that, see five years ago when your competitors were not doing Kubernetes, it was a competitive advantage to go build a team to do Kubernetes so you could move faster. Today, you know, there's a high chance that your competitors are already buying from a Rafay or somebody like Rafay. So, now, it's better to take these really, really sharp engineers and have them work on things that make the company money. Writing operations for Kubernetes, this is a commodity now. >> How confident are you that the cloud providers won't get in and do what you do and put you out of business? >> Yeah, I mean, absolutely. In fact, I had a conversation with somebody from HBS this morning and I was telling them, I don't think you have a choice, you have to do this. Competition is not a bad thing. If we are the only company in a space, this is not a space, right? The bet we are making is that every enterprise, they have an on-prem strategy, they have at least a handful of, everybody's got at least two clouds that they're thinking about. Everybody starts with one cloud, and then they have some other cloud that they're also thinking about. For them to only rely on one cloud's tools to solve for on-prem, plus that second cloud, they potentially they may have, that's a tough thing to do. And at the same time, we as a vendor, I mean, the only real reason why startups survive, is because you have technology that is truly differentiator. Otherwise, I mean, you got to build something that is materially interesting, right? We seem to have- >> Keith: Now. Sorry, go ahead. 
>> No, I was going to, you actually have me thinking about something. Adnan? >> Yes. >> MoneyGram, big, well-known company. Rafay, a startup, working in a space with Google, VMware, all the biggest names. What brought you to Rafay to solve this operational challenge? >> Yeah. A good question. So, when we started out sort of on our Kubernetes journey, we had heard about EKS, and we are an AWS shop, so that was the most natural path. And we looked at EKS and used that to create our clusters. But then we realized very quickly that, yes, to Haseeb's point, AWS manages the control plane for you, it gives you the high availability, so you're not managing those components, which is some really heavy lifting. But then what about all the other things, like a centralized dashboard? What about, we need to provision Kubernetes clusters on multicloud, right? We have other clouds that we use, or also on-prem, right? How do you do some of that stuff? We also, at that time, were looking at other tools. And I remember coming up with an MVP list that we needed to have in place for day one or day two operations before we even launched any single application into production. And my Ops team looked at that list and literally, there were only one or two items that they could check off with EKS. They've got the control plane, they've got the cluster provisioned, but what about all those other components? And some of that kind of led us down the path of looking at, "Hey, what's out there in this space?" And we realized pretty quickly that there weren't too many. There were some large providers with capabilities like Anthos, but we felt that it was a little too much for what we were trying to do at that point in time. We wanted to scale slowly, we wanted to minimize our footprint, and Rafay seemed to be a nice mix from all those different angles. >> How was the situation affecting your developer experience? >> So, that's a really good question also.
So, operations was one aspect to it. The other part is the application development. MoneyGram, like a lot of organizations, has a plethora of technologies, from Java, to .NET, to Node.js, what have you, right? Now, as you start saying, okay, now we're going cloud native and we're going to start deploying to Kubernetes, there's a fair amount of overhead, because a tech stack all of a sudden goes from just being Java or just being .NET, to things like Docker. All these container orchestration and deployment concerns, Kubernetes deployment artifacts, (chuckles) I got to write all this YAML, as my developers say, "YAML hell." (panel laughing) I got to learn Dockerfiles. I need to figure out a package manager like Helm on top of learning all the Kubernetes artifacts. So, initially, we went with sort of, okay, you know, we can just train our developers. And that was wrong. I mean, you can't assume that everyone is going to sort of learn all these deployment concerns and will adopt them. There's a lot of stuff that's outside of their sort of core dev domain, and you're putting all this burden on them. So, we could not rely on them to be sort of kubectl experts, right? That's a fair amount of overhead and learning curve there. So, Rafay again, from their dashboard perspective, their managed kubectl gives you that easy access for devs, where they can go and monitor the status of their workloads. They don't have to figure out configuring all these tools locally just to get it to work. We did some things from a DevOps perspective to basically streamline and automate that process. But then also Rafay came in and helped us out by providing that dashboard. They can basically get on through single sign-on and have visibility into the status of their deployment. They can do troubleshooting, diagnostics, all through a single pane of glass, which was a key item. Initially, before Rafay, we were doing that via command line.
And again, just getting some of the tools configured was huge; it took us days just to get that. And then the learning curve for development teams: "Oh, now you've got the tools, now you've got to figure out how to use them." >> So, Haseeb, talk to me about the cloud native infrastructure. When I look at that entire landscape, I'm just overwhelmed by it. As a customer, I look at it, I'm like, "I don't know where to start." I'm sure, Adnan, you folks looked at it and said, "Wow, there are so many solutions." How do you engage with the ecosystem? You have to be at some level opinionated but flexible enough to meet every customer's needs. How do you approach that? >> So, it's a really tough problem to solve because... So, the thing about abstraction layers, we all know how that plays out, right? Abstraction layers are fundamentally never the right answer, because they will never catch up, because you're trying to write a layer on top. So, then we had to solve the problem, which was, well, we can't be an abstraction layer, but at the same time we need to provide some sort of centralization, standardization. So, we have the following dissonance in our platform, which is actually really important to solving the problem. We think of a stack as four things. There's the Kubernetes layer, the infrastructure layer, and EKS is different from AKS, and that's okay. If we try to bring them all together and make them behave as one, our customers are going to suffer, because there are features in EKS that I really want, but if you write an abstraction then I'm not going to get them, so not okay. So, treat them as individual things that we now curate. Every time EKS, for example, goes from 1.22 to 1.23, we write new product just so my customer can press a button and upgrade these clusters. Similarly, we do this for AKS, we do this for GKE. It's a really, really hard job, but that's the job, we've got to do it.
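Haseeb's "curate each distribution, don't abstract them" argument can be sketched in a few lines. This is a toy illustration with invented names, not Rafay's actual code: each distribution keeps its own upgrade path, free to use distro-only features, while genuinely common concerns like access policy live above the per-distro layer:

```python
# Toy sketch of "curate per distribution, share only what is truly common".
# All names are hypothetical -- not Rafay's code, just the design idea:
# no lowest-common-denominator abstraction over the clouds.

class EKS:
    def upgrade(self, cluster, target):
        # Amazon-specific path, free to lean on EKS-only behavior.
        return f"eks: update-cluster-version {cluster} --kubernetes-version {target}"

class AKS:
    def upgrade(self, cluster, target):
        # Azure-specific path.
        return f"aks: az aks upgrade --name {cluster} --kubernetes-version {target}"

DISTROS = {"eks": EKS(), "aks": AKS()}

def press_button_upgrade(distro, cluster, target):
    # The "press a button" experience: dispatch to the curated handler.
    return DISTROS[distro].upgrade(cluster, target)

def apply_access_policy(user, clusters):
    # Add-ons like access management really are the same everywhere,
    # so they sit above the per-distro layer.
    return {c: f"grant {user} view" for c in clusters}
```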
On top of that, you have these things called add-ons, like my network policy, my access management policy, my et cetera. These things are all actually the same. So, whether I'm on EKS or AKS, I want the same access for Keith versus Adnan, right? Those components are the same across, doesn't matter how many clusters, doesn't matter how many clouds. On top of that, you have applications. And when it comes to the developer, in fact, I do the following demo a lot of times, because people ask the question. People say things like, "I want to run the same Kubernetes distribution everywhere because this is like Linux." Actually, it's not. So, I do a demo where I spin up access to an OpenShift cluster, an EKS cluster, and an AKS cluster. And I say, "Log in, show me which one is which." They're all the same. >> So, Adnan, make that real for me. I'm sure after this amount of time, developer groups have come to you with things that are snowflakes. And as an enterprise architect, you have to make it work within your framework. How has working with Rafay made that possible? >> Yeah, so I think one of the very common concerns is the whole deployment, to Haseeb's point. From a deployment perspective, it's still using Helm, it's still using some of the same tooling. Rafay gives us some tools. You know, they have a command-line kubectl API that essentially we use. We wanted parity across all our different environments, different clusters, it doesn't matter where you're running. So, that gives us basically a consistent API for deployment. We've also had challenges with just some of the tooling in general, and we worked with Rafay to actually extend their kubectl API for us so that we have a better deployment experience for our developers. >> Haseeb, how long does this opportunity exist for you?
At some point, do the cloud providers figure this out, or does the open-source community figure out how to do what you've done, and this opportunity is gone? >> So, I think back to a platform that I think very highly of, which has been around a long time and continues to live: vCenter. I think vCenter is awesome. And it's beautiful; VMware did an incredible job. What is its job? Its job is to manage VMs, right? But then it's also access, it's also storage, it's also networking and security, right? All these things got done because, to solve a real problem, you have to think about all the things that come together to help you solve that problem from an operations perspective. My view is that this market needs essentially a vCenter, but for Kubernetes, right? And that is a very broad problem. And it's going to span clouds; it's not about one cloud. I mean, every cloud should build this. Why would they not? It makes sense. It ought to exist, right? Everybody should have one. But then, the clarity in thinking that the Rafay team seems to have exhibited to date seems to merit an independent company, in my opinion. I mean, from a technical perspective, this product's awesome, right? We seem to have no real competition when it comes to this broad breadth of capabilities. Will it last? We'll see, right? I mean, I keep doing "CUBE" shows, right? So, every year you can ask me that question again, and we'll see. >> You make a good point though. I mean, you're up against VMware, you're up against Google. They're both trying to do sort of the same thing you're doing. Why are you succeeding? >> Maybe it's focus. Maybe it's because of the right experience. I think only in hindsight can one tell why a startup was successful. In all honesty, I've been in one or two startups in the past, and there's a lot of luck to this, there's a lot of timing to this. I think the timing for a product like this is perfect.
Like, three, four years ago, nobody would've cared. Like, honestly, nobody would've cared. This is the right time to have a product like this in the market, because so many enterprises are now thinking of modernization. And because everybody's doing this, it's like the bootstrap problem in HCI. Everybody's doing it, but there are only so many people in the industry who actually understand this problem, so they can't even hire the people. And the CTOs say, "I've got to go. I don't have the people, I can't fill the seats." And then they look for solutions, and via that solution, we're going to get embedded. And when you have infrastructure software like this embedded in your solution, we're going to be around... Assuming, obviously, we don't screw up, right? We're going to be around with these companies for some time. We're going to have strong partners for the long term. >> Well, vCenter for Kubernetes, I love to end on that note. Intriguing conversation; we could go on forever on this topic, 'cause there's a lot of work to do. I don't think this will ever be a solved problem for Kubernetes and cloud native solutions, so I think there's a lot of opportunity in that space. Haseeb Budhani, thank you for rejoining "theCUBE." Adnan Khan, welcome to becoming a CUBE alum. >> (laughs) Awesome. Thank you so much. >> Check out our profile on the show's website, it's really cool. From Valencia, Spain, I'm Keith Townsend, along with my host Paul Gillin. And you're watching "theCUBE," the leader in high tech coverage. (bright upbeat music)

Published Date : May 19 2022



Matt Provo & Patrick Bergstrom, StormForge | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, and we're at KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend, and my co-host, Enrico Signoretti. Enrico's really proud of me. I've called him Enrico instead of Enrique every session. >> Every day. >> Senior IT analyst at GigaOm. We're talking to fantastic builders at KubeCon, CloudNativeCon Europe 2022 about the projects and their efforts. Enrico, up to this point it's been all about provisioning and security. What conversation have we been missing? >> Well, I mean, I think we've passed the point of having the conversation about deployment, about provisioning. Everybody's very skilled; actually, everything is done, and now it's day two. They are discovering that, well, there is a security problem, there is an observability problem, and in fact, we are meeting a lot of people, and there are a lot of conversations with people really needing to understand what is happening in their clusters, why it is happening, and all the questions that come with it. And the more I talk with people on the show floor here, or even in the various sessions, it's about: we are growing, our clusters are becoming bigger and bigger, and applications are becoming bigger as well. So we need to understand better what is happening. And it's not only about cost, it's about everything in the end. >> So I think that's a great setup for our guests: Matt Provo, founder and CEO of StormForge, and Patrick Brixton? >> Bergstrom. >> Bergstrom. >> Yeah. >> I spelled it right, I didn't say it right: Bergstrom, CTO. We're at KubeCon, CloudNativeCon, where projects are discussed and built, and StormForge, I've heard the pitch before, so forgive me. And I'm kind of torn. I have a service mesh; what more do I need? Like, what problem is StormForge solving? >> You want to take it? >> Sure, absolutely.
So it's interesting, because my background is in the enterprise, right? I was an executive at UnitedHealth Group, before that I worked at Best Buy, and one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turn that all the way to the right and everything's hunky-dory, right? But then we run into the issue, like you and I were just talking about, where it gets very, very expensive very quickly. And so in my first conversations with Matt and the StormForge group, they were telling me about the product and what we're dealing with, and I said, that is the problem statement that I have always struggled with, and I wish this existed 10 years ago when I was dealing with EC2 costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically, what it is, is we take your raw telemetry data and we essentially monitor the performance of your application, and then we can tell you, using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for, without over-provisioning. So we reduce your consumption of CPU and memory in production, which, nine times out of 10, actually I would say 10 out of 10, reduces your cost significantly without sacrificing reliability. >> So can your solution also help to optimize the application in the long run? Because, yes, of course-- >> Yep. >> Lowering it, as you know, optimizes the deployment. >> Yeah. >> But actually, the long term is optimizing the application. >> Yes. >> Which is the real problem. >> Yep. >> So, we're fine with the former of what you just said, but we exist to do the latter. And so we're squarely and completely focused at the application layer. As long as you can track or understand the metrics you care about for your application, we can optimize against it.
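A rough sketch of the right-sizing idea Patrick describes. This is not StormForge's algorithm (theirs is ML-driven); it is a simple percentile-plus-headroom heuristic, invented here only to make the over-provisioning arithmetic concrete:

```python
# Toy right-sizing sketch: recommend a resource request from observed usage.
# Heuristic and numbers are illustrative, not StormForge's method.

def recommend_request(samples, percentile=0.95, headroom=1.15):
    """Recommend the 95th-percentile observed usage plus 15% headroom."""
    ordered = sorted(samples)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx] * headroom

# A service provisioned at 4000m CPU whose observed usage mostly sits near 400m:
cpu_millicores = [350, 420, 380, 900, 410, 390, 460, 405, 370, 440]
rec = recommend_request(cpu_millicores)   # recommendation lands near 1035m
savings = 1 - rec / 4000                  # roughly 74% less CPU requested
```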
We love that we don't know your application; we don't know what the SLA and SLO requirements are for your app, you do. And so, in our world it's about empowering the developer into the process, not automating them out of it, and I think sometimes AI and machine learning get a bad rap from that standpoint. The company's been around since 2016, kind of from the very early days of Kubernetes, and we've always been squarely focused on Kubernetes, using our core machine learning engine to optimize metrics at the application layer that people care about and need to go after. And the truth of the matter is, today and over time, setting a cluster up on Kubernetes has largely been solved. And yet with the promise of Kubernetes around portability and flexibility, downstream when you operationalize, the complexity smacks you in the face, and that's where StormForge comes in. So we're a vertical, kind of vertically oriented solution that's absolutely focused on solving that problem. >> Well, I don't want to play, actually, I want to play the devil's advocate here and-- >> You wouldn't be a good analyst if you didn't. >> So the problem is, when you talk with clients, users, there are many of them still working with Java, something that is really tough. I mean, all of us loved Java. >> Yeah, absolutely. >> Maybe 20 years ago. Yeah, not anymore, but still they have developers, they have ported applications, microservices, yes, but not very optimized, et cetera, et cetera. So it's becoming tough. So how can you interact with these kinds of old, hybrid, or anyway not well-engineered applications? >> Yeah. >> We do that today. Part of our platform is we offer performance testing in a lower environment, in stage, and, like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application.
A perfect example is Java: you have to worry about your heap size, your garbage collection tuning, and one of the things that really struck me very early on about the StormForge product, because it is true machine learning, is that you remove the human bias from that. A lot of what I did in the past, especially around SRE and performance tuning, we were only as good as our humans were, because of what they knew. And so we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of that. >> So both me and Enrico have been doing this for a long time. Like, I have battled to my last breath the argument, when it's bare metal or a VM: look, I cannot give you any more memory. >> Yeah. >> And the argument going all the way up to the CIO, and the CIO basically saying, you know what, Keith, you're cheap, my developer resources are expensive, buy a bigger box. >> Yeah. >> Yep. >> Buying a bigger box in the cloud, to your point, is no longer an option because it's just expensive. >> Yeah. >> Talk to me about the carrot or the stick as developers are realizing that they have to be more responsible. Where's the culture change coming from? Is it the shift in responsibility? >> I think the center of the bullseye for us is within those sets of decisions, not in a static way but in an ongoing way, especially as the development of applications, and the management of them, becomes more and more rapid. Our charge and our belief, wholeheartedly, is that you shouldn't have to choose. You should not have to choose between cost and performance. You should not have to choose where your applications live, in a public, private, or hybrid cloud environment.
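Patrick's earlier point, that an automated search escapes human bias, can be illustrated with a toy example. The "latency model" below is entirely made up for the sketch; a real optimizer measures a live system instead of evaluating a formula:

```python
import random

# Toy illustration: random search over a JVM-ish configuration space beats
# a human rule of thumb. The cost surface and all numbers are invented.

def latency_ms(heap_mb, gc_threads):
    # Synthetic surface: too little heap thrashes the GC, too much heap
    # lengthens pauses, and GC thread count has its own sweet spot.
    return abs(heap_mb - 1792) / 100 + (gc_threads - 3) ** 2 + 20

def human_guess():
    # "Turn the dials all the way to the right."
    return 4096, 8

def machine_search(trials=500, seed=7):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        candidate = (rng.randrange(256, 8192, 64), rng.randrange(1, 17))
        if best is None or latency_ms(*candidate) < latency_ms(*best):
            best = candidate
    return best
```

With a few hundred trials the search lands near the true sweet spot (about 1792 MB of heap and 3 GC threads here), a configuration the "bigger is safer" instinct would never pick.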
And so, we want to empower people to be able to sit in the middle of all of that chaos, and for those trade-offs and those difficult interactions to no longer be a thing. We're at a place now where we've done hundreds of deployments, and never once have we met a developer who said, "I'm really excited to get out of bed and come to work every day and manually tune my application." That's one side. Secondly, we've never met a manager or someone with budget who said, please don't increase the value of the investment that I've made to lift and shift us over to the cloud, or to Kubernetes, or some combination of both. And so what we're seeing is the converging of these groups; their happy place is not needing to make those trade-offs, and that's been exciting for us. >> So, I'm listening, and it looks like your solution sits right in the middle of application performance management, observability. >> Yeah. >> And, monitoring. >> Yeah. >> So it's a little bit of all of this. >> Yeah, so we want to be the Intel Inside of all of that. We often get lumped into one of those categories, it used to be APM a lot, we sometimes get, "are you observability?", and we're really not any of those things in and of themselves. Instead, we've invested in deep integrations and partnerships with a lot of that tooling, 'cause in a lot of ways the tool chain is hardening in the cloud native and Kubernetes world. And so, integrating in intelligently, staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for our users, who have likely already invested in APM or observability. >> So, to go a little bit deeper, what does integration mean? I mean, do you provide data to these other applications in the environment, or are they supporting you in the work that you do? >> Yeah, we're a data consumer for the most part. In fact, one of our big taglines is take your observability and turn it into actionability, right?
Like, how do you take that? It's one thing to collect all of the data, but then how do you know what to do with it, right? So to Matt's point, we integrate with folks like Datadog, we integrate with Prometheus today. So we want to collect that telemetry data and then do something useful with it for you. >> But also, we want Datadog customers, for example, we have a very close partnership with Datadog, so that in your existing Datadog dashboard, now you have-- >> Yeah. >> The StormForge capability showing up in the same location. >> Yep. >> And so you don't have to switch out. >> So I was just going to ask, is it a push or a pull? What is the developer experience? When you say you provide developers these ML learnings about performance, how do they receive them? Like, what's the developer experience? >> They can receive it, for a while we were CLI only, like any good developer tool. >> Right. >> And, we have our own UI. And so it is a push in a lot of cases, where I can come to one spot, I've got my applications, and every time I'm going to release, or plan for a release, or I have released and I want to pull in observability data from a production standpoint, I can visualize all of that within the StormForge UI and platform and make decisions. We allow you to set the comfort level of automation that you're okay with. You can be completely set-and-forget, or you can be somewhere along that spectrum, and you can say, as long as it's within these thresholds, go ahead and release the application, or go ahead and apply the configuration. But we also allow you to experience a lot of the same functionality right now in Grafana, in Datadog, and a bunch of others that are coming. >> So I've talked to Tim Crawford, who talks to a lot of CIOs, and he's saying one of the biggest challenges, if not the biggest challenge, CIOs are facing is resource constraints. >> Yeah. >> They cannot find the developers to begin with to get this feedback.
How are you hoping to address this biggest pain point for CIOs-- >> Yeah. >> And developers? >> You should take that one. >> Yeah, absolutely. So, like my background, like I said, at UnitedHealth Group, right, it's not always just about cost savings. In fact, the way that I look at some of these tech challenges, especially when we talk about scalability, there are kind of three pillars that I consider, right? There's the tech scalability, how am I solving those challenges? There's the financial piece, 'cause you can only throw money at a problem for so long, and it's the same thing with the human piece. I can only find so many bodies, and right now that pool is very small. And so, we are absolutely squarely in that footprint of enabling your team to focus on the things that matter, not manual tuning, like Matt said. And then there are other resource constraints that I think a lot of folks don't talk about, too. Like, you were talking about private cloud, for instance, and so having a physical data center. I've worked with physical data centers that companies I've worked for have owned where it is literally full, wall to wall. You can't rack any more servers in it, and so their biggest option is, well, I could spend $1.2 billion to build a new one if I wanted to. Or, if you had a capability to truly optimize your compute to what you needed and free up 30% of the capacity of that data center, so you can deploy additional namespaces into your cluster, like, that's a huge opportunity. >> So I have another question. I mean, maybe it doesn't sound very intelligent at this point, but, is it an ongoing process, or is it something that you do at the very beginning, I mean, when you start deploying this? >> Yeah. >> And maybe as a service. >> Yep. >> Once a year I say, okay, let's do it again and see if something changed. >> Sure. >> So one spot, one single... >> Yeah, would you recommend somebody performance test just once a year?
Like, so that's my thing: at previous roles, my role was to do performance tests every single release, and that was at a minimum once a week, and if your thing did not get faster, you had to have an executive exception to get it into production. And that's the space that we want to live in as well, as part of your CI/CD process. Like, this should be continuous verification: every time you deploy, we want to make sure that we're recommending the perfect configuration for your application in the namespace that you're deploying into. >> And I would be as bold as to say that we believe we can be a part of actually adding a step in the CI/CD process that's connected to optimization, and that no application should be released, monitored, and analyzed on an ongoing basis without optimization being a part of that. And again, not just from a cost perspective, but for cost and performance. >> Almost a couple of hundred vendors on this floor. You mentioned some of the big ones, Datadog, et cetera, but what happens when one of the up-and-comers comes out of nowhere, a completely new data structure, some imaginative way to collect telemetry data? >> Yeah. >> How do, how do you react to that? >> Yeah, to us it's zeros and ones. >> Yeah. >> And, we really are data agnostic from that standpoint. We're fortunate enough, from the design of our algorithm standpoint, that it doesn't get caught up on data structure issues, as long as you can capture the data and make it available through one of a series of inputs: one would be load or performance tests, it could be telemetry, it could be observability, if we have access to it. Honestly, the messier the better from time to time, from a machine learning standpoint; it's pretty powerful to see. We've never had a deployment where we saved less than 30%, while also improving performance by at least 10%. But the typical results for us are 40 to 60% savings and 30 to 40% improvement in performance.
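The release gate Patrick describes, where every build is performance-tested and a regression blocks promotion without a sign-off, might look roughly like this. The function name and the 5% tolerance are illustrative, not StormForge's actual API:

```python
# Sketch of a CI/CD performance gate: a regression blocks promotion
# unless an executive exception is granted. Names and thresholds invented.

def gate(baseline_p95_ms, candidate_p95_ms, tolerance=0.05, exception=False):
    """Allow the release only if the candidate is no more than 5% slower."""
    regressed = candidate_p95_ms > baseline_p95_ms * (1 + tolerance)
    if regressed and not exception:
        return "blocked: regression, executive exception required"
    return "promote"
```

Wired into a pipeline, this runs on every deploy rather than once a year, which is the "continuous verification" point.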
>> And what happens if the application, I mean, yes, Kubernetes is the best thing in the world, but sometimes we have external data sources, or we have to connect with external services anyway. >> Yeah. >> So, can you provide an indication also on this particular application, like, where the problem could be? >> Yeah, and that's absolutely one of the things that we look at too, 'cause especially when you talk about resource consumption, it's never a flat line, right? Depending on your application, depending on the workloads that you're running, it varies sometimes minute to minute, day to day, or it could be week to week, even. And so, especially with some of the products that we have coming out, what we want to do is integrate heavily with the HPA and be able to handle some of those bumps, not necessarily bumps, but bursts, and be able to do it in a way that's intelligent, so that we can make sure that, like I said, it's the perfect configuration for the application, regardless of the time of day that you're operating in, or what your traffic patterns look like, or what your disk looks like, right? Like, 'cause with our lower-environment testing, any metric you throw at us, we can optimize for. >> So, Matt and Patrick, thank you for stopping by. >> Yeah. >> Yes. >> We could go all day, because day two is, I think, the biggest challenge right now, not just in Kubernetes, but application re-platforming and transformation; very, very difficult. Most CTOs and EAs that I talk to, this is the challenge space. From Valencia, Spain, I'm Keith Townsend, along with my host Enrico Signoretti, and you're watching "theCUBE," the leader in high-tech coverage. (whimsical music)

Published Date : May 19 2022



Varun Talwar, Tetrate | Kubecon + Cloudnativecon Europe 2022


 

(upbeat music) >> Narrator: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, in KubeCon, CloudNativeCon Europe 2022. It's near the end of the day, that's okay. We have plenty of energy because we're bringing it. I'm Keith Townsend, along with my cohost, Paul Gillin. Paul, this has been an amazing day. Thus far we've talked to some incredible folks. You got a chance to walk the show floor. >> Yeah. >> So I'm really excited to hear, what's the vibe of the show floor? 7,500 people in Europe, following the protocols, but getting stuff done. >> Well, first I have to say that I haven't traveled for two years, so getting out to a show by itself is an amazing experience. But a show like this, with all the energy and the crowd, too, enormously crowded at lunchtime today, it's hard to believe how many people have made it all the way here. Out on the floor the booths are crowded, and the demonstrations are what you would expect at a show like this. Lots of code, lots of block diagrams, lots of architecture. I think the audience is eating it up. They're on their laptops, they're coding on their laptops. And this is very much symbolic of the crowd that comes to a KubeCon. And it's just a delight to see them out here having so much fun. >> So speaking of lots of code, we have Varun Talwar, co-founder of Tetrate. But, I just saw, I didn't realize this: Istio becoming part of the CNCF. What's the latest on Istio? >> Yeah, Istio was always one of those service mesh projects which was very widely adopted. And it's great to see it going into the Cloud Native Computing Foundation. I think what happened with Kubernetes, it just became the de facto container orchestrator, and I think a similar thing is happening with Istio and service mesh. >> So. >> I'm sorry, go ahead, Keith. What's the process like of becoming adopted by and incubated by the CNCF?
>> Yeah, I mean, it's pretty simple. It's an application process into the foundation where you say what the project is about, how diverse your contributor base is, and how many people are using it. It goes through a review with the TOC of all the users and contributors, and if they see a good base of deployments in production and a diverse community of contributors, then you can basically be part of the CNCF. And as you know, the CNCF is very flexible on governance; basically it's bring your own governance. Then projects can seamlessly go in, get into incubation, and gradually graduate. >> Another project near and dear to you: Envoy. >> Yes. >> Now, I've always considered Envoy just as what it is. I've always used it as a load balancer type thing, so I've always considered it some kind of gateway or proxy. But Envoy Gateway was announced last week. >> Yes. So Envoy basically won the data plane war in cloud native workloads, right? And this was over the last five years. Envoy was announced even way before Istio, and it is used in various deployment models. You can use it as a front load balancer, you can use it as an ingress in Kubernetes, you can use it as a sidecar in a service mesh like Istio. It's lightweight, dynamically programmable, and very open, with the right community. But what we saw when we looked at the Envoy base was that it still wasn't very approachable for application developers. The nouns that it uses, in terms of clusters and so on, are not what an application developer is used to. So Envoy Gateway is really an effort to make Envoy even stronger out of the box, for an application developer to use it as an API gateway, right?
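As a rough illustration of the developer-facing nouns Varun is describing, Envoy Gateway builds on the Kubernetes Gateway API. A minimal sketch of exposing a service through it might look like the following; the `orders` service, hostname, and Gateway name are all hypothetical:

```yaml
# Illustrative sketch: routing external API traffic to an in-cluster
# service via the Kubernetes Gateway API, which Envoy Gateway implements.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: orders-route
spec:
  parentRefs:
  - name: eg              # a Gateway object managed by the Envoy Gateway controller
  hostnames:
  - "api.example.com"     # hypothetical public hostname
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /orders
    backendRefs:
    - name: orders        # hypothetical backend Service
      port: 8080
```

Note how the vocabulary here is routes, hostnames, and backend services, not Envoy's internal clusters and listeners, which is exactly the approachability gap being addressed.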
Because if you think about it, ultimately developers start deploying workloads onto their Kubernetes clusters, and they need some functionality like an API gateway to expose their services, and you want to make that really, really easy and simple, right? I often say: what NGINX was to static websites, Envoy Gateway will be to APIs. And it's really the community coming together. We are a big part, but also VMware, as well as end users, in this case Fidelity, who is investing heavily into Envoy and API gateway use cases, joining forces and saying, let's do this in upstream Envoy. >> I'd like to go back to Istio, because this is a major step in Istio's development. Where do you see Istio coming into the picture? Kubernetes is already broadly accepted; is Istio generally adopted as an after step to Kubernetes, or are they increasingly being adopted together? >> Yeah. So usually it's adopted as a follow-on step, and the reason is primarily the learning curve, right? It takes a while for people to get used to Kubernetes, understand the concepts, and get applications going. Istio was made to solve three big problems there, right? Which are observability, traffic management, and security. So as people deploy more services they figure out: okay, how do I connect them? How do I secure all the connections? And how do I do more fine-grained routing? I'm doing more frequent deployments with Kubernetes, but I would like to do canary releases to make safer rollouts, right? Those are the problems that Istio solves. And it's good to know all the node-level and CPU-level metrics, but really what I want to know is: how are my services performing? Where is the latency, right? Where is the error rate? Those are the things that Istio gives you out of the box.
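The canary-release pattern Varun mentions is typically expressed in Istio as weighted routing. A minimal sketch, assuming a `reviews` service whose `v1` and `v2` subsets are defined in a companion DestinationRule (the service and subset names are illustrative, not from the interview):

```yaml
# Illustrative Istio VirtualService: send 90% of traffic to v1
# and 10% to the v2 canary of a hypothetical "reviews" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Gradually shifting the weights toward v2, while watching the per-service latency and error-rate metrics described above, is the safer-rollout workflow he alludes to.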
So that's a very natural next step for people using Kubernetes. And Tetrate was really formed as a company to enable enterprises to adopt Istio, Envoy, and service mesh in their environments, right? So we do everything from running an academy with courses and certifications on Envoy and Istio, to a distribution that is compliant with various rules and tooling, as well as a whole platform on top of Istio to make it usable in deployment in a large enterprise. >> So paint the end-to-end picture for me for Istio and Envoy. I know they can be used in similar fashions, like as sidecars, but how do they work together to deliver value? >> Yeah. So if you step back from the technology a little bit, right, and look at what customers are doing and facing: they have some new workloads going into Kubernetes and cloud native, they have a lot of legacy workloads, a lot of workloads in VMs, and with different teams in different clouds, or due to acquisitions, they're very heterogeneous, right? Now Tetrate's mission is to power the world's application traffic, but really the business value that we are going after is consistency of application operations, right? And I'll tell you how powerful that is. Because the more places you can deploy Envoy into, the more places you can deploy Istio into, the more consistency you can get for the value pillars of observability, traffic management, and security, right? And really, if you think about the journey for an enterprise to migrate from VM workloads into Kubernetes, or from data centers into the cloud, the challenges are around security and connectivity, right? Because with the Kubernetes fabric, the same Kubernetes app in a data center can be deployed exactly as it is in the cloud, right? >> Keith: Right. >> So why is it hard to migrate to the cloud, right? The challenges come in the security and networking layer, right?
>> So let's talk about that with some granularity, and maybe you can give me some concrete examples. >> Right. >> Because as I think about hybrid infrastructure, where I have VMs on-premises and cloud native stuff running in the public cloud, or even cloud native next to VMs. >> Varun: Right. >> I do security differently when I'm in the VM world. I say, you know what? This IP address can't talk to this Oracle database server. >> Right. >> Keith: That's not how cloud native works. >> Right. >> I can't say that if I have a cloud native app talking to an Oracle database; there's no IP address. >> Yeah. >> Keith: So how do I secure the communication between the two? >> Exactly. So I think you hit the nail on the head. With things like Kubernetes, an IP is no longer really a valid noun, because things will auto-scale, either from Kubernetes or from the cloud autoscalers. So really the noun now is the service. I could have many instances of it, and they will scale up and down. But what I'm saying is: this service, you know, some app server, some application, can talk to the Oracle service. >> Keith: Hmm. >> And what we have done with Tetrate Service Bridge, which is why we call our platform a service bridge, because it's all about bridging all the services, is that whatever you're running on a VM can be onboarded onto the mesh as if it were a Kubernetes service, right? And then my policy that this service can talk to this service is the same in Kubernetes, the same for Kubernetes talking to a VM, and the same for VM to VM, in terms of access control. In terms of encryption, because the Envoy proxy goes everywhere and the traffic is going through it, we actually take care of distributing certs and encrypting everything. And that is what leads to consistent application operations, and that's where the value is.
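The service-to-service policy Varun describes, "this service can talk to the Oracle service" rather than "this IP can reach that IP", maps naturally onto an Istio AuthorizationPolicy. A minimal sketch, with hypothetical namespace, service account, and label names:

```yaml
# Illustrative Istio AuthorizationPolicy: only workloads running as the
# "app-server" service account in the "app" namespace may reach the
# workloads labeled app=oracle, regardless of pod IPs or scaling.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-app-to-oracle
  namespace: db
spec:
  selector:
    matchLabels:
      app: oracle
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/app/sa/app-server"]
```

The principal here is the workload's mutual-TLS identity, issued from the certificates the mesh distributes automatically, which is the encryption piece described above.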
>> We're seeing a lot of activity around observability right now, a lot of different tools, both open source and proprietary. Istio is certainly part of the OpenTelemetry project, and I believe you're part of that project? >> Yes. >> But customers are still piecing together a lot of tools on their own. >> Right. >> Do you see a more coherent framework forming around observability? >> I think very much so. And there are layers of observability, right? If we tell you there is latency between these two services at the L7 layer, the first question is: is it the service? Is it the Envoy? Or is it the network? It sounds like a very simple question; it's actually not that easy to answer. And that is one of the questions we answer in platforms like ours, right? But even that is not the end. If it's none of those three, it could be the node, or it could be the hardware underneath, right? And you realize those are different observability tools that work at each layer. So I think there's a lot of work to be done to enable end users to go from top to bottom, to reduce what is called MTTR, the mean time to resolution of an issue: where is the problem? But I think with the tools being built now, it is becoming easier. One of the things we have to realize is that with things like Kubernetes, we made the development of microservices easier, right? And that's great, but as a result more things are getting broken down, so there is more network in between, which makes it harder to troubleshoot, harder to secure everything, and harder to get visibility everywhere, right? So I often say: if you're embarking on a microservices journey, you'd better have a platform like this. Otherwise, you're taking on operational cost.
>> Wow, Jevons paradox: the more accessible we make something, the more it gets used, and the more complex it becomes. That's been a theme here at KubeCon + CloudNativeCon Europe 2022, from Valencia, Spain. I'm Keith Townsend, along with my cohost Paul Gillon. And you're watching theCUBE, the leader in high tech coverage. (upbeat music)

Published Date: May 19, 2022
