Greg Muscarella, SUSE | KubeCon + CloudNativeCon Europe 2022
>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside a new co-host, Enrico Signoretti, senior editor. I'm sorry, senior IT analyst at GigaOm. Enrico, welcome to the program. >>Thank you very much, and thank you for having me. It's exciting. >>So, high-level thoughts on KubeCon, first time in person again in a couple of years? >>Well, this is amazing for several reasons, and one of the reasons is that I had the chance to meet with people like you again. We met several times over the internet, over Zoom calls, and I started to hate these Zoom calls <laugh> because they're really impersonal in the end. Last night we were together, a group of friends and industry folks; it's just amazing. And apart from that, the event is really cool. There are a lot of in-person interviews, real people doing real stuff, not just impersonal calls where you don't even know if they're telling the truth. When you can look in their eyes and see what they're doing, I think that makes a difference. >>So speaking about real people, meeting people for the first time, new jobs, new roles: Greg Muscarella, general manager of enterprise container management at SUSE. Welcome to the show; welcome back, CUBE alum. >>Thank you very much. It's awesome to be here, and it's awesome to be back in person. I completely agree with you: there's a certain fidelity to the conversation and a certain ability to get to know people a lot more. So it's absolutely fantastic to be here. >>So Greg, tell us about your new role and what SUSE has going on at KubeCon. >>Sure. I joined SUSE about three months ago to lead the Rancher business unit, our container management pieces, and it's a fantastic time. Because if you look at the transition from virtual machines to containers, and to moving to microservices, right alongside the transition from on-prem to cloud, this is a very exciting time to be in this industry, and Rancher has been setting the stage. And again, I'll go back to being here: Rancher is all about the community. This is a very open, independent, community-driven product and project, so this is kind of like being back with our people and being able to reconnect here. Doing it digital is great, but being here changes the game for us. We feed off that community, we feed off the energy. And again, going back to the space and what's happening in it, it's a great time to be in this space. You've seen the transitions; we've seen just massive adoption of containers and Kubernetes overall, and Rancher has been right there with some amazing companies doing really interesting things that I'd never thought of before. I'm still learning on this, but it's been great so far. >>Yeah. And when we talk about strategy around Kubernetes today, we are talking about very broad strategies. Not just the data center or the cloud, with maybe smaller organizations adopting Kubernetes in the cloud, but actually large organizations thinking about all of it, and more and more, the edge.
So what's your opinion on this expansion of Kubernetes toward the edge? >>I think you're exactly right, and that's actually what a lot of the meetings I've been having here are about: some of these interesting use cases. There are the ones that are easy to understand in the telco space, especially with the adoption of 5G. You have all these base stations and new towers, and they have not only the core radio functions or network functions they're trying to run there, but other applications that want to run in that same environment. I also spoke recently with some of our good friends at a major automotive manufacturer doing things in their factories that can't take the latency of being somewhere else. They have robots on the factory floor, and the latency they would experience if they tried to run things in the cloud means that robot would have moved 10 centimeters by the time the signal got back. That may not seem like a lot to you, but if you're an employee there, a 2,000-pound robot being 10 centimeters closer to you may not be what you really want. There's also a tremendous amount of activity happening on the retail side. It's amazing how people are deploying containers in retail outlets, whether it's fast food predicting how many french fries you need to have going at this time of day with this sort of weather, so you can make sure those queues keep moving. It's really exciting and interesting to look at all the different applications that are happening. So yes, at the edge for sure, in the public cloud for sure, and in the data center, and what we're finding is that people want a common platform across all of those: for the management piece, but also for security and for the policies around these things. It really is going everywhere. >>So talk to me about how we manage that. As we think about pushing workloads out of the data center and out of the cloud, closer to the edge, security and lifecycle management become top-of-mind challenges. How are Rancher and SUSE addressing that? >>Yeah, I think you're again spot on. It starts with what you'd think of as simple, but it's not simple: the provisioning piece, how we just get it installed and running, and then, to what you just asked, the management piece of it, everything from your firmware to your operating system, to the Kubernetes cluster running on that, and then the workloads on top of that. With Rancher, and with the rest of SUSE, we're actually tackling all of those parts of the problem, from bare metal on up. We have lots of ways of deploying that operating system; we have operating systems that are optimized for the edge, very secure, with ephemeral container images you can build on top of. And then we have Rancher itself, which is not only managing your Kubernetes cluster but can actually start to manage the operating system components as well as the workload components, all from your single interface. We mentioned policy and security; we'll probably talk about it more in a little bit, but NeuVector, right? We acquired a company called NeuVector and just open sourced it here in January. That ability to run that level of security software everywhere, again, is really important. Whether I'm running it on whatever my favorite public cloud provider's managed Kubernetes is, or out at the edge, you still have to have security in there, and you want some consistency across that. If you have to have a different platform for each of your environments, that's just upping the complexity and the opportunity for error. So we really want to eliminate that and simplify our operators' and developers' lives as much as possible. >>From this point of view, are you implying that you are now matching these, let's say, self-managed clusters at the very edge with added security? Because these are the two big problems lately: having something that is autonomous and somehow easier to manage, especially if you are deploying hundreds of these micro clusters, and on the other hand you need policy-based security that is strong enough to be sure that, again, if you have these huge robots moving too close to you because somebody hacked the cluster that is managing them, that could be a huge problem. So are you approaching these kinds of problems? Is the technology you acquired ready to do this? >>It really is. There's still a lot of innovation happening, don't get me wrong. We're going to see a lot more, not just from SUSE and Rancher but from the community; there's a lot happening there. But we've come a long way and we've solved a lot of problems. If I think about how you get this distributed environment, some of it comes down to not just all the different environments but also the applications: with microservices you now have a very dynamic environment just in your application space. So when we think about security, we really have to evolve from a fairly static policy, where you might be able to set an IP address and a port and some configuration on that, because your workloads are now dynamically moving. So not only do you have to have the security capability, the ability to look at a process or a network connection and stop it, you also have to have manageability. You can't expect an operator to go in and manually configure a YAML file, because things are changing too fast. It needs to be that combination of convenient and easy to manage with full function and the ability to protect your resources. And I think that's really one of the key things NeuVector brings: because we have so much intelligence about what's going on, the configuration is pretty high level and then it just runs. It's built for this dynamic environment; it can actually protect your workloads wherever they're going, from pod to pod. It's that combination of manageability with high functionality that is making it so popular, and that brings security to those edge locations, cloud locations, or your data center. >>So one of the challenges you're touching on is this abstraction upon abstraction. When I ran my data center, I could say this IP address can't talk to that IP address on this port. Then I got next-generation firewalls where I could actually do some analysis. Where are you seeing the ball moving as customers think about all these layers of abstraction? An IP address doesn't mean anything anymore in cloud native; yes, I need one, but I'm not protecting based on IP address. How are customers approaching security from the namespace perspective? >>Well, you're absolutely right. In fact, once you go to IPv6, I don't even recognize IP addresses anymore. <laugh> >>They don't mean anything; they're just a bunch of numbers and letters >>and colons. Right, I don't even know anymore. So it comes back to that move away from something static; it's the pets-versus-cattle thing. You go from this static thing that I can know and love and touch and protect to an almost living, breathing thing that is moving all around, a swarm of pods moving all over the place. That's what Kubernetes has done for the workload side of it: getting away from the pet to a declarative approach to identifying your workload, the components of that workload, and what it should be doing. And if we go further on the security side, even namespace isn't good enough. If we want to get to zero trust, just because you're running in my namespace doesn't mean I trust you. That's one of the really cool things about NeuVector: because we're looking at protocol-level traffic within the network, it's pod to pod. We can look at every single connection, and at the protocol layer. So if you say you're my SQL database and I have a MySQL request going into it, I can confirm that it's actually the MySQL protocol being spoken and that it's well formed. And I know that this endpoint, which is a container image or a pod name or a label, even if it's in the same namespace, is allowed to talk to and use this protocol with that other pod running in my same namespace. So I can either allow or deny, and if I allow it, I can look into the content of that request and make sure it's well formed. I'll give you an example: do you remember the Log4j challenges from not too long ago? It was a huge deal. If I'm doing something that's IP- and port-based and namespace-based, what are my protections? What are my options for something that has Log4j embedded in it? I either run the risk of it running, or I shut it down, and neither of those is very good. Because we're at the protocol layer, we can identify that traffic, look at whether it's well formed or malicious, block it if it's malicious, and let it go through if it's well formed. So I can actually handle those vulnerabilities without taking my service down; I can keep running and still be protected. That extra level, the ability to peek into things and to go pod to pod, not just at the namespace level, is one of the key differences.
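To make the contrast concrete, below is a minimal sketch of the stock Kubernetes primitive being compared against: a NetworkPolicy that allows traffic purely on labels, namespace and port. It can say who may reach the database on 3306, but it cannot check that the bytes on that connection are actually well-formed MySQL; that protocol-layer inspection is what Muscarella describes NeuVector adding on top. The namespace, labels and port here are illustrative, not taken from any product documentation.

```yaml
# Stock Kubernetes NetworkPolicy: pods labeled app=api in the same namespace
# may reach the MySQL pods on TCP 3306. It matches labels and ports only;
# it cannot verify that the traffic is well-formed MySQL, which is the
# layer-7 check described above. Names and labels are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-mysql
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 3306
```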
So that's how we're evolving with the security. We've grown a lot, and we've got a lot more coming. >>So let's talk about that "a lot more coming." What's in the pipeline for SUSE? >>Well, probably before I get to that: we just announced NeuVector 5, so maybe I can catch us up on what was released last week, and then we can talk a little bit about going forward. NeuVector 5 introduces several things, but one I can talk about in more detail is something called zero drift. I've been talking about network security, but we also have runtime security: any container running in your environment has processes running inside that container. What we can do, and it comes back to that manageability and configuration, is look at the root level of trust of any process that's running. As long as it has that inheritance, we can let the process run without any extra configuration. If it doesn't have that root level of trust, if it didn't spawn from whatever the init process was in that container, we're not going to let it run. So the configuration you have to put in is a lot simpler. That's in NeuVector 5. The web application firewall, that layer-7 security inspection, has also gotten a lot more granular, so it's pod-to-pod security, both for ingress, egress, and internal traffic on the cluster. >>So before we get to what's in the pipeline, one question around NeuVector: how is it consumed and deployed? >>How is NeuVector consumed and deployed? Yeah. Again, with NeuVector 5 and also Rancher 2.6.5, which were just released, there's actually some nice integration between them. If I'm a Rancher customer using 2.6.5, I can deploy NeuVector with a couple of clicks of a button in our marketplace, and we're tied into our role-based access control. So an administrator who has the rights can just click, they're now in the NeuVector interface, and they can start setting those policies and deploying those things out very easily. Of course, if you aren't using Rancher and you're using some other container management platform, NeuVector still works; you can deploy it there in a few clicks too, you'll just log into the NeuVector interface and use it from there. So that's how it's deployed; it's very simple to use. What's really exciting about it is that we've open sourced it, so it's available for anyone to download and try, and I would encourage people to give it a go. I think there are some compelling reasons to do that now: pod security policies are deprecated and going away pretty soon in Kubernetes, so there are a few things you might look at to make sure you're still able to run a secure environment within Kubernetes. It's a great time to look at what's coming next for your security on Kubernetes. >>Greg, we appreciate you stopping by. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. Thank you, and you're watching theCUBE, the leader in high tech coverage.
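As a side reference on the pod security policy deprecation mentioned above: in stock Kubernetes, the built-in successor is Pod Security admission, driven by namespace labels as in the hedged sketch below. The namespace name and chosen levels are illustrative, and this is separate from, and narrower than, the NeuVector policy model discussed in the interview.

```yaml
# Stock Kubernetes Pod Security admission: namespace labels select the level
# the built-in admission controller applies once PodSecurityPolicy is removed.
# Namespace name and levels are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: baseline
```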
Alan Flower, HCL Technologies & Ramón Nissen, Red Hat | KubeCon + CloudNativeCon EU 2022
>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >>Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with Paul Gillin, senior editor, enterprise architecture, SiliconANGLE. We are going to talk to some amazing folks, especially in today's segment. Paul, there are a lot of companies here; what's been the consistent theme you've heard so far at the show? >>Well, one thing that's different about this show, it seems to me, than others I've attended is that it's all around open source. We're not seeing a lot of companies bringing new proprietary technology to market. We are seeing them try to piece together open source components, perhaps with a proprietary element, to create some kind of common management interface or control plane. That's quite different from what I think we've seen in the past, and open source business models have historically been difficult to make work. These companies are all taking their own approaches to it. But the degree to which the people here have coalesced around the importance of open source as the building blocks of future applications is something I've not seen quite this way before. >>Well, with our current segment guests we're going to go deep into these challenges and how enterprises and their partners are addressing them. We have with us Alan Flower, head of cloud native at HCL Technologies; we'll get into how a system integrator is helping with this transition. And Ramón Nissen, senior product manager at Red Hat. Welcome to the show; you're now CUBE alums. Welcome. >>Thanks for having us. >>So we're going to get right to it. What are some of the trends you're seeing when it comes to application migration? You've done, I'm assuming at this point, thousands of them. What are some of the common trends? >>Well, it's a very good question, and clearly at HCL we've helped thousands of clients move tens of thousands of applications to what we would call a cloud native environment. The overwhelming trend we're seeing, of course, is that clients realize it's a particularly complex, sophisticated journey. It requires a certain set of skills and capabilities, and clients are increasingly asking us for anything we can do to simplify and accelerate the journey, because what's really important to clients on a transformation journey to cloud is that you want to see value very quickly. I don't want to wait three to five years to transform my application portfolio; if you can do something in three to five days, that would be perfect, thank you. >>Well, three to five days sounds more akin to when we were doing P2V or V2V migrations, and I'm sure HCL has done millions of those types of migrations at this point. What are some of the challenges, or the nuance, in moving from a traditional monolithic application to cloud native? >>Well, it's another good question. You'll notice a general trend in the industry: clients don't really want to lift and shift anymore. Lift and shift doesn't really bring any transformational value to my company. So clients are increasingly looking for what we could call cloud native modernization: I want my applications to really take advantage of the cloud native environment. They need to be elastic and kind of more robust than before. Now, in particular, a lot of clients have realized that the state of nirvana, where we modernize everything into cloud native, microservices-based applications, is a tremendous journey, and no client really has the time, patience or resources to fully refactor or rearchitect all of their applications. They're looking for more immediate impact. So a key trend we've seen is that clients still want to refactor and modernize applications, but they're focusing those resources on the applications that will bring the greatest impact to their business. What they now see as a better replacement for lift and shift is probably what we would call replatforming, where they want all the advantages of a cloud native environment but they haven't necessarily got the time to modernize the code base. They want to replatform to Kubernetes in particular, and they want us to take them there quickly. That's why, for example, this week at KubeCon HCL announced a new set of tools called KMP, based on Konveyor, an open source project supported by Red Hat. The key attraction of KMP is that it lets me replatform my applications to Kubernetes immediately: within two or three minutes I can bring an application from a legacy platform directly onto Kubernetes, and I can take it straight into production. That's the kind of acceleration clients are looking for today. >>Isn't that just a form of lift and shift, though? >>Well, no. Lift and shift, typically, was moving virtual machines from one place to another. The focus of Kubernetes, of course, is containerization of solutions, and it's not just about containerizing the solution and moving it; it's the DevOps toolchain around the solution as well. And when I take that application into production in a Kubernetes-based environment, I'm expecting to operate it in a different way too. That's where we see tremendous focus on what we would call cloud native operations: clients expecting to use practices like site reliability engineering to run these replatformed applications in a different way. >>It sounds like you're saying, I mean, replatforming has been a spectrum of options; I think Gartner has seven different types of replatforming. Are you seeing clients take a more mature attitude now? Are they looking more carefully at the characteristics of their legacy applications and trying to make more nuanced choices about what to replatform and what to just leave alone? >>I think clients, and I'm sure Ramón has comments on this too, have a lot more insight now into what works for them. They've realized that the promise of a microservices-based application estate is a good one, but I can't do that for every application. If I'm a large enterprise with several thousand applications in my portfolio, I can't refactor everything to become microservices-based. So clients see replatforming as possibly a middle ground: I get a lot of the advantages of a cloud native environment, and my applications are inherently more efficient, hopefully a lot more performant. >>Yeah, it's a matter of software delivery performance. Legacy workloads will definitely benefit from being brought into Kubernetes in the software delivery performance department.
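To ground what the replatform-without-refactoring path produces, here is a minimal sketch of the kind of Kubernetes landing zone a containerized legacy service typically gets: declarative replicas, health probes and rolling updates, with the application code itself untouched. The names, image and health endpoint are hypothetical; this is the general shape of the target, not output of KMP or Konveyor.

```yaml
# Illustrative replatforming target: the legacy service is packaged as a
# container image and described declaratively. The code base is unchanged;
# Kubernetes supplies replicas, health checking and rolling updates.
# Name, image and health endpoint are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-billing
  labels:
    app: legacy-billing
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-billing
  template:
    metadata:
      labels:
        app: legacy-billing
    spec:
      containers:
        - name: billing
          image: registry.example.com/billing:1.0.0   # existing app, containerized as-is
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
```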
So it's a matter of revamping your legacy applications and getting the benefits in application lifecycle management, fault tolerance and all that; it's about leveraging what Kubernetes offers. >>When you say bringing legacy applications into Kubernetes, it's not that simple, right? What's involved in doing that? >>It isn't. It's a matter of taking a holistic view of your application portfolio, understanding the nuances of each application type within your organization, and coming up with a suitable migration strategy for each of those application types. And for that, what we're trying to do is provide a series of standardized tools and methodologies from a community perspective. We created this Konveyor community; it was kickstarted by Red Hat and IBM, but we are trying to bring in as many vendors and global system integrators as possible to set up these standards and make this road toward Kubernetes as easy as possible. >>So we've done a little bit of app modernization in the CTO Advisor hybrid infrastructure, and one of the things we've found is that there are plenty of advantages. If I take a monolithic application that I've traditionally had to scale up to gain performance, I can take selective parts of it and now add autoscaling. However, as I look at a landscape, Alan, of thousands of applications, I need to dedicate developer resources to get that done, and my developers are busy building new applications and new capabilities. I just don't have the resources. How do HCL and Red Hat team together to fast-track that capability? >>I'll comment on two things in particular. The first is skilling. The thing that has really surprised us at HCL is that so many of our clients around the world have said: we are desperately short of skills, and we cannot hire ourselves out of this problem; we need to get our existing developer community reskilled on platforms like OpenShift, Konveyor and other projects too. So the first thing that's happened to us at HCL is that we've been incredibly busy with what we would call developer workforce modernization, where we help clients reskill their entire technical and developer community: build the cloud native understanding, and help them understand how to modernize applications. The second thing, and this comes back to a comment Ramón made about Konveyor, is that it's been really encouraging to see the open source community invest in building the supporting frameworks around that modernization journey. If I'm a developer who's reskilling and attempting to modernize an application, being able to dip into an open source project, and a good example would be Tackle, part of the Konveyor project, means you now have open source tools that will help you analyze your applications: they go into the source code and give the developer guidance on what would be effective treatments to undertake. So a development team that's new to this modernization journey would benefit from a project like Konveyor, exactly because I need to know where I can safely modernize my application. For experienced organizations like HCL that comes naturally to us, but for people who are just starting the journey, if I can take an open source tool like Tackle, or the rest of Konveyor, and use it to accelerate my journey, it takes a lot of pressure off my organization and it accelerates the journey too. >>And it's not just a matter of tooling. We are also open sourcing the modernization methodology that we've been using in Red Hat Consulting for years. So this whole Konveyor community is about knowledge sharing on one hand, and on the other, building a set of tools together based on the knowledge we are sharing, to make it as easy as possible. >>And what role does Red Hat play in all that? You've carved out this position for yourself as the true open source company. Does that position you for a leadership role in helping companies make this transition? >>I wouldn't say we should be leading the whole thing. We kickstarted it, but we want to get other vendors on board. One cool thing about the Konveyor community is that IBM is open sourcing a lot of their IP. IBM Research is on board, so we have some really interesting work on AI being applied to application analysis, we have some machine learning in place, very cool stuff that has been sitting in a corner at IBM Research for quite some years and is now being open sourced and integrated into a unified user experience to streamline the modernization process as much as possible. >>So let's talk about the elephant in the room. HCL was leading the conversation around Cloud Foundry circa five-plus years ago. As customers think about their journey to cloud native, how should they think about that Cloud Foundry to cloud native, or Kubernetes, replatforming? >>Within the Cloud Foundry community we've been quite staunch supporters of Kubernetes for quite some time; it's a stated intent of the Cloud Foundry Foundation to move across to a Kubernetes platform. Now, that is a significant engineering journey for Cloud Foundry to take, and we're in a position where a lot of large users of Cloud Foundry have a certain urgency to their journey: they want to consolidate on a single Kubernetes-based infrastructure. We see a lot of traction around OpenShift, for example, from Red Hat, in terms of its market leadership, so a lot of clients are saying they would like to consolidate all of their platforms around a single Kubernetes vendor, whether that's Red Hat or anyone else, quite frankly. What HCL is doing right now, with the tools and solutions we've announced this week, is simply accelerating that journey. If I've got a large installed base of applications running in my Cloud Foundry environment, and I've also started to invest in and standardize on Kubernetes-based platforms like OpenShift, most clients would see it as quite a sensible choice to consolidate those two environments into one. And that's simply what we're doing at HCL: we're making it very, very easy. In fact, we've fully automated the journey, so I can move all of my applications from Cloud Foundry into, for example, OpenShift pretty much immediately, and it just simplifies the entire journey. >>So as we start to wrap up the segment, I like to hear customer stories. How are customers either surprised or challenged when they get into this, even with the help of an HCL and a Red Hat? What are they finding to be the most difficult parts of their migrations? >>My simple comment would be complexity, and the associated requirement for skilled people to undertake this modernization work. We spoke about this: clients now are a lot more realistic. They understand that their ambition needs to be somewhat tempered by their ability to drive modernization quickly. So when clients look at their very large global portfolios of applications, they try to invest their resources in the higher-priority applications, the revenue-generating applications in particular, but they have to bring everything else with them as well. A common separation point we see is clients saying: I'm going to properly modernize and refactor maybe five to 10% of my portfolio, but the other 90% also needs to come on the journey. That's really where replatforming kicks in. So the key trend, again, is clients saying to us: I've got to take the entire journey, I've got the resources and the skills to really focus on this much of my application base, so can someone simplify the overall journey so I can afford to bring everything along on a cloud native journey? >>So the key to success here is having a holistic view of the application portfolio, segmenting it into different application types, ordering the priorities of those application types, and coming up with suitable migration strategies for each of them. >>Is it really necessary to move everything, though? >>Not necessarily, no. Not everything. But as we were saying before, it definitely makes sense to move legacy applications toward Kubernetes to leverage the software delivery benefits. >>That's a big project, right, if you're going to restructure the application around APIs and microservices? >>It is. The way I've seen organizations succeed the most on this road toward cloud native and Kubernetes in general is by addressing the whole portfolio: maybe not moving everything, but having that holistic view and not leaving anything behind. Because if you run isolated initiatives, bringing this or that application in isolation, you will miss part of the picture and you might be doomed to fail. >>Yeah, it's been my experience that if you don't have a plan to migrate your applications to a cloud native operating model, then you're doomed to follow lift-and-shift examples into the public cloud, or whichever cloud you're going to, if you don't make that operational transition. Last question, on the operational transition: we've talked a lot about the replatforming process itself. What about day two, once I've landed in the cloud? What are some of the top considerations for compliance and observability, just making sure my apps stay up, and for transitioning my workforce to that model? >>I think the overarching theme I see is clients now asking for what I would call cloud native operations. In particular, there's a very solid theme around what we would call reliability engineering: site reliability engineering, SRE, and platform reliability engineering. These are the dominant topics clients now want to engage HCL on, because the point you make is a valid one: I've modernized my application, and now I need to modernize the way I operate the application in production, otherwise I won't see those benefits. So that general theme of SRE is keeping us really busy; we're busy reskilling all of those operations teams around the world as well, because they need to know how to run these environments appropriately. >>And being able to measure your progress while you're transitioning is also important. That's one of the concerns we're addressing in the community with a tool called Pelorus, to effectively measure the software delivery performance of the organization after the transition has been done. >>And this is a really good point, by the way, because most people think it's a bit of a black art: how do I understand how I modernize my application, how do I understand how I've improved my value chain around software creation? Many people thought you needed to bring in very expensive consultants to advise you on these black arts. >>Definitely not. >>But with open source projects like Konveyor from Red Hat, the availability of these tools under an open source model means any engineer, any developer, can get them off the shelf and get that immediate benefit. >>Well, Alan Flower, head of Cloud Native Labs at HCL, and Ramón Nissen, senior product manager at Red Hat, thank you for joining theCUBE; you're now CUBE alums. You'll have a nice profile, like the profile pictures we have here. >>Awesome. Absolutely. Thank you. >>From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high tech coverage.
Matt Provo & Patrick Bergstrom, StormForge | KubeCon + CloudNativeCon Europe 2022
>>The cube presents, Coon and cloud native con Europe 22, brought to you by the cloud native computing foundation. >>Welcome to Melissa Spain. And we're at cuon cloud native con Europe, 2022. I'm Keith Townsend. And my co-host en Rico senior Etti en Rico's really proud of me. I've called him en Rico and said IK, every session, senior it analyst giga, O we're talking to fantastic builders at Cuban cloud native con about the projects and the efforts en Rico up to this point, it's been all about provisioning insecurity. What, what conversation have we been missing? >>Well, I mean, I, I think, I think that, uh, uh, we passed the point of having the conversation of deployment of provisioning. You know, everybody's very skilled, actually everything is done at day two. They are discovering that, well, there is a security problem. There is an observability problem. And in fact, we are meeting with a lot of people and there are a lot of conversation with people really needing to understand what is happening. I mean, in their classroom, what, why it is happening and all the, the questions that come with it. I mean, and, uh, the more I talk with, uh, people in the, in the show floor here, or even in the, you know, in the various sessions is about, you know, we are growing, the, our clusters are becoming bigger and bigger. Uh, applications are becoming, you know, bigger as well. So we need to know, understand better what is happening. It's not only, you know, about cost it's about everything at the >>End. So I think that's a great set up for our guests, max, Provo, founder, and CEO of storm for forge and Patrick Britton, Bergstrom, Brookstone. Yeah, I spelled it right. I didn't say it right. Berg storm CTO. We're at Q con cloud native con we're projects are discussed, built and storm forge. I I've heard the pitch before, so forgive me. And I'm, I'm, I'm, I'm, I'm, I'm kind of torn. I have service mesh. What do I need more like, what problem is storm for solving? >>You wanna take it? >>Sure, absolutely. So it it's interesting because, uh, my background is in the enterprise, right? I was an executive at United health group. Um, before that I worked at best buy. Um, and one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turned that all the way to the right and everything's hunky Dory. Right. Uh, but then we run into the issue like you and I were just talking about where it gets very, very expensive, very quickly. Uh, and so my first conversations with Matt and the storm forge group, and they were telling me about the product and, and what we're dealing with. I said, that is the problem statement that I have always struggled with. And I wish this existed 10 years ago when I was dealing with EC two costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically, what it is is we take your raw telemetry data and we essentially monitor the performance of your application. And then we can tell you using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for without over provisioning. So we reduce your consumption of CPU of memory and production, which ultimately nine times outta 10, actually I would say 10 out of 10 reduces your cost significantly without sacrificing reliability. 
>>So can your solution also help to optimize the application in the long run? Because yes, of course, yep. You know, the lowing fluid is, you know, optimize the deployment. Yeah. But actually the long term is optimizing the application. Yes. Which is the real problem. >>Yep. So we actually, um, we're fine with the, the former of what you just said, but we exist to do the latter. And so we're squarely and completely focused at the application layer. Um, we are, uh, as long as you can track or understand the metrics you care about for your application, uh, we can optimize against it. Um, we love that we don't know your application. We don't know what the SLA and SLO requirements are for your app. You do. And so in, in our world, it's about empowering the developer into the process, not automating them out of it. And I think sometimes AI and machine learning sort of gets a bad wrap from that standpoint. And so, uh, we've at this point, the company's been around, you know, since 2016, uh, kind of from the very early days of Kubernetes, we've always been, you know, squarely focused on Kubernetes using our core machine learning, uh, engine to optimize metrics at the application layer, uh, that people care about and, and need to need to go after. And the truth of the matter is today. And over time, you know, setting a cluster up on Kubernetes has largely been solved. Um, and yet the promise of, of Kubernetes around portability and flexibility, uh, downstream when you operationalize the complexity, smacks you in the face. And, uh, and that's where, where storm forge comes in. And so we're a vertical, you know, kind of vertically oriented solution. Um, that's, that's absolutely focused on solving that problem. >>Well, I don't want to play, actually. I want to play the, uh, devils advocate here and, you know, >>You wouldn't be a good analyst if you didn't. >>So the, the problem is when you talk with clients, users, they, there are many of them still working with Java with, you know, something that is really tough. Mm-hmm <affirmative>, I mean, we loved all of us loved Java. Yeah, absolutely. Maybe 20 years ago. Yeah. But not anymore, but still they have developers. They are porting applications, microservices. Yes. But not very optimized, etcetera. C cetera. So it's becoming tough. So how you can interact with these kind of yeah. Old hybrid or anyway, not well in generic applications. >>Yeah. We, we do that today. We actually, part of our platform is we offer performance testing in a lower environment and stage. And we like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application. So the perfect example is Java, you know, you have to worry about your heap size, your garbage collection tuning. Um, and one of the things that really struck, struck me very early on about the storm forage product is because it is true machine learning. You remove the human bias from that. So like a lot of what I did in the past, especially around SRE and, and performance tuning, we were only as good as our humans were because of what they knew. And so we were, we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But then when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of >>That. So both me and an Rico have been doing this for a long time. 
Like I have battled to my last breath, the, the argument when it's a bare metal or a VM. Yeah. Look, I cannot give you any more memory. Yeah. And the, the argument going all the way up to the CIO and the CIO basically saying, you know what, Keith you're cheap, my developer resources expensive, my bigger box. Yep. Uh, buying a bigger box in the cloud to your point is no longer a option because it's just expensive. Talk to me about the carrot or the stick as developers are realizing that they have to be more responsible. Where's the culture change coming from? So is it, that is that if it, is it the shift in responsibility? >>I think the center of the bullseye for us is within those sets of decisions, not in a static way, but in an ongoing way, especially, um, especially as the development of applications becomes more and more rapid. And the management of them, our, our charge and our belief wholeheartedly is that you shouldn't have to choose, you should not have to choose between costs or performance. You should not have to choose where your, you know, your applications live, uh, in a public private or, or hybrid cloud environment. And so we want to empower people to be able to sit in the middle of all of that chaos and for those trade-offs and those difficult interactions to no, no longer be a thing. You know, we're at, we're at a place now where we've done, you know, hundreds of deployments and never once have we met a developer who said, I'm really excited to get outta bed and come to work every day and manually tune my application. <laugh> One side, secondly, we've never met, uh, you know, uh, a manager or someone with budget that said, uh, please don't, you know, increase the value of my investment that I've made to lift and shift us over mm-hmm <affirmative>, you know, to the cloud or to Kubernetes or, or some combination of both. And so what we're seeing is the converging of these groups, um, at, you know, their happy place is the lack of needing to be able to, uh, make those trade offs. And that's been exciting for us. So, >>You know, I'm listening and looks like that your solution is right in the middle in application per performance management, observability. Yeah. And, uh, and monitoring. So it's a little bit of all of this. >>So we, we, we, we want to be, you know, the Intel inside of all of that, mm-hmm, <affirmative>, we don't, you know, we often get lumped into one of those categories. It used to be APM a lot. We sometimes get a, are you observability or, and we're really not any of those things in and of themselves, but we, instead of invested in deep integrations and partnerships with a lot of those, uh, with a lot of that tooling, cuz in a lot of ways, the, the tool chain is hardening, uh, in a cloud native and, and Kubernetes world. And so, you know, integrating in intelligently staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for, for our users who have already invested likely in a APM or observability. >>So to go a little bit deeper. Sure. What does it mean integration? I mean, do you provide data to this, you know, other applications in, in the environment or are they supporting you in the work that you >>Yeah, we're, we're a data consumer for the most part. Um, in fact, one of our big taglines is take your observability and turn it into actionability, right? Like how do you take the it's one thing to collect all of the data, but then how do you know what to do with it? Right. 
So to Matt's point, um, we integrate with folks like Datadog. Um, we integrate with Prometheus today. So we want to collect that telemetry data and then do something useful with it for you. >>But, but also we want Datadog customers. For example, we have a very close partnership with, with Datadog, so that in your existing data dog dashboard, now you have yeah. This, the storm for capability showing up in the same location. Yep. And so you don't have to switch out. >>So I was just gonna ask, is it a push pull? What is the developer experience? When you say you provide developer, this resolve ML, uh, learnings about performance mm-hmm <affirmative> how do they receive it? Like what, yeah, what's the, what's the, what's the developer experience >>They can receive it. So we have our own, we used to for a while we were CLI only like any good developer tool. Right. Uh, and you know, we have our own UI. And so it is a push in that, in, in a lot of cases where I can come to one spot, um, I've got my applications and every time I'm going to release or plan for a release or I have released, and I want to take, pull in, uh, observability data from a production standpoint, I can visualize all of that within the storm for UI and platform, make decisions. We allow you to, to set your, you know, kind of comfort level of automation that you're, you're okay with. You can be completely set and forget, or you can be somewhere along that spectrum. And you can say, as long as it's within, you know, these thresholds, go ahead and release the application or go ahead and apply the configuration. Um, but we also allow you to experience, uh, the same, a lot of the same functionality right now, you know, in Grafana in Datadog, uh, and a bunch of others that are coming. >>So I've talked to Tim Crawford who talks to a lot of CIOs and he's saying one of the biggest challenges, or if not, one of the biggest challenges CIOs are facing are resource constraints. Yeah. They cannot find the developers to begin with to get this feedback. How are you hoping to address this biggest pain point for CIOs? Yeah. >>Development? >>Just take that one. Yeah, absolutely. That's um, so like my background, like I said, at United health group, right. It's not always just about cost savings. In fact, um, the way that I look about at some of these tech challenges, especially when we talk about scalability, there's kind of three pillars that I consider, right? There's the tech scalability, how am I solving those challenges? There's the financial piece, cuz you can only throw money at a problem for so long. And it's the same thing with the human piece. I can only find so many bodies and right now that pool is very small. And so we are absolutely squarely in that footprint of, we enable your team to focus on the things that they matter, not manual tuning like Matt said. And then there are other resource constraints that I think that a lot of folks don't talk about too. >>Like we were, you were talking about private cloud for instance. And so having a physical data center, um, I've worked with physical data centers that companies I've worked for have owned where it is literally full wall to wall. You can't rack any more servers in it. And so their biggest option is, well, I could spend 1.2 billion to build a new one if I wanted to. Or if you had a capability to truly optimize your compute to what you needed and free up 30% of your capacity of that data center. So you can deploy additional name spaces into your cluster. 
Like that's a huge opportunity. >>So, out of curiosity, and maybe it doesn't sound like a very intelligent question at this point, but is it an ongoing process, or is it something that you do at the very beginning, when you start deploying? Maybe as a service: once a year I say, okay, let's do it again and see if something changes. One shot, one single run, you know? >>Well, would you recommend somebody performance test just once a year? >>Like, so that's my thing: in previous roles, my rule was you performance test every single release, and that was at a minimum once a week. And if your thing did not get faster, you had to have an executive exception to get it into production. That's the space that we want to live in as well, as part of your CI/CD process. This should be continuous verification: every time you deploy, we want to make sure that we're recommending the perfect configuration for your application in the namespace that you're deploying into. >>And I would be as bold as to say that we believe we can be a part of adding a step in the CI/CD process that's connected to optimization, and that no application should be released, monitored and analyzed on an ongoing basis without optimization being a part of that. And again, not just from a cost perspective: cost and performance. >>There are almost a couple of hundred vendors on this floor. You mentioned some of the big ones, Datadog, et cetera. But what happens when one of the up-and-comers comes out of nowhere with a completely new data structure, some unimaginable new way to collect telemetry data? How do you react to that? >>To us it's zeros and ones. We really are data agnostic, in the sense that we're fortunate enough, from the design of our algorithms, that they don't get caught up on data structure issues. As long as you can capture it and make it available through one of a series of inputs, one would be load or performance tests, it could be telemetry, it could be observability if we have access to it. Honestly, the messier the better from time to time, from a machine learning standpoint. And it's pretty powerful to see: we've never had a deployment where we saved less than 30% while also improving performance by at least 10%, but the typical results for us are 40 to 60% savings and 30 to 40% improvement in performance. >>And what happens if the application, I mean, yes, Kubernetes is the best thing in the world, but sometimes we have external data sources, or we have to connect with external services anyway. So can you also provide an indication on this particular application, like where the problem could be? >>Yeah, and that's absolutely one of the things that we look at too, because especially when you talk about resource consumption, it's never a flat line, right? Depending on your application, depending on the workloads that you're running, it varies sometimes minute to minute, day to day, or it could be week to week even.
And so especially with some of the products that we have coming out, what we want to do, partnering with and integrating heavily with the HPA, is handle some of those bumps, or not necessarily bumps, but bursts, and do it in a way that's intelligent, so that we can make sure it's the perfect configuration for the application regardless of the time of day you're operating in, what your traffic patterns look like, or what your disk looks like. Because with our load-test environments, any metric you throw at us, we can optimize for. >>So Matt and Patrick, thank you for stopping by. We could go all day, because day two is, I think, the biggest challenge right now, not just in Kubernetes, but in application replatforming and transformation. Very, very difficult. For most CTOs and CIOs that I talk to, this is the challenge. From Valencia, Spain, I'm Keith Townsend, along with my co-host Enrico Signoretti, and you're watching theCUBE, the leader in high tech coverage.
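Patrick's rule, that a release which does not get faster needs an explicit exception, maps naturally onto a gate in a CI/CD pipeline. The sketch below is a hypothetical illustration of that continuous-verification idea, not a description of StormForge's product; the result-file format and the 5% tolerance are assumptions.

```python
# Hypothetical CI/CD gate illustrating "continuous verification": block the release
# if the candidate's p95 latency regresses past the baseline by more than a tolerance.
# The JSON result format is an assumption, not a real tool's output.
import json
import sys

def p95_latency_ms(path: str) -> float:
    """Read a load-test summary file and return its p95 latency in milliseconds."""
    with open(path) as f:
        return float(json.load(f)["p95_latency_ms"])

def verify(candidate_file: str, baseline_file: str, tolerance: float = 0.05) -> None:
    candidate = p95_latency_ms(candidate_file)
    baseline = p95_latency_ms(baseline_file)
    allowed = baseline * (1 + tolerance)
    print(f"baseline p95={baseline:.1f}ms  candidate p95={candidate:.1f}ms  allowed<={allowed:.1f}ms")
    if candidate > allowed:
        sys.exit("performance regression detected: release blocked pending an explicit exception")

if __name__ == "__main__":
    # Typical pipeline invocation: python verify.py results/candidate.json results/baseline.json
    verify(sys.argv[1], sys.argv[2])
```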
Alan Flower, HCL Technologies & Ramón Nissen, Red Hat | KubeCon + CloudNativeCon EU 2022
>>theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain, and KubeCon and CloudNativeCon Europe 2022. I'm Keith Townsend, along with Paul Gillin, senior editor, enterprise architecture, at SiliconANGLE. We are going to talk to some amazing folks, especially in today's segment. Paul, there are a lot of companies here. What's been the consistent theme you've heard so far at the show? >>Well, one thing that's different about this show, it seems to me, than others I've attended is that it's all around open source. We're not seeing a lot of companies bringing new proprietary technology to market. We are seeing them try to piece together open source components, perhaps with a proprietary element to it, to create some kind of a common management interface or control plane, and that's quite different from what I think we've seen in the past. Open source business models have been difficult to make work historically, and these companies are all taking their own approaches to it. But the degree to which the people here have coalesced around the importance of open source as building blocks for the future of applications is something I've not seen quite this way before. >>Well, in our current segment we're gonna go deep into these challenges and how enterprises and their partners are addressing them. We have with us Alan Flower, head of cloud native at HCL Technologies; we'll get into how a system integrator is helping with this transition, along with Ramón Nissen, senior product manager at Red Hat. Welcome to the show, you're now CUBE alums. >>Thanks for having us. >>So we're gonna get right off the bat, we're gonna talk about this: what are some of the trends you're seeing when it comes to application migration? You've done, I'm assuming at this point, thousands of them. What are some of the common trends? >>Well, it's a very good question. Clearly, at HCL we've helped thousands of clients move tens of thousands of applications to what we would call a cloud native environment. The overwhelming trend we're seeing, of course, is that clients realize it's a particularly complex, sophisticated journey. It requires a certain set of skills and capability, and clients increasingly ask us for anything we can do to simplify and accelerate the journey, because what's really important to clients, if you're on a transformation journey to cloud, is that you want to see some value very quickly. I don't want to wait three to five years to transform my application portfolio; if you can do something in three to five days, that would be perfect, thank you. >>Well, three to five days sounds more akin to when we were doing P2V or V2V migrations, and I'm sure HCL has at this point done millions of those types of migrations. What are some of the challenges or the nuances in moving from a traditional monolithic application to cloud native? >>Well, it's another good question. Of course, you'll notice that there's a general trend in the industry: clients don't really want to lift and shift anymore. Lift and shift doesn't really bring any transformational value to my company. So clients are increasingly looking for what we would call cloud native modernization. I want my applications to really take advantage of the cloud native environment.
They need to be elastic and kind of more robust than maybe before. Now in particular, I think a lot of clients have realized that this state of nirvana, where we're gonna modernize everything to be a cloud native, microservices-based application, is a tremendous journey, and no client really has the time, patience or resources to fully refactor or rearchitect all of their applications. They're looking for more immediate impact. So a key trend that we've seen, of course, is that clients still want to refactor and modernize applications, but they're focusing those resources on the applications that will bring greater impact to their business. >>What they now see as a better replacement for lift and shift is probably what we would call replatforming, where they want all of the advantages of a cloud native environment, but they haven't necessarily got the time to modernize the code base. They want to replatform to Kubernetes in particular, and they want us to take them there quickly. And that's why, for example, this week at KubeCon, HCL has announced a new set of tools called KMP, based on Konveyor, an open source project supported by Red Hat. The key attraction of KMP is that it lets me replatform my applications to Kubernetes immediately. Within two or three minutes I can bring an application from a legacy platform directly onto Kubernetes, and I can take it straight into production. That's the kind of acceleration that clients are looking for today. >>Isn't that just a form of lift and shift, though? >>Well, no. Lift and shift, typically, was moving virtual machines from one place to another. The focus of Kubernetes, of course, is containerization of solutions, and it's not just about containerizing the solution and moving it; it's the DevOps tool chain around the solution as well. And of course, when I take that application into production in a Kubernetes-based environment, I'm expecting to operate it in a different way as well. So that's where we see tremendous focus on what we would call cloud native operations: clients expecting to use practices like site reliability engineering to run these replatformed applications in a different way too. >>It sounds like you're saying, I mean, replatforming has been a spectrum of options; I think Gartner has seven different types of replatforming. Are you seeing clients take a more mature attitude now toward replatforming? Are they looking more carefully at the characteristics of their legacy applications and trying to make more nuanced choices about what to replatform and what to just leave alone? >>I think clients, and I'm sure Ramón's got some comments on this too, but clients have a lot more insight now in terms of what works for them. They've realized that this promise of a microservices-based application estate is a good one, but I can't do that for every application. If I am a large enterprise with several thousand applications in my portfolio, I can't refactor everything to become microservices-based. So clients see replatforming as a possible middle ground: I get a lot of the advantages of a cloud native environment, and my applications are inherently more efficient and hopefully a lot more performant. >>Yeah, it's a matter of software delivery performance. Legacy workloads will definitely benefit from being brought into Kubernetes in the software delivery performance department.
So it's a matter of somehow revamping your legacy applications and getting the benefits in application life cycle management, fault tolerance and all that stuff. It's about leveraging what Kubernetes offers. >>When you say bringing legacy applications into Kubernetes, it's not that simple, right? I mean, what's involved in doing that? >>It isn't. It's a matter of taking a holistic view of your application portfolio, understanding the nuances of each application type within your organization, and trying to come up with a suitable migration strategy for each one of those application types. And for that, what we're trying to do is provide a series of standardized tools and methodologies from a community perspective. We created this Konveyor community; it was kick-started by Red Hat and IBM, but we are trying to bring in as many vendors and GSIs as possible to set up these standards and make this road towards Kubernetes as easy as possible. >>So we've done a little bit of app modernization in the CTO Advisor hybrid infrastructure, and one of the things we've found is that there are plenty of advantages. If I take a monolithic application that I've traditionally had to scale out to gain performance, I can take selective parts of it and now add auto-scaling. However, as I look at a landscape, Alan, of thousands of applications, I need to dedicate developer resources to get that done, and my traditional environment is busy building new; my developers are building new applications and new capabilities. I just don't have the resources to do that. How do HCL and Red Hat team together to fast-track that capability? >>Well, I'll comment on two things in particular. The first is skilling. The thing that's really surprised us at HCL is that so many of our clients around the world have said, we are desperately short of skills; we cannot hire ourselves out of this problem; we need to get our existing developer community re-skilled around platforms like OpenShift, Konveyor and other projects too. So the first thing that's happened to us at HCL is that we've been incredibly busy undertaking what we would call developer workforce modernization, where we have to help the client reskill their entire technical and developer community. So we will help the client develop a community, build the cloud native understanding, and help them understand how to modernize tools or applications, for example. The second thing I'd mention, and this comes back to a comment that Ramón made around Konveyor: >>It's been really encouraging to see the open source community start to invest in building the supporting frameworks around this kind of modernization journey, because if I'm a developer that's re-skilling and I'm attempting to modernize an application, being able to dip into an open source project really matters; a good example would be Tackle, part of the Konveyor project. You now have open source tools that will help you analyze your applications; they will go into the source code and give the developer guidance in terms of what would be effective treatments to undertake.
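To give a flavor of what "analyze the source code and suggest treatments" means in practice, here is a deliberately tiny, hypothetical scanner. The real Konveyor/Tackle analyzers work from curated rulesets and understand frameworks; the two patterns and the advice strings below are invented purely for illustration.

```python
# Toy illustration of source analysis in the spirit of tools like Tackle.
# The rules and advice below are invented; real analyzers use curated rulesets.
import pathlib
import re

RULES = [
    (re.compile(r"new\s+InitialContext\("),
     "JNDI lookup found: externalize configuration for a Kubernetes deployment"),
    (re.compile(r"[A-Za-z]:\\|/opt/legacy/"),
     "hard-coded filesystem path: replace with a mounted volume or ConfigMap"),
]

def scan(src_root: str) -> list[tuple[str, int, str]]:
    """Walk a source tree and report (file, line, advice) for each matched rule."""
    findings = []
    for path in pathlib.Path(src_root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for pattern, advice in RULES:
                if pattern.search(line):
                    findings.append((str(path), lineno, advice))
    return findings

if __name__ == "__main__":
    for file, lineno, advice in scan("src"):
        print(f"{file}:{lineno}: {advice}")
```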
So perhaps a development team that is new to this modernization journey would benefit from a project like Konveyor, because I need to know where I can safely modernize my application. For experienced organizations like HCL, that comes naturally to us, but for people who are just starting this journey, if I can take an open source tool like Tackle, or the rest of Konveyor, and use that to accelerate my journey, it takes a lot of pressure off my organization, and it also accelerates the journey too. >>And it's not just a matter of tooling. We're also open-sourcing the modernization methodology that we've been using in Red Hat Consulting for years. So this whole Konveyor community is about knowledge sharing on one hand, and on the other, building a set of tools together based on the knowledge we are sharing, to make it as easy as possible. >>And what role does Red Hat play in all of that? I mean, you've carved out this position for yourself as the true open source company. Does that position you for a leadership role in helping companies make this transition? >>I wouldn't say we should be leading the whole thing. We kick-started it, but we want to get other vendors on board. One cool thing about the Konveyor community is that IBM is open-sourcing a lot of their IP, so IBM Research is on board. We have some really crazy stuff related to AI being applied to application analysis, we have some machine learning in place, very cool stuff that has been sitting in a corner at IBM Research for quite some years and is now being open sourced and integrated into a unified user experience to streamline the modernization process as much as possible. >>So let's talk about the elephant in the room. HCL was leading the conversation around Cloud Foundry circa five-plus years ago. As customers are thinking about their journey to cloud native, how should they think about that Cloud Foundry to cloud native, or Kubernetes, replatforming? >>Well, within the Cloud Foundry community we've been quite staunch supporters of Kubernetes for quite some time. It's a stated intent of the Cloud Foundry Foundation to move across to a Kubernetes platform, but that is a significant engineering journey for Cloud Foundry to take. Now we're in this position where a lot of large users of Cloud Foundry have a certain urgency to their journey: they want to consolidate on a single Kubernetes-based infrastructure. We see a lot of traction around OpenShift, for example, from Red Hat, in terms of its market leadership. So a lot of clients are saying, we would like to consolidate all of our platforms around a single Kubernetes vendor, whether that's Red Hat or anyone else, quite frankly. What HCL is doing right now, with the tools and solutions we've announced this week, is simply accelerating that journey for clients. If I've got a large installed base of applications running in my Cloud Foundry environment, and I've also started to invest in and standardize on Kubernetes-based platforms like OpenShift, most clients would see it as quite a sensible choice to now consolidate those two environments into one. And that's simply what we're doing at HCL: we're making it very, very easy.
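The replatforming Alan describes ultimately comes down to mapping Cloud Foundry application settings onto Kubernetes objects. The sketch below shows a simplified version of that mapping, assuming a container image has already been built from the app; it is only an illustration of the idea, not how HCL's KMP actually works.

```python
# Simplified sketch of the Cloud Foundry -> Kubernetes mapping behind replatforming.
# Real tooling also handles buildpacks, bound services, routes, health checks, etc.
import yaml  # pip install pyyaml

def cf_app_to_deployment(cf_app: dict, image: str) -> dict:
    """Translate one Cloud Foundry manifest entry into a Kubernetes Deployment."""
    name = cf_app["name"]
    # Cloud Foundry memory strings like "512M"/"1G" roughly map to "512Mi"/"1Gi".
    memory = cf_app.get("memory", "256M").replace("M", "Mi").replace("G", "Gi")
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": cf_app.get("instances", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,  # built separately, e.g. from buildpack output
                        "resources": {"limits": {"memory": memory}},
                    }],
                },
            },
        },
    }

if __name__ == "__main__":
    cf_manifest_entry = {"name": "orders-api", "instances": 3, "memory": "512M"}  # invented app
    print(yaml.safe_dump(cf_app_to_deployment(cf_manifest_entry, "registry.example.com/orders-api:1.0")))
```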
In fact, we've fully automated the journey, so I can move all of my applications from Cloud Foundry into, for example, OpenShift pretty much immediately, and it just simplifies the entire journey. >>So, as we start to wrap up the segment, I like to hear customer stories. How are customers either surprised or challenged when they get into this? Even with the help of an HCL and a Red Hat, where are they seeing the most difficult parts of their migrations? >>Well, my simple comment would be complexity, and the associated requirement for skilled people to undertake this modernization work. We spoke about this, of course: clients now are a lot more realistic. They understand that their ambition needs to be somewhat tempered by their ability to drive modernization quickly. So we see a lot of clients, when they look at their very large global portfolios of applications, trying to invest their resources in the higher-priority applications, the revenue-generative applications in particular, but they have to bring everything else with them as well. A common separation point is that we see a lot of clients who might say, I'm gonna properly modernize and refactor maybe five to 10% of my portfolio, but the other 90% also needs to come on the journey, and that's really where replatforming in particular kicks in. So the key trend, again, is clients saying to us: I've got to take the entire journey; I've got the resources and the skills to really focus on this much of my application base; can someone simplify the overall journey so I can afford to bring everything on a cloud native journey? >>So the key to success here is having a holistic view of the application portfolio, segmenting it into different application types, ordering the priorities of those application types, and coming up with suitable migration strategies for each one of them. >>Is it really necessary to move everything, though? >>Not necessarily, no. Absolutely not everything. But as we were saying before, it will definitely make sense to move legacy applications towards Kubernetes to leverage all the software delivery benefits. >>That's the point, right? >>It is, if you're gonna restructure the application around APIs and microservices. >>Right. The way I've seen organizations succeed the most in this road towards cloud native and Kubernetes in general is by trying to address the whole portfolio. Maybe not move everything, but have this holistic view and not leave anything behind, because if you try to do isolated initiatives, bringing this or that application over in isolation, you will miss part of the picture and you might be doomed to fail. >>Yeah, it's been my experience that if you don't have a plan to migrate your applications to a cloud native operating model, then you're doomed to follow lift-and-shift examples to the public cloud, or to any other cloud, if you don't make that operational transition. Last question on operational transition: we've talked a lot about the replatforming process itself. What about day two, after I've landed in the cloud? What are some of the top considerations for compliance, observability, just making sure my apps stay up, and transitioning my workforce to that model?
>>I think the overarching trend or theme that I see is that clients now are asking for what I would call cloud native operations. In particular, there's a very solid theme around reliability engineering: think site reliability engineering, SRE, and platform reliability engineering. These are the dominant topics that clients want to engage HCL on in particular, because the point you make is a valid one: I've modernized my application, and now I need to modernize the way that I operate the application in production, otherwise I won't see those benefits. So that general theme of SRE is keeping us really busy. We're busy re-skilling all of those operations teams around the world as well, because they need to know how to run these environments appropriately too. >>And also, being able to measure your progress while you're transitioning is important. That's one of the concerns we're addressing as well in the Konveyor community, with a tool called Pelorus, to effectively measure the software delivery performance of the organization after the transition has been done. >>And this is a really good point, by the way, because most people think it's a bit of a black art. How do I understand how I modernize my application? How do I understand how I've improved my value chain around software creation? Many people thought you needed to bring in very expensive consultants to advise you on these black arts. >>Definitely not. >>But with open source projects like Konveyor from Red Hat, the availability of these tools on an open source model means any engineer, any developer, can get these tools off the shelf and get that immediate benefit. >>Well, Alan Flower, head of cloud native labs at HCL, and Ramón Nissen, senior product manager at Red Hat, thank you for joining theCUBE. You're now CUBE alums; you'll have a nice profile picture on here. >>Absolutely. Thank you. >>From Valencia, Spain, I'm Keith Townsend, along with Paul Gillin, and you're watching theCUBE, the leader in high tech coverage.
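The "measure your progress" point usually comes down to a handful of software delivery metrics, such as deployment frequency and lead time for change; Pelorus gathers these from cluster and Git data, but the underlying arithmetic is straightforward. The sketch below uses invented timestamps purely to show the calculation.

```python
# Core arithmetic behind software delivery metrics of the kind Pelorus reports:
# deployment frequency and lead time for change. All timestamps are invented.
from datetime import datetime, timedelta
from statistics import median

# (commit time, deploy time) pairs observed for one service over a 30-day window
DEPLOYS = [
    (datetime(2022, 5, 2, 9, 0), datetime(2022, 5, 2, 15, 30)),
    (datetime(2022, 5, 9, 11, 0), datetime(2022, 5, 10, 10, 0)),
    (datetime(2022, 5, 16, 14, 0), datetime(2022, 5, 16, 18, 45)),
]

def deployment_frequency(deploys, window_days: int = 30) -> float:
    """Average number of production deployments per week over the window."""
    return len(deploys) / (window_days / 7)

def median_lead_time(deploys) -> timedelta:
    """Median time from commit to running in production."""
    return median(deploy - commit for commit, deploy in deploys)

if __name__ == "__main__":
    print(f"deployment frequency: {deployment_frequency(DEPLOYS):.1f} per week")
    print(f"median lead time for change: {median_lead_time(DEPLOYS)}")
```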