Nick Barcet, Red Hat | KubeCon + CloudNativeCon NA 2021


 

(bright music)

>> Welcome to this Kube Conversation. I'm Dave Nicholson, and today we have a very special guest from Red Hat, Nick Barcet. Nick is the Senior Director of Technology Strategy at Red Hat. Nick, welcome back to theCUBE.

>> Thank you. It's always a pleasure to be visiting you here virtually.

>> It's fantastic to have you here. I see new office surroundings at Red Hat. Have they taken a kind of nautical theme at the office there? Where are you joining us from?

>> I'm joining from my boat. I've been living on my boat for the past few years, and that's where you'll find me most of the time.

>> So would you consider your boat to be on the Edge?

>> It's certainly one form of Edge. You know, there are multiple forms of Edge, and a boat is one of those forms.

>> Let's talk about Edge now. We're having this conversation in anticipation of KubeCon + CloudNativeCon North America 2021, coming up in Los Angeles. Let's talk specifically about where the Edge, Edge computing, and Kubernetes come together, from a Red Hat perspective. Walk us through that: talk about some of the challenges that people are having at the Edge, and why Kubernetes is something that would be considered at the Edge.

>> Let's start from the premise that people have been doing stuff at the Edge for ages. I mean, nobody has been waiting for Kubernetes or any other technology to start implementing some form of computing that happens in their stores, in their factories, wherever. What's really new today, when we talk about Edge computing, is reusing the same technology we've been using to deploy inside of the data center and expanding that all the way to the Edge. That's what, from my perspective, constitutes Edge computing, or the revolution it brings. It means that the same GitOps and DevSecOps methodologies we were using in the data center are now extendable all the way to those devices that live in remote locations, and that we can reuse the same methodology, the same tooling, and that includes Kubernetes. All the effort we've been putting in over the past couple of years has been to make Kubernetes even more accessible for the various Edge topologies that we encounter when discussing with our customers that have Edge projects.

>> So typically, when we think of a Kubernetes environment, you're talking about containers that are contained in pods, that live on physical clusters. Despite all of the talk of no-code and serverless, we still live in a world where applications and microservices run on physical servers. Are there practical limitations in terms of just how small you can scale Kubernetes? How far, how close to the Edge can you get with a Kubernetes deployment?

>> So in theory, there is really no limit, as the smallest devices are always bigger than Kubernetes itself. But the reality is you never use just Kubernetes; you use Kubernetes with a series of other projects that make it complete: for example, components that report telemetry, components that help you automatically scale, et cetera. And the further you go toward the Edge, the fewer of these components you can afford. So you have to make trade-offs as you reduce the size of the device. Today, what Red Hat offers is really concentrated on where we can deliver a full OpenShift experience.
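
[Editor's note: to make the sizing trade-off concrete, here is a minimal sketch in Python, using the official kubernetes client, that reports each node's allocatable CPU and memory; an operator could use numbers like these to judge which optional components (telemetry, autoscalers) a small Edge node can still afford. The setup is an illustrative assumption, not Red Hat tooling.]

    # Illustrative sketch only: report allocatable resources per node so an
    # operator can judge which optional Edge components still fit.
    # Assumes a reachable cluster and the official "kubernetes" Python client.
    from kubernetes import client, config

    def to_gib(mem: str) -> float:
        """Convert a Kubernetes memory quantity (e.g. '24Gi', '24576000Ki') to GiB."""
        units = {"Ki": 1 / (1024 ** 2), "Mi": 1 / 1024, "Gi": 1.0, "Ti": 1024.0}
        for suffix, factor in units.items():
            if mem.endswith(suffix):
                return float(mem[:-2]) * factor
        return float(mem) / (1024 ** 3)  # plain bytes

    def main() -> None:
        config.load_kube_config()  # or config.load_incluster_config() on-cluster
        for node in client.CoreV1Api().list_node().items:
            alloc = node.status.allocatable
            print(f"{node.metadata.name}: {alloc['cpu']} CPU, "
                  f"{to_gib(alloc['memory']):.1f} GiB allocatable")

    if __name__ == "__main__":
        main()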
The smallest environment on which we would recommend running OpenShift at the Edge is a single node with roughly 24 gigabytes of RAM, which is already a relatively big Edge device. When you go a step lower than that, we would recommend using a standard RHEL for Edge configuration or something similar, not Kubernetes anymore.

>> So you said single node. Let's double click on that for a second. Is that a single physical node that is abstracted in a way to create some level of logical redundancy? When you say single node, walk us through that. We've got containers that are in pods, so what are we talking about?

>> Based on your requirements, you can have different ways of addressing your compute needs at the Edge. You can have the smallest of clusters, which would be three nodes where the control plane and the worker nodes are integrated into one. When you want to go a step further, you can use worker nodes that are controlled remotely via a central control plane at a central site. And when you want to go even one step further and deploy Kubernetes on a very small machine that remains fully functional even if disconnected, that's when you would use the thing that is not a cluster anymore: single-node Kubernetes, where you still have access to the full Kubernetes API regardless of the connectivity of your site, whether it's active or not, whether you're at sea or in the air. And there we still offer some form of software high availability, because Kubernetes, even on a single node, will still detect if a container dies and restart it, and provide similar functionality. But it won't provide hardware availability, since it is a single node.

>> And that makes sense. Yeah, that makes perfect sense. And I would suggest that we refer to that as a single node cluster, just because we like to mix up terminology in our business and sometimes confuse people with it.

>> Technically, that was the choice we made, actually: we don't call it a cluster, because it's not a cluster.

>> Exactly. No, I appreciate that. Absolutely. So let's be explicit about what the trade-offs are there. Let's say that I'm thinking of deploying something at the Edge, I'm going to use Kubernetes to orchestrate my container environment, and pretend for a moment that space and cost aren't huge limiting factors. I could put a three node cluster in, but the idea of putting in a single node is attractive. Where's the line drawn in terms of what you would recommend? What are the trade-offs? What am I losing, going to the single node cluster? See, I just called it that.

>> Well, in a nutshell, you're losing hardware high availability. If your server fails, since you only have one server, you lose everything, and there is no way around that. That's the biggest trade-off. Then you also have a trade-off on the memory used by the control plane, which you won't be able to use for something else. So if I have a site with excellent connectivity, where the biggest loss of connectivity might be counted in hours, maybe a remote worker node is a better solution, because that way I have a single central site that carries my control plane, and I can use all the RAM and all the CPUs on my local site to deploy my workloads, not to carry a control plane.
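
[Editor's note: a minimal sketch of the "software high availability" Nick describes, using the official kubernetes Python client. Even on a single node, the Deployment controller recreates a pod whose container dies, and a liveness probe forces a restart of a hung one. The names and image here are hypothetical placeholders.]

    # Illustrative sketch only: on a single-node cluster, Kubernetes still
    # restarts failed containers (software HA), though a hardware failure
    # takes everything down.
    from kubernetes import client, config

    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="nginx:1.25",
        # Liveness probe: if the container stops answering, kubelet restarts it.
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/", port=80),
            period_seconds=10,
            failure_threshold=3,
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="edge-workload"),  # hypothetical name
        spec=client.V1DeploymentSpec(
            replicas=1,  # one replica; the controller recreates it if it dies
            selector=client.V1LabelSelector(match_labels={"app": "edge-workload"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-workload"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)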
To give you an example of this trade-off in the telco space: if you're deploying an antenna in a city, you have plenty of antennas covering that city, and therefore the loss of one antenna is not a big deal. So in that case, you would be tempted to use a remote worker node, because you will be maximizing your use of the RAM on the site for the workload, which is letting people establish communication using their phones. But now take another antenna that we are going to locate in a very remote location. There, if this antenna fails, everybody fails: there's nobody able to make calls, and very often even emergency vehicles cannot talk to each other. So in that case, it's a lot better to have an autonomous deployment, something where the control plane and the workload itself are run in one box. And this one box can in fact be duplicated: there can be another box, either sitting in a truck in case of emergency, or powered off on the antenna site, so that in case of a major failure you have the possibility to restore it. So it really depends on your set of constraints, in terms of availability and in terms of efficiency of your RAM use, that is going to make you choose between one or the other of the deployment models.

>> No, that's a great example. And so it sounds like it's not a one size fits all world, obviously. Now, from the perspective of the marketplace looking in at Red Hat, participating in this business: some think of Red Hat as the company that deployed Linux 20 years ago. Help us make that connection between Red Hat today, what you've been doing for the last 20 years, and this topic of Edge computing, 'cause some people don't automatically think of Red Hat and Edge computing. I do, I think they should, (chuckles) but help us understand that.

>> Yeah, obviously a lot of people consider that Red Hat is Red Hat Linux, and that's it. Red Hat Enterprise Linux is what we've been known for since our beginnings 25 years ago, and what made our early success. But we consider ourselves more of an infrastructure company. We have been offering, for the past 20 years, the various components that you need to deploy servers, run and manage your workloads across data centers, make sure that you can store your data, and automate your operations on top of this infrastructure. So we really consider ourselves much more a company that offers everything that enables you to run your servers and run your workloads on top of those servers. And that includes tools to do virtualization, and tools to do continuous deployment of containers. That's where Kubernetes entered into play about 10 years ago. Well, first it was a PaaS that then became Kubernetes and the OpenShift offering that we have today.

>> Yeah. Thanks for that. So I've got a final question for you. It's a little bit off topic, but it's related; this is in the category of "Nick predicts." When does Nick predict that we will get to the point where we tip beyond the 50/50 mark, cloud versus on-premises IT spending, if you accept that today we're still in the neighborhood of 75 to 80% on-premises? When will we hit the 50/50 mark? I'm not asking you for the hundred percent cloud date, but give us a date, give us a month and a year for 50/50.

>> Given the progression of cloud, if there were no Edge, we could say that two to three years from now we would be at this 50/50 mark.
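
[Editor's note: a toy sketch of the decision logic in Nick's telco example. The inputs and thresholds are illustrative assumptions, not Red Hat sizing guidance; only the 24 GiB figure comes from the conversation above.]

    # Toy decision sketch: choose an Edge deployment model from the
    # constraints discussed above. Thresholds are invented for illustration.
    def pick_topology(has_redundant_coverage: bool,
                      worst_disconnect_hours: float,
                      ram_gib: float) -> str:
        if has_redundant_coverage and worst_disconnect_hours <= 4:
            # A city antenna among many: keep the control plane central and
            # spend all local RAM on the workload.
            return "remote worker node"
        if ram_gib >= 24:
            # An isolated site that must keep working while disconnected.
            return "single-node OpenShift (autonomous control plane + workload)"
        return "RHEL for Edge (no Kubernetes)"

    print(pick_topology(has_redundant_coverage=True, worst_disconnect_hours=1, ram_gib=16))
    print(pick_topology(has_redundant_coverage=False, worst_disconnect_hours=72, ram_gib=32))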
But the funny thing is that at the same time as the cloud progresses, people start realizing that they have needs that have to be solved locally. This is why we are deploying Edge-based solutions, solutions which can reliably provide answers regardless of the connectivity to the cloud, regardless of the bandwidth. There are things that I would never want to do, like feeding the feeds from 4K cameras into my cloud environment; that won't scale, I won't have the bandwidth to do so. And therefore, maybe the answer to your question is that it's going to be asymptotic, and it's almost impossible to predict.

>> So that is a much better answer than giving me an exact date and time, because (chuckles) it reveals exactly the reality that we're living in. Again, it's fit for function. It's not cloud for cloud's sake; compute resources and data resources have a place where they naturally belong, oftentimes. And oftentimes that is on the Edge, whether it's at the edge of the world in a sailboat, or out on a single server, not node, or, I keep wanting to say single node cluster, it's killing me, I dunno why, I think it's so funny, but a single node implementation of OpenShift where you can run Kubernetes on the Edge. It's a fascinating subject. Anything else that you want to share with us that we didn't get to?

>> I think one aspect that we never talk about enough is how you manage at the scale of the Edge. Because even though each Edge site is very small, you can have thousands, even hundreds of thousands, of these single node somethings running all over the place. And I think that what you're seeing in Red Hat Advanced Cluster Management for Kubernetes, and particularly the 2.4 version that we are going to be announcing this week and actually releasing in November, is a pretty good answer to that problem: how do I deploy these devices with zero touch? How do I update them, upgrade them? How do I deploy the workloads on top of them? How do I ensure I have the right tooling to deploy all of that at scale? We've now done testing of ACM with up to 2,000 clusters connected to a single ACM instance, and in the future we are planning on building federations of those, which really gives us the possibility to provide the tooling needed to manage at that scale.

>> Excellent. Yeah. Whenever we start talking about anything in the realm of containerization and Kubernetes, scale starts to become an issue. It's no longer a question of a human being managing 10 servers and 50 applications; we start talking about tens of thousands and hundreds of thousands of instances, where it's beyond human scale. So that's obviously something that's very, very important. Well, Nick, I want to thank you for becoming a Kube veteran once again. Thanks for joining this Kube Conversation. From Dave Nicholson, this has been a Kube Conversation in anticipation of KubeCon + CloudNativeCon North America 2021. Thanks for tuning in.

(bright music)
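
[Editor's note: a back-of-the-envelope sketch of the 4K-camera point above; the bitrates and uplink size are assumptions chosen for illustration, not figures from the conversation.]

    # Why raw 4K feeds don't belong in the cloud: assumed numbers, illustrative only.
    CAMERA_MBPS = 25      # a typical compressed 4K stream (assumption)
    UPLINK_MBPS = 200     # a generous site uplink (assumption)
    cameras = 40

    needed = cameras * CAMERA_MBPS  # 1,000 Mbps for 40 cameras
    print(f"Need {needed} Mbps, have {UPLINK_MBPS} Mbps uplink -> "
          f"{'stream to cloud' if needed <= UPLINK_MBPS else 'analyze locally at the Edge'}")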

Published Date : Oct 14 2021
