
Why Multi-Cloud?


 

>>Hello, everyone. My name is Rick Pew. I'm a senior product manager at Mirantis, and I have been working on Docker Enterprise Container Cloud for the last eight months. Today we're going to be talking about multi-cloud Kubernetes. So the first thing to look at is: is multi-cloud real? The term is thrown around a lot, and by the way, I should mention that in this presentation we use the term multi-cloud to mean both multi-cloud, which in the technical sense really means multiple public clouds, and hybrid cloud, which means public clouds plus on-prem. In this presentation we'll use the term multi-cloud to refer to all different types of multiple clouds, whether it's all public cloud, a mixture of on-prem and public cloud, or, for that matter, multiple on-prem clouds, as Docker Enterprise Container Cloud supports all of those scenarios. So is it real? Well, let's look at some research that came out of Flexera in their 2020 State of the Cloud report. You'll notice that 33% state that they've got multiple public and one private cloud, and 53% say they've got multiple public and multiple private clouds. If you add those two up, 86% of the people say that they're in multiple public clouds and at least one private cloud, so I think at this stage we can say that multi-cloud is a reality. According to 451 Research, a number of CEOs stated that a strong driver was the desire to optimize cost savings across their private and public clouds. They also wanted to avoid vendor lock-in by operating in multiple clouds, and to dissuade their teams from taking too much advantage of a given provider's proprietary infrastructure. But they also indicated that the complexity of using multiple clouds hindered the rate of adoption. That doesn't mean they're not doing it; it just means that they don't go as fast as they would like to go, in many cases, because of the complexity. And here at Mirantis we surveyed our customers as well, and they're telling us similar things: risk management through the diversification of providers is key on their list, along with cost optimization and democratization, allowing their development teams to create Kubernetes clusters without having to file an IT ticket. They want to give developers a self-service, cloud-like environment, even if it's on-prem or multi-cloud, with the ability to create their own clusters, resize their own clusters, and delete their own clusters without needing to have IT or their operations teams involved at all. But there are some challenges with this. The different clouds require different automation to provision the underlying infrastructure, deploy an operating system, or deploy Kubernetes, for that matter, in a given cloud. You could say that they're not that complicated; they all have very powerful consoles and APIs to do that. But to do it across three or four or five different clouds, you have to learn three or four or five different APIs and web consoles, and in that scenario it is difficult to provide self-service for developers across all the cloud options, which is what you want in order to really accelerate your application innovation. So what's in it for me? We've got a number of roles in the enterprise, developers, operators, and business leaders, and they have somewhat different needs.
On the developer side, the need is flexibility to meet their development schedules. Number one, they're under constant pressure to produce, and in order to do that they need flexibility, in this case the flexibility to create Kubernetes clusters and use them across multiple clouds. They also have CI/CD tools, and they want those to be normalized and automated across all of the on-prem and public clouds that they're using. In many cases they'll have a test and deployment scenario where they'll want to create a cluster, deploy their software, run their tests, score the tests, and then delete that cluster, because the only point of that cluster, perhaps, was to test a delivery pipeline. So they need that kind of flexibility. From the operator's perspective, they always want to be able to customize and control their infrastructure and deployments. They certainly have the desire to optimize their opex and capex spend. They also want to support their DevOps teams, who many times are their customers, through API access for on-prem and public clouds. Burst scaling is something operators are interested in, and something public clouds can provide, so they want the ability to scale out into public clouds, perhaps from their on-prem infrastructure, in a seamless manner. And many times they need to support geographic distribution of applications, either for compliance or performance reasons, so having data centers all across the world and being able to specifically target a given region is high on their list. Business leaders want flexibility and the confidence to know that their on-prem and public cloud deployments are fully supported. Like the operator, they want to optimize their cloud spend. Business leaders think about disaster recovery, so having applications running and living in different data centers gives them the opportunity to have disaster recovery. And they really want the flexibility of keeping private data under their control on-prem; certain applications may access that data on-prem, while other applications may be able to fully run in the cloud. So what should I look for in a container cloud? You really want something that fully automates these cluster deployments: the virtual machine or bare metal, the operating system, and Kubernetes. It's not just deploying Kubernetes; it's how do I create my underlying infrastructure of a VM or bare metal, how do I deploy the operating system, and then, on top of all that, how do I deploy Kubernetes. You also want one that gives you unified cluster lifecycle management across all the clouds. These clusters are running software that gets updated; Kubernetes has its own release cycle, and when something new comes out and is available, how do you get that across all of your clusters running in multiple clouds? We also need a container cloud that can provide visibility through logging, monitoring, and alerting, again across all the clouds. Many offerings have these for a particular cloud, but getting that across multiple clouds becomes a little more difficult.
Docker Enterprise Container Cloud is a very strong solution and really meets many of these dimensions, the dimensions we went through on the last slide. We've got on-prem and public clouds as of our GA today: we're supporting OpenStack and bare metal for the on-prem solutions, and AWS in the public cloud. We'll be adding VMware very soon as another on-prem solution, as well as Azure and GCP. So thank you very much; I look forward to answering any questions you might have, and we'll call that a wrap. Thank you. >>Hi, Rick. Thanks very much for that talk. I am John James; you've probably seen me in other sessions. I do marketing here at Mirantis, and I wanted to take this opportunity, while we had Rick, to ask some more questions about multi-cloud. It's potentially a pretty big topic, isn't it, Rick? >>Yeah. The devil's in the details, and there are lots of details that we could go through if you'd like; I'd be happy to answer any questions that you have. >>Well, we've been talking about hybrid cloud for literally years. This is something that several generations of folks in the IaaS space, doing on-premise IaaS, for example, with OpenStack the way Mirantis does, thought had a lot of potential. A lot of enterprises believed that, but there were things stopping people from making it real. In many cases it required a very high degree of willingness to create homogeneous platforms in the cloud and on premises, and that was often very challenging. But it seems like with things like Kubernetes, and with the isolation provided by containers, this is beginning to shift: people are actually looking for some degree of application portability between their on-prem and their cloud environments, and this is opening up investment and interest in pursuing this. Is that the right perception? >>Yeah. So let's break that down a little bit. What's nice about Kubernetes is that the APIs are the same regardless of whether it's something that Google or AWS is offering as a platform as a service, or whether you've taken the upstream open source project and deployed it yourself on premises or in a public cloud, or whatever the scenario might be; it could even be a competitor's product. The Kubernetes API is the same, which is the thing that really gives you that application portability. The container itself is containerizing your application and minimizing any dependency issues you might have, and then the way you deploy that to any of the Kubernetes clusters is the same regardless of where it's running. The complexity comes in how do I actually spin up a cluster in AWS and OpenStack and VMware and GCP and Azure? How do I build that infrastructure and spin that up, and then use the ubiquitous Kubernetes API to actually deploy my application and get it to run? So what we've done is we've unified and, I'll use the word normalized, but a lot of times people think that normalization means that you're going to a lowest common denominator, which really isn't the case in how we've attacked the enabling of multi-cloud.
What we've done is that we've looked at each one of the providers and are basically providing an API that allows you to utilize whatever the best of that particular breed of provider has, not going to a least common denominator, but still giving you a single API by which you can create the infrastructure. The infrastructure could be on-prem as bare metal, it could be on-prem in an OpenStack or VMware infrastructure, or it could be any of the public clouds; you have an API that works for all of them. And we've implemented that API as an extension to Kubernetes itself, so for all of the developers, DevOps folks, and operators that are already familiar with operating within the API of Kubernetes, it's a very natural extension to actually be able to spin up these clusters and deploy them. >>Now that's interesting. Without giving away, obviously, what may be special sauce, are you actually using operators to do this, in the Kubernetes sense of the word? >>Yes. We've extended it with CRDs and operators and controllers, in the way that it was meant to be extended. Kubernetes has a recipe for how you extend its API, and that's what we used as our model. >>That, at least to me, makes enormous sense. Nick Chase, my colleague, and I were digging into operators a couple of weeks ago, and that's a very elegant technology. Obviously it's evolving very fast, but it's remarkably unintimidating once you start trying to write them. We were able to compose operators around cron and other simple processes in just a couple of minutes, and they worked, which I found pretty astonishing. >>Yeah, Kubernetes does a lot of things. Knowing that their API was going to be ubiquitous and that people would want to extend it, they spent a lot of effort in the early development days on defining that API: defining what an operator is, what a controller is, how they interact, and how a third party who doesn't know anything about the internals of Kubernetes can add whatever they want and follow the model that makes it work exactly as the native Kubernetes APIs do. >>What's also fascinating to me, and I've had a little perspective on this over the past several weeks or a month or so, working with various stakeholders inside the company around sessions related to this event, is that the understanding of how things work is by no means evenly distributed, even in a company as tightly knit as Mirantis. Some people, who shall remain nameless, have represented to me that Docker Enterprise Container Cloud basically works like this: you hand it some VMs and it makes things for you. And this is clearly not what's going on. What's going on is a lot more nuanced: it is using optimal resources from each provider to deliver really coherent, architected solutions, the load balancing, the DNS, the storage, all of which would ultimately be, and you've probably tried this, I certainly have, hard to script by yourself in Ansible or CloudFormation or whatever. This is not easy work.
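For readers following along, the pattern Rick describes here, extending the Kubernetes API with CRDs so that an entire cluster becomes a declarative object reconciled by a controller, is the alternative to the hand-rolled scripting John mentions. The sketch below is purely illustrative: the API group, resource kind, and field names are hypothetical and are not the actual Docker Enterprise Container Cloud schema, but they show roughly what a declarative cluster request looks like under this approach.

```yaml
# Hypothetical custom resource, loosely modeled on the CRD/operator pattern
# discussed above; the group, kind, and fields are illustrative only.
apiVersion: clusters.example.com/v1alpha1
kind: ManagedCluster
metadata:
  name: dev-team-cluster
spec:
  provider: aws                 # could equally be baremetal, openstack, vsphere, azure, or gcp
  region: us-east-1
  kubernetesVersion: "1.18"
  controlPlane:
    count: 3
    machineFlavor: m5.large     # provider-specific machine size
  workers:
    count: 5
    machineFlavor: m5.xlarge
# A developer submits this object with kubectl; an operator watching the
# resource provisions the machines, the operating system, and Kubernetes on
# the target cloud, then reports status back on the same object.
```

In a platform built this way, `kubectl apply` and `kubectl delete` on objects like this stand in for the per-provider consoles and scripts discussed earlier.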
About the middle of last year, for my prior employer, I wrote a deployer in Node.js against the raw AWS APIs for deployment and configuration of virtual networks and servers, and that was not a trivial project. It took a long time to get to a dependable result, and to do it in parallel and do the other things that you need to do in order to maintain speed. One of the things, in fact, that I've noticed in working with Docker Enterprise Container Cloud recently is how much parallelism it's capable of within single platforms. It's pretty powerful. If you want two clusters to be deployed simultaneously, that's not hard for Docker Enterprise Container Cloud to do, and I found it pretty remarkable, because I have sat in front of a single laptop trying to churn out a cluster under Ansible, for example, and you just watch it get into that serial nature: your poor little devil, it's going out and it's SSHing into terminals and it's pretending it's a person and it's doing all that stuff. This is much more magical. So that's all built into the system too, isn't it? >>Yeah, and a really interesting point on that is that the complexity isn't necessarily in just creating a virtual machine, because all of these companies have spent a lot of effort to make that as easy as possible. But when you get into networking, load balancing, routing, and storage, and hooking those up to containers, automating that in Terraform or Ansible or something like that is many, many, many lines of code. People have to experiment; you never get it right the first or second or even the third time, and then you have to maintain it. So one of the things we've heard from customers that have looked at Container Cloud is that they just can't wait to throw away the Ansible or the Terraform that they've been maintaining for a couple of years to enable this. It's very brittle. If the cloud changes something on the network side, say, that's really buried and not top of mind, your automation fails, or maybe worse, you think it works, and it's not until you actually go to use it that you notice you can't get at any of your containers. So it's really great the way we've simplified that for the users and, again, democratized it, so the developers and DevOps people can create these clusters with ease and not worry about all the complexities of networking and storage. >>Another thing that amazed me as I was digging into my first Docker Enterprise Container Cloud management cluster deployment was how, I don't want to use the word nuanced again, but I can't think of a better word, nuanced the security thinking is in how things are set up. How really delicate the thinking is about how much credential power you give to the deployer, to the seed server that deploys your management cluster, or rather, how much administrative access you give to the administrator who owns the entire implementation around a given provider versus how much power the seed server gets, because that gets its own user, right? It gets a bootstrap user specifically created so that it's not your administrator; it has more limited visibility and permissions.
And this whole hierarchy of permissions is then extended down into the child clusters that this management cluster will ultimately create, so that devs who request clusters get appropriate permissions granted within a corporate schema of permissions. They don't get the keys to the kingdom; they don't have access to anything they're not supposed to have access to, but within their own scope they're safe and can do anything they want. It's a really neat, elegant way of protecting organizations against, for example, resource overuse. If you give people the power to deploy clusters, you're basically giving them the power to make sure that a big bill hits your corporate accounting office at the end of the billing cycle, so there have to be controls, and those controls exist in this product. >>Yeah, and there are kind of two flavors of that. One is the day-one flavor, when you're doing the deployment: you mentioned the seed server, and then it creates a bastion server, and then it creates the management cluster and so forth, and all of those permissions are handled for you. And then once the system is running, you have full access to go into Keycloak, which is a very powerful open source identity management tool, and you have dozens of granular permissions that you can give to an individual user to allow them to do certain things and not others within the context of Kubernetes. So it's really well thought out, and the defaults are 80% right; very few people are going to have to go in and change those defaults. You mentioned the corporate directory: it hooks right up to LDAP or Active Directory and can pull everybody in, so there's no day-one work of having to add everybody you can think of across different teams and groupings of people. That all comes from the interface to the corporate directory, and so it makes managing the users and controlling who can do what really easy. Day one, day two, it's really almost like hour one, hour two, because all the defaults are really well thought out. You can deploy a very powerful Docker Enterprise Container Cloud within an hour and then just start using it. You can create users if you want, or use the default users that are set up; as time goes on you can fine-tune that. It's a really nice model, again, for the whole frictionless democratization of giving developers the ability to go in and get IT out of their way and do what they want to do. And IT is happy to do that, because they don't like dozens of tickets saying create a cluster for this team, create a cluster for that team, here's the size these guys want, resize this one. Let's move all of that into a self-service model and really fulfill the prophecy of speeding up application development.
>>It strikes me as extremely ironic that one of the things public cloud providers, bless them, have always claimed is that their products provide this democratization, when in my own experience, and the experience of most of the AWS developers I've encountered, not to name names, an initial attempt at starting a virtual machine and figuring out how to log into it on AWS could take the better part of an afternoon. It's just not familiar; once you have it in your fingers, boom, two seconds, right? But wow, that learning curve is steep and precipitous, and you slip back and make stupid mistakes your first couple thousand times through the loop. By letting people skip that, and letting them skip it potentially on multiple providers, I would think products like this are actually doing the public cloud industry a real service: hide as much of that as you can without taking the power away, because ultimately people want to control their destiny. They want choice for a reason, and they want access to the infinite services and innovation that AWS and Azure and Google are all building on their platforms. >>Yeah, and they're solving very broad problems in the public clouds. Here we're saying this is a world of containers, a world of orchestration of those containers, and why should I have to worry about the underlying infrastructure, whether it's a virtual machine or bare metal? I shouldn't care. If I'm an application developer writing some database application, the last thing I want to worry about is how to create a virtual machine. Oh, this one is running in Google; it's totally different from the one I was creating in AWS. I can't find where I get the IP address in Google; it's not like it was in AWS, and you have to relearn the whole thing. And that's really not what your job is anyway. Your job is to write database code, for example, and what you really want to do is just push a button, deploy an orchestrator, get your app on it, and start debugging it and getting it to work. >>Yep. It's powerful. I've been really excited to work with the product over the past week or so, and I hope that folks will look at the links at the bottoms of our thank-you slides and avail themselves of the free trial downloads of both Docker Enterprise Container Cloud and Lens. Thank you very much for spending this extra time with me, Rick. I think we've produced some added value here for attendees. >>Well, thank you, John. I appreciate your help. >>Have a great rest of your session. Bye-bye. >>Okay, thanks. Bye.

Published Date : Sep 16 2020


ON DEMAND: SWARM ON K8S


 

>>Welcome to the session, Long Live Swarm. With containers and Kubernetes everywhere, we have increasing cloud complexity at the same time that we're facing economic uncertainty, and of course, to navigate this, for most companies it's a matter of focusing on speed and on shipping and iterating their code faster. For many Mirantis customers, that means using Docker Swarm rather than Kubernetes to handle container orchestration. We really believe that the best way to increase your speed to production is choice, simplicity, and security, so we wanted to bring you a couple of experts to talk about the state of Swarm and Docker Enterprise and how you can make the best use of both of them. So let's get to it. Well, good afternoon or good morning, depending on where you are, and welcome to today's session, Long Live Swarm. I am Nick Chase, head of content here at Mirantis, and I would like to introduce you to our two panelists today. Ada Mancini, why don't you introduce yourself? >>I'm Ada Mancini. I'm a solutions architect here at Mirantis; I work primarily with the Docker Enterprise system, and I have a long history of working with the support team at what used to be Docker Enterprise, part of Docker Inc. >>Okay, great. And Don Bauer? >>Yeah, I'm Don Bauer, Docker Captain and Docker community leader. Right now I run our DevOps team for Citizens Bank out of Nashville, Tennessee, and I'm happy to be here. >>All right, excellent. So thank you both for coming. Now, before we say anything else, I want to go ahead and name the elephant in the room: there's been a lot of talk about the future. >>Yeah, that's right. Swarm, as it stands right now, we have a very vested interest in keeping functional for our customers who want to continue using it, and in keeping Swarm a viable alternative or complement to Kubernetes, however you see the orchestration war playing out, as it were. >>Okay, it's hardly a war at this point, but they do work together, and so that's... >>Absolutely. I definitely consider them more like complementary services, in a use-the-right-tool-for-the-job sort of sense. They both had different design goals when they were originally created, so I definitely don't see it as a completely one-or-the-other kind of decision; they can both be used in the same environment, in similar clusters, to run whatever workload you have. >>Excellent. And we'll get into the details of all that as we go along, so that's terrific. I have not really been involved in the Swarm area, so set the stage for us: where did all of this start out? Don, I know that you were involved, so set the stage for us. >>Sure. I've been a heavy user of Swarm in my past few roles professionally; we've been running containers in production with Swarm for coming up on about four years now. In our case, we looked at what was available at the time, and of course you had Kubernetes as your biggest contender out there, but like I just mentioned, one of the things that really led us to Swarm is that its design goals were very different from those of Kubernetes. Kubernetes tries to have an answer for absolutely every scenario, where Swarm tries to have an answer for, let's say, the 80% of problems or challenges that you might come across, 80% of the workloads.
I probably had a better way of saying that, but I think I got my point across. >>Yeah, I think you hit the nail on the head. Kubernetes in particular, in the way that Kubernetes itself is an API; I believe that Kubernetes was written as a toolkit. It wasn't really intended to be used by end users directly; it was really a way to build platforms that run containers. And because it's this really, really extensible API, you can extend it to manage all sorts of resources. Swarm doesn't have that extensibility aspect, but what it was designed to do, it does very, very well and very easily, in a very simple sort of way. It's highly opinionated about the way you should use the product, but it works very effectively, it's very easy to use, and it has a very low, not low effort, but a low barrier to entry. >>Yes, absolutely. I was going to touch on the same thing. It's very easy for someone to come in and pick up Swarm; they don't have to know anything about the orchestrator on day one. Most people getting into this space are very familiar with Docker Compose, and going from Docker Compose to Swarm is changing one command that you would run on the command line. >>Yeah, it's very trivial. If you are already used to building Dockerfiles and using Compose to organize your deployment into stacks of related components, it's trivial to turn on swarm mode and then deploy your container set to a cluster. >>Well, excellent. So answer this question for me: is the Swarm of today the same as the original Swarm? Like, when Swarm first started, is that the same as what we have now? >>It's kind of a complicated story with the Swarm project, because it's changed names and forms a few times. It originated somewhere around 2014, in the first version, and it was a component that you had to configure and set up separately from Docker. The way it was structured, you would have Docker installed on a number of servers or machines in your cluster, and then you would organize them into a swarm by bringing your own database and some of the tooling to get those nodes talking to each other and to organize your containers across all of your Docker engines. A few years later, the Swarm project was retooled and baked into the Docker engine, and this is where the name change comes from. So originally it was a feature that we called Swarm; then the SwarmKit project was released on GitHub and baked directly into the engine, where it was renamed swarm mode, because now it is a mode option that you just turn on as a button in the Docker engine. And because it's already there, the tuning knobs that you have in SwarmKit, things like what my timeouts are and some of these other performance settings, are locked: they're there as part of the opinionated set of components that builds up the Docker engine. We bring in the SwarmKit project with a certain set of defaults and settings, and that is how it operates in today's version of the Docker engine. >>Okay, that makes sense. So, Don, I know you have pretty strong feelings about this topic, but is Swarm still viable in a world that's increasingly dominated by Kubernetes? >>Absolutely. And you're right, I'm very passionate about this topic. Where I work,
we're doing almost all of our production workloads on Swarm. We have something like 600 different services, between three and four thousand containers at any given point in time. Out of all of those projects, all of those services, we've only run into two or three that don't really fit into the opinionated model of Swarm, and we are running those on Kubernetes in the same cluster, using Mirantis's Docker Enterprise offering. But that's a very, very small percentage of services that we didn't have an answer for in Swarm. The one case that really gets us just about every time is scaling stateful services, but you're going to have very few stateful services in most environments. For things like microservice architectures, which are predominantly what we build out, Swarm is perfect. It's simple, it's easy to use, and you don't end up going through miles of YAML files trying to figure out the one setting that you didn't get exactly right. The other big piece that really led us to adopting it so heavily in the beginning is the overlay network. Your networks don't have to span the whole cluster like they do with Kubernetes, so we could set up network isolation between service A and service B just by using the built-in overlay networks. That was a huge component that, like I said, led us to adopting it so heavily when we first got started. >>Excellent. You look like you're about to say something, Ada. >>Yeah, I think that speaks to the design goals for each piece of software. The way I've heard this described before is that, with regard to the networking piece, the Docker networking under the hood feels like it was written by a network engineer. The way the Docker engine overlay networks communicate uses VXLAN under the hood, which creates pseudo-VLANs for your containers, and if two containers aren't on the same VLAN, there's no way they can communicate with each other. That's as opposed to the design of Kubernetes networking, which is really left to the CNI implementation but still has the design philosophy of one big, flat subnet where every IP can reach every other IP, and you control what is allowed to access what by policy. So it's more of an application-focused design, whereas in Docker Swarm, on the overlay networking side, it's really more of a network engineering sort of focus, right?
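To make that contrast concrete, here is a minimal sketch of the Swarm side as Don describes it: a hypothetical stack file (service and image names are placeholders) in which the web service and the database share no overlay network and therefore cannot reach each other, deployed with the one-command workflow mentioned earlier.

```yaml
# docker-compose.yml -- hypothetical stack; service and image names are placeholders
version: "3.8"
services:
  web:
    image: example/web:latest
    networks: [frontend]
  api:
    image: example/api:latest
    networks: [frontend, backend]
  db:
    image: postgres:12
    networks: [backend]         # web cannot reach db: they share no overlay network
networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay
# With swarm mode enabled (docker swarm init), deploy the whole stack with:
#   docker stack deploy -c docker-compose.yml demo
```

The rough Kubernetes equivalent of that isolation, under its flat-network-plus-policy model, is a NetworkPolicy. The namespace and labels below are again hypothetical, and the policy is only enforced if the cluster's CNI plugin supports NetworkPolicy.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: db                   # applies to the database pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api          # only pods labeled app=api may connect
```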
Ah, that gives us ability to deploy plane containers for use cases that require it as well as swarm services for those kinds of workloads that might be better served by the built in load balancing and h A and scaling features that swarm provides. >>Okay, so now I know that at one point kubernetes was deployed within Docker Enterprise as you create a swarm cluster and then deploy kubernetes on top of swarm. >>Correct? That is how the current architecture works. >>Okay. All right. And then, um what is what is where we're going with this like, Are we supposed to? Are we going to running Swarm on top of kubernetes? What's >>the the design goals for the future of swarm within branches? Stocker Enterprise are that we will start the employing Ah, like kubernetes cluster features as the base and a swarm kit on top of kubernetes. So it is like you mentioned just a reversal of the roles. I think we're finding that, um, the ability to extend kubernetes a p I to manage resource is is valuable at an infrastructure and platform level in a way that we can't do with swarm. We still want to be able to run swarm workloads. So we're going to keep the swarm kit code the swarm kit orchestration features to run swarm services as a part of the platform to keep the >>got it. Okay, so, uh, if I'm a developer and I want to run swarm, but my company's running kubernetes what? What are my one of my options there? Well, I think >>eight touched on it pretty well already where you know, it depends on your design goals, and you know, one of the other things that's come up a few times is Thea. The level of entry for for swarm is much, much simpler than kubernetes. So I mean, it's it's kind of hard to introduce anything new. So I mean, a company, a company that's got most of their stuff in kubernetes and production is gonna have a hard time maybe looking at a swarm. I mean, this is gonna be, you know, higher, higher up, not the boots on the ground. But, um, you know, the the upper management, that's at some point, you have to pay for all their support, all of it. What we did in our approach. Because there was one team already using kubernetes. We went ahead and stood up a small cluster ah, small swarm cluster and taught the developers how to use it and how to deploy code to it. And they loved it. They thought it was super simple. A time went on, the other teams took notice and saw how fast these guys were getting getting code deployed, getting services up, getting things usable, and they would look over at what the innovation team was doing and say, Hey, I I want to do that to, uh, you know, so there's there's a bunch of different approaches. That's the approach we took and it worked out very well. It looks like you wanted to say something too. >>Yeah, I think that if you if you're if you're having to make this kind of decision, there isn't There isn't a wrong choice. Ah, it's never a swarm of its role and your organization, right? Right. If you're if you're an individual and you're using docker on your workstation on your laptop but your organization wants to standardize on kubernetes there, there are still some two rules that Mike over Ah, pose. And he's manifest if you need to deploy. Coop resource is, um if you are running Docker Enterprise Swarm kit code will still be there. And you can run swarm services as regular swarm workloads on that component. So I I don't want to I don't want people to think that they're going to be like, locked into one or the other orchestration system. 
Ah, there the way we want to enable developer choice so that however the developer wants to do their work, they can get it done. Um Docker desktop. Ah, ships with that kubernetes distribution bundled in it. So if you're using a Mac or Windows and that's your development, uh, system, you can run docker debt, turn on your mode and run the kubernetes bits. So you have the choices. You have the tools to deploy to either system. >>And that's one of the things that we were super excited about when they introduced Q. Burnett ease into the Docker Enterprise offering. So we were able to run both, so we didn't have to have that. I don't want to call it a battle or argument, but we didn't have to make anybody choose one or the other. We, you know, we gave them both options just by having Docker enterprise so >>excellent. So speaking of having both options, let's just say for developers who need to make a decision while should I go swarm, or should I go kubernetes when it sort of some of the things that they should think about? >>So I think that certain certain elements of, um, certain elements of containers are going to be agnostic right now. So the the the designing a docker file and building a container image, you're going to need to know that skill for either system that you choose to operate on. Ah, the swarm value. Some of the storm advantage comes in that you don't have to know anything beyond that. So you don't have to learn a whole new A p I a whole new domain specific language using Gamel to define your deployment. Um, chances are that if you've been using docker for any length of time, you probably have a whole stack of composed files that are related to things that you've worked on. And, um, again, the barrier to entry to getting those running on swarm is very low. You just turn it on docker stack, deploy, and you're good to go. So I think that if you're trying to make that choice, if you I have a use case that doesn't require you to manage new resource is if you don't need the Extensible researchers part, Ah, swarm is a great great, great viable option. >>Absolutely. Yeah, the the recommendation I've always made to people that are just getting started is start with swarm and then move into kubernetes and going through the the two of them, you're gonna figure out what fits your design principles. What fits your goals. Which one? You know which ones gonna work best for you. And there's no harm in choosing one or the other using both each one of you know, very tailor fit for very various types of use cases. And like I said, kubernetes is great at some things, but for a lot of other stuff, I still want to use swarm and vice versa. So >>on my home lab, for all my personal like services that I run in my, uh, my home network, I used storm, um, for things that I might deploy onto, you know, a bit this environment, a lot of the ones that I'm using right now are mainly tailored for kubernetes eso. I think especially some of the tools that are out there in the open source community as well as in docker Enterprise helped to bridge that gap like there's a translator that can take your compose file, turn it into kubernetes. Yeah, Mel's, um, if if you're trying to decide, like on the business side, should we standardize on former kubernetes? I think like your what? What functionality are you looking at? Out of getting out of your system? If you need things like tight integration into a ah infrastructure vendor such as AWS Azure or VM ware that might have, like plug ins for kubernetes. 
You're now you're getting into that area where you're managing Resource is of the infrastructure with your orchestration. AP I with kube so things like persistent volumes can talk to your storage device and carve off chunks of storage and assign those two pods if you don't have that need or that use case. Um, you know, KUBERNETES is bringing in a lot of these features that you maybe you're just not taking advantage of. Um, similarly, if you want to take advantage of things like auto scaling to scale horizontally, let's say you have a message queue system and then a number of workers, and you want to start scaling up on your workers. When your CPU hits a certain a metric. That is something that Kubernetes has built right into it. And so, if you want that, I would probably suggest that you look at kubernetes if you don't need that, or if you want to write some of that tooling yourself. Swarm doesn't have an object built into it that will do automatic horizontal scaling based on some kind of metric. So I always consider this decision as a what features are the most I available to you and your business that you need to Yep. >>All right. Excellent. Well, and, ah, fortunately, of course, they're both available on Docker Enterprise. So aren't we lucky? All right, so I am going to wrap this up. I want to thank Don Bauer Docker captain, for coming here and spending some time with us and eight of Manzini. I would like to thank you. I know that the the, uh, circumstances are less than ideal here for your recording today, but we appreciate you joining us. Um and ah, both of you. Thank you very much. And I want to invite all of you. First of all, thank you for joining us. We know your time is valuable and I want to invite you all Teoh to take a look at Docker Enterprise. Ah, follow the link that's on your screen and we'll see you in the next session. Thank you all so much. Thank you. >>Thank you, Nick.

Published Date : Sep 14 2020
