On Demand: Swarm on K8s
>>Welcome to the session, Long Live Swarm. With containers and Kubernetes everywhere, we have increasing cloud complexity at the same time that we're facing economic uncertainty, and for most companies, navigating this is a matter of focusing on speed, on shipping and iterating their code faster. For many Mirantis customers, that means using Docker Swarm rather than Kubernetes to handle container orchestration. We really believe that the best way to increase your speed to production is choice, simplicity, and security, so we wanted to bring you a couple of experts to talk about the state of Swarm and Docker Enterprise and how you can make the best use of both. So let's get to it. Well, good afternoon or good morning, depending on where you are, and welcome to today's session, Long Live Swarm. I am Nick Chase, head of content here at Mirantis, and I would like to introduce you to our two panelists today. Ada Mancini, why don't you introduce yourself? >>I am Ada Mancini. I'm a solutions architect here at Mirantis, and I work primarily with the Docker Enterprise system. I have a long history of working with the support team at what used to be Docker Enterprise, part of Docker Inc. >>Okay, great. And Don Bauer. >>Yeah, I'm Don Bauer. I'm a Docker Captain and Docker community leader. Right now I run our DevOps team for Citizens Bank out of Nashville, Tennessee, and I'm happy to be here. >>All right, excellent. Thank you both for coming. Now, before we say anything else, I want to go ahead and name the elephant in the room: there's been a lot of talk about the future of Swarm. >>Yeah, that's right. With Swarm as it stands right now, we have a very vested interest in keeping our customers who want to continue using Swarm functional, and in keeping Swarm a viable alternative or complement to Kubernetes, however you see the orchestration war playing out, as it were. >>Okay. It's hardly a war at this point, and they do work together, so... >>Absolutely. Yeah, I definitely consider them more complementary services, in a "use the right tool for the job" sort of sense. They both had different design goals when they were originally created, so I definitely don't see it as a completely one-or-the-other kind of decision. They can both be used in the same environment, and in similar clusters, to run whatever workload you have. >>Excellent, and we'll get into the details of all that as we go along, so that's terrific. Now, I have not really been involved in the Swarm area, so set the stage for us, where we started out with all of this. Don, I know that you were involved, so set the stage for us. >>Sure. I've been a heavy user of Swarm in my past few roles; professionally, we've been running containers in production with Swarm for coming up on about four years now. In our case, we looked at what was available at the time, and of course you had Kubernetes as your biggest contender out there. But like I just mentioned, one of the things that really led us to Swarm is that its design goals were very different from Kubernetes. Kubernetes tries to have an answer for absolutely every scenario, where Swarm tries to have an answer for, let's say, 80% of the problems or challenges you might come across, 80% of the workloads.
I had a better way of saying that, but I think I got my point across. >>Yeah, I think you hit the nail on the head. Kubernetes in particular, with the way that Kubernetes itself is an API: I believe that Kubernetes was written as a toolkit. It wasn't really intended to be used by end users directly; it was really a way to build platforms that run containers. And because it has this really, really extensible API, you can extend it to manage all sorts of resources. Swarm doesn't have that extensibility aspect, but what it was designed to do, it does very, very well, and very easily, in a very simple sort of way. It's highly opinionated about the way that you should use the product, but it works very effectively. It's very easy to use, and it's not so much low effort as a low barrier to entry. >>Yes, absolutely. I was going to touch on the same thing. It's very easy for someone to come in and pick up Swarm. They don't have to know anything about the orchestrator on day one. Most people that are getting into this space are very familiar with Docker Compose, and going from Docker Compose to Swarm is changing one command that you would run on the command line. >>Yeah, it's very trivial. If you are already used to building Dockerfiles and using Compose to organize your deployment into stacks of related components, it's trivial to turn on swarm mode and then deploy your container set to a cluster.
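To make that concrete, here is a minimal sketch of turning on swarm mode and deploying a small replicated service using the Docker SDK for Python; the image, service name, and port mapping are illustrative assumptions, not details from the session.

```python
import docker

client = docker.from_env()

# Turn the local engine into a single-node swarm
# (the API equivalent of `docker swarm init`).
client.swarm.init(advertise_addr="eth0")

# Deploy a replicated service, roughly what `docker stack deploy`
# does for each service defined in a Compose file.
service = client.services.create(
    image="nginx:alpine",                      # placeholder image
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
)
print(f"deployed service {service.name}")
```

In practice most teams drive the same thing declaratively with `docker stack deploy -c docker-compose.yml <stack>`, which is the one-command step Don refers to.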
>>Well, excellent. So answer this question for me: is the Swarm of today the same as the original Swarm? When Swarm first started, is that the same as what we have now? >>It's kind of a complicated story with the Swarm project, because it's changed names and forms a few times. It originated somewhere around 2014 in its first version, and it was a component that you had to configure and set up separately from Docker. The way it was structured, you would have Docker installed on a number of servers or machines in your cluster, and then you would organize them into a swarm by bringing your own database and some of the tooling to get those nodes talking to each other and to organize your containers across all of your Docker engines. A few years later, the Swarm project was retooled and baked into the Docker engine, and this is where we get the name change from. Originally it was a feature that we called Swarm; then the SwarmKit project was released on GitHub and baked directly into the engine, where it was renamed swarm mode, because now it is a mode that you just turn on in the Docker engine. And because it's already there, the tuning knobs that you have in SwarmKit, with regard to things like timeouts and some of the other performance settings, are locked. They're there as part of the opinionated set of components that builds up the Docker engine: we bring in the SwarmKit project with a certain set of defaults and settings, and that is how it operates in today's version of the Docker engine. >>Okay, that makes sense. So, Don, I know you have pretty strong feelings about this topic, but is Swarm still viable in a world that's increasingly dominated by Kubernetes? >>Absolutely, and you're right, I'm very passionate about this topic. Where I work, we're doing almost all of our production workloads on Swarm. We've got something like 600 different services and between 3,000 and 4,000 containers at any given point in time. Out of all of those projects, all of those services, we've only run into two or three that don't quite fit into the opinionated model of Swarm, so we are running those on Kubernetes in the same cluster using Mirantis's Docker Enterprise offering. But that's a very, very small percentage of services that we didn't have an answer for in Swarm. The one case that really gets us just about every time is scaling stateful services, but you're going to have very few stateful services in most environments. For things like microservice architecture, which is predominantly what we build, Swarm is perfect. It's simple, it's easy to use, and you don't end up going through miles of YAML files trying to figure out the one setting that you didn't get exactly right. The other big piece that really led us to adopting it so heavily in the beginning is the overlay network. Your networks don't have to span the whole cluster like they do with Kubernetes, so we could set up network isolation between service A and service B just by using the built-in overlay networks. That was a huge component that, like I said, led us to adopting it so heavily when we first got started. >>Excellent. You look like you're about to say something. >>Yeah, I think that speaks to the design goals for each piece of software. The way I've heard this described before, with regard to the networking piece, is that the Docker networking under the hood feels like it was written by a network engineer. The way the Docker engine overlay networks communicate uses VXLAN under the hood, which creates pseudo-VLANs for your containers, and if two containers aren't on the same VLAN, there's no way they can communicate with each other. That's as opposed to the design of Kubernetes networking, which is really left to the CNI implementation but still has the design philosophy of one big, flat subnet where every IP can reach every other IP, and you control what is allowed to access what by policy. So it's more of an application-focused design, whereas in Docker Swarm, on the overlay networking side, it's really a network engineering sort of focus. Right?
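As a rough sketch of the isolation Don describes, assuming swarm mode is already enabled (the image and service names here are hypothetical), two overlay networks behave like separate VLAN segments:

```python
import docker

client = docker.from_env()

# Each overlay network is its own VXLAN segment; services that do not share
# an overlay have no network path to each other at all.
client.networks.create("frontend", driver="overlay")
client.networks.create("backend", driver="overlay")

# service-a is reachable only on the backend overlay; service-b attaches to
# both, so it can talk to service-a and to anything on the frontend overlay.
client.services.create(image="registry.example.com/service-a:latest",
                       name="service-a", networks=["backend"])
client.services.create(image="registry.example.com/service-b:latest",
                       name="service-b", networks=["backend", "frontend"])
```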
>>Okay, got it. Well, so now how does all this fit in with Docker Enterprise? I understand there have been some changes in how Swarm is handled within Docker Enterprise coming with this new release. >>Swarm inside Docker Enterprise is represented as both the Swarm Classic legacy system that we shipped way back in 2014 and the swarm mode that is currently used in the Docker engine. The Swarm Classic back end gives us legacy support for running unmanaged plain containers on a cluster. If you were to take Docker CE right now, you would find that you couldn't just do a basic docker run against a whole cluster of machines. You can create services using the Swarm services API, but that legacy plain-container support is something you have to set up external Swarm in order to provide. So right now, the architecture of Docker Enterprise UCP is based on some of that legacy code from about five or six years ago. That gives us the ability to deploy plain containers for use cases that require it, as well as Swarm services for the kinds of workloads that might be better served by the built-in load balancing, HA, and scaling features that Swarm provides. >>Okay. So now, I know that at one point Kubernetes was deployed within Docker Enterprise by creating a swarm cluster and then deploying Kubernetes on top of Swarm. >>Correct, that is how the current architecture works. >>Okay, all right. And then where are we going with this? Are we going to be running Swarm on top of Kubernetes? >>The design goal for the future of Swarm within Mirantis Docker Enterprise is that we will start deploying Kubernetes cluster features as the base, with SwarmKit on top of Kubernetes. So it is, like you mentioned, just a reversal of the roles. I think we're finding that the ability to extend the Kubernetes API to manage resources is valuable at an infrastructure and platform level in a way that we can't do with Swarm. We still want to be able to run Swarm workloads, so we're going to keep the SwarmKit code, the SwarmKit orchestration features, to run Swarm services as a part of the platform. >>Got it. Okay, so if I'm a developer and I want to run Swarm, but my company's running Kubernetes, what are my options there? >>Well, I think Ada touched on it pretty well already: it depends on your design goals. One of the other things that's come up a few times is that the level of entry for Swarm is much, much simpler than for Kubernetes. It's kind of hard to introduce anything new, so a company that's got most of their stuff in Kubernetes in production is going to have a hard time looking at Swarm. That's going to be higher up, not the boots on the ground but upper management, because at some point somebody has to pay to support all of it. In our approach, because there was one team already using Kubernetes, we went ahead and stood up a small Swarm cluster and taught the developers how to use it and how to deploy code to it, and they loved it. They thought it was super simple. As time went on, the other teams took notice and saw how fast these folks were getting code deployed, getting services up, getting things usable, and they would look over at what the innovation team was doing and say, hey, I want to do that too. So there are a bunch of different approaches; that's the approach we took, and it worked out very well. It looks like you wanted to say something too. >>Yeah, I think that if you're having to make this kind of decision, there isn't a wrong choice, whatever Swarm's role is in your organization. If you're an individual and you're using Docker on your workstation, on your laptop, but your organization wants to standardize on Kubernetes, there are still tools that can convert Compose files into Kubernetes manifests if you need to deploy Kube resources. And if you are running Docker Enterprise, the SwarmKit code will still be there, and you can run Swarm services as regular Swarm workloads on that component. So I don't want people to think that they're going to be locked into one or the other orchestration system.
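One such Compose-to-Kubernetes translator is Kompose; a minimal sketch of driving it from Python follows (the file and directory names are placeholders, and the kompose and kubectl CLIs are assumed to be installed).

```python
import subprocess

# Convert an existing Compose file into Kubernetes manifests;
# kompose is assumed to be on the PATH.
subprocess.run(
    ["kompose", "convert", "-f", "docker-compose.yml", "-o", "k8s/"],
    check=True,
)

# The generated manifests can then be applied to a cluster with kubectl.
subprocess.run(["kubectl", "apply", "-f", "k8s/"], check=True)
```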
The way we want to enable developer choice is so that however the developer wants to do their work, they can get it done. Docker Desktop ships with a Kubernetes distribution bundled in it, so if you're using a Mac or Windows and that's your development system, you can run Docker Desktop, turn on Kubernetes, and run the Kubernetes bits. So you have the choices, and you have the tools, to deploy to either system. >>And that's one of the things that we were super excited about when they introduced Kubernetes into the Docker Enterprise offering. We were able to run both, so we didn't have to have that, I don't want to call it a battle or an argument, but we didn't have to make anybody choose one or the other. We gave them both options just by having Docker Enterprise. >>Excellent. So speaking of having both options, let's just say, for developers who need to make a decision, should I go Swarm or should I go Kubernetes, what are some of the things they should think about? >>So I think that certain elements of containers are going to be agnostic. Designing a Dockerfile and building a container image: you're going to need that skill for either system you choose to operate on. Some of the Swarm advantage comes in that you don't have to know anything beyond that. You don't have to learn a whole new API, a whole new domain-specific language in YAML, to define your deployment. Chances are that if you've been using Docker for any length of time, you probably have a whole stack of Compose files related to things you've worked on, and again, the barrier to entry to getting those running on Swarm is very low. You just turn it on, docker stack deploy, and you're good to go. So I think that if you're trying to make that choice, and you have a use case that doesn't require you to manage new resources, if you don't need the extensible-resources part, Swarm is a great, viable option. >>Absolutely. Yeah, the recommendation I've always made to people who are just getting started is to start with Swarm and then move into Kubernetes, and in going through the two of them, you're going to figure out what fits your design principles, what fits your goals, and which one is going to work best for you. There's no harm in choosing one or the other, or in using both; each one is very tailor-fit for various types of use cases. Like I said, Kubernetes is great at some things, but for a lot of other stuff I still want to use Swarm, and vice versa. >>In my home lab, for all the personal services that I run on my home network, I use Swarm. For things that I might deploy into a business environment, a lot of the ones I'm using right now are mainly tailored for Kubernetes. I think some of the tools out there in the open source community, as well as in Docker Enterprise, help to bridge that gap; for example, there's a translator that can take your Compose file and turn it into Kubernetes YAML. If you're trying to decide on the business side whether to standardize on Swarm or Kubernetes, I think you should ask what functionality you need to get out of your system. If you need things like tight integration with an infrastructure vendor such as AWS, Azure, or VMware, which might have plugins for Kubernetes, then you're getting into the area where you're managing the resources of the infrastructure with your orchestration API. With Kube, things like persistent volumes can talk to your storage device, carve off chunks of storage, and assign those to pods. If you don't have that need or that use case, Kubernetes is bringing in a lot of features that maybe you're just not taking advantage of. Similarly, if you want to take advantage of something like autoscaling to scale horizontally, let's say you have a message queue system and a number of workers, and you want to start scaling up your workers when your CPU hits a certain metric, that is something Kubernetes has built right into it. So if you want that, I would suggest you look at Kubernetes. If you don't need it, or if you want to write some of that tooling yourself, Swarm doesn't have an object built into it that will do automatic horizontal scaling based on some kind of metric. So I always consider this decision as a question of which features are the most valuable to you and your business.
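For comparison, here is a sketch of what that contrast looks like on each side, assuming a hypothetical "worker" Deployment on Kubernetes and a "worker" service on Swarm; it uses the official Kubernetes and Docker Python clients rather than anything shown in the session.

```python
import docker
from kubernetes import client as k8s, config

config.load_kube_config()

# Kubernetes: autoscaling is a built-in object. Keep the "worker" Deployment
# between 2 and 10 replicas, targeting roughly 70% CPU utilization.
hpa = k8s.V1HorizontalPodAutoscaler(
    metadata=k8s.V1ObjectMeta(name="worker-hpa"),
    spec=k8s.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=k8s.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="worker"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
k8s.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)

# Swarm: there is no built-in autoscaler object; scaling is an explicit call,
# the same thing `docker service scale worker=5` does on the CLI, so any
# metric-driven logic has to live in your own tooling.
docker.from_env().services.get("worker").scale(5)
```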
>>Yep. All right, excellent. Well, fortunately, of course, they're both available on Docker Enterprise, so aren't we lucky? All right, I am going to wrap this up. I want to thank Don Bauer, Docker Captain, for coming here and spending some time with us, and Ada Mancini, I would like to thank you as well; I know the circumstances are less than ideal for your recording today, but we appreciate you joining us. Both of you, thank you very much. And to all of you watching: thank you for joining us. We know your time is valuable, and I want to invite you to take a look at Docker Enterprise. Follow the link that's on your screen, and we'll see you in the next session. Thank you all so much. Thank you. >>Thank you, Nick.
Reliance Jio: OpenStack for Mobile Telecom Services
>>Hi, everyone. My name is my uncle. My uncle Poor I worked with Geo reminds you in India. We call ourselves Geo Platforms. Now on. We've been recently in the news. You've raised a lot off funding from one of the largest, most of the largest tech companies in the world. And I'm here to talk about Geos Cloud Journey, Onda Mantis Partnership. I've titled it the story often, Underdog becoming the largest telecom company in India within four years, which is really special. And we're, of course, held by the cloud. So quick disclaimer. Right. The content shared here is only for informational purposes. Um, it's only for this event. And if you want to share it outside, especially on social media platforms, we need permission from Geo Platforms limited. Okay, quick intro about myself. I am a VP of engineering a geo. I lead the Cloud Services and Platforms team with NGO Andi. I mean the geo since the beginning, since it started, and I've seen our cloud footprint grow from a handful of their models to now eight large application data centers across three regions in India. And we'll talk about how we went here. All right, Let's give you an introduction on Geo, right? Giorgio is on how we became the largest telecom campaign, India within four years from 0 to 400 million subscribers. And I think there are There are a lot of events that defined Geo and that will give you an understanding off. How do you things and what you did to overcome massive problems in India. So the slide that I want to talkto is this one and, uh, I The headline I've given is, It's the Geo is the fastest growing tech company in the world, which is not a new understatement. It's eggs, actually, quite literally true, because very few companies in the world have grown from zero to 400 million subscribers within four years paying subscribers. And I consider Geo Geos growth in three phases, which I have shown on top. The first phase we'll talk about is how geo grew in the smartphone market in India, right? And what we did to, um to really disrupt the telecom space in India in that market. Then we'll talk about the feature phone phase in India and how Geo grew there in the future for market in India. and then we'll talk about what we're doing now, which we call the Geo Platforms phase. Right. So Geo is a default four g lt. Network. Right. So there's no to geo three g networks that Joe has, Um it's a state of the art four g lt voiceover lt Network and because it was designed fresh right without any two D and three G um, legacy technologies, there were also a lot of challenges Lawn geo when we were starting up. One of the main challenges waas that all the smart phones being sold in India NGOs launching right in 2000 and 16. They did not have the voice or lt chip set embedded in the smartphone because the chips it's far costlier to embed in smartphones and India is a very price and central market. So none of the manufacturers were embedding the four g will teach upset in the smartphones. But geos are on Lee a volte in network, right for the all the network. So we faced a massive problem where we said, Look there no smartphones that can support geo. So how will we grow Geo? So in order to solve that problem, we launched our own brand of smartphones called the Life um, smartphones. And those phones were really high value devices. So there were $50 and for $50 you get you You At that time, you got a four g B storage space. A nice big display for inch display. Dual cameras, Andi. Most importantly, they had volte chip sets embedded in them. 
Right? And that got us our initial customers the initial for the launch customers when we launched. But more importantly, what that enabled other oh, EMS. What that forced the audience to do is that they also had to launch similar smartphones competing smartphones with voltage upset embedded in the same price range. Right. So within a few months, 3 to 4 months, um, all the other way EMS, all the other smartphone manufacturers, the Samsung's the Micromax is Micromax in India, they all had volte smartphones out in the market, right? And I think that was one key step We took off, launching our own brand of smartphone life that helped us to overcome this problem that no smartphone had. We'll teach upsets in India and then in order. So when when we were launching there were about 13 telecom companies in India. It was a very crowded space on demand. In order to gain a foothold in that market, we really made a few decisions. Ah, phew. Key product announcement that really disrupted this entire industry. Right? So, um, Geo is a default for GLT network itself. All I p network Internet protocol in everything. All data. It's an all data network and everything from voice to data to Internet traffic. Everything goes over this. I'll goes over Internet protocol, and the cost to carry voice on our smartphone network is very low, right? The bandwidth voice consumes is very low in the entire Lt band. Right? So what we did Waas In order to gain a foothold in the market, we made voice completely free, right? He said you will not pay anything for boys and across India, we will not charge any roaming charges across India. Right? So we made voice free completely and we offer the lowest data rates in the world. We could do that because we had the largest capacity or to carry data in India off all the other telecom operators. And these data rates were unheard off in the world, right? So when we launched, we offered a $2 per month or $3 per month plan with unlimited data, you could consume 10 gigabytes of data all day if you wanted to, and some of our subscriber day. Right? So that's the first phase off the overgrowth and smartphones and that really disorders. We hit 100 million subscribers in 170 days, which was very, very fast. And then after the smartphone faith, we found that India still has 500 million feature phones. And in order to grow in that market, we launched our own phone, the geo phone, and we made it free. Right? So if you take if you took a geo subscription and you carried you stayed with us for three years, we would make this phone tree for your refund. The initial deposit that you paid for this phone and this phone had also had quite a few innovations tailored for the Indian market. It had all of our digital services for free, which I will talk about soon. And for example, you could plug in. You could use a cable right on RCR HDMI cable plug into the geo phone and you could watch TV on your big screen TV from the geophones. You didn't need a separate cable subscription toe watch TV, right? So that really helped us grow. And Geo Phone is now the largest selling feature phone in India on it. 100 million feature phones in India now. So now now we're in what I call the geo platforms phase. We're growing of a geo fiber fiber to the home fiber toe the office, um, space. And we've also launched our new commerce initiatives over e commerce initiatives and were steadily building platforms that other companies can leverage other companies can use in the Jeon o'clock. Right? 
So this is how a small startup not a small start, but a start of nonetheless least 400 million subscribers within four years the fastest growing tech company in the world. Next, Geo also helped a systemic change in India, and this is massive. A lot of startups are building on this India stack, as people call it, and I consider this India stack has made up off three things, and the acronym I use is jam. Trinity, right. So, um, in India, systemic change happened recently because the Indian government made bank accounts free for all one billion Indians. There were no service charges to store money in bank accounts. This is called the Jonathan. The J. GenDyn Bank accounts. The J out off the jam, then India is one of the few countries in the world toe have a digital biometric identity, which can be used to verify anyone online, which is huge. So you can simply go online and say, I am my ankle poor on duh. I verify that this is indeed me who's doing this transaction. This is the A in the jam and the last M stands for Mobil's, which which were held by Geo Mobile Internet in a plus. It is also it is. It also stands for something called the U. P I. The United Unified Payments Interface. This was launched by the Indian government, where you can carry digital transactions for free. You can transfer money from one person to the to another, essentially for free for no fee, right so I can transfer one group, even Indian rupee to my friend without paying any charges. That is huge, right? So you have a country now, which, with a with a billion people who are bank accounts, money in the bank, who you can verify online, right and who can pay online without any problems through their mobile connections held by G right. So suddenly our market, our Internet market, exploded from a few million users to now 506 106 100 million mobile Internet users. So that that I think, was a massive such a systemic change that happened in India. There are some really large hail, um, numbers for this India stack, right? In one month. There were 1.6 billion nuclear transactions in the last month, which is phenomenal. So next What is the impact of geo in India before you started, we were 155th in the world in terms off mobile in terms of broadband data consumption. Right. But after geo, India went from one 55th to the first in the world in terms of broadband data, largely consumed on mobile devices were a mobile first country, right? We have a habit off skipping technology generation, so we skip fixed line broadband and basically consuming Internet on our mobile phones. On average, Geo subscribers consumed 12 gigabytes of data per month, which is one of the highest rates in the world. So Geo has a huge role to play in making India the number one country in terms off broad banded consumption and geo responsible for quite a few industry first in the telecom space and in fact, in the India space, I would say so before Geo. To get a SIM card, you had to fill a form off the physical paper form. It used to go toe Ah, local distributor. And that local distributor is to check the farm that you feel incorrectly for your SIM card and then that used to go to the head office and everything took about 48 hours or so, um, to get your SIM card. And sometimes there were problems there also with a hard biometric authentication. We enable something, uh, India enable something called E K Y C Elektronik. Know your customer? 
We took a fingerprint scan at our point of Sale Reliance Digital stores, and within 15 minutes we could verify within a few minutes. Within a few seconds we could verify that person is indeed my hunk, right, buying the same car, Elektronik Lee on we activated the SIM card in 15 minutes. That was a massive deal for our growth. Initially right toe onboard 100 million customers. Within our and 70 days. We couldn't have done it without be K. I see that was a massive deal for us and that is huge for any company starting a business or start up in India. We also made voice free, no roaming charges and the lowest data rates in the world. Plus, we gave a full suite of cloud services for free toe all geo customers. For example, we give goTV essentially for free. We give GOTV it'll law for free, which people, when we have a launching, told us that no one would see no one would use because the Indians like watching TV in the living rooms, um, with the family on a big screen television. But when we actually launched, they found that GOTV is one off our most used app. It's like 70,000,080 million monthly active users, and now we've basically been changing culture in India where culture is on demand. You can watch TV on the goal and you can pause it and you can resume whenever you have some free time. So really changed culture in India, India on we help people liver, digital life online. Right, So that was massive. So >>I'm now I'd like to talk about our cloud >>journey on board Animal Minorities Partnership. We've been partners that since 2014 since the beginning. So Geo has been using open stack since 2014 when we started with 14 note luster. I'll be one production environment One right? And that was I call it the first wave off our cloud where we're just understanding open stack, understanding the capabilities, understanding what it could do. Now we're in our second wave. Where were about 4000 bare metal servers in our open stack cloud multiple regions, Um, on that around 100,000 CPU cores, right. So it's a which is one of the bigger clouds in the world, I would say on almost all teams, with Ngor leveraging the cloud and soon I think we're going to hit about 10,000 Bama tools in our cloud, which is massive and just to give you a scale off our network, our in French, our data center footprint. Our network introduction is about 30 network data centers that carry just network traffic across there are there across India and we're about eight application data centers across three regions. Data Center is like a five story building filled with servers. So we're talking really significant scale in India. And we had to do this because when we were launching, there are the government regulation and try it. They've gotten regulatory authority of India, mandates that any telecom company they have to store customer data inside India and none of the other cloud providers were big enough to host our clothes. Right. So we we made all this intellectual for ourselves, and we're still growing next. I love to show you how we grown with together with Moran says we started in 2014 with the fuel deployment pipelines, right? And then we went on to the NK deployment. Pipelines are cloud started growing. We started understanding the clouds and we picked up M C p, which has really been a game changer for us in automation, right on DNA. Now we are in the latest release, ofem CPM CPI $2019 to on open stack queens, which on we've just upgraded all of our clouds or the last few months. Couple of months, 2 to 3 months. 
So we've done about nine production clouds and there are about 50 internal, um, teams consuming cloud. We call as our tenants, right. We have open stack clouds and we have communities clusters running on top of open stack. There are several production grade will close that run on this cloud. The Geo phone, for example, runs on our cloud private cloud Geo Cloud, which is a backup service like Google Drive and collaboration service. It runs out of a cloud. Geo adds G o g S t, which is a tax filing system for small and medium enterprises, our retail post service. There are all these production services running on our private clouds. We're also empaneled with the government off India to provide cloud services to the government to any State Department that needs cloud services. So we were empaneled by Maiti right in their ego initiative. And our clouds are also Easter. 20,000 certified 20,000 Colin one certified for software processes on 27,001 and said 27,017 slash 18 certified for security processes. Our clouds are also P our data centers Alsop a 942 be certified. So significant effort and investment have gone toe These data centers next. So this is where I think we've really valued the partnership with Morantes. Morantes has has trained us on using the concepts of get offs and in fries cold, right, an automated deployments and the tool change that come with the M C P Morantes product. Right? So, um, one of the key things that has happened from a couple of years ago to today is that the deployment time to deploy a new 100 north production cloud has decreased for us from about 55 days to do it in 2015 to now, we're down to about five days to deploy a cloud after the bear metals a racked and stacked. And the network is also the physical network is also configured, right? So after that, our automated pipelines can deploy 100 0 clock in five days flight, which is a massive deal for someone for a company that there's adding bear metals to their infrastructure so fast, right? It helps us utilize our investment, our assets really well. By the time it takes to deploy a cloud control plane for us is about 19 hours. It takes us two hours to deploy a compu track and it takes us three hours to deploy a storage rack. Right? And we really leverage the re class model off M C. P. We've configured re class model to suit almost every type of cloud that we have, right, and we've kept it fairly generous. It can be, um, Taylor to deploy any type of cloud, any type of story, nor any type of compute north. Andi. It just helps us automate our deployments by putting every configuration everything that we have in to get into using infra introduction at school, right plus M. C. P also comes with pipelines that help us run automated tests, automated validation pipelines on our cloud. We also have tempest pipelines running every few hours every three hours. If I recall correctly which run integration test on our clouds to make sure the clouds are running properly right, that that is also automated. The re class model and the pipelines helpers automate day to operations and changes as well. There are very few seventh now, compared toa a few years ago. It very rare. It's actually the exception and that may be because off mainly some user letter as opposed to a cloud problem. We also have contributed auto healing, Prometheus and Manager, and we integrate parameters and manager with our even driven automation framework. 
Currently, we're using Stack Storm, but you could use anyone or any event driven automation framework out there so that it indicates really well. So it helps us step away from constantly monitoring our cloud control control planes and clothes. So this has been very fruitful for us and it has actually apps killed our engineers also to use these best in class practices like get off like in France cord. So just to give you a flavor on what stacks our internal teams are running on these clouds, Um, we have a multi data center open stack cloud, and on >>top of that, >>teams use automation tools like terra form to create the environments. They also create their own Cuba these clusters and you'll see you'll see in the next slide also that we have our own community that the service platform that we built on top of open stack to give developers development teams NGO um, easy to create an easy to destroy Cuban. It is environment and sometimes leverage the Murano application catalog to deploy using heats templates to deploy their own stacks. Geo is largely a micro services driven, Um um company. So all of our applications are micro services, multiple micro services talking to each other, and the leverage develops. Two sets, like danceable Prometheus, Stack stone from for Otto Healing and driven, not commission. Big Data's tax are already there Kafka, Patches, Park Cassandra and other other tools as well. We're also now using service meshes. Almost everything now uses service mesh, sometimes use link. Erred sometimes are experimenting. This is Theo. So So this is where we are and we have multiple clients with NGO, so our products and services are available on Android IOS, our own Geo phone, Windows Macs, Web, Mobile Web based off them. So any client you can use our services and there's no lock in. It's always often with geo, so our sources have to be really good to compete in the open Internet. And last but not least, I think I love toe talk to you about our container journey. So a couple of years ago, almost every team started experimenting with containers and communities and they were demand for as a platform team. They were demanding community that the service from us a manage service. Right? So we built for us, it was much more comfortable, much more easier toe build on top of open stack with cloud FBI s as opposed to doing this on bare metal. So we built a fully managed community that a service which was, ah, self service portal, where you could click a button and get a community cluster deployed in your own tenant on Do the >>things that we did are quite interesting. We also handle some geo specific use cases. So we have because it was a >>manage service. We deployed the city notes in our own management tenant, right? We didn't give access to the customer to the city. Notes. We deployed the master control plane notes in the tenant's tenant and our customers tenant, but we didn't give them access to the Masters. We didn't give them the ssh key the workers that the our customers had full access to. And because people in Genova learning and experimenting, we gave them full admin rights to communities customers as well. So that way that really helped on board communities with NGO. And now we have, like 15 different teams running multiple communities clusters on top, off our open stack clouds. We even handle the fact that there are non profiting. I people separate non profiting I peoples and separate production 49 p pools NGO. 
So you could create these clusters in whatever environment that non prod environment with more open access or a prod environment with more limited access. So we had to handle these geo specific cases as well in this communities as a service. So on the whole, I think open stack because of the isolation it provides. I think it made a lot of sense for us to do communities our service on top off open stack. We even did it on bare metal, but that not many people use the Cuban, indeed a service environmental, because it is just so much easier to work with. Cloud FBI STO provision much of machines and covering these clusters. That's it from me. I think I've said a mouthful, and now I love for you toe. I'd love to have your questions. If you want to reach out to me. My email is mine dot capulet r l dot com. I'm also you can also message me on Twitter at my uncouple. So thank you. And it was a pleasure talking to you, Andre. Let let me hear your questions.
Speed K8S Dev Ops Secure Supply Chain
>>this session will be reviewing the power benefits of implementing a secure software supply chain and how we can gain a cloud like experience with flexibility, speed and security off modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre sales team here. Um Iran. Tous I spent the last six years working with customers on their container ization journey. One thing almost every one of my customers is focused on how they can leverage the speed and agility benefits of contain arising their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we could provide flexibility all layers of the stack from the infrastructure on up to the application layer. When building a secure supply chain for container focus platforms, I generally see two different mindsets in terms of where the responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits the organization's goals around how modern applications are built and delivered. Yeah. First, let's take a look at the developer or application team approach. This approach follows Mawr of the Dev ops philosophy, where a developer and application teams are the owners of their applications. From the development through their life cycle, all the way to production. I would refer this more of a self service model of application, delivery and promotion when deployed to a container platform. This is fairly common organizations where full stack responsibilities have been delegated to the application teams, even in organizations were full stack ownership doesn't exist. I see the self service application deployment model work very well in lab development or non production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers and other organizations. There's a strong separation between responsibilities for developers and I T operations. This is often do the complex nature of controlled processes related to the compliance and regulatory needs. Developers are responsible for their application development. This can either include doctorate the development layer or b'more traditional throw it over the wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where we can take container platforms and be delivered as a service to other consumers inside of the I T organization. This is fairly prescriptive, in the manner of which application teams would consume it. When examining the two approaches, there are pros and cons to each process. Controls and appliance are often seen as inhibitors to speak. Self service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self service is great without visibility into the utilization and optimization of those environments, it continues the cycles of inefficient resource utilization and the true infrastructure is a code. Experience requires Dev ops related coding skills that teams often have in pockets but maybe aren't ingrained in the company culture. 
Luckily for us, there is a middle ground for all of this Doc Enterprise Container Cloud provides the foundation for the cloud like experience on any infrastructure without all of the out of the box security and controls that are professional services Team and your operations team spend their time designing and implementing. This removes much of the additional work and worry Run, ensuring that your clusters and experiences are consistent while maintaining the ideal self service model, no matter if it is a full stack ownership or easing the needs of I T operations. We're also bringing the most natural kubernetes experience today with winds to allow for multi cluster visibility that is both developer and operator friendly. Let's provides immediate feedback for the health of your applications. Observe ability for your clusters. Fast context, switching between environments and allowing you to choose the best in tool for the task at hand. Whether is three graphical user interface or command line interface driven. Combining the cloud like experience with the efficiencies of a secure supply chain that meet your needs brings you the best of both worlds. You get Dave off speed with all the security controls to meet the regulations your business lives by. We're talking about more frequent deployments. Faster time to recover from application issues and better code quality, as you can see from our clusters we have worked with were able to tie these processes back to real cost savings, riel efficiency and faster adoption. This all adds up to delivering business value to end users in the overall perceived value. Now let's look at see how we're able to actually build a secure supply chain. Help deliver these sorts of initiatives in our example. Secure Supply chain. We're utilizing doctor desktop to help with consistency of developer experience. Get hub for our source Control Jenkins for a C A C D. Tooling the doctor trusted registry for our secure container registry in the universal control playing to provide us with our secure container run time with kubernetes and swarm. Providing a consistent experience no matter where are clusters are deployed. You work with our teams of developers and operators to design a system that provides a fast, consistent and secure experience for my developers that works for any application. Brownfield or Greenfield monolith or micro service on boarding teams could be simplified with integrations into enterprise authentication services. Calls to get help repositories. Jenkins Access and Jobs, Universal Control Plan and Dr Trusted registry teams and organizations. Cooper down his name space with access control, creating doctor trusted registry named spaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the C I c D process, including Jenkins. So let's start with Dr Desktop from the doctor desktop standpoint, what should be utilizing visual studio code and Dr Desktop to provide a consistent developer experience. So no matter if we have one developer or 100 we're gonna be able to walk through the consistent process through docker container utilization at the development layer. Once we've made our changes to our code will be able to check those into our source code repository in this case, abusing Get up. 
Then, when Jenkins picks up, it will check out that code from our source code repository, build our doctor containers, test the application that will build the image, and then it will take the image and push it toward doctor trusted registry. From there, we can scan the image and then make sure it doesn't have any vulnerabilities. Then we consign them. So once we signed our images, we've deployed our application to Dev. We can actually test their application deployed in our real environment. Jenkins will then test the deployed application, and if all tests show that is good, will promote the r R Dr and Mr Production. So now let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as is deployed today. Here, we can see that we have a change that we want to make on our application. So marketing Team says we need to change containerized injure next to something more Miranda's branded. So let's take a look at visual studio coat, which will be using for I D to change our application. So here's our application. We have our code loaded, and we're gonna be able to use Dr Desktop on our local environment with our doctor desktop plug in for visual studio code to be able to build our application inside of doctor without needing to run any command line. Specific tools here is our code will be able to interact with docker, make our changes, see it >>live and be able to quickly see if our changes actually made the impact that we're expecting our application. Let's find our updated tiles for application and let's go and change that to our Miranda sized into next. Instead of containerized in genetics, so will change in the title and on the front page of the application, so that we save. That changed our application. We can actually take a look at our code here in V s code. >>And as simple as this, we can right click on the docker file and build our application. We give it a name for our Docker image and V s code will take care of the automatic building of our application. So now we have a docker image that has everything we need in our application inside of that image. So here we can actually just right click on the image tag that we just created and do run this winter, actively run the container for us and then what's our containers running? We could just right click and open it up in a browser. So here we can see the change to our application as it exists live. So once we can actually verify that our applications working as expected, weaken, stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here we're going to go ahead and make a commit message to say that we updated to our Mantis branding. We will commit that change and then we'll push it to our source code repository again. In this case we're using get Hub to be able to use our source code repository. So here in V s code will have that pushed here to our source code repository. And then we'll move on to our next environment, which is Jenkins. Jenkins is gonna be picking up those changes for our application, and it checked it out from our source code repository. So get Hub Notifies Jenkins. That there is a change checks out. The code builds our doctor image using the doctor file. 
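As a rough, editor-added illustration of the build-and-push stage in a pipeline like the one described here, the following sketch uses the Docker SDK for Python; the registry host, repository, tag, and credentials are placeholders, and a real Jenkins job would more commonly run the equivalent docker CLI steps.

```python
import docker

client = docker.from_env()

# Build the image from the checked-out Dockerfile and tag it for the
# trusted registry (hostname and repository are placeholders).
image, _build_logs = client.images.build(
    path=".", tag="dtr.example.com/dev/webapp:1.0")

# Authenticate and push; scanning, signing, and promotion policies then
# take over on the registry side.
client.login(registry="dtr.example.com",
             username="ci-user", password="not-a-real-secret")
for line in client.images.push("dtr.example.com/dev/webapp",
                               tag="1.0", stream=True, decode=True):
    print(line)
```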
So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, running our tests, pushing the image to Docker Trusted Registry, scanning it, and signing our image in Docker Trusted Registry, and then deploying to our development environment. So let's take a look at that development environment as it's been deployed. Here we can see that the title has been updated on our application, so we can verify that it looks good in development. If we jump back to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests against the development environment. Everything worked as expected, so it promoted that image to the production repository in our Docker Trusted Registry, where we also sign that image. By signing it, we are saying: yes, we have signed off that it made it through our integration tests and is ready for production. So here in Jenkins we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner.
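DTR performs the promotion itself based on the configured policy, but a manual equivalent of the promote-and-sign step looks roughly like this (the repository names and tag are hypothetical):

```sh
# Pull the image that passed integration tests, retag it into the production repo, push, and sign
docker pull dtr.example.com/engineering/simple-nginx:42
docker tag  dtr.example.com/engineering/simple-nginx:42 dtr.example.com/production/simple-nginx:42
docker push dtr.example.com/production/simple-nginx:42
docker trust sign dtr.example.com/production/simple-nginx:42   # records the sign-off for production
```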
Once the images have been mirrored into a staging area of our DACA trusted registry, we can then scan them to ensure that the images meet our security requirements and then, based off the scan result, promote the image toe a public repository where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know a secure and controlled within our environment. So from here we confined our updated doctor image in our doctor trust registry, where we can see that the vulnerabilities have been resolved from a developers point of view, that's about a smooth process gets. Now let's take a look at how we could provide that secure content for developers and our own Dr Trusted registry. So in this case, we're taking a look at our Alpine image that we've mirrored into our doctor trusted registry. Here we're looking at the staging area where the images get temporarily pulled because we have to pull them in order to actually be able to scan them. So here we set up nearing and we can quickly turn it on by making active. Then we can see that our image mirroring will pull our content from Dr Hub and then make it available in our doctor trusted registry in an automatic fashion. So from here, we can actually take a look at the promotions to be able to see how exactly we promote our images. In this case, we created a promotion policy within docker trusted registry that makes it so. That content gets promoted to a public repository for internal users to consume based off of the vulnerabilities that are found or not found inside of the docker image. So are actually users. How they would consume this content is by taking a look at the public to them official images that we've made available here again, Looking at our Alpine image, we can take a look at the tags that exist. We could see that we have our content that has been made available, so we've pulled in all sorts of content from Dr Hub. In this case, we have even pulled in the multi architectural images, which we can scan due to the binary level nature of our scanning solution. Now let's take a look at Len's. Lens provides capabilities to be able to give developers a quick, opinionated view that focuses around how they would want to view, manage and inspect applications to point to a Cooper Days cluster. Lindsay integrates natively out of the box with universal control playing clam bundles so you're automatically generated. Tell certificates from UCP. Just work inside our organization. We want to give our developers the ability to see their applications and a very easy to view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here we can see the pot for application and we click on that. We get instant, detailed feedback about the components and information that this pot is utilizing. We can also see here in Linz that it gives us the ability to quickly switch context between different clusters that we have access to. With that, we also have capabilities to be able to quickly deploy other types of components. One of those is helm charts. Helm charts are a great way to package of applications, especially those that may be more complex to make it much simpler to be able to consume inversion our applications. In this case, let's take a look at the application that we just built and deployed. This case are simple in genetics. 
Application has been bundled up as a helm chart and has made available through lens here. We can just click on that description of our application to be able to see more information about the helm chart so we can publish whatever information may be relevant about our application, and through one click, we can install our helm chart here. It will show us the actual details of the home charts. So before we install it, we can actually look at those individual components. So in this case, we could see that's created ingress rule. And then it's well, tell kubernetes how to create the specific components of our application. We just have to pick a name space to to employ it, too. And in this case, we're actually going to do a quick test here because in this case, we're trying to deploy the application from Dr Hub in our universal Control plane. We've turned on Dr Content Trust Policy Enforcement. So this is actually gonna fail to deploy because we're trying to deploy application from Dr Hub. The image hasn't been properly signed in our environment. So the doctor can to trust policy enforcement prevents us from deploying our doctor image from Dr Hub. In this case, we have to go through our approved process through our secure supply chain to be able to ensure that we know our image came from, and that meets our quality standards. So if we comment out the doctor Hub repository and comment in our doctor trusted registry repository and click install, it will then install the helm chart with our doctor image being pulled from our GTR, which then has a proper signature, we can see that our application has been successfully deployed through our home chart releases view. From here, we can see that simple in genetics application, and in this case we'll get details around the actual deploy and help chart. The nice thing is that Linds provides us this capability here with home. To be able to see all the components that make up our application from this view is giving us that single pane of glass into that specific application so that we know all the components that is created inside of kubernetes. There are specific details that can help us access the applications, such as that ingress world that we just talked about gives us the details of that. But it also gives us the resource is such as the service, the deployment in ingress that has been created within kubernetes to be able to actually have the application exist. So to recap, we've covered how we can offer all the benefits of a cloud like experience and offer flexibility around dev ups and operations controlled processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators mawr time designing systems that meet our security and compliance concerns
Dave Van Everen, Mirantis | Mirantis Launchpad 2020 Preview
>>From the Cube Studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >>Hey, welcome back. Jeff here with theCUBE in our Palo Alto studios today, and we're excited. You know, we're slowly coming out of the summer season and getting ready to jump back into the fall season. Of course, it's still COVID and everything is still digital, but what we're seeing is that digital events allow a lot of things you couldn't do in the physical space, mainly getting a lot more people to attend who don't have to get on airplanes and fly all over the country. So to preview this brand new inaugural event that's coming up in about a month, we have a new guest: he's Dave Van Everen, the senior vice president of marketing for Mirantis. Dave, great to see you. >>Happy to be here today. Thank you. >>So tell us about this inaugural event. You know, we did an event with Mirantis years ago; I had to look it up, it was 2014 or 2015. OpenStack was hot, and you guys sponsored a community event in the Bay Area, because the OpenStack events used to move all over the country each and every year, but you guys set up the top one here in the Bay Area. Now you're launching something brand new, based on some new activity that you've been up to over the last several months. So give us the word. >>Yeah, absolutely. We definitely have been organizing community events in a variety of open source communities over the years, and we saw really good success with theCUBE at those events in the OpenStack Silicon Valley days. With the way things have gone this year, we've really seen that virtual events can be very successful and provide a new, maybe slightly different form of engagement, but still a very high level of engagement for our guests. So we're excited to put this together and invite the entire cloud native industry to join us and learn about some of the things that Mirantis has been working on in recent months, as well as some of the interesting things that are going on in the cloud native and Kubernetes community. >>Great. So the inaugural event is called Mirantis Launchpad 2020; as for the wheres and the whens, it's September 16th. So we're about a month away and it's all online. Is there registration? What does it cost? Is it free for the community? >>It's absolutely free, and everyone is welcome to attend. Just visit mirantis.com and you'll see the info for registering for the event, and we'd love to see you there. It's going to be a fantastic event. We have multiple tracks catering to developers, operators, general industry, you know, participants in the community, so we'd be happy to see you join us and learn about some of the things we're working on. >>That's awesome. So let's back up a step for people that haven't been paying as close attention as they might have. You guys purchased assets from Docker at the end of last year, really taking over their enterprise solutions, and you've been doing some work with that. Now, what's interesting is we covered DockerCon a couple of months ago, maybe three months ago, time moves fast, and they had a tremendously successful digital event: 70,000 registrants, people coming from all over the world. I think their physical event used to be four or five thousand people at the peak, maybe six thousand. Really tremendous success.
But a lot of that success was driven by the strength of the community. The Docker community is so passionate, and what struck me about that event is that this is not the first time these people get together. This is not a once-a-year kind of sharing of information and ideas; the passion, the friendships, and the sharing of information are so good. It's a super rich development community. You guys have really now taken advantage of that, but you're doing your Mirantis thing: you're bringing your own technology to it and really taking it to more of an enterprise solution. So I wonder if you can walk people through the process. You had the acquisition late last year, and you've been hard at work. What are we going to see on September 16th? >>Sure, absolutely. And just to give credit to Docker for putting on an amazing event with DockerCon this year: you mentioned 70,000 registrants, and that's an astounding number. It really is a testament to the community that they've built over the years and continue to serve, so we're really happy for Docker as they move into the next path in their journey and focus more on the developer-oriented solution and go-to-market. They did a fantastic job with the event, and I think they continue to connect with their community throughout the year; that's part of what drove so many attendees to the event. As far as our history and progress with Docker Enterprise: as you mentioned, in mid-November last year we acquired the Docker Enterprise assets from Docker Inc, and right away we noticed tremendous synergy in our product roadmaps and even in the teams, so that came together really quickly, and we started executing on a series of releases that are now being introduced into the market. One was introduced in late May, and that was the first major release of Docker Enterprise produced exclusively by Mirantis. And at the Launchpad 2020 event we're going to announce our next major release of the Docker Enterprise technology, which will for the first time include Kubernetes-related and lifecycle-management-related technology from Mirantis. It's a huge milestone for our company and a huge benefit to our customers and the broader user community around Docker Enterprise, and we're super excited to provide a lot of compelling and detailed content around the new technology that we'll be announcing at the event. >>So I'm looking at the website with the agenda, and there's a little teaser right in the middle of the spaceship: Docker Enterprise Container Cloud. I glanced through it and you've got a great layout with five tracks: a keynote track, a container track, an operations and IT track, a developer track, and a Cube track. I went ahead and clicked on the keynote track and I see the big reveal, and I love that the opening keynote at 8 a.m. on September 16th is Adrian Ionel, the CEO, who we've had on many, many times, with the big reveal of Docker Enterprise Container Cloud. So without stealing any thunder, can you give us a little inside baseball on what people should expect, or what they can get excited about, for that big announcement? >>Sure, absolutely. I definitely don't want to steal any thunder from Adrian, our CEO.
But you know, we did include a few Easter eggs, so to speak, on the website, and Docker Enterprise Container Cloud is absolutely the biggest story of the bunch. It's visible on the rocket ship, as you noticed, and in the agenda, and it will be revealed during Adrian's keynote. Every word in the product name is important, right? Docker Enterprise, because it's based on the Docker Enterprise platform, then Container, then Cloud, and the new word in there really is Cloud. I think people are going to be surprised at the groundbreaking territory we're forging with this release along the lines of a cloud experience, and at what we're going to provide not only to IT operations, the operators, and DevOps for a cloud environment, but also to developers and the experience that we can bring to them as they become more dependent on Kubernetes and get more hands-on with it. We think we're going to provide a lot of ways for them to be more empowered with Kubernetes while at the same time lowering the barrier to entry for Kubernetes, as many enterprises have told us that Kubernetes can be difficult for the broader developer community inside the organization to interact with. So this is a strategic underpinning of our product strategy, and it's really the first step in an ongoing launch of technologies that are going to make Kubernetes easier for developers. >>I was going to say, the other Easter egg that's all over the agenda, as I'm just looking through it, is Kubernetes: Kubernetes on any infrastructure, multi-cloud Kubernetes, Mirantis OpenStack on Kubernetes. So Kubernetes plays a huge part, and we talk a lot about Kubernetes at all the events we cover. But as you said, the new theme we're hearing a little more of is the difficulty of actually managing it, looking beyond the technology itself to the operations and the execution in production. And it sounds like you guys might have a few things up your sleeve to help people be more successful in actually running Kubernetes in production. >>Yeah, absolutely. Kubernetes is the focus of most of the companies in our space, and we think we have some ideas for how we can really begin to enable it to fulfill its promise as the operating system for the cloud. If we think about the ecosystem that has formed around Kubernetes, it's now really being held back only by user adoption. So that's where the focus of our product strategy really lives: how can we accelerate the move to Kubernetes and accelerate the move to cloud native applications? In order to provide that acceleration catalyst, you need to address the needs of the operators and make their lives easier, while still giving them the tools they need for things like policy enforcement and operational insights, and at the same time foster a grassroots upswell of developer adoption within their company and really help the IT operations team serve their customers, the developers, more effectively. >>Well, Dave, it sounds like a great event. We had a great time covering those OpenStack events with you guys, and we've covered the Docker events for years and years. It's a super engaged community, and thanks for inviting us back to cover this inaugural event as well. It should be terrific.
Everyone, just go to mirantis.com; the big pop-up will jump up, you just click on the button, and you can see the full agenda. Get ready for about a month from now, when the big reveal happens on September 16th. Well, Dave, thanks for sharing this quick update with us, and I'm sure we'll be talking a lot more between now and the 16th, because I know there's a Cube track in there, so we look forward to interviewing our guests as part of the program. >>Absolutely. So welcome, everyone: join us at the event and stay tuned for the big reveal. >>Everybody loves a big reveal. All right, well, thanks a lot, Dave. He's Dave, I'm Jeff, you're watching theCUBE. Thanks for watching. We'll see you next time.