ON DEMAND: API GATEWAYS, INGRESS, AND SERVICE MESH
>> Thank you, everyone, for joining. I'm here today to talk about ingress controllers, API gateways, and service mesh on Kubernetes, three very hot topics that are also frequently confusing. So I'm Richard Li, founder/CEO of Ambassador Labs, formerly known as Datawire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including Telepresence and Ambassador, which is a Kubernetes-native API gateway. And most of what I'm going to talk about today is related to our work around Ambassador. So I want to start by talking about application architecture and workflow on Kubernetes and how applications that are being built on Kubernetes really differ from how they used to be built. So when you're building applications on Kubernetes, the traditional architecture is the very famous monolith. And the monolith is a central piece of software. It's one giant thing that you build, deploy, and run. And the value of a monolith is that it's really simple. And if you think about the monolithic development process, what's more important is that the architecture is really reflected in the workflow. So with a monolith, you have a very centralized development process. You tend not to release too frequently because you have all these different development teams that are working on different features, and then you decide in advance when you're going to release that particular piece of software, and everyone works towards that release train. And you have specialized teams. You have a development team, which has all your developers. You have a QA team, you have a release team, you have an operations team. So that's your typical development organization and workflow with a monolithic application. As organizations shift to microservices, they adopt a very different development paradigm. 
It's a decentralized development paradigm where you have lots of different independent teams that are simultaneously working on different parts of the application, and those application components are really shipped as independent services. And so you really have a continuous release cycle, because instead of synchronizing all your teams around one particular release vehicle, you have so many different release vehicles that each team is able to ship as soon as they're ready. And so we call this full cycle development, because that team is really responsible not just for the coding of that microservice, but also the testing and the release and operations of that service. So this is a huge change, particularly with workflow, and there are a lot of implications to this. So I have a diagram here that just tries to visualize a little bit more the difference in organization. With the monolith, you have everyone who works on this monolith. With microservices, you have the yellow folks working on the yellow microservice and the purple folks working on the purple microservice and maybe just one person working on the orange microservice and so forth. So there's a lot more diversity around your teams and your microservices, and it lets you really adjust the granularity of your development to your specific business needs. So how do users actually access your microservices? Well, with a monolith, it's pretty straightforward. You have one big thing, so you just tell the internet, well, I have this one big thing on the internet. Make sure you send all your traffic to the big thing. But when you have microservices and you have a bunch of different microservices, how do users actually access these microservices? So the solution is an API gateway. So the API gateway consolidates all access to your microservices. So requests come from the internet. They go to your API gateway. 
The API gateway looks at these requests, and based on the nature of these requests, it routes them to the appropriate microservice. And because the API gateway is centralizing access to all of the microservices, it also really helps you simplify authentication, observability, routing, all these different cross-cutting concerns, because instead of implementing authentication in each of your microservices, which would be a maintenance nightmare and a security nightmare, you've put all of your authentication in your API gateway. So if you look at this world of microservices, API gateways are a really important and necessary part of your infrastructure, whereas pre-microservices, pre-Kubernetes, an API gateway, while valuable, was much more optional. So that's one of the really big things to recognize: with the microservices architecture, you really need to start thinking much more about an API gateway. The other consideration with an API gateway is around your management workflow, because as I mentioned, each team is actually responsible for their own microservice, which also means each team needs to be able to independently manage the gateway. So Team A working on that microservice needs to be able to tell the API gateway, this is how I want you to route requests to my microservice, and the purple team needs to be able to say something different for how purple requests get routed to the purple microservice. So that's also a really important consideration as you think about API gateways and how they fit in your architecture, because it's not just about your architecture, it's also about your workflow. So let me talk about API gateways on Kubernetes. I'm going to start by talking about ingress. So ingress is the process of getting traffic from the internet to services inside the cluster. Kubernetes, from an architectural perspective, actually has a requirement that all the different pods in a Kubernetes cluster need to communicate with each other. 
And as a consequence, what Kubernetes does is it creates its own private network space for all these pods, and each pod gets its own IP address. So this makes things very, very simple for inter-pod communication. Kubernetes, on the other hand, does not say very much about how traffic should actually get into the cluster. So there's a lot of detail around how traffic, once it's in the cluster, gets routed around the cluster, and Kubernetes is very opinionated about how this works, but for getting traffic into the cluster, there are a lot of different options and multiple strategies. There's Pod IP, there's Ingress, there's LoadBalancer resources, there's NodePort. I'm not going to go into exhaustive detail on all these different options, and I'm going to just talk about the most common approach that most organizations take today. So the most common strategy for routing is coupling an external load balancer with an ingress controller. And an external load balancer can be a hardware load balancer. It can be a virtual machine. It can be a cloud load balancer. But the key requirement for an external load balancer is to be able to attach a stable IP address, so that you can actually map a domain name in DNS to that particular external load balancer. And that external load balancer usually, but not always, will then route traffic and pass that traffic straight through to your ingress controller. And then your ingress controller takes that traffic and routes it internally inside Kubernetes to the various pods that are running your microservices. There are other approaches, but this is the most common approach. And the reason for this is that the alternative approaches really require each of your microservices to be exposed outside of the cluster, which causes a lot of challenges around management and deployment and maintenance that you generally want to avoid. So I've been talking about an ingress controller. What exactly is an ingress controller? 
So an ingress controller is an application that can process rules according to the Kubernetes ingress specification. Strangely, Kubernetes does not actually ship with a built-in ingress controller. I say strangely because you'd think, well, getting traffic into a cluster is probably a pretty common requirement, and it is. It turns out that this is complex enough that there's no one-size-fits-all ingress controller. And so there is a set of ingress rules that are part of the Kubernetes ingress specification that specify how traffic gets routed into the cluster, and then you need a proxy that can actually route this traffic to these different pods. And so an ingress controller really translates between the Kubernetes configuration and the proxy configuration, and common proxies for ingress controllers include HAProxy, Envoy Proxy, or NGINX. So let me talk a little bit more about these common proxies. There are many other proxies; I'm just highlighting what I consider to be probably the three most well-established ones: HAProxy, NGINX, and Envoy Proxy. So HAProxy is managed by HAProxy Technologies. It started in 2001. The HAProxy organization actually creates an ingress controller, and before they created an ingress controller, there was an open source project called Voyager which built an ingress controller on HAProxy. NGINX is managed by NGINX, Inc., subsequently acquired by F5. Also open source. The proxy itself started a little bit later, in 2004. And there's ingress-nginx, which is a community project; that's the most popular. As well as the NGINX, Inc. kubernetes-ingress project, which is maintained by the company. This is a common source of confusion, because sometimes people will think that they're using the NGINX ingress controller, and it's not clear if they're using the commercially supported version or the open source version. And although they have very similar names, they actually have different functionality. 
Finally, Envoy Proxy, the newest entrant to the proxy market, was originally developed by engineers at Lyft, the ride-sharing company. They subsequently donated it to the Cloud Native Computing Foundation. Envoy has become probably the most popular cloud native proxy. It's used by Ambassador, the API gateway. It's used in the Istio service mesh. It's used in VMware's Contour. It's been used by Amazon in App Mesh. It's probably the most common proxy in the cloud native world. So as I mentioned, there are a lot of different options for ingress controllers. The most common is the NGINX ingress controller, not the one maintained by NGINX, Inc., but the one that's part of the Kubernetes project. Ambassador is the most popular Envoy-based option. Another common option is the Istio Gateway, which is directly integrated with the Istio mesh, and that's actually part of Docker Enterprise. So with all these choices around ingress controllers, how do you actually decide? Well, the reality is the ingress specification is very limited. And the reason for this is that getting traffic into a cluster involves a lot of nuance in how you want to do it, and it turns out it's very challenging to create a generic one-size-fits-all specification because of the vast diversity of implementations and choices that are available to end users. And so you don't see ingress specifying anything around resilience. So if you want to specify a timeout or rate limiting, it's not possible. Ingress is really limited to support for HTTP. So if you're using gRPC or WebSockets, you can't use the ingress specification. Different ways of routing, authentication. The list goes on and on. And so what happens is that different ingress controllers extend the core ingress specification to support these use cases in different ways. 
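To make the limits of the core specification concrete, here is a sketch of a minimal Ingress resource; the host, path, and service names are hypothetical. Essentially everything the core spec can express is host- and path-based HTTP routing to a backing Service:

```yaml
# A minimal Ingress resource. The core specification covers little more
# than this: host- and path-based HTTP routing to a Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders   # hypothetical microservice Service
                port:
                  number: 80
# Note: nothing in this schema can express a timeout, a rate limit,
# gRPC routing, or WebSocket policy -- those need controller extensions.
```

Notice that resilience and protocol concerns have no home in this schema, which is exactly why each controller grows its own extension mechanism.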
So NGINX ingress actually uses a combination of ConfigMaps and the ingress resources plus custom annotations that extend the ingress to really let you configure a lot of the additional extensions that are exposed in the NGINX ingress. With Ambassador, we actually use custom resource definitions, different CRDs that extend Kubernetes itself, to configure Ambassador. And one of the benefits of the CRD approach is that we can create a standard schema that's actually validated by Kubernetes. So when you do a kubectl apply of an Ambassador CRD, kubectl can immediately validate and tell you if you're actually applying a valid schema and format for your Ambassador configuration. And as I previously mentioned, Ambassador is built on Envoy Proxy. The Istio Gateway also uses CRDs, though these can be seen as an extension of the service mesh CRDs as opposed to dedicated gateway CRDs. And again, the Istio Gateway is built on Envoy Proxy. So I've been talking a lot about ingress controllers, but the title of my talk was really about API gateways and ingress controllers and service mesh. So what's the difference between an ingress controller and an API gateway? To recap, an ingress controller processes Kubernetes ingress routing rules. An API gateway is a central point for managing all your traffic to Kubernetes services. It typically has additional functionality such as authentication, observability, a developer portal, and so forth. So what you find is that not all API gateways are ingress controllers, because some API gateways don't support Kubernetes at all, so they can't be ingress controllers. And not all ingress controllers support the functionality, such as authentication, observability, or a developer portal, that you would typically associate with an API gateway. So generally speaking, API gateways that run on Kubernetes should be considered a superset of an ingress controller. 
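As a rough illustration of the two extension styles just described, compare an annotation-based timeout on an NGINX-style Ingress with a dedicated Ambassador Mapping CRD. The resource names here are illustrative; the annotation and field names are the commonly documented ones, but verify them against your controller's version:

```yaml
# Style 1: NGINX ingress -- core Ingress resource plus custom annotations.
# Kubernetes treats annotations as opaque strings, so a typo in the key
# or value fails silently rather than being rejected at apply time.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "5"
spec:
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
---
# Style 2: Ambassador -- a dedicated CRD whose schema Kubernetes knows,
# so an invalid field can be caught when you kubectl apply it.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders
spec:
  prefix: /orders/
  service: orders
  timeout_ms: 5000
```

The design trade-off is roughly validation and discoverability (CRDs) versus working within the stock Ingress object (annotations).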
But if the API gateway doesn't run on Kubernetes, then it's an API gateway and not an ingress controller. So what's the difference between a service mesh and an API gateway? An API gateway is really focused on traffic into and out of a cluster. The colloquial term for this is North/South traffic. A service mesh is focused on traffic between services in a cluster, East/West traffic. All service meshes need an API gateway. So Istio includes a basic ingress or API gateway called the Istio Gateway, because a service mesh needs traffic from the internet to be routed into the mesh before it can actually do anything. Envoy Proxy, as I mentioned, is the most common proxy for both meshes and gateways. Docker Enterprise provides an Envoy-based solution out of the box, the Istio Gateway. The reason Docker does this is because, as I mentioned, Kubernetes doesn't come packaged with an ingress controller. It makes sense for Docker Enterprise to provide something that's easy to get going, no extra steps required, because with Docker Enterprise, you can deploy it and get it exposed on the internet without any additional software. That gateway can also be easily upgraded to Ambassador, because they're both built on Envoy, which ensures consistent routing semantics. And also with Ambassador, you get greater security for single sign-on. There's a lot of security by default that's configured directly into Ambassador. Better control over TLS, things like that. And then finally, there's commercial support that's actually available for Ambassador. Istio is an open source project that has a very broad community, but no commercial support options. So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack. So make sure that you choose something that works well for you. And I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. 
Considerations around how to choose that API gateway include functionality, such as: how does it do with traffic management and observability? Does it support the protocols that you need? And also nonfunctional requirements, such as: does it integrate with your workflow? Can you get commercial support for it? An API gateway is focused on North/South traffic, so traffic into and out of your Kubernetes cluster. A service mesh is focused on East/West traffic, so traffic between different services inside the same cluster. Docker Enterprise includes the Istio Gateway out of the box. It's easy to use, but it can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. I hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about them in your Kubernetes deployment.
API Gateways, Ingress, and Service Mesh | Mirantis Launchpad 2020
>>thank you everyone for joining. I'm here today to talk about English controllers. AP Gateways and service mention communities three very hot topics that are also frequently confusing. So I'm Richard Lee, founder CEO of Ambassador Labs, formerly known as Data Wire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including telepresence and Ambassador, which is a kubernetes native AP gateway. And most of what I'm going to talk about today is related to our work around ambassador. Uh huh. So I want to start by talking about application architecture, er and workflow on kubernetes and how applications that are being built on kubernetes really differ from how they used to be built. So when you're building applications on kubernetes, the traditional architectures is the very famous monolith, and the monolith is a central piece of software. It's one giant thing that you build, deployed run, and the value of a monolith is it's really simple. And if you think about the monolithic development process, more importantly, is the architecture er is really reflecting that workflow. So with the monolith, you have a very centralized development process. You tend not to release too frequently because you have all these different development teams that are working on different features, and then you decide in advance when you're going to release that particular pieces offering. Everyone works towards that release train, and you have specialized teams. You have a development team which has all your developers. You have a Q A team. You have a release team, you have an operations team, so that's your typical development organization and workflow with a monolithic application. As organization shift to micro >>services, they adopt a very different development paradigm. 
It's a decentralized development paradigm where you have lots of different independent teams that are simultaneously working on different parts of the application, and those application components are really shipped as independent services. And so you really have a continuous release cycle because instead of synchronizing all your teams around one particular vehicle, you have so many different release vehicles that each team is able to ship a soon as they're ready. And so we call this full cycle development because that team is >>really responsible, not just for the coding of that micro service, but also the testing and the release and operations of that service. Um, >>so this is a huge change, particularly with workflow. And there's a lot of implications for this, s o. I have a diagram here that just try to visualize a little bit more the difference in organization >>with the monolith. You have everyone who works on this monolith with micro services. You have the yellow folks work on the Yellow Micro Service, and the purple folks work on the Purple Micro Service and maybe just one person work on the Orange Micro Service and so forth. >>So there's a lot more diversity around your teams and your micro services, and it lets you really adjust the granularity of your development to your specific business need. So how do users actually access your micro services? Well, with the monolith, it's pretty straightforward. You have one big thing. So you just tell the Internet while I have this one big thing on the Internet, make sure you send all your travel to the big thing. But when you have micro services and you have a bunch of different micro services, how do users actually access these micro services? So the solution is an AP gateway, so the gateway consolidates all access to your micro services, so requests come from the Internet. They go to your AP gateway. 
The AP Gateway looks at these requests, and based on the nature of these requests, it routes them to the appropriate micro service. And because the AP gateway is centralizing thing access to all the micro services, it also really helps you simplify authentication, observe ability, routing all these different crosscutting concerns. Because instead of implementing authentication in each >>of your micro services, which would be a maintenance nightmare and a security nightmare, you put all your authentication in your AP gateway. So if you look at this world of micro services, AP gateways are really important part of your infrastructure, which are really necessary and pre micro services. Pre kubernetes Unhappy Gateway Well valuable was much more optional. So that's one of the really big things around. Recognizing with the micro services architecture er, you >>really need to start thinking much more about maybe a gateway. The other consideration within a P A gateway is around your management workflow because, as I mentioned, each team is actually response for their own micro service, which also means each team needs to be able to independently manage the gateway. So Team A working on that micro service needs to be able to tell the AP at Gateway. This this is >>how I want you to write. Request to my micro service, and the Purple team needs to be able to say something different for how purple requests get right into the Purple Micro Service. So that's also really important consideration as you think about AP gateways and how it fits in your architecture. Because it's not just about your architecture. It's also about your workflow. So let me talk about a PR gateways on kubernetes. I'm going to start by talking about ingress. So ingress is the process of getting traffic from the Internet to services inside the cluster kubernetes. 
From an architectural perspective, it actually has a requirement that all the different pods in a kubernetes cluster needs to communicate with each other. And as a consequence, what Kubernetes does is it creates its own private network space for all these pods, and each pod gets its own I p address. So this makes things very, very simple for inter pod communication. Cooper in any is, on the other hand, does not say very much around how traffic should actually get into the cluster. So there's a lot of detail around how traffic actually, once it's in the cluster, how you routed around the cluster and it's very opinionated about how this works but getting traffic into the cluster. There's a lot of different options on there's multiple strategies pot i p. There's ingress. There's low bounce of resource is there's no port. >>I'm not gonna go into exhaustive detail on all these different options on. I'm going to just talk about the most common approach that most organizations take today. So the most common strategy for routing is coupling an external load balancer with an ingress controller. And so an external load balancer can be >>ah, Harvard load balancer. It could be a virtual machine. It could be a cloud load balancer. But the key requirement for an external load balancer >>is to be able to attack to stable I people he address so that you can actually map a domain name and DNS to that particular external load balancer and that external load balancer, usually but not always well, then route traffic and pass that traffic straight through to your ingress controller, and then your English controller takes that traffic and then routes it internally inside >>kubernetes to the various pods that are running your micro services. There are >>other approaches, but this is the most common approach. 
And the reason for this is that the alternative approaches really required each of your micro services to be exposed outside of the cluster, which causes a lot of challenges around management and deployment and maintenance that you generally want to avoid. So I've been talking about in English controller. What exactly is an English controller? So in English controller is an application that can process rules according to the kubernetes English specifications. Strangely, Kubernetes is not actually ship with a built in English controller. Um, I say strangely because you think, well, getting traffic into a cluster is probably a pretty common requirement. And it is. It turns out that this is complex enough that there's no one size fits all English controller. And so there is a set of ingress >>rules that are part of the kubernetes English specifications at specified how traffic gets route into the cluster >>and then you need a proxy that can actually route this traffic to these different pods. And so an increase controller really translates between the kubernetes configuration and the >>proxy configuration and common proxies for ingress. Controllers include H a proxy envoy Proxy or Engine X. So >>let me talk a little bit more about these common proxies. So all these proxies and there >>are many other proxies I'm just highlighting what I consider to be probably the most three most well established proxies. Uh, h a proxy, uh, Engine X and envoy proxies. So H a proxy is managed by a plastic technology start in 2000 and one, um, the H a proxy organization actually creates an ingress controller. And before they kept created ingress controller, there was an open source project called Voyager, which built in ingress Controller on >>H a proxy engine X managed by engine. Xing, subsequently acquired by F five Also open source started a little bit later. The proxy in 2004. And there's the engine Xing breast, which is a community project. 
Um, that's the most popular a zwelling the engine Next Inc Kubernetes English project which is maintained by the company. This is a common source of confusion because sometimes people will think that they're using the ingress engine X ingress controller, and it's not clear if they're using this commercially supported version or the open source version, and they actually, although they have very similar names, uh, they actually have different functionality. Finally. Envoy Proxy, the newest entrant to the proxy market originally developed by engineers that lift the ride sharing company. They subsequently donated it to the cloud. Native Computing Foundation Envoy has become probably the most popular cloud native proxy. It's used by Ambassador uh, the A P a. Gateway. It's using the SDO service mash. It's using VM Ware Contour. It's been used by Amazon and at mesh. It's probably the most common proxy in the cloud native world. So, as I mentioned, there's a lot of different options for ingress. Controller is the most common. Is the engine X ingress controller, not the one maintained by Engine X Inc but the one that's part of the Cooper Nannies project? Um, ambassador is the most popular envoy based option. Another common option is the SDO Gateway, which is directly integrated with the SDO mesh, and that's >>actually part of Dr Enterprise. So with all these choices around English controller. How do you actually decide? Well, the reality is the ingress specifications very limited. >>And the reason for this is that getting traffic into the cluster there's a lot of nuance into how you want to do that. And it turns out it's very challenging to create a generic one size fits all specifications because of the vast diversity of implementations and choices that are available to end users. And so you don't see English specifying anything around resilience. So if >>you want to specify a time out or rate limiting, it's not possible in dresses really limited to support for http. 
So if you're using GSPC or Web sockets, you can't use the ingress specifications, um, different ways of routing >>authentication. The list goes on and on. And so what happens is that different English controllers extend the core ingress specifications to support these use cases in different ways. Yeah, so engine X ingress they actually use a combination of config maps and the English Resource is plus custom annotations that extend the ingress to really let you configure a lot of additional extensions. Um, that is exposing the engineers ingress with Ambassador. We actually use custom resource definitions different CRTs that extend kubernetes itself to configure ambassador. And one of the benefits of the CRD approach is that we can create a standard schema that's actually validated by kubernetes. So when you do a coup control apply of an ambassador CRD coop Control can immediately validate and tell >>you if you're actually applying a valid schema in format for your ambassador configuration on As I previously mentioned, ambassadors built on envoy proxy, >>it's the Gateway also uses C R D s they can to use a necks tension of the service match CRD s as opposed to dedicated Gateway C R D s on again sdo Gateway is built on envoy privacy. So I've been talking a lot about English controllers. But the title of my talk was really about AP gateways and English controllers and service smashed. So what's the difference between an English controller and an AP gateway? So to recap, an immigrant controller processes kubernetes English routing rules and a P I. G. Wave is a central point for managing all your traffic to community services. It typically has additional functionality such as authentication, observe, ability, a >>developer portal and so forth. So what you find Is that not all Ap gateways or English controllers? Because some MP gateways don't support kubernetes at all. S o eso you can't make the can't be ingress controllers and not all ingrates. 
Controllers support the functionality such as authentication, observe, ability, developer portal >>that you would typically associate with an AP gateway. So, generally speaking, um, AP gateways that run on kubernetes should be considered a super set oven ingress controller. But if the A p a gateway doesn't run on kubernetes, then it's an AP gateway and not an increase controller. Yeah, so what's the difference between a service Machin and AP Gateway? So an AP gateway is really >>focused on traffic into and out of a cluster, so the political term for this is North South traffic. A service mesh is focused on traffic between services in a cluster East West traffic. All service meshes need >>an AP gateway, so it's Theo includes a basic ingress or a P a gateway called the SDO gateway, because a service mention needs traffic from the Internet to be routed into the mesh >>before it can actually do anything Omelet. Proxy, as I mentioned, is the most common proxy for both mesh and gateways. Dr. Enterprise provides an envoy based solution out of the box. >>Uh, SDO Gateway. The reason Dr does this is because, as I mentioned, kubernetes doesn't come package with an ingress. Uh, it makes sense for Dr Enterprise to provide something that's easy to get going. No extra steps required because with Dr Enterprise, you can deploy it and get going. Get exposed on the Internet without any additional software. Dr. Enterprise can also be easily upgraded to ambassador because they're both built on envoy and interest. Consistent routing. Semantics. It also with Ambassador. You get >>greater security for for single sign on. There's a lot of security by default that's configured directly into Ambassador Better control over TLS. Things like that. Um And then finally, there's commercial support that's actually available for Ambassador. SDO is an open source project that has a has a very broad community but no commercial support options. 
So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack, so make sure that you choose something that works well for you. I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. Considerations around how to choose an API gateway include functionality, such as: how does it do with traffic management and observability? Does it support the protocols that you need? And also nonfunctional requirements, such as: does it integrate with your workflow? Can you get commercial support for it? An API gateway is focused on north-south traffic, that is, traffic into and out of your Kubernetes cluster. A service mesh is focused on east-west traffic, traffic between different services inside the same cluster. Docker Enterprise includes the Istio Gateway out of the box, which is easy to use but can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. I hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about them in your Kubernetes deployment.
SUMMARY :
With the monolith, you have a very centralized development process. With microservices, you really have a continuous release cycle: instead of synchronizing all your teams around one release vehicle, each team is responsible not just for the coding of its microservice but also the testing, release, and operations, which is a huge change, particularly for workflow. The yellow folks work on the yellow microservice, the purple folks on the purple microservice, and so on. The API gateway consolidates all access to your microservices, so you really need to start thinking much more about the API gateway. Ingress is the process of getting traffic from the Internet to services in the cluster, and the most common routing strategy is coupling an external load balancer with an ingress controller that routes from Kubernetes to the various pods running your microservices. HAProxy, NGINX, and Envoy Proxy (the newest entrant) are the three most well-established proxies. The reality is that the Ingress specification is very limited, because there's a lot of nuance in how you get traffic into the cluster; if you want to specify a timeout or rate limiting, it's not possible in Ingress. Because Ingress is so limited, different ingress controllers extend the core Ingress specification to support these use cases. To recap, an ingress controller processes Kubernetes Ingress routing rules; not all API gateways are ingress controllers, and if an API gateway doesn't run on Kubernetes, it's an API gateway and not an ingress controller. An API gateway is focused on traffic into and out of a cluster (north-south), while a service mesh is focused on traffic between services (east-west). Envoy Proxy is the most common proxy for both meshes and gateways, and with Docker Enterprise you can deploy it and get going out of the box. Considerations for choosing an API gateway include functionality, such as traffic management, observability, and support for the protocols you need, as well as nonfunctional requirements. So make sure that you choose something that works well for you.