Richard Li, Ambassador Labs | On Demand: API Gateways, Ingress, and Service Mesh


 

>> Thank you, everyone, for joining. I'm here today to talk about ingress controllers, API gateways, and service mesh on Kubernetes, three very hot topics that are also frequently confusing. I'm Richard Li, founder/CEO of Ambassador Labs, formerly known as Datawire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including Telepresence and Ambassador, which is a Kubernetes-native API gateway, and most of what I'm going to talk about today is related to our work around Ambassador.

I want to start by talking about application architecture and workflow on Kubernetes, and how applications being built on Kubernetes really differ from how they used to be built. The traditional architecture is the very famous monolith: a central piece of software, one giant thing that you build, deploy, and run. The value of a monolith is that it's really simple, and, more importantly, that architecture is really reflected in the workflow. With a monolith you have a very centralized development process. You tend not to release too frequently, because you have all these different development teams working on different features, you decide in advance when you're going to release that particular piece of software, and everyone works towards that release train. And you have specialized teams: a development team with all your developers, a QA team, a release team, an operations team. So that's your typical development organization and workflow with a monolithic application.

As organizations shift to microservices, they adopt a very different development paradigm. It's a decentralized paradigm where lots of independent teams simultaneously work on different parts of the application, and those components are shipped as independent services. You end up with a continuous release cycle, because instead of synchronizing all your teams around one particular release vehicle, you have so many different release vehicles that each team can ship as soon as it's ready. We call this full cycle development, because each team is responsible not just for coding its microservice, but also for testing, releasing, and operating that service. This is a huge change, particularly to workflow, and it has a lot of implications.

I have a diagram here that tries to visualize the difference in organization. With the monolith, everyone works on the monolith. With microservices, the yellow folks work on the yellow microservice, the purple folks work on the purple microservice, maybe just one person works on the orange microservice, and so forth. There's a lot more diversity across your teams and your microservices, and it lets you adjust the granularity of your development to your specific business needs.

So how do users actually access your microservices? With a monolith, it's pretty straightforward: you have one big thing, so you just tell the internet, I have this one big thing, send all your traffic to the big thing. But when you have a bunch of different microservices, how do users actually access them?
The solution is an API gateway. The API gateway consolidates all access to your microservices. Requests come in from the internet, they go to the API gateway, and the API gateway looks at each request and, based on its nature, routes it to the appropriate microservice. And because the API gateway centralizes access to all of the microservices, it also helps you simplify authentication, observability, routing, all these cross-cutting concerns: instead of implementing authentication in each of your microservices, which would be a maintenance nightmare and a security nightmare, you put all of your authentication in the API gateway. So in a world of microservices, an API gateway is a really important, genuinely necessary part of your infrastructure, whereas pre-microservices, pre-Kubernetes, an API gateway, while valuable, was much more optional. That's one of the big things to recognize: with a microservices architecture, you really need to start thinking much more about an API gateway.

The other consideration with an API gateway is your management workflow, because, as I mentioned, each team is responsible for its own microservice, which also means each team needs to be able to independently manage the gateway. Team A, working on its microservice, needs to be able to tell the API gateway, this is how I want you to route requests to my microservice, and the purple team needs to be able to say something different about how purple requests get routed to the purple microservice. That's also a really important consideration as you think about API gateways and how they fit into your architecture, because it's not just about your architecture, it's also about your workflow.

So let me talk about API gateways on Kubernetes, starting with ingress. Ingress is the process of getting traffic from the internet to services inside the cluster. Kubernetes, from an architectural perspective, actually has a requirement that all the different pods in a cluster need to be able to communicate with each other. As a consequence, Kubernetes creates its own private network space for all these pods, and each pod gets its own IP address. This makes things very simple for inter-pod communication. Kubernetes, on the other hand, does not say very much about how traffic should actually get into the cluster. There's a lot of detail around how traffic gets routed around the cluster once it's inside, and Kubernetes is very opinionated about how that works, but for getting traffic into the cluster there are multiple strategies: there's pod IP, there's Ingress, there's LoadBalancer resources, there's NodePort. I'm not going to go into exhaustive detail on all these options; I'm just going to talk about the most common approach that most organizations take today.

The most common strategy for routing is coupling an external load balancer with an ingress controller. The external load balancer can be a hardware load balancer, a virtual machine, or a cloud load balancer.
The key requirement for an external load balancer is that you can attach a stable IP address to it, so that you can map a domain name in DNS to that load balancer. The external load balancer usually, but not always, passes traffic straight through to your ingress controller, and the ingress controller then routes it internally, inside Kubernetes, to the various pods running your microservices. There are other approaches, but this is the most common one, because the alternatives really require each of your microservices to be exposed outside of the cluster, which causes a lot of challenges around management, deployment, and maintenance that you generally want to avoid.

So I've been talking about an ingress controller; what exactly is an ingress controller? An ingress controller is an application that can process rules according to the Kubernetes ingress specification. Strangely, Kubernetes does not actually ship with a built-in ingress controller. I say strangely because you'd think, well, getting traffic into a cluster is probably a pretty common requirement, and it is. It turns out that this is complex enough that there's no one-size-fits-all ingress controller. So there's a set of ingress rules, part of the Kubernetes ingress specification, that specify how traffic gets routed into the cluster, and then you need a proxy that can actually route that traffic to the different pods. An ingress controller really translates between the Kubernetes configuration and the proxy configuration; common proxies for ingress controllers include HAProxy, Envoy Proxy, and NGINX.

Let me talk a little bit more about these common proxies. There are many other proxies; I'm just highlighting what I consider to be probably the three most well-established: HAProxy, NGINX, and Envoy Proxy. HAProxy is managed by HAProxy Technologies and started in 2001. The HAProxy organization creates an ingress controller, and before they did, there was an open source project called Voyager which built an ingress controller on HAProxy. NGINX is managed by NGINX, Inc., subsequently acquired by F5, and is also open source; the proxy started a little bit later, in 2004. There's nginx-ingress, a community project, which is the most popular, as well as the NGINX, Inc. kubernetes-ingress project, which is maintained by the company. This is a common source of confusion, because sometimes people think they're using "the NGINX ingress controller" and it's not clear whether that's the commercially supported version or the open source version; although they have very similar names, they actually have different functionality. Finally, Envoy Proxy is the newest entrant to the proxy market, originally developed by engineers at Lyft, the ride-sharing company, who subsequently donated it to the Cloud Native Computing Foundation. Envoy has become probably the most popular cloud native proxy: it's used by Ambassador, the API gateway; it's used in the Istio service mesh; it's used in VMware Contour; it's used by Amazon in App Mesh. It's probably the most common proxy in the cloud native world.
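To make the ingress specification concrete, here is a minimal illustrative sketch of a Kubernetes Ingress resource in the networking.k8s.io/v1 shape; the hostname, path, and service name are hypothetical placeholders, and clusters of this era may still use the older networking.k8s.io/v1beta1 form:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: api.example.com            # hypothetical hostname mapped to the external load balancer
    http:
      paths:
      - path: /orders                # route requests for /orders...
        pathType: Prefix
        backend:
          service:
            name: orders             # ...to a (hypothetical) orders microservice
            port:
              number: 80

An ingress controller watches for resources like this and translates them into configuration for its underlying proxy, whether that's HAProxy, NGINX, or Envoy.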
So, as I mentioned, there are a lot of different options for ingress controllers. The most common is the NGINX ingress controller, not the one maintained by NGINX, Inc., but the one that's part of the Kubernetes project. Ambassador is the most popular Envoy-based option. Another common option is the Istio Gateway, which is directly integrated with the Istio mesh, and that's actually part of Docker Enterprise.

So with all these choices around ingress controllers, how do you actually decide? Well, the reality is that the ingress specification is very limited. The reason is that there's a lot of nuance in how you want to get traffic into a cluster, and it turns out to be very challenging to create a generic, one-size-fits-all specification given the vast diversity of implementations and choices available to end users. So you don't see ingress specifying anything around resilience: if you want to specify a timeout or rate limiting, it's not possible. Ingress is really limited to HTTP: if you're using gRPC or WebSockets, you can't use the ingress specification. Different ways of routing, authentication; the list goes on and on. So what happens is that different ingress controllers extend the core ingress specification to support these use cases in different ways.

NGINX ingress uses a combination of config maps and ingress resources plus custom annotations, which extend the ingress to let you configure a lot of the additional capabilities that are exposed in the NGINX ingress. With Ambassador, we use custom resource definitions, different CRDs that extend Kubernetes itself, to configure Ambassador. One of the benefits of the CRD approach is that we can create a standard schema that's actually validated by Kubernetes: when you do a kubectl apply of an Ambassador CRD, kubectl can immediately validate it and tell you whether you're applying a valid schema and format for your Ambassador configuration. And as I previously mentioned, Ambassador is built on Envoy Proxy. The Istio Gateway also uses CRDs, though they're extensions of the service mesh CRDs as opposed to dedicated gateway CRDs, and again, the Istio Gateway is built on Envoy Proxy.
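As a hedged sketch of the two extension styles just described, the first snippet below sets a timeout through an annotation in the style of the community NGINX ingress controller, and the second expresses similar intent in an Ambassador Mapping CRD; the annotation key and CRD fields are recalled from those projects' documentation around this period, so treat the exact names as assumptions rather than authoritative reference:

# Style 1: behavior extended via an annotation on the Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"   # timeout expressed as an annotation string
spec:
  rules:
  - http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders
            port:
              number: 80

# Style 2: behavior expressed in a dedicated, schema-validated CRD
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders-mapping
spec:
  prefix: /orders/
  service: orders
  timeout_ms: 30000                                        # timeout is a typed field in the schema

Because the Mapping is backed by a registered schema, kubectl can reject a malformed resource at apply time, whereas a mistyped annotation key is just an inert string that the controller silently ignores.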
So I've been talking a lot about ingress controllers, but the title of my talk was really about API gateways and ingress controllers and service mesh. What's the difference between an ingress controller and an API gateway? To recap: an ingress controller processes Kubernetes ingress routing rules, while an API gateway is a central point for managing all your traffic to Kubernetes services, and it typically has additional functionality such as authentication, observability, a developer portal, and so forth. What you find is that not all API gateways are ingress controllers, because some API gateways don't support Kubernetes at all, so they can't be ingress controllers. And not all ingress controllers support the functionality, such as authentication, observability, or a developer portal, that you would typically associate with an API gateway. Generally speaking, an API gateway that runs on Kubernetes should be considered a superset of an ingress controller; but if the API gateway doesn't run on Kubernetes, then it's an API gateway and not an ingress controller.

So what's the difference between a service mesh and an API gateway? An API gateway is really focused on traffic into and out of a cluster; the colloquial term for this is North/South traffic. A service mesh is focused on traffic between services in a cluster: East/West traffic.

All service meshes need an API gateway. Istio includes a basic ingress, or API gateway, called the Istio Gateway, because a service mesh needs traffic from the internet to be routed into the mesh before it can actually do anything. Envoy Proxy, as I mentioned, is the most common proxy for both meshes and gateways.

Docker Enterprise provides an Envoy-based solution out of the box, the Istio Gateway. The reason Docker does this is that, as I mentioned, Kubernetes doesn't come packaged with an ingress controller, so it makes sense for Docker Enterprise to provide something that's easy to get going with, no extra steps required: with Docker Enterprise, you can deploy and get exposed on the internet without any additional software. Docker Enterprise can also be easily upgraded to Ambassador, because they're both built on Envoy, which ensures consistent routing semantics. And with Ambassador you get greater security, for example for single sign-on; there's a lot of security by default configured directly into Ambassador, better control over TLS, things like that. And finally, there's commercial support available for Ambassador; Istio is an open source project with a very broad community, but no commercial support options.

So to recap: ingress controllers and API gateways are critical pieces of your cloud native stack, so make sure you choose something that works well for you. I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. Considerations for choosing an API gateway include functionality, such as how it handles traffic management and observability and whether it supports the protocols you need, as well as nonfunctional requirements, such as whether it integrates with your workflow and whether you can get commercial support for it. An API gateway is focused on North/South traffic, traffic into and out of your Kubernetes cluster; a service mesh is focused on East/West traffic, traffic between different services inside the same cluster. Docker Enterprise includes the Istio Gateway out of the box, which is easy to use but can also be extended with Ambassador for enhanced functionality and security.

So thank you for your time. I hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about them in your Kubernetes deployment.

Published Date : Sep 14 2020


Mario Baldi, Pensando | Future Proof Your Enterprise 2020


 

(bright music) >> Announcer: From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a Cube conversation. >> Hi, I'm Stu Miniman, and welcome to a Cube conversation. I'm coming to you from our Boston area studio, and we're going to be digging into P4, which is the Programming Protocol-independent Packet Processors. And to help me with that, a first-time guest on the program, Mario Baldi; he is a distinguished technologist with Pensando. Mario, so nice to see you. Thanks for joining us. >> Thank you. Thank you for inviting me. >> Alright, so Mario, you have a very, you know, robust technical career: a lot of patents, and you've worked on many technologies, deep in the networking and developer world. But give our audience a little bit of your background and what brought you to Pensando. >> Yeah, yes, absolutely. So I started my professional life in academia, actually. I worked for many years in academia, about 15 years exclusively in academia, focusing both my teaching and my research on computer networking. Then I also worked in a number of startups and established companies, in the last eight years or so almost exclusively in industry. Before joining Pensando, I worked for a couple of years at Cisco on a P4-programmable switch, and that's where I got in touch with P4, actually. For the occasion I wore a T-shirt from one of the P4 workshops. Which reminds me a bit of those people who, when you ask them whether they do any sports, tell you they have a membership at the gym. I don't just have the membership, and I didn't just show up at the workshop; I've really been involved in the community. So when I learned what Pensando was doing, I immediately got very excited. The ASIC that Pensando has developed is really extremely powerful and flexible, because it's fully programmable: partly programmable with P4, partly programmable differently. And Pensando is starting to deploy this ASIC at the edge, in hosts. I think such a powerful and flexible device at the edge of the network really opens incredible opportunities: on the one hand, to implement what we have been doing in a different way, and on the other hand, to implement completely different solutions. So, you know, I've been working most of my career in innovation, and when I saw this, I immediately got very excited, and I realized that Pensando was really the right place for me to be. >> Excellent. Yeah, interesting. You know, many people in the industry talk about innovation coming out of the universities, you know, Stanford often gets mentioned, but the university that you attended and were an associate professor at in Italy, a lot of the networking team, your MPLS, you know, team at Pensando, many of them came from there. Silvano Gai, you know, has written many books and has had a very storied career in that environment. P4, maybe step back for a second; you're deep in this group. Help us understand what it is, how long it's been around, and who participates in it. >> Yeah, yeah. So as you were saying before, you're one of the few from whom I've heard it said in full, because everyone calls it P4 and nobody says what it really means: Programming Protocol-independent Packet Processors. So it's a programming language for packet processors, and it's protocol independent: it doesn't start from assuming that we want to use certain protocols.
So P4, first of all, allows you to specify what packets look like: what the headers look like and how they can be parsed. And secondly, because P4 is specifically designed for packet processing, it's based on the idea that you want to look up values in tables. So it allows you to define tables, and keys that are used to look up those tables and find an entry. When you find an entry, that entry contains an action, and parameters to be used by that action. So the idea is that the packet descriptions in the program define how packets should be processed: header fields are parsed, values are extracted from them, and those values are used as keys to look up tables. When the appropriate entry in a table is found, an action is executed, and that action modifies those header fields. This happens a number of times: the program specifies a sequence of tables that are looked up and header fields that are modified. In the end, the modified header fields are used to construct new packets that are sent out of the device. So this is the basic idea of a P4 program: you specify a bunch of tables that are looked up using values extracted from packets.

This is very powerful for a number of reasons. First of all, it's simple, which is always good, as we know, especially in networking. Then, it maps very well onto what we need to do when we do packet processing, so writing a packet processing program is relatively easy and fast. It could be difficult to write a generic program in P4; in fact, you cannot. But a packet processing program is easy to write. And last but not least, P4 maps really well onto hardware that was designed specifically to process packets, what we call domain-specific processors. Those processors are in fact designed to quickly look up tables, which might have TCAMs inside; they might have units that are specialized in building keys and performing table lookups, and in modifying those header fields. So when you have those processors, usually organized in pipelines to achieve good throughput, you can very efficiently take a P4 program and compile it to execute at very high speed on those processors. And this way, you get the same performance as a fixed-function ASIC, but fully programmable; nothing is fixed. Which means that you can develop your features much faster: you can add features and fix bugs with a very short cycle, not with the four or five year cycle of baking a new ASIC. And this is extremely powerful. This is the strong value proposition of P4.
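To ground that description, here is a minimal, hedged P4-16 fragment of the match-action pattern just described; the header layout, table, and action names are illustrative assumptions written against the open-source v1model architecture (the surrounding parser and control declarations are omitted), not code from any Pensando product:

// Packet description: the programmer declares what a header looks like.
header ethernet_t {
    bit<48> dstAddr;
    bit<48> srcAddr;
    bit<16> etherType;
}

// Action: its parameter (the egress port) is supplied by the matched table entry.
action set_egress(bit<9> port) {
    standard_metadata.egress_spec = port;
}

action drop_packet() {
    mark_to_drop(standard_metadata);
}

// Table: a header field extracted by the parser serves as the lookup key.
table l2_forward {
    key     = { hdr.ethernet.dstAddr : exact; }
    actions = { set_egress; drop_packet; }
    size    = 1024;
}

// Control flow: look up the table; the matched entry's action modifies the
// fields and metadata that determine what happens to the packet next.
apply {
    l2_forward.apply();
}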
>> Yeah, absolutely. I think that resonates, Mario. You know, I used to do presentations about the networking industry, and you would draw timelines out there in decades, because from the standard getting deployed, to, you know, the hardware getting baked, to the customers doing the adoption, things take a really long time. You brought up, you know, edge computing; obviously, it is really exciting, but it is changing really fast, and there's a lot of different capabilities out there. So if you could help us connect the dots between what P4 does and what the customers need. You know, we talked about multi-cloud and edge. What is it that P4 in general, and what Pensando is doing with P4 specifically, enables in this next generation architecture? >> Yeah, sure. So, Pensando has developed this card, which we call the DSC, the distributed services card, and it is built around an ASIC that has a very, very versatile architecture. It's fully programmable at various levels, and one of those levels is in fact P4. Now, this card has a PCIe interface, so it can be installed in hosts. And by the way, this is not the only way this powerful ASIC can be deployed; it's the first way Pensando has decided to use it. So we have this card, it can be plugged into a host, and it has two network interfaces, so it can be used as a network adapter. But in reality, because the card is fully programmable and has several processors inside, it can be used to implement very sophisticated services, things that you wouldn't even dream of doing with a typical network adapter, a typical NIC.

In particular, this ASIC contains a sizable amount of memory; right now we have two sizes, four and eight gig, but we are going to have versions of the card with even larger memory. Then it has specialized hardware for specific functions, like cryptographic functions, compression, and computation of CRCs, and a sophisticated queuing system with a packet buffer to handle the packets going out to, or coming in from, the interfaces. Then it has several types of processors. It has generic processors, specifically ARM processors, that can be programmed with general purpose languages. And then there is a set of processors that are specific to packet processing and organized in a pipeline; those are designed to be programmed with P4. We can very easily map a P4 program onto those pipelines of processors. So that's where Pensando is leveraging P4: as the language for programming the processors that allow us to process packets at the line rate of the 200 gigabit interfaces that we have on the card. >> Great. So Mario, what about from a customer viewpoint? Do they need to understand, you know, how to program in P4? Is this transparent to them? What's the customer interaction with it? >> Oh yeah, not at all. Pensando is offering a platform that is a completely turnkey solution. First of all, the platform has a controller with which the user interacts; the user can configure policies on this controller. Using an intent-based paradigm, the user defines policies, and the controller pushes those policies to the cards. So in the hosts in your data center, you can deploy thousands of those cards, and those cards implement distributed services. Let's say, just to give a very simple example, a distributed stateful firewall implemented on all of those cards. The user writes a security policy, saying this particular application can talk to this other particular application, and the controller translates it into configuration for those cards; it's transparently deployed on the cards, which start enforcing the policies. So the user can work with the system at this very high level.

However, if the user has more specific needs, the platform offers several interfaces and several APIs to program it. The one at the highest level is a REST API to the controller. So if the customer has an orchestrator, they can use that orchestrator to automatically send policies to the controller.
Or if customers already have their own controller, they can interact directly with the DSCs, the cards in the hosts, through another API that's fully open and based on gRPC, and in this way they can control the cards directly. If they need something even more specific, a functionality that Pensando doesn't offer on those cards and hasn't already written software for, then customers can program the card themselves. The first level at which they can program it is the ARM processors. We have ARM processors running a version of Linux, so customers can program them by writing C code or Python. And when they write software for the ARM processors, they can leverage the P4 code that we have already written for the specialized packet processors, so they can leverage all of the protocols that our P4 program already supports. And because that's software, they can pick and choose from a large library of the many different protocols and features we support, decide which to deploy, and then integrate them with their own software running on the ARM processors. However, if they want to add their own proprietary protocols, or if they need to execute some functionality at very high performance, that's when they can write P4 code. And even in that case, we are going to make it very simple for them, because they don't have to write everything from scratch. They don't have to worry about how to process IP packets or how to terminate TCP; we have already solved that in the P4 code for them. They can focus just on their own feature, and we are going to give them a development environment that allows them to focus on their own little feature and integrate it with the rest of our P4 program. Which, by the way, is something P4 was not designed for: P4 was not designed for having different programmers write different pieces of a program and put them together. But we have the means to enable this. >> Okay, interesting. So, you know, maybe bring us inside the P4 community a little bit; you're very active in it. When I look online, there's a large language consortium; many of, you know, all the hardware and software companies that I would expect in the networking space are on that list. So what's Pensando's participation in the community? And you were just teasing through, you know, what P4 does; what does Pensando maybe enable above and beyond what P4 just does on its own? >> Yeah, so yes, Pensando is very much involved in the community. There was recently an online event that substituted for the yearly P4 workshop; it was called the P4 expert round-table series, and Pensando had a very strong participation. Our CTO, Vipin Jain, gave the keynote speech, talking about how P4 can be extended beyond packet processing. P4, as we said, was designed for packet processing, but today there are many applications that require message processing, which is more sophisticated, and he gave a speech on how we can move in that direction. Then we had a talk, resulting from a submission that was reviewed and accepted, on the architecture of our ASIC and how it can be used to implement many interesting use cases. And finally, we participated in a panel in which we discussed how to use P4 in NICs and SmartNICs at the edge of the network.
There we argued, with some use cases and example code, how P4 needs to be extended a little bit, because NICs have different needs, and open up different opportunities, than switches. Now, P4 was never really meant only for switches, but if we look at what happened, the community has worked mostly on switches. For example, it has defined what is called the PSA, the Portable Switch Architecture. And we see that NICs and edge devices have somewhat different requirements. So one of the things we are doing within the community is working in one of the working groups, the architecture working group, to create the definition of a PNA, a Portable NIC Architecture. Now, we didn't start this activity; it had already started in 2018, but it slowed down significantly, mostly because there wasn't much of a push. Pensando coming to market with this new architecture really gave new life to the activity, and we are contributing actively: we have proposed a candidate for the new architecture, which has been discussed within the community.

And, you know, just to give you an example of why we need a new architecture: there are several reasons, but one is very intuitive. If you think of a switch, you have packets coming in, they're being processed, and packets go out. As we said before, the PSA architecture is meant for that kind of operation. If you think of a NIC, it's a little bit different, because yes, you have packets coming in, and yes, if you have multiple interfaces like our card, you might take those packets and send them out. But most likely what you want to do is process those packets and then not give the raw packets to the host, because otherwise the host CPU would have to process them, to parse them, again. You want to give some artifacts to the host, some pre-processed information. So you want to, I don't know, take those packets and, for example, reassemble many TCP segments and provide the stream of bytes coming out of a TCP connection. Now, this requires a completely different architecture: packets come in, something else goes out, and it goes out, for example, through the PCIe bus. So you need a different architecture, and then you need, in the P4 language, different constructs to deal with the fact that you are modifying memory, that you are moving data from the card to the host and vice versa. So, back to your question, how are we involved in the working groups? We are involved in the architecture working group right now to define the PNA, the Portable NIC Architecture, and I believe in the future we will also be involved in the language group, to propose some extensions to the language. >> Excellent. Well, Mario, thank you so much for giving us a deep dive into P4, where it is today, and some of the potential directions for where it will go in the future. Thanks so much for joining us. >> Thank you. >> Alright. I'm Stu Miniman. Thank you so much for watching theCUBE. (gentle music)

Published Date : Jun 17 2020


Chris Aniszczyk, CNCF | KubeCon 2018


 

>> From Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Okay, welcome back everyone. Live here in Seattle for KubeCon CloudNativeCon 2018, with theCUBE's coverage. I'm John Furrier for Stu Miniman. We've been here from the beginning, watching this community grow into a powerhouse. Almost Moore's Law-like growth, doubling, actually, every six months, if you think about it. >> Yeah, it's pretty wild. >> Chris Aniszczyk, CTO and COO of the CNCF, the Cloud Native Computing Foundation, great to see you again. Thanks for coming on. >> Super stoked to be here. Thank you for being with us since the beginning. >> So it's been fun to watch you guys. CNCF has done an exceptional job, I thought, a fabulous job, of building out a great community, with the open-source community as the main persona target, but bringing in the vendors on terms that really work for open-source. The Linux Foundation has done a great job shepherding this thing through, and now you have, basically, what looks like a conference. >> Yeah. >> An end-user conference, vendors are here, and still the open-source is pure. The growth has been phenomenal. Just take a minute to give us the update on some of the stats. Massive growth. >> Yeah, sure. I mean, you know, we're 8,000 people here today, which is absolutely wild. What's actually crazy is that when we planned this event, it was about two years ago, when we had to start booking a venue and figuring out how many people might be here. And two years ago we thought 5,000 would have been a fantastic number. Well, we got to 8,000, and we have about 1,500 to 2,000 people on the wait list that could not get in. So, obviously we did not plan properly, but sometimes it's hard to predict the uptake of technology these days; things just move quickly. I think we've benefited from the turnaround that's happening in the industry right now, where companies are finally looking to modernize their infrastructure, whether it's moving to the cloud or just modernizing things. And that's happening everywhere, from traditional enterprises to internet-scale companies. Everyone's looking to modernize things, and we're at the forefront of that. >> I mean, the challenge of events is, some of it is provisioning. Over-provision, you don't show up; you want elastic, dynamic, agile-- >> I want Cloud Native events. >> Programmable space that could just auto-scale when you need it. >> Exactly. >> All kidding aside, congratulations on the success. But one thing we've been covering on SiliconANGLE and theCUBE, and you guys have actually been executing on, is the growth of open-source in China. It's been around for a while, but just the scale, the pure numbers. Tell us about the success in China and the impact on the open-source community and business. >> Yeah. We put on our first event in Shanghai, KubeCon China. It was fantastic; we sold out at 2,500 people. It's always a little bit difficult to do your first event in China, and I have many stories to share on that one, but the amount of scale, in terms of software deployment there, is just fascinating. You have these companies like ofo, which is a bike-sharing system, right. You know, in China they have hundreds of millions of these bicycles that they have to manage in an infrastructural way. The software that you use to actually do that has to be built very well.
And so the trend that we're actually seeing in CNCF now, it's about 10%: we have three projects that were born in China, dealing with China-scale problems. One of those projects is TiKV, which is a very finely tuned distributed key-value store that is used by a lot of the Chinese com providers, and by folks like ofo and LME out there that are dealing with hundreds of millions of users. It's fascinating. I think the trend you're going to see in the future is more technology born dealing with China-scale issues, with those lessons being shared with the rest of the world, and collaboration around them. One of the goals in CNCF for us is to help bridge these communities. In China, about 25% of our attendance was international, which was higher than we expected, and we had dual live simultaneous translation for everyone, to try to bridge these... >> It's a big story. The consumption and the contribution side is just phenomenal. >> China is our number two contributor to all CNCF projects; it's very impressive, in my opinion. >> So Chris, there was a lot in the keynote. I wondered, give us a little insight: it's different for a foundation in open-source communities than it is for a company, when you talk about the core product being Kubernetes and then all these other projects. You've got the incubating projects, the ones that have been elevated, now etcd comes into it; how do you do the juggling act of this? >> Honestly, the whole goal of the foundation is basically to cultivate, sustain, and grow the projects that come in. Some are going to work and be very successful; some may never leave the sandbox, which is our early stage. So today I was very excited to finally have etcd come in as an official incubating project. This is our 31st project, which is a little bit wild, since when we started it was just Kubernetes. We had other projects that moved from, say, sandbox to incubating. In China, one of our big announcements was Harbor, which is a container registry, or actually, technically, we call it a cloud native registry, because it supports things like Helm charts; it doesn't only host container-based artifacts. It moved up to the incubating level, and it is being embedded: it's in all of Cloud Foundry's and Pivotal's products, and it's used by some cloud providers in China as their kind of registry as a service, like their equivalent to ECR or GCR, essentially. And we've just seen incredible growth across all of our projects. I mean, we have three graduated projects, Envoy most recently, which you saw Matt, Constance, and Jose on stage a little bit to talk about. To me, what I really like about Envoy and Prometheus is that these are two projects that were not born from a vendor. You know, Envoy came from Lyft, because they were just like, you know what? We're not happy with our current reverse proxy, service proxy situation; let's build our own, open-source it, and share our lessons. Prometheus was born from SoundCloud. So I think CNCF has a good mix of, hey, we have some initial vendor-driven projects, like Kubernetes, which came from Google but is now used by a ton of people, but then you have other projects that were born from the end-user community. I think having that healthy mix is good for everyone. >> I think that DNA, early on in the culture, has been a successful one for you guys. Not being vendor-led, being end-user led, but vendors can come in and participate. >> Yeah, absolutely.
>> So talk about the end-user perspective, because we're very interested, a lot of people are interested, in end-users. What are they doing with it? It used to be a joke: I stood up a bunch of Hadoop, but what are you using it for? What are people using Kubernetes for? You've got Apple, Uber, Capital One, Comcast, GoDaddy, Airbnb; they're all investing in Kubernetes as their main stack. >> And CNCF projects, not only Kubernetes. >> But what does that mean when they say Kubernetes as a stack? It's kind of been encapsulated to include other things. People are looking at this as a real alternative. Can you explain what that is about? >> So, I think people have to realize that CNCF is essentially more than just Kubernetes. Cloud Native is more than just Kubernetes. What we'll see is, take a company like Lyft. Lyft did not start with Kubernetes; they're kind of on that migration path now, but Lyft started by using Envoy, Prometheus, gRPC, other technologies that led them on that Cloud Native journey, until eventually they're like, you know what? Maybe we don't need our homegrown orchestrator; we'll go use that. (huffs) Everyone falls in differently in the community. Some start with Kubernetes and eventually subsume the other ancillary projects. >> This is what the project cloud is about. Let me rephrase the question. So when people say, because this is a real trend we've been reporting on, the CNCF stack, people have language semantics on how that's couched. Oh, on the Kubernetes-- >> I don't like "stack," because it implies there's one prescribed solution, where I think it's more like an a la carte model. >> Well, if I quote the "CNCF stack," if there were a word for it, as an alternative, as a solution base with Kubernetes at the core of it, right. Okay, cool. What does that usage look like? How is that developing? How are end users looking at the CNCF holistically, with Kubernetes at the core? >> So we have one of the largest end-user communities out there of any open-source foundation; we have about 80 members. When we talk to them directly about why they're adopting CNCF projects and technology, most of the time it's that they want to deploy software faster, right? They want to use modern CI/CD tools and development patterns. So it's all about faster time to market and making developers' lives easier, so they're actually able to deliver business and customer value. It's basically similar to the whole DevOps mantra: if I can ship software faster, and it's easier for my developers to get stuff done, I'm delivering value to whatever my end-user customer is at the end of the day. If you go to the CNCF end-user website, we have case studies from Nordstrom, Capital One, I think Lyft is there; just a bunch of people saying, we moved to these technologies because they improved the way we could monitor software and how fast we could ship. It's all about faster time to market and modernizing their infrastructure. >> Chris, give us a little bit of a view coming forward. We're on 1.13 for Kubernetes, if I read it right. Contribution has slowed down a little bit because we're actually reaching a level of maturity. >> Kubernetes is boring and mature. >> What do you see coming, other than continued growth? >> So I think the wider ecosystem is going to continue to grow. If you look at Kubernetes directly, it has been very focused on moving things out of the core as much as possible and trying to push people to extend things.
I don't know if you saw, but Tim Hockin had this great talk on how all the Kubernetes components are either being ripped out or turned into custom resource definitions, or CRDs, basically trying to make Kubernetes as extensible as possible. Instead of trying to ram things into Kubernetes: hey, use the built-in extensibility layer. >> Decompose a little bit. >> Decompose, and the analogy here would be kernel space versus user space in Linux. All the exciting things tend to happen in user space these days, but, yeah, the kernel is still important, actively contributed to by a ton of people, very critical, everything. But a lot of the action happens in user space. And I think you'll see the same thing with Kubernetes, where it will become like Linux: the kernel of Kubernetes very stable, mature, focused on basically not breaking, trying to keep things as simple as possible, and building good extensibility mechanisms so folks can plug in whatever systems they need. We saw this with storage in Kubernetes. A lot of the initial storage drivers, the flex volume stuff, were baked into Kubernetes; with a new effort called the Container Storage Interface, they pulled all that out and basically built an extensibility mechanism so any company or any project can bring in their own storage solution.
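To illustrate the extensibility mechanism described here, below is a minimal sketch of a CustomResourceDefinition; the group, kind, and field are hypothetical, and the manifest is written in the apiextensions.k8s.io/v1 shape, whereas clusters of this era (late 2018) would have used apiextensions.k8s.io/v1beta1:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # must be <plural>.<group>
spec:
  group: example.com                 # hypothetical API group
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:               # Kubernetes validates Widget objects against this schema
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer        # illustrative field

Once a definition like this is registered, kubectl treats Widget objects like any built-in resource, which is the "user space" extension point described above.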
>> One of the key trends we're seeing, obviously, in cloud is automation. We see serverless around the corner; you see all these things going on around the cool stuff you guys are building. As automation continues to move down the track, where is that going to impact and create value for customers and end-users as they roll with the CNCF? Kubernetes at some point could be automatic; why even be managing clusters? Well, that should be automated at some point. >> I mean, hey, you could do it both ways. A lot of people love the managed service approach: if I can pay a large hyper-scale cloud provider to manage everything, the more the merrier. Some want the freedom to roll their own. Some may want to pay a vendor; I don't know, Red Hat OpenShift looks great, let's pay them to help manage it. Or, I just roll alone. And we've seen it all. It really depends on the organization. We've seen some very high-end banks or financial institutions that have very good technical chops, and they're okay rolling on their own. Some may not be as interested in that and just pay a vendor to manage it. >> It's a choice issue. >> For us it's all goodness; whatever you prefer. I think longer term we'll see more people go the managed services route, just for the convenience. But for CNCF Kubernetes there are multiple ways to do it: you can go vanilla, you can go managed service, or you can go through a vendor like Rancher or OpenShift. The cool thing about all these choices is that they all conform to the Certified Kubernetes program, so there's no breakage or forking; everyone is compliant. >> So, for the people that are watching who couldn't make it here, or are on the waiting list, or are doing LobbyCon. >> I'm sorry, I'm sorry about the waiting list. >> This is actually a good venue to do LobbyCon; there are places to meet here, and I know a lot of people in town are kind of LobbyCon-ing it. But for the people who aren't here, what's the most important story being told? What is happening here? What should people know about this year? In your mind's eye, in your understanding of the program and how it's developed from early on, what's the most important thing? >> I think in general CNCF, Cloud Native, and Kubernetes have all matured a lot in the last three years, especially the last 12-18 months. Earlier it was all about technically savvy folks scratching their itch. Now, the end-users that I'm talking to, you have, like, Maersk. What does Maersk do? They actually ship containers, right? But now they're using Kubernetes to manage containers on the containers. >> They're in the container business. >> I'm seeing traditional insurance companies. So I think we're basically past that threshold of early adopters and tinkerers, and now we're moving to full-blown mainstream adoption. Part of that is that the cloud providers are all offering managed Kubernetes, so it's convenient for companies that are moving to the cloud. And then on the distro front, OpenShift, PKS, Rancher, they're all mature products. So there's just a lot of stability and maturity in the ecosystem. >> Just talking about the mature stuff, give us your take on Knative. How should people be looking at that? How does serverless fit into all this? >> So serverless, you know, we love serverless in CNCF. We just view it as another kind of programming model that eventually runs on some type of containerized stack. At CNCF, we have a serverless working group that's been putting out whitepapers, and we have a spec, CloudEvents, around standardizing cloud events. I think Knative is a fantastic approach to basically building, kind of like CNCF, a set of components that you can use to build your own serverless framework. I think the adoption has been great. We've actually been talking to them about potentially bringing some components of Knative into CNCF. I think, if you want to provide your own serverless offering, you're going to need the components in Knative to make that happen. I've seen SAP pick up on it, and GitLab just announced a serverless offering based on Knative today. I think it's a great technology, but it's still very early days. Serverless is great and will continue to be used, but it's one option of many: we're going to have containers, we're going to have serverless, we're going to have mainframes. It's going to be a mix of everything.
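For readers who want to see what the Knative approach looks like in practice, here is a minimal sketch of a Knative Serving Service; the name and image are placeholders, and the serving.knative.dev/v1 API version shown here postdates this late-2018 conversation, when Knative was still on alpha API versions:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                             # hypothetical service name
spec:
  template:
    spec:
      containers:
      - image: example.com/hello:latest   # placeholder container image
        env:
        - name: TARGET
          value: "world"

Knative scales a service like this down to zero when idle and back up on demand, which is the kind of serverless behavior, built from reusable components, that the discussion above refers to.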
>> I'm old enough to remember the old client-server days, when multi-vendor was a big buzzword. Multi-cloud now is a subtext here. I think one of the big stories in this issue of maturity is that you're starting to see people say, I want choice. And hybrid cloud is the word today, but I think ultimately people view it as a multi-cloud environment of resources. >> So one interesting thing about KubeCon, and I think one of the reasons we've grown so much, is that there's really no other event you can go to that is truly multi-cloud. You have all the hyperscale folks, you've got your end-users and vendors, all in one area, right? Versus going to a vendor-specific event. So I think that's been part of our benefit, and then luck, to kind of stumble into this, where everyone is in the same room. I think next year, big push on bringing in all the clouds. >> Well, Chris, thanks for spending the time. I know you're super busy. CTO and COO of the CNCF, really making things happen. This is a real, important technology wave, cloud computing, and having this kind of choice in the ecosystem around open-source is making it happen. Congratulations on your success. We're going to continue coverage here. Day one of three days of CUBE coverage. I'm John Furrier for Stu Miniman. Stay with us for more after this short break. (light music)

Published Date : Dec 11 2018

Simon Wardley, Leading Edge Forum | ServerlessConf 2018


 

>> From the Regency Center in San Francisco, it's theCUBE covering Serverlessconf San Francisco 2018, brought to you by SiliconANGLE Media. >> I'm Stu Miniman and you're watching theCUBE's coverage of Serverlessconf 2018 here in San Francisco at the Regency ballroom. I'm happy to welcome back to the program Simon Wardley, who's a researcher with the Leading Edge Forum. I spoke with you last year at Serverlessconf in New York City, and thanks for joining me again here in San Francisco. >> Absolute pleasure, nice to be back. >> Alright, so many things have changed, Simon, we talked off camera and we're not going into it, your wardrobe stays consistent >> Always. >> But, you know, technology tends to change pretty fast these days. >> Mhmm. >> You do a lot of predictions, and I'm curious, starting out, when you think about timelines and predictions, how do you deal with the pace of change when you put things out? I hear it from CTOs, like, well, if I put a 10-year forecast down there, I can be off on some of the twists and curves and still hit closer to the mark. Give us some of your thoughts as to how you look out and think about things when we know it's changing really fast. >> Okay, okay, so there are a number of different comments in there, one about how you do predictions, one about the speed of change, okay? So I'm going to start off with the fact that one of the things I use is maps. And maps are based on a couple of characteristics. Any map needs an anchor; in the case of the maps of business that I do, that's the user, and often the business, and often regulators. You also need movement and position in a map. Position is relative to the anchor: in a geographical map, if you've got a compass, then this piece is north, south, east or west of that. In the sort of maps that I do, it's the value chain which gives you position relative to the user or the business at the top. Movement: in a geographical map you have consistency of movement, so if I go, I don't know, north from England I end up in Scotland. You have the same thing with a business map, but that movement is described by evolution. So what you have is the genesis of novel and new activities, custom-built examples, products and rental services, and commodity and utility services, and that's driven by supply and demand competition. Now, that evolution axis, in order to create it, you have to abolish time. So one of the problems when you look at a map is there is no easy use of time in a map. You can have a general direction, and then you have to use weak signals to get an idea of when something is likely to happen. So for example, if I take nuts and bolts, they took 2,000 years to go from genesis to commodity; electricity was 1,400 years from genesis to commodity utility; computing, 80 years. So, there are weak signals that you can use to identify roughly when something is going to transition, particularly between stages like product to commodity. Product-to-product substitution is very unpredictable; with the genesis of novel activities, you can usually say when stuff might appear, but not what is going to appear, because that space is what we call the uncharted, the unexplored space. So, one of the problems is that time is an extremely difficult thing to predict without the use of weak signals. The second thing is the pace of change. Because what happens is components evolve, and when we see them shift from product to more commodity and utility, we often see a big change in the value chains that they impact.
And you can get multiple components evolving, and they overlap, and so we feel that the pace is very, very fast, despite the fact that it actually takes about 30 to 50 years to go from genesis to the point of industrialization, becoming a commodity, and then about 10 to 15 years for that to actually happen. So if you look at something like machine learning, we can start with it back in the '70s; 3D printing, 1968, the Battelle Institute; virtual reality, back in the 1960s as well. So the problem is, one, time's very difficult. The only way to effectively manage time is to use weak signals; it's probability. The second thing is the pace of change is confusing, because what we're seeing is overlapping points of industrialization, like for example cloud, and what's going on here with serverless. That doesn't actually imply that things are rapidly changing, because you've actually got this overlapping pattern. Does that make sense? >> Yes, it does actually. >> Perfect. >> Because in hindsight we always think that things happened a lot faster but-- >> Yeah. >> It's funny, in the infrastructure space, when I talk to some of the people that I came up with, they're like, oh yeah, come on, we did this on the mainframe decades ago, and now we're trying again. Things like-- >> Containers, for example, you've got LXC before that, and we had Solaris Zones before that, so it's all sort of interconnected together. >> Okay, so tie this into serverless for us. >> Okay. >> You were a rather big proponent of Platform as a Service. Is this a continuation of us trying to get that abstraction of the application, or is it something else? What is the map we are on, and, you know, help us connect things like PaaS and serverless and that space. >> So back in 2005, the company I ran, we mapped out our value chain, and we realized that compute was shifting from product to utility. Now that had a number of impacts. A, that shift from product to utility tends to be exponential; people have inertia due to past practice; and you see a co-evolution of practice around the changing characteristic, normally to do with something called MTTR, mean time to recovery, changing. And so you see rapid efficiency, rapid speed of development, being able to build new sources, new areas of value. So that happened with infrastructure, and we also knew it was going to happen with platform, which is why we built something called Zimki, which was a code execution environment, totally stateless, event-driven, utility billing, and billing to the function, and that was basically a shift of the code execution platform from a product, the LAMP and .NET stacks, to a much more utility form. Now we were way too early, way too early, because the educational barriers to get people into this idea of building with functions, functional programming, a much more declarative environment, were really different. I mean, when Amazon launched EC2 in 2006, that was a big enough shock for everybody else, and now of course, in 2014, Lambda represents that shift, and the timing's much, much better. Now the impact of the shift is not only efficiency and speed of development of new things, and being able to explore new sources of value, but also a change of practice, and in the past, a change of practice created DevOps; this is likely to create a new type of practice. For us, we've also got inertia to change because of pre-existing systems and governance and ways of working, sunk capital, physical capital, social capital.
So it's all perfectly normal. So in terms of being able to predict these types of futures, well for me, actually, Lambda's my past, because that's where we were. It's just the timing was wrong, and so when it came out, for me it was like, this is really powerful stuff, and the timing is much better, and we're seeing it here, it's now really starting to grow. >> Alright, you've poked a little bit at some of the container discussions going on in the industry. You know, I look at the ecosystem here, and of course AWS is the big player, but there's lots of other serverless offerings out there. There's discussion of multi-cloud. >> Yeah. >> How do things like Kubernetes, and this new Knative project that was just announced, fit in? I don't expect that you've dug in too deeply, but if you look at containers and Kubernetes and serverless, do these combine, intersect, fight? How do you see this playing out? >> So when I look at the map, you know, you've got the code execution layer, the framework, which has now become more of a utility, and that's what we call platform. The problem is, people applied the platform term to containers, and therefore described their environments as application-container platforms, and the platform term became really messy; it basically meant everything, okay. But if we break it down into code execution, this is what we call frameworks, this is becoming utility, this is where things like Lambda are; underneath that are all these other components like operating systems, and containers, and container management, Kubernetes-type systems. So if you now look at the value chain, the focus is on building applications, and those applications need functions, and then lower down the stack are all these other components. And those will tend to become less visible over time. It's a bit like your toaster. I mean, your toaster contains nuts and bolts and all sorts of things, do you care? Have you ever noticed? Have you ever broken one open and had a look? >> Only if something's not working right. >> (laughs) Only if something, maybe; a lot of people these days wouldn't even go that far, they'd just go and buy themselves a new toaster. The point is, what happens is, as layers industrialize, the lower-order systems become much less visible. So, containers, I'm a big fan of containers. I know Solomon and the stuff in Docker, and I take the view that they are an important but invisible subsystem, and the same with container management and things like Kubernetes. The focus has got to be on the code execution. Now when you talk about Knative, I've got to say I was really excited with Google Next last week, with their announcements like functions going GA, I thought that was really good. >> We've been hoping that it would have happened last year. >> Yeah exactly, I wanted this before, but I'm really pleased they've got functions coming out GA. There was some really interesting stuff around Istio, and there was the gRPC stuff, which is, I think, a hidden gem. In terms of the Knative stuff, really interesting stuff there in terms of demos; not something I've played with, I'm sort of waiting for them to come out with Knative as a service, rather than, you know, having to build your own. I think there was a lot of good and interesting stuff. The only criticism I would have is that the emphasis wasn't so much on serverless code execution building; it was too focused on the lower-end systems. But the announcements are good. Have I played with Knative? No, I've just gone along and seen it.
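Simon describes Zimki and Lambda as stateless, event-driven code execution billed to the function. As a minimal sketch of that programming model, assuming a Lambda-style (event, context) handler convention, with the event payload invented for illustration:

```python
# All state arrives in the event, nothing persists between invocations,
# and the platform bills per invocation. The (event, context) signature
# mirrors the Lambda-style convention; the event fields are hypothetical.
import json

def handler(event, context):
    # No module-level state is read or written: given the same event,
    # the function always produces the same result.
    order = event["order"]                      # hypothetical payload shape
    total = sum(item["qty"] * item["price"] for item in order["items"])
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order["id"], "total": total}),
    }

# Local smoke test standing in for the platform's event delivery.
if __name__ == "__main__":
    fake_event = {"order": {"id": "o-1",
                            "items": [{"qty": 2, "price": 4.5}]}}
    print(handler(fake_event, context=None))
```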
Have I played with canative? No, I've just gone along and seen it. >> So Simon, the last question I have for you is, we spoke a year ago today, what are you excited about that's matured? What are you still looking for in this space, to really make the kind of vision you've been seeing for a while become reality, and allow serverless to dominate? >> So, when you get a shift from, say, product to utility, you get this co-evolution of practice, this practice is always novel and new. It starts to emerge, and gets better over time. The area that I think we're going to see that practice is the combining of finance and development, and so when you're running your application, and your application consists of many different functions, it's being able to look at the capital flow through your application, because that gives you hints on things like what should I refactor? Refactoring's never really had financial value. By exposing the cost per function and looking at capital flow, it's suddenly does. So, what I'm really interested in is the new management practices, the new tooling around observing capital flow, monitoring, managing capital flow, refactoring around that space and building new business models. And so there's a couple of companies here with a couple of interesting tools, it's not quite there yet, but it's emerging. >> Well, Simon Wardley, really appreciate you. >> Oh, it's a delight! >> Mapping out the space a little bit, to understand where things have been going. >> Absolute pleasure! >> And thank you so much, for watching as always, theCUBE. (upbeat music)

Published Date : Aug 2 2018


SENTIMENT ANALYSIS:

ENTITIES

Entity                               Category       Confidence
2014                                 DATE           0.99+
Scotland                             LOCATION       0.99+
2006                                 DATE           0.99+
Simon                                PERSON         0.99+
Simon Wardley                        PERSON         0.99+
Amazon                               ORGANIZATION   0.99+
San Francisco                        LOCATION       0.99+
Battelle Institute                   ORGANIZATION   0.99+
2005                                 DATE           0.99+
1968                                 DATE           0.99+
1,400 years                          QUANTITY       0.99+
80 years                             QUANTITY       0.99+
Stu Miniman                          PERSON         0.99+
last year                            DATE           0.99+
10 year                              QUANTITY       0.99+
New York City                        LOCATION       0.99+
Lambda                               TITLE          0.99+
a year ago                           DATE           0.99+
2,000 years                          QUANTITY       0.99+
AWS                                  ORGANIZATION   0.99+
second thing                         QUANTITY       0.99+
England                              LOCATION       0.99+
SiliconANGLE Media                   ORGANIZATION   0.99+
one                                  QUANTITY       0.98+
decades ago                          DATE           0.97+
Serverless                           ORGANIZATION   0.97+
Solomon                              PERSON         0.96+
Istio                                TITLE          0.96+
Google                               ORGANIZATION   0.96+
Serverlessconf 2018                  EVENT          0.95+
50 years                             QUANTITY       0.92+
GA                                   LOCATION       0.92+
EC2                                  TITLE          0.91+
about 10                             QUANTITY       0.9+
Serverlessconf San Francisco 2018    EVENT          0.9+
about 30                             QUANTITY       0.9+
15 years                             QUANTITY       0.9+
2018                                 DATE           0.89+
Regency                              LOCATION       0.89+
Next last week                       DATE           0.88+
Kubernetes                           TITLE          0.87+
theCUBE                              ORGANIZATION   0.82+
Regency Center                       LOCATION       0.81+
'70s                                 DATE           0.78+
1960s                                DATE           0.75+
Serverless                           TITLE          0.74+
Lambda                               ORGANIZATION   0.72+
DevOps                               TITLE          0.66+
GRPC                                 ORGANIZATION   0.63+
Zimki                                ORGANIZATION   0.62+
Docker                               TITLE          0.61+
Solaris                              TITLE          0.61+
couple of companies                  QUANTITY       0.61+
couple                               QUANTITY       0.59+
Multicloud                           ORGANIZATION   0.56+
Leading                              ORGANIZATION   0.56+
today                                DATE           0.5+
Leading                              EVENT          0.49+
LXC                                  ORGANIZATION   0.47+
Forum                                EVENT          0.41+
Knative                              ORGANIZATION   0.39+

Brian Stevens, Google Cloud - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE


 

>> Narrator: Live from Boston, Massachusetts, it's theCUBE, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Hi, welcome back, I'm Stu Miniman, joined by my cohost John Troyer, and happy to welcome back to the program Brian Stevens, who's the CTO of Google Cloud. Brian, thanks for joining us. >> I'm glad to, it's been a few years. >> All right, I wanted to bounce something off you. We always talk about, you know, open source. You worked in the past for what is widely considered the most successful company at monetizing open source, which is Red Hat. We have posited at Wikibon that it's not only the companies that sell a product or a solution that make money off it; if it wasn't for things like Linux and open source in general, we wouldn't have a company like Google. Do you agree with that? You look at the market cap of a Google; I said if we didn't have Linux and we didn't have open source, Google probably couldn't exist today. >> Yeah, I don't think any of the hyperscale cloud companies would exist without open source and Linux and Intel. I think it's a big part of the stack, absolutely. >> All right. You made a comment at the beginning about what it means to be an open source person working at Google. The joke we all used to make was that the rest of us are using what Google did 10 years ago; it eventually goes from that whitepaper all the way down to some product that was used internally and then maybe gets spun off. We wouldn't have Hadoop if it wasn't for Google. Just some of the amazing things that have come out of the people at Google. But what does it mean to be open source at Google and with Google? >> You get both, right? 'Cause I think that's the fun part: I don't think a week goes by where I don't get to discover something coming out of a research group somewhere. Now the latest is machine learning, you know, Spanner, because they'd learned how to do distributed time synchronization across geo data centers, like who does that, right? But Google has both the people and the desire and the ability to invest on the research side. And then you marry that innovation with everything that's happening in open source. It's a really perfect combination. And so instead of building these proprietary systems, it's all about how do we actually not just contribute to open source, but how do we actually build that interoperability framework, because you don't want cloud to be an island; you want it to be really integrated into developer tools, databases, infrastructure, et cetera. >> And a lot of that sounds like it plays into the Kubernetes story, 'cause, you know, Kubernetes is a piece that allows some consistency wherever you place your workloads. Maybe give us a little bit more about how Google decides what stays internal; I think about the Spanner program, where there are some other open source pieces coming up; it looks like they read the whitepaper and they're trying to do some pieces. You said less whitepapers, more code coming out; what does that mean? >> It's not that we'll do fewer whitepapers, 'cause whitepapers are great for research, and Google's definitely a research-strong, academically oriented company. It's just that you need to go further as well.
So that was, you know, what I was talking about, like with gRPC, and creating an Apache project for streaming analytics, right; I think that was the first time Google's done that. Obviously, we've been involved for years in the Linux kernel, compilers, et cetera. I think it's more around what developers need, where we can actually contribute, because what you don't want, what we don't want, is that you're on premise and you're using one type of system, then you move to Google Cloud and it feels like there's impedance. You're really trying to get rid of the impedance mismatch all the way across the stack, and one of the best ways you can do that is by contributing new system designs. There's a little bit less of that happening in the analytics space now, though; I think the new ground for that is everything that's happening in machine learning with TensorFlow, et cetera. >> Yeah, absolutely. There was some mention in the keynote this morning of all the AI and ML, I mean, Google with TensorFlow, even Amazon themselves getting involved more with open source. You said you couldn't build the hyperscalers without them, but do they start with open source, do you see, or? >> Well, I think that most people are running on a Linux backplane. It's a little bit different in Google 'cause we've got an underlying provisioning system called Borg. And that just works, so some things work, don't change them. Where you really want to be open source first is in areas that are under active evolution, because then you can actually join that movement of active evolution. Developer tools are kind of like that. Even machine learning. Machine learning's super strategic to just about every company out there. But what Google did by actually open sourcing TensorFlow is they created a canvas, that community, we talk about that here, for data scientists to collaborate, and these are people that didn't do much in open source prior, but you've given them that ability to sort of come up with the best ideas and to innovate in code.
And then what's happening with machine learning and some of the higher end services is now you're actually building solutions for lines of business. So you're not talking to the IT teams with machine learning and you're not talking to the CSOs, you're really talking around business transformation. And when you're actually, if you're going into healthcare, if you're going into financial, it's a whole different team when you're talking about machine learning. So what happens is Google's really got a segmented three sort of discreet conversations that happen at separate points of time, but all of which are enterprise focused, 'cause they all have to marry together. Even though there may be interest in machine learning, if you don't wrap that in an enterprise security model and a way that IT can sustain and enable and deal with identity and all the other aspects, then you'll come up short. >> Yeah. Building on that. One of the critiques of OpenStack for years has been it's tough. I think about one of the critiques of Google is like, oh well, Google build stuff for Google engineers, we're not Google engineers, you know, Google's got the smartest people and therefore we're not worthy to be able to handle some of that. What's your response to that? How do you put some of those together? >> Of course, Google's really smart, but there's smart people everywhere. And I don't think that's it. I think the issue is, you know, Google had to build it for themselves, right, they'd build it for search and build it for apps and build it for YouTube. And OpenStack's got a harder problem in a way, when you think about it, 'cause they're building it for everybody. And that was the Red Hat model as well, it's not just about building it for Goldman Sachs, it's building it for every vertical. And so it's supposed to be hard. This isn't just about building a technology stack and saying we're done, we're going to move on. This community has to make sure that it works across the industry. And that doesn't happen in six years, it takes a longer period of time to do that, and it just means keeping your focus on it. And then you deal with all the use cases over time and then you build, that's what getting to a unified commoditized platform delivers. >> I love that, absolutely. We tend to oversimplify things and, right, building from the ground up some infrastructure stack that can live in any data center is a big challenge. I wrote an article years ago about Amazon hyperoptimizes. They only have to build for one data center, it's theirs. At Google, you understand what set of applications you're going to be running, you build your applications and the infrastructure supports it underneath that. What are some of the big challenges you're working on, some of the meaty things that are exciting you in the technology space today? >> In a way, it's similar. In a way, it's similar, it's just that at least our stack's our stack, but what happens is then we have to marry that into the operational environments, not just for a niche of customers, but for every enterprise segment that's out there. What you end up realizing is that it ends up becoming more of a competency challenge than a technology issue because cloud is still, you know, public cloud is still really new. It's consolidating but it's still relatively new when you start to think about these journeys that happen in the IT world. So a lot of it for us is really that technical enablement of customers that want to get to Google Cloud, but how do you actually help them? 
And so it's really a people and processes kind of conversation over how fast is your virtual machine. >> One of the things I think is interesting about that Google Cloud that has developed is the role of the SRE. And Google has been, has invented that, wrote the book on it, literally, is training others, has partnerships to help train others with their SREs and the CRE program. So much of the people formerly known as sysadmins, in this new cloud world, some of them are architects, but some of them will end up being operators and SREs. How do you see the balance in this upscaling of kind of the architecture and the traditional infrastructure and capacities and app dev versus operations, how important is operations in our new world? >> It's everything. And that's why I think people, you know... What's funny is that if you do this code handoff where the software developers build code and then they hand it to a team to run and deploy. Developers never become great at building systems that can be operationally managed and maintained. And so I think that was sort of the aha moment, as the best I understand the SRE model at Google is that until you can actually deliver code that can be maintained or alive, well then the software developer owns that problem. The SRE organization only comes in at that point in time where they hand up their, and they're software developers. They're every bit as skilled software developers as the engineers are that are building the code, it's just that's the problem they want to decode, which I think is actually a harder problem than writing the code. 'Cause when you think about it for a public cloud, its like, how do you actually make change, right, but keep the plane flying? And to make sure that it works with everything in an ecosystem. At a period of time where you never really had a validation stage, because in the land of delivering ISV software, you always have the six month, nine month evaluation phase to bring in a new operating system or something else, or all the ecosystem tests around that. Cloud's harder, the magic of cloud is you don't have that window, but you still have to guarantee the same results. One of the things that we did around that was we took the page out of the SRE playbook, which is how does Google do it, and what we realized is that, even though public cloud's moved the layers up, enterprises still have the same issue. Because they're deploying critical applications and workloads on top. How do they do that and how do they keep those workloads running and what are their mechanisms for managing availability, service level objectives, share a few dashboards, and that's why we created the CRE team, which is customer reliability engineering, which is a playbook of SRE, but they work directly with end users. And that's part of the how do we help them get to Google Cloud, part of it's like really understanding their application stacks and helping them build those operational procedures, so they become SREs if you will. >> Brian, one of the things I, if you look at OpenStack, it's really, it's the infrastructure layer that it handles, when I think about Google Cloud, the area that you're strongest and, you know, you're welcome to correct me, but it's really when we talk about data, how you use data, how analytics, your leadership you're taking in the machine learning space. Is it okay for OpenStack to just handle those lower levels and let other projects sit on top of it? And curious as to the developing or where Google Cloud sits. 
>> I think that was a lower-level aha moment for me, even prior to Google: I did have a lens, and it was all about infrastructure. And I think the infrastructure is every bit as important as it ever was. But some of these services that don't exist in the on-premise world and live in Google Cloud are the ones that bring transformative change, as opposed to just easing the operational burden or the security burden. It's some of these add-on services that really bring about business transformation. The reason we have been moving away from Hadoop, as an example, not entirely, is just that Hadoop's a batch-oriented application. >> Could go to Spark, Flink, everything beyond that. >> Sure, and also now when you get to real time and streaming, you can have ingest data pipelines, with data coming from multiple sources. But then you can act on that data instantly, and a lot of businesses require that, or ours certainly does and I think a lot of our customers' businesses do; the time to action really matters, and those are the types of services that, at least at scale, don't really exist anywhere else, and machine learning, the ability of our custom ASICs to support machine learning. But I don't think it's one versus the other; I think that brings about how you allow enterprises to have both, and not have to choose between public cloud and on premise, or doing (mumbles) services or (mumbles) services, because if you ask them, the best thing they can have is actually marrying the two environments together, so they don't run, again, into those impedance differences. >> Yeah, and I think that's a great point; we've talked a bunch about OpenStack fitting into that hybrid or multi-cloud world. The challenge I guess we look at is some of those really cool features that are game changers, that I have in public cloud but can't do in my own data center; how do we bridge that? We've started to see the reach or the APIs that do that, but how do you see that playing out? >> Because you don't have to bring them in. If you think about the fabric of IT, Google's data center in that way just becomes an extension of the data center that a large enterprise is already using anyway. So it's through us. So they aren't drawing those lines of distinction; only we and sort of the IT side see that. It isn't going to be seen, as long as they have an existing platform and they can take advantage of those services, and it doesn't mean that their workload has to be portable and the services have to exist in both places; it's just a data extension with some pretty compelling services. >> I think back, you know, Hadoop was let me bring the compute to the data, 'cause the data's big and can't be moved. Look at edge computing now: I'm not going to be able to move all that data from the edge, I don't have the networking connectivity. There's certain pieces which will come back to, you know, a core public cloud, but I wonder if you can comment on some of those edge pieces, how you see that fitting in? We've talked a little bit about it here at OpenStack Summit, but 'cause you're Google... >> I think it's the evolution. When we look at just the edge of our network, the edge of our network is in 173 countries and regions globally. And so that edge of the network is full compute and caching. And so even for us, we're looking at what sort of compute services you bring to the edge of the network.
We're like, low latency really matters and proximity matters. The easiest obvious examples are gaming, but there's other ones as well, trading. But still, if you want to take advantage of that foundation, it shouldn't be one where you have to dive into the specificities of a single provider; you'd really want that abstraction layer across the edge, whether that's Docker and a defined set of APIs around data management and delivery and security. That probably gives you that edge computing play, and then you really want to build around that on Google's edge, you want to build around that on a telco's edge. So I don't think it's necessarily about whether it's centralized or it's the edge; it's really about what that architecture delivers. >> All right. Brian, I want to give you the opportunity for a final word, things either from OpenStack, retrospectively, or Google looking forward, that you'd like to leave our audience with. >> Wow, closing remarks. You know, I think the continuity here is open source. And I know the backdrop of this is OpenStack, but it's really that open source is the accepted foundation and substrate for IT computing up the stack, so I think that's not changing; the faces may change and what we call these projects may change, but that's the evolution, and I think there's really no turning back on that now. >> Brian Stevens, always a pleasure to catch up with you. We'll be back with lots more coverage here with theCUBE, thanks for watching. (energetic music)
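Brian's description of the CRE team centers on service level objectives and shared dashboards. As a rough sketch of the error-budget arithmetic that sits underneath SLO-driven operations, with assumed numbers rather than anything from Google's actual tooling:

```python
def error_budget_report(slo_target, total_requests, failed_requests):
    """Given an availability SLO (e.g. 0.999) and a request count,
    report how much of the error budget the failures have consumed.
    Illustrative arithmetic only -- real SRE practice layers rolling
    windows and burn-rate alerts on top of this."""
    allowed_failures = total_requests * (1 - slo_target)  # the error budget
    consumed = (failed_requests / allowed_failures
                if allowed_failures else float("inf"))
    return allowed_failures, consumed

# Assumed numbers: a 99.9% availability SLO over 10M monthly requests.
budget, used = error_budget_report(0.999, 10_000_000, 4_200)
print(f"budget: {budget:.0f} failed requests, consumed: {used:.0%}")
```

With these assumed numbers, the budget is 10,000 failed requests, so 4,200 failures consume 42% of it; a team running this way slows releases as the remaining budget shrinks.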

Published Date : May 9 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity                               Category       Confidence
Brian Stevens                        PERSON         0.99+
John Troyer                          PERSON         0.99+
Google                               ORGANIZATION   0.99+
Stu Miniman                          PERSON         0.99+
Amazon                               ORGANIZATION   0.99+
Brian                                PERSON         0.99+
Goldman Sachs                        ORGANIZATION   0.99+
YouTube                              ORGANIZATION   0.99+
nine month                           QUANTITY       0.99+
OpenStack Foundation                 ORGANIZATION   0.99+
six month                            QUANTITY       0.99+
Linux                                TITLE          0.99+
first time                           QUANTITY       0.99+
Intel                                ORGANIZATION   0.99+
both                                 QUANTITY       0.99+
OpenStack                            ORGANIZATION   0.99+
six years                            QUANTITY       0.99+
10 years ago                         DATE           0.98+
one                                  QUANTITY       0.98+
OpenStack Summit 2017                EVENT          0.98+
173 countries                        QUANTITY       0.98+
Wikibon                              ORGANIZATION   0.98+
Red Hat                              ORGANIZATION   0.98+
Hadoop                               TITLE          0.98+
One                                  QUANTITY       0.98+
two environments                     QUANTITY       0.98+
Linux kernel                         TITLE          0.98+
SRE                                  TITLE          0.97+
both places                          QUANTITY       0.97+
SRE                                  ORGANIZATION   0.97+
Kubernetes                           TITLE          0.96+
#OpenStackSummit                     EVENT          0.96+
TensorFlow                           TITLE          0.95+
three                                QUANTITY       0.95+
OpenStack                            TITLE          0.93+
today                                DATE           0.93+
single provider                      QUANTITY       0.93+
Boston                               LOCATION       0.93+
one data center                      QUANTITY       0.89+
Google Cloud                         TITLE          0.89+
Spark                                TITLE          0.89+
years ago                            DATE           0.88+