Patrick Chanezon, Docker | Open Source Summit 2017
(Upbeat Music) >> Announcer: Live from Los Angeles, it's theCUBE, covering Open Source Summit, North America, 2017, brought to you by the Linux Foundation and Red Hat. >> Hey, welcome back everyone, live here in Los Angeles, California for theCUBE's exclusive coverage of Open Source Summit in North America. I'm John Furrier, with my co-host Stu Miniman. Our next guest is Patrick Chanezon, who is a member of the technical staff at Docker, also on the governing board of the Cloud Native Computing Foundation, also known as CNCF, which is the hottest part of the open-source community right now. It's moving very fast, it's very trendy, a lot of people are on the bandwagon, a lot of contribution going on. Welcome back to theCUBE. Great to see you. >> Hey, thanks, John and Stu, it's very good to be back on theCUBE. >> Docker's been just a great company to follow since the beginning, from the birth of Docker to the transformation from dotCloud to Docker. It's just a great team. We have a lot of respect for you guys. Congratulations. But the CNCF right now is the hottest thing, there's more platinum sponsors than I think maybe members. It seems to be very hot. Industry loves it, developers are going crazy about it, why is CNCF so hot? What's your perspective on that? >> What we're seeing right now is really the realization of the adoption of containers we talked about two years ago. It was very early, and people were starting to use Docker and just discovering containers. Today they're really putting them into production, and what we see at Docker with our customer base is that they are using it more and more to modernize traditional applications. So we see tremendous use of containers everywhere in enterprises, and the rise of CNCF is tied to that, I think. We're seeing more and more developers joining the bandwagon, more and more systems being built based on containers. And at Docker, we're playing a big role in that. >> Patrick, for a couple years, the chant was Docker, Docker, Docker, and sometimes people say, "Kubernetes is where the hotness is." Well underneath that, there's containers. And a lot of those containers, Docker's involved there. Maybe you can help us understand the nuance a little bit as the Kubernetes wave has grown, sure there was the Mesos, Docker Swarm, Kubernetes war, if you will there, but what does this mean for Docker? What are you seeing from your customers? Give us the update on Docker itself. We'll probably need to get into the Moby stuff, too, as we get into the interview. >> Sure, definitely. That's a big question, so let's start with the beginning. When enterprises adopt containers, what happens is that usually it starts with the developers who are adopting containers with Docker. So they download Docker for their Windows machine, or for their Mac, or on Linux, and they start modernizing their applications. What we see is more and more enterprise developers modernizing existing applications by Dockerizing them, and then the next step is that they want to put that into production. For that, you need the whole system. So at Docker, we have two systems. We have Docker CE and Docker EE, our enterprise version that has role-based access control and all that good stuff. There are lots of different components that you need in order to have a production container system, and so Kubernetes, the orchestration engine, is one piece of that. At Docker, we have SwarmKit. But there are lots of other different components and lots of different layers to that system.
So you have the infrastructure layer that you are using to deploy that inside the firewall or in different cloud providers. There are many different solutions there. At Docker, we have one that's called InfraKit, which we're using in our editions to deploy it everywhere. Then on top of that, you need some version of Linux. At DockerCon in April, we released a project called LinuxKit, which helps you do that. On top of that, you need a container runtime. Traditionally, it's been Docker. Right now, we've re-factored the Docker codebase to extract a core runtime component that's called containerd, which we donated to CNCF. containerd is nearing 1.0, so it will be 1.0 pretty soon. Then, on top of that, you need an orchestration engine. Docker EE comes with its own orchestration based on Swarm, and Kubernetes is another orchestration engine that people like. Kubernetes, behind the scenes, is using Docker, and right now we are working very closely with the Kubernetes community to implement CRI-containerd. So CRI is the container runtime interface in Kubernetes that lets you plug in different engines, to plug containerd in the place of Docker in there. >> Stu: There's a lot of pieces in here. We had a number of interviews yesterday talking about the Open Container Initiative, or OCI, which really made sure we've got the 1.0 version of that done. On the container format, it seems like we're in agreement. We're not fighting over that kind of piece anymore. From the Kubernetes community, I heard loud and clear, they're like, we've got containerd. We've kind of got what we want. We're happy it's open-sourced. We're going. We were at DockerCon when you announced Moby, which is kind of open-source, and it felt like we were still trying to figure out all those pieces. Give us the update as to Moby, you're talking at the open source show, you talk a little bit about CE and EE being the productized versions, but part of it is what we used to think of as Docker is now Moby, and the company Docker versus the project. You kind of teased those apart a little bit, right? >> Yes. Exactly. And actually, that's what I came here to the Open Source Summit to talk about, to give people an update on the Moby project. So what we announced back in April was the launch of the Moby project, which is the end of a two year re-factoring of the Docker codebase into different components. So all these components in the stack that I told you about, we just teased them out from the Docker codebase so that it's a modular set of components that you can assemble together. Moby is three things. It's an open source project where people can collaborate on container-based systems. It's also a tool that we're using to assemble our components into Moby, which is the upstream of Docker products. Then it's also a set of lots of components, like containerd, LinuxKit, InfraKit, Notary, and all the projects I talked about. One other thing we've started doing since April as well is we started proposing to donate some of these container projects to CNCF. So containerd is already part of CNCF now. Recently, this summer, we proposed InfraKit, and they think it's a little bit too early for donation, because they want to see other, different projects in there. Right now we're in the process of donating and proposing Notary, so there's an active discussion in there, and I hope that the vote will happen probably next week or something like that.
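To make the runtime layer Patrick describes above a little more concrete, here is a rough sketch of driving containerd directly from Go, using the public containerd client as it looks today. The exact client API has shifted across containerd releases, so the option names and the Redis image below are illustrative assumptions rather than a definitive recipe.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its local socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd is namespaced; keep this example in its own namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack an image.
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container whose OCI spec is generated from the image config.
	container, err := client.NewContainer(ctx, "redis-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("redis-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Create and start the task (the running process), with stdio attached.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("redis is running under containerd")
}
```

Docker drives this same runtime layer internally, and the CRI-containerd work Patrick mentions is what lets a Kubernetes kubelet do the equivalent through the container runtime interface instead.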
So Notary is the component that we're using for Docker, and we think that this could be used in lots of different Cloud Native systems, so it really has its place in the CNCF. >> So an identity component for the container management, or what specifically is that going to address? >> So Notary is the piece that we're using in Docker Content Trust to make sure that you can trust the images that you've built. Signing images, being able to revoke signatures, all the kinds of features that our customers love in Docker EE. >> John: It's kind of like Stu and me on Twitter, he's verified, I'm not. But this is important, because now, this is a stamp of approval, if you will, that the community can look to. >> Yeah, definitely. So it's something that we implement in Docker, and now people building other container systems will be able to use it. And so Moby saw a lot of traction for its different projects, some of them are going to CNCF, some of them are growing by themselves. On the Docker side, we made some progress productizing all that with Docker CE and Docker EE. We had the 17.06 release of Docker EE recently, with lots of new role-based access controls for enterprises, who are adopting it essentially to modernize their traditional apps. >> Take us through a kind of personal question. You were just at a board meeting with the CNCF. Did everyone show up or are people calling in? >> I think Alexis Richardson was the only one, maybe two people on the phone. >> John: Was Sam Ramji there? >> Sam was not there either, but Epona was standing in for him. So the room was full, and to me it's really an impressive achievement, two years after we helped start the CNCF. The first meetings were 10, 15 people at Google deciding to create this foundation, and today, maybe we're twenty or thirty people around the table. And everybody-- >> Even before that Google meeting, we were covering the Kubernetes movement early on from your event. So I think, out of DockerCon and some of the Linux Foundation events, the early momentum, we were there, Stu. Then it became the CNCF, and they decided, hey, let's form the Cloud Native Computing Foundation. So it's interesting to me, seeing the growth from the beginning. And it's unique to have that opportunity to be on the front lines of an organically developing group. It wasn't really "build the table and they will come," this was a realization. >> It was a realization and also a concerted effort to build something together, to show customers where container systems were going in terms of architecture-- >> What were the factors? I mean, Docker was a big driver. Notably, you should get the credit for pioneering the space. But what were the drivers for this coalescing, this call to arms, if you will, or this organic formation of CNCF? What were the key drivers in your mind? Obviously, containers is one. What are the other ones? >> Yeah, to me, containers is a big one, because when you are starting to design your system with containers in mind, you need to change lots of things, how you're building them and things like that. And how you are architecting things together. There were lots of questions about how you do load balancing in that kind of system, how do you do monitoring, how do you do tracing. The CNCF was assembled so that all these components have a place where we can show interoperability between them. So Docker is part of that, Mesos is part of that, as well as Kubernetes. There's a big interoperability work that's happening in there.
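As background to the Docker Content Trust discussion above: container image content is addressed by cryptographic digest, and Notary (built on The Update Framework) layers signatures and revocation on top of those digests. The sketch below only illustrates the digest half of that idea with the Go standard library; it is not Notary's API, and the file path is hypothetical.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

// digestFile computes the sha256 digest of a file, in the same
// "sha256:<hex>" form used to address container image content.
func digestFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", h.Sum(nil)), nil
}

func main() {
	// Hypothetical input: an image layer or manifest exported with `docker save`.
	d, err := digestFile("layer.tar")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(d)
}
```

Pinning a digest tells a client that the bits have not changed; the signing and revocation features Patrick mentions are what tell it who published them.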
We had a report in the board meeting today about the new CI initiative that tests different CNCF projects together. >> John: What CI? >> Sorry, continuous integration. >> John: Got it, yeah. >> So there's the continuous integration-- >> John: Not converged infrastructure. >> Oh, you're right, yeah. >> We always get acronym-ed up. But Chris Aniszczyk was talking yesterday about the graduation path, still waiting to see something graduate from the process. What's going to graduate first? Any bets, what's the betting, what betting is going on? Do you guys actually make bets? Is there a fantasy drafting going on? >> I don't think that really matters, what matters is really adoption of the components. >> Okay, so what's happening on the graduation scale? What's coming out of the woodworks? What's next? What's going to graduate first? >> So one thing I'm curious about is whether containerd will graduate, because it's kind of mature now, it's reaching 1.0, and with the CRI and soon integration in Docker, it may be a good candidate for graduation. For the others, I don't know which ones would be first into the graduation process. >> Well, we know it's a high bar, for sure. >> Patrick, the stuff that's getting mature, what about some of the roadmap there, from Docker and CNCF? Something like serverless, where containers, first generation, are going to be important. We had a number of interviews this week talking about where the containers we'll see in the future are going, where serverless and OpenFaaS and things like that go. So how does that all fit in? Can you give us a Docker and a CNCF view on that? >> Let's talk about the CNCF view first. CNCF is working on lots of different areas where there needs to be more definition about what Cloud Native means for storage, for example, with the CSI initiative, the Container Storage Interface, and CNI, the Container Network Interface, and then there's the working group for CI, which is about integrating all these projects together, but the working group I'm most interested in is the serverless one. So we have a Docker rep at the serverless working group, and there we're trying to define what a portable serverless stack looks like. And at Docker, we're naturally interested in this -- >> Of course, serverless is a beautiful thing. >> Most of these projects are running on top of Docker, so OpenFaaS for people-- >> I got to ask you, Patrick, because we love serverless, I have a love/hate relationship with the word serverless because technically it's a beautiful thing, but there's servers involved. I'm old-school, so I kind of look at it differently. The younger generation, they want infrastructure as code. This is a clear, obvious thing. It was once a dream, but now it's become a reality. What's your position on that? Where is it on the progress bar? How close are we to serverless? >> I'd say there's an initial adoption of serverless on one of the few stacks that exist out there today. So you have the hosted services, the FaaS services, from Amazon, Microsoft, and Google. Where I'm more interested, and I think what customers are kind of looking for, is a portable way of doing that. For example, instantiating that on top of Docker platforms, which is what projects like OpenFaaS are doing. Right now, I think we're really in the stage of discussions with CNCF of what a portable serverless layer would look like, so that you could focus on your code, but be able to deploy on-prem, on top of Docker, or in different cloud providers. So that portability aspect to me is very important there.
And I think it's important for customers as well. To me, also, I'm an old timer as well, I used to pitch platform as a service at the beginning of it, Google App Engine, many years ago. To me, it's kind of a feeling of deja vu. We're kind of re-inventing that, but with containers and in a much more portable way. >> The beautiful thing about being an old-timer is we get to look back and, not so much to tell the young kids, get off my lawn, we had to walk to school with bare feet in the snow, build our own libraries. I was just talking to Eilene, she's like, "Oh, my low-level class was C and my high-level class was Python." I'm like, "Our low-level class was machine code and high-level wasn't even C yet." >> Yesterday, at the party, I was discussing with one of the IBM engineers, who's working on Linux and containers on the mainframe, and we were talking about JCL, and that's the type of feeling that we got. Like we're getting higher up in the stack, and I think for modern developers, it really helps them-- >> It's a beautiful thing right now. Just think about the young guns that are coming up. This is a beautiful library of options now. 90% of the code is leverageable. That's like unbelievable. So it really allows the creativity of the developer to go a lot more into the 10-20% of real intellectual property that they can bring to the table, rather than just the structural engineering of the code base. >> I would add something, it's really about creating value, as opposed to building infrastructure. When we're getting up the stack, and serverless is an example of that, it's really about creating value for enterprises, and that's what these developers are about. >> When you start dreaming in code, you know you're doing good. Patrick, thanks so much for coming on theCUBE, and congratulations on all the success with CNCF, and certainly Docker. You guys continue to impress and do a great job. I know there's some changes over there we're looking for, some of the cool stuff graduating out of CNCF, more Docker container goodness from you guys. Thanks for coming on theCUBE. We appreciate it. I'm John Furrier, we're live in Los Angeles, California, for the Open Source Summit North America coverage with theCUBE. I'm John Furrier, Stu Miniman back with more after this short break.
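To ground the OpenFaaS thread from this conversation: in OpenFaaS's classic watchdog model, as we understand it, a function is simply a process, packaged in a container, that reads the request body from stdin and writes its response to stdout. A minimal Go function in that style might look like the following; the greeting logic is purely illustrative.

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// The classic OpenFaaS watchdog pipes the HTTP request body to the
	// function process on stdin and returns whatever it writes to stdout.
	input, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("Hello, %s", strings.TrimSpace(string(input)))
}
```

Because the unit of deployment is still a container image, the same function can run on a Swarm or Kubernetes cluster, which is the portability argument Patrick is making.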
Craig McLuckie, Google - #OpenStackSV 2015 #theCUBE
>> Announcer: From the Computer History Museum, in the heart of Silicon Valley, extracting the signal from the noise. It's theCUBE. Covering OpenStack Silicon Valley 2015. Brought to you by Mirantis. Now, your hosts John Furrier and Jeff Frick. (upbeat music) >> Okay welcome back everyone. We are here live, broadcasting. This is SiliconANGLE Media, theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, my co-host Jeff Frick this week. Two days of wall-to-wall coverage live in Silicon Valley for OpenStack Silicon Valley, or #OpenStackSV, the hashtag for this event today, #OSSV15. Join the conversation. Join our crowd chat, crowdchat.net/OSSV15. Our next guest is Craig McLuckie, who's with Google. He's on the Google Cloud team, a CUBE alum. Welcome back to theCUBE. Got a keynote there, welcome back. >> Thank you so much, great to be with you again. >> So Silicon Valley, the center of the innovation engine, houses a lot of investment capital here, a lot of big players, you guys, Facebook, VMware, Intel, you name it. It's the giants of the technology industry. And the bubble conversation's happening. China's going down in terms of economics, and seeing the stock market crash there. But yet, underlying infrastructure change is happening. Cloud certainly is fueling that wealth-creation engine, and you guys are a big part of it here in Silicon Valley. Just talk about the state of the cloud. OpenStack has momentum, you have some stability in the core compute side with OpenStack, virtualization is not going away. New things like Kubernetes, containers, fast on the scene, rising very fast. What's your take on this innovation engine in the cloud? >> So I think there's a couple of things that are really exciting and interesting that are happening right now, as we speak. The first is a transition to open. It's a way of rethinking how you evaluate, acquire, and integrate your software. And I think that OpenStack has established a legitimacy as a technology that's really bringing the value proposition of traditional infrastructure as a service to everyone everywhere. And we're really starting to see a convergence in that community; a set of technologies that are consistent, of high semantic consistency, is really becoming a thing, which is phenomenal. At the same time we're also seeing another disruption happening. And it was really a disruption that was triggered by the emergence of Docker as a technology to support a new way of thinking about packaging and deployment. And it's really part of a bigger story around a move towards Cloud-Native computing. This is a set of computing patterns that was really inspired by the internet giants, by the Googles, the Facebooks, the Twitters. But it's really been cracked open and made accessible by folks like Docker, who have opened up those container technologies, and now we're seeing a lot of the players start to really focus on this and look at bringing the value proposition of that new style of computing to enterprises everywhere. >> You know you start to see maturity in a market, especially when platforms are involved, platform wars, whatever the bloggers want to put in the headline out there, when you see abstraction layers develop. And one of the things that you talked about in your keynote I'd like you to elaborate on is ending the distinction between what's under the hood. Containers you mentioned bring out this notion that, "I'm a developer, I want interoperability." >> Right. >> "I want cross-platform APIs."
This is the API economy, so I want you to explain that. What is this disruption with containers and Kubernetes? For this abstraction, do we care about the features any more? And that's one of the signals of maturity. It's that you're not talking speeds and feeds and infrastructure as a service, platform as a service. When those conversations go away you know things are moving. >> Right. >> Or is that true, what's your take on all that? >> I think that's a very good observation. I think that one of the things we as a community have looked for for a while is a separation between the world of tools and infrastructure that people interact with on a day to day basis to build applications, and the systems that actually take those built applications and run them for you. And a big part of our focus has been to make the set of subsystems that are actually responsible for the operations of applications transparent to the end developer. And we're looking to formalize that interface that exists between how you create an application, how you package up its dependencies, how you offer up the infrastructure and then how you run it. One of the most exciting and energizing things for me is to see the emergence of a standard set of abstractions at that interface between these two worlds, because it creates massive opportunities for innovation. By standardizing that interface you have incredible innovation in the tooling area, with technologies like Docker or continuous integration and delivery frameworks. You know, new development environments that are producing an artifact that can be universally consumed everywhere else. And then on the infrastructure side you have a lot of innovation around running that artifact for the developer and the enterprise efficiently and intelligently, whether it's being deployed into a virtual machine on OpenStack, or being deployed into a Mesos cluster running on the metal, or whether it's being deployed into a next generation Kubernetes cluster running in one of those environments or somewhere else. We're looking to create this common abstraction and it's going to drive a lot of innovation at every level of the stack. >> You know at Wikibon research one of the things that they're putting out is some cutting edge research around the innovation in some of the technologies under the hood. Converged infrastructure, cloud technologies, flash, storage, software defined networking, all that stuff under the hood is evolving as fast as well. So you have underlying core technology and tooling exploding. >> Right. >> So some really good stuff coming out of Wikibon.com. And with that and your comment I want to ask, kind of a pointed question which is: Does hybrid cloud really exist? Is it a concept or is it a category? Do people buy hybrid-cloud? Do they buy into it? It seems to be that's the conversation people are talking about now. But I just don't see hybrid-cloud existing other than being part of private and public. >> Right. >> And talk about that. >> It's a great question. I love that question. It exists but not the way that people think of it existing. Right, so you can think about it this way: when you are building an application on your laptop and deploying it into a cloud, it's kind of hybrid-cloud, right? But it's not the way that people think about hybrid-cloud. When you want to run a continuous integration server for your company and have it hosted in the cloud and have it create artifacts that are deployed into your on-prem production clusters.
That's hybrid-cloud, but it's not the way people have come to think about it. And so the way I think about it is really about the ecosystem. About establishing a common set of tools and capabilities so that first and foremost people can choose the destination for an application based solely on the technical merits of the infrastructure that they're running on. Google offers some very high quality, robust, fast, affordable cloud infrastructure. But we recognize and embrace the fact that for some customers you have very legitimate regional requirements. For some of the applications you might really want to run them on premises. And so the first step toward achieving legitimacy for hybrid-cloud is establishing a common set of patterns and tools and capabilities that exist in both places. The next step is going to be around creating a common services abstraction that lets you start to access things from other environments. And then over time you might actually see people deploy these sort of cloud bursting scenarios, et cetera. But the path to get there is really through infrastructure. You know, like a common set of abstractions, a common set of tools, a common set of patterns, and making those available to people everywhere. And then over time we will start building these fused together, legitimately hybrid solutions. >> So hybrid-cloud then is a paradigm, it's a concept that highlights the common tooling interoperability so developers can actually work in these environments without having to do anything. That's where Docker comes in, that's where Kubernetes comes in? >> Exactly. And it's really, hybrid needs to be first and foremost about being able to use a common set of technologies to build an application for A or B. >> So let's take it forward. So let's put the brainstorming hat on. Let's talk about the future and let's kind of play with some scenarios. Internet of things opens up a huge can of worms and challenges, engineering challenges around: How do I manage the data? How do I drive workloads to these devices? Whether they're wearables or cars or stacks or devices? Anything that's on the edge of the network is now considered a device. PC, mobile, internet of things. So for a developer to work in that kind of environment they need these toolings. Is that how you see it? >> Absolutely, I think that's a great way to think about it. You know, it's an interesting thing you raise. Because if you think about it, Cloud-Native has really been the domain of internet companies, right? It's really been something that Google's done because it's the only way to practically achieve a certain level of scale. We've seen co-evolution of this, of these patterns, inside Twitter, eBay, Facebook, Netflix. Everyone's been doing it on their own terms. Now the reality is when IoT happens every enterprise has to kind of become an internet company, right? And what we've seen consistently across, you know, all of the internet companies that exist today is there's one pattern that really works well to actually deploy computational infrastructure at scale efficiently. And that's this pattern around container-packaged, dynamically scheduled, microservices-oriented computing. And so our mission is really to bring these technologies in a democratized way to enterprises so that they can actually tackle problems that were previously only really solved by the internet giants. Without having to make Google-level investments or Facebook-level investments in technology. >> Yeah.
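The "common set of patterns and tools and capabilities that exist in both places" that McLuckie describes shows up concretely in the Kubernetes client libraries: the same client code talks to a managed cluster, an on-prem cluster on OpenStack or bare metal, or a cluster on another provider, with only the kubeconfig changing. A minimal sketch with client-go (recent versions; older releases omit the context argument on List):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Point kubeconfig at any conformant cluster: GKE, an on-prem install,
	// or a cluster running on another cloud provider.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The same call works unchanged regardless of where the cluster runs.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cluster is running %d pods\n", len(pods.Items))
}
```

Point the kubeconfig at a different cluster and the program runs unchanged, which is the portability argument in miniature.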
When we hear "internet companies," just to clarify, it's like a hyperscaler, like what Yahoo and Google did. Building large scale systems in a seamless way that's kind of abstracted from the user. >> Right. >> Just pure performance, everything is running, and it's kind of a brilliant concept. That brings up the point of Google envy. I mean you hear this all the time in the enterprise. "I want to be more like Google." "I want to be more like Facebook." And what they really are saying is: "I want to have Ops." Right, so. >> Right. >> DevOps, Cloud-Native, do you hear that often? And when you hear that: "I want to be more like Google." What does that really mean from your standpoint? How do you guys internalize that? >> Right. >> How do you talk back to customers? >> So I think, you know, when I say I want to be more like Google I think there's a lot of different sort of angles that you might have there. I've heard people coin this phrase GIFEE to describe what we're trying to do: Google Infrastructure For Everyone Else. But I think the heart of it is really this: If you're a Google engineer, it's like you have a superpower, right. You have access to this amazing, almost unlimited mass of infrastructure that's just at your disposal immediately. At very little cost or overhead. And you don't have to worry about the mechanics of actually where the thing I built is run, right. Operations is just a function of the platform. The developer gets to focus on their application and their application operations, and what they get for free is this cluster environment where cluster operations is handled for you. The process of actually mapping an atom of code into a distributed systems environment. The ability to use some very powerful services that make it trivial to build distributed systems. The fact that I'm not paged all the time because what I deploy is understandable by some very smart subsystems, they can watch it, they know what it's supposed to be doing. They can tell when it's not doing that and they know how to fix it, right. And so traditionally when you go out of operating parameters in a traditional system you get paged. And for me a lot of what this "operate like Google" really means is, one is I want to be able to access compute at an unprecedented level easily, and two is I don't want to get paged by my applications that are doing that. >> Yes, so let's bring that up, the API economy. Let's bring this to the next level. Today applications are either legacy or they're Cloud-Native, and I ask everyone the question, even on our own Wikibon team we have a debate. And I ask Dave Vellante: "Dave, name the Cloud-Native apps that are out there." I don't think there are any Cloud-Native apps out there. I mean who has a Cloud-Native app? Now that's a trick question because he goes: "Amazon's an app, Google has Cloud-Native." Well they're already hyperscaled. >> Right. >> So the question is what, where are the Cloud-Native apps? Where are the examples? Now Facebook's a Cloud-Native app because they built it from the ground up to be Cloud-Native. >> Sure. >> Google same way. So as an enterprise, what is the Cloud-Native app to the enterprise and how do they get there? And what legacy do they have to throw away, because it's synchronous, and API interactions are fundamental. >> Right. >> How do you ease that out? >> This is actually a fascinating topic and I think one of the most dangerous things people assume is that to accomplish Cloud-Native you have to go fully along the API-fication path, right.
Now the reality is that if you look at the way that people access data today, the vast majority of business data is stored in relational databases. People have great tools to access data in relational databases. They want to be able to move that forward. And to me, if you force API-fication, if you force a protocol-specific approach to actual integration, if you force people to use a specific authentication scheme, you're going to alienate a very broad array of your customers and you're going to create this cognitive hurdle that's very hard for people to get over. So when I think about Cloud-Native, I think about it as providing a different paradigm for deployment, management, activation, et cetera. But it has to make allowances for integration with your existing systems. And so I think at the forefront of this is the notion of a service or a microservice. And a microservice has to be a minimal atom of software consumption, the easiest way to find and consume something, and you can't force an opinion around how people project that, right. So if you build something that runs in a cluster you should be able to access an Oracle database as if it were a microservice running inside your cluster. You should be able to access a Salesforce SaaS endpoint as if it were a microservice running inside your cluster. And so as I think about my mission and Google's mission around the move towards Cloud-Native computing, you can't create these experiential cliffs, you can't create these artificial boundaries to your system. You have to make natural allowances where, look, there's some stuff that just works better in a vertically scalable VM. If you want to run a big database with a tuned kernel and a few other things, by all means put it in a VM. And we are absolutely committed to the idea of creating a natural set of experiences when you want to go from that to some portion of the application that's doing stateless, front-end serving. Or a portion of the application that's running in a cloud-friendly, distributed, scaled-out database. You shouldn't have to take the plunge and be stuck in one world. You should be able to mix these. >> So you're saying it's dangerous to force API-fication, if that's a term, I can't even spell it, it's too many I's at the end there, I like that hyphen in there. But if you force API-fication or movement, you can foreclose future performance and functionality by alienating existing apps. >> By alienating the existing systems. It is very dangerous. It's very attractive to drive API-fication, but you have to create this pressure gradient that attracts people up it by adding value at every stage of the game. And you can't build your management systems around a predicated, sort of, opinionated API framework. We saw this with, in the world of SOA, I mean I don't know if you remember the SOAP and SOA stuff. >> Yeah, yeah. >> You know, way back when. >> That was just another way of describing API-fication and we saw where it went. The problem was that. >> It wasn't ready, the market wasn't ready for web services at that time. >> And it was, but it was beyond that, it was like, no one's willing to make a massive infrastructure investment to get you to ground zero, where you can actually start building. >> So let's look at web services back in 2000, 2001, when you saw SOAP, XML, SAML, all those things emerging. At that time, who took advantage of that? It was the hyperscalers. It was internet companies because they needed it, right.
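One present-day mechanism for McLuckie's point about reaching an Oracle database "as if it were a microservice running inside your cluster" is a Kubernetes ExternalName Service, which gives an external endpoint a stable in-cluster DNS name. The feature post-dates this interview and the hostname below is hypothetical, but it sketches the idea:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An ExternalName Service gives an external system -- here a hypothetical
	// Oracle database running outside the cluster -- a stable in-cluster DNS
	// name, so application code addresses it like any other microservice.
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "oracle", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "oracle.corp.example.com", // hypothetical hostname
		},
	}

	out, err := json.MarshalIndent(svc, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```

Apply the emitted manifest (or create the object with a Kubernetes client) and pods can reach the database at oracle.default.svc.cluster.local, the same way they reach services that do live inside the cluster.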
So the mainstream market now is adopting that kind of concept around microservices. Explain that. >> But it wasn't, the interesting thing is when you look at what the adoption was around microservices, it wasn't around interoperable SOAP, it was around discrete, highly optimized RPC protocols. It was around relatively closed systems at that time. And it worked well, right? The challenge. >> It was controlled. >> It was controlled and it worked well inside a closed ecosystem. Now, the thing that really held people back is that to get there you had to do a big ESB deployment. You had to then go and SOA-fy a bunch of your components, and it required a huge investment in terms of infrastructure and capabilities before you started realizing value. And it was inaccessible to most people and it alienated technologies that didn't fit well into that model. Right, like how do you take your database and put it into that model? It was purely optimized around a certain portion of it. And so now we're in a world where we make it available to everyone. We reduce barriers to entry and you get immediate value without having to make huge investments. >> So let's take microservices and let's unpack that for the audience out there. You're seeing DockerCon, ContainerCon, KubeCon, MesosCon. All these conferences are around developers. And this is all about scale right? >> Right. >> Operating at scale, abstraction layers. >> I think we need to be careful not to pigeonhole this as being about operating at scale. It is the only practical way to operate at internet scale, but the value proposition is just as applicable if you're running something in five virtual machines, at a more humble scale. >> So let's talk about development versus operations teams. >> Right. >> Where does Kubernetes, where does the microservices model fit in? And how do companies avoid the trap of alienating existing apps? How do they get the system up and running? What is the roadmap? And differentiate from a Dev standpoint and an Ops standpoint. >> I think one of the most important things you're going to start seeing is a specialization of the operations function. Today it's all kind of glommed together, and if you ask a developer to actually run an application they have to be cognizant of which virtual machine it's in. You force them into the ugly world of infrastructure Ops and sort of common services Ops. And what we're going to start seeing, and what I hope to help companies achieve, is a specialization of the operations function. So infrastructure Ops should be relegated to a set of people that actually understand the physical infrastructure. They will create an optimal physical environment around your application. There'll be a small number of specialized people that know how to do that and they will rack and stack and wire and configure and do whatever needs to be done to tune the infrastructure. Above that you're going to see this cluster operations function. So a common services operations team that provides a basic operational platform and common services to everyone. These are a highly specialized set of people that provide you the tools you need to be able to autonomously run a distributed system. They are unlikely to be involved in the day to day operations because most of these systems will be autonomous, but they're there to answer the call if something happens in that system. So it becomes a very specialized function.
And Google does this with our SRE folks that actually manage our, like, the Borg clusters that run all our infrastructure. A small number of highly specialized people providing a very valuable service to a lot of folks. And then at the top level you're going to have application operations. And that really just becomes the developer's function. And it should really be about understanding and managing your code, and you should never have to think about: Where is it running? How is it running? You should never have to SSH into an instance to try and debug it. All that should be presented to you through your tools. So the developer's experience becomes one of using logical infrastructure. And so I think what we're going to start seeing is companies making investments in these clustering technologies. Offering up these simple, clustered service environments for their departments. And then having portfolios of container-packaged applications that can be easily taken, adjusted and run in these environments. And we'll naturally see the specialization of operations emerge. >> So we're running out of time. Jeff didn't get one question in but maybe next time. >> He has a role in that. >> Brendan Burns, Brendan Burns I think is on your team. >> Yeah. >> Brendan, so he brought up something. He brought up that hybrid-cloud is kind of the way, meaning the way you described it, not as a category. But he also brought up the different aspects of Google Cloud in our last crowd chat last month. How do customers mix and match with the cloud? I mean you guys offer Linux, you guys offer Windows. I mean if I want to work with Google Cloud what are the touch points? How do people ingratiate in? How do they engage with Google? What are some of the use cases? Can you share, just put the plug in for Google Cloud, what you guys have up and running that's mature, stable, >> Right. >> Shipping. And how do customers get into the Google Cloud? >> So we really see Google Cloud as needing to be all of the above in terms of capabilities and operating characteristics. The thing that makes Google Cloud unique is the quality of the basic infrastructure. We offer by far the most price-performant basic infrastructure out there. It's an innovative cloud, you know, it's driving and active in a lot of the sort of disruptions we're seeing around the container space. It's an open cloud. It's a cloud that's invested in making sure that we engage and connect with the open source community. So if you want to work with Google Cloud there's a lot of different ways to do it. One is you can go and just buy beautiful, clean, pristine, powerful, affordable infrastructure in large chunks through Google Compute Engine. And we're seeing a tremendous amount of adoption. You don't have to make massive capex down payments to get our best price. We really focus on doing that. You can also come in, if you just want to write a bit of code and have it run, we have a wonderful PaaS product called Google App Engine that's becoming very naturally integrated into the container ecosystem and is a natural sort of path. It's a great entry point for people that just want to operate at a higher level and want to take some code and then have it easily deployed and run on your behalf. And then we're also, another entry point that isn't obvious to people is that you can help us build the Google Cloud. What we're building with our next generation set of offerings, with technologies like Google Container Engine, is an open source cloud. It's been built in public.
Come join our community, work with us. Try it out. Give us feedback and be part of actually building the next generation of clouds. >> Okay, so the question I have for you is, let's just say I'm an Amazon customer and I want to go to Google Cloud. Do you have, like an Elastic Beanstalk, application containers, an App Engine? How do I get in there? I mean there are some things that Amazon has, you might have some things. How do you talk to that, the Beanstalk particulars? >> That's a great question. So Beanstalk, you know, provides the ability to deploy and run applications. The closest analogy is App Engine. So Beanstalk traditionally was a Java-based platform where you could provide your Java code and it would run it for you. App Engine gives you that equivalent capability. And with the new generation of App Engine we actually provide the ability to deploy directly into VMs. So it feels a lot like the Beanstalk experience. But it comes with a lot of other high value services. And so that's a natural starting point. And App Engine itself is being rebased on a lot of the Kubernetes concepts. So that you have this immediate, easy, accessible experience for code, but when you reach an edge and you want to actually integrate it naturally with a vertically scaled database that runs in a VM, we have Compute Engine waiting for you, and it will all feel very natural to actually just integrate those two things together and snap together these more holistic solutions. >> You guys have a, final question. I know you guys have a lot of track record with developers, certainly Google's history and open source, everything is great. But other competitors, more commercial, IBM and Amazon, they're providing marketplaces for distribution, where people can make some cash and some cabbage. >> Right. >> What's the plans Google? Is there anything there? How do I make money if I'm a developer with Google? Or is there plans there, what's the state of that? >> It's a great question and obviously we have aspirations in that space. I can't go into all the details right now. But you know, we are obviously investing in that area. And one of the things that we really like, though, is looking at containers as a standard distribution framework that lets you plug into everyone's marketplaces. So one of the things that I see around marketplaces historically is that they offer immediate value in connecting a producer and consumer of software, but they're not offering steady state value. So once those two have been connected, the marketplace isn't adding significant ongoing value. So when you think about what we want to do, we want to make sure that, one, we become a market maker, we let lots of different marketplaces emerge and we support those. But then in our own efforts we actually add legitimate value to both the producer and the consumer of the software. And we're not just taking a cut off the top. So that will become much clearer in the fullness of time. >> Craig, thanks for spending some time and congrats on a great keynote. Good to see you again. Thanks for jumping in and sharing the data here on theCUBE, really appreciate it. We are live here in Silicon Valley. It's theCUBE at OpenStackSV, join the conversation #OSSV15. We'll be right back after this short break. (upbeat music)
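The "write a bit of code, have it run" path described for App Engine generally assumes an application that serves HTTP on whatever port the platform hands it, commonly through a PORT environment variable. A minimal, hedged example of that kind of portable artifact in Go, the sort of thing that can move between App Engine, a container on Container Engine, or a plain VM:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from a portable Go app")
	})

	// Most PaaS-style runtimes tell the app which port to serve on
	// through the PORT variable; fall back to 8080 for local runs.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```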
Craig McLuckie, Google | Google Cloud Platform 2014
(upbeat music) >> Live from the Mission Bay Conference Center in San Francisco, California, it's theCUBE at Google Cloud Platform Live. Here are your hosts, John Furrier and Jeff Frick. >> Okay welcome back everyone, we are live. This is theCUBE in San Francisco, California for Google Cloud Platform Live, their developer conference for the cloud. I'm John Furrier, the founder of SiliconANGLE, Jeff Frick, my cohost, and we're excited to have a CUBE alum, but also the man about town, coming to talk about containers, Kubernetes. We have Craig McLuckie, product manager at Google. He named the product Kubernetes. Welcome back. >> Thank you. It's great to be back on theCUBE. >> As I said, you're the man about town. Containers are the hottest thing going on. Really enabling a lot of new change. A lot of solidarity in the developer community around bringing cloud together, right? You're seeing people go, wow, containers are not a new concept. Docker has brought together the concept and made a huge push, and the ball just got moved down the field big time. And then Kubernetes kind of ties it all together, and you guys are open sourcing it. I wanted to first talk about, from your perspective, what's changed since VMworld, where we had a great conversation around Kubernetes? Obviously that was front and center in VMware's show, which is a huge IT enterprise vote of confidence. So now, here at Google, core developers. Large scale, backend network interconnect stuff going on. You almost connect the dots, right? Native developers really cranking out the apps? Large scale interconnect? There's a lot in the middle there between those bookends. What's changed? >> So a couple things I think have changed since I last spoke to theCUBE at VMworld. The first is we've seen an amazing amount of velocity around the Kubernetes community. Not just what Google's been doing but also what our open source community members have been contributing. And we're seeing a very fast acceleration of the overall platform. Moving quickly towards operational maturity, you know, getting closer to production readiness and introducing a lot of features that are really needed to both run real world applications and to go to new places, to a variety of new clouds. We're seeing the reality of a very highly portable and maturing way to build container-based applications emerging. That's been very exciting. I think the other thing that's really interesting here is the way that we at Google have been introducing Kubernetes directly into the Google Cloud platform. Today we announced a new product called Google Container Engine which provides the quickest and easiest way to get a Kubernetes cluster up and running and managed for you on Google Cloud platform. And we're very excited about how easy it's making it for our customers to access this new way of building applications. >> Talk about this Container Engine, because obviously App Engine's had huge success. A little bit of a learning curve, but you guys have some core front end developers and you're making that easier now. But what is a Container Engine? Is it a Docker engine? Is it Docker compatible? Is it a whole new animal? What is it? >> That's great, I'm glad you asked that question. I would start by saying this, at Google we have Google Compute Engine, which offers powerful, flexible, fast-booting VMs, and at the other end of the spectrum we've had App Engine, which offers a highly managed, very efficient way to get web applications up and running.
And what we've encountered with our customers is that there is no natural way to move from one world to the other world. There's no connective tissue that exists in the middle that lets our customers think about building applications that are running on a cloud computer rather than just running on a virtual machine. And so what Google Container Engine is, is a technology that lets our customers program at the cluster level. So Docker has provided this amazingly productive way to package up an application and deploy it onto a node. Docker has done a great job of taking a lot of technologies that existed and making them incredibly accessible to developers. But the reality, in our experience, is that at least 80% of our customers' cost of maintaining applications comes out of the operations space, so Kubernetes and Google Container Engine are an operationally viable way to build these distributed applications. It really moves our customers from thinking about deploying things into individual virtual machines to instead saying, hey, I'm just going to drop this into this cluster and it will all be wired together, so I can take these little Lego building blocks I've got called containers, piece them together in ways that are intuitive, and then have a very smart and effective system to run those for me on my behalf. >> So basically a pool of VMs could be available to a developer, if I get this right? So you're saying, I'm a developer, I don't have to worry about the dependencies, VMware versus another form factor? I just let the container deal with that? Is that-- >> What we've done, yes, that's exactly right, we've created this strong separation between infrastructure operations and application operations. Docker has created a portable framework to take basically a binary and run it anywhere, which is an amazing capability. But that's not enough. You also need to be able to manage that with a framework that can run anywhere, so the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment, you could run on another major cloud provider like Rackspace or IBM, and you could just build this application and deploy it there and experience this very powerful cluster first way of building and managing that app. >> Cluster first, I haven't heard that one. >> It's not a cluster you-know-what, it's a cluster first. (laughing) That trumps cloud first from Microsoft, but let's go back to Kubernetes. You named the product, what does it mean? I mean it's kind of a, you don't look at a tech name, you say, it's not like alpha one, ya know? >> Kubernetes is the Greek word for the helmsman of a ship. I was looking to find a name and it turns out, there's a lot of cluster management technologies and a lot of the obvious names were taken, and so I had the inspiration of, what is this thing doing? It's actually the thing that's overseeing the whole of your operation, and is planning what goes where and managing it. So Kubernetes is the helmsman of your cluster group, it's the thing that manages it. >> Did you design the algorithm to stay away from icebergs? (laughing) That's the key thing, you don't want to crash the system. But that's the challenge, you know, just joking aside, orchestration is really a hard thing. That's been a cloud phenomenon, automation. Everyone's been talking about, oh we have management software that automates and orchestrates cloud resources.
But now, in a cloud environment, it's more challenging. Talk about what Kubernetes does differently than older approaches to orchestration. >> I think this is a very, very important consideration. When I look at the way that orchestration's been done traditionally, you tend to think about your application as being deeply tied to the underlying piece of infrastructure, so your orchestration process is: provision me a basic machine, go get the packages I need, deploy my application pieces, wire it in explicitly to all the other pieces of my system, and so you have to kind of build this relatively fragile system where all the pieces are tied together and deeply coupled. What Kubernetes has done is provide a framework where you have a very principled, almost Lego building block that you can stick together and say, I want one of these things, I want it replicated six times, and I want it wired in to these other pieces without actually having to know about where those other pieces are deployed, how they relate to one another. It really is realizing this highly decoupled, very principled way of thinking about your environment as a cluster where you just drop your packages in and they're all wired together using virtualized networking and using this cluster-centric paradigm, and it radically, radically reduces the cost of operations. I could just give you an example of that. In the old days of Google, before we had these technologies inside the house, it was all we could do to keep the lights on. Like every day was an adventure, it was very hard, because our operations had our application pieces deeply tied into the physical infrastructure. When we introduced the system internally known as Borg, we changed the game. In less than a year-- >> Hold on, the name is Borg? >> What was it called? >> Borg? >> Borg. >> Borg. >> Internally known as Borg. (laughing) >> Like connected to everything, like the Microsoft Borg, that's at Microsoft but Microsoft used to be called-- >> I was thinking more Arnold Schwarzenegger, but that's alright. >> Continue. I just wanted to make sure we heard that right. >> We literally doubled the number of production services we were running within a year. It's just so much easier to run things at scale. >> So provisioning, managing, it just makes a smoother operation? Smooth sailing if you will? >> It's really trying to hide provisioning, managing, right? You're basically, I have an app and I want to build it easily, and then I want to deploy it easily, and then I want it to be able to scale easily. >> Yes. >> Without having to go back and reconnect it to more stuff. It's funny because I think most people think that that's what clouds have already always done, right? There's basically compute, networking and storage that's just in small units, virtually available to assemble however I want. But you're saying, I used to have to still assemble it and disassemble it, now it's just-- >> Exactly. >> It's just plugging in. >> That's the challenge. The way we've seen cloud evolving has disappointed us a little bit because it really is just a re-manifestation of the same existing first generation way of thinking about application development, application provisioning. If you challenge a lot of the fundamental assumptions, if you really step back and think about, is there a better way to do this? If I have all this incredibly fungible resource that can turn up and turn down, is there a better way to build applications? Kubernetes is our invitation to the community to participate in defining that thing.
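To make the "replicated six times and wired in" idea above concrete, here is a minimal sketch of declaring that desired state against a cluster with the Kubernetes Go client, client-go, which came along after this conversation; the image name, labels, namespace, and kubeconfig path are illustrative assumptions rather than anything discussed in the interview.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load the local kubeconfig; the default home path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Declare the desired state: six replicas of a containerized web service.
	// The cluster decides where they run and keeps them running.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(6),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "gcr.io/example/web:1.0", // illustrative image name
						Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
					}},
				},
			},
		},
	}

	created, err := clientset.AppsV1().Deployments("default").Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q with %d replicas\n", created.Name, *created.Spec.Replicas)
}
```

Note that the declaration says nothing about which machines the six replicas land on; the scheduler places them and the controllers keep the count at six, which is the decoupling from infrastructure described above.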
We think it is a better way to build applications. We know it because we've been doing this for 10 years and it works really well for us. >> So talk about the open source angle, because one, Kubernetes is open source, we've reported that live when we last chatted. Docker has huge success with their open source model. That's not well known in the mainstream world, how nuanced it is and how developers really are engaged and motivated to play with Docker, which has its own flywheel effect that's very viral in its network effect. What's your strategy with Kubernetes? Is it standard open source blocking and tackling? Are there things you're doing to prime the pump? Is there a magical formula you guys are really nurturing and fostering? >> I am very happy with the way that the project's been run, and it's been humbling to see the amount of adoption success we've had. I think that this manner of operating, where we built Kubernetes as an open source project with the community, and then we take exactly that and turn it into a service and add a lot of high value capabilities to it, is a pattern that's working very well for us. It's massively increased our velocity because it's not just us that are actually developing the project, we have amazing contributions from people like Red Hat. They're putting a lot of time and effort into making this thing great. Our friends at CoreOS are putting a lot of effort into it. We're able to do more because it's just more people working on it, so the velocity is far higher. The second thing is that we were able to go straight to an open offer. Normally we do these early adopter programs hidden behind the curtain, try to figure stuff out and do a lot of iteration. We didn't have to do that because the community has built the API with us, our customers have been working directly with us to shape the API. We know it's going to work for them. >> And that's helped you guys, so your differentiation doesn't really conflict with the community? >> Absolutely not. We recognized that as we moved from a cloud that's worked mostly in the start-up community and with internet-facing companies to a cloud that's really engaging mainstream business, our customers want multi cloud. It's critical to them. They want to be able to run in hybrid cloud. They want to have multi cloud provider relationships. They don't want to just rely on one provider, and so our framework that works well everywhere, but works especially well on Google, serves our business very well. >> Getting some great prompts on CrowdChat, so thanks for coming on theCUBE, always great to chat with you. You're in a hot area, we'd love to pick your brain, but I want you to address three things I'm going to say to you, get your thoughts on. >> Okay. >> It can be your Google perspective, could be your own geeky perspective. Perimeter-less IT, multi cloud and mobile infrastructure. Three of the hottest areas on the planet right now in terms of people looking at investments, retooling, trying to figure things out. Perimeter-less IT. Obviously perimeter IT, perimeter-based security? >> Sure. >> Kind of goes away with the cloud, right? >> Yeah. >> But you still need security, it's perimeter-less, so what does that mean? How do people understand and grasp that concept? >> I'm not sure I'm the right person to speak to perimeter-less IT, but I can say that-- >> Just in general. >> When I think about it, I think there's a couple of things that are happening here that are really interesting.
When I look at the idea of perimeter-less IT, when I look at the idea of what I consider the democratization of IT, if you will, we've lived in a world where most businesses have been beholden to a specific organization that's controlled their provisioning, the policies and the set of bits they can use; everything's been controlled, and IT hasn't been well loved, by and large. We're moving into a world where it's a much more open ecosystem. Departments are far more empowered, anyone with a corporate credit card can go and get a machine, and that's creating amazing agility and velocity for businesses. But it's introducing-- >> Creativity, too. >> A lot of creativity, but it's introducing a lot of pain as well. The hard thing is going to be creating a smart framework that allows empowered decentralization. Going from this world of highly controlled to decentralized empowerment, and I think that's where we're going to see a lot of interest from folks that are operating in the airplay space. >> Okay, multi cloud, just in general. Will people move to multiple clouds? Do you see that? UberClouds, we had Bitnami in earlier like, ah, people aren't really going to multiple clouds. They're not interested in moving workloads. Is that the state of the current situation, or will it evolve to workloads anywhere? >> Multi cloud is the reality of our world. There's no serious customer I've spoken to in the last six months that has not been interested in a multi cloud relationship. Sorry, that's not true, there's no enterprise customer I've spoken to in the last six months. >> That has not been interested? >> That has not been interested in multi cloud. >> And the reason is? >> In some ways. >> It's for what, resources? >> There's a couple of reasons. One is a lot of companies want to have just a multi provider relationship. They don't want to be beholden to a single cloud provider, and frankly almost every customer I speak to has a massive investment in on-premises infrastructure. They want to move away from a lot of the pain associated with managing that, but it's not going to happen overnight. Hybrid cloud is going to exist for quite a while. >> This is back to your empowered decentralization theme. >> And we have to provide them the tools to do that. We have to create positive pressure that moves them from those clouds to the public cloud. >> Final concept, and I've heard this a lot, kind of leads into the keynote, not necessarily the words, but almost reeking of this concept of mobile infrastructure. I mean, mobile first, cluster first kind of enables mobile first, but mobile is obviously a form factor, whether it's an internet of things device or a human, it doesn't matter, it's still an endpoint on the network. >> Yeah. >> It's a multitude of millions of devices, so what is mobile infrastructure? Is it different? Is it the same? What's your take on it? >> It's an interesting question, and the reality of our world is it's a mobile world. It's almost folly to do anything but think about mobile as the primary vehicle for customers, consumers and everyone else to interface with the internet, with the web. It certainly introduces an interesting set of challenges to application developers.
I think one of the things that I am most sort of interested in cracking, from a cloud provider's perspective, is the world of multiple devices, where you have a large set of devices in different form factors that are ultimately presenting a view of the same set of data, the same set of information, and creating a set of experiences that work well in that multi device space. Moving away from a world where state is bound to a device, to a world where state is based in your cloud and your device is simply providing a view or a way to interface with that data. We still have a way to go before that is fully materialized, but I think that's going to be a big sort of anchor point of a lot of mobile development in the space. >> So Craig, where does the locus of competition move then? If the data center just becomes a resource that's on tap, basically, that I can just get? How do the cloud providers then differentiate? >> Basic infrastructure is relatively undifferentiated, but when I look at the way that we run inside Google, we do some really, really scary smart things to make your application run for you. If you think about the way we run our infrastructure, it's almost like the flight controller of a modern airplane. It's going from the old wire-based control system, where you move something to move a flap, to a world where you have this controller that's taking in millions of signals a second and making incredibly informed decisions that is optimizing the heck out of everything you do and making very fine-grained corrections, and I think that's going to be a huge avenue of differentiation. When you take an application, you package it and you give it to us and you trust us to run it for you, and it's running at a slightly higher level, we have a much higher abstraction level, we can do incredibly smart things with things like machine learning technologies. We can watch how your application's running. We know how it ran last time, so we can tell if something's going wrong because we have the ability to actually watch it. This is how we run internally. >> Right, right. >> It's not just about the infrastructure. It's going to be about smart systems that run your application for you. And that's going to be hard to-- >> It's really to abstract above the management of the infrastructure. It's actually the management of the application and the optimization of the application as opposed to the infrastructure? >> There's so much more value in moving from static, dumb infrastructure to actively managed, sort of precision managed, container based capabilities. It's quite jarring. This was clear to me very soon after we shipped Google Compute Engine. I was able to see, we never looked inside VMs, so we were able to see what level of CPU utilization our customers were getting, and we compared that to what we were able to run in our internal workloads, and our customers were only getting, like, several integer multiples less utilization than what they were paying for. So we knew that something could be done. We could actually move up the abstraction layer and just do a better job by actively managing and making smart decisions. And that would be very disruptive-- >> So let's play a game, we played a game with our last guest, we'll play the game of you and I are going to go into business together and be venture capitalists. >> Okay. >> Okay. >> Sounds like fun. >> What's our investment thesis? Knowing what we know, I mean, there's a lot of entrepreneurs out there really looking at the enterprise right now.
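Craig's point above about smart systems that watch how an application ran last time and correct it is, at bottom, a reconciliation loop: observe actual state, compare it with desired state, act on the difference, repeat. The toy Go sketch below illustrates only that pattern; the observeRunning and scaleTo stand-ins are invented for the example and are not Google's or Kubernetes' actual code.

```go
package main

import (
	"fmt"
	"time"
)

// desiredReplicas is the declared target; in a real system this would come
// from a spec the user submitted, not a constant.
const desiredReplicas = 6

// observeRunning stands in for querying the cluster for actual state.
// It is faked here for illustration.
func observeRunning() int {
	return 4 // pretend two replicas have died
}

// scaleTo stands in for the corrective action the controller takes.
func scaleTo(n int) {
	fmt.Printf("scaling to %d replicas\n", n)
}

func main() {
	// The control loop: observe, diff against desired state, act, repeat.
	for i := 0; i < 3; i++ { // bounded here so the example terminates
		actual := observeRunning()
		if actual != desiredReplicas {
			fmt.Printf("drift detected: want %d, have %d\n", desiredReplicas, actual)
			scaleTo(desiredReplicas)
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```

The same observe, diff, act shape is what Kubernetes controllers run continuously against the cluster's declared state.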
The enterprise is hard, cloud is kind of like a proxy for the enterprise, but it's not like your classic enterprise. I'm a tech entrepreneur, I'm a coder, I'm an architect, I'm an OS guy, systems guy, could be a creative filmmaker, whatever, but I want to come in and get some white space. Is there white space out there that you see that is an opportunity for developers that could really come in and stake a claim and build a really good business? It could be a lifestyle business, it could be a home run. Where would we invest? >> Yeah, I think there's so much white space in this domain. We are in the very early days of getting these technologies to market. Obviously there's just bolstering the basics, sort of the fundamentals of the platform. Overlay networking, everyone's talking SDN. Obviously there's a lot of hype around that, but being able to create an abstraction that allows high levels of pluggability for different network fabrics as you move between clouds is interesting. Storage, and doing a better job of providing virtualized storage that is available to these containers, is an area of opportunity. There's a lot of work to be done in the tooling environment, full-on application lifecycle management, continuous integration, lots of opportunity in that space. And then frankly, as we start looking at taking these technologies to market and deploying them into real businesses that are running multi cloud, there's going to be a lot of the governance, risk management and compliance overlay capabilities that just don't exist. We have the ability to define policy and enforce it in a very effective way, whether it's security policy, data loss prevention policy-- >> But it has to be dynamic, right? >> And it has to be dynamically done and it has to be enforced at the node. >> That's software, that's hard software? >> And there's so much work to be done there. There's so many opportunities to either create niche, vertically oriented capabilities for a service-specific protocol, or unique, highly valuable, cross-cutting capabilities. I'm very excited about the future in this space. >> Where would we get started if I was an entrepreneur? Like, hey Craig, I saw your interview, where do I get started? Writing App Engine code? I want to put the boat in the water and start drifting into this area you just mentioned, how should I navigate in? How should I vector in? >> A lot of it depends on where you're going to be operating in the stack. I would suggest you go and learn Go. Go, GoLang if you want to talk about the sort of development environment, is rapidly emerging as the language for the new cloud. We're seeing a lot of work in the Go community. Docker is written in Go, Kubernetes is written in Go. So I'd start there. It's a great platform for systems development. So I'd start looking at some of the existing technologies, Docker, Kubernetes, start just assessing where the gaps are. I'd probably approach it from a systems development perspective if I was doing it, but there's also going to be a lot of value higher up the chain where you can actually-- >> You can dance on top of the stack and around the stack? >> Absolutely. >> Alright, so final question, are we going back to the old OS days? I know you were joking before we came on, conversational even in a way, that was pretty relevant. I mean, we're seeing concepts of systems programming of the 80's, kind of, but in a decentralized way. Comment on that because I think that's tying a lot of things together.
>> I think that's an incredibly astute observation, and I think we're moving away from a world where the operating system today is a node-local thing, right? So I have an operating system and it's providing an environment that abstracts me from the physical details of one piece of hardware, one machine, you know, one set of resources. What we're starting to see now is the emergence of some of these distributed concepts, where you're programming not to a specific single piece of infrastructure, a single piece of hardware, but you're programming to a cluster, and so I think it's very much like that. I think that's a very astute observation, and we're going to see the buzz-- >> But no one vendor owns it. It's owned by the world. >> And nor should one. It needs to be a POSIX-like ubiquitous framework that lets us get more out of these cluster-centric applications. >> Very organic, I mean I love what's happening, it's a very organic development, but yet there's some kind of group dynamics going on around clusters, and Docker's a great example. Came out of the woodwork to become a de facto standard. Probably the fastest de facto standard that I've ever seen-- >> It's been breathtaking how quickly that technology's taken hold. >> And that's just the crowd. >> Yeah. >> Just saying, hey, if we don't like decide on something? We like these guys the best, they didn't piss anyone off or whatever, whatever the dynamic is. It could be double source, flywheel, but-- >> It's interesting, certainly from Google's perspective, we noticed Docker a lot sooner than most of the world did. We had technologies that we could have stood up as potentially competing capabilities, but we chose not to, because the world is incredibly well served by a single standard for defining and packaging applications. Now we need to continue that, and we need to build the POSIX-like distributed systems standard that people think about coding to when they're building these modern, next-gen cloud V2 applications. >> Craig, I really appreciate you spending the time. Love the conversation, love kind of the long, winding road we took there. We knocked out some Kubernetes. We talked about Docker containers. Talked about the future of the industry. Really appreciate it, you're awesome to have on theCUBE here, you're invited any time. CUBE alumni Craig McLuckie right here on theCUBE. We'll be right back, here, live in San Francisco, broadcasting exclusively from Google's developer conference, the Cloud Platform Live Event from Google. We'll be right back after this short break. (light music)
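As a footnote to Craig's suggestion earlier in the conversation to learn Go for this kind of systems work, here is a small, self-contained example of the style he is pointing at: goroutines and a channel fanning out concurrent health checks and collecting the results. The URLs are placeholders for the sketch, not real services.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// check reports whether a single endpoint answered within the timeout.
func check(url string, results chan<- string) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		results <- fmt.Sprintf("%s: unreachable (%v)", url, err)
		return
	}
	defer resp.Body.Close()
	results <- fmt.Sprintf("%s: %s", url, resp.Status)
}

func main() {
	// Placeholder endpoints; swap in whatever services you actually run.
	urls := []string{
		"http://localhost:8080/healthz",
		"http://localhost:8081/healthz",
		"http://localhost:8082/healthz",
	}
	results := make(chan string, len(urls))

	// One goroutine per endpoint; the channel collects results as they arrive.
	for _, u := range urls {
		go check(u, results)
	}
	for range urls {
		fmt.Println(<-results)
	}
}
```

Lightweight concurrency like this, plus static binaries that drop neatly into a container image, is a large part of why Go fits the cloud tooling mentioned in the interview, Docker and Kubernetes included.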