
Search Results for Eric Brewer:

Eric Brewer, Google Cloud | Google Cloud Next 2019


 

>> Live from San Francisco, it's theCUBE, covering Google Cloud Next 2019, brought to you by Google Cloud and its ecosystem partners. >> Welcome back. This is day three of Google Cloud Next. You're watching theCUBE, the leader in live tech coverage. theCUBE goes out to the events, we extract the signal from the noise. My name is Dave Vellante. I'm here with my co-host Stu Miniman. John Furrier has been here >> all week. Wall-to-wall >> coverage, three days. Check out theCUBE.net for all the videos, SiliconANGLE.com for all the news. Eric Brewer is here, the vice president of infrastructure and a Google Fellow. Dr. Brewer, thanks for coming on theCUBE. >> Happy to be here, good to see >> you. So tell us the story of infrastructure and its evolution at Google, and then we'll talk about how you're taking what you've learned inside of Google and helping customers apply it. >> Yeah, one of the interesting things about Google is that it essentially makes no use of virtual machines internally. That's because Google started in 1998, which is the same year that VMware started; it's the company that kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic Unix processes and communication. And so, scaling that up, you get a system that works a lot with just processes and containers. So when I saw containers come along with Docker, we said, well, that's a good model for us, and we could take what we know internally, which was called Borg, a big scheduler, and we could turn that into Kubernetes, and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work, a bit more about containers and APIs and services rather than kind of the low-level infrastructure. >> Would you infer from that comment that you essentially had a cleaner sheet of paper when containers started to ascend? >> I kind of feel like it's not an accident. Google influenced Linux's use of containers, right, which influenced Docker's use of containers, and we kind of merged the two concepts. It became a good way to deploy applications that separates the application from the underlying machine. Instead of tying a machine and OS and application together, we'd actually like to separate those and say, we'll manage the OS and the machine, and let's just deploy applications independent of machines. Now we can have lots of applications per machine, improved utilization, improved productivity. That's the way we were already doing it internally, which was not common in the traditional cloud, but it's actually a more productive way to work.
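To make the model Brewer describes concrete — declare the application once and let a scheduler decide which machines run it — here is a minimal sketch using the official Kubernetes Python client. It assumes a reachable cluster configured in the local kubeconfig, and the image name, labels, and replica count are illustrative placeholders, not anything Google ships.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes a reachable cluster).
config.load_kube_config()

# Describe the application: which container image to run and how many copies.
# Nothing here names a machine; the scheduler decides placement.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="gcr.io/example-project/hello-web:1.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same declaration runs unchanged whether the cluster's nodes are on-prem machines or cloud instances, which is the decoupling Brewer is pointing at.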
>> Eric, my background is in infrastructure, and, you know, I was actually at the first DockerCon back in 2014, only a few hundred of us, right across the street from where we are here. And I saw the Google presentation and I was like, oh my gosh. I lived through that wave of virtualization, and the nirvana we want is, I want to just be able to build my application and not worry about all of those underlying pieces of infrastructure. We're making progress, but we're not there. How are we doing as an industry as a whole? And where are we, and where is Google looking at Kubernetes and all these other pieces to improve that? What do you still see as the room for growth? >> Well, it's pretty clear that Kubernetes has won, in the sense that if you're building new applications for the enterprise, that's currently the way you would build them. But it doesn't help you move your legacy stuff, or, say, help you move to the cloud. It may be that you have workloads on-prem that you would like to modernize that are on VMs or bare metal, traditional kind of eighties apps in Java or whatever, and how does Kubernetes affect those? That's actually still a place where I think things are evolving. The good news is it's now much easier to mix additional services and new services, using Istio and other things on GKE, once people containerize workloads. But actually, I would say most people just do the new stuff in Kubernetes and wrap the old stuff to make it look like a service; that gets you pretty far. And then over time you can containerize the workloads that you really care about, that you want to invest in and do what's new with, and so you can kind of make some of those transitions on-prem, if you'd like, separate from moving to the cloud. And then you can decide, oh, this workload goes in the cloud, this workload I need to keep on-prem for a while, but I still want to modernize it. So you have a lot more flexibility. >> Can you just parse that a little bit for us? You're talking about the migration service that's coming out, or is it part of >> the way that Velostrata works, which can kind of take a VM and convert it to a container? It's a newer version of that, which really gives you a manifest, essentially, for the container, so you know what's inside it and you can actually use it in the modern way. That's the migration tool, and it's super useful. But I kind of feel like even just being able to run high-quality Kubernetes on-prem is a pretty useful step, because you get developer velocity, you get release frequency, you get more decoupling of operations and development, so you get a lot of benefits on-prem. But also, when you move to cloud, you can go to GKE and get, you know, a great Kubernetes experience whenever you're ready to make that transition. >> So it sounds like what you described with Anthos is particularly the on-prem pieces, like an elixir to help people more easily get to a cloud-native environment and then, ultimately, a bridge to the >> cloud. That's kind of it; we're helping people get cloud-native benefits where they are right now, and they, on their own time, can decide not only when to move a workload, but even, frankly, which cloud to move it to, right? We prefer, obviously, that they move to Google Cloud, and we'll take our chances, because I think these cloud-native applications are what we're particularly good at. But it's more important that they are moving to this kind of modern platform, because it helps them, and it increases our impact on the industry to have this happen.
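A low-tech version of "wrap the old stuff to make it look like a service" is a Kubernetes Service of type ExternalName, which gives an existing, non-containerized endpoint a stable in-cluster DNS name. Istio offers much richer ways to do this, as Brewer notes; the sketch below (official Python client, hypothetical hostnames) only shows the basic idea.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster in the local kubeconfig

# Expose an existing, non-containerized system under a cluster-local name.
# New services can call "legacy-billing.default.svc" while the old app stays where it is.
svc = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="legacy-billing"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        external_name="billing.legacy.example.internal",  # hypothetical on-prem hostname
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```

New code addresses the legacy system by a service name, so when that workload is eventually containerized, callers don't change.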
>> Help us understand the nuance there, because there's obvious benefits of being in the public cloud, you know, being able to rent infrastructure, opex versus capex, and managed services, etcetera. But to the extent that you can bring that cloud experience to your on-premises data, that's what many people want, to have that hybrid experience, for sure. But other than the obvious benefits that I get from a public cloud, what are the other nuances of actually moving into the public cloud, from an experience standpoint and a business value perspective? >> Well, one question is, how much rewriting do you have to do? Because it's a big transition to move to cloud, and it's also a big transition to rewrite some of your applications. So in this model we're actually separating those two steps, and you can do them in either order. You can lift and shift to move to cloud and then modernize it, but it's also perfectly fine to say, I'm going to modernize on-prem, do my rewrites in a safe, controlled environment that I understand, that is low risk for me, and then I'm going to move it to the cloud, because now I have something that's really ready for the cloud and has been thought through carefully. Having those two options is actually an important change with Anthos. >> We've heard some stats, I think Thomas mentioned them, that eighty percent of the workloads are still on-prem. We hear that all the time. And some portion of those workloads are mission-critical workloads with a lot of custom code that people really don't want to necessarily freeze, and a lot of times, if you're going to migrate, you have to freeze. So my question is, can I bring some of those Anthos and other Google benefits to on-prem and not have to freeze the code, not have to rewrite, just essentially leave those there, and take my other stuff and move it into the cloud? Is that what people are doing? And can that >> work? Things mix. But I would say the beachhead is having well-managed Kubernetes clusters on-prem, okay, which you can use for new development or as a place to do your rewrites or partial rewrites. You can mix VMs and mainframes and Kubernetes; they're all mixable. It's not a big problem, especially with Istio, where it can make them look like they're part of the same service >> and framework, right? >> So I think it's more about having the ability to execute modern development on-prem and feel like you're really able to change those apps the way you want and on a good timeline. >> Okay, so I've heard several times this week that Anthos is a game changer. That's how Google, I think, is looking at this; you guys are super excited about it. So one would presume, then, that that eighty percent on-prem is really going to start to move. What are your thoughts on that? >> I think the way to think about it is, all the customers you talk to actually do want to move their workloads to cloud. That's not really the discussion point anymore. It's more about the reasons they can't, which could be: they already have a data center they fully paid for; there are regulatory issues they have to get resolved; this workload is too messy, they don't want to touch it at all, the people that wrote it aren't here anymore. There's all kinds of reasons. And so I feel like the essence of it is, let's just interact with the customer right now, before they make a decision about their cloud, and help them, and in exchange for that I believe we have a much better chance to be their future cloud, right? Because we're helping them. But also, they're starting to use frameworks that we're really good at. Right? If they're betting on Kubernetes and containers, I like our chances for winning their business down the road. >> You're earning their trust by providing those capabilities. >> That's really the difference. We can interact with those eighty percent of workloads right now and make them better. >> Alright.
So, Eric, the term we've heard a bunch this week is "we're meeting customers where they are." Now, Dave and I are analysts, so we could tell customers, "You suck at a lot of this stuff. You should listen to Google. They're really smart, and they know how to do these things," right? Help us out. Tell us some of those gaps, some of the learnings you've had. And we understand, you know, migrations and modernization are really challenging things. What are some of those things that customers can do? >> Yeah, so on the basic issues, I would say one thing you notice when using GKE is that, huh, the OS has been patched for me magically, right? We had these huge security issues in the past year, and no one on GKE had to do anything. They didn't restart their servers, we didn't tell them, oh, you get downtime because we have to deal with these massive security attacks. All that was magically handled. Then you say, oh, I want to upgrade Kubernetes. Well, you could do that yourself. Guess what? It's not that easy to do. Kubernetes is a beast, and it's changing quickly, every quarter. That's good in terms of velocity and trajectory, and it's the reason that so many people can participate. At the same time, if you're a group trying to run Kubernetes on-prem, it's not that easy to do right. So there's a lot of benefit in just saying, we update clusters all the time, we're experts at this, we will update your clusters, including the OS and the Kubernetes version, and we can give you monitoring data and tell you how your clusters are doing. That stuff honestly is not core to these customers, right? They want to focus on their advertising campaign or their oil and gas workloads. They don't want to focus on cluster management. So that's really the second thing: >> they get that operating model. If I do Anthos in my own data center, is it the same kind of environment? How do we deal with things like, well, I need to worry about change management, testing, and all my other pieces? >> The general answer to that is, you use many clusters. You could have a thousand clusters, and if you want that, there's good reason to do it. But one reason is we'll upgrade the clusters individually, so you could say, let's make this cluster a test cluster, we'll upgrade it first, and we'll tell you what broke, if anything. If you give us tests, we can run the tests, and then once we're comfortable that the upgrade is working, we'll roll it out to all your clusters. Same thing with policy changes. You want to change your quota management or access control? We can roll out that change in a progressive way, so that we do it first on clusters that are not so critical.
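The rollout pattern Brewer describes — upgrade a designated test cluster first, verify, then roll the change out to the rest — can be approximated by hand with the Kubernetes Python client and multiple kubeconfig contexts. This is only an illustration of the idea, not how GKE or Anthos implements it; the context names, deployment name, and image tag are hypothetical.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts: the first one is the designated test cluster.
CONTEXTS = ["onprem-test", "onprem-prod-1", "cloud-prod-1"]
NEW_IMAGE = "gcr.io/example-project/hello-web:1.1"  # placeholder upgrade target

def set_image(context: str) -> None:
    # Build an API client bound to one cluster and patch the deployment's image.
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=context))
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": NEW_IMAGE}]}}}}
    api.patch_namespaced_deployment(name="hello-web", namespace="default", body=patch)

# Upgrade the test cluster first.
set_image(CONTEXTS[0])

# ...run smoke tests / health checks against the test cluster here...

# Only then roll the same change out to the remaining, more critical clusters.
for ctx in CONTEXTS[1:]:
    set_image(ctx)
```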
>> So I've got to ask a question. You're a software guy, and you're approaching this problem from a real software perspective. There is no box; I don't see a box. And there are three examples in the marketplace, Azure Stack, Oracle Cloud at Customer, and Amazon Outposts, where there's a box. No box from Google, pure software. Why no box? Do you need a box? The box guys say you've got to have that. Do you have a box? >> It's more like, I would say, you don't have to have a box. >> There's never a box? Okay. >> That's because, again, all these customers are staying in the data center because they already have the hardware, right? If they're going to buy new hardware, they might as well move to cloud, at least for some of the customers. And it turns out we can run on most of their hardware. We're leveraging VMware for that, with the partnership we announced here, so that generally works. But that being said, we also now have partnerships with Dell and others, so if you want a box, Cisco, Dell, HP, we'll actually have offerings that way as well, and there's certainly good reason to do that. You can get that infrastructure up and know it works well, it's been tested. But the bottom line is, we're going to do both models. >> Yeah, okay. So I could get a full stack from hardware through software, through the partnerships, and there's your stack. >> Right, and it'll always come from partners. We're really working with a partner model for a lot of these things, because we honestly don't have enough people to do all the things we would like to do with these customers. >> And how important is it that that on-prem stack is identical, or homogeneous, with what's in the public cloud? Is it really? It sounds like it's coming along, but philosophically, do the software components have to be >> the same? Really, at least the core pieces need to be the same, like Kubernetes, Istio, and the policy management. If you use open source things like MySQL or Kafka or Elastic, those also operate the same way, right? So that when you're in different environments, you really kind of get the feeling of one environment, one control plane. Now, that being said, if you want to use a special feature, like, I want to use BigQuery, that's only available on Google Cloud, right? You can call it, but that stuff won't be portable. Likewise, if there's something you want to use on Amazon, you can use it, and that part won't be portable. But at least your infrastructure will be consistent across the platforms. >> How should we think about the future? I mean, just without giving away, you know, confidential information, obviously not going to do that, but just philosophically, where are you going? When you talk to customers, what should their mindset be? How should they be preparing for the future? >> Well, I think there's a few bets we're making. So, you know, we're happy to work on kind of traditional cloud things with virtual machines and disks and lots of classic stuff; that's still important, it's still needed. But I would say a few things that are interesting that we're pushing on pretty hard. One, in general, is this move to a higher-level stack around containers and APIs and services, and that's Kubernetes and Istio and that genre. But then the other thing I think is interesting is we're making a pretty fundamental bet on open source, and it's a deeper bet than others are making, right, with partnerships with open source companies where they're helping us build the managed version of their product. So I think that's really going to lead to the best experience for each of those packages, because the people that developed the package are working on it, right, and we will share revenue with them. So it's, you know, Kubernetes is open source, TensorFlow is open source; this is kind of the way we're going to approach this thing, especially for hybrid and multi-cloud, where there really, in my mind, is no other way to do multi-cloud other than open source, because the space is too fast-moving. You're not going to say, oh, here's a standard API for multi-cloud, because whatever API you define is going to be obsolete in a quarter or two, right? What we're saying is, the standard is not a particular standard per se. It's the collection of open source software that evolves together, and that's how you get consistency across the environments: because the code is the same. And in fact there is a standard, but we don't even know what it is exactly, right? It's implicit in the code. >> Okay, but so any other competitor can say, okay, we love open source too, we'll embrace open source. What's different about Google's philosophy? >> Well, first of all, you could just look at the very high level of contribution back into the open source packages, not just the ones that we're doing. You can see we've contributed things like the Kubernetes trademark, so that means it's actually not a Google thing anymore; it belongs to the Cloud Native Computing Foundation. But also, the way we're trying to partner with open source projects is really to give them a path to revenue, all right, give them a long-term future, and the expectation is that makes the products better. And it also means that we're implicitly the preferred partner, because we're the ones helping them. >> All right, Eric, one of the things that caught our attention this week is really kind of extending containers with things like Cloud Code and Cloud Run. Can you speak a little bit to that, and, you know, directionally where that's going? >> Yeah, Cloud Run is one of my favorite releases of this week. Both are good; Cloud Code is great also, especially its VS Code integration, which is really nice for developers. But I would say Cloud Run kind of says we can take, you know, any container that has a kind of stateless thing inside and an HTTP interface, and make it something we can run for you in a very clean way. What I mean by that is you pay per call, and in particular we'll listen twenty-four-seven in case a call comes, but if no call comes, we're going to charge you zero, right? So we'll eat the cost of listening for your packet to arrive, but if a packet arrives for you, we will magically make sure you're there in time to execute it. And if you get a ton of connections, we'll scale you up; we could have a thousand servers running your Cloud Run containers. And so what you get is a very easy deployment model that is a generalization, frankly, of functions. You can run a function, but you can also run not only a container with kind of a managed runtime, App Engine style, but also any arbitrary container with your own custom Python and image processing libraries, whatever you want.
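The contract Brewer sketches for Cloud Run — a stateless container that accepts HTTP on whatever port the platform hands it, billed per request and scaled to zero — boils down to something like the following minimal Python service. Cloud Run conventionally passes the listening port in the PORT environment variable; the handler logic here is a placeholder.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder work; a real service might run image processing,
        # call other APIs, etc., then return a response.
        body = b"hello from a scale-to-zero container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The platform injects the port to listen on; default to 8080 for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Anything that honors this simple contract, with whatever libraries the container bundles, fits the "generalization of functions" model Brewer describes.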
>> You're our last guest at Google Cloud Next 2019, so thank you. Put a bow on the show this year. Obviously we've got the bigger, better, shinier Moscone Center, it's awesome, definitely a bigger crowd; you see the growth here. But tie a bow on it. Tell us what you think. Take us home. >> I have to say it's been really gratifying to see the reception that Anthos is getting. I do think it is a big shift for Google and a big shift for the industry. And, you know, we actually have people using it, so I kind of feel like we're at the starting line of this change. But I feel like it's really resonated well this week, and it's been great to watch the reaction. >> Everybody wants their infrastructure to be like Google's. This is one of the people who made it happen. Eric, thanks very much for coming on theCUBE. Appreciate it. >> Pleasure. >> All right, keep it right there, everybody. We'll be back to wrap up Google Cloud Next 2019. My name is Dave Vellante. Stu Miniman and John Furrier will be back on set. You're watching theCUBE. We'll be right back.

Published Date : Apr 11 2019


Breaking Analysis: The Improbable Rise of Kubernetes


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The rise of Kubernetes came about through a combination of forces that were, in hindsight, quite a long shot. Amazon's dominance created momentum for cloud native application development, and the need for newer and simpler experiences beyond just easily spinning up compute as a service. This wave crashed into innovations from a startup named Docker, and a reluctant competitor in Google that needed a way to change the game on Amazon and the cloud. Now add in the effort of Red Hat, which needed a new path beyond Enterprise Linux and, oh, by the way, was just about to commit to a Kubernetes alternative for OpenShift, and figure out a governance structure to herd all the cats in the ecosystem, and you get the remarkable ascendancy of Kubernetes. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we tap the back stories of a new documentary that explains the improbable events that led to the creation of Kubernetes. We'll share some new survey data from ETR and commentary from the many early innovators who came on theCUBE during the exciting period since the founding of Docker in 2013, which marked a new era in computing. And because we're talking about Kubernetes and developers today, the hoodie is on. There's a new two-part documentary that I just referenced; it's out and it was produced by Honeypot on Kubernetes. Part one and part two tell the story of how Kubernetes came to prominence and many of the players that made it happen. Now, a lot of these players, including Tim Hockin, Kelsey Hightower, Craig McLuckie, Joe Beda, Brian Grant, Solomon Hykes, Jerry Chen and others, came on theCUBE during the formative years of containers going mainstream and the rise of Kubernetes. John Furrier and Stu Miniman were at the many shows we covered back then, and they unpacked what was happening at the time. We'll share the commentary from the guests that they interviewed and try to add some context. Now let's start with the concept of developer-defined infrastructure, DDI. Jerry Chen was at VMware and he could see the trends that were evolving. He left VMware to become a venture capitalist at Greylock; Docker was his first investment. And he saw the future this way. >> What happens is, when you define infrastructure in software, you can program it. You make it portable. And that's the beauty of this cloud wave, what I call DDIs. Now, to your point, every piece of infrastructure from storage, networking, to compute has an API, right? And at AWS there was an early trend where S3, EBS, EC2 had APIs. >> As building blocks too. >> As building blocks, exactly. >> Not monolithic. >> Monolithic building blocks; every little building block has its own API, and just like that, Docker really is the API for this unit of the cloud. It enables developers to define how they want to build their applications, how to network them, you know, as Wills talked about, and how you want to secure them and how you want to store them. And so the beauty of this generation is now developers are determining how apps are built, not just at the, you know, end-user, iPhone-app layer, but the data layer, the storage layer, the networking layer. So every single level is being disrupted by this concept of a DDI, and how you build, use, and actually purchase IT has changed. And you're seeing the incumbent vendors like Oracle, VMware, and Microsoft try to react, but you're seeing a whole new generation of startups.
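Chen's point that every piece of infrastructure now has an API is easy to see with a couple of calls against AWS. The sketch below uses boto3; the bucket name, AMI ID, and instance type are placeholders, and it assumes credentials and a default region are already configured in the environment.

```python
import boto3

# Storage as an API call: create an object store bucket.
# (Bucket names must be globally unique; outside us-east-1 you would also
# pass a CreateBucketConfiguration with a LocationConstraint.)
s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-ddi-demo-bucket")

# Compute as an API call: launch a virtual machine.
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```

Because the building blocks are programmable, the definition of the infrastructure can live in the developer's code rather than in a ticket queue — which is the shift DDI names.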
>> Now, what Jerry was explaining is that a new abstraction layer was being built, and here's some ETR data that quantifies that and shows where we are today. The chart shows Net Score, or spending momentum, on the vertical axis, and market share, which represents pervasiveness in the survey set, on the horizontal axis. So as Jerry and the innovators who created Docker saw, the cloud was becoming prominent, and you can see it still has spending velocity that's elevated above that 40% red line, which is kind of a magic mark of momentum. And of course it's very prominent on the X axis as well. And you see low-level infrastructure virtualization, and that even floats above servers and storage and networking. Back in 2013, the conversation with VMware, and by the way, I remember having this conversation deeply at the time with Chad Sakac, was: we're going to make this low-level infrastructure invisible, and we intend to make virtualization invisible, i.e., simplified. And so you see, above the two arrows there, the categories related to containers, container orchestration and container platforms, which are abstraction layers and services above the underlying VMs and hardware. And you can see the momentum that they have right there with the cloud and AI and RPA. So you had these forces that Jerry described that were taking shape, and this picture kind of summarizes how they came together to form Kubernetes. In the upper left, of course, you see AWS, and we inserted a picture from a post we did right after the first re:Invent in 2012; it was obvious to us at the time that the cloud gorilla was AWS, and it had all this momentum. Now, Solomon Hykes, the founder of Docker, you see there in the upper right. He saw the need to simplify the packaging of applications for cloud developers. Here's how he described it back in 2014 on theCUBE with John Furrier. >> A container is a unit of deployment, right? It's the format in which you package your application, all the files, all the executables, libraries, all the dependencies, in one thing that you can move to any server and deploy in a repeatable way. So it's similar to how you would run an iOS app on an iPhone, for example. >> Docker at the time was a 30-person company, and it had just changed its name from dotCloud. And back to the diagram, you have Google with a red question mark. So why would you need more than what Docker had created? Craig McLuckie, who was a product manager at Google back then, explains the need for yet another abstraction. >> We created the strong separation between infrastructure operations and application operations. And so Docker has created a portable framework to take basically a binary and run it anywhere, which is an amazing capability, but that's not enough. You also need to be able to manage that with a framework that can run anywhere. And so the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment, you could run on another major cloud provider. >> Now, Google had this huge cloud infrastructure, but no commercial cloud business to compete with AWS, at least not one that was taken seriously at the time. So it needed a way to change the game.
And it had this thing called Google Borg, which is a container management system and scheduler, and Google looked at what was happening with virtualization and said, you know, we obviously could do better. Joe Beda, who was with Google at the time, explains their mindset going back to the beginning. >> Craig and I started up Google Compute Engine, VM as a service. And the odd thing to recognize is that nobody who had been in Google for a long time thought that there was anything to this VM stuff, right? Because Google had been on containers for so long, that was their mindset; Borg was the way that stuff was actually deployed. So, you know, my boss at the time, who's now at Cloudera, booted up a VM for the first time, and anybody in the outside world would be like, hey, that's really cool. And his response was like, well, now what? Right? You're sitting at a prompt; that's not super interesting. How do I run my app? Right? Which is what everybody's been struggling with, with cloud: it's not how do I get a VM up, it's how do I actually run my code? >> Okay, so Google never really did virtualization. They were looking at the market and said, okay, what can we do to make Google relevant in cloud? Here's Eric Brewer from Google, talking on theCUBE about Google's thought process at the time. >> One interesting thing about Google is it essentially makes no use of virtual machines internally. And that's because Google started in 1998, which is the same year that VMware started; it's the company that kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic Unix processes and communication. And so scaling that up, you get a system that works a lot with just processes and containers. So when I saw containers come along with Docker, we said, well, that's a good model for us. And we can take what we know internally, which was called Borg, a big scheduler, and we can turn that into Kubernetes and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. >> Now, Eric Brewer gave us the bumper sticker version of the story there. What he reveals in the documentary that I referenced earlier is that initially Google was like, why would we open source our secret sauce to help competitors? So folks like Tim Hockin and Brian Grant, who were on the original Kubernetes team, went to management and pressed hard to convince them to bless open sourcing Kubernetes. Here's Hockin's explanation. >> When Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on, we realized we know where this is going: we know that once you embrace the Docker mindset, you very quickly need something to manage all of your Docker nodes once you get beyond two or three of them, and we know how to build that, right? We've got a ton of experience here. So we went to our leadership and said, you know, please, this is going to happen with us or without us, and I think the world would be better if we helped. >> So the open source strategy became more compelling as they studied the problem, because it gave Google a way to neutralize AWS's advantage: with containers you could develop on AWS, for example, and then run the application anywhere, like on Google's cloud. So it not only gave developers a path off of AWS; if Google could develop a strong service on GCP, they could monetize that play.
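Hykes's "unit of deployment" — everything the application needs packed into one artifact you can run anywhere — is what the Docker toolchain automates. A sketch using the Docker SDK for Python follows; it assumes a local Docker daemon and an ./app directory containing a Dockerfile, and the image tag is a placeholder.

```python
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Package the application: the Dockerfile in ./app pins the files,
# libraries, and runtime into a single portable image.
image, build_logs = client.images.build(path="./app", tag="example/hello-web:1.0")

# Run that same artifact anywhere a container runtime exists.
container = client.containers.run(
    "example/hello-web:1.0",
    detach=True,
    ports={"8080/tcp": 8080},  # map the app's port to the host
)
print(container.short_id)
```

What Docker left open — and what McLuckie's "yet another abstraction" supplies — is how to manage many of these containers across many machines.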
Now, focus your attention back to the diagram, which shows a smiling Alex Polvi from CoreOS, which was acquired by Red Hat in 2018. He saw the need to bring Linux into the cloud. I mean, after all, Linux was powering the internet; it was the OS for enterprise apps. And he saw the need to extend its path into the cloud. Now here's how he described it at an OpenStack event in 2015. >> Similar to what happened with Linux. Like, yes, there is still a need for Linux and Windows and other OSs out there, but by and large, on production web infrastructure, it's all Linux now. And you were able to get onto one stack. And how were you able to do that? It was by having a truly open, consistent API and a commitment to not breaking APIs, and so on. That allowed Linux to really become ubiquitous in the data center. Yes, there are other OSs, but Linux, by and large, is what is being used for production infrastructure. And I think you'll see a similar phenomenon happen for this next level up, because we're treating the whole data center as a computer instead of treating one individual instance as the computer. And that's the stuff that Kubernetes and Mesos and so on are doing. And I think there will be one that shakes out over time, and we believe that'll be Kubernetes. >> So Alex saw the need for a dominant container orchestration platform, and you heard him, they made the right bet: it would be Kubernetes. Now, Red Hat has been around since 1993, so it has a lot of on-prem, and it needed a future path to the cloud. So they rang up Google and said, hey, what do you guys have going on in this space? Google was kind of non-committal, but it did expose that it was thinking about doing something that was, you know, pre-Kubernetes, before it was called Kubernetes: hey, we have this thing and we're thinking about open sourcing it. But Google's internal debates, and, you know, some of the arm twisting from the engineers, were taking too long. So Red Hat said, well, screw it, we've got to move forward with OpenShift, so we'll do what Apple and Airbnb and Heroku are doing and we'll build on an alternative. And so they were ready to go with Mesos, which was much more sophisticated than Kubernetes at the time and much more mature, but then Google at the last minute said, hey, let's do this. So Clayton Coleman with Red Hat, he was an architect, and he leaned in right away; he was one of the first committers from outside of Google. But you still had these competing forces in the market, and internally there were debates: do we go with simplicity or do we go with system scale? And Chen Goldberg from Google explains why they focused first on getting simplicity right. >> We had to defend why we were only supporting 100 nodes in the first release of Kubernetes. And they explained that they know how to build for scale, they've done that, they know how to do it, but realistically most users don't need large clusters. So why create this complexity? >> So Goldberg explains that rather than competing right away with, say, Mesos or Docker Swarm, which were far more baked, they made the bet to keep it simple and go for adoption and ubiquity, which obviously turned out to be the right choice. But the last piece of the puzzle was governance. Now, Google promised to open source Kubernetes, but when it started to open up to contributors outside of Google, the code was still controlled by Google, and developers had to sign Google paperwork that said Google could still do whatever it wanted.
It could sublicense, et cetera. So Google had to pass the baton to an independent entity, and that's how the CNCF was started; Kubernetes was its first project. Let's listen to Chris Aniszczyk of the CNCF explain. >> CNCF is all about providing a neutral home for cloud native technology. And, you know, it's been almost two years since our first board meeting. And the idea was, you know, there's a certain set of technologies out there that are essentially microservice-based, that live in containers and are essentially orchestrated by some process, right? That's essentially what we mean when we say cloud native. And CNCF was seeded with Kubernetes as its first project. And, you know, as we've seen over the last couple of years, Kubernetes has grown quite well; it has a large community, a diverse contributor base, and has done, you know, extremely well. It's actually one of the fastest, highest-velocity open source projects out there, maybe. >> Okay. So this is how we got to where we are today. This ETR data shows container orchestration offerings. It's the same XY graph that we showed earlier, and you can see where Kubernetes lands, notwithstanding that Kubernetes is not a company; respondents say they're doing Kubernetes, but they maybe don't know whose platform, and it's hard because the ETR taxonomy is fuzzy in the survey data, because Kubernetes is increasingly becoming embedded into cloud platforms, and IT pros may not even know which one specifically. And the reason we've linked these two platforms, Kubernetes and Red Hat OpenShift, is because OpenShift right now is the dominant revenue player in the space and an increasingly popular PaaS layer. Yeah, you could download Kubernetes and do what you want with it, but if you're really building enterprise apps, you're going to need support, and that's where OpenShift comes in. And there's not much data on this, but we did find this chart from AMDA, which shows the container software market, whatever that really is, and Red Hat has got 50% of it. This is revenue. And, you know, we know the muscle of IBM is behind OpenShift, so that's really not hard to believe. Now, we've got some other data points that show how Kubernetes is becoming less visible and more embedded under the hood, if you will, as this chart shows. This is data from CNCF's annual survey; they had 1800 respondents here, and the data showed that 79% of respondents use certified hosted Kubernetes platforms. Amazon Elastic Container Service for Kubernetes was the most prominent at 39%, followed by Azure Kubernetes Service at 23% and Azure AKS Engine at 17%, with Google's GKE, Google Kubernetes Engine, behind those three. Now, you have to ask: okay, Google. Google's management initially had concerns, you know, why are we open sourcing such a key technology? And the premise was, it would level the playing field, and for sure it has. But you have to ask, has it driven the monetization Google was after? And I would have to say no, it probably didn't. But think about where Google would have been if it hadn't open sourced Kubernetes. How relevant would it be in the cloud discussion? Despite its distant third position behind AWS and Microsoft, or even fourth if you include Alibaba, without Kubernetes Google probably would be much less prominent or possibly even irrelevant in cloud, enterprise cloud. Okay.
Let's wrap up with some comments on the state of Kubernetes and maybe a thought or two about, you know, where we're headed. So look, no shocker: Kubernetes, for all its improbable beginnings, has gone mainstream in the past year or so. We're seeing much more maturity and support for stateful workloads and big ecosystem support with respect to better security and continued simplification. But, you know, it's still pretty complex. It's getting better, but it's not VMware-level maturity, for example, of course. Now, adoption has always been strong for Kubernetes among cloud native companies who start with containers on day one, but we're seeing many more IT organizations adopting Kubernetes as it matures. It's interesting, you know, Docker set out to be the system of the cloud, and Kubernetes has really kind of become that. Docker Desktop is where Docker's action really is; that's where Docker is thriving. It sold off Docker Swarm to Mirantis and has made some tweaks. Docker has made some tweaks to its licensing model to be able to continue to evolve its business. We'll hear more about that at DockerCon. And as we said years ago, we expected Kubernetes to become less visible — Stu Miniman and I talked about this in one of our predictions posts — and really become more embedded into other platforms, and that's exactly what's happening here, but it's still complicated. Remember, go back to the early and mid cycle of VMware: understanding things like application performance, you needed folks in lab coats to really remediate problems and dig in and peel the onion and scale the system. And in some ways you're seeing that dynamic repeated with Kubernetes: security, performance, scale, recovery when something goes wrong — all are made more difficult by the rapid pace at which the ecosystem is evolving Kubernetes. But it's definitely headed in the right direction. So what's next for Kubernetes? We would expect further simplification, and you're going to see more abstractions; we live in this world of almost perpetual abstractions. Now, as Kubernetes improves support for multi-cluster, it will begin to treat those clusters as a unified group, so kind of abstracting multiple clusters and treating them as one to be managed together. And this is going to create a lot of ecosystem focus on scaling globally. Okay, once you do that, you're going to have to worry about latency, and then you're going to have to keep pace with security as you expand the threat area. And then of course recovery: what happens when something goes wrong? The more complexity, the harder it is to recover, and that's going to require new services to share resources across clusters. So look for that. You also should expect more automation, driven by the host cloud providers: as Kubernetes supports more stateful applications and begins to extend its cluster management, cloud providers will inject as much automation as possible into the system. And finally, as these capabilities mature, we would expect to see better support for data-intensive workloads like AI and machine learning and inference. Scheduling these workloads becomes harder because they're so resource intensive, and performance management becomes more complex. So that's going to have to evolve.
I mean, frankly, many of the things that the Kubernetes team back-burnered early on — things you saw, for example, in Docker Swarm or Mesos — are going to start to enter the scene now with Kubernetes, as they start to prioritize some of those more complex functions. Now, the last thing I'll ask you to think about is what's next beyond Kubernetes. You know, this isn't it, right? With serverless and IoT and the edge and new data-heavy workloads, there's something that's going to disrupt Kubernetes. And by the way, in that CNCF survey, nearly 40% of respondents were using serverless, and that's going to keep growing. So how is that going to change the development model? You know, Andy Jassy once famously said that if they had to start over with Amazon retail, they'd start with serverless. So let's keep an eye on the horizon to see what's coming next. All right, that's it for now. I want to thank my colleagues: Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team, who also manages the Breaking Analysis podcast. Kristin Martin and Cheryl Knight help get the word out on socials, so thanks to all of you. Remember, these episodes are all available as podcasts wherever you listen; just search "Breaking Analysis podcast." Don't forget to check out the ETR website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me: email me directly at david.vellante@siliconangle.com, or DM me at @dvellante, and you can comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, everybody. Thanks for watching. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Feb 12 2022


Alex Polvi - Structure 2015 - theCUBE - #structureconf


 

>> Live from the Julia Morgan Ballroom in San Francisco, extracting the signal from the noise, it's theCUBE, covering Structure 2015. Now your host, George Gilbert. >> This is George Gilbert, we're at Structure 2015, reborn and really healthy from the old GigaOM, and we're pleased to welcome Alex Polvi from CoreOS; everyone seems to want to talk to Alex these days, so we've got first dibs. Alex, why don't you tell us a little bit about CoreOS and why it's of such relevance right now. >> Sure, so we started CoreOS a little over two years ago, about two and a half years ago now. And our mission is to fundamentally improve the security of the internet. And our approach in doing that is to help companies run infrastructure in a way that allows it to be much more serviceable and have much better security and so on. This way that we're modeling looks a lot like what we've seen from the hyperscale companies, folks like Google. So we often call it Google's Infrastructure For Everyone Else, GIFFY for short, 'cause that's kind of a mouthful. And that involves distributed systems, containers, and running on standard hardware, which in 2015 can be a bare-metal server or could be an instance in AWS. >> Okay. So help us understand, though: if it's CoreOS, it sounds like there's an operating system at the core. >> Yeah. >> Is this like a cut-down version of Linux that gives it a small attack surface and a sort of easier deployment and patching? >> Exactly. So in our quest to run the world's servers and secure the internet, we start at the lowest-level component possible. There's the OS, then there's the distributed systems side. So CoreOS is our company name, but it's also the name of the first product that we released, CoreOS Linux. CoreOS Linux is a lightweight container-based OS that automatically updates itself, 'cause we think that updates are the key to good security. So it's a combination of the updates, the lightweight container-based application model, as well as just stripping everything else out. I mean, the last 20 years of Linux distributions have created lots of cruft, so it was time to kind of rebirth a new lightweight Linux OS. >> Sticking to CoreOS >> Yeah. >> for a moment, in an earlier era might we have called this like an embedded OS, where you just sort of chopped out everything that was not necessary for the application? >> Yeah, it's very much inspired by embedded OSes. On servers, you know, you really want to get everything out of the way of the resources, like the memory and CPU and so on, so you get as much as you want out of them. So while it's a little bit counterintuitive, even though you have this really monster server, you still want as light and thin an OS on there as you possibly can, like an embedded OS, so you can really maximize the performance. >> So something that abstracts the hardware but gets out of the way. >> Exactly. Just focus on the things that matter, which is running your applications and managing the actual hardware, and really nothing else. >> Okay, so, presumably to provide Google's infrastructure for everyone else, and I don't remember the acronym, >> GIFFY. >> Okay. What other products did you have to fill out to make that possible? >> Sure, great question. So the next major piece that we released is a tool called ETCD. It's meant for doing shared configuration amongst servers. Whenever you have a group of servers, the first thing you need to do is they all need to know about each other and tell each other about the configuration.
This is load balancers knowing where the app servers are, the app servers knowing where the databases are, and so on. And to do this in the most robust distributed-systems way, you have to do this thing in computer science that's very difficult, called "consensus". Consensus algorithms are an area of computing I'm actually speaking about here in a little bit with Eric Brewer, who is a huge academic, a very well respected engineer in the area of consensus and distributed systems. And so we built ETCD, which solves this really hard distributed systems problem in a way that's usable by many operations teams. >> So let me just interrupt you for a second, >> Yeah. >> I mean, I've got this sound going off in my head that says "Zookeeper, Zookeeper". >> Exactly. It's Zookeeper for everyone else. >> It's simplified. >> It's a simplified Zookeeper, made accessible. There are a lot of people who wanted to use distributed systems, but Zookeeper is a little bit too difficult to use, as well as really oriented toward the Java and Hadoop community, and there's a whole wide array of other folks out there. >> So it couldn't make as many constraining assumptions as yours, which would simplify it. >> It just couldn't be as widely adopted. And so we released ETCD around the same time we released CoreOS Linux, and at this point there have been over a thousand projects, if you go on GitHub, that have built things around ETCD, so our bet was right. Even Kubernetes itself has a hard dependency on ETCD; without ETCD, Kubernetes will not run. So our hypothesis there was, let's make the hardest part of distributed systems easier, and then we will see distributed systems overall accelerate. And that is definitely what's happened with ETCD.
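The shared-configuration use case Polvi describes — app servers registering themselves so load balancers can find them — looks roughly like this against etcd. The sketch assumes the community python-etcd3 client and an etcd member on localhost; the key names and addresses are made up.

```python
import etcd3

# Assumes an etcd member reachable on the default client port.
etcd = etcd3.client(host="127.0.0.1", port=2379)

# An app server announces itself under a well-known prefix.
etcd.put("/services/web/10.0.0.5:8080", "healthy")

# A load balancer reads the current membership...
backends = [meta.key.decode() for value, meta in etcd.get_prefix("/services/web/")]
print(backends)

# ...and can watch the prefix to react when servers come and go.
events_iterator, cancel = etcd.watch_prefix("/services/web/")
for event in events_iterator:
    print("configuration changed:", event.key)
    cancel()  # stop after the first change in this sketch
    break
```

Consensus is what makes the simple put/get picture trustworthy: every member agrees on the same key values even when machines fail, which is the hard part etcd hides.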
>> Okay, so help us understand how you've built up the rest of the infrastructure, and then where you'd like to see it go. >> Sure, so the thing that we're targeting is this distributed systems approach. And again, we care about this a lot because we think that the ability to manage and service your applications is what is key to the security. Keeping things up to date, and when we say up to date, we don't just mean, like, patch a vulnerability, of which we've fixed many. It's also about a company's comfort in rolling out a new version of their application without breaking something. If you run your infrastructure as a distributed system, you can roll out a version, and if it breaks a little bit of the application, that's okay, but you didn't take the whole thing down. And that's kind of the safety net that distributed systems give you. >> Does this require the sort of microservice approach, where there's a clean separation between this new set of bits and the rest of the app? >> It really does. And that's why we've invested so heavily in containers. It requires a container; it also requires the distributed systems components of it. So we first built CoreOS Linux, then we built ETCD, then we started building some distributed systems work very early in the market. And then things like Kubernetes came along, and we were like, hey, instead of us reinventing all of this stuff, let's partner up with the guys from Google: if we're modeling Google's infrastructure for everyone else, let's partner up with the team at Google that built that and get their solution more widely adopted out in the world as well. So the whole platform comes together as this combination of Kubernetes, ETCD, CoreOS Linux, and our own container runtime called Rocket, which we built primarily to address some security issues in Docker. And so all of these pieces come together, and what we call that piece when they're all together is Tectonic. Tectonic is our product that is that Google-style infrastructure in a box. >> Okay, let me just drop down into the weeds for a sec. Derek Collison calls, I'm sorry, I'm having a senior moment, and I hope it's not early-onset Alzheimer's — Docker, he calls it sort of this generation's tarball, you know, a way to distribute, I guess, the equivalent of an executable. Are you providing something that's compatible, or does what's inside the container have to change to take advantage of the additional services that are sort of Google-centric? >> Sure. So the packaging, that tarball piece, we're compatible with, and we will always remain compatible with. To further the compatibility even more, we've put together standards around what that container should be, so many vendors can interoperate more widely. We've done that first through the App Container project and then more recently through the Open Container Initiative, which is a joint effort between Docker and us and the rest of the ecosystem. And so we always want the user to be able to package their application once and then choose whatever system they want to run it in, and the container is what really unlocks that portability. >> Okay. So then let me ask you, do the Google Compute Engine folks, or the PaaS group, view you as a way of priming the pump outside the Google ecosystem, to get others using their sort of application model or their infrastructure model? Because I'm trying to understand, you know, Azure sort of has its own way of looking at the world, Amazon has its own way of looking at the world; are they looking at you as a way of sort of disseminating an approach to building applications, or managing applications? >> Sure. So the Google team and their motivations behind Kubernetes, you'd have to talk to them about it. My understanding is that they see it as a way to have a very consistent environment between different cloud providers and so on. It is a next-generation way of running infrastructure as well, and it's just better than the previous way of running infrastructure. >> That's sort of the answer I was looking for, which is, they don't have to either give away their stuff or manage their infrastructure elsewhere, but you're sort of the channel to deliver Google-style infrastructure in other environments. >> Sure, I mean, Google Cloud's motivation at the end of the day is selling cores and memory. They put all these other services on top of it to make it more attractive to use, but at the end of the day, anything that drives more usage of these products is how they run their business. At least that's my perception of it; I'm obviously not speaking on behalf of Google. >> So where are you in attracting showcase customers? Guys who've sort of said, "okay, we'll bet," if not the entire business, "we'll bet the success of this application or this set of applications on this." >> Right, so first, the technology's been very, very exciting.
I mean, the past two years we've seen this whole space explode in interest, but the discussion around "how does this solve business problems, how does this actually get adopted by these companies, and what motivates them to actually do this" outside of the tech being very cool, that's a discussion that is just getting started. And in fact, in about two weeks, in early December in New York, we're hosting that discussion at an event called the Tectonic Summit. The Tectonic Summit is where we're bringing together all the enterprise early adopters that are using containers, using distributed systems, and talking about why their management and their leadership decided to make investments in these technologies. And what we're seeing are use cases about multi-data-center, between your physical data center and your cloud environments. We're seeing folks build their next-generation web services. Many businesses that weren't traditionally in the web services business need to be now because of mobile, just modern product offerings. And so we're hearing from these large guys how they're using our technologies and other companies' technologies today to do this, and it's in just two weeks at our event. >> Would it be fair to say, I'm listening to this and what seems to be coming across is that your technology makes it easier to abstract not just the machine, which would be CoreOS, but hybrid infrastructure. And it doesn't even have to be hybrid, it could be this data center and that data center. >> Right. >> Or your own data center and a public cloud. >> Exactly. One of the biggest value props of all this is the consistency between environments. We just give this compute, CPUs, memory, storage, and we don't care if it's on cloud or in a physical data center, we can allow you to manage that in an extremely consistent way. Not just between your data centers, but also between development and production, and that's a really important part of all of this. >> Do you need a point of view built into the infrastructure to make it palatable to developers who want a platform? As opposed to just infrastructure. >> Sure. So one of the things that's most exciting about this space is that we're splitting the difference between platform and infrastructure. Platform is typically platform as a service, this very prescriptive way of running your server infrastructure. And there's raw infrastructure, which is like, "here is a canvas, go to town, but you need to bring all your own tools". What's happening right now in this distributed systems container space is a middle category. It's still infrastructure, but it's application-focused. And at the end of the day, that's what a developer is trying to do: deploy their application out onto the server infrastructure. >> So it doesn't have an opinion that tells the developer "we think you should build it this way", but it does hide all the sort of, the different types of hardware and their location, pretty much. >> Right, it gives you a prescriptive way to package and deploy that, but doesn't put any constraints on what you can package or deploy. >> Okay. Very interesting. It's sort of like, if platform as a service was constraining because developers didn't want a straitjacket for how they should build the app, and infrastructure as a service was too raw, you're giving them a middle ground. >> Exactly. It's still infrastructure, but it's a consistent way of running that infrastructure.
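One way to picture that "application-focused infrastructure" middle ground is the declarative pattern Kubernetes popularized: you state what you want running, and a control loop keeps reality matching that statement, without caring which machine, data center, or cloud anything lands on. The sketch below is a toy illustration of that idea in Go, not Kubernetes' actual API; the type names and the in-memory "cluster" are invented for the example.

```go
package main

import (
	"fmt"
	"time"
)

// DesiredState is what the developer declares: an image and a replica
// count. Nothing here mentions machines, data centers, or clouds.
type DesiredState struct {
	Image    string
	Replicas int
}

// Cluster is a stand-in for whatever pool of machines happens to exist.
type Cluster struct {
	running []string // identifiers of running instances
}

// Reconcile compares desired vs. actual and converges toward desired,
// starting or stopping instances as needed.
func (c *Cluster) Reconcile(d DesiredState) {
	for len(c.running) < d.Replicas {
		id := fmt.Sprintf("%s-%d", d.Image, len(c.running))
		c.running = append(c.running, id)
		fmt.Println("started", id)
	}
	for len(c.running) > d.Replicas {
		id := c.running[len(c.running)-1]
		c.running = c.running[:len(c.running)-1]
		fmt.Println("stopped", id)
	}
}

func main() {
	c := &Cluster{}
	desired := DesiredState{Image: "shop/web:v2", Replicas: 3}

	// The control loop: the same loop runs whether the machines
	// underneath are on-prem, in a public cloud, or both.
	for i := 0; i < 3; i++ {
		c.Reconcile(desired)
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("running:", c.running)
}
```

The prescription is only in how you package and declare the workload; what runs inside it, and where it physically runs, is left open.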
And that's why companies like Google and Facebook and Twitter do this; they have millions of servers and data centers all over the world. >> And they can't prescribe. >> Well, they need to be able to have a consistent way of doing it so that they don't have to have an infinitely growing operations team as they scale their infrastructure. You need to have consistency, but at the same time you need to be able to have a wide array of tools and things to deploy and interact with that infrastructure. So it's that middle ground, and that's why the hyperscale guys have adopted it: because they're forced to, because they have to have that consistency to have that scale. >> Okay, let me ask you then, separate from the hyperscale guys, the sort of newest distributed system that mainstream enterprises are struggling with, and sort of off the record maybe choking on, you know, is Hadoop. Because they haven't had to do elastic infrastructure before, and like you said, Zookeeper is not that easy, and there are 22 other projects, by the way, that also have to get stood up. Can you help someone who is perhaps flailing in that, or if not flailing, finding the skills overhead really, really tough? >> So, Hadoop. Let's remember Hadoop's roots. Where did that come from? >> Well, Yahoo!. >> Well, but where did Yahoo! get the idea? >> Oh yeah, Google, sorry. >> Exactly. Yahoo! gets all the credit for it, even though it was a Google paper that it was modeled after. And so again, if Kubernetes and containers and everything is the equivalent of Google's Borg, which is that raw application infrastructure, Hadoop is a certain application that consumes the spare resources on that cluster in order to do these map-reduce computational jobs. >> So the next question is, how much can you simplify what mainstream enterprises do that don't have the Google infrastructure yet? >> Right, so today they have to manage that as its own whole separate thing. It's its own set of infrastructure, its own set of servers to manage their Hadoop cluster. If you combine it with this application infrastructure, we just treat Hadoop as another application that runs on the platform. It's not its own distinct, special thing. It's just another application running out there along with your web servers and your databases and everything else; you have your Hadoop workload in the mix. So you have this consistent pool of infrastructure, and Hadoop is just another application that's monitored and managed the exact same way as everything else. >> So, for folks who are a little more familiar with Mesos, which is the opposite of a virtual machine, it makes many machines look like a single one, I assume. >> Well, this is a very similar message to Mesos. Mesos is also building Google-like infrastructure for everyone else. The difference with what we're doing is really that we just partnered up with the team that built that at Google, and we're focusing our solution around Kubernetes, which is what the Google efforts are behind. So we're all modeling Google's infrastructure. >> Okay. >> Mesos took their own spin on it; with Kubernetes, and CoreOS and etcd, we're taking a different spin on it.
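To show what "Hadoop is just another application in the mix" means mechanically, here is a toy Go sketch of a single shared worker pool running long-lived web serving tasks and a batch, map-reduce-style job side by side, instead of each workload getting its own dedicated cluster. The workload names and the scheduling policy are invented for the illustration; a real system such as Borg, Kubernetes, or Mesos does far more (bin-packing, priorities, preemption).

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Task is any unit of work the shared cluster can run: a web server
// replica and a batch map task are scheduled through the same queue.
type Task struct {
	Name string
	Kind string // "web" or "batch"
	Run  func()
}

func main() {
	tasks := []Task{
		{"web-1", "web", func() { time.Sleep(50 * time.Millisecond) }},
		{"web-2", "web", func() { time.Sleep(50 * time.Millisecond) }},
		{"hadoop-map-0", "batch", func() { time.Sleep(80 * time.Millisecond) }},
		{"hadoop-map-1", "batch", func() { time.Sleep(80 * time.Millisecond) }},
		{"hadoop-reduce-0", "batch", func() { time.Sleep(30 * time.Millisecond) }},
	}

	queue := make(chan Task)
	var wg sync.WaitGroup

	// Three "nodes" drain the same queue; the batch job simply soaks up
	// whatever capacity the web tier isn't using, so it is monitored and
	// managed the same way as everything else.
	for node := 1; node <= 3; node++ {
		wg.Add(1)
		go func(node int) {
			defer wg.Done()
			for t := range queue {
				fmt.Printf("node %d running %s (%s)\n", node, t.Name, t.Kind)
				t.Run()
			}
		}(node)
	}

	for _, t := range tasks {
		queue <- t
	}
	close(queue)
	wg.Wait()
}
```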
Tectonic is our platform for companies that want this style of infrastructure but don't want to have to figure out all the different pieces themselves. And we think that once companies adopt Tectonic, just this general style of infrastructure, we can give them all the benefits of it: better utilization, that consistency, easier management of lots and lots of servers, and so on. But we also think we can dramatically improve the security of their infrastructure as well. And what we're investing in on our roadmap is to leverage this kind of change, because with that change we can do some things to the infrastructure that were never possible before. >> Okay. >> And those are the things that we're investing in as a company. >> Okay, great. We're going to break at that. This is George Gilbert, at Structure '15, with Alex Polvi of CoreOS, and we'll be back in just a few minutes. (light music)

Published Date: Nov 18, 2015

