Alex Ellis, OpenFaaS | Kubecon + Cloudnativecon Europe 2022


 

(upbeat music) >> Announcer: TheCUBE presents KubeCon and CloudNativeCon Europe 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm your host, Keith Townsend, alongside Paul Gillon, Senior Editor, Enterprise Architecture for SiliconANGLE. We are, I think, at the halfway point. To be fair, we've talked to a lot of folks in open source in general. What's the difference between open source communities and these closed source communities that we attend so much? >> Well, open source is just that: it's open, anybody can contribute. There are a set of rules that manage how your contributions are reflected in the code base, what has to be shared, what you can keep to yourself. But it's an entirely different vibe. You know, you go to a conventional conference where there's a lot of proprietary software being sold and it's all about cash. It's all about money changing hands. It's all about doing the deal. And open source conferences, I think, are more transparent, and yeah, money changes hands, but it seems like the objective of the interaction is not to consummate a deal to the degree that it is at a more conventional computer conference. >> And I think that can create an uneven side effect. And we're going to talk about that a little bit with, honestly, a friend of mine, Alex Ellis, founder of OpenFaaS. Alex, welcome back to the program. >> Thank you, good to see you, Keith. >> So how long have you been doing OpenFaaS? >> Well, I first had this idea that serverless and functions should be run on your own hardware back in 2016. >> Wow, and I remember seeing you at DockerCon EU, was that in 2017? >> Yeah, I think that's when we first met, and Simon Foskett took us out to dinner and we got chatting. And I just remember you went back to your hotel room after the presentation. 
You just had your iPhone out and your headphones, and you were talking about how you had tried OpenWhisk and really struggled with it, and OpenFaaS sort of got you where you needed to be, to sort of get some value out of the solution. >> And I think that's the magic of these open source communities and open source conferences: that you can try stuff, you can struggle with it, come to a conference, either get some advice or go in another direction and try something like OpenFaaS. But we're going to talk about the business perspective. >> Yeah. >> Give us some hero numbers from the project. What types of organizations are using OpenFaaS, and what are the downloads and stars, all the ways you guys measure project success? >> So there's a few ways that you hear this talked about at KubeCon specifically. And one of the metrics that you hear the most often is GitHub stars. Now, a GitHub star means that somebody with their laptop, like yourself, has heard of a project or seen it on their phone and clicked a button. That's it. It's not really an indication of adoption, but of interest. And that might be fleeting, and with a blog post you publish you might bump that up by 2,000. And so OpenFaaS quite quickly got a lot of stars, which encouraged me to go on and do more with it. And it's now just crossed 30,000 across the whole organization of about 40 different open source repositories. >> Wow, that is a number. >> Now you are in an ecosystem where Knative has also taken off. Can you distinguish your approach to serverless, or FaaS, from Knative's? >> Yes, so, simply put, Knative isn't an approach to FaaS. And if you listen to Ville Aikas from the Knative project, he was working inside Google and wished that Kubernetes would do a little bit more than what it did. And so he started an initiative with some others to start bringing more abstractions, like auto scaling and revision management, so you can have two versions of code and shift traffic around. 
And that's really what they're trying to do: add onto Kubernetes and make it do some of the things that a platform might do. Now, OpenFaaS started from a different angle and, frankly, two years earlier. >> There was no Kubernetes when you started it. >> It kind of led in the space and built out that ecosystem. So the idea was, I was working with Lambda and AWS Alexa skills. I wanted to run them on my own hardware and I couldn't. And so OpenFaaS from the beginning started from that developer experience of: here's my code, run it for me. Knative is a set of extensions that may be a building block, but you're still pretty much working with Kubernetes. We get calls coming through. And actually recently, I can't tell you who they are, but there's a very large telecommunications provider in the US that was using OpenFaaS, like yourself heard of Knative, and in the hype they switched. And then they switched back again recently to OpenFaaS, and they've come to us for quite a large commercial deal. >> So did they find Knative to be more restrictive? >> No, it's the opposite. It's a lot less opinionated. It's more like building blocks, and you are dealing with a lot more detail. It's a much bigger system to manage, but don't get me wrong. I mean, the guys are very friendly. They have their sort of use cases that they pursue. Google's now donated the project to the CNCF, and so they're running it that way. Now, it doesn't mean that there aren't FaaS offerings on top of it. Red Hat has a serverless product; VMware has one. But OpenFaaS, because it owns the whole stack, can get you something that's always been very lean, simple to use, to the point that Keith in his hotel room installed it and was productive with it in an evening without having to be a Kubernetes expert. >> And if you remember back, I was very anti-Kubernetes. >> Yes. >> It was not a platform I thought much of. And for some of the very same reasons, I didn't think it was very user friendly. 
You know, I tried OpenWhisk thinking, what enterprise is going to try this thing, especially without the handholding and the support needed to do that. And you know, something pretty interesting happened, as I shared with you on Twitter. I was having a briefing by a big microprocessor company, one of the big two. And they were showing me some of the work they were doing in cloud native, and the way that they stress tested the system to show me auto scaling was that they brought up an OpenFaaS, what is it, the one that just does a bunch of, >> The cows, maybe. >> Yeah, the cows. The one that just does a bunch of text. And I'm like, one, I was amazed it's a super simple app. And the second one was, the reason why they discovered it was because of that simplicity; it's just a thing that's in your store that you can just download and test. And it was OpenFaaS. And it was this big company that you had no idea was using >> No. >> OpenFaaS. >> No. >> How prevalent is that? That you're always running into these surprises of who's using the solution? >> There are a lot of top tier companies, billion dollar companies, that use software that I've worked on. And it's quite common. The main issue you have with open source is you don't have, like the commercial software you talked about, the relationships. They don't tell you they're using it until it breaks. And then they may come in incognito with a personal email address asking for things. What they don't want to do often is lend their brands or support you. And so it is a big challenge. However, early on, when I met you, BT, LivePerson, the University of Washington, and a bunch of other companies had told us they were using it. We were having discussions with them, took them to KubeCon and did talks with them. You can go and look at them in the video player. However, when I left my job in 2019 to work on this full time, I went to them and I said, you know, you use it in production, it's useful for you. 
We've done a talk, we really understand the business value of how it saves you time. I haven't got a way to fund it, and it won't exist unless you help. And they were like, sucks to be you. >> Wow, that's brutal. So, okay, let me get this right. I remember the story: 2019, you leave your job. You say, I'm going to do OpenFaaS and support this project 100% of my time. If there's no one contributing to the project from a financial perspective, how do you make money? You're the first person that I've met that ran an open source project this way, and I've always pitched open source as people like you who work on it in their side time. But then there are the Knatives of the world, the Istios; they have full time developers, sponsored by Google and Microsoft, etc. If you're not sponsored, how do you make money off of open source? >> Well, this is the million dollar question, really. How do you make money from something that is completely free? Where all of the value has already been captured by a company, and they have no incentive to support you, build a relationship, or send you money in any way. >> And no one has really figured it out. Arguably Red Hat is the only one that's pulled it off. >> Well, people do refer to Red Hat and they say the Red Hat model, but I think that was a one-off. And we can kind of agree about that in the business. However, I eventually accepted the fact that companies don't pay for something they can get for free. It took me a very long time to get around that because, you know, as an open source enthusiast I'd built a huge community around this project; almost 400 people have contributed code to it over the years. And we have had full-time people working on it on and off. And there's some people who really support it in their working hours or at home on the weekends. But no, I had to really think, right, what am I going to offer? And to begin with it was support, but existing users weren't interested. 
They're not really customers, because they're consuming it as a project. So I needed to create a product, because we understand we buy products. Initially I just couldn't find the right customers, and so many times I thought about giving up, leaving it behind; my family would've supported me with that as well, and they would've known exactly why. Even you would've. And so what I started to do was offer my insights as a community leader, as a maintainer, to companies like we've got here. So Kasten, one of my customers, CSIG, one of my customers, Rancher, DigitalOcean, a lot of the vendors you see here. And I was able to get a significant amount of money by lending my expertise and writing content. That gave me enough buffer to give the adopters time to realize that maybe they do need support and go a bit further into production. And over the last 12 months, we've been signing six figure deals with existing users and new users alike in the enterprise. >> For support. >> For support, for licensing of new features that are closed source, and for consulting. >> So you have proprietary extensions, also, that are sort of enterprise class. Right, and then also the consulting business, the support business, which is a proven business model that has worked. >> It is a proven business model. What's not a proven business model is: if you work hard enough, you deserve to be rewarded. >> Mmh. >> You have to go with the system. Winter comes after autumn. Summer comes after spring. There's no point saying, why is it like that? That's the way it is. And if you go with it, you can benefit from it. And that's the realization I had, as much as I didn't want to do it. >> So you know this community well. You know there's other project founders out here thinking about making the leap. If you're giving advice to a project founder and they're thinking about making this leap, you know, quitting their job and becoming the next Alex. 
And I think this is the misperception out there. >> Yes. >> You're well known. There's a difference between being well known and well compensated. >> Yeah. >> What advice would you give those founders >> To be. >> Before they make the leap to say, you know what, I'm going to do my project full time. I'm going to lean on the generosity of the community. There are some generous people in the community; you've done some really interesting things for individual contributions, etc., but that's not enough. >> So look, I mean, really you have to go back to the MBA mindset. What problem are you trying to solve? Who is your target customer? What do they care about? What do they eat and drink? When do they go to sleep? You really need to know who this is for, and then customize a journey for them so that they can come to you. And you need some way initially of funneling those people in and qualifying them, because not everybody that comes to you is a customer; a student or somebody doing a PhD is not your customer. >> Right, right. >> You need to understand sales. You need to understand a lot about business, but you can work it out on your way. You know, I'm testament to that. And once you have people, you then need something to sell them that might meet their needs, and be prepared to tell them that what you've got isn't right for them, 'cause sometimes that's the one thing that will build integrity. >> That's very hard for community leaders. It's very hard for community leaders to say no. >> Absolutely, so how do you help them over that hump? I think of what you've done. >> So you have to set some boundaries, because as an open source developer and maintainer you want to help everybody that's there, regardless. 
And I think for me it was taking some of the open source features that companies used, not releasing them anymore in the open source edition, putting them into the paid edition, developing new features based on what feedback we'd had, and offering support as well, but also understanding what support is. What do you need to offer? You may think you need a one hour SLA for a fix; it probably turns out that you could sell a three day response time or a one day response time, and some people would want that and see value in it. But you're not going to know until you talk to your customers. >> I want to ask you, because this has been a particular interest of mine. It seems like managed services have been kind of the lifeline for pure open source companies, enabling these companies to maintain their open source roots but still have a revenue stream of delivering as a service. Is that a business model option you've looked at? >> There's three business models, perhaps, that are prevalent. One is open core, which is roughly what I'm following. >> Right. >> Then there is SaaS, which is what you understand, and then there's support on pure open source. So that's more like what Rancher does. Now, if you think of a company like Buoyant, that produces Linkerd, they do a bit of both. So they don't have any closed source pieces yet, but they can host it for you, or you can host it and they'll support you. And so I think if there's a way that you can put your product into a SaaS that makes it easier for them to run, then, you know, go for it. However, with OpenFaaS, remember what the core problem is that we are solving: portability. So why lock into my cloud? >> Take that option off the table, go ahead. >> It's been a long journey and I've been a fan since your start. I've seen the bumps and bruises and the scars get made. 
If you're an open source leader and you're thinking about becoming as famous as Alex, hey, you can do that, you can put in all the work and become famous. But if you want to make a living: solve a problem, understand what people are willing to pay for that problem, and go out and sell it. Valuable lessons here on theCUBE. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillon, and you're watching theCUBE, the leader in high-tech coverage. (Upbeat music)

Published Date : May 19 2022





 

(upbeat music) >> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I'd like to point you to my GitHub repo: you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation. I've got the Keynote file there, YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great; if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was: where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. 
I ask everyone in attendance: when was the last time you pulled an image and had 100% confidence you knew what was inside it, where it was built, how it was built, when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can prevent that is through the use of labels. We can use labels to address security, and to address some of the simplicity of how to run these images. So think of it kind of like self-documenting. Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value, all right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files, where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a commit, and hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Dockerfile on your own? Having a label specifically dedicated to how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? 
These kind of all not only talk about continuous integration, CI, but also start to talk about security. Specifically, what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example in my demo of policy enforcement. So let's look at some sample labels. Now, originally this idea came out of label-schema.org, and then it was modified to opencontainers: org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of a standardization. So obviously, authors, an email address, so now the image is attributable to a person; that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile and all the assets? How it was built, build number, build server, the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's simply declarative. And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to kind of take advantage of that. So how do we create labels? And really, creating labels is a function of build time, okay? 
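As a sketch of what such a baseline might look like, here is a hypothetical Dockerfile fragment using the org.opencontainers.image keys described above. The image name, values, and the custom healthz key are illustrative assumptions, not the speaker's exact file:

```dockerfile
FROM alpine:3.12

# Build-time arguments, typically supplied by the CI system.
ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT

# Standard OCI image labels (see the opencontainers image-spec annotations).
LABEL org.opencontainers.image.authors="andy@example.com" \
      org.opencontainers.image.source="https://github.com/example/flask-demo" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.opencontainers.image.version="1.0.0" \
      org.opencontainers.image.title="flask-demo" \
      org.opencontainers.image.description="Demo Flask app with a baseline label set"

# Custom, non-standard keys are fine too -- it's just key-value.
LABEL example.build-number="${BUILD_NUMBER}" \
      example.healthz="/healthz"
```

A CI job would then pass the dynamic values in along the lines of `docker build --build-arg BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) --build-arg GIT_COMMIT=$(git rev-parse --short HEAD) .`, so each image self-documents at build time.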
You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative, it's in version control, and it's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static declaration to something more dynamic with build arguments, and I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control, or I should say, some of the variables coming out of our CI system. And that way it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, and obviously it's going to output JSON, so you're going to use something like jq to crack it open and look at the individual labels. Another one, which I found recently, was Skopeo from Red Hat. This allows you to actually query the registry server, so you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation and you're trying to talk to a Kubernetes cluster and wanting to deploy apps in a very simple manner, okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label, and then base64 decode it, and then use it. 
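The two viewing paths described here can be sketched roughly as follows. The image names are placeholders, and since a live `docker inspect` needs a daemon, a canned sample of its output stands in below so the extraction step is demonstrable (python3 standing in for jq):

```shell
# Pull-and-inspect path: docker inspect emits a JSON array; labels live
# under .[0].Config.Labels. With jq that would be:
#   docker inspect myimage:prod | jq '.[0].Config.Labels'
# Registry-side path, no pull needed (labels appear under .Labels):
#   skopeo inspect docker://registry.example.com/demo/flask:prod | jq '.Labels'

# Same extraction against a canned sample of docker inspect output:
sample='[{"Config":{"Labels":{"org.opencontainers.image.authors":"andy@example.com"}}}]'
labels_out=$(printf '%s' "$sample" | python3 -c 'import json, sys
labels = json.load(sys.stdin)[0]["Config"]["Labels"]
for k, v in labels.items():
    print(f"{k}={v}")')
printf '%s\n' "$labels_out"
```

The shape of the JSON differs between the two tools, which is why the jq paths differ: docker wraps everything in an array per image, while skopeo returns a single object.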
So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode of the label itself, from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction; inherently, if you use it as a label with a kubectl apply, it's just built in. It's kind of like the KISS approach, to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started: here's my repo. Here's, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline, 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Dockerfile, I've got my Compose and my Kubernetes YAML. So let's take a look at the Dockerfile, right? So it's a simple Alpine image. The ARG statements are the build time arguments that are passed in. Label: so again, I'm using org.opencontainers.image.blank for most of them. There's a typo there; let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and Git commit are derived from Jenkins itself, which is nice; I can just take advantage of existing URLs, I don't have to create anything crazy. And again, I've got my actual Docker build command. Now, this is just a label on how to build it. And then here's my simple Python: APK upgrade, remove the package manager, kind of some security stuff, health check hitting Python, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline, and I have four major stages. I have build, and here in build, what I do is I actually do the git clone, and then I do my docker build. 
From there, I actually tell the Jenkins StackRox plugin, so that's what I'm using for my security scanning, to go ahead and scan; basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Basically, I'm pushing the image up to Hub such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself, and then if everything's successful, I'm pushing it to prod. Now, what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal; in your environment you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes I think this is okay. So let's go over to my Jenkins, and I've got a deliberate failure, and I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. And it says required image label alert, right? Requesting that the maintainer add the required label to the image; so we're missing a label, okay? One of the things we can do is let's flip over and look at Skopeo, right? I'm going to do this just the easy way. So instead of the whole org.zdocker set, let's look at opencontainers.image.authors. Okay, see here it says build signature? That was the typo: we didn't actually pass in the build time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the new fingers, you're going to love it. And let's go ahead and commit that, okay? So now that that's committed, we can go back to Jenkins and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the console output. Okay, so there's our image. 
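The four-stage flow just described might look roughly like this as a declarative Jenkinsfile. The stage names, the registry path, and the scan step are assumptions reconstructed from the narration, not the speaker's actual pipeline:

```groovy
pipeline {
  agent any
  environment {
    IMAGE = 'registry.example.com/demo/flask'  // hypothetical registry path
  }
  stages {
    stage('Build') {
      steps {
        // git clone happens via the job's SCM checkout; then build with
        // dynamic label values passed in as build arguments.
        sh '''
          docker build \
            --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
            --build-arg BUILD_NUMBER="${BUILD_NUMBER}" \
            --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
            -t "${IMAGE}:pre-prod" .
        '''
      }
    }
    stage('Push pre-prod') {
      steps { sh 'docker push "${IMAGE}:pre-prod"' }
    }
    stage('Scan') {
      // Stand-in for the StackRox plugin step that scans the pushed image
      // and fails the build on policy violations (e.g. a missing label).
      steps { echo 'StackRox image scan + policy check goes here' }
    }
    stage('Promote') {
      steps {
        sh 'docker tag "${IMAGE}:pre-prod" "${IMAGE}:prod" && docker push "${IMAGE}:prod"'
      }
    }
  }
}
```

As the talk notes, promoting by re-tagging the same image is a demo convenience; separate non-prod and production registries would be the more realistic setup.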
And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file, and here's the base64 encoding of the Kubernetes YAML. We do the build, and then let's go down to the bottom: layer exists, and successful. So here's where we can see: no system policy violations were found, marking the StackRox security plugin build step as successful, okay? So we're actually able to do policy enforcement, checking that that label, sorry, exists in the image. And again, we can look at the security report, and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo and go ahead and look at it. So we're looking at the prod version again, and there it is, my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. Let's go ahead and take a look at all of the image labels for a second; let me remove the dash org, make it pretty, okay? So we have all of our image labels. Again, authors, build, commit number; look at the commit number. It was built today, build number 12. We saw that, right? There, build 12. So that's kind of cool: dynamic labels. Name, healthz, right? But what we're looking for is the org.zdocker.kubernetes label. So let's look at the label real quick. Okay, well, that doesn't really help us because it's encoded, but let's base64 dash d, let's decode it. And I need to put the dash r in there 'cause it doesn't like it otherwise, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard in. So now we've actually used that label. 
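The encode/decode trick at the heart of this demo can be sketched in plain shell. The manifest and label name are illustrative, and GNU coreutils `base64` is assumed:

```shell
# A small manifest standing in for the real Kubernetes YAML.
yaml='apiVersion: v1
kind: Pod
metadata:
  name: flask-demo'

# Encode it to a single line -- the form you would pass in as a build
# argument (e.g. --build-arg K8S_YAML="$encoded") and store in a label.
encoded=$(printf '%s' "$yaml" | base64 | tr -d '\n')

# Later, decode the label value back into the original manifest. In the
# demo, this is what gets piped straight into `kubectl apply -f -`.
decoded=$(printf '%s' "$encoded" | base64 -d)

[ "$decoded" = "$yaml" ] && echo "round-trip ok"
```

End to end, the deploy step then becomes something like `skopeo inspect docker://image:tag | jq -r '.Labels["org.zdocker.kubernetes"]' | base64 -d | kubectl apply -f -`, with no Helm chart or extra templating layer in between.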
From the image that we've queried with Skopeo, from a remote registry, to deploy locally to our Kubernetes cluster. So let's go ahead and look, everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text. We can go ahead and refresh it a couple times; we've got Redis storing a hit counter. We can see that our server name is round-robining. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, either full Rancher, full OpenShift or Docker Enterprise. I think K3s has some really interesting advantages on the development side and it's kind of intended for IoT, but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels, is none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm, if you wanted to, or Docker Compose. Any Ingress, any CI system, Jenkins, CircleCI, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparable product in that category.
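The traefik Ingress object mentioned above might look roughly like this. The service name and port are guesses, and the `networking.k8s.io/v1beta1` apiVersion matches Kubernetes of that era; treat this as a sketch, not the demo's actual manifest.

```shell
# Hypothetical sketch of the traefik Ingress object for the demo app.
cat > flask-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: flask
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: flask.docker.life
    http:
      paths:
      - backend:
          serviceName: flask   # assumed Service name
          servicePort: 5000    # assumed container port
EOF

# Applying it would be (needs a cluster):
#   kubectl apply -f flask-ingress.yaml
grep 'host:' flask-ingress.yaml
```

With traefik watching the cluster, any request with the Host header flask.docker.life gets routed to the backing Service, which is why the browser demo works with a plain DNS entry.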
So I'd like to, again, point you guys to andyc.info/dc20, that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music) >> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with its ecosystem partners. >> Okay, welcome back everyone, theCUBE's live coverage of AWS re:Invent 2019. This is theCUBE's 7th year covering Amazon re:Invent. It's their 8th year of the conference. I want to just shout out to Intel for their sponsorship for these two amazing sets. Without their support we wouldn't be able to bring our mission of great content to you. I'm John Furrier. Stu Miniman. We're here with the chief of AWS, the chief executive officer Andy Jassy. Tech athlete in and of himself, three-hour Keynotes. Welcome to theCUBE again, great to see you. >> Great to be here, thanks for having me guys. >> Congratulations on a great show, a lot of great buzz. >> Andy: Thank you. >> A lot of good stuff. Your Keynote was phenomenal. You get right into it, you giddy up right into it as you say, three hours, thirty announcements. You guys do a lot, but what I liked, the new addition, the last year and this year, is the band; house band. They're pretty good. >> Andy: They're good right? >> They hit the Queen notes, so that keeps it balanced. So we're going to work on getting a band for theCUBE. >> Awesome. >> So if I have to ask you, what's your walk up song, what would it be? >> There's so many choices, it depends on what kind of mood I'm in. But, uh, maybe Times Like These by the Foo Fighters. >> John: Alright. >> These are unusual times right now. >> Foo Fighters playing at the Amazon Intersect Show.
>> Yes they are. >> Good plug Andy. >> Headlining. >> Very clever >> Always getting a good plug in there. >> My very favorite band. Well congratulations on Intersect, you got a lot going on. Intersect is a music festival, I'll get to that in a second. But, I think the big news for me is two things, obviously we had a one-on-one exclusive interview and you laid out essentially what looked like it was going to be your Keynote, and it was. Transformation- >> Andy: Thank you for the practice. (Laughter) >> John: I'm glad to practice, use me anytime. >> Yeah. >> And I appreciate the comments on Jedi on the record, that was great. But I think the transformation story's a very real one, but the NFL news you guys just announced, to me, was so much fun and relevant. You had the Commissioner of the NFL on stage with you talking about a strategic partnership. That is as top down, aggressive a goal as you could get, to have Roger Goodell fly to a tech conference to sit with you and then bring his team to talk about the deal. >> Well, ya know, we've been partners with the NFL for a while with the Next Gen Stats that they use on all their telecasts and one of the things I really like about Roger is that he's very curious and very interested in technology and the first couple times I spoke with him he asked me so many questions about ways the NFL might be able to use the Cloud and digital transformation to transform their various experiences and he's always said if you have a creative idea or something you think that could change the world for us, just call me, he said, or text me or email me and I'll call you back within 24 hours.
And so, we've spent the better part of the last year talking about a lot of really interesting, strategic ways that they can evolve their experience both for fans, as well as their players and the Player Health and Safety Initiative, it's so important in sports and particularly important with the NFL given the nature of the sport and they've always had a focus on it, but what you can do with computer vision and machine learning algorithms and then building a digital athlete which is really like a digital twin of each athlete so you understand, what does it look like when they're healthy and compare that when it looks like they may not be healthy and be able to simulate all kinds of different combinations of player hits and angles and different plays so that you could try to predict injuries and predict the right equipment you need before there's a problem can be really transformational so we're super excited about it. >> Did you guys come up with the idea or was it a collaboration between them? >> It was really a collaboration. I mean they, look, they are very focused on players safety and health and it's a big deal for their- you know, they have two main constituents the players and fans and they care deeply about the players and it's a-it's a hard problem in a sport like Football, I mean, you watch it. >> Yeah, and I got to say it does point out the use cases of what you guys are promoting heavily at the show here of the SageMaker Studio, which was a big part of your Keynote, where they have all this data. >> Andy: Right. >> And they're data hoarders, they hoard data but the manual process of going through the data was a killer problem. This is consistent with a lot of the enterprises that are out there, they have more data than they even know. So this seems to be a big part of the strategy. How do you get the customers to actually wake up to the fact that they got all this data and how do you tie that together? 
>> I think in almost every company they know they have a lot of data. And there are always pockets of people who want to do something with it. But, when you're going to make these really big leaps forward; these transformations, the things like Volkswagen is doing where they're reinventing their factories and their manufacturing process or the NFL where they're going to radically transform how they do player health and safety. It starts top down and if the senior leader isn't convicted about wanting to take that leap forward and trying something different and organizing the data differently and organizing the team differently and using machine learning and getting help from us and building algorithms and building some muscle inside the company it just doesn't happen because it's not in the normal machinery of what most companies do. And so it always, almost always, starts top down. Sometimes it can be the Commissioner or CEO sometimes it can be the CIO but it has to be senior level conviction or it doesn't get off the ground. >> And the business model impact has to be real. For the NFL, they know concussions are hurting their youth pipeline, this is a huge issue for them. This is their business model. >> They lose even more players to lower extremity injuries. And so just the notion of trying to be able to predict injuries and, you know, the impact it can have on rules and the impact it can have on the equipment they use, it's a huge game changer when they look at the next 10 to 20 years. >> Alright, love geeking out on the NFL but Andy, you know- >> No more NFL talk? >> Off camera, how about we talk? >> Nobody talks about the Giants being 2 and 10. >> Stu: We're both Patriots fans here. >> People bring up the undefeated season. >> So Andy- >> Everybody's a Patriots fan now.
(Laughter) >> It's fascinating to watch you and your three hour Keynote, and Werner in his, you know, architectural discussion; it really showed how AWS is really extending its reach, you know, it's not just a place. For a few years people have been talking about, you know, Cloud is an operational model, it's not a destination or a location, but I felt it really was laid out as you talked about Breadth and Depth and Werner really talked about, you know, Architectural differentiation. People talk about Cloud, but there are a lot of differences between the visions for where things are going. Help us understand why, I mean, Amazon's vision is still a bit different from what other people talk about where this whole Cloud expansion, journey, put whatever tag or label you want on it, but you know, the control plane and the technology that you're building and where you see that going. >> Well I think that, we've talked about this a couple times, we have two macro types of customers. We have those that really want to get at the low level building blocks and stitch them together creatively however they see fit to create whatever's in their heads. And then we have the second segment of customers that say look, I'm willing to give up some of that flexibility in exchange for getting 80% of the way there much faster. In an abstraction that's different from those low level building blocks. And both segments of builders we want to serve and serve well and so we've built very significant offerings in both areas.
I think when you look at microservices, um, you know, some of it has to do with the fact that we have this very strongly held belief born out of several years of Amazon where, you know, the first 7 or 8 years of Amazon's consumer business we basically jumbled together all of the parts of our technology in moving really quickly and when we wanted to move quickly where you had to impact multiple internal development teams, it took so long because it was this big ball, this big monolithic piece. And we got religion about that in trying to move faster in the consumer business and having to tease those pieces apart. And it really was a lot of impetus behind conceiving AWS where it was these low level, very flexible building blocks that don't try and make all the decisions for customers, they get to make them themselves. And some of the microservices that you saw Werner talking about, just, you know, for instance, what we did with Nitro or even what we did with Firecracker, those are very much about us relentlessly working to continue to tease apart the different components. And even things that look like low level building blocks over time, you build more and more features and all of a sudden you realize they have a lot of things that are combined together that you wished weren't, that slow you down and so, Nitro was a complete reimagining of our Hypervisor and Virtualization layer to allow us both to let customers have better performance but also to let us move faster and have a better security story for our customers.
I talked to someone last night, who wouldn't be named, in and around the area, who said the CIA has a budget like this, but the demand for mission-based apps is going up exponentially, so there's a need for the Cloud. And so, you see more and more of that. What are your top down, aggressive goals to fill that solution base? Because you're also a very transformational thinker; what are your aggressive top down goals for your organization, because you're serving a market with trillions of dollars of spend that's shifting, that's on the table. >> Yeah. >> A lot of competition now sees it too, they're going to go after it. But at the end of the day you have customers that have a demand for things, apps. >> Andy: Yeah. >> And not a lot of budget increase at the same time. This is a huge dynamic. >> Yeah. >> John: What are your goals? >> You know I think that at a high level our top down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And we want that outstanding customer experience in part to mean that their operational performance and their security are outstanding, but also that it allows them to build projects and initiatives that change their customer experience and allow them to be a sustainable successful business over a long period of time. And then, we also really want to be the technology infrastructure platform under all the applications that people build. And we're realistic, we know that, you know, the market segments we address with infrastructure, software, hardware, and data center services globally are trillions of dollars in the long term and it won't only be us, but we have that goal of wanting to serve every application and that requires not just the security and operational premise but also a lot of functionality and a lot of capability.
We have by far the most capability out there and yet I would tell you, we have 3 to 5 years of items on our roadmap that customers want us to add. And that's just what we know today. >> And Andy, underneath the covers you've been going through some transformation. When we talked a couple of years ago about how serverless is impacting things, I've heard that that's actually, in many ways, the glue behind the two-pizza teams to work between organizations. Talk about how the internal transformations are happening. How that impacts your discussions with customers that are going through that transformation. >> Well, I mean, a lot of the technology we build comes from things that we're doing ourselves, you know? And that we're learning ourselves. It's kind of how we started thinking about microservices; serverless too, we saw the need, you know, we would build all these functions that when some kind of object came into an object store we would spin up compute, all those tasks would take like 3 or 4 hundred milliseconds, then we'd spin it back down and yet, we'd have to keep a cluster up in multiple availability zones because we needed that fault tolerance and it was- we just said this is wasteful and, that's part of how we came up with Lambda and you know, when we were thinking about Lambda people understandably said, well if we build Lambda and we build this serverless, event-driven computing, a lot of people who were keeping clusters of instances aren't going to use them anymore, it's going to lead to less absolute revenue for us. But we have learned this lesson over the last 20 years at Amazon which is, if it's something that's good for customers you're much better off cannibalizing yourself and doing the right thing for customers and being part of shaping something.
And I think if you look at the history of technology you always build things and people say well, that's going to cannibalize this and people are going to spend less money, what really ends up happening is they spend less money per unit of compute but it allows them to do so much more that they ultimately, long term, end up being more significant customers. >> I mean, you are like beating the drum all the time. Customers, what they say, you incorporate into the roadmap, I get that you guys have that playbook down, that's been really successful for you. >> Andy: Yeah. >> Two years ago you told me machine learning was really important to you because your customers told you. What's the next tranche of importance for customers? What's top of mind now, as you look at- >> Andy: Yeah. >> This re:Invent kind of coming to a close, Replay's tonight, you had conversations, you're a tech athlete, you're running around, doing speeches, talking to customers. What's that next hill, if it's machine learning today- >> There's so much I mean, (background noise) >> It's not a soup question (Laughter) And I think we're still in the very early days of machine learning, it's not like most companies have mastered it yet even though they're using it much more than they did in the past. But, you know, I think machine learning for sure, I think the Edge for sure, I think that, um, we're optimistic about Quantum Computing even though I think it'll be a few years before it's really broadly useful. We're very enthusiastic about robotics. I think the amount of functions that are going to be done by these- >> Yeah. >> robotic applications are much more expansive than people realize. It doesn't mean humans won't have jobs, they're just going to work on things that are more value added. We're believers in augmented and virtual reality, we're big believers in what's going to happen with Voice.
And I'm also uh, I think sometimes people get bored you know, I think you're even bored with machine learning already >> Not yet. >> People get bored with the things you've heard about but, I think just what we've done with the Chips you know, in terms of giving people 40% better price performance in the latest generation of X86 processors. It's pretty unbelievable in the difference in what people are going to be able to do. Or just look at big data I mean, big data, we haven't gotten through big data where people have totally solved it. The amount of data that companies want to store, process, analyze, is exponentially larger than it was a few years ago and it will, I think, exponentially increase again in the next few years. You need different tools and services. >> Well I think we're not bored with machine learning we're excited to get started because we have all this data from the video and you guys got SageMaker. >> Andy: Yeah. >> We call it the stairway to machine learning heaven. >> Andy: Yeah. >> You start with the data, move up, knock- >> You guys are very sophisticated with what you do with technology and machine learning and there's so much I mean, we're just kind of, again, in such early innings. And I think that, it was so- before SageMaker, it was so hard for everyday developers and data scientists to build models but the combination of SageMaker and what's happened with thousands of companies standardizing on it the last two years, plus now SageMaker studio, giant leap forward. >> Well, we hope to use the data to transform our experience with our audience. And we're on Amazon Cloud so we really appreciate that. >> Andy: Yeah. >> And appreciate your support- >> Andy: Yeah, of course. >> John: With Amazon and get that machine learning going a little faster for us, that would be better. >> If you have requests I'm interested, yeah. >> So Andy, you talked about that you've got the customers that are builders and the customers that need simplification. 
Traditionally when you get into the, you know, the heart of the majority of adoption of something, you really need to simplify that environment. But when I think about the successful enterprise of the future, they need to be builders. Now, I normally would've said enterprises want to pay for solutions because they don't have the skill set, but if they're going to succeed in this new economy they need to go through that transformation >> Andy: Yeah. >> That you talk about, so, I mean, are we in just a total new era? When we look back, will this be different than some of these previous waves? >> It's a really good question Stu, and I don't think there's a simple answer to it. I think that a lot of enterprises in some ways, I think wish that they could just skip the low level building blocks and only operate at that higher level abstraction. That's why people were so excited by things like SageMaker, or CodeGuru, or Kendra, or Contact Lens, these are all services that allow them to just send us data and then run it on our models and get back the answers. But I think one of the big trends that we see with enterprises is that they are taking more and more of their development in house and they are wanting to operate more and more like startups. I think that they admire what companies like Airbnb and Pinterest and Slack and Robinhood and a whole bunch of those companies, Stripe, have done and so when, you know, I think you go through these phases and eras where there are waves of success at different companies and then others want to follow that success and replicate it. And so, we see more and more enterprises saying we need to take back a lot of that development in house. And as they do that, and as they add more developers, those developers in most cases like to deal with the building blocks. And they have a lot of ideas on how they can creatively stitch them together.
>> Yeah, on that point, I want to just quickly ask you on Amazon versus other Clouds because you made a comment to me in our interview about how hard it is to provide a service to other people. And it's hard to have a service that you're using yourself and turn that around and the most quoted line of my story was, the compression algorithm- there's no compression algorithm for experience. Which to me, is the diseconomies of scale for taking shortcuts. >> Andy: Yeah. And so I think this is a really interesting point, just add some color commentary because I think this is a fundamental difference between AWS and others because you guys have a trajectory over the years of serving, at scale, customers wherever they are, whatever they want to do, now you got microservices. >> Yeah. >> John: It's even more complex. That's hard. >> Yeah. >> John: Talk about that. >> I think there are a few elements to that notion of there's no compression algorithm for experience and I think the first thing to know about AWS which is different is, we just come from a different heritage and a different background. We ran a business for a long time that was our sole business, that was a consumer retail business that was very low margin. And so, we had to operate at very large scale given how many people were using us, but also, we had to run infrastructure services deep in the stack, compute, storage and database, and reliable scalable data centers at very low cost and margins. And so, when you look at our business it actually, today, I mean, it's a higher margin business than our retail business, it's a lower margin business than software companies, but at real scale, it's a high volume, relatively low margin business. And the way that you have to operate to be successful with those businesses and the things you have to think about and that DNA come from the type of operators we have to be in our consumer retail business. And there's nobody else in our space that does that.
So, you know, the way that we think about costs, the way we think about innovation in the data center, and I also think the way that we operate services and how long we've been operating services as a company, it's a very different mindset than operating packaged software. Then when you think about some of the issues in very large scale Cloud, you can't learn some of those lessons until you get to different elbows of the curve and scale. And so what I was telling you is, it's really different to run your own platform for your own users where you get to tell them exactly how it's going to be done. But that's not the way the real world works. I mean, we have millions of external customers who use us from every imaginable country and location whenever they want, without any warning, for lots of different use cases, and they have lots of design patterns and we don't get to tell them what to do. And so operating a Cloud like that, at a scale that's several times larger than the next few providers combined, is a very different endeavor and a very different operating rigor. >> Well you got to keep raising the bar, you guys do a great job, really impressed again. Another tsunami of announcements. In fact, you had to spill the beans earlier with Quantum the day before the event. Tight schedule. I got to ask you about the music festival because I think this is a very cool innovation. It's the inaugural Intersect conference. >> Yes. >> John: Which is not part of Replay, >> Yes. >> John: Which is the concert tonight. It's a whole new thing, big music act, you're a big music buff, your daughter's an artist. Why did you do this? What's the purpose? What's your goal? >> Yeah, it's an experiment. I think that what's happened is that re:Invent has gotten so big, we have 65 thousand people here, that to do the party, which we do every year, it's like a 35-40 thousand person concert now.
Which means you have to have a location that has multiple stages and, you know, we thought about it last year and when we were watching it we said, we're kind of throwing, like, a 4 hour music festival right now. There's multiple stages, and it's quite expensive to set up that set for a party and we said well, maybe we don't have to spend all that money for 4 hours and then rip it apart, because actually the rent to keep those locations for another two days is much smaller than the cost of actually building multiple stages and so we thought we would try it this year. We're very passionate about music as a business and I think our customers feel like we've thrown a pretty good music party the last few years and we thought we would try it at a larger scale as an experiment. And if you look at the economics- >> What about the headliners, real quick? >> The Foo Fighters are headlining on Saturday night, Anderson Paak and the Free Nationals, Brandi Carlile, Shawn Mullins, um, Willy Porter, it's a good set. Friday night it's Beck and Kacey Musgraves, so it's a really great set of about thirty artists and we're hopeful that if we can build a great experience that people will want to attend, that we can do it at scale and it might be something that both pays for itself and maybe helps pay for re:Invent too over time and you know, I think that we're also thinking about it as not just a music concert and festival; the reason we named it Intersect is that we want an intersection of music genres and people and ethnicities and age groups and art and technology all there together and this will be the first year we try it, it's an experiment and we're really excited about it.
>> theCUBE is part of re:Invent, you know, you guys really are a part of the event and we really appreciate your coming here and I know people appreciate the content you create as well. >> And we just launched CUBE365 on Amazon Marketplace built on AWS so thanks for letting us- >> Very cool >> John: Build on the platform, appreciate it. >> Thanks for having me guys, I appreciate it. >> Andy Jassy, the CEO of AWS, here inside theCUBE. It's our 7th year covering and documenting the thunderous innovation that Amazon's doing. They're really doing amazing work building out the new technologies here in the Cloud computing world. I'm John Furrier, Stu Miniman, be right back with more after this short break. (Outro music)

Published Date : Sep 29 2020

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I'd like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, I've got the Keynote file there. YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance, when was the last time you pulled an image and had 100% confidence you knew what was inside it, where it was built, how it was built, when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can prevent that is through the use of labels. We can use labels to address security, and address some of the simplicity of how to run these images.
So think of it kind of like self-documenting. Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files, where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a commit, and hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Dockerfile on your own? Having a label specifically dedicated to how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These all not only talk about continuous integration, CI, but also start to talk about security. Specifically, what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example in my demo of a policy enforcement.
So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was modified to opencontainers: org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of a standardization. So obviously, authors is an email address, so now the image is attributable to a person; that's always good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile and all the assets? How it was built, build number, build server, the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's a simple declarative one. And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the image itself? And conversely, the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to take advantage of that. So how do we create labels? Really, creating labels is a function of build time, okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative, it's in version control, it's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static declaration to a more dynamic one with build arguments. And I'll show you in a little while how you can use a build argument at build time to pass in that variable.
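As a rough sketch of what that Dockerfile approach can look like, the fragment below wires build arguments into OCI label keys. The base image, email, URL, and the org.zdocker custom prefix are illustrative stand-ins, not necessarily what's in Andy's actual repo:

```dockerfile
# Hypothetical sketch -- the ARG/LABEL pattern follows the talk, the values don't.
FROM alpine:3.12

# Build-time arguments, filled in by the CI system with --build-arg.
ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT

# Static labels are declarative and live in version control; dynamic ones
# come from the build arguments above.
LABEL org.opencontainers.image.authors="you@example.com" \
      org.opencontainers.image.source="https://github.com/you/repo" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.zdocker.build_number="${BUILD_NUMBER}"
```

Built with something like `docker build --build-arg BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) --build-arg GIT_COMMIT=$GIT_COMMIT ...`, the resulting image carries its own provenance.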
And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control, or I should say, some of the variables coming out of our CI system. And that way, it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally and inspect it; obviously, it's going to output as JSON, so you're going to use something like jq to crack it open and look at the individual labels. Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server, so you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label, and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode from the label itself, from skopeo talking to the registry. And what's interesting about this technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction; inherently, if you use it as a label with a kubectl apply, it's just built in. It's kind of the KISS approach, to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard.
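The encode/decode round trip being described can be sketched in plain shell; the file name and manifest here are stand-ins for the real YAML in the repo:

```shell
# A minimal manifest standing in for the real Kubernetes YAML.
cat > k8s.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-demo
EOF

# Encode it to a single base64 string -- this is what would be passed in as a
# build argument and stored under a label like org.zdocker.kubernetes.
K8S_B64=$(base64 < k8s.yml | tr -d '\n')

# Decode it back -- this is the value you would later pull out of the label
# and pipe straight into `kubectl apply -f -`.
echo "$K8S_B64" | base64 -d
```

The decode prints the original manifest byte for byte, which is why no Helm chart or extra templating layer is needed on the consuming side.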
Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started, is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Dockerfile. I've got my Compose and my Kubernetes YAML. So let's take a look at the Dockerfile, right? So it's a simple Alpine image. The ARG statements are the build-time arguments that are passed in. Label, so again, I'm using org.opencontainers.image.blank for most of them. There's a typo there. Let's see if you can find it; I'll show you it later. My source, build date, build number, commit. Build number and git commit are derived from Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check hitting Python, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline, and I have four major stages. First I have build, and here in build, what I do is I actually do the git clone, and then I do my docker build. From there, I actually tell the Jenkins StackRox plugin, that's what I'm using for my security scanning, to go ahead and scan; basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Basically I'm pushing the image up to Hub such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod.
This is not exactly ideal; in your environment, you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins, and I've got a deliberate failure, and I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. And it says required image label alert, right? Request that the maintainer add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo. We didn't actually pass in, so if we go back to our repo, we didn't pass in the build-time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the fingers, you're going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom. The label exists, and it's successful. So here's where we can see no system policy violations were found, marking the StackRox security plugin build step as successful, okay?
So we're actually able to do policy enforcement that that label, sorry, exists in the image. And again, we can look at the security report, and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there it is, my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. Let's go ahead and take a look at all the labels for a second; let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, authors, build, commit number, look at the commit number. It was built today, build number 12. We saw that, right? There it is, build 12. So that's kind of cool, dynamic labels. Name, healthz, right? But what we're looking for is the org.zdocker.kubernetes label. So let's go look at that label real quick. Okay, well that doesn't really help us because it's encoded, but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard in. So now we've actually used that label, from the image that we've queried with skopeo from a remote registry, to deploy locally to our Kubernetes cluster. So let's go ahead and look, everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for traefik. I can go to flask.docker.life, and I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text.
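The skopeo-to-kubectl trick can be sketched end to end. Since a live registry may not be handy, the JSON below simulates `skopeo inspect` output, and `org.zdocker.kubernetes` is assumed as the custom label key; the real one-liner is shown in the comment:

```shell
# Simulated `skopeo inspect docker://registry/.../flask-demo:prod` output.
# Assumption: in the real demo this JSON comes from the registry, not a file.
cat > inspect.json <<'EOF'
{"Labels": {"org.zdocker.kubernetes": "YXBpVmVyc2lvbjogdjEKa2luZDogTmFtZXNwYWNlCm1ldGFkYXRhOgogIG5hbWU6IGZsYXNrLWRlbW8K"}}
EOF

# The real pipeline would be:
#   skopeo inspect docker://registry/flask-demo:prod \
#     | jq -r '.Labels["org.zdocker.kubernetes"]' | base64 -d | kubectl apply -f -
# Here we extract and decode the label the same way, minus the apply:
python3 - <<'PY'
import json, base64
labels = json.load(open("inspect.json"))["Labels"]
print(base64.b64decode(labels["org.zdocker.kubernetes"]).decode(), end="")
PY
```

The decoded output is a ready-to-apply manifest, which is the whole point: the image itself is the single source of truth for how to deploy it.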
We can go ahead and refresh it a couple times; we've got Redis storing a hit counter. We can see that our server name is round-robining. Okay? That's kind of cool. So let's recap a little bit about my demo environment. For my demo environment, I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, either full Rancher, full OpenShift or Docker Enterprise. I think K3s has some really interesting advantages on the development side; it's kind of intended for IoT, but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels, is none of this demo stack is required. You can be in any cloud, you can be on CentOS, you can be in any Kubernetes. You can even be in Swarm, if you wanted to, or Docker Compose. Any Ingress, any CI system, Jenkins, CircleCI, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparable product in that category. So I'd like to, again, point you guys to andyc.info/dc20; that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really take your images and the image provenance to a new level. Thanks for watching. (upbeat music)

Published Date : Sep 28 2020

William Janssen, DeltaBlue | Cloud Native Insights


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders around the globe, these are Cloud Native Insights. >> Welcome to another episode of Cloud Native Insights. I'm your host Stu Miniman, and of course with Cloud Native Insights we'll really help you understand, you know, where we have gone from cloud, how we are taking advantage of innovation. A real driver for what happens in the space is of course developers. You think back to the early days, it was often developers that were grabbing a credit card, using cloud services, and then it had to be integrated into what was being done, and the rest of the organization saw the large rise of DevOps and all the other pieces around that, that help bring in things like security and finance and the like. Happy to welcome to the program first-time guest, William Janssen. He is the CEO of DeltaBlue. Deep in this discussion of cloud native, DeltaBlue is a European company helping with continuous deployment across cloud providers in the space. William, thanks so much for joining us, nice to see you. >> Glad to be on the show, thank you Stu. >> All right, so one of the reasons I'm glad to have you on is because of some of the early episodes here, you know, we were discussing really what cloud native is and what it should be. I had my first interview on the program, Joep Piscaer, who, you know, had given the analogy and said when you talked about DevOps, DevOps isn't something you could buy, but it's something that lots of vendors would try to sell you. And we're trying to dispel, lots of companies out there, they're like, "Oh, cloud native, well we support Kubernetes. "And we have this tool and you should buy our cloud native, "you know, A, B, C or D." So, want to start a little first with what you see out there and what you think the ultimate goal and outcome of cloud native should be? >> I think cloud native, to start with your last question, I think cloud native should make life fun again.
We have a lot of technical problems, we solve them with technical things. You mentioned Kubernetes, but Kubernetes is solving a technical problem and introducing another technical problem. So what I think cloud native should do is focus on what you're actually good at. So a developer should develop. Someone from the infrastructure, an operator, should focus on their key points and not try to mix it up. So, not Kubernetes; Kubernetes is again introducing another technical issue. Our view on cloud native is that people should have fun again and should be focusing on what they're good at. And so it's not about technology, it's about getting the procedures right and focusing on the things you love to do. And not having to talk across that border, with developers solving operational kinds of things. That's what we try to solve, and that's our view of cloud native. >> Yeah, I'll poke at that a little bit, because one thing you say, people should do what they're good at. It's really, what is important for the business, what do we need to get done? There are often new skills that we need to learn. So it's really great if we could just keep doing the same thing we're doing. We know how to do it. We optimize it, we play with all of our geek knobs. But the drumbeat that I hear is, we need to be agile, we need to be able to create new applications. IT needs to be responsive to the business, and rather than in the past, where it was about building this beautiful stack that we could optimize and build these pieces together, today the analogy I hear more is, there's layers out there, there's lots of different tooling, especially if you look at the developer world. There are just too many options out there. So, maybe bring us a little bit as to, you know, what DeltaBlue does, how you look at allowing developers to build the new things that they need but not be, I guess the word is, locked into a certain place or certain technology.
So I've seen a lot of things go around. And when we started out with DeltaBlue, the only thing we had in mind is how could we make the lifecycle of applications and all the things you had to do, the government around applications way more easy. Back in the days, we already saw that containerization solved some of the issues. But it solves technical issues. So like when you start coding, you don't need to go to the network card anymore. We took the same approach to our cloud native approach. So we started on the top level. We started with applications in mind. And the things back in the day you had Bitnami already had the option to have a VM or standard installation of an application. So what we see is that nowadays, many developers and many organizations try to focus on that specific part, how to get your code into some kind of under configuration solution. We take that for granted. There are so many great solutions out there, already tried to solve that problem. So instead of reinventing that wheel again, we take that for granted. But we take another approach. We think that if the application is there, you need to test it. You need to take it into production. You want to have several versions of a specific application into the production environment. So what we've tried to solve with our platform is to make that part of the life cycle, let's call it horizontal version of your application lifecycle, not getting an application built or running up different stuff, we take that for granted. We take the horizontal approach. How to get your traditional application from your development environment to your testing, acceptance. That's a different kind of people test your application, security testing before you take it into production. And that should be all be done from a logical point of view. So we built one web interface, a logical portal. 
And you can simply drag and drop any type of application, not just a modern microservice-oriented or Kubernetes-based application, but any type of application, from your acceptance environment to your production environment. That's going to solve the real problem. So now, any business can have 10 different acceptance environments, for even your old legacy SAP or your Intershop environment. That's going to get your business value. So going back to your definition of cloud native, getting that kind of abstraction between coding your application and getting it up and running somewhere, and all the stuff that needs to be done from your development environment into the production environment. That's going to add to your business value. That's going to speed up your time to market; that's going to make sure that you have better code quality, because now you can test even your legacy application from 10 different points of view and 10 different types of different branches, all in a parallel environment. So, when we started with DeltaBlue, we took a different approach: took the technical stuff for granted, and focused on all the governance around applications. And the governance, that's the thing, I think that's the most important part in the cloud native discussion. >> So governance, especially in Europe, has a lot of importance there. If you could, bring us inside a little bit, customers you're talking to, where they are in this journey. If you've got an example of something you're doing specifically, we'd love to hear how that happens in the real world. >> Yes, we have many different customers, but I think one of our best examples, for example, is Wunderman Thompson, a big eCommerce party across the globe but also here in the Netherlands. And we made a blueprint of their development environment, the way they develop applications and the way they host applications. So, now they started a new project, 40 developers going to work on a new big eCommerce application.
In the past, everyone had to install their own Intershop environment on their own laptop, Java, Oracle, that kind of stuff. It took a day and a half. Since we abstracted that into like a simple cell, like you would do in any serverless environment nowadays, they can now simply click on a button. And since they made their laptop, or their development environment, part of our platform, they can now simply drag and drop the complete initial environment to the laptop, and they can start development in 10 minutes instead of a day and a half. That's just the first step that makes their life easier. But also imagine, we have an application up and running for two, three months, and there's a security patch. We all know the trouble of getting a patch installed in production, but also then installing it into the acceptance environment, test environment, development environment, all those kinds of different versions. With our platform, since we have the application in mind, we can, with one simple click of the button, propagate that security patch across all the different environments. So from a developer's point of view, there's no need to have any kind of knowledge, of course they need to configure a port or something like that, but no need for knowledge of any type of infrastructure anymore. We have made the same blueprint for the complete development environment. So with a single click of the button, they have a complete development environment, no need to go to their infrastructure guys to get a server, or to their operations guys to have them install Nexus, an artifact repository, all that kind of stuff. It's all within one blueprint. So again, we think that the application should come first. That should be abstracted, and not abstracted just as spinning up a container or spinning up a VM. The complete business case, application, complete environment should be up and running with a single click of a button.
So now they can start; if they have a demo tomorrow, for example, and they want to have a demo setup, with a single click they have a complete environment up and running, instead of having to wait three weeks, four weeks before they can start coding. And the same goes for the production environment. We now have an intelligent proxy in front of it, so they can have three different versions of the same shop in their production environment. And based on business rules, we can spread the load across the different versions of a business application, an eCommerce application. We signed a new contract with New Relic last week. And the next thing we're going to do, and it's going to be there in two weeks, is feed New Relic data in. I mean, an eCommerce application is about performance. A longer response time, a longer page load time, will drop your revenue. So what we're going to do with New Relic is feed its performance data back into the intelligent proxy in front of their application. So now they're going to drop the new version of their Intershop application on a Thursday evening, they go to sleep, Friday morning they wake up, and from the three versions, the best performing website will be up and running. That's the kind of intelligence and that's the kind of feedback we can put into our platform, since we started with applications in mind first. It's getting better quality, because you can do better testing. I mean, we all want to test, but we never want to wait for those different kinds of setups; we want to have fast development cycles. That kind of flexibility, where you do the functional deployment, the functional release, not the technical stuff. What we now see in the market is that most people, when they go to the cloud, try to solve the technical release problems of getting the application up and running, in a technical way, into production; we try to focus on the functional level.
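A toy version of that feedback loop might look like the following shell sketch. The version names and millisecond figures are invented here; in DeltaBlue's setup the numbers would come from New Relic and the routing decision would live in their intelligent proxy, not a text file:

```shell
# Average page-load time per deployed version (hypothetical measurements).
cat > response_times.txt <<'EOF'
v1 412
v2 386
v3 473
EOF

# Pick the version with the lowest average response time and route to it.
BEST=$(sort -k2 -n response_times.txt | head -n1 | awk '{print $1}')
echo "routing 100% of traffic to $BEST"
```

A real proxy would likely shift traffic gradually rather than all at once, but the core decision, rank versions by a performance metric and prefer the winner, is the same.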
>> So, William, being data driven is a very important piece of what you talked about there. What I want to help our audience understand is, there are concerns that if you talk about abstractions, or if you want to be able to live across different environments, can you take advantage of the full capabilities of the underlying platform? Because one of the reasons we go to cloud isn't just because it's got limitless compute and pricing comes down. But there's always new features coming out, or I want to be able to go to a cloud provider and take advantage of some specific feature. So help us understand how I can live across these environments but still take advantage of those cloud native features and innovations as they come out. >> Great. There are actually two ways. For most alternatives, we also have an alternative component in our platform as well. We have a complete marketplace with all kinds of functionality, like AWS has, but I can imagine that people want to develop on AWS and use AWS Lambda functions or S3 buckets or that kind of specific functionality. And going back to the Intershop example, they run their application as a CaaS solution on Azure, so they want Azure DevOps, or that kind of specific functionality, included. Our platform connects over 130 different data centers across the globe, and Azure and AWS and OVH and DigitalOcean are all part of the huge mix of different cloud providers. For every provider, we have what we call gateway components. We deploy natively, mostly bare metal or equivalents of bare metal, within those cloud providers. And we made an abstraction layer on the network layer. So now we can include those kinds of specific services as if they were part of our platform natively. Because if we would have just built a layer and couldn't use the specific components of an AWS or an Azure or that kind of stuff, we would just be another hosting provider, like a VMware, that kind of stuff.
We want to, and we are aware that we need to, include specific functionality. And we do this with what we call gateway components. So we have AWS gateway components, Azure's, but also for IBM or Google specific environments. So we can combine the network of AWS with our specific network. And that's possible because we made a complete abstraction layer between the network of the infrastructure provider and our network. So we can keep complete IP subnets and DNS resolution as if it was running in their local environment. And thereby, since we have that abstraction layer, we can even move the workloads on AWS to Azure. And since we have the network abstraction layer, we can even make sure that you don't need to reconfigure your application. I think that's the flexibility that people are looking for. And if they have a specific workload on Azure and it's getting too expensive, or the same for AWS, they want to shift the workload to a different kind of cloud provider based on the characteristics of the specific workload, or even, if you want to have the cheapest option, you can even use your on-premise data center. >> William, there absolutely is interest in doing that. One of the barriers to being able to just go between environments is of course the skills required to do this. So, there's something to be said about, if I use a single provider, I understand how to do it, I understand how to optimize it, I understand the finances of it. And while there may be very similar things in another cloud, or in my own data center, the management tools are different and everything. So how do we overcome that skill set challenge between different environments? >> We had a different approach. The same as we do it on the application level, we took it also to the data center level. So we can handle most, I cannot say all, because there's always specific components.
But from our interface, you can simply go to a specific application and select the type of data center you want to run your application on. And if your application is running on AWS, you get the gateway components, like an S3 bucket or a Lambda or an RDS, based on the data center you're running in. So we took that abstraction layer even to that level. But I've got to be honest, I think 80% of our customers are not interested in the data center they run their application in, unless they have specific functionality which is not available on our platform, or they have a long-running application, or they bought a specific application. Otherwise, they don't care. Because for a traditional application, there is no difference between running on Azure or Google Cloud or an IBM cloud or whatever. The main difference is that we can make a guarantee about the SLA. I mean, IBM has a better uptime guarantee, a better performance and a better network compared to, let's say, DigitalOcean, that kind of setup. But there is a huge difference. But it's more like the guarantee that we can give them. So we have these abstraction layers, and we try to put as much as possible into our portal interface. There's no way that we're going to redesign or rework the complete AWS interface, and we're not going to include 100% of their functionality. That's not possible. We're a small company; AWS has somewhat more developers in place. But the main components people are asking for, like RDS or these kinds of specific setups, that's what we have the gateway components available for, and they can include them in their own application. But we're also going to ask them why they're looking for those specific AWS components. Is it within the application architecture, or is it something legacy, right? Isn't there a better solution or another solution?
And I think, since we have that abstraction, one of the biggest benefits, and what we see our customers also do, is that we incorporate their data center into our platform. And we have one huge network across all the cloud providers, including their own data center. So in the past, they had to have two different development teams: one specialized in AWS development, with all that kind of specific stuff, and one development team which had more of a traditional point of view, because their internal systems and data were not allowed to go outside the company, or had to stay within the firewall. And since we now have one big network, which is transparent to them, we can make sure that the code for their internal systems stays internal and is running on internal systems, but they can still use some kind of functionality from the outside. We do it all encrypted today, and we have one big platform available. So with our gateway components, we can make sure that that data and application data really stays internal, and is only allowed internal data access and that kind of stuff, but can still use external functionality. But again, I would say 80% of our customers don't care, because they just want to get rid of the burden. I think going back to what we think cloud native means, it's just getting rid of the burden. And you shouldn't be concerned about what type of cloud we're actually using. >> Absolutely, William, the goal of infrastructure is to support my applications and my data, and we want companies to be able to focus on what is important for the business and not get bogged down in certain technical arguments. So William, thank you so much for joining us. Really great to hear about DeltaBlue. Looking forward to hearing more in the future. >> Thank you. >> I'm Stu Miniman. And look forward to hearing more of your cloud native insights.

Published Date : Jul 17 2020


Anurag Goel, Render & Steve Herrod, General Catalyst | CUBE Conversation, June 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to this CUBE Conversation, from our Boston area studio, I'm Stu Miniman, happy to welcome to the program, first of all we have a first time guest, always love when we have a founder on the program, Anurag Goel is the founder and CEO of Render, and we've brought along a longtime friend of the program, Dr. Steve Herrod, he is a managing director at General Catalyst, an investor in Render. Anurag and Steve, thanks so much for joining us. >> Thank you for having me. >> Yeah, thanks, Stu. >> All right, so Anurag, Render, your company, the tagline is the easiest cloud for developers and startups. It's a rather bold statement, most people feel that the first generation of cloud has happened and there were certain clear winners there. The hearts and minds of developers absolutely have been a key thing for many, many companies, and one of those drivers in the software world. Why don't you give us a little bit of your background, and as the founder of the company, what was it, the opportunity that you saw, that had you create Render? >> Yeah, so I was the fifth engineer at Stripe, and helped launch the company and grow it to five billion dollars in revenue. And throughout that period, I saw just how much money we were spending on just hiring DevOps engineers, AWS was a huge, huge management headache, really, there's no other way to describe it. And even after I left Stripe, I was thinking hard about what I wanted to do next, and a lot of those ideas required some form of development and deployment, and putting things in production, and every single time I had to do the same thing over and over and over again, as a developer, so despite all the advancements in the cloud, it was always repetitive work, and that wasn't just for my projects, I think a lot of my friends felt the same way.
And so, I decided that we needed to automate some of these new things that have come about, as part of the regular application deployment process, and how it evolves, and that's how Render was born. >> All right, so Steve, remember in the early days, cloud was supposed to be easy and inexpensive, I've been saying on theCUBE it's like well, I guess it hasn't quite turned out that way. Love your viewpoint a little bit, because you've invested here; to really be competitive in the cloud, it's tens of billions of dollars a year that need to go into this, right? >> Yeah, I had the fortunate chance to meet Anurag early on, General Catalyst was an investor in Stripe, and so seeing what they did sort of spurred us to think about this, but I think we've talked about this before, also, on theCUBE, even back, long ago in the VMware days, we looked very seriously at buying Heroku, one of the early players, and still around, obviously, at Salesforce, in this PaaS space, and every single infrastructure conversation I've had from the start, I have to come back to myself and come back to everyone else and just say, don't forget, the only reason any infrastructure even exists is to run applications. And as we talked about, the first generation of cloud, it was about, let's make the infrastructure disappear, and make it programmatic, but I think even that, we're realizing from developers, is just still way too low of an abstraction level.
You want to write code, you want to have it in GitHub, and you want to just press go, and it should automatically deploy, automatically scale, automatically secure itself, and just let the developer focus purely on the app, and that's an idea that people have been talking about for 20 years, and should continue to talk about, but I really think with Render, we found a way to make it just super easy to deploy and run, and certainly there are big players out there, but it really starts with developers loving the platform, and that's been Anurag's obsession since I met him. >> Yeah, it's interesting, when I first was reading I'm like "Wait," it reminds me a lot of somebody like DigitalOcean, a cloud for developers. Steve, we walked through, the PaaS discussion has gone through so many iterations, what would containerization do for things, or serverless, which from its name says I don't need to think about that underlying layer. Anurag, give us a little bit as to how we should think of Render, you are a cloud, but you're not so much, you're not an infrastructure layer, you're not trying to compete against the laundry list of features that AWS, Azure, or Google have, you're a little bit different than some of the previous PaaS players, and you're not serverless, so, what is Render?
>> Yeah, it is actually a new category that has come about because of the advent of containers, and because of container orchestration tools, and all of the surrounding technologies, that make it possible for companies like Render to innovate on top of those things, and provide experiences to developers that are essentially serverless, so by serverless you could mean one of two things, or many things really, but the way in which Render is serverless is you just don't have to think about servers, all you need to do is connect your code to GitHub, and give Render a quick start command for your server and a build command if needed, and we suggest a lot of those values ourselves, and then every push to your GitHub repo deploys a new version of your service. And then if you wanted to check out pull requests, which is a way developers test out code before actually pushing it to deployment, every pull request ends up creating a new instance of your service, and you can do everything from a single static site, to building complex clusters of several microservices, as well as managed Postgres, things like clustered Kafka and Elasticsearch, and really one way to think about Render, is it is the platform that every company ends up building internally, and spends a lot of time and money to build, and we're just doing it once for everyone and doing it right, and this is what we specialize in, so you don't have to. 
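The deployment flow Anurag describes (connect a GitHub repo, give Render a build command and a start command, and every push deploys a new version) maps naturally onto an infrastructure-as-code service definition. The sketch below is a hypothetical Render blueprint file: the field names follow Render's documented `render.yaml` format, but the service name, runtime, and commands are illustrative assumptions, not details from the interview.

```yaml
# Hypothetical render.yaml blueprint for a single web service.
# Field values (name, runtime, commands) are illustrative assumptions.
services:
  - type: web                 # a public HTTP service
    name: example-api         # hypothetical service name
    env: node                 # runtime used for the commands below
    buildCommand: npm install && npm run build
    startCommand: npm start
    autoDeploy: true          # every push to the linked repo deploys
```

With a definition like this checked into the repository, each push triggers a build and deploy, and a pull request can spin up its own preview instance of the service, which is the workflow described above.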
Yeah, just to add to that if I could, Stu, what I think is interesting is that we've had and talked about a lot of startups doing a lot of different things, and there's a huge amount of complexity to enable all of this to work at scale, and to make it work with all the things you look for, whether it's storage or CDNs, or metrics and alerting and monitoring, all of these little startups that we've gone through and big companies alike, if you could just hide that entirely from the developer and just make it super easy to use and deploy, that's been the mission that Anurag's been on from the start, and as you hear it from some of the early customers, and how they're increasing the usage, it's just that love of making it simple that is key in this space. >> All right, yeah, Anurag, maybe it would really help illustrate things if you could talk a little bit about some of your early customers, their use cases, and give us what stats you can about how your company's growing. >> Certainly. So, one of our more prominent customers was the Pete Buttigieg campaign, which ran through most of 2019, and through the first couple of months of 2020. And they moved to us from Google Cloud, because they just could not or did not want to deal with the complexity in today's standard infrastructure providers, where you get a VM and then you have to figure out how to work with it, or even Managed Kubernetes, actually, they were trying to run on Managed Kubernetes on GKE, and that was too complex or too much to manage for the team. And so they moved all of their infrastructure over to Render, and they were able to service billions of requests over the next few months, just on our platform, and every time Pete Buttigieg went on stage during a debate and said "Oh, go to PeteForAmerica.com," there was a huge spike in traffic on our platform, and it scaled with every debate.
And so that's just one example of where really high quality engineering teams are saying "No, this stuff is too complex, it doesn't need to be," and there is a simpler alternative, and Render is filling in that gap. We also have customers all over, from single indie hackers who are just building out their new project ideas, to late-stage companies like Stripe, where we are making sure that we scale with our users, and we give them the things that they would need without them having to "mature" into AWS, or grow into AWS. I think Render is built for the entire lifecycle of a company, which is you start off really easily, and then you grow with us, and that is what we're seeing with Render, where a lot of customers are starting out simple and then continuing to grow their usage and their traffic with us. >> Yeah, I was doing some research getting ready for this, Anurag, I saw, not necessarily you're saying that you're cheaper, but there are some times that price can help, performance can be better, if I was a Heroku customer, or an AWS customer, I guess what might be some of the reasons that I'd be considering Render? >> So, for Heroku, in the comparison, of course, there's a big difference in price, because we think Heroku is significantly overpriced, because they have a perpetual free tier, and so their paid customers end up footing the bill for that. We don't have a perpetual free tier that way, we make sure that our paid customers pay what's fair, but more importantly, we have features that just haven't been available in any platform as a service up until now, for example, you cannot spin up persistent storage, block storage, in Heroku, you cannot set up private networking in Heroku as a developer, unless you pay for some crazy enterprise tier which is 1,500 to 3,000 dollars a month.
And Render just builds all of that into the platform out of the box, and when it comes to AWS, again, there's no comparison in terms of ease of use, we'll never be cheaper than AWS, that's not our goal either, it's our goal to make sure that you never have to deal with the complexity of AWS while still giving you all of the functionality that you would need from AWS, and when you think about applications as applications and services, as opposed to applications that are running on servers, that's where Render makes it much easier for developers and development teams to say "Look, we don't actually need to hire hundreds of DevOps people," we can significantly reduce our DevOps team, and the existing DevOps team that we have can focus on application-level concerns, like performance. >> All right, so Steve, I guess, a couple questions for you, number one is, we haven't talked about security yet, which I know is a topic near and dear to your heart, was one of the early concerns about cloud, but now often is a driver to move to cloud, give us the security angle for this space. >> Yeah, I mean the key thing in all of this space is to get rid of the complexity, and complexity and human error is often, as we've talked about, the number one security problem. So by taking this fresh approach that's all about just the application, and a very simple GitOps-based workflow for it, you're not going to have the human error that typically comes from misconfigured things. I think more broadly, the overall notion of the serverless world has also been a very nice move forward for security. If you're only bringing up and taking down the pieces of the application as needed, they're not there to be hacked or attacked.
So I think for those two reasons, this is really a more modern way of looking at it, and again, I think we've talked about many times, security is the bane of DevOps, it's the slowest part of any deployment, and the more we get rid of that, the stronger the value proposition becomes: safer and also faster to deploy. >> The question I'd like to hear from both of you on is, the role of the developer has changed an awful lot. Five years ago, if I talked to companies, and they were trying to bring DevOps to the enterprise, or anything like that, it seemed like they were doomed, but things have matured, we all understand how important the developer is, and it feels like that line between the infrastructure team and the developer team is starting to move, or at least have tools and communication happening between them, I'd love, maybe Steve if you can give us a little bit of your macro view of it, and Anurag, where that plays for Render too.
You typically have people peeling off individual projects, and trying to move faster, and use some new approach for those, and then as those hopefully prove successful, more and more of the existing projects will begin to move over there, and so what Render's been doing, and what we've been hoping from the start, is let's attract some of the key developers and key new projects, and then word will spread within the companies from there, but so the answer with a lot of these companies is: make developers love you, and make the infrastructure team at least support you. >> Yeah, and that was a really good point about developers and infrastructure, DevOps people, the line between them sort of thinning, and becoming more of a gray area, I think that's absolutely right, I think the developers want to continue to think about code, but then, in today's environment, outside of Render, when we see things like AWS, and things like DigitalOcean, you still see developers struggling. And in some ways, Render is making it easy for smaller companies and developers and startups to use the same best practices that a fully fledged DevOps team would give them, and then for larger companies, again, it makes it much easier for them to focus their efforts on business development and making sure they're building features for their users, and making their apps more secure outside of the infrastructure realm, and not spending as much time just herding servers, and making those servers more secure. To give you an example, Render's machines, where our workloads run, aren't even accessible from the public internet, so there's no firewall to configure, really, for your app, there's no DMZ, there's no VPN. And then when you want a private network, that's just built into Render along with service discovery. All your services are visible to each other, but not to anyone else.
And just setting those things up, on something like AWS, and then managing it on an ongoing basis, is a huge, huge, huge cost in terms of resources, and people. >> All right, so Anurag, you just opened your first region, in Europe, Frankfurt if I remember right. Give us a little bit as to what growth we should expect, what you're seeing, and how you're going to be expanding your services. >> Yeah, so the expansion to Europe was by far our most requested feature, we had a lot of European users using Render, even though our servers were, until now, based in the US. In fact, one of, or perhaps the largest recipe-sharing site in Italy was using Render, even though the servers were in the US, and all their users were in Italy, and when we moved to Europe, that was like, it was Christmas come early for them, and they just started moving over things to our European region. But that's just the start, we have to make sure that we make compute as accessible to everyone, not just in the US or Europe but also in other places, so we're looking forward to expanding in Asia, to expanding in South America, and even Africa. And our goal is to make sure that your applications can run in a way that is completely transparent to where they're running, and you can even say "Look, I just want my application to run "in these four regions across the globe, "you figure out how to do it," and we will. And that's really the sort of dream that a lot of platforms as service have been selling, but haven't been able to deliver yet, and I think, again, Render is sort of this, at this point in time, where we can work on those crazy crazy dreams that we've been selling all along, and actually make them happen for companies that have been burned by platforms as a service before. 
>> Yeah, I guess it brings up a question, you talk about platforms, and one of the original ideas of PaaS and one of the promises of containerization was, I should be able to focus on my code and not think about where it lives, but part of that was, if I need to be able to run it somewhere else, or want to be able to move it somewhere else, that I can. So that whole discussion of portability, in the Kubernetes space, it definitely is something that gets talked quite a bit about. And can I move my code, so where does multicloud fit into your customers' environments, Anurag, and is it once they come onto Render, they're happy and it's easy and they're just doing it, or are there things that they develop on Render and then run somewhere else also, maybe for a region that you don't have, how does multicloud fit into your customers' world? >> That's a great question, and I think that multicloud is a reality that will continue to exist, and just grow over time, because not every cloud provider can give you every possible service you can think of, obviously, and so we have customers who are using, say, Redshift, on AWS, but they still want to run their compute workloads on Render. And as a result, they connect to AWS from their services running on Render. The other thing to point out here, is that Render does not force you into a specific paradigm of programming. So you can take your existing apps that have been containerized, or not, and just run them as-is on Render, and then if you don't like Render for whatever reason, you can take them away without really changing anything in your app, and run them somewhere else. Now obviously, you'll have to build out all the other things that Render gives you out of the box, but we don't lock you in by forcing you to program in a way that, for example, AWS Lambda does. 
And when it comes to the future, multicloud, I think Render will continue to run in all the major clouds, as well as our own data centers, and make sure that our customers can run the appropriate workloads wherever they are, as well as connect to them from the Render services with ease. >> Excellent. >> And maybe I'll make one more point if I could, Stu, which is one thing I've been excited to watch is the, in any of these platform as a services, you can't do everything yourself, so you want the opensource package vendors and other folks to really buy into this platform too, and one exciting thing we've seen at Render is a lot of the big opensource packages are saying "Boy, it'd be easier for our customers to use our opensource "if it were running on Render." And so this ecosystem and this set of packages that you can use will just be easier and easier over time, and I think that's going to lead to, at the end of the day people would like to be able to move their applications and have it run anywhere, and I think by having those services here, ultimately they're going to deploy to AWS or Google or somewhere else, but it is really the right abstraction layer for letting people build the app they want, that's going to be future-proof. >> Excellent, well Steve and Anurag, thank you so much for the update, great to hear about Render, look forward to hearing more updates in the future. >> Thank you, Stu. >> Thanks, Stu, good to talk to you. >> All right, and stay tuned, lots more coverage, if you go to theCUBE.net you can see all of the events that we're doing with remote coverage, as well as the back catalog of what we've done. I'm Stu Miniman, thank you for watching theCUBE. (calm music)

Published Date : Jun 8 2020


Disha Chopra, Juniper | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners. (techy music) >> Hey, welcome back, everybody. Jeff Frick here with theCUBE, we're at AWS re:Invent 2018 in Las Vegas, day two of four days of coverage. I think we'll do 120 interviews. I mean, this is the most poppin' show in tech right now. We're really excited to be here, and joined by my cohost, Lauren Cooney. Lauren, great to see you. >> Thank you. Great to see you, too. >> And we've got... (chuckling) We've got our next guest, it's Disha Chopra, she's a senior manager, product line manager for Juniper Networks, welcome. >> Thank you, feels great to be here. >> Good. >> So, what do you think of this show, have you been to re:Invent before? >> Oh, my God, no, this is my first one, and I am so excited. The energy is so great, it's vibrant, I'm learning a lot, I'm very happy to be here. >> So, Juniper's been around for a long time, way predating this cloud, this whole cloud thing, so what are you guys up to, what's the latest, and really, why are you here at re:Invent? What's your story with AWS? >> Yeah, absolutely. So, I think the latest thing with us is as early as today there was... We were posted on the AWS partner solution website. Vodafone is partnering with Juniper for their SD-WAN offering with, you know, the SD-WAN controller that's sitting in AWS, managing all their branch offices, so that's what's the newest with us, and you know, we've been making waves with a lot of partnerships recently. Couple of months ago, or maybe just a month ago, we announced with Nutanix, so that announcement was focused more for our enterprise customers. Integration with Nutanix is a hyperconverged infrastructure where Juniper will be, you know, integral part of their networking, providing for their converged infrastructure, and then before that, I think a few months ago we had Red Hat. 
We announced a partnership with Red Hat, and you know, that's focused on our telco cloud. So, as you were mentioning, Juniper's been around for a long time-- >> Right. >> And you know, telco clouds are our strong suit. Telcos, now telco cloud, right, and similarly for enterprise. If you think about it, you know, large enterprises and telcos, they're not that different, right? So, that's where we were at, and that's more kind of... We're following the evolution like our customers are, right? They used to be telco, now they're telco cloud. Juniper, I think the newest thing with Juniper, to be honest, in technology, I spoke about partnerships, but it's our cloud-first strategy. That's what we have in mind. We are evolving with our customers, helping them in their journey for cloud adoption, cloud migration, right? It's a couple of sentences to say that, "Oh, we're helping our customers with cloud migration," but we're, you know, there's so many steps in between. They are very complex, you need a lot of handholding, and we're right there for our customers.
So, what we're doing is for our customers we manage their existing on-prem network, which you know, a lot of our customers, you know, they're huge and they have a significant amount of footprint, global footprint, right, so we understand that, we're able to connect them to the AWS, to the GCP, to the Azure, right, and the value proposition for them is that if they wanted to do it themselves they have to understand, you know, three different or five different clouds, right. You have IBM, you have DigitalOcean. There's a lot out there, right, and getting the opex or getting the talent to be able to understand all these things and do the migration, it's hard, right? This is a complex problem to solve, so what Juniper brings to the table is we abstract it out. So, for example, I wanted to move--
So, what Contrail does is, and I think that's what I was kind of referring to earlier, it gives you that higher level of abstraction where you don't have to worry about: "Is my workload running in AWS? "Is my workload running in GCP?" It doesn't matter, right, you as an enterprise, or as a telco, we want you to focus on, you know, powering your applications, powering your services. We don't want you to worry about your infrastructure, that's our job, right? We want to completely hide all the complexity away from you, and just, you know, let you do what generates revenue. >> So, as an application developer, right, so I'm an application developer and I use Azure, for example, right-- >> Yeah. >> And that's kind of my platform, and I'm, you know, doing some interesting stuff with like, you know, some scripting, or I'm building, you know, just a general, like, new website or something like that with, you know, a couple different things. So, as a developer at that level, I don't even know about Contrail. >> Exactly, exactly. >> Exactly, but I don't think Contrail yet extends up to that layer where it can manage everything across multiple clouds. >> So, it provides you as a developer, like you said, you're writing an application, you don't care about the infrastructure. It's just there, right? >> Mm-hm. >> And we want to keep it that way. Contrail is there, Contrail is at that level. Contrail is going to provide the plumbing, so you as a developer, today everything, all developers are moving towards containers, right? So, for example, the Red Hat partnership that I brought up earlier, that's focused on the Red Hat OpenShift platform, their PaaS service, which is a container-based service. Contrail integrates with Kubernetes, we integrate with Mesos, we integrate with Docker. So, as a developer, when you employ these tools to write your code, you know, using a CI/CD platform, Contrail is sitting right under it, giving you that connectivity.
So, for example, when you're developing your application and (clearing throat) you know, you deploy it, you deploy part of it in Azure, you deploy part of it in AWS, right, and you don't care where it goes, you just-- >> Or you use one for, like, bursting or something like that. >> Exactly, yeah, yeah. >> You know, the rest of it on-prem. >> Correct, so-- >> That sort of thing. >> You know, it's distributed, right? So, who's going to plumb it and make sure that it's giving you the results that you need? That's where Contrail comes in. Gives you that plumbing between on-prem, between AWS. >> So, how is that different from Kubernetes as a whole? Like, I know that it's, you know, it does like container management, orchestration, deployment-- >> Correct. >> Delivery, how does-- >> Right. >> Contrail kind of come in and work with Kubernetes? >> Right. So, great question, by the way, you know your stuff, so (laughing) Kubernetes is... Kubernetes is orchestration for your workloads, right? It's services, Kubernetes provides a service, like it gives you a service web. You deploy a bunch of Kubernetes minions, they all work together to give you that application that you need. Now, what Contrail does is it provides the networking between those Kubernetes pods. So, let's say you want to scale up your application. Okay, you had 10 pods, now you want to go to 20. Kubernetes makes that decision for you that you need the 20 pods, and then Contrail is sitting under it giving you the networking for those 20 pods. So, when those 20 pods spin up, Kubernetes pokes Contrail and says, "Hey, 20 more, and these need to talk to "those 10 pods that were already there," right? >> So, Contrail is open source, right? >> Correct. >> Why haven't you donated it yet to the CNCF? >> (chuckling) We are part of CNCF, we recently-- >> I know that. >> Yeah. >> But fundamentally, if you want that to be pulled as much as you do... >> Yeah. >> It's already open source. >> Yeah, you're right.
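The scale-up hand-off described here — the orchestrator decides it needs 20 pods, then pokes the network layer to wire the 10 new ones to the 10 already running — can be sketched roughly as follows. The function and pod names are invented for illustration; the real Contrail integration happens through CNI-style plumbing, not a call like this:

```python
def links_for_scale_up(existing, desired_count):
    """Given the pods already running and the new desired count, return
    the new pod names plus every link the network layer must plumb:
    each new pod to each old pod, and the new pods to each other."""
    new = [f"pod-{i}" for i in range(len(existing), desired_count)]
    links = [(n, e) for n in new for e in existing]                    # new <-> old
    links += [(a, b) for i, a in enumerate(new) for b in new[i + 1:]]  # new <-> new
    return new, links

existing = [f"pod-{i}" for i in range(10)]  # the 10 pods already there
new, links = links_for_scale_up(existing, 20)
print(len(new), len(links))  # prints: 10 145
```

The point of the toy is only that the networking work grows with the scale decision: 10 new pods against 10 old ones means 100 new-to-old links plus 45 new-to-new links, all created by the plumbing layer rather than the application.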
>> You might as well kind of get on that thread with the Kubernetes folks-- >> Right, yeah. >> And start talking to them about how you make it part of, you know, the core distribution that then goes into, you know, six different distros. >> Correct, correct, yeah. >> You know, something along those lines versus don't start your own distro. (chuckling) >> Sorry. >> Right, don't start your own distro, but look at how you can become integrated into that Kubernetes stream, the main stream. >> Correct, yeah, yeah, yeah, exactly. Yeah, no, that is definitely something that, like you're saying, it's something that we, you know, we want to do, that's the direction that we want to go at, but I think the actual decision is maybe above my pay grade, so I don't (chuckling) want to make a commitment here. >> Fair enough. >> So, you know... (chuckling) >> Disha, I want to follow up on a slightly different track. When you talk about cloud-first, and you answered the question, which is when you say cloud-first, is that, you know, kind of the way you're going to market with your customers, or is that the way you guys are looking at Juniper in terms of transforming the company? >> Mm-hm. >> And it sounds like you said it's more of the latter, really starting to reformulate Juniper-- >> Correct. >> As a cloud-first service company. >> Exactly. >> So, how is that transformation going inside the company, that's a pretty significant-- >> It is, it is, yeah. >> Shift from selling boxes and maintenance agreements and-- >> Yeah. >> Shipping metal. >> Yeah, we are definitely modernizing from within, right, but a lot of it is driven by our customers. Like I was saying, you know, they are evolving, they want to connect to the cloud, and you know, we obviously want to help them do that. As part of that, we want to be microservices-based, right, because we want to be able to support containers. These are just things that, you know, we need to do.
Juniper is a leader as far as, you know, innovation and networking is concerned. >> Right, right. >> So, it was never a question of if we want to do this, or if we want to go down this path or not, right, it's when, right? >> Right, right. >> And we are definitely working day in and day out to make that happen, so you know, a lot of our offerings, like recently we came out with our containerized SRX solution. SRX is our full-feature, full-service, next generation firewall, and we have containerized it, right. I believe it's the first offering of its kind, containerized, host-based firewall, so you know, innovative stuff happening all the time. Like you said, you know, it's definitely a Herculean task-- >> Right, right. >> But we're up for it-- >> Right. >> And we're doing it. >> And I'm just curious to when the customer conversations-- >> Yeah. >> You know, the hybrid cloud, multicloud, public cloud conversation, right, it's a lot of conversation. How do you take your customers down the path? Where do you see them, you know, trying to navigate in what's got to be a pretty complex world for-- >> It is, definitely. >> A CIO trying to figure out what they're supposed to buy and not buy, how to pay attention, can I hit all the booths-- >> Right, right, right, right. >> Here at AWS in three days, I don't think so. >> (laughing) I know, yeah, these conversations, to be honest, have been going for the past couple of years, right. A lot of our customers, the intent is there to move to the cloud, and you know, we are trying to help them with it, so you know, we design with them. We design their network, we design their topologies, we handhold them telling them how to do this, right, their existing networks that they have. The complexity comes in because everything, right, think of a company, right, a large company. 
It then goes ahead and acquires 10 more, and they all have their own networks, they all have their own environments, VMware, Red Hat, you know, Nutanix, so different kinds of environments now all need to connect to the cloud. You don't want them to be siloed. You also don't want to deal with, you know, all those different kinds of, like I was saying, you know, skillset to be able to connect them all individually. So, when we talk to our customers, that's what we tell them, that you know, with a Juniper-based solution we have so many of them that work together in a cohesive way to give you that end-to-end connectivity. Secure, automated multicloud, that's our mantra, right, and it's as far as, you know, engineering is concerned, engineering simplicity. If you come down to Juniper it's plastered all over the walls, right, engineering simplicity. We were really driving that message internally so that... And a lot of the CI/CD stuff, right? The way we want our customers to use it is how we're using it, so that, you know, that improves our quality, that improves reliability, and all those things. So, in terms of handling our customers, we talk, you know, we're there on the table day one. We talk to them about their design. I see that a lot of our customers, currently where they're at is they are trying to connect to the cloud. They all want to move towards the container, you know, the containerized services. They know that's the right thing to do. They're not quite there yet, right? The intent is definitely there, they're playing with it, but in terms of being in production, we're still, you know, a little bit off. Not too much, but we'll get there soon, right. So, we talk to them, we talk about, you know, how they can make their applications cloud ready. There's a couple of ways to do it. You lift and shift, or you know, directly move, go cloud native. >> Right, right. >> So, we have all these discussions with them. You know, what fits their bill, right?
What is good for them, what is it that's going to work for them? And then, you know, of course the connectivity piece, right, but with it security, reliability, and scale. Right, a company like Juniper obviously, you know, innovator in networking, we solve problems at a different level, right? >> Right, right. >> For our much larger customers. So, we talk to them about scale, we talk to them about, you know, reliable security is huge, right. You have a workload that you spun up on-prem, and then, now, you know, you have... Your requirements have changed, you're going to have to replicate it, say, in AWS. When you replicate it, you still want the same security that you had on-prem to apply to this workload, which is now going to be in AWS, how do you do that? It's easy with Contrail, right, because it's intent-driven. You specify the intent, in fact, you specified the intent when you brought up the first workload, and it captured it, "Okay, I'm supposed to talk to..." You know, say I'm workload red and I can only talk to other red workloads and I cannot talk to the blue workloads, something like that, right? >> Right, right. >> So, you specify the intent, and then when that red workload now comes up in AWS, it already knows that I wasn't supposed to talk to the green workload, so that policy and all the intent moves with that workload. >> Right, right. >> And this is all done through Contrail, right, and the other thing, that single pane of glass. I'm sure you've heard about it a lot today, right. The single pane of glass, you specify it one time. Again, the abstraction away from all those, you know, five clouds that you're working with, you specify the red workload, the policy for the red workload one time, and then it doesn't matter where you bring it up, Contrail will automatically apply it everywhere, and you know, it's good to go. >> That's great. >> Well, Disha, thanks for coming on, you certainly got the energy to attack this big problem, so... 
(laughing) Juniper's fortunate to have you. >> Great, thank you for having me. >> Thanks for coming on and sharing the story. >> It's been wonderful talking to you guys. >> All right, Disha, she's Lauren, I'm Jeff. You're watching theCUBE, we're at AWS re:Invent 2018. Come on down, we're in the main expo hall right by the center, thanks for watching. (techy music)
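The red/blue intent model Disha described — the policy is captured when the first workload comes up, then travels with the workload to whichever cloud it lands in — can be sketched as a toy tag-based check. The class and tag names below are invented for illustration and are not Contrail's actual policy engine; the point is only that the rule attaches to the workload's tag, not to its location:

```python
class IntentPolicy:
    """Toy model of intent-driven policy: a 'red' workload may talk only
    to other 'red' workloads, no matter which cloud it is placed in."""
    def __init__(self):
        self.allowed = {}  # tag -> set of tags it may reach

    def allow(self, src_tag, dst_tag):
        self.allowed.setdefault(src_tag, set()).add(dst_tag)

    def may_talk(self, src, dst):
        # The check uses only tags, so the verdict is the same wherever
        # the workloads happen to run.
        return dst["tag"] in self.allowed.get(src["tag"], set())

policy = IntentPolicy()
policy.allow("red", "red")  # intent captured when the first workload came up

on_prem = {"name": "db", "tag": "red", "site": "on-prem"}
replica = {"name": "db-replica", "tag": "red", "site": "aws"}  # same intent follows it
green = {"name": "analytics", "tag": "green", "site": "aws"}

print(policy.may_talk(replica, on_prem))  # True
print(policy.may_talk(replica, green))    # False
```

Because nothing in `may_talk` mentions the `site`, replicating the red workload into AWS needs no new policy: the single pane of glass specifies the rule once and it applies everywhere.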

Published Date : Nov 29 2018


Saar Gillai, Teridion | CUBEConversation, Sept 2018


 

(dramatic music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in our Palo Alto studio for a CUBE conversation. It's really a great thing that we like to take advantage of. A little less hectic than the show world and we're right in the middle of all the shows, if you're paying attention. So we're happy to have a CUBE alum on. He's been on many, many times. Saar Gillai, he's now the CEO of Teridion. And Saar, welcome. I don't think we've talked to you since you've been in this new role. >> Yeah, it's been about a year I think. >> Been about a year. So give us kind of the update on Teridion. What it's all about and really more importantly, what attracted you to the opportunity? >> Sure. First of all, great to be here. I don't know where John is. I'm looking for him. He ran away. Maybe he knew I was coming. >> Somewhere over the Atlantic I think. 35,000 feet. >> I'll follow up on that later but hey, you're here. So, you know Teridion, let's talk about maybe the challenge that Teridion is addressing first so people will understand that, right. So if you look at what's going on these days with the advent of Cloud and how people are really accessing stuff, things have really moved in the past. Most of the important services that people access were in a data center and were accessed through the LAN so the enterprise had control over them and if you wanted to access an app, if it didn't work, somebody went into the LAN, played around with some Cisco router and things maybe got better. >> But at least you had control. >> You had control and if you look at what's happened over the last decade, but certainly in the last five years, with SAS and the Cloud. Stating the obvious, more and more of your services now are actually being accessed through your WAN and in many cases, that actually means the internet itself. If you're accessing Salesforce or Box or Ignite or any of these services.
The challenge with that is that it now means that a critical part of your user experience, you don't control. The vendor doesn't control because you can make the best SAS app in the world, but those apps are increasingly very dynamic. Caching doesn't solve this problem and the problem is now, okay, but I'm experiencing it over the internet. And while the internet is a great tool obviously, it's not really built for reliability, consistency, and consistent speed. Reality, if you look at the internet, it was designed to send one packet to NORAD and tell them that some nuclear missile died somewhere. That's what it was designed for, right? So the packet will get there but the jitter and all these things may work and so what happens is that, now you have a consistency problem. Historically, people will say well, that's all been addressed through traditional caching and that's true. Caching still has its place. The reality is though that caching is more for stuff that doesn't change a lot and now, it's all very dynamic. If you're uploading a file, that's not a caching activity. If you're doing something in Salesforce, it's very dynamic. It's not cached. At Teridion, we looked at this problem. Teridion's been around about four years. I've been there for about a year. We felt that the best way to solve this problem was actually to leverage some of the Cloud technology that already exists to solve it. So what we do, actually, is we build an overlay network on top of the public Cloud surface area. So instead of traditionally, the way people did things is they would build a network themselves but today the public Cloud guys honestly are spending gazillions of dollars building infrastructure. Why not leverage it the same way that you don't buy CPUs, why buy routers? What we do is we create a massive overlay network on demand on the public Cloud surface area.
And public Cloud means not just Amazon or Google but also people like AliCloud, DigitalOcean, Vultr, any Cloud provider really, some Russian Cloud providers. And then we monitor the internet conditions and then we build a fast path. If you think about it almost like Waze, a fast path for your packet from wherever the customer is to your service thereby dramatically increasing the speed but also providing much higher reliability. >> So, a lot of thoughts. If I'm hearing right, you're leveraging the public Cloud infrastructure so they're pipes, if you will. >> And they're CPUs. >> And they're CPUs but then you're putting basically waypoints on that packet's journey to reroute to a different public Cloud infrastructure for that next leg if that's more appropriate. >> Yeah, and basically what I'm doing is I'm basically just saying if there's a, if your server's here whether they're on a public Cloud or somewhere else, it doesn't matter, and a customer is here, through some redirection, I will create a router on a public Cloud so a soft router, somewhere close from a network perspective to a user and somewhere close to the server and then between them, I'll create an overlay fast path. And then, what it goes over will be based on whatever the algorithm figures out. The way we know where to go over is we also have a sensor network distributed throughout the public Cloud surface areas and it's constantly creating a heat map of where there's capacity, where there's problems, where there's jitter and we'll create a fast path. Typically that fast path will give you, one of the challenges, I'll give you an example. So let's say you're on Comcast and let's say you've got 40 meg let's say, your connection at home. And then you connect to some server and theoretically that server has much more, right? But reality is, when you do that connection, it's not going to be 40 meg. Sometimes it's 5 meg, okay?
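The heat-map-and-fast-path idea Saar describes is, at its core, a shortest-path search over measured latencies between relay candidates. Below is a minimal sketch with invented relay names and latency figures (the real system also weighs capacity, jitter, and loss, and refreshes its measurements continuously):

```python
import heapq

def fastest_path(latency_ms, src, dst):
    """Dijkstra over a directed latency graph: latency_ms[a][b] is the
    measured latency from relay a to relay b. Assumes dst is reachable."""
    dist = {src: 0}
    prev = {}
    queue = [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in latency_ms.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical heat-map readings between a client edge, two cloud
# relays, and the origin server (milliseconds).
heat_map = {
    "client": {"aws-eu": 20, "do-ams": 15},
    "aws-eu": {"origin": 40},
    "do-ams": {"aws-eu": 5, "origin": 55},
    "origin": {},
}
path, total = fastest_path(heat_map, "client", "origin")
print(path, total)  # prints: ['client', 'aws-eu', 'origin'] 60
```

Here the default route through the nearest relay would cost 70 ms, but routing the packet via the AWS relay gets it there in 60 ms: the "Waze for packets" effect, picked purely from measurements.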
So we'll typically give you almost your full capacity that you have from your first provider all the way there by creating this fast path. >> So how does it compare, we hear things about like Direct Connect between Equinix and Amazon or a lot of peering relationships that get set up. How does what you're doing kind of compare, contrast, play, compare to those solutions? >> Direct Connect is sort of a static connection. If you have an office and you want to have a Direct Connection, it's got advantages and it's useful in certain areas. Part of the challenge there is that first of all, it has a static capacity. It's static and it has a certain capacity. What we do, because it's completely software oriented, is we'll create a connection and if you want more capacity, we'll just create more routers. So you can have as much capacity as you want from wherever you want where with Direct Connect, you say I want this connection, this connection, this much capacity and it's static. So if you have something very static, then that may be a good solution for you but if you're trying to reach people at other places and it's dynamic, and also you want variable capacities. For example, let's say you say I want to pay for what I use. I don't want to pay for a line. Historically, when you're using these things, you say okay, if the maximum I may want is 40 meg, you say okay, give me a 40 meg line. That's expensive. >> Right, right. >> But what if you say I want 40 meg only for a few hours a day, right? So in my case, you just say look, I want to do this many terabytes. And if you want to do it at 40 meg, do it at 40 meg. It doesn't matter. So it's much more dynamic and this lends itself more to the modern way of people thinking of things. Like the same way you used to own a server and you had to buy the strongest server you needed for the end of the month because maybe the finance guy needed to run something. Today you don't do that, right?
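The "create more routers when you need more capacity" model amounts to a simple scaling rule: provision soft routers against offered load instead of buying a fixed-capacity line. A toy sketch, where the per-router throughput figure is invented purely for illustration:

```python
import math

def routers_needed(demand_mbps, per_router_mbps=500):
    """Scale the soft-router count to the offered load, pay-per-use
    style, rather than provisioning for the monthly peak up front."""
    return max(1, math.ceil(demand_mbps / per_router_mbps))

# Demand varies through the day; the router count follows it.
for demand in (40, 600, 5000):
    print(demand, routers_needed(demand))
# prints: 40 1 / 600 2 / 5000 10 (one line each)
```

The contrast with a static 40 meg line is that nothing here is preconfigured: the count is recomputed from demand, so a burst like the 600,000-connections-a-minute example simply spins up more routers and releases them afterward.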
You just go to public Cloud and when it's the end of the month, you get more CPUs. We're the same thing. You just set a connection. If you need more capacity, then you'll get more capacity that you need. We had a customer that we were working with that was doing some mobile stuff in China and all of a sudden, they needed to do 600,000 connections a minute from China. And so we just scaled up. You don't have to preconfigure any of this stuff. >> Right, right. So that's really where you make the comparison of public Cloud for networking because you guys are leveraging public Cloud infrastructure, you're software-based so that you can flex so you don't have the old model. >> It's completely elastic, like I said. It's very similar. Our view is the compute in the last decade, obviously, compute has moved from a very static I own everything mode to let's use dynamic resources as much as possible. Of course, there's been a lot of advantage to that. Why wouldn't your connectivity, especially your connectivity outside which is increasing your connectivity also use that paradigm? Why do you need to own all this stuff? >> Right, right. As you said before we turned the cameras on the value proposition to your customers who are the people that basically run these big apps, is the fact that they don't have to worry about that but net is just flat out faster to execute the simple operations like uploading or downloading something to BOX. >> And again, you mentioned BOX, they're one of our big customers and we have a massive network if you think about how much BOX uploads in a given day, right? 'Cause there's a lot of their traffic that goes through us. But if you think about these SAS providers, they really need to focus on making their app as good as possible and advancing it and making it as sophisticated as possible and so, the problem is then there's this last edge which is from their server all the way to the customer, they don't really control.
But that is really important to the customer experience, right? If you're trying to upload something to BOX or trying to use some website and it's really slow, your user experience is bad. It doesn't matter if it's the internet's fault. You're still, as a customer... So this gives them control. They give us that ability and then we have control that we can give it much faster speed. Typically in the US, it may be two to five times faster. If you're going outside the US, it could be much faster sometimes. In China, we go 15 times faster. But also, it's consistent and if you have issues, we have a NOC, we monitor, we can go look at it. If some customer says I have a problem, right? We'll immediately be able to say okay, here's the problem. Maybe there's a server issue and so forth as opposed to them saying I have a problem and the SAS vendor saying well, it's fine on our side. >> Right, right. So, I'm curious on your go to market. Obviously, you said BOX is an example of a customer. You've got some other ones on the website. Who are these big application service providers, that term came up the other day, like a flashback to 1990, 1998. >> I call them SAS. >> It's funny, we were talking about the old days. >> To me, it's all the same, as a service guy. >> But then, as you go to market then going to include going out directly through the public Clouds in some of their exchanges so that basically, I could just buy a faster throughput with the existing service. Where do you go from here? I imagine, who doesn't want faster internet service period? >> Yeah, we started off going to the people who have the biggest challenge and it's easier to work with a small company, right? You want to work with a few big guys. They also help you design your solution, make sure it's good. If you can run BOX and Traffic and Ignite. Traffic can probably handle other things, last year for example.
We are looking at potentially providing some of the service, for example, if you're accessing S3 for example, we can access S3 at least three times faster. So we are looking potentially at putting something on the web where you could just go to Amazon and sign up for that. The other thing that we're looking at, which is later in the year, probably is that we haven't gotten a lot of requests from people that said hey, since the WAN is the new LAN, right, and they want to also try to use this technology for their enterprise WAN between branch offices where SD-WAN is sort of playing today, we've gotten a lot of requests to leverage this technology also in SD-WAN and so we're also looking at how that could potentially play out because again, people just say look, why can't I use this for all my WAN connectivity? Why is it only for SAS connectivity? >> Right, right. I mean it makes sense. Again, who doesn't want, the network never goes fast enough, right? Never, never, never. >> It's not only speed. I agree with you but it's not only speed. What you find, what people take for granted in the LAN but they only notice it when now they're running over the LAN is that it's a business critical service. So you want it to be consistent. If it's up, it needs to have latency, jitter, control. It needs to be consistent. It can't be one second it's great, the next second it's bad and you don't know why and visibility. No one's ever had that problem. >> I'm just laughing. I'm thinking of our favorite Comcast here. If they're not a customer, you need to get them on your list. Help make some introductions hopefully. >> So, people take that for granted when they're LAN and then when they move to the Cloud, they just assume that it's going to continue but it doesn't actually work that way. 
Then they get people from branch offices complaining that they couldn't upload a doc or the sales person was slow and all these problems happen and the bigger issue is, not only is this a problem, you don't have control. As a person providing a service, you want to have control all the way so you can say "yeah, I can see it. "I'm fixing it for you here. "I fixed it for you." And so it's about creating that connection and making it business critical. >> It's just a funny thing that we see over and over and over where cutting edge and brand new quickly becomes expected behavior very, very quickly. The best delivery by the best service, suddenly you have an expectation that that's going to be consistent across all your experiences with all your apps. So you got to deliver that QoS. >> Yeah, and I think the other thing that we notice, of course, is because of the explosion of data, right? It's true that the internet's capacity is growing but data is growing faster because people want to do more because CPUs are stronger, your handset is stronger and so, so much of it is dynamic. Like I said before, historically, some of this was solved by just let's cache everything. But today, everything is dynamic. It's bidirectional and the caching technology doesn't do that. It's not built for that. It's a different type of network. It's not built for this kind of capacity so as more and more stuff is dynamic, it becomes difficult to do these things and that's really where we play. And again, I think the key is that historically, you had to build everything. But the same way that you have all these SAS providers not building everything themselves but just building the app and then running on top of the public Cloud. The same thing is why would I go build a network when the public Cloud is investing a hundred billion dollars a year in building massive infrastructure.
Well Saar, thanks for giving us the update and stopping by and we will watch the story unfold. >> Great to be here. >> Alright. And we'll send John a message. >> I'll have to track him down. >> Alright, he's Saar, I'm Jeff. You're watching theCUBE. It's a CUBE conversation at our Palo Alto Studio. Thanks for watching. We'll see you next time. (dramatic music)

Published Date : Sep 27 2018


Shiven Ramji, Digital Ocean | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's theCUBE covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. >> Okay, welcome back everyone. We're live here in Copenhagen, Denmark. It's theCUBE's exclusive coverage of KubeCon 2018 Europe. I'm John Furrier with Lauren Cooney, my cohost this week. Our next guest Shiv Ramji, VP of Product at DigitalOcean, fast growing startup, now growing company. Congratulations, welcome to theCUBE. >> Thank you very much. >> So you guys got some hard news, you got product, Kubernetes product, and you guys just upgraded your status on CNCF. Let's jump into the product news real quick. What's the hard news? >> Yeah, so we just announced a Kubernetes product and service on our platform. And you know, we've had a lot of customers who've actually been deploying Kubernetes on our platform, either themselves or through a managed provider. And a lot of customers, specifically businesses, have been asking us to provide native support for Kubernetes. So now this is native support for Kubernetes on the DigitalOcean platform. >> What does native support for customers mean specifically? Is it managing the workload down to, how, what level of granularity, I guess, is the question. Be specific about this support. >> Yeah, yeah. So essentially, typically developers who are deploying container workloads or Kubernetes workloads do this themselves. Now we make it very, very easy. So you can come into our platform and, within a few clicks, deploy a Kubernetes cluster with your typical integrations of monitoring or container registry and the Kubernetes dashboard. >> So you basically just select a couple features and they can go from there? It's just run and gun? >> It's just a few clicks and you are running. And the reason why we did that, and sort of the history of the company has really focused on removing friction for developers to get started.
So we make it very, very easy from a product experience perspective, and also from a cost perspective. So we remove all the barriers for any team size to get started. And so that's why we've made the product very, very easy to use, very simple. And then we also plan to have a lot of tutorials around containers or containerizing an application and scaling in the microservices world. >> Lauren: That's great. >> Talk about the security aspect of it. It's been a big topic here. We were talking about it on our intro, Lauren and I, around, you know, that it's evolving in real time. Things are moving fast. Up front work needs to get done. How do your customers think about security in the context of the Kubernetes offering? >> So we have a story for that. We are trying to essentially deploy some native integrations and some open source projects that help us do security scanning, so the goal is to essentially let our customers know of vulnerabilities that they may have based on the images that they are deploying. And you know, all of us are guilty of it. We will get a public container image and launch it, and then realize that there are some security flaws. So that's something we do want to address as we continue to roll out additional features throughout this year. >> I know we've interviewed you guys before, but I want you to just take a minute and explain, for the folks watching who might not know DigitalOcean, what you guys do, your value proposition, who you guys target, how you sell the product, what's the service, all that good stuff. Share a one minute update on what you guys do. >> So we are a New York-based company that was founded in 2012 out of Techstars. And the value proposition is very simple in that we want to be the cloud platform for developers and their teams, so that they're focused on software that changes the world.
And what that means is we take all the complexity in our product development process, essentially to make it very easy for a developer to go from concept or idea to production as fast as they can. Once they get there, we want to also enable them to scale reliably on our platform. And essentially, all of the features that we've launched have been driven by customer demand. So they tell us that, hey, we're scaling on your platform, we really need these additional features, and that's how we respond. So we're very developer-obsessed, and focus on that specific persona, and help them get to the cloud as quickly as possible. >> So you're solving the problem for the developer. Big pain points are, what? >> So there are three. We think of learning as the first one, as a barrier to developers. So this is why we've built a library of tutorials. There are about 1400 plus tutorials. We get about three million unique visitors on our platform. And about 80% of our customers actually came from one of the tutorials. Right, so that's such a great source of >> Lauren: Documentation is so important. >> Documentation. So important. So that's our first one. The second one is building. This idea of let's remove all friction for you to go from zero, essentially an idea, to production as fast as possible. So there're two things we do there. One, we try to make the product very simple and easy to use. And two, we are very price competitive. So we have a very competitive price to performance ratio in the market, with the idea that you want to keep your total cost of operations as low as possible. And so, that's another reason why developers, teams, and also businesses are now, we are in their consideration set, because they're like, well developers love this product, and I can get a cost benefit. Why would I not do that? And then the last one is scaling, which is once you're growing your application, you're going to need the ability to scale and support.
And so we provide free support to all of our customers, regardless of the size of your workload or size of customer or business. And I think that's a very important value proposition for us. >> So who do you compete against? Like, who are a couple of your competitors? >> So, the best way to answer that is to see, so we go to our customers and see who they compare us with. And typically we are compared against AWS and Google. >> Lauren: Okay, okay. >> And so, they are the ones who will come to us and say, "Hey, we're about to launch an app, or we're considering moving our workloads, you know, here's what our setup looks like in Google or AWS. You know, can you provide us similar capabilities?" And a lot of the times tends to be, you know, our developers already love you. If you have this capabilities and features set, we would love to move our workloads. >> Well I think you've got a tremendous amount of active developers as well, correct? >> Yes, yes. >> So, and you're growing that exponentially. What is, kind of your growth look like, year over year? >> Yeah, so last year we signed the one millionth developer on our platform. There's essentially one million developers that have created an account on our platform. And we sometimes have developers who come in and out of our platforms, if you're done with your project, right, if you're a student. But we have about half a million active developers on our platform, and growing rapidly. And we also foster a community which is growing tremendously. So we've got about three and a half million active developers in our communities, reading articles, and going through Q&A, and posting very interesting projects. >> Those are some great numbers. I mean, they're up there with Salesforce growth. So that's tremendous. >> And also the other news is you're upgrading your membership. Cloud Native Compute Foundation, CNCF. Talk about that dynamic, why? Size, did you fall into new bucket or you guys are increasing your participation? 
What's the news? >> Yeah, I mean, we were founded really on this idea of we believe in helping the community, and so free and open source software is what we've built our business on. And so, as we got active with Kubernetes ourselves, and we've been using Kubernetes for two years internally, so we have lots of lessons of our own. And as we were bringing this product to market, it was only the right, it was the right time for us to really upgrade our membership to gold with the CNCF, with the goal of getting to their platinum level where we can contribute to standards and bodies and really influence the evolution of all the tooling around containers and microservices. So, it was the right, the timing was right, and it's the right evolution of us continuing to support the community. >> Making some good profit, contribute that, and help out CNCF. >> Shiven: Absolutely. >> As the VP of Product, you have the keys to the kingdom as they say, in the product management world. (laughing) You got to balance engineering management with product, and you got to look to the market for the, you know, the needs of the customers, and of course they're helping you. Big time developers aren't afraid to share their opinion of what they need. >> Shiven: Never. >> Pain points, that's a good, good, good, good job there. What is on the road map for you? What's next? How are you looking at short, mid, long-term evolution of DigitalOcean's product strategy? >> Yeah, so I'll break it down in three different areas. The first part is really having a core complete feature set for a modern application that's being built in the cloud. So this is where, over the last 12 months, we've developed, we've deployed, developed and deployed load balancers, cloud firewalls, object storage, block storage, a new control panel experience, and a bunch of networking features that we have released. 
And so, we have some new features coming this year, which allow you to do, you know, the VPC feature, specifically, that allows businesses to have private networking and peering. That's been a top requested feature, so that's something that's going to come later this year to round out our core platform. And then, beyond that, we have two or three different things that we're doing. So the first category is just having a better developer experience. So this is everything from the experience you have when you are launching any cloud resource, whether it's for a control panel, or API, or CLI. So, continue to make that frictionless. So we have a few updates coming there to our control panel, improvements to our API, and adding a bunch of integrations so that, if you're using different products to manage your cloud infrastructure, we make that very, very easy. The second thing is marketplaces. So, a lot of, as you know, lots of other providers have marketplaces and different versions of marketplaces. A lot of our customers and vendors are now coming to us saying, "You have a really big audience and customer base. We really want to integrate our products so we can make it easy for them to spin up those resources." So marketplaces is the second large category that we're working on later this year. We'll have a lot of updates on that. And the third one is tied to developer experience, but it's essentially the Kubernetes product that we're launching. We also have plans to enable a marketplace-like integrations, and a lot of the CICD integrations, so that once you're up and running with your cluster, you got to get your CICD pipelines and tooling working, so that's an area. >> I want to ask you about multicloud, and where you guys are at with multicloud, and kind of connecting to the other cloud providers that are competitors, but, you know, your users are going to want to use as well as your solution. 
>> Yeah, this is where I think Kubernetes fits really, really well with the multicloud story for us, which is why, sort of, why now for us. If your workloads are in Kubernetes, and this is why we are going to support all of the latest community versions that are available. If your workloads are in Kubernetes, it becomes very easy for you to move those over to our platform, and so. I think we're going to see a combination of sometimes customers will have split workloads, sometimes they'll run different types of workloads in our platform, and so I think Kubernetes really opens up that possibility >> Lauren: That's great. To do that. There's still some more tooling to be done, but that's essentially where we're at. >> How many employees you guys have now? What's the number? >> We are roughly north of 400. So still very small. >> Well, congratulations. You guys are a growing company. Great to have you on theCUBE. Thanks for sharing the news. >> Thank you very much. >> Absolutely. >> Great job. DigitalOcean. You know, hot startup, growing rapidly, I'm sure they're hiring like crazy. >> We are. >> So go check 'em out. The news here at KubeCon is positive industry. Rising tide floats all boats. That's a philosophy we have seen on theCUBE and great ecosystems, of course that's happening here. More live coverage here in Copenhagen, Denmark after this short break. Stay with us. (upbeat music)
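The multicloud portability Ramji describes rests on the fact that a Kubernetes manifest names no cloud provider: the same spec can be applied unchanged to a DigitalOcean, AWS, or Google cluster. A minimal sketch (the image and names below are illustrative, not from the interview):

```yaml
# A provider-agnostic Deployment: nothing in this spec is tied to a
# particular cloud, so the same file runs on any conformant cluster,
# which is what makes the split-workload scenario above practical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:stable
        ports:
        - containerPort: 80
```

Moving the workload between providers is then largely a matter of pointing kubectl at a different context (`kubectl config use-context`); per-provider attention is still needed for cluster-specific concerns such as storage classes and load-balancer annotations, which is the "more tooling to be done" Ramji alludes to.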

Published Date : May 2 2018


Kendall Nelson, OpenStack Foundation & John Griffith, NetApp - OpenStack Summit 2017 - #theCUBE


 

>> Narrator: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. (techno music) >> And we're back. I'm Stu Miniman joined by my co-host, John Troyer. Happy to welcome to the program two of the keynote speakers this morning, who worked on some of the container activity: Kendall Nelson, who's an Upstream Developer Advocate with the OpenStack Foundation. >> Yep. >> And John Griffith, who's a Principal Engineer from NetApp, excuse me, through the SolidFire acquisition. Thank you so much both for joining. >> Kendall Nelson: Yeah. Thank you. >> John Griffith: Thanks for havin' us. >> Stu Miniman: So you see-- >> Yeah. >> When we have any slip-ups when we're live, we just run through it.
I saw some comments on Twitter that were like, "Well, maybe we're showing Management that it's not, you know, a wizard that you just click, click, click-- >> John Griffith: Right. >> Kendall Nelson: Yeah. >> "And everything's done." There is some complexity here. You do want to have some people that know what they're doing 'cause things can break. >> Kendall Nelson: Yeah. >> I love that the container stuff was called ironic. The bare metal was ironic because-- >> Kendall Nelson: Yeah. >> Right. When you think OpenStack at first, it was like, "Oh. This is virtualized infrastructure." And therefore when containers first came out, it was like, "Wait. It's shifting. It's going away from virtualization." John, you've been on Cinder. You helped start Cinder. >> Right. >> So maybe you could give us a little bit about historical view as to where that came from and where it's goin'. Yeah. >> Yeah. It's kind of interesting, 'cause it... You're absolutely right. There was a point where, in the beginning, where virtualization was everything. Right? Ironic actually, I think it really started more of a means to an end to figure out a better way to deploy OpenStack. And then what happened was, as people started to realize, "Oh, hey. Wait." You know, "This whole bare metal thing and running these cloud services on bare metal and bare metal clouds, this is a really cool thing. There's a lot of merit here." So then it kind of grew and took on its own thing after that. So it's pretty cool. There's a lot of options, a lot of choices, a lot of different ways to run a cloud now, so... >> Kendall Nelson: Yeah. >> You want to comment on that Kendall, or... >> Oh, no. Just there are definitely tons of ways you can run a cloud and open infrastructure is really interesting and growing. >> That has been one thing that we've noticed here at the show. So my first summit, so it was really interesting to me as an outsider, right, trying to perceive the shape of OpenStack. Right? 
Here the message has actually been very clear. We're no longer having to have a one winner... You know, one-size-fits-all kind of cloud world. Like we had that fight a couple of years ago. It's clear there's going to be multiple clouds, multiple places, multiple form factors, and it was very nice people... An acknowledgement of the ecosystem, that there's a whole open source ecosystem of containers and of other open source projects that have grown up all around OpenStack, so... But I want to talk a little bit about the... And the fact that containers and Kubernetes and that app layer is actually... Doesn't concern itself with the infrastructure so much so actually is a great fit for sitting on top of or... And adjacent to OpenStack. Can you all talk a little bit about the perception here that you see with the end users and cloud builders that are here at the show and how are they starting to use containers. Do they understand the way these two things fit together? >> Yeah. I think that we had a lot of talks submitted that were focused on containers, and I was just standing outside the room trying to get into a Women of OpenStack event, and the number of people that came pouring out that were interested in the container stack was amazing. And I definitely think people are getting more into that and using it with OpenStack is a growing direction in the community. There are couple new projects that are growing that are containers-focused, like... One just came into the projects, OpenStack Helm. And that's a AT&T effort to use... I think it's Kubernetes with OpenStack. So yeah, tons. >> So yeah, it's interesting. I think the last couple of years there's been a huge uptick in the interest of containers, and not just in containers of course, but actually bringing those together with OpenStack and actually running containers on OpenStack as the infrastructure. 
'Cause to your point, what everybody wants to see, basically, is commoditized, automated and generic infrastructure. Right? And OpenStack does a really good job of that. And as people start to kind of realize that OpenStack isn't as hard and scary as it used to be... You know, 'cause for a few years there it was pretty difficult and scary. It's gotten a lot better. So deployment, maintaining, stuff like that, it's not so bad, so it's actually a really good solution to build containers on. >> Well, in fact, I mean, OpenStack has that history, right? So you've been solving a lot of problems. Right now the container world, both on the docker side and Kubernetes as well, you're dealing with storage drivers-- >> John Griffith: Yeah. >> Networking overlays-- >> Right. >> Multi-tenancy security, all those things that previous generations of technology have had to solve. And in fact, I mean, you know, right now, I'd say storage and storage interfaces actually are one of the interesting challenges that docker and Kubernetes and all that level of containers and container orchestration and spacing... I mean, it seems like... Has OpenStack already solved, in some way, it's already solved some of these problems with things like Cinder? >> Abso... Yeah. >> John Troyer: And possibly is there an application to containers directly? >> Absolutely. I mean, I think the thing about all of this... And there's a number of us from the OpenStack community on the Cinder side as well as the networking side, too-- >> Yeah. >> Because that's another one of those problem spaces. That are actually taking active roles and participating in the Kubernetes communities and the docker communities to try and kind of help with solving the problems over on that side, right? And moving forward. The fact is is storage is, it's kind of boring, but it's hard. Everybody thinks-- >> John Troyer: It's not boring. >> Yeah. >> It's really awesomely hard. Yeah. >> Everybody thinks it's, "Oh, I'll just do my own." 
It's actually a hard thing to get right, and you learn a lot over the last seven years of OpenStack. >> Yeah. >> We've learned a lot in production, and I think there's a lot to be learned from what we've done and how things could be going forward with other projects and new technologies to kind of learn from those lessons and make 'em better, so... >> Yeah. >> In terms of multicloud, hybrid cloud world that we're seeing, right? What do you see as the role of OpenStack in that kind of a multicloud deployments now? >> OpenStack can be used in a lot of different ways. It can be on top of containers or in containers. You can orchestrate containers with OpenStack. That's like the... Depending on the use case, you can plug and play a lot of different parts of it. On all the projects, we're trying to move to standalone sort of services, so that you can use them more easily with other technologies. >> Well, and part of your demo this morning, you were pulling out of a containerized repo somehow. So is that kind of a path forward for the mainline OpenStack core? >> So personally, I think it would be a pretty cool way to go forward, right? It would make things a lot easier, a lot simpler. And kind of to your point about hybrid cloud, the thing that's interesting is people have been talking about hybrid cloud for a long time. What's most interesting these days though is containers and things like Kubernetes and stuff, they're actually making hybrid cloud something that's really feasible and possible, right? Because now, if I'm running on a cloud provider, whether it's OpenStack, Amazon, Google, DigitalOcean, it doesn't matter anymore, right? Because all of that stuff in my app is encapsulated in the container. So hybrid cloud might actually become a reality, right? The one thing that's missing still (John Troyer laughs) is data, right? (Kendall Nelson laughs) Data gravity and that whole thing. So if we can figure that out, we've actually got somethin', I think. 
>> Interesting comment. You know, hybrid cloud a reality. I mean, we know the public cloud here, it's real. >> Yeah. >> With the Kubernetes piece, doesn't that kind of pull together some... Really enable some of that hybrid strategy for OpenStack, which I felt like two or three years ago it was like, "No, no, no. Don't do public cloud. >> John Griffith: Yeah. >> "It's expensive and (laughter) hard or something. "And yeah, infrastructure's easy and free, right?" (laughter) Wait, no. I think I missed that somewhere. (laughter) But yeah, it feels like you're right at the space that enables some of those hybrid and multicloud capabilities. >> Well, and the thing that's interesting is if you look at things like Swarm and Kubernetes and stuff like that, right? One of the first things that they all build are cloud providers, whether OpenStack, AWS, they're all in there, right? So for Swarm, it's pretty awesome. I did a demo about a year ago of using Amazon and using OpenStack, right? And running the exact same workloads the exact same way with the exact same tools, all from Docker machine and Swarm. It was fantastic, and now you can do that with Kubernetes. I mean, now that's just... There's nothing impressive. It's just normal, right? (Kendall Nelson laughs) That's what you do. (laughs) >> I love the demos this morning because they actually were, they were CLI. They were command-line driven, right? >> Kendall Nelson: Yeah. >> I felt at some conferences, you see kind of wizards and GUIs and things like that, but here they-- >> Yeah. >> They blew up the terminal and you were typing. It looked like you were actually typing. >> Kendall Nelson: Oh, yeah. (laughter) >> John Griffith: She was. >> And I actually like the other demo that went on this morning too, where they... The interop demo, right? >> Mm-hmm. >> John Troyer: They spun up 15 different OpenStack clouds-- >> Yeah. 
>> From different providers on the fly, right there, and then hooked up a CockroachDB, a huge cluster with all of them, right? >> Kendall Nelson: Yeah. >> Can you maybe talk... I just described it, but can you maybe talk a little bit about... That seemed actually super cool and surprising that that would happen that... You could script all that that it could real-time on stage. >> Yeah. I don't know if you, like, noticed, but after our little flub-up (laughs) some of the people during the interop challenge, they would raise their hand like, "Oh, yeah. I'm ready." And then there were some people that didn't raise their hands. Like, I'm sure things went wrong (John Troyer laughs) and with other people, too. So it was kind of interesting to see that it's really happening. There are people succeeding and not quite gettin' there and it definitely is all on the fly, for sure. >> Well, we talked yesterday to CTO Red Hat, and he was talking same thing. No, it's simpler, but you're still making a complicated distributed computing system. >> Kendall Nelson: Oh, definitely. >> Right? There are a lot of... This is not a... There are a lot of moving parts here. >> Kendall Nelson: Yeah. >> Yeah. >> Well, it's funny, 'cause I've been around for a while, right? So I remember what it was like to actually build these things on your own. (laughs) Right? And this is way better, (laughter) so-- >> So it gets your seal of approval? We have reached a point of-- >> Yeah. >> Of usability and maintainability? >> Yeah, and it's just going to keep gettin' better, right? You know, like the interop challenge, the thing that's awesome there is, so they use Ansible, and they talk to 20 different clouds and-- >> Kendall Nelson: Yeah. >> And it works. I mean, it's awesome. It's great. >> Kendall Nelson: Yeah. >> So I guess I'm hearing containers didn't kill OpenStack, as a matter of fact, it might enable the next generation-- >> Kendall Nelson: Yeah. 
>> Of what's going on, so-- >> John Griffith: Yeah. >> How about serverless? When do we get to see that in here? I actually was lookin' real quick. There's a Functions as a Service session that somebody's doing, but any commentary as to where that fits into OpenStack? >> Go ahead. (laughs) >> So I'm kind of mixed on the serverless stuff, especially in a... In a public cloud, I get it, 'cause then I just call it somebody else's server, right? >> Stu Miniman: Yeah. >> In a private context, it's something that I haven't really quite wrapped my head around yet. I think it's going to happen. I mean, there's no doubt about it. >> Kendall Nelson: Yeah. >> I just don't know exactly what that looks like for me. I'm more interested right now in figuring out how to do awesome storage in things like Kubernetes and stuff like that, and then once we get past that, then I'll start thinking about serverless. >> Yeah. >> Yeah. >> 'Cause where I guess I see is... At like an IoT edge use case where I'm leveraging a container architecture that's serverless driven, that's where-- >> Yeah. >> It kind of fits, and sometimes that seems to be an extension of the public cloud, rather than... To the edge of the public cloud rather than the data center driven-- >> John Griffith: Yeah. >> But yeah. >> Well, that's kind of interesting, actually, because in that context, I do have some experience with some folks that are deploying that model now, and what they're doing is they're doing a mini OpenStack deployment on the edge-- >> Stu Miniman: Yep. >> And using Cinder and Instance and everything else, and then pushing, and as soon as they push that out to the public, they destroy what they had, and they start over, right? And so it's really... It's actually really interesting. And the economics, depending on the scale and everything else, you start adding it up, it's phenomenal, so... >> Well, you two are both plugged into the user community, the hands-on community. 
What's the mood of the community this year? Like I said, it's my first year; everybody seems engaged. I've just randomly run into people that are spinning up their first clouds right now in 2017. So it seems like there are a lot of people here for the first time, excited to get started. What do you think the mood of the user community is like?
>> I think it's pretty good. I actually... So at the beginning of the week, I helped run the OpenStack Upstream Institute, which teaches people how to contribute to the upstream community. And there were a fair amount of users there. There are normally a lot of operators and then a set of devs, and it seemed like there were a lot more operators and users that weren't originally interested in contributing upstream that are now looking into those things. And we had a presence at DockerCon, actually. We had a booth there, and there were a ton of users coming and talking to us, like, "How can I use OpenStack with containers?" So it's getting more interest every day and growing rapidly, so...
>> That's great.
>> Yeah.
>> All right. Well, I want to thank both of you for joining us. I think this interview went flawlessly. (laughter) And yeah, thanks so much.
>> Yeah.
>> All these things happen... Live is forgiving, as we say on theCUBE, and absolutely going forward. So thanks so much for joining us.
>> John Griffith: Thank you.
>> John and I will be back with more coverage here from the OpenStack Summit in Boston. You're watching theCUBE. (funky techno music)

Published Date: May 9, 2017


SENTIMENT ANALYSIS:

ENTITIES

Entity                         Category       Confidence
John Griffith                  PERSON         0.99+
John                           PERSON         0.99+
Stu Miniman                    PERSON         0.99+
John Troyer                    PERSON         0.99+
Kendall Nelson                 PERSON         0.99+
2017                           DATE           0.99+
15                             QUANTITY       0.99+
Red Hat                        ORGANIZATION   0.99+
Kendall                        PERSON         0.99+
OpenStack Foundation           ORGANIZATION   0.99+
AT&T                           ORGANIZATION   0.99+
Boston                         LOCATION       0.99+
Amazon                         ORGANIZATION   0.99+
two                            DATE           0.99+
Boston, Massachusetts          LOCATION       0.99+
yesterday                      DATE           0.99+
two                            QUANTITY       0.99+
AWS                            ORGANIZATION   0.99+
both                           QUANTITY       0.99+
Google                         ORGANIZATION   0.99+
OpenStack Summit               EVENT          0.99+
OpenStack                      TITLE          0.99+
one thing                      QUANTITY       0.98+
20 different clouds            QUANTITY       0.98+
this year                      DATE           0.98+
three years ago                DATE           0.98+
one winner                     QUANTITY       0.98+
first time                     QUANTITY       0.98+
first year                     QUANTITY       0.98+
OpenStack Upstream Institute   ORGANIZATION   0.97+
One                            QUANTITY       0.97+
OpenStack Summit 2017          EVENT          0.97+
SolidFire                      ORGANIZATION   0.96+
CTO Red Hat                    ORGANIZATION   0.96+
one                            QUANTITY       0.95+
NetApp                         ORGANIZATION   0.95+
first clouds                   QUANTITY       0.94+
Cinder                         ORGANIZATION   0.93+
first summit                   QUANTITY       0.93+
couple of years ago            DATE           0.93+
Cinder                         TITLE          0.91+
Kubernetes                     TITLE          0.91+
this morning                   DATE           0.91+