
Udi Nachmany, Ubuntu - Google Next 2017 - #GoogleNext17 - #theCUBE


 

>> Announcer: Live, from Silicon Valley, it's theCUBE. Covering Google Cloud Next '17. (electronic music) >> Welcome back to theCUBE's live coverage of Google Next, here from our Palo Alto studio. Happy to welcome to the program a first time guest, Udi Nachmany, who is the Head of Public Cloud at Ubuntu, thank you so much for joining us. >> Thanks for having me, pleasure to be here. >> All right, so I think it goes without saying for anybody that understands the landscape: there's Cloud, there's Linux, and especially Ubuntu, you know that's going to be there. Before we get into some of these, just tell us a little bit about your role there, and inside the company. >> Sure, I've been with Canonical for about three years, and I head up our partnership with the public clouds and the public IaaS providers as a whole. >> Yeah. >> That includes Google, AWS, Azure, and many, many others. >> So can you just clarify one thing for us, though? >> Yes. >> You just said Canonical, I introduced you as Ubuntu. >> Yes. >> Which is it? How should we be referring to these two? >> Well, we are very well known for our products. >> Yeah. >> We're best known for our corporate brand and we're very happy with both names. I usually introduce myself as Udi from Ubuntu, >> Yeah. >> Slash Canonical, so we're used to that. >> Totally understand. So public cloud, give us your view on the landscape today. We want to talk specifically about some of the Google stuff, but what's happening, and what are customers coming to you for in public cloud, where does your suite play into that environment? >> Sure, Ubuntu is a very popular OS, and I think probably the most popular. The area where we're most dominant is public cloud, so a large majority of workloads on Google Cloud, Azure, the Linux part of Azure, AWS, and many, many other providers is running on Ubuntu. A lot of high-visibility services are actually developed on Ubuntu. And we have a responsibility in that. We need to make the Ubuntu experience predictable and optimized for that cloud platform and have people trust that experience, and believe in it. So that's our job on a technical level, and then on the second level, our job is to help users access support and tooling on top of that, to help them with the operational reality. Because what we see, and you've probably heard it before from Canonical, is that it's great that the licensing cost, the cost of software, has gone down, that's great news for everyone, however what a lot of people don't realize is that the cost of operations has gone up, it's skyrocketed, right? It's great Kubernetes is open source, but how do you actually spin up a cluster, how do you deal with this architecture, what does it mean for your business? So that's where we critically focus on private and public cloud. >> Yeah, it's funny. I did an interview with Brad Anderson a few years ago, and I'm like, "Customers are complaining "about licensing costs," and he starts ranting, he's like, "Licensing costs? Do you know that licensing is 6% of the overall cost of what you have?" So, look, we understand operations are difficult, so why is that such a strong fit? What do you bring, what customers do you serve, that they're choosing you in such large numbers? >> I think the two things we do well, one is we're very well-embedded in the industry and in the community, and pretty much where people are developing something exciting, they're developing it on Ubuntu and they're talking to us through the process. 
We get a really good view of their problems and challenges, as well as our own. And the second thing is we have come up with tools and frameworks to allow a lot of that knowledge to be crowdsourced, right? So a good example is our modeling platform Juju, where you can very easily get from not knowing anything about, for example, Kubernetes, into a position where you have a Kubernetes architecture running on a public cloud, like Google, or in another public cloud, or on bare metal, right? So because we tackled that, we assume that somebody's done this before you, somebody's figured this out. Take all that knowledge, encapsulate it in what we call a Charm, and take that Charm and build an architecture on Juju, on the canvas, or through the CLI. >> Okay, maybe could you compare, contrast? Google, of course, has some pretty good chops when it comes to Kubernetes, they're really trying to make some of these offerings really as a service, so ya know, what does Google do, what do you do? How do they work together? Are you actually partnering there or are you just in the community, working on things? >> Google is in this in two different ways. One is they have their own managed service GKE, and that's great, and I think for people who are all in on Google, that's probably a good way to go. You get the expertise, and you get the things that you need. Our approach, as always, is cloud-neutral and we do believe in a hybrid world. We are members of the CNCF, we're silver sponsors of the CNCF, we're very well-embedded in the Kubernetes community, and we do ship a pure upstream Kubernetes distribution that we also sell support for. So we work very closely with Google, in general, Google Cloud, on making sure Ubuntu runs well on GCE, and on the other side, we work very closely with the Kubernetes community in that ecosystem, to again, make sure that it becomes very easy to work with that solution. >> Every player that you talk to in the ecosystem gives you a different story when it comes to multi-cloud environments. Google's message tends to be pretty open. I mean, obviously, with what they're doing with Kubernetes and their position of where they are with customer adoption, they understand that a lot of people that are doing cloud aren't doing it on Google's Cloud, so they want to make it so you can live in both worlds, and we can support it. I listened to Amazon today, they're like, well, the future's going to be, we're all going to be there, we're going to hire another 100,000 people throughout all of Amazon in the US in the next 18 months. And Microsoft is trying to wrap their arms around a lot of their applications, IBM and Google are there, doing their thing. You've got visibility into customers in all of these environments due to your place in the stack. What are you seeing today? How is Google's adoption going? That's one question I have for you. And two, most customers, I would think, are running kind of multi-cloud, if you will, is the term, is that what you see? How many clouds are they doing? What are you seeing, kind of shifts in there, and I know I asked you three different questions there, but maybe you can dig into that and unpack it for us. >> Sure. I think, in terms of what the top three clouds, at least, are saying, I think it's more important to look at what they're doing. 
If you think about the AWS and VMware announcement, if you think about Azure Stack for Microsoft, I think those are clearly admissions that there is an OnPrem story and there's a hybrid story that they feel they need to address. They might believe in a world where everybody's happy on a public cloud, but they also live in reality. >> We're on a public cloud show, we're not allowed to talk about OnPrem, right? Next you're going to, like, mention OpenStack. >> Absolutely. And then, in terms of Google, I think the interesting thing Google's doing, Google are clearly in that, even in terms of size and growth, I think they're in that top three league. They are, my impression is they are focused on building the services and the applications that will attract the users, right? So they don't have this blanket approach of you must use this, because this is the best cloud ever. They actually work on making very good, specific solutions, like for big data and for other things, and Kubernetes is a good example, that will attract people and get them into that specific part of Google Cloud Platform, and hopefully in the future, using more and more. So I think they have a very interesting, more product-led approach, in that sense. >> Okay, so. >> I think I answered one question. >> Yeah, you touched on, yes, customers have public and OnPrem. >> Yeah. >> Kind of hybrid, if you will. What about public cloud, you know? Do most customers have multiple public clouds, in your data, or are they tending to get most of it on a single cloud, and maybe having a second one for some other piece? >> Yeah, I think right now, what we're seeing is a lot of people using perhaps a couple of platforms. Especially if they have a certain size. I'm putting things like sovereignty and data privacy aside, but just in terms of public cloud users, they might, again, use a specific platform for a specific service, they might use bare metal servers on SoftLayer, for example, and VMs on the cloud. By and large, the savvy users do understand that a mix is needed, which also plays to our strength, of course, with tools like Juju and Landscape, we allow you to really solve that operational problem, while being really substrate-agnostic, right? And you don't have to necessarily worry about getting locked in to one or the other. The main thing is, you can manage that, and you can focus on your app. >> All right. Udi, what's the top couple of things that customers are coming to you at these shows for? Where do they find themselves engaging with you as opposed to just, ya know, they're the developers, they're loving what you're doing? >> Sure. So the one thing I mentioned before is operations, right? I've heard about big data, I've heard about Kubernetes. What are my options? Do I hire a team? Do I get a consultant? Do I spend six months reading about this? And they're looking for that help, and I think Juju as an open-source tool, and conjure-up as a developer tool that's also open-source, really expand their options in that sense, and make it much more efficient for them to do that. And the second thing I'd say is Ubuntu is obviously very popular on public cloud, it's popular in production, so production workloads, business-critical workloads. And more and more organizations are realizing that they need to think long and hard about what that means in terms of getting the right support for it, in terms of things like security. 
As an example, this week there was a kernel vulnerability in Linux distros, I don't think it has a name yet, and we have something called the Canonical Livepatch service, which patches kernel vulnerabilities, as you can guess by the name. Now, people who have that through our support package have not felt a thing through this vulnerability. So I think we'll start to see more and more of these, where people have a lot of machines running on different substrates, and they're really worried about their uptime and what a professional support organization can help them do to maintain that uptime. >> It's real interesting times, being a company involved in open source, involved in open cloud. I want you to react, there was a quote that Vint Cerf gave at the Google event, I was listening, they had a great session with Marc Andreessen and Vint Cerf. >> Yeah, it was overcrowded. >> Go there. There was actually room if you got in, but I was glad I got up there, and Vint Cerf said, "We have to be careful about fast leading to instability." What's your take on that? I hear, when I go to a lot of these shows it's like, wow, I used to go from 18 months to six months to six weeks for my deployments. And public cloud will just update everything automatically, but that speed, ya know? As you were just talking, security is one of the issues, but there's instability, what's your take on that? And how are customers dealing with this increasing pace of change, which is the only constant that we have in our industry? >> Yeah, that's very true. I think, from conversations with customers I've had recently, I've had a few where they've been sitting around and really deliberating what they need to do with this public cloud thing that they've heard about. Trying to buy time, which eventually might lead to panicking. So a big financial institution that I met, maybe a month ago, is trying to move all in to AWS, right? Whether that's a good thing or a bad thing for them, whether it's the right thing for them, I don't think that discussion necessarily took place, it may well be the best thing for them. But they're kind of rushing in to that decision, because they took so much time to try and understand. On the other hand, you see people who are much more savvy, and understand that in terms of the rate of change, like you said, it's a constant, so you need to take ownership of your architecture. You can't be locked in to one box that solves all your problems. You need to make sure you have the operational agility and you're using the right tooling, to help you stay nimble when the next big thing comes along. Or the next little thing, which is sometimes just as scary. And I think, again, that's where we're very well placed and that's where we can have very interesting conversations. >> Really interesting stuff. Actually, I just published a case study with Citi, talking about, they use AWS, I would say tactically would be the way to put it. They build, they have a number of locations where they have infrastructure. Speed and agility are absolutely something they need as an outcome. Public cloud is a tool that they use at certain times, but not... There are things they were concerned about in how they build their architectures. Want to give you the last word. We see Canonical, Ubuntu at a lot of shows, you're involved in a lot of partnerships. What do we expect to see from your cloud group, kind of over the next six months, what shall we be keeping an eye on? 
>> I think on the private cloud side we've been doing some great work in the telco vertical, and I think you'll see us expanding into more verticals, like financial services, where we've had some good early successes. >> Can I ask, is that NFV-related? The top discussion point that I had at the OpenStack Summit last year was around NFV. Is it that specific or? >> Yeah, that's an element of it, yeah, but it's about, how do I make my private cloud as economically viable as AWS or Google or Azure would be? How do I free myself from that and enable myself to move between the substrates without making that trade-off? So I think that's on the private cloud side. And I think you're going to see more and more crossover between the world of platforms and switches and servers and the world of devices, web-connected devices. We just finished MWC in Barcelona last week. I think we're in the top 13 or 14 bars in terms of visibility, way ahead of most other OS platforms. And I think that's because our message resonates, right? It's great to have five million devices out there, but how do you actually ship a security fix? How do you ship an update? How do you ship an app, and how do you commercialize that, when you have that size of fleet? So that's a whole different kind of challenge, which, again, with the approach we have to operations, I think we are already there, in terms of offering the solution. So I think you're going to see a lot more activity on that front. And in the public cloud, I'd say it's really about continuing to work ever closer with the bigger public clouds so that you have optimized experiences on Ubuntu, on that public cloud, on your public cloud of choice. And you're going to see a lot more focus on support offerings, sold through those clouds, which makes a lot of sense, not everyone wants to buy from another supplier. It's much easier to get all your needs met through one centralized bill. So you're going to see that as well. >> Udi Nachmany, really appreciate you coming to our studio here to help us with our coverage of Google Next 2017. We'll be wrapping up day one of two days of live coverage here from the SiliconANGLE Media Studio in Palo Alto. You're watching theCUBE. (electronic music)
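For readers who want to try the two operational pieces Nachmany mentions above, modeling a Kubernetes deployment with Juju and enabling the Canonical Livepatch service, here is a minimal sketch. It is a rough illustration only: the cloud, bundle, and snap names come from Canonical's public documentation rather than from this interview, the Juju client and snapd are assumed to be installed, and the Livepatch token is a placeholder you would obtain from your own Ubuntu account.

```python
import subprocess

def run(cmd):
    """Run a CLI command and fail loudly if it does not succeed."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# --- Juju: model a Kubernetes cluster on Google Cloud ---------------------
# Assumes GCE credentials were added beforehand with `juju add-credential google`.
run(["juju", "bootstrap", "google", "gce-controller"])  # stand up a controller on GCE
run(["juju", "deploy", "kubernetes-core"])              # deploy the core Kubernetes bundle of charms
run(["juju", "status"])                                 # watch the model converge

# --- Canonical Livepatch: apply kernel fixes without rebooting -------------
LIVEPATCH_TOKEN = "<your-livepatch-token>"  # placeholder, not a real token
run(["sudo", "snap", "install", "canonical-livepatch"])
run(["sudo", "canonical-livepatch", "enable", LIVEPATCH_TOKEN])
run(["canonical-livepatch", "status"])
```

The same `juju deploy` step works against another public cloud or a MAAS-managed bare-metal substrate by bootstrapping a different controller, which is the substrate-agnostic point made in the interview.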

Published Date : Mar 9 2017


Chris Jones QA Session **DO NOT PUBLISH**


 

(upbeat music) >> Okay, welcome back everyone. I'm John Furrier here in theCUBE, in Palo Alto for "CUBE Conversation" with Chris Jones, Director of Product Management at Platform9. I've got a series of questions, had a great conversation earlier. Chris, I have a couple questions for you, what do you think? >> Let's do it, John. >> Okay, how does Platform9 Solution, you- can it be used on any infrastructure anywhere, cloud, edge, on-premise? >> It can, that's the beauty of our control plane, right? It was born in the cloud, and we primarily deliver that SaaS, which allows it to work in your data center, on bare metal, on VMs, or with public cloud infrastructure. We now give you the ability to take that control plane, install it in your data center, and then use it with anything, or even in air gap. And that includes capabilities with bare metal orchestration as well. >> Second question. How does Platform9 ensure maximum uptime, and proactive issue resolution? >> Oh, that's a good question. So if you come to Platform nine we're going to talk about always on assurance. What is driving that is a system of three components around self-healing, monitoring, and proactive assistance. So our software will heal broken things on nodes, right? If something stops running that should be running, it will attempt to restart that. We also have monitoring that's deployed with everything. So you build a cluster in AWS, well, we put open source monitoring agents, that are actually Prometheus, on every single node. That means it's resilient, right? So if you lose a node, you don't lose monitoring. But that data importantly comes back to our control plane, and that's the control plane that you can put in your data center as well. That data is what alerts us, and you as a user, anytime of the day that something's going wrong. Let's say etcd latency, good example, etcd is going slow. We'll find out, we might not be able to take restorative action immediately, but we're definitely going to reach out and say,, "You have a problem, let's get ahead of this and let's prevent that from becoming a bigger problem." And that's what we're delivering. When we say always on assurance, we're talking about self-healing, we're talking about remote monitoring, we're talking about being proactive with our customers, not waiting for the phone call or the support desk ticket saying, "Oh we think something's not working." Or worse, the customer has an outage. >> Awesome. Thanks for sharing. Can you explain the process for implementing Platform9 within a company's existing infrastructure. >> Are we doing air gap, or on-prem or SaaS approached? SaaS approach I think is by far the easiest, right? We can build a dedicated Platform9 control plane instance in a manner of minutes, for any customer. So when we do a proof of concept or onboarding, we just literally put in an email address, put in the name you want for your fully qualified domain name, and your instance is up. From that point onwards, the user can just log in, and using our CLI, talk to any number of, say, virtual machines, or physical servers in their environment for, you know, doing this in a data center or colo, and say, "I want these to be my Kubernetes control plane nodes. Here's the five of them. Here's the VIP for the load balancing, the API server and here are all of my compute nodes." And that CLI will work with the SaaS control plane, and go and build the cluster. That's as simple as it, CentOS, Ubuntu, just plain old operating system. 
Our software takes care of all the prerequisites, installing all the pieces, putting down MetalLB, CoreDNS, Metrics Server, Kubernetes dashboard, etcd backups. You built some servers. That's essentially what you've done, and the rest is being handled by Platform9. It's as simple as that. >> Great, thanks for that. What are the two traditional paths for companies considering the cloud native journey? The two paths. >> The traditional paths. I think that's your engineering team running so fast that before you even realize that you've got, you know, 10 EKS clusters. Or, hey, we can do this. You know, I've got the I can build it mentality. Let's go DIY completely open source Kubernetes on our infrastructure, and we're going to piecemeal build it all up together. They're, I think the pathways that people traditionally look at this journey, as opposed to having that third alternative saying can I just consume it on my infrastructure, be it cloud or on-premise or at the edge. >> Third is the new way, you guys do that. >> That's been our focus since the company was, you know, brought together back in the open OpenStack days. >> Awesome, what's the makeup of your customer base? Is there a certain pattern to the size or environments that you guys work with? Is there a pattern or consistency to your customer base? >> It's a spread, right? We've got large enterprises like Juniper, and we go all the way down to people with 20, 30, 50 nodes in total. We've got people in banking and finance, we've got things all the way through to telecommunications and storage infrastructure. >> What's your favorite feature of Platform9? >> My favorite feature? You know, if I ask, should I say this as a pre-sales engineer, let me show you a favorite thing. My immediate response is, I should never do this. (John laughs) To me it's just being able to define my cluster and say, go. And in five minutes I have that environment, I can see everything that's running, right? It's all unified, it's one spot, right? I'm a cluster admin. I said I wanted three control plane, 25 workers. Here's the infrastructure, it creates it, and once it's built, I can see everything that's running, right? All the applications that are there. One UI, I don't have to go click around. I'm not trying to solve things or download things. It's the fact that it's unified and just delivered in one hit. >> What is the one thing that people should know about Platform9 that they might not know about it? >> I think it's that we help developers and engineers as much as we can help our operations teams. I think, for a long time we've sort of targeted that user and said, hey, we, we really help you. It's like, but why are they doing this? Why are they building any infrastructure or any cloud platform? Well, it's to run applications and services, to help their customers, but how do they get there? There's people building and writing those things, and we're helping them, right? For the last two years, we've been really focused on making it simple, and I think that's an important thing to know. >> Chris, thanks so much, appreciate it. >> Yeah, thank you, John. >> Okay, that's theCUBE Q&A session here with Platform9. I'm John Furrier, thanks for watching. (light music)
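As a rough, generic illustration of the add-on list Jones describes above, the following sketch uses the upstream Kubernetes Python client to confirm that components such as CoreDNS, the metrics server, and the Kubernetes dashboard are actually running once a cluster is up. This is not Platform9 tooling, and the deployment names are the common upstream defaults, assumed here rather than taken from the interview.

```python
from kubernetes import client, config

# Load the kubeconfig produced by the cluster build (default path assumed).
config.load_kube_config()

apps = client.AppsV1Api()

# Add-ons the interview says are laid down automatically; names are the
# usual upstream defaults, not confirmed Platform9-specific names.
expected = {"coredns", "metrics-server", "kubernetes-dashboard"}

deployments = apps.list_deployment_for_all_namespaces()
found = {d.metadata.name for d in deployments.items}

for name in sorted(expected):
    status = "OK" if name in found else "missing"
    print(f"{name:25s} {status}")
```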

Published Date : Feb 17 2023


William Bell, PhoenixNap | VMware Explore 2022


 

(upbeat music) >> Good afternoon, everyone. Welcome back to the CUBE's day one coverage of VMware Explorer 22, live from San Francisco. I'm Lisa Martin. Dave Nicholson is back with me. Welcome back to the set. We're pleased to welcome William Bell as our next guest. The executive vice president of products at Phoenix NAP. William, welcome to the CUBE. Welcome back to the CUBE. >> Thank you, thank you so much. Happy to be here. >> Talk to us a little, and the audience a little bit about Phoenix NAP. What is it that you guys do? Your history, mission, value prop, all that good stuff. >> Absolutely, yeah. So we're global infrastructures as a service company, foundationally, we are trying to build pure play infrastructure as a service, so that customers that want to adopt cloud infrastructure but maybe don't want to adopt platform as a service and really just, you know, program themselves to a specific API can have that cloud adoption without that vendor lock in of a specific platform service. And we're doing this in 17 regions around the globe today. Yeah, so it's just flexible, easy. That's where we're at. >> I like flexible and easy. >> Flexible and easy. >> You guys started back in Phoenix. Hence the name. Talk to us a little bit about the evolution of the company in the last decade. >> Yeah, 100%. We built a data center in Phoenix expecting that we could build the centralized network access point of Phoenix, Arizona. And I am super proud to say that we've done that. 41 carriers, all three hyperscalers in the building today, getting ready to expand. However, that's not the whole story, right. And what a lot of people don't know is we founded an infrastructure as a service company, it's called Secured Servers no longer exists, but we founded that company the same time and we built it up kind of sidecar to Phoenix NAP and then we merged all of those together to form this kind of global infrastructure platform that customers can consume. >> Talk to us about the relationship with VMware. Obviously, here we are at VMware Explore. There's about seven... We're hearing 7,000 to 10,000 people here. People are ready to be back to hear from VMware and it's partner ecosystem. >> Yeah, I mean, I think that we have this huge history with VMware that maybe a lot of people don't know. We were one of the first six, the SPPs in 2011 at the end of the original kind of data center, whatever, vCloud data center infrastructure thing that they did. And so early on, there was only 10 of us, 11 of us. And most of those names don't exist anymore. We're talking, Terramark, Blue Lock, some of these guys. Good companies, but they've been bought or whatnot. And here's plucky Phoenix NAP, still, you know, offering great VMware cloud services for customers around the globe. >> What are some of the big trends that you're seeing in the market today where customers are in this multi-cloud world? You know this... I love the theme of this event. The center of the multi-cloud universe. Customers are in that by default. How do you help them navigate that and really unlock the value of it? >> Yeah, I think for us, it's about helping customers understand what applications belong where. We're very, very big believers both in the right home. But if you drill down on that right home for right applicator or right application, right home, it's more about the infrastructure choices that you're making for that application leads to just super exciting optimizations, right. 
If you, as an example, have a large media streaming business and you park it in a public called hyperscaler and you just eat those egress fees, like it's a big deal. Right? And there are other ways to do that, right. If you need a... If your application needs to scale from zero cores to 15,000 cores for an hour, you know, there are hyperscalers for that, right. And people need to learn how to make that choice. Right app, right home, right infrastructure. And that's kind of what we help them do. >> It's interesting that you mentioned the concept of being a pure play in infrastructure as a service. >> Yeah. >> At some point in the past, people would have argued that infrastructure as a service only exists because SaaS isn't good enough yet. In other words, if there's a good enough SaaS application then you don't want IaaS because who wants to mess around with IaaS, infrastructures as a service. Do you have customers who look at what they're developing as so much a core of what their value proposition is that they want to own it? I mean, is that a driving factor? >> I would challenge to say that we're seeing almost every enterprise become a SaaS company. And when that transition happens, SaaS companies actually care a lot about the cost basis, efficiency, uptime of their application. And ultimately, while they don't want to be in the data center business anymore, it doesn't mean that they want to pay someone else to do things that they feel wholly competent in doing. And we're seeing this exciting transition of open source technologies, open source platforms becoming good enough that they don't actually have to manage a lot of things. They can do it in software and the hardware's kind of abstracted. But that actually, I would say is a boon for infrastructure as a service, as an independent thing. It's been minimized over the years, right. People talk about hyperscalers as being cloud infrastructure companies and they're not. They're cloud platform companies, right. And the infrastructure is high quality. It is easy to access and scale, right, but it's ultimately, if you're just using one of those hyperscalers for that infrastructure, building VMs and doing a bunch of things yourself, you're not getting the value out of that hyperscaler. And ultimately that infrastructure's very expensive if you look at it that way. >> So it's interesting because if you look at what infrastructure consists of, which is hardware and software-- >> Yeah. >> People who said, eh, IaaS as is just a bridge to a bright SaaS future, people also will make the argument that the hardware doesn't matter anymore. I imagine that you are doing a lot of optimization with both hardware and stuff like the VMware cloud stack that you deploy as a VCPP partner. >> Absolutely, yeah. >> So to talk about that. >> Absolutely. >> I mean, you agree. I mean, if I were to just pose a question to you, does hardware still matter? Does infrastructure still matter? >> Way more than people think. >> Well, there you go. So what are you doing in that arena, specifically with VCPP? >> Yeah, absolutely. And so I think a good example of that, right, so last VMworld in person, 2019, we showcased a piece of technology that we had been working with Intel on for about two years at the time which was Intel persistent memory DC, persistent memory. Right? And we launched the first VMware cloud offering to have Intel DC persistent memory onboard. 
So that customers with the VMs that needed that technology could leverage it with the integrations in vSphere 6.7 and ultimately in seven more, right. Now I do think that was maybe a swing and a miss technology potentially but we're going to see it come back. And that specialized infrastructure deployment is a big part of our business, right. Helping people identify, you know, this application, if you'd have this accelerator, this piece of infrastructure, this quality of network can be better, faster, cheaper, right. That kind of mentality of optimization matters a lot. And VMware plays a critical role in that because it still gives the customer the operational excellence that they need without having to do everything themselves, right. And our customers rely on that a lot from VMware to get that whole story, operationally efficient, easy to manage, automated. All those things make a lot of difference to our VMware customers. >> Speaking of customers, what are you hearing, if anything, from customers, VMware customers that are your joint customers about the Broadcom acquisition? Are they excited about it? Are they concerned about it? And how do you talk about that? >> Yeah, I mean, I think that everyone that's in the infrastructure business is doing business with Broadcom, all right. And we've had so many businesses that we've been engaged with that have ultimately been a acquiree. I can say that this one feels different only in the size of the acquisition. VMware carries so much weight. VMware's brand exceeds Broadcom's brand, in my opinion. And I think ultimately, I don't know anything that's not public, right-- >> Well, they rebranded. By the way, on the point of brand, they rebranded their software business, VMware. >> Yeah. I mean, that's what I was going to say. That was the word on the street. I don't know if there's beneficial. Is that a-- >> Well, that's been-- >> But that's the word, right? >> That's what they've said. Well, but when a Avago acquired Broadcom they said, "we'll call ourselves Broadcom." >> Absolutely. Why wouldn't you? >> So yeah. So I imagine that what's been reported is likely-- >> Likely. Yeah, I 100% agree. I think that makes a ton of sense and we can start to see even more great intellectual property in software. That's where, you know, all of these businesses, CA, Symantec, VMware and all of the acquisitions that VMware has made, it's a great software intellectual property platform and they're going to be able to get so much more value out of the leadership team that VMware has here, is going to make a world of difference to the Broadcom software team. Yeah, so I'm very excited, you know. >> It's a lot of announcements this morning, a lot of technical product announcements. What did you hear in that excites you about the evolution of VMware as well as the partnership and the value in it for your customers? >> You know, one of our fastest growing parts of our business is this metal as a service infrastructure business and doing very, very... Using very specific technologies to do very interesting things, makes a big difference in our world and for our customers. So anything that's like smartNICs, disaggregated hypervisor, accelerators as a first class citizen in VMware, all that stuff makes the Phoenix NAP story better. So I'm super excited about that, right. Yeah. >> Well, it's interesting because VCPP is not a term that people who are not insiders know of. 
What they know is that there are services available in hyperscale cloud providers where you can deploy VMware. Well, you know, VMware cloud stack. Well, you can deploy those VMware cloud stacks with you. >> Absolutely. >> In exactly the same manner. However, to your point, all of this talk about disaggregation of CPU, GPU, DPU, I would argue with it, you're in a better position to deploy that in an agile way than a hyperscale cloud provider would be and foremost, I'm not trying to-- >> No, yeah. >> I'm not angling for a job in your PR department. >> Come on in. >> But the idea that when you start talking about something like metal as a service, as an adjunct or adjacent to a standard deployment of a VMware cloud, it makes a lot of sense. >> Yeah. >> Because there are people who can't do everything within the confines of what the STDC-- >> Yes. >> Consists of. >> Absolutely. >> So, I mean... Am I on the right track? >> No, you are 100% hitting it. I think that that point you made about agility to deliver new technology, right, is a key moment in our kind of delivery every single year, right. As a new chip comes out, Intel chip or Accelerator or something like that, we are likely going to be first to market by six months potentially and possibly ever. Persistent memory never launched in public cloud in any capacity but we have customers running on it today that is providing extreme value for their business, right. When, you know, the discreet GPUs coming from the just announced Flex series GPU from Intel, you're likely not going to see them in public cloud hyperscalers quickly, right. Over time, absolutely. We'll have them day one. Isolate came out, you could get it in our metal as a service platform the morning it launched on demand, right. Those types of agility points, they're not... Because they're hyperscale by nature. If they can't hyperscale it, they're not doing it, right. And I think that that is a very key point. Now, as it comes in towards VMware, we're driving this intersection of building that VCF or VMware cloud foundation which is going to be a key point of the VMware ecosystem. As you see this transition to core based licensing and some of the other things that have been talked about, VMware cloud foundation is going to be the stack that they expect their customers to adopt and deliver. And the fact that we can automate that, deliver it instantaneously in a couple of hours to hardware that you don't need to own, into networks you don't need to manage, but yet you are still in charge, keys to the kingdom, ready to go, just like you're doing it in your own data center, that's the message that we're driving for. >> Can you share a customer example that you think really just shines a big flashlight on the value that you guys are delivering? >> We definitely, you know, we had the pleasure of working with Make-A-Wish foundation for the last seven years. And ultimately, you know, we feel very compelled that every time we help them do something unique, different or what not, save money, that money's going into helping some child that's in need, right. And so we've done so many things together. VMware has stepped up as the plate over the years, done so many things with them. We've sponsored stuff. We've done grants, we've done all kinds of things. 
The other thing I would say is we are helping the City of Hope and Translational Genomics Research Institute on sequencing single cell RNA so that they can fight COVID, so that they can build cure, well, not cures but build therapies for colon cancer and things like that. And so I think that, you know, this is a driving light for us internally is helping people through efficiency and change. And that's what we're looking for. We're looking for more stories like that. We're looking... If you have a need, we're looking for people to come to us and say, "this is my problem. This is what this looks like. Let us see if we can find a solution that's a little bit different, a little bit out of the box and doesn't have to change your business dramatically." Yeah. >> And who are you talking to within customers? Is this a C level conversation? >> Yeah, I mean, I would say that we would love it to be... I think most companies would love to have that, you know, CFO conversation with every single customer. I would say VPs of engineering, increasingly, especially as we become more API centric, those guys are driving a lot of those purchasing decisions. Five years ago, I would've said director of IT, like director of IT. Now today, it's like VP of engineering, usually software oriented folks looking to deliver some type of application on top of a piece of hardware or in a cloud, right. And those guys are, you know, I guess, that's even another point, VMware's doing so much work on the API side that they don't get any credit for. Terraform, Ansible, all these integrations, VMware doing so much in this area and they just don't get any credit for it ever, right. It's just like, VMware's the dinosaur and they're just not, right. But that's the thing that people think of today because of the hype of the hyperscaler. I think that's... Yeah. >> When you're in customer conversations, maybe with prospects, are you seeing more customers that have gone all in on a hyperscaler and are having issues and coming to you guys saying help, this is getting way too expensive? >> Yeah, I think it's the unexpected growth problem or even the expected growth problem where they just thought it would be okay, but they've suffered some type of competitive pressure that they've had to optimize for and they just didn't really expect it. And so, I think that increasingly we are finding organizations that quickly adopted public cloud. If they did a full digital transformation of their business and then transformation of their applications, a lot of them now feel very locked in because every application is just reliant on x hyperscaler forever, or they didn't transform anything and they just migrated and parked it. And the bills that are coming in are just like, whoa like, how is that possible? We are typically never recommending get out of the public cloud. We are just... It's not... If I say the right home for the right application, it's by default saying that there are right applications for hyperscalers. Parking your VMware environment that you just migrated to a hyperscaler, not the right application. You know, I would love you to be with me but if you want to do that, at least go to VMC on AWS or go to OCVS or GCVE or any of those. If that's going to go with a Google or an Amazon and that's just the mandate and you're going to move your applications, don't just move them into native. Move them into a VMware solution and then if you still want to make that journey, that full transformation, go ahead and make it. 
I would still argue that that's not the most efficient way but, you know, if you're going to do anything, don't just dump it all into cloud, the native hyperscaler stuff. >> Good advice. >> So what do typical implementations look like with you guys when you're moving on premises environments into going back to the VCPP, STDC model? >> Absolutely. Do you have people moving and then transforming and re-platforming? What does that look like? What's the typical-- >> Yeah. I mean, I do not believe that anybody has fully made up their mind if exactly where they want to be. I'm only going to be in this cloud. It's an in the close story, right. And so even when we get customers, you know, we firmly believe that the right place to just pick up and migrate is to a VCPP cloud. Better cost effectiveness, typically better technology, you know service, right. Better service, right. We've been part of VMware for 12 years. We love the technology behind VMC's, now AWS is fantastic, but it's still just infrastructure without any help at all right, right. They're going to be there to support their technology but they're not going to help you with the other stuff. We can do some of those things. And if it's not us, it's another VCPP provider that has that expertise that you might need. So yes, we help you quickly, easily migrate everything to a VMware cloud. And then you have a decision point to make. You're happy where you are, you are leveraging public cloud for a certain applications. You're leveraging VMware cloud offerings for the standard applications that you've been running for years. Do you transform them? Do you keep them? What do you do? All those decisions can be made later. But I stress that repurchasing all your hardware again, staying inside your colo and doing everything yourself, it is for me, it's like a company telling me they're going to build a data center for themselves, single tenant data center. Like no one's doing that, right. But there are more options out there than just I'm going to go to Azure, right. Think about it. Take the time, assess the landscape. And VMware cloud providers as a whole, all 17,000 of us or whatever across the globe, people don't know that group of individuals of the companies is the third or fourth potentially largest cloud in the world. Right? That's the power of the VMware cloud provider ecosystem. >> Last question for you as we wrap up here. Where can the audience go to learn more about Phoenix NAP and really start test driving with you guys? >> Absolutely. Well, if you come to phoenixnap.com, I guarantee you that we will re-target you and you can click on a banner later if you don't want to stay there. (Lisa laughs) But yeah, phoenixnap.com has all the information that you need. We also put out tons of helpful content. So if you're looking for anything technology oriented and you're just, "I want to upgrade to Ubuntu," you're likely going to end up on a phoenixnap.com website looking for that. And then you can find out more about what we do. >> Awesome, phoenixnap.com. William, thank you very much for joining Dave and me, talking about what you guys are doing, what you're enabling customers to achieve as the world continues to evolve at a very dynamic pace. We appreciate your insights. >> Absolutely, thank you so much >> For our guest and Dave Nicholson, I'm Lisa Martin. You've been watching the CUBE live from VMware Explorer, 2022. Dave and I will be joined by a guest consultant for our keynote wrap at the end of the day in just a few minutes. 
So stick around. (upbeat music)

Published Date : Aug 31 2022


DockerCon2021 Keynote


 

>>Individuals create developers, translate ideas to code, to create great applications and great applications. Touch everyone. A Docker. We know that collaboration is key to your innovation sharing ideas, working together. Launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, we're doing research to save lives during a pandemic, revolutionizing, how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to Docker con 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year at DockerCon is 100% virtual and 100% free. So as to enable as many community members as possible to join us now, 100%. Virtual is also an acknowledgement of the continuing global pandemic in particular, the ongoing tragedies in India and Brazil, the Docker community is a global one. And on behalf of all Dr. Khan attendees, we are donating $10,000 to UNICEF support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right. Then no further delay. I'd like to introduce Youi Cal and Ben, gosh, over to you and Ben >>Morning, Ben, thanks for jumping on real quick. >>Have you seen the email from Scott? The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. 
So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what you use doing what changes she had made. Compare those to mine just jumped back into my dev container UI, see that I've got both of those running side by side with my changes and news changes. Okay. So she's put Molly up there rather than mobi or somebody had the same idea. So I think in a way I can make us both happy. So if we just jumped back into what we'll do, just add Molly and Moby and here I'll save that. And what we can see is, cause I'm just working within the container rather than having to do sort of rebuild of everything or serve, or just reload my content. No, that's straight the page. So what I can then do is I can come up with my browser here. Once that's all refreshed, refresh the page once hopefully, maybe twice, we should then be able to see your refresh it or should be able to see that we get Malia mobi come up. So there we go, got Molly mobi. So what we'll do now is we'll describe that state. It sends us our image and then we'll just create one of those to share with URI or share. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing and I'm actually going to change. I think that might work for both of us. I wondered if you could take a look at it. If I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the doc dashboard, it should be able to open up the code that I've changed and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on because there'll be able to run side by side with your other brunch. You already got, >>Got it. Got it. Loading here. Well, that's great. It's Molly and movie together. I love it. I think we should ship it. >>Awesome. I guess it's chip it and get on with the rest of.com. Wasn't that cool. Thank you Joey. Thanks Ben. Everyone we'll have more of this later in the keynote. So stay tuned. Let's say earlier, we've all been challenged by this past year, whether the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies, we all been touched in some way. And yet, even to miss these tragedies last year, we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, if all in teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker hub. As another example, we all witnessed the historic landing and exploration of Mars by the perseverance Rover and its ingenuity drone. >>Now what's common in these examples, these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. 
The power of teams is why we've made development teams central to Docker's mission: to build tools and content that development teams love, to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down development teams is that the path from code to cloud can be a confusing one, riddled with multiple point products, tools, and images that need to be integrated and maintained in an automated pipeline in order for teams to be productive. That's why, a year and a half ago, we refocused Docker on helping development teams make sense of all this. Specifically, our goal is to provide development teams with the trusted content, the sharing capabilities, and the pipeline integrations with best-of-breed third-party tools to help teams ship faster; in short, to provide a collaborative application development platform. 
>>Everything a team needs to build, share, and run great applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet, and it has been similar for us here at Docker. Our team had to adapt to working from home, local lockdowns caused by the pandemic, and other challenges. And despite all this, together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source, together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the Compose plugins. Building on these open source projects, we added powerful new capabilities to the Docker product, both free and subscription, for example, support for WSL 2 and Apple Silicon in Docker Desktop, and vulnerability scanning, audit logs, and image management in Docker Hub. 
>>And finally, delivering an easy-to-use, well-integrated development experience with best-of-breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercial ISVs join our Docker Verified Publisher program and over 200 open source projects join our Docker-Sponsored Open Source program. As a result of these efforts, we've seen some exciting growth in the Docker community in the 12 months since last year's DockerCon. For example, the number of registered developers grew 80% to over 8 million. These developers created many new images, increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses, totaling 13 billion pulls a month. Now, while the growth is exciting, at Docker we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses. For example, cancer researchers and their bioinformatics development team at the Washington University School of Medicine needed a way to quickly analyze their clinical trial results and then share the models, the data, and the analysis with other researchers. They use Docker because it gives them the ease of use, choice of pipeline tools, and speed of sharing so critical to their research and, most importantly, to the lives of their patients. Stay tuned for another powerful customer story later in the keynote from Matt Falk, head of engineering at Orbital Insight. 
>>So with this last year behind us, what's next for Docker? The challenges of this last year have forced changes in how development teams work, and those changes will be felt for years to come.
And what we've learned in our discussions with you will have a long-lasting impact on our product roadmap. One of the biggest takeaways from those discussions is that you and your development team want to be quicker to adapt to changes in your environment so you can ship faster. So what is Docker doing to help with this? First, trusted content. Teams that can focus their energies on what is unique to their businesses, and spend as little time as possible on undifferentiated work, are able to adapt more quickly and ship faster. In order to do so, they need to be able to trust the other components that make up their app. 
>>Together with our partners, Docker is doubling down on providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration. On a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So, as you've seen hinted at in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team. And we're enabling development teams to quickly adapt to any team configuration, all on-prem, hybrid, or all work-from-home, helping them remain productive and focused on shipping. Third, ecosystem integrations. Those development teams that can quickly take advantage of innovations throughout the ecosystem, instead of getting locked into a single monolithic pipeline, will be the ones able to deliver apps which impact their businesses faster. 
>>So together with our ecosystem partners, we are investing in more integrations with best-of-breed tools and tightly integrated, automated app pipelines. Furthermore, we'll be providing more public APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations later in the keynote. Now I'd like to take a moment to share what Docker and our partners are doing for trusted content, providing development teams access to content they can trust. Content they can trust allows them to focus their coding efforts on what's unique and differentiated. To that end, Docker and our partners are bringing more and more trusted content to Docker Hub. Docker Official Images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming languages, databases, and more. Furthermore, these are updated, patched, scanned, and certified frequently, so that no image is older than 30 days. 
>>Docker Verified Publisher images are published by more than 100 commercial ISVs. The image repos are explicitly designated as verified, so the developers searching for components for their app know that the ISV is actively maintaining the image. Docker-Sponsored Open Source projects, announced late last year, feature images for more than 200 open source communities. Docker sponsors these communities by providing free storage and networking resources and offering their community members unrestricted access. Private repos for businesses allow businesses to update and share their apps privately within their organizations using role-based access control and user authentication. And finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike.
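As a rough illustration of how a team builds on these trusted building blocks, here is a minimal sketch that starts from a Docker Official Image; the application name and files are hypothetical, not something shown in the keynote:

    # Pull a Docker Official Image and use it as a base for a hypothetical app.
    docker pull python:3.9-slim

    cat > Dockerfile <<'EOF'
    # The official image is the foundational building block; the rest is app-specific.
    FROM python:3.9-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    EOF

    # Build and tag the team's own image on top of the trusted base.
    docker build -t myorg/myapp:0.1.0 .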
>>And for all these different types of content, we provide services for both development teams and ISVs, for example, vulnerability scanning and digital signing for enhanced security, search and filtering for discoverability, packaging and updating services, and analytics about how these products are being used. All this trusted content we make available to development teams to directly discover, pull, and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with JFrog late last year, and today we're very pleased to announce our partnerships with Amazon and Mirantis for providing an integrated, seamless experience for our joint customers. Lastly, the container images themselves and this end-to-end flow are built on open industry standards, which provide all teams with flexibility and choice. Trusted content enables development teams to rapidly build, 
>>as it lets them focus on their unique, differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content, as well as remote collaboration and ecosystem integrations, later in the keynote. Now, ecosystem partners are not only integral to the Docker experience for development teams; they're also integral to a great DockerCon experience. So please join me in thanking our DockerCon sponsors and checking out their talks throughout the day. I also want to thank some others. First up, the Docker team. Like for all of you, this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product. Next, the Docker community of captains, community leaders, and contributors: with your welcoming of newcomers, enthusiasm for Docker, and open exchanges of best practices and ideas, Docker wouldn't be Docker without you. And finally, our development team customers. 
>>You trust us to help you build the apps your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the 10X developer capable of great individual feats that can transform a project. But I wonder if we as an industry have perhaps gotten this wrong by putting so much weight on the individual. As discussed at the beginning, great accomplishments, like innovative responses to COVID-19, like landing on Mars, are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is to deliver tools and content developers love to help their team succeed and become 10X teams. Thanks again for joining us; we look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks, and be well. 
>>Hi, I'm Dana Lawson, VP of engineering here at GitHub, and my job is to enable this rich, interconnected community of builders and makers to build even more and hopefully have a great time doing it. In order to enable the best platform for developers, which I know is something we are all passionate about, we need to partner across the ecosystem to ensure that developers can have a great experience across GitHub and all the tools that they want to use, no matter what they are. My team works to build the tools and relationships to make that possible. I am so excited to join Scott on this virtual stage to talk about increasing developer velocity.
So let's dive in. Now, I know this may be hard for some of you to believe, but as a former sysadmin some 21 years ago, working on Sun SPARC workstations, we've come such a long way from the random scripts and disparate systems that we stitched together to this whole inclusive developer workflow experience. 
>>Being a sysadmin then, you were just one piece of the siloed experience, but I didn't want to just push code to production, so I created scripts that did it for me. I taught myself how to code. I was the model lazy sysadmin that got dangerous, and having pushed a little too far, I realized that working in production and building features is really a team sport, and that we all have the opportunity to be customer-obsessed today. As developers, we can go beyond the traditional DevOps mindset. We can really focus on adding value to the customer experience by ensuring that our work contributes to increasing uptime and meeting our SLAs, all while being agile and productive. We get there when we move from a pass-the-baton system to having an interconnected developer workflow that increases velocity in every part of the cycle; we get to work better and smarter. 
>>And honestly, in a way that is so much more enjoyable, because we automate away all the mundane and manual and boring tasks, so we get to focus on what really matters: shipping the things that humans get to use and love. Docker has been a big part of enabling this transformation. Ten, twenty years ago we had Tomcat containers, which are not Docker containers, and for y'all hearing this for the first time, go Google it. But that was the way we built our applications; we had to segment them on the server and give them resources. Today we have Docker containers, these little mini OSes, and Docker images, and you can deploy them multiple times in an orchestrated manner with the power of Actions and Docker together. It's just so incredible what you can do. And by the way, I'm showing you Actions with Docker, which I hope you use, because both are great and free for open source. 
>>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do, anybody out there can. In this demo, I'll show you the basic components needed to create and use a packaged Docker container action. And like I said, you won't believe how awesome the combination of Docker and Actions is, because you can enable your workflow to do whatever you're trying to do. In this super baby example, it's so small you could take like ten seconds, like I am here, creating an action to do a simple task, like pushing a message to your logs. And the cool thing is you can trigger it on any event; like I said, we're going to use push. 
>>You could even order a pizza every time you roll into production if you wanted to, but at GitHub, that'd be a lot of pizzas. And the funny thing is, somebody out there has actually tried this and written that action. If you haven't used Docker and Actions together, check out the docs on either GitHub or Docker to get you started, and a huge shout-out to all those doc writers out there; I built this demo today using those instructions. And if I can do it, I know you can too. But enough yapping, let's get started. To save some time, and since a lot of us are Docker and GitHub nerds, I've already created a repo with a Dockerfile, so we're going to skip that step.
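For readers following along, the pieces Dana walks through next, a Dockerfile, the action metadata, and a workflow that runs on push, might look roughly like this; the file contents and the log message are illustrative assumptions, not the demo's exact files:

    # Hypothetical repo layout for a Docker container action.
    cat > Dockerfile <<'EOF'
    FROM alpine:3.13
    COPY entrypoint.sh /entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]
    EOF

    cat > entrypoint.sh <<'EOF'
    #!/bin/sh
    # Append the input message plus a timestamp to a log file.
    echo "$(date -u) $1" >> important.log
    cat important.log
    EOF

    cat > action.yml <<'EOF'
    name: 'log-message'
    description: 'Append an important message to a log'
    inputs:
      message:
        description: 'Message to log'
        required: true
        default: 'Hello, Mona'
    runs:
      using: 'docker'
      image: 'Dockerfile'
      args:
        - ${{ inputs.message }}
    EOF

    mkdir -p .github/workflows
    cat > .github/workflows/log-on-push.yml <<'EOF'
    name: log-on-push
    on: push
    jobs:
      log:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: ./            # the Docker container action defined in this repo
            with:
              message: 'important log stuff'
    EOF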
Next, I'm going to create the action's YAML file. And if you don't know YAML, well, for actions the metadata defines my important log stuff to capture, and the inputs and parameters to pass to the Docker container. GitHub builds an image from your Dockerfile and runs the commands in a new container 
>>using that image. The cool thing is you can use any Docker image in any language for your actions; it doesn't matter if it's Go or whatever. Today I'm going to use a shell script and an input variable to print my important log stuff to a file. And like I said, you know me, I love me some shell. So let's see this action in a workflow. When an action is in a private repo, like the one I'm demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to this super awesome action, but don't worry, in the GitHub Marketplace there are over 8,000 actions available, especially the most important one, that pizza action. So go try it out. Now, you can do this in a couple of ways, whether in your preferred IDE or, for today's demo, I'm just going to use the GUI. I'm going to navigate to my Actions tab as I've done here, and in my workflow select new workflow. It'll probably load some starter workflows to get you started, but I'm using the one I've copied, like the lazy developer I am, and I'm going to replace it with my action. 
>>That's it. So now we're going to go and commit the new file. Now, if we go over to our Actions tab, we can see the workflow in progress in my repository; I just click the Actions tab. And because we wrote the action to run on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging, the input, and the timestamp in the printed log. And we'll just wait for this to run. Hello, Mona, and boom, just like that, it runs automatically within our action. We told it to go run as soon as the files are updated, because we're doing it on push and merge. That's right, folks, in just a few minutes I built an action that writes an entry to a log file every time I push, so I don't have to do it manually. In essence, with automation you can be kind to your future self and save time and effort to focus on what really matters. 
>>Imagine what I could do with even a little more time, probably order all y'all pizzas. That is the power of the interconnected workflow, and it's amazing, and I hope you all go try it out. But why do we care about all of that? Just like in the demo, I took a manual task, which both takes time and is easy to forget, and automated it, so I don't have to think about it, and it's executed every time, consistently. That means less time for me to worry about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously automation drives developer productivity, but what is even more important to me is developer happiness. Tools like VS Code, Actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome 
>>and to get into that wonderful state that we call flow. According to research by UC Irvine and Humboldt University in Germany, it takes an average of 23 minutes to enter the optimal creative state, what we call flow, or to re-enter it after a distraction, like your dog at your office door.
So staying in flow is so critical to developer productivity, and as a developer it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling. The intuitive collaboration and automation features we built into GitHub help developers stay in flow, allowing you and your team to do so much more. To bring the benefits of automation into perspective, our annual Octoverse report by Dr. Nicole Forsgren, one of my buddies here at GitHub, took a look at developer productivity in this historic year. You know what we found? 
>>We found that public GitHub repositories that use automation merge pull requests 1.2 times faster, and the number of merged pull requests increased by 1.3 times; that is 34% more pull requests merged. In other words, automation can dramatically increase both the speed and quantity of work completed in any role. Just like in open source development, you'll work more efficiently with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest, because you don't need to do it; make the machines do it. By leveraging automation in their workflows, teams minimize manual work, reclaim that time for innovation, and maintain that state of flow in development and collaboration. More importantly, their work is more enjoyable, because they're not wasting their time doing the things that the machines or robots can do for them. 
>>And remember what I said at the beginning: many of us want to be efficient, heck, even lazy. So why would I spend my time doing something I can automate? Now, you can read more about the research behind this at octoverse.github.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem, we at GitHub are so honored to be the home of more than 65 million developers who build software together from everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build, and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it; check out this globe. This globe shows real data: every speck of light you see here represents a contribution to an open source project somewhere on Earth. 
>>These arcs reach across continents, cultures, and other divides. It's distributed collaboration at its finest. Twenty years ago we had no concept of DevOps, SecOps, or all the new ops that are going to be happening, but today's development and ops teams are connected like never before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers. Automation helps us focus on what's important and greatly accelerates innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements, I'll say, ever, including critical COVID-19 vaccine trials, as well as the first powered flight on Mars this past month. These breakthroughs were only possible because of the interconnected, collaborative open source communities on GitHub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating, so we collectively can give developers the experience
they deserve: all of the automation and beautiful UIs that we can muster, so they can continue to build the things that truly do change the world. Thank you again for having me today, DockerCon; it has been a pleasure to be here with all you nerds. 
>>Hello, I'm Justin Cormack. Lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything: security, ops, on-call, data analysis, all being piled onto them. Software's eating the world, of course, and this all makes sense in that view, but they need help. One team told us it shifted all its .NET apps to run on Linux from Windows, but its developers found the complexity of Dockerfiles based on Linux shell scripts really difficult, and we've helped make these things easier for their teams. You want to collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers, have asked for a paved road experience. You want things to just work, with simple options there for you. But it's not just the paved road; you also want to be able to go off-road and do interesting and different things. 
>>Use different components, experiment, innovate as well. We'll always offer you both those choices. At different times, different developers want different things, and it may shift from one to the other, paved road or off-road. Sometimes you want reliability and dependability, in the zone for day-to-day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you need those off-road abilities too, so you can really get under the hood and go and build something weird and wonderful and amazing that gives you new options. Docker is an independent choice. We don't own the roads. We're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as OCI, working open source with the CNCF. We want to help you get your applications from your laptops to the clouds and beyond, even into space. 
>>Let's talk about the key focus areas that frame what Docker is doing going forward. These are simplicity, sharing, flexibility, trusted content, and secure supply chain. Compared to building with the underlying kernel primitives like namespaces and cgroups, the original Docker CLI and Docker Engine were just amazing, a magical experience for everyone. They really brought those innovations and put them in a world where anyone could use them, but that's not enough. We need to continue to innovate; everyone is trying to get more done, faster, all the time, and there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the areas where we haven't made things magical enough, and that we're really planning around now, is that, you know, Docker images are the key parts of your application, but how do I do something with an image? Where do I attach volumes with this image? What's the API? Where's the SDK for this image? How do I find an example or docs? In an API-driven world, every bit of software should have an API and an API description, and our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from your code to the cloud, local and remote; you can use containers in this amazing and exciting way.
>>One thing I really noticed in the last year is that companies that started off remote-first have constant collaboration. They have Zoom calls open all day, terminal sharing, always working together. Other teams are really trying to learn how to do this style because they didn't start like that. We used to walk around to other people's desks or share services on the local office network, and it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal: let me try your container, or maybe let's collaborate on this together. You know, fast collaboration, fast iteration, fast working together. And you want to share more; you want to share whole development environments, not just an image. And we all work by seeing something someone else on our team is doing and saying, how can I do that too? We want to make that sharing really, really easy. Ben's going to talk about this more in a minute. 
>>We know how excited you are by Apple Silicon and Graviton, not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. M1 support was the most asked-for thing on our public roadmap, ever, and we listened, and we share that excitement; we see really exciting possibilities in shipping Arm applications all the way from desktop to production. We know that you all use different clouds and different places to deploy to; you know, we work with AWS and Azure and Google and more, and we want to help you ship on-prem as well. And we know that you use a huge number of languages, and containers help you build applications that use different languages for different parts of the application, or for different applications, right? You can choose the best tool. You have JavaScript nearly everywhere, Go, and of course Python for data and ML, and perhaps you're getting excited about WebAssembly after hearing about it at KubeCon; you know, there's all sorts of things. 
>>So we need to make that easier. We've been running a whole month of Python on the blog, and we're doing a month of JavaScript, because you want specific support on how do I best put this language, or that language, into production. That detail is important for you. GPUs have been difficult to use; we've added GPU support in Desktop for Windows, but we know there's a lot more to do to make the whole multi-architecture, multi-hardware, multi-accelerator world work better and also securely. So there's a lot more work to do to support you in all these things you want to do. 
>>How do we start building? We tend to think we're building our own applications, but it turns out we're using existing images as components. In a survey earlier this year, almost half of container image usage was public images rather than private images, and this is growing rapidly. Almost all software has open source components, and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. This was always the model with Docker Compose, and it's a model that you're already using. You trust Docker Official Images; we know that they make up 25% of pulls on Docker Hub, and Docker Hub provides you the widest choice and the best-supported trusted content. We're talking to people about how to make this more helpful.
We know, for example, that Ubuntu 16.04 is just going out of support, but the image doesn't yet tell you that, so we're working with Canonical to improve messaging from specific images about lifecycle and support. 
>>We know that you need more images, regularly updated, free of vulnerabilities, easy to use and discover, and Dani and Marina are going to talk about that more. This last year, the SolarWinds attack has been in the news a lot. The software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open source components. We're seeing more software supply chain attacks, because the supply chain is often an easier place to attack than production software. We need to be able to use this external code safely. Everyone needs to start from trusted sources like Docker Official Images, scan for known vulnerabilities using docker scan, which we built in partnership with Snyk and launched at DockerCon last year, and keep updating base images and dependencies. And we're going to help you have the control and understanding about your images that you need to do this. 
>>And there's more. We're also working on the Notary v2 project in the CNCF to revamp container signing, so you can tell where your software comes from. We're working on tooling to make updates easier and to help you understand and manage all the pieces of content you're using. Security is a growing concern for all of us; it's really important, and we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products, without deep partnerships with our community and the cloud providers, where most of you ship your applications to production. Simple routes that take your work and deploy it easily, reliably, and securely are really important: just get into production simply, easily, and securely. We've done a bunch of work on that, but we know there's more to do. 
>>The CNCF and the open source cloud native community are an amazing ecosystem of creators and lovely people, creating an amazingly strong community and supporting a huge amount of innovation. It has its roots in the container ecosystem, and its dreams go beyond that. Much of the innovation is focused around the operator experience so far, but developer experience is really a growing concern in that community as well, and we're really excited to work on that. We also use Kubernetes, as we know many of you do, and we know that you want it to be easier to use in your environment. We just shifted Docker Hub to run fully on Kubernetes, and we're also using many of the other projects, Argo for instance. We're spending a lot of time working with Microsoft and Amazon right now on getting Notary v2 ready to ship in the next few months. That's a really detailed piece of collaboration we've been working on for a long time, and it's really important for our community, for the security of containers and the content you're getting. Working together makes us stronger. Our community is made up of all of you, and it's always amazing to be reminded of that, a huge open source community that we are proud to work with. It's an amazing amount of innovation that you're all creating, and we're happy to work with you and share with you as well. Thank you very much, and thank you for being here.
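The loop Justin describes, start from a trusted base, scan, and keep rebuilding against updated bases, might look roughly like this for a hypothetical image; docker scan was the Snyk-powered command available at the time, and the image names here are invented for the example:

    # Start from a Docker Official Image and build the team's image on top of it.
    docker pull python:3.9-slim
    docker build -t myorg/api:1.2.0 .

    # Scan the built image for known vulnerabilities (may prompt for Snyk auth).
    docker scan myorg/api:1.2.0

    # Later: rebuild against the refreshed base to pick up upstream patches.
    docker build --pull -t myorg/api:1.2.1 .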
>>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster, and turn your application delivery into something that makes you a 10X team. What we're hearing from you, the developers using Docker every day, fits across three common themes that we hear consistently, over and over. We hear that your time is super important, it's critical, and you want to move faster. You want your tools to get out of your way and instead enable you to accelerate and focus on the things you want to be doing. And part of that is that finding great content, great application components that you can incorporate into your apps to move faster, is really hard. It's hard to discover; it's hard to find high-quality content that you can trust, that, you know, passes your tests and your configuration needs. 
>>And it's hard to create good content as well, and you're looking for more safety, more guardrails, to help guide you along the way so that you can focus on creating value for your company. Secondly, you're telling us that it's really hard to collaborate effectively with your team, and you want to do more to work more effectively together, to have your tools become more and more seamless, to help you stay in sync, both with yourself across all of your development environments and with your teammates, so that you can more effectively collaborate together, review each other's work, maintain things, and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CI pipeline, or the cloud for production, and you want every microservice to provide that consistent experience everywhere you go, so that you have similar tools, similar environments, and you don't need to worry about things getting in your way; instead, things make it easy for you to focus on what you want to do. And what Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. 
>>And this collaborative application development platform consists of multiple different pieces. I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need, from the development environment to the container images to the collaboration services to the pipelines and integrations, enabling you to focus on making your applications amazing and changing the world. If we start zooming in on one of those aspects, collaboration, what we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates, so that they can easily get up and running with the same applications, the same tooling, the same versions of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. 
>>They want to collaborate on things before they're done, rather than feeling like they have to officially publish something before they can effectively share it with others to work on it. To solve this, we're thrilled today to announce Docker Dev Environments. Docker Dev Environments transform how your team collaborates. They make creating and sharing standardized development environments
as simple as a docker pull. They make it easy to review your colleagues' work without affecting your own work, and they increase the reproducibility of your own work and decrease production issues, because you've got consistent environments all the way through. Now I'm going to pass it off to our principal product manager, Ben Gotch, to walk you through more detail on Docker Dev Environments. 
>>Hi, I'm Ben. I work as a principal program manager at Docker. One of the areas that Docker has been looking at, to see what's hard today for developers, is sharing the changes that you make in the inner loop, where the inner loop is that part of development where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try to actually ship them out to production. The way most of us build, this flow still leaves a lot of challenges. People need to jump between branches to look at each other's work, dependencies can be different when you're doing that, and doing this in this new hybrid world of work isn't any easier either. The ability to just say to someone, hey, come and check this out, has become much harder; people can't come and sit down at your desk or take your laptop away for 10 minutes to just grab it and look at what you're doing. 
>>A lot of the reason that development is hard when you're remote is that looking at changes and what's going on requires more than just code; it requires all the dependencies and everything you've got set up, that complete context of your development environment, to understand what you're doing. And solving this in a remote-first world is hard. We wanted to look at how we could make this better, and do it in a way that lets you keep working the way you do today. We didn't want you to have to use a browser; we didn't want you to have to use a new IDE. And we wanted to do this in a way that was application-centric: we wanted to let you work with the rest of the application you're already using Compose for, with all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about Docker Dev Environments. Dev environments are a new part of the Docker experience that makes it easier for you to get started with your whole inner loop working inside a container, and then to share and collaborate on more than just the code. 
>>We want to enable you to share your whole modern development environment, your whole setup from Docker, with your team on any operating system. We'll be launching a limited beta of dev environments in the coming month, and at GA, dev environments will be IDE-agnostic and support Compose. This means you'll be able to use and extend your existing Compose files to create your own development environment in whatever IDE you're working in. Dev environments are designed to be local first; they work with Docker Desktop and your existing IDE, and let you share that whole inner loop, that whole development context, with all of your teammates in just one click. This means if you want to get feedback on a work-in-progress change or a PR, it's as simple as opening another IDE instance and looking at what your team is working on. Because we're using Compose, you can just extend the existing Compose file you're already working with to actually create this whole application and have it all working in the context of the rest of the services.
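As a rough illustration of what Ben means by extending an existing Compose file, a hypothetical three-service app might be described like this; the service names, images, and ports are invented for the example and are not the exact format used by the Dev Environments feature:

    # docker-compose.yml for a hypothetical frontend + backend + database app.
    cat > docker-compose.yml <<'EOF'
    services:
      frontend:
        build: ./frontend
        ports:
          - "3000:3000"
      backend:
        build: ./backend
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:13
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app
    EOF

    # Anyone on the team can bring up the whole application context in one step.
    docker compose up -d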
>>So it's actually the whole environment you're working with, rather than one service that doesn't really make sense on its own. And with that, let's jump into a quick demo. So you can see here two dev environments up and running. The first one here is a single-container dev environment. If I want to go into that, I can use the VS Code button here; with that one open, I can get straight into my application to start making changes inside that dev container. And I've got all my dependencies in here, so I can just run it straight away. The second application I have here is one that's opened up with Compose, and I can see that I've also got my backend, my frontend, and my database, so I've got all my services running here. If I want, I can open one or more of these in a dev environment, meaning that that container, that dev environment, has the context of the whole application. 
>>So I can get back in and connect to all the other services that I need to test this application properly, all of them as one unit. And then when I've made my changes and I'm ready to share, I can hit my share button, type in the repo to share it to, and then give that image to someone to get going; they pick it up and just start working with that code and all my dependencies, as simple as pulling an image. Looking ahead, we're going to be expanding dev environments to cover more of your dependencies for the whole developer workspace. We want to look at backing up and letting you share your volumes, to make data science and database setups more repeatable, and we're pulling all of this under a single workspace for your team containing your images, your dev environments, your volumes, and more. We really want to allow you to create a fully portable Linux development environment 
>>for everyone you're working with, on any operating system. As I said, our MVP is coming next month, and that will be for VS Code, using their dev container primitive, with support for other IDEs to follow. To find out more about what's happening and what's coming up next, and to actually get a bit of a deeper dive into the experience, check out the talk I'm doing with Georgie later on today. >>Thank you, Ben, amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications. While the dev environment is like the workbench around what you're building, the application itself has all the different components, libraries, frameworks, and other code that make up the application. And we hear developers asking all the time things like, how do they know if their images are good? 
>>How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Dockerfiles, and how do they keep their images secure and up to date? Every one of those ties into, how do I create more trust? How do I know that I'm building high-quality applications? To enable you to do this even more effectively than today, we are pleased to announce the Docker Verified Publisher program. This broadens trusted content by extending beyond Docker Official Images to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect, because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story.
And finally, it simplifies your discovery of the best building blocks by making it easy for you to find things that you know you can trust, so that you can incorporate them into your applications and move on. And on the right you can see some examples of the publishers that are involved in Docker Official Images and our Docker Verified Publisher program. Now I'm pleased to introduce you to Marina Kubicki, our senior product manager, who will walk you through more about what we're doing to create a better experience for you around trust. 
>>Thank you, Dani. 
>>Mario Andretti, who is a famous Italian sports car driver, once said that if everything feels under control, you're just not driving fast enough. Mario Andretti is not a software developer, but as software developers we know that no matter how fast we need to go in order to drive the innovation that we're working on, we can never allow our applications to spin out of control. At Docker, as we continue talking to developers, what we're realizing is that in order to reach that speed, the development community is looking for the building blocks and the tools that will enable them to drive at the speed they need to go, and for the trust in those building blocks and in those tools that they will be able to maintain control over their applications. So as we think about some of the things that we can do to address those concerns, we're realizing that we can pursue them in a number of different venues, including creating reliable content and creating partnerships that expand the options for that reliable content. 
>>We're also looking at creating integrations with security tools. Talking about the reliable content, the first thing that comes to mind is Docker Official Images, which is a program that we launched several years ago. This is a set of curated, actively maintained, open source images that include operating systems and databases and programming languages, and it has become immensely popular for creating the base layers of different images and applications. What we're realizing is that many developers, instead of creating something from scratch, basically start with one of the official images as their base and then build on top of that. And this program has become so popular that it now makes up a quarter of all Docker pulls, which essentially ends up being several billion pulls every single month. 
>>As we look beyond what we can do on the open source side of the spectrum, we are very excited to announce that we're launching the Docker Verified Publisher program, which continues providing the trust around the content, but now working with some of the industry leaders in multiple verticals across the entire technology spectrum, in order to provide you with more options of images that you can use for building your applications. And it still comes back to trust: when you are searching for content in Docker Hub and you see the verified publisher badge, you know that this is content that comes from one of our partners, and you're not running the risk of pulling a malicious image from an impostor source.
>>As we look beyond what we can do for providing the reliable content, we're also looking at some of the tools and infrastructure we can provide to create security around the content that you're creating. So at last year's DockerCon we announced our partnership with Snyk, and later last year we launched our Docker Desktop and Docker Hub vulnerability scans, which give you the option of running scans at multiple points in your dev cycle. In addition to providing you with information on the vulnerabilities in your code, they also provide you with guidance on how to remediate those vulnerabilities. But as we look beyond the vulnerability scans, we're also looking at some of the other things that we can do to further ensure the integrity and security around your images. And with that, later this year we're looking to launch scoped personal access tokens, and instead of talking about them, I will simply show you what they look like. 
>>So, as you can see here, this is my page in Docker Hub, where I've created four tokens: read-write-delete, read-write, read-only, and public-repo-read-only. Earlier today I went in and logged in with my read-only token, and when I go to pull an image, it's going to allow me to pull the image, no problem, success. Then, as the next step, when I ask to push an image into the same repo, what you see is that it gives me an error message saying that access is denied, because additional authentication is required. So these are the things that we're looking to add to our roadmap as we continue thinking about what we can do to provide additional content building blocks and tools to build the trust, so that our Docker developers can ship code faster than Mario Andretti could ever imagine. Thank you. 
>>Thank you, Marina. It's amazing what we can do to improve the trusted content so that you can accelerate your development, move more quickly, move more collaboratively, and build upon the great work of others. Finally, we hear over and over, as developers are working on their applications, that they're looking for environments that are consistent, that are the same as production, and that they want their applications to really run anywhere: any environment, any architecture, any cloud. One great example is the recent announcement of Apple Silicon. We heard from developers, in an uproar, that they needed Docker to be available for that architecture before they could adopt it and be successful. And we listened. Based on that, we are pleased to share with you Docker Desktop on Apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying to Arm-based cloud environments and having a consistent architecture across your development and production, or using multi-architecture support, which enables your whole team to collaborate on its application using private repositories on Docker Hub. And I'm thrilled to introduce you to Hughie Cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience. 
>>Hi, I'm Hughie Cower, senior director of product management at Docker.
And I'd like to jump straight into a demo. This is the Mac mini with the Apple Silicon processor, and I want to show you how you can now do an end-to-end Arm workflow from my M1 Mac mini to a Raspberry Pi. As you can see, we have VS Code and Docker Desktop installed on the Mac mini. I have a small example here: I have a Raspberry Pi 3 with an LED strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here builds the application. We build the image with the docker buildx command to make the image compatible with all Raspberry Pis running arm64. Part of this build is done with the native power of the M1 chip. I also add the push option to easily share the image with my team so they can give it a try too. Now Docker 
>>creates the local image with the application and uploads it to Docker Hub. After we've built and pushed the image, we can go to Docker Hub and see the new image there. You can also explore a variety of images that are compatible with Arm processors. Now let's go to the Raspberry Pi. I have Docker already installed, and it's running 64-bit Ubuntu. With the docker run command I can run the application, and let's see what happens. You can see Docker is downloading the image automatically from Docker Hub, and when it's running, if it works right, there are some nice colors. And with that, we have an end-to-end workflow for Arm. We're continuing to invest in providing you a great developer experience that's easy to install and easy to get started with, as you saw in the demo. If you're interested in the new Mac mini, or interested in developing for Arm platforms in general, we've got you covered with the same experience you've come to expect from Docker, with over 95,000 Arm images on Hub, including many Docker Official Images. 
>>We think you'll find what you're looking for. Thank you again to the community that helped us test the tech previews; we're so delighted to hear when folks say that the new Docker Desktop for Apple Silicon just works for them. But that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is so important. We're introducing Compose V2, which makes Compose a first-class citizen in the Docker CLI; you no longer need to install a separate Compose binary in order to use Compose. Deploying to production is simpler than ever with the new Compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. And if you're interested in running slightly different services when you're debugging versus testing versus general development, you can manage that all in one place with the new Compose service profiles. To hear more about what's new in Docker Desktop, please join me in the 3:15 breakout session this afternoon. 
>>And now I'd love to tell you a bit more about buildx and convince you to try it if you haven't already. It's our next-gen build command, and it's no longer experimental. As shown in the demo, with buildx you'll be able to do multi-architecture builds and share those builds with your team and the community on Docker Hub. With buildx you can speed up your build processes with remote caches, or build all the targets in your Compose file in parallel with buildx bake. And there's so much more. If you're using Docker Desktop or Docker CE, you can use buildx. Check out Tonis's talk this afternoon at 3:45 to learn more about buildx.
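The commands behind the demo and the Compose cloud integration described above look roughly like the following sketch; the repository and context names are hypothetical, and the exact flags in the live demo may have differed:

    # On the M1 machine: build a multi-architecture image and push it to Docker Hub.
    docker buildx create --use
    docker buildx build --platform linux/arm64,linux/arm/v7,linux/amd64 \
      -t myuser/led-rainbow:latest --push .

    # On the Raspberry Pi: Docker pulls the matching architecture automatically.
    docker run --rm myuser/led-rainbow:latest

    # Compose cloud integration at the time of the talk: deploy the same app to Amazon ECS.
    docker context create ecs myecscontext
    docker --context myecscontext compose up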
And with that, I hope everyone has a great DockerCon, and back over to you, Dani. 
>>Thank you, Hughie. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work. Finally, I'd like to wrap up by showing you everything that we've announced today and everything that we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker Verified Publisher program. We've announced scoped personal access tokens to make it easier for you to have a secure CI pipeline. We've announced Docker Dev Environments to improve your collaboration with your team. We shared with you Docker Desktop on Apple Silicon, to make sure that Docker runs everywhere you need it to run. And we've announced Docker Compose version 2, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well, from audit logs to advanced image management to Compose service profiles, to improve where and how you can run Docker more easily. 
>>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain, which only grows more and more important as time goes on. We're going to be optimizing your update experience to make sure that you can easily understand the current state of your application and all its components, and keep them all current without worrying about breaking everything as you do so. We're going to make it easier for you to synchronize your work using cloud sync features. We're going to improve collaboration through dev environments and beyond, and we're going to make it easy for you to run your microservices in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better. And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker. 
>>I'm Matt Falk, I'm the head of engineering at Orbital Insight, and today I want to talk to you a little bit about data from space. So who am I? Like many of you, I'm a software developer; I've been a software developer at about seven companies so far, and now I'm a head of engineering, so I spend most of my time doing meetings, but occasionally I'll still spend time doing design discussions and code reviews. And in my free time, I still like to dabble in things like Project Euler. So who is Orbital Insight, and what do we do? Orbital Insight is a large data supplier and analytics provider, where we take geospatial data from anywhere on the planet, from any overhead sensor, and translate that into insights for the end customer. Specifically, we have a suite of high-performance artificial intelligence and machine learning analytics that run on this geospatial data. 
>>And we build them specifically to determine natural and human surface-level activity anywhere on the planet. What that really means is we take any type of data associated with a latitude and longitude, and we identify patterns so that we can detect anomalies. Everything that we do is all about identifying those patterns to detect anomalies.
So, supply chain intelligence: this is one of the use cases that we like to talk about a lot. It's one of the primary verticals that we go after right now, and as Scott mentioned earlier, this had a huge impact last year when COVID hit. Specifically, supply chain intelligence is all about identifying movement patterns to and from operating facilities to identify changes in those supply chains. How do we do this? For us, we can do things like track the movement of trucks. 
>>So, identifying trucks moving from one location to another in aggregate. We can do the same thing with foot traffic, looking at aggregate groups of people moving from one location to another and analyzing their patterns of life. We can look at two different locations to determine how people are moving from one location to another, or going back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said, last year with COVID everything changed; in particular, supply chains changed incredibly, and it was hugely important for customers to know where their goods or their products were coming from and where they were going, where there were disruptions in their supply chain, and how that was affecting their overall supply and demand. So using our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are coming from or going to. 
>>So what does our team look like? My team is currently about 50 engineers. We're spread across four different teams, and the teams are structured like this. The first team that we have is infrastructure engineering, and this team largely deals with deploying our Docker images using Kubernetes. So this team is all about taking Docker images built by other teams, sometimes building the images themselves, and putting them into our production system. Our platform engineering team produces the microservices: they produce microservice Docker images, they develop and test with them locally, and their entire environments are Dockerized. They produce these images and hand them over to infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing: they develop and test with Docker locally, and they also produce a suite of Docker images that the infrastructure team can then deploy. And lastly, we have our R&D team, and this team specifically produces machine learning algorithms using NVIDIA Docker. Collectively, we've actually built 381 Docker repositories and had 14 million 
>>Docker pulls over the lifetime of the company, just a few stats about us. But what I'm really getting at here is that you can see Docker images becoming almost a form of communication between these teams. One of the paradigms in software engineering that you're probably familiar with is encapsulation: it's really helpful for a lot of software engineering problems to break the problem down, isolate the different pieces of it, and start building interfaces between the code. This allows you to scale different pieces of the platform or different pieces of your code in different ways; it allows you to scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things that we can largely do now is use Docker images as that interface.
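The handoff Matt describes, one team publishing an image and another team deploying exactly that artifact, might look roughly like this; the registry, image name, and port are hypothetical, not Orbital Insight's actual setup:

    # Platform engineering builds, tests, and publishes the service image.
    docker build -t registry.example.com/platform/geo-service:1.4.0 ./geo-service
    docker push registry.example.com/platform/geo-service:1.4.0

    # Infrastructure engineering consumes that exact image as the interface
    # between teams and runs it in the production environment.
    docker pull registry.example.com/platform/geo-service:1.4.0
    docker run -d -p 8080:8080 registry.example.com/platform/geo-service:1.4.0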
So instead of having an entire platform where all teams are talking to each other and everything is kind of mishmashed in a monolithic application, we can now say this team is only able to talk to that team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to the next team, and that really allows us to scale our development and be much more efficient. >>Also, I'd like to say we are hiring. We have a number of open roles. We have about 30 open roles in our engineering team that we're looking to fill by the end of this year. So if any of this sounds really interesting to you, please reach out after the presentation. >>So what does our platform do, really? Our platform allows you to answer any geospatial question, and we do this with three different inputs. So first off, where do you want to look? We do this with what we call an AOI, or an area of interest. You can think of this as a polygon drawn on the map. So we have a curated data set of almost 4 million AOIs, which you can go and search and use for your analysis, but you're also free to build your own. The second question is what you want to look for. We do this with the more interesting part of our platform, our machine learning and AI capabilities. So we have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and the different locations that people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button, and that's how you determine what you want to look for. >>Lastly, you determine when you want to find what you're looking for. So that's just, you know, do you want to look for the next three hours? Do you want to look for the last week? Do you want to look every month for the past two? Whatever the time cadence is, you decide that, you hit go, and out pops a time series. And that time series tells you, for the specific place you wanted to look and the specific thing you wanted to look for, how many, or what percentage, of the thing you're looking for appears in that area. Again, we do all of this to work towards patterns. So we use all this data to produce a time series. From there, we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chain, this is extremely valuable to identify where things change. So we can answer these questions looking at a particular operating facility: what is happening with the level of activity at that operating facility, where people are coming from, where they're going to after visiting that particular facility, and when and where that changes. Here, you can see a picture of our platform. It's actually showing all the devices in Manhattan over a period of time, and it's more of a heat map view, so you can actually see the hotspots in the area. >>So really, and this is the heart of the talk, what happened in 2020? For me, you know, like many of you, 2020 was a difficult year. COVID hit, and that changed a lot of what we were doing, not just from an engineering perspective, but from an entire company perspective. For us, the motivation really became to make sure that we were lowering our costs and increasing innovation simultaneously. Now, those two things often compete with each other.
A lot of times, if you want to increase innovation, that's going to increase your costs, but the challenge last year was how to do both simultaneously. So here are a few stats for you from our team. In Q1 of last year, we were spending almost $600,000 per month on compute costs. Prior to COVID happening, that wasn't a huge concern for us. It was a lot of money, but it wasn't as critical as it was last year, when we really needed to be much more efficient. >>The second one is flexibility. We were deployed on a single cloud environment, and while we were cloud ready, and that was great, we wanted to be more flexible. We wanted to be on more cloud environments so that we could reach more customers, and also eventually get onto classified networks, extending the base of our customers as well. From a custom analytics perspective, this is where we get into our traction. So last year, over the entire year, we computed 54,000 custom analytics for different users. We wanted to make sure that this number was steadily increasing despite us trying to lower our costs. We didn't want the lower costs to come at the sacrifice of our user base. Lastly, a particular percentage here that I'll say definitely needs to be improved: 75% of our projects never fail. So this is where we start to get into a bit of the stability of our platform. >>Now, I'm not saying that 25% of our projects fail. The way we measure this is, if you have a particular project or computation that runs every day and any one of those runs fails, we count that as a failure, because from an end-user perspective, that's an issue. So this is something that we knew we needed to improve on; we needed to grow and make our platform more stable, and it's something that we really focused on last year. So where are we now? Now, coming out of the COVID valley, we are starting to soar again. Back in April of last year, we actually paused all development for about four weeks and had the entire engineering team focused on reducing our compute costs in the cloud. We got it down to 200K over the period of a few months. >>And for the next 12 months, we hit that number every month. This is huge for us. This is extremely important, like I said, in the COVID time period where costs and operating efficiency were everything. So for us to do that, that was a huge accomplishment last year and something we'll keep going forward. One thing I would actually like to really highlight here, too, is what allowed us to do that. So first off, being in the cloud and being able to migrate things like that, that was one thing. And we were able to use the different cloud services in a more efficient way. We had very detailed tracking of how we were spending things. We increased our data retention policies. We optimized our processing. However, one additional piece was switching to new technologies; in particular, we migrated to GitLab CI/CD. >>And this is something that, because we use Docker, was extremely, extremely easy. We didn't have to go build new code, containers, or repositories, or change our code in order to do this. We were simply able to migrate the containers over and start using the new CI. So much so, in fact, that we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds. We were able, over the last nine months, to get up and running in the second cloud environment.
And again, this is something that Docker helped with incredibly. We didn't have to go and build all new interfaces to all the different services or all the different tools in the next cloud provider. All we had to do was build a base cloud infrastructure that abstracts away all the different details of the cloud provider. >>And then our Dockers just worked. We could move them to another environment, get them up and running, and our platform was ready to go. From a traction perspective, we're about a third of the way through the year, and at this point we've already exceeded the amount of customer analytics we produced last year. And this is thanks to a ton more algorithms, that whole suite of new analytics that we've been able to build over the past 12 months, and we'll continue to build going forward. So this is a really, really great outcome for us, because we were able to show that our costs are staying down while our analytics and our customer traction keep growing. Honestly, from a stability perspective, we improved from 75% to 86%, not quite yet 99, or three nines, or four nines, but we are getting there. And this is actually thanks to really containerizing and modularizing different pieces of our platform so that we could scale up in different areas. This allowed us to increase that stability. This piece of the code works over here and talks to an interface to the rest of the system, so we can scale this piece up separately from the rest of the system, and that allows us to much more easily identify issues in the system, fix those, and then correct the system overall. So basically, this is a summary of where we were last year, where we are now, and how much more successful we are now because of the issues that we went through last year, largely brought on by COVID. >>This is just a screenshot of our solution actually working on supply chain. In particular, it is showing traceability of a distribution warehouse in Salt Lake City. It's right in the center of the screen here; you can see the nice kind of orange-red center. That's a distribution warehouse, and all the lines outside of that, all the dots outside of that, are showing where people and trucks are moving from that location. So this is really helpful for supply chain companies, because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.
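To make the pattern Matt describes a little more concrete, here is a minimal sketch of that hand-off: one team builds and publishes a Docker image, and another team (or another cloud) runs it unchanged. The registry URL, image name, deployment name, and cluster contexts below are illustrative placeholders, not Orbital Insight's actual setup.

```sh
# Platform team: build the microservice image and publish it to a shared registry
docker build -t registry.example.com/platform/ingest-service:1.4.2 .
docker push registry.example.com/platform/ingest-service:1.4.2

# Infrastructure team: deploy the exact same image, whichever cloud the cluster lives in
kubectl --context cloud-a set image deployment/ingest-service \
  ingest-service=registry.example.com/platform/ingest-service:1.4.2
kubectl --context cloud-b set image deployment/ingest-service \
  ingest-service=registry.example.com/platform/ingest-service:1.4.2
```

Because the image itself is the interface, the consuming team never needs the producing team's source tree, and the same artifact moves between cloud providers without modification.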

Published Date : May 27 2021

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo: you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation. I've got the Keynote file there, YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance: when was the last time you pulled an image and had 100% confidence you knew what was inside it, where it was built, how it was built, when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can kind of prevent that is through the use of labels. We can use labels to address security, address some of the simplicity on how to run these images. So think of it kind of like self-documenting. Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically, what is the schema? It's just a key-value. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store it in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files that built this image, where's the Docker file that built it? What's the commit number? That might be interesting in terms of tracking the resulting image to a person or to a commit, hopefully then to a person.
How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Docker file on your own? Having a label specifically dedicated on how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These kind of all, not only talk about continuous integration, CI but also start to talk about security. Specifically what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example of, in my demo of a policy enforcement. So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was a modified to opencontainers, org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of like a standardization. So obviously, Author's, an email address, so now the image is attributable to a person, that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Docker file and all the assets? How it was built, build number, build server the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthZendpoint. Now obviously, the health check directive should be in the Docker file. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's simple declarative And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely the Kubernetes? Well, actually, you can and I have a demo to show you how to kind of take advantage of that. So how do we create labels? And really creating labels as a function of build time okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Docker file, which I'm a big fan of, because it's declarative. It's in version control. It's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static kind of declaration to more a dynamic with build arguments. And I can show you, I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build--label key equals value. I'm not a big fan of the third one, I love the first one and obviously the second one. Being dynamic we can take advantage of some of the variables coming out of version control. Or I should say, some of the variables coming out of our CI system. And that way, it self documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, you can obviously, it's going to output as JSON. So you going to use something like JQ to crack it open and look at the individual labels. 
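As a rough sketch of the pattern Andy is describing (build arguments flowing into OCI-style labels at build time, then read back with jq), the snippet below shows the general shape. It is not his exact Dockerfile, which lives in the andyc.info/dc20 repo; the base image, image name, and build-arg values here are placeholders.

```sh
# Sketch only: ARG values passed at build time become queryable labels on the image.
cat > Dockerfile <<'EOF'
FROM alpine:3.12
ARG BUILD_DATE
ARG BUILD_NUMBER
ARG GIT_COMMIT
LABEL org.opencontainers.image.authors="andy@stackrox.com" \
      org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.opencontainers.image.version="${BUILD_NUMBER}"
EOF

# The CI system (Jenkins in the demo, but any CI works) supplies the dynamic values.
docker build \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --build-arg BUILD_NUMBER="${BUILD_NUMBER:-12}" \
  --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
  -t example/flask-demo:dev .

# Crack the result open with jq to read the labels back.
docker inspect example/flask-demo:dev | jq '.[0].Config.Labels'
```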
Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps kind of in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode from the label itself from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all this extra levels of abstraction inherently, if you use it as a label with a kubectl apply, It's just built in. It's kind of like the kiss approach to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Docker file. I've got my compose and my Kubernetes YAML. So let's take a look at the Docker file, right? So it's a simple Alpine image. The org statements are the build time arguments that are passed in. Label, so again, I'm using the org.opencontainers.image.blank, for most of them. There's a typo there. Let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and get commit are derived from the Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual Docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline and I have four major stages, four stages, I have built. And here in build, what I do is I actually do the Git clone. And then I do my docker build. From there, I actually tell the Jenkins StackRox plugin. So that's what I'm using for my security scanning. So go ahead and scan, basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Where I can see the, basically I'm pushing the image up to Hub so such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal, in your environment, you probably want to use separate registries and non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. 
And it says image required, required image label alert, right? Request that the maintainer, add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo, we didn't actually pass in. So if we go back to our repo, we didn't pass in the the build time argument, we just passed in the word. So let's fix that real quick. That's the Docker file. Let's go ahead and put our dollar sign in their. First day with the fingers you going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the Console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date and the date gets derived on the command line. With the build arguments, there's the base64 encoded of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom layer exists and successful. So here's where we can see no system policy violations profound marking stack regimes security plugin, build step as successful, okay? So we're actually able to do policy enforcement that that image exists, that that label sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there's it is in my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. What if, let's go ahead and take a look at all of the image, all the labels for a second, let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, author's build, commit number, look at the commit number. It was built today build number 12. We saw that right? Delete, build 12. So that's kind of cool dynamic labels. Name, healthz, right? But what we're looking for is we're going to look at the org.zdockerketers label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard end. So now we've actually used that label. From the image that we've queried with skopeo, from a remote registry to deploy locally to our Kubernetes cluster. So let's go ahead and look everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.doctor.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer 'cause the background image in the text. 
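For reference, the registry-side flow Andy just demonstrated (query the labels with skopeo, base64-decode the embedded Kubernetes YAML, and pipe it into kubectl) might look roughly like this on the command line. The image reference and label key are stand-ins rather than the demo's exact names, and the commands assume skopeo, jq, and kubectl are already installed and configured.

```sh
# Read labels straight from the registry; no docker pull required.
skopeo inspect docker://docker.io/example/flask-demo:prod | jq '.Labels'

# Decode the Kubernetes manifest stored in a label and apply it directly to the cluster.
skopeo inspect docker://docker.io/example/flask-demo:prod \
  | jq -r '.Labels["org.zdocker.kubernetes"]' \
  | base64 -d \
  | kubectl apply -f -
```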
We can go ahead and refresh it a couple times we've got Redis storing a hit counter. We can see that our server name is roundrobing. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 Vms. I'm using K3s instead of full Kubernetes either full Rancher, full Open Shift or Docker Enterprise. I think K3s has some really interesting advantages on the development side and it's kind of intended for IoT but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels is none of this demo stack is required. You can be in any cloud, you can be in CentOs, you can be in any Kubernetes. You can even be in swarm, if you wanted to, or Docker compose. Any Ingress, any CI system, Jenkins, circle, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable, with any comparative product in that category. So I'd like to, again, point you guys to the andyc.infodc20, that's take you right to the GitHub repo. You can reach out to me at any of the socials @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music) >> Narrator: Live from Las Vegas It's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with it's ecosystem partners. >> Okay, welcome back everyone theCUBE's live coverage of AWS re:Invent 2019. This is theCUBE's 7th year covering Amazon re:Invent. It's their 8th year of the conference. I want to just shout out to Intel for their sponsorship for these two amazing sets. Without their support we wouldn't be able to bring our mission of great content to you. I'm John Furrier. Stu Miniman. We're here with the chief of AWS, the chief executive officer Andy Jassy. Tech athlete in and of himself three hour Keynotes. Welcome to theCUBE again, great to see you. >> Great to be here, thanks for having me guys. >> Congratulations on a great show a lot of great buzz. >> Andy: Thank you. >> A lot of good stuff. Your Keynote was phenomenal. You get right into it, you giddy up right into it as you say, three hours, thirty announcements. You guys do a lot, but what I liked, the new addition, the last year and this year is the band; house band. They're pretty good. >> Andy: They're good right? >> They hit the queen notes, so that keeps it balanced. So we're going to work on getting a band for theCUBE. >> Awesome. >> So if I have to ask you, what's your walk up song, what would it be? >> There's so many choices, it depends on what kind of mood I'm in. But, uh, maybe Times Like These by the Foo Fighters. >> John: Alright. >> These are unusual times right now. >> Foo Fighters playing at the Amazon Intersect Show. >> Yes they are. >> Good plug Andy. >> Headlining. >> Very clever >> Always getting a good plug in there. >> My very favorite band. 
Well congratulations on the Intersect you got a lot going on. Intersect is a music festival, I'll get to that in a second But, I think the big news for me is two things, obviously we had a one-on-one exclusive interview and you laid out, essentially what looks like was going to be your Keynote, and it was. Transformation- >> Andy: Thank you for the practice. (Laughter) >> John: I'm glad to practice, use me anytime. >> Yeah. >> And I like to appreciate the comments on Jedi on the record, that was great. But I think the transformation story's a very real one, but the NFL news you guys just announced, to me, was so much fun and relevant. You had the Commissioner of NFL on stage with you talking about a strategic partnership. That is as top down, aggressive goal as you could get to have Rodger Goodell fly to a tech conference to sit with you and then bring his team talk about the deal. >> Well, ya know, we've been partners with the NFL for a while with the Next Gen Stats that they use on all their telecasts and one of the things I really like about Roger is that he's very curious and very interested in technology and the first couple times I spoke with him he asked me so many questions about ways the NFL might be able to use the Cloud and digital transformation to transform their various experiences and he's always said if you have a creative idea or something you think that could change the world for us, just call me he said or text me or email me and I'll call you back within 24 hours. And so, we've spent the better part of the last year talking about a lot of really interesting, strategic ways that they can evolve their experience both for fans, as well as their players and the Player Health and Safety Initiative, it's so important in sports and particularly important with the NFL given the nature of the sport and they've always had a focus on it, but what you can do with computer vision and machine learning algorithms and then building a digital athlete which is really like a digital twin of each athlete so you understand, what does it look like when they're healthy and compare that when it looks like they may not be healthy and be able to simulate all kinds of different combinations of player hits and angles and different plays so that you could try to predict injuries and predict the right equipment you need before there's a problem can be really transformational so we're super excited about it. >> Did you guys come up with the idea or was it a collaboration between them? >> It was really a collaboration. I mean they, look, they are very focused on players safety and health and it's a big deal for their- you know, they have two main constituents the players and fans and they care deeply about the players and it's a-it's a hard problem in a sport like Football, I mean, you watch it. >> Yeah, and I got to say it does point out the use cases of what you guys are promoting heavily at the show here of the SageMaker Studio, which was a big part of your Keynote, where they have all this data. >> Andy: Right. >> And they're data hoarders, they hoard data but the manual process of going through the data was a killer problem. This is consistent with a lot of the enterprises that are out there, they have more data than they even know. So this seems to be a big part of the strategy. How do you get the customers to actually wake up to the fact that they got all this data and how do you tie that together? >> I think in almost every company they know they have a lot of data. 
And there are always pockets of people who want to do something with it. But, when you're going to make these really big leaps forward; these transformations, the things like Volkswagen is doing where they're reinventing their factories and their manufacturing process or the NFL where they're going to radically transform how they do players uh, health and safety. It starts top down and if the senior leader isn't convicted about wanting to take that leap forward and trying something different and organizing the data differently and organizing the team differently and using machine learning and getting help from us and building algorithms and building some muscle inside the company it just doesn't happen because it's not in the normal machinery of what most companies do. And so it always, almost always, starts top down. Sometimes it can be the Commissioner or CEO sometimes it can be the CIO but it has to be senior level conviction or it doesn't get off the ground. >> And the business model impact has to be real. For NFL, they know concussions, hurting their youth pipe-lining, this is a huge issue for them. This is their business model. >> They lose even more players to lower extremity injuries. And so just the notion of trying to be able to predict injuries and, you know, the impact it can have on rules and the impact it can have on the equipment they use, it's a huge game changer when they look at the next 10 to 20 years. >> Alright, love geeking out on the NFL but Andy, you know- >> No more NFL talk? >> Off camera how about we talk? >> Nobody talks about the Giants being 2 and 10. >> Stu: We're both Patriots fans here. >> People bring up the undefeated season. >> So Andy- >> Everybody's a Patriot's fan now. (Laughter) >> It's fascinating to watch uh, you and your three hour uh, Keynote, uh Werner in his you know, architectural discussion, really showed how AWS is really extending its reach, you know, it's not just a place. For a few years people have been talking about you know, Cloud is an operational model its not a destination or a location but, I felt it really was laid out is you talked about Breadth and Depth and Werner really talked about you know, Architectural differentiation. People talk about Cloud, but there are very-there are a lot of differences between the vision for where things are going. Help us understand why, I mean, Amazon's vision is still a bit different from what other people talk about where this whole Cloud expansion, journey, put ever what tag or label you want on it but you know, the control plane and the technology that you're building and where you see that going. >> Well I think that, we've talked about this a couple times we have two macro types of customers. We have those that really want to get at the low level building blocks and stitch them together creatively however they see fit to create whatever's in their-in their heads. And then we have the second segment of customers that say look, I'm willing to give up some of that flexibility in exchange for getting 80% of the way there much faster. In an abstraction that's different from those low level building blocks. And both segments of builders we want to serve and serve well and so we've built very significant offerings in both areas. 
I think when you look at microservices um, you know, some of it has to do with the fact that we have this very strongly held belief born out of several years of Amazon where you know, the first 7 or 8 years of Amazon's consumer business we basically jumbled together all of the parts of our technology in moving really quickly and when we wanted to move quickly where you had to impact multiple internal development teams it was so long because it was this big ball, this big monolithic piece. And we got religion about that in trying to move faster in the consumer business and having to tease those pieces apart. And it really was a lot of impetus behind conceiving AWS where it was these low level, very flexible building blocks that6 don't try and make all the decisions for customers they get to make them themselves. And some of the microservices that you saw Werner talking about just, you know, for instance, what we-what we did with Nitro or even what we did with Firecracker those are very much about us relentlessly working to continue to uh, tease apart the different components. And even things that look like low level building blocks over time, you build more and more features and all of the sudden you realize they have a lot of things that are combined together that you wished weren't that slow you down and so, Nitro was a completely re imagining of our Hypervisor and Virtualization layer to allow us, both to let customers have better performance but also to let us move faster and have a better security story for our customers. >> I got to ask you the question around transformation because I think that all points, all the data points, you got all the references, Goldman Sachs on stage at the Keynote, Cerner, I mean healthcare just is an amazing example because I mean, that's demonstrating real value there there's no excuse. I talked to someone who wouldn't be named last night, in and around the area said, the CIA has a cost bar like this a cost-a budget like this but the demand for mission based apps is going up exponentially, so there's need for the Cloud. And so, you see more and more of that. What is your top down, aggressive goals to fill that solution base because you're also a very transformational thinker; what is your-what is your aggressive top down goals for your organization because you're serving a market with trillions of dollars of spend that's shifting, that's on the table. >> Yeah. >> A lot of competition now sees it too, they're going to go after it. But at the end of the day you have customers that have a demand for things, apps. >> Andy: Yeah. >> And not a lot of budget increase at the same time. This is a huge dynamic. >> Yeah. >> John: What's your goals? >> You know I think that at a high level our top down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And we want that outstanding customer experience in part is that their operational performance and their security are outstanding, but also that it allows them to build, uh, build projects and initiatives that change their customer experience and allow them to be a sustainable successful business over a long period of time. And then, we also really want to be the technology infrastructure platform under all the applications that people build. 
And we're realistic, we know that you know, the market segments we address with infrastructure, software, hardware, and data center services globally are trillions of dollars in the long term and it won't only be us, but we have that goal of wanting to serve every application and that requires not just the security operational premise but also a lot of functionality and a lot of capability. We have by far the most amount of capability out there and yet I would tell you, we have 3 to 5 years of items on our roadmap that customers want us to add. And that's just what we know today. >> And Andy, underneath the covers you've been going through some transformation. When we talked a couple of years ago, about how serverless is impacting things I've heard that that's actually, in many ways, glue behind the two pizza teams to work between organizations. Talk about how the internal transformations are happening. How that impacts your discussions with customers that are going through that transformation. >> Well, I mean, there's a lot of- a lot of the technology we build comes from things that we're doing ourselves you know? And that we're learning ourselves. It's kind of how we started thinking about microservices, serverless too, we saw the need, you know, we would have we would build all these functions that when some kind of object came into an object store we would spin up, compute, all those tasks would take like, 3 or 4 hundred milliseconds then we'd spin it back down and yet, we'd have to keep a cluster up in multiple availability zones because we needed that fault tolerance and it was- we just said this is wasteful and, that's part of how we came up with Lambda and you know, when we were thinking about Lambda people understandably said, well if we build Lambda and we build this serverless adventure in computing a lot of people were keeping clusters of instances aren't going to use them anymore it's going to lead to less absolute revenue for us. But we, we have learned this lesson over the last 20 years at Amazon which is, if it's something that's good for customers you're much better off cannibalizing yourself and doing the right thing for customers and being part of shaping something. And I think if you look at the history of technology you always build things and people say well, that's going to cannibalize this and people are going to spend less money, what really ends up happening is they spend less money per unit of compute but it allows them to do so much more that they ultimately, long term, end up being more significant customers. >> I mean, you are like beating the drum all the time. Customers, what they say, we encompass the roadmap, I got that you guys have that playbook down, that's been really successful for you. >> Andy: Yeah. >> Two years ago you told me machine learning was really important to you because your customers told you. What's the next traunch of importance for customers? What's on top of mind now, as you, look at- >> Andy: Yeah. >> This re:Invent kind of coming to a close, Replay's tonight, you had conversations, you're a tech athlete, you're running around, doing speeches, talking to customers. What's that next hill from if it's machine learning today- >> There's so much I mean, (weird background noise) >> It's not a soup question (Laughter) And I think we're still in the very early days of machine learning it's not like most companies have mastered it yet even though they're using it much more then they did in the past. 
But, you know, I think machine learning for sure I think the Edge for sure, I think that um, we're optimistic about Quantum Computing even though I think it'll be a few years before it's really broadly useful. We're very um, enthusiastic about robotics. I think the amount of functions that are going to be done by these- >> Yeah. >> robotic applications are much more expansive than people realize. It doesn't mean humans won't have jobs, they're just going to work on things that are more value added. We're believers in augmented virtual reality, we're big believers in what's going to happen with Voice. And I'm also uh, I think sometimes people get bored you know, I think you're even bored with machine learning already >> Not yet. >> People get bored with the things you've heard about but, I think just what we've done with the Chips you know, in terms of giving people 40% better price performance in the latest generation of X86 processors. It's pretty unbelievable in the difference in what people are going to be able to do. Or just look at big data I mean, big data, we haven't gotten through big data where people have totally solved it. The amount of data that companies want to store, process, analyze, is exponentially larger than it was a few years ago and it will, I think, exponentially increase again in the next few years. You need different tools and services. >> Well I think we're not bored with machine learning we're excited to get started because we have all this data from the video and you guys got SageMaker. >> Andy: Yeah. >> We call it the stairway to machine learning heaven. >> Andy: Yeah. >> You start with the data, move up, knock- >> You guys are very sophisticated with what you do with technology and machine learning and there's so much I mean, we're just kind of, again, in such early innings. And I think that, it was so- before SageMaker, it was so hard for everyday developers and data scientists to build models but the combination of SageMaker and what's happened with thousands of companies standardizing on it the last two years, plus now SageMaker studio, giant leap forward. >> Well, we hope to use the data to transform our experience with our audience. And we're on Amazon Cloud so we really appreciate that. >> Andy: Yeah. >> And appreciate your support- >> Andy: Yeah, of course. >> John: With Amazon and get that machine learning going a little faster for us, that would be better. >> If you have requests I'm interested, yeah. >> So Andy, you talked about that you've got the customers that are builders and the customers that need simplification. Traditionally when you get into the, you know, the heart of the majority of adoption of something you really need to simplify that environment. But when I think about the successful enterprise of the future, they need to be builders. how'l I normally would've said enterprise want to pay for solutions because they don't have the skill set but, if they're going to succeed in this new economy they need to go through that transformation >> Andy: Yeah. >> That you talk to, so, I mean, are we in just a total new era when we look back will this be different than some of these previous waves? >> It's a really good question Stu, and I don't think there's a simple answer to it. I think that a lot of enterprises in some ways, I think wish that they could just skip the low level building blocks and only operate at that higher level abstraction. 
That's why people were so excited by things like, SageMaker, or CodeGuru, or Kendra, or Contact Lens, these are all services that allow them to just send us data and then run it on our models and get back the answers. But I think one of the big trends that we see with enterprises is that they are taking more and more of their development in house and they are wanting to operate more and more like startups. I think that they admire what companies like AirBnB and Pintrest and Slack and Robinhood and a whole bunch of those companies, Stripe, have done and so when, you know, I think you go through these phases and eras where there are waves of success at different companies and then others want to follow that success and replicate it. And so, we see more and more enterprises saying we need to take back a lot of that development in house. And as they do that, and as they add more developers those developers in most cases like to deal with the building blocks. And they have a lot of ideas on how they can creatively stich them together. >> Yeah, on that point, I want to just quickly ask you on Amazon versus other Clouds because you made a comment to me in our interview about how hard it is to provide a service to other people. And it's hard to have a service that you're using yourself and turn that around and the most quoted line of my story was, the compression algorithm- there's no compression algorithm for experience. Which to me, is the diseconomies of scale for taking shortcuts. >> Andy: Yeah. And so I think this is a really interesting point, just add some color commentary because I think this is a fundamental difference between AWS and others because you guys have a trajectory over the years of serving, at scale, customers wherever they are, whatever they want to do, now you got microservices. >> Yeah. >> John: It's even more complex. That's hard. >> Yeah. >> John: Talk about that. >> I think there are a few elements to that notion of there's no compression algorithm for experience and I think the first thing to know about AWS which is different is, we just come from a different heritage and a different background. We ran a business for a long time that was our sole business that was a consumer retail business that was very low margin. And so, we had to operate at very large scale given how many people were using us but also, we had to run infrastructure services deep in the stack, compute storage and database, and reliable scalable data centers at very low cost and margins. And so, when you look at our business it actually, today, I mean its, its a higher margin business in our retail business, its a lower margin business in software companies but at real scale, it's a high volume, relatively low margin business. And the way that you have to operate to be successful with those businesses and the things you have to think about and that DNA come from the type of operators we have to be in our consumer retail business. And there's nobody else in our space that does that. So, you know, the way that we think about costs, the way we think about innovation in the data center, um, and I also think the way that we operate services and how long we've been operating services as a company its a very different mindset than operating package software. Then you look at when uh, you think about some of the uh, issues in very large scale Cloud, you can't learn some of those lessons until you get to different elbows of the curve and scale. 
And so what I was telling you is, its really different to run your own platform for your own users where you get to tell them exactly how its going to be done. But that's not the way the real world works. I mean, we have millions of external customers who use us from every imaginable country and location whenever they want, without any warning, for lots of different use cases, and they have lots of design patterns and we don't get to tell them what to do. And so operating a Cloud like that, at a scale that's several times larger than the next few providers combined is a very different endeavor and a very different operating rigor. >> Well you got to keep raising the bar you guys do a great job, really impressed again. Another tsunami of announcements. In fact, you had to spill the beans earlier with Quantum the day before the event. Tight schedule. I got to ask you about the musical festival because, I think this is a very cool innovation. It's the inaugural Intersect conference. >> Yes. >> John: Which is not part of Replay, >> Yes. >> John: Which is the concert tonight. Its a whole new thing, big music act, you're a big music buff, your daughter's an artist. Why did you do this? What's the purpose? What's your goal? >> Yeah, it's an experiment. I think that what's happened is that re:Invent has gotten so big, we have 65 thousand people here, that to do the party, which we do every year, its like a 35-40 thousand person concert now. Which means you have to have a location that has multiple stages and, you know, we thought about it last year and when we were watching it and we said, we're kind of throwing, like, a 4 hour music festival right now. There's multiple stages, and its quite expensive to set up that set for a party and we said well, maybe we don't have to spend all that money for 4 hours and then rip it apart because actually the rent to keep those locations for another two days is much smaller than the cost of actually building multiple stages and so we thought we would try it this year. We're very passionate about music as a business and I think we-I think our customers feel like we've thrown a pretty good music party the last few years and we thought we would try it at a larger scale as an experiment. And if you look at the economics- >> At the headliners real quick. >> The Foo Fighters are headlining on Saturday night, Anderson Paak and the Free Nationals, Brandi Carlile, Shawn Mullins, um, Willy Porter, its a good set. Friday night its Beck and Kacey Musgraves so it's a really great set of um, about thirty artists and we're hopeful that if we can build a great experience that people will want to attend that we can do it at scale and it might be something that both pays for itself and maybe, helps pay for re:Invent too overtime and you know, I think that we're also thinking about it as not just a music concert and festival the reason we named it Intersect is that we want an intersection of music genres and people and ethnicities and age groups and art and technology all there together and this will be the first year we try it, its an experiment and we're really excited about it. >> Well I'm gone, congratulations on all your success and I want to thank you we've been 7 years here at re:Invent we've been documenting the history. You got two sets now, one set upstairs. So appreciate you. >> theCUBE is part of re:Invent, you know, you guys really are apart of the event and we really appreciate your coming here and I know people appreciate the content you create as well. 
>> And we just launched CUBE365 on Amazon Marketplace built on AWS so thanks for letting us- >> Very cool >> John: Build on the platform. appreciate it. >> Thanks for having me guys, I appreciate it. >> Andy Jassy the CEO of AWS here inside theCUBE, it's our 7th year covering and documenting the thunderous innovation that Amazon's doing they're really doing amazing work building out the new technologies here in the Cloud computing world. I'm John Furrier, Stu Miniman, be right back with more after this short break. (Outro music)

Published Date : Sep 29 2020


Andy Clemenko, StackRox | DockerCon 2020


 

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, I've got the Keynote file there. YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance, when was the last time you pulled an image and had 100% confidence, you knew what was inside it, where it was built, how it was built, when it was built? You probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can kind of prevent that is through the use of labels. We can use labels to address security, address some of the simplicity on how to run these images. So think of it kind of like self-documenting. Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically what is the schema? It's just a key-value, all right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store it in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SVN, who cares? Where are the source files that built it, where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a person or to a commit, hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Dockerfile on your own? Having a label specifically dedicated on how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These kind of all, not only talk about continuous integration, CI, but also start to talk about security. Specifically what server built it.
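As a rough sketch of that kind of baseline, a Dockerfile can declare a handful of static labels directly; the key names follow the opencontainers convention discussed below, and the values here are purely illustrative, not the ones from the talk's repo:

    FROM alpine:3.12
    # Static, declarative metadata baked into the image at build time
    LABEL org.opencontainers.image.authors="jane@example.com" \
          org.opencontainers.image.title="flask-demo" \
          org.opencontainers.image.description="Demo Flask app with a Redis hit counter" \
          org.opencontainers.image.version="1.0.0" \
          org.opencontainers.image.source="https://github.com/example/flask-demo"

Because the LABEL instruction lives in the Dockerfile, it is versioned alongside the rest of the build definition, which is exactly the audit-trail property described above.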
The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example of a policy enforcement in my demo. So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was modified to opencontainers, org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of like a standardization. So obviously, authors is an email address, so now the image is attributable to a person, that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile and all the assets? How it was built, build number, build server, the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthz endpoint. Now obviously, the health check directive should be in the Dockerfile. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's a simple declarative one. And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the image itself? And conversely the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to kind of take advantage of that. So how do we create labels? Really, creating labels is a function of build time, okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Dockerfile, which I'm a big fan of, because it's declarative. It's in version control. It's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static kind of declaration to more of a dynamic one with build arguments. And I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of the third one, I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control. Or I should say, some of the variables coming out of our CI system. And that way, it self-documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, and obviously, it's going to output as JSON. So you're going to use something like jq to crack it open and look at the individual labels. Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps kind of in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo.
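To make that concrete, here is a minimal sketch of the dynamic variant and the two viewing paths just described. The image name, label keys and argument names are illustrative assumptions, not the exact ones used in the demo repo:

    # Dockerfile: accept build-time arguments and surface them as labels
    ARG BUILD_DATE
    ARG GIT_COMMIT
    LABEL org.opencontainers.image.created="${BUILD_DATE}" \
          org.opencontainers.image.revision="${GIT_COMMIT}"

    # Build: the shell (or the CI system) supplies the values
    docker build \
      --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
      --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
      -t example/flask-demo:latest .

    # View labels locally after a pull
    docker inspect example/flask-demo:latest | jq '.[0].Config.Labels'

    # Or query the registry directly with skopeo, no pull required
    skopeo inspect docker://docker.io/example/flask-demo:latest | jq '.Labels'

The same pattern works with whatever variables a CI job exposes, which is how the labels end up self-documenting the build.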
One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode of the label itself, from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all these extra levels of abstraction inherently, if you use it as a label with a kubectl apply, it's just built in. It's kind of like the KISS approach to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Dockerfile. I've got my compose and my Kubernetes YAML. So let's take a look at the Dockerfile, right? So it's a simple Alpine image. The ARG statements are the build time arguments that are passed in. Label, so again, I'm using the org.opencontainers.image.blank, for most of them. There's a typo there. Let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and git commit are derived from the Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual Docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline and I have four major stages, four stages, I have build. And here in build, what I do is I actually do the Git clone. And then I do my docker build. From there, I actually tell the Jenkins StackRox plugin. So that's what I'm using for my security scanning. So go ahead and scan, basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Where I can see the, basically I'm pushing the image up to Hub such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal, in your environment, you probably want to use separate registries, a non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. And it says image required, required image label alert, right? Request that the maintainer add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo, we didn't actually pass in.
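The encoding step being described looks roughly like this; the label keys (org.example.*) and file names are assumptions for illustration, since the exact keys in the demo repo differ:

    # Encode the deployment manifests so they ride along inside the image as labels
    docker build \
      --build-arg COMPOSE_YAML="$(base64 -w0 docker-compose.yml)" \
      --build-arg KUBE_YAML="$(base64 -w0 k8s.yml)" \
      -t example/flask-demo:latest .

    # Matching Dockerfile fragment:
    #   ARG COMPOSE_YAML
    #   ARG KUBE_YAML
    #   LABEL org.example.compose="${COMPOSE_YAML}" \
    #         org.example.kubernetes="${KUBE_YAML}"

Note that base64 -w0 (no line wrapping) is the GNU coreutils form; on macOS the default output is already unwrapped, so the flag is omitted there.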
So if we go back to our repo, we didn't pass in the build time argument, we just passed in the word. So let's fix that real quick. That's the Dockerfile. Let's go ahead and put our dollar sign in there. First day with the fingers, you're going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the Console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date, and the date gets derived on the command line. With the build arguments, there's the base64 encoding of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom, layer exists and successful. So here's where we can see no system policy violations were found, marking the StackRox security plugin build step as successful, okay? So we're actually able to do policy enforcement that that image, that that label, sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there it is, my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. What if, let's go ahead and take a look at all of the image, all the labels for a second, let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, authors, build, commit number, look at the commit number. It was built today, build number 12. We saw that, right? Build 12. So that's kind of cool, dynamic labels. Name, healthz, right? But what we're looking for is we're going to look at the org.zdocker kubernetes label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded, but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard in. So now we've actually used that label from the image that we've queried with skopeo, from a remote registry, to deploy locally to our Kubernetes cluster. So let's go ahead and look, everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.docker.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer, 'cause of the background image and the text. We can go ahead and refresh it a couple times, we've got Redis storing a hit counter. We can see that our server name is round-robining. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 VMs. I'm using K3s instead of full Kubernetes, either full Rancher, full OpenShift or Docker Enterprise.
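Pulled together, the deploy-from-a-label step just shown is essentially a one-liner; the image name and label key are illustrative assumptions:

    # Read the label from the registry, decode it, and hand it straight to kubectl
    skopeo inspect docker://docker.io/example/flask-demo:prod \
      | jq -r '.Labels["org.example.kubernetes"]' \
      | base64 -d \
      | kubectl apply -f -

No Helm chart, no extra templating layer: the deployment manifest travels inside the image metadata and the cluster applies it directly from stdin.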
I think K3s has some really interesting advantages on the development side and it's kind of intended for IoT, but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels, is none of this demo stack is required. You can be in any cloud, you can be in CentOS, you can be in any Kubernetes. You can even be in Swarm, if you wanted to, or Docker Compose. Any Ingress, any CI system, Jenkins, Circle, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement, things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable with any comparable product in that category. So I'd like to, again, point you guys to andyc.info/dc20, that'll take you right to the GitHub repo. You can reach out to me at any of the socials, @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music)
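For anyone who wants to reproduce a similar single-node environment, the upstream K3s installer alluded to here is typically a one-liner (shown as documented at k3s.io; cluster sizing and the rest of the stack are up to you):

    # Install a single-node K3s server (bundles containerd and, by default, traefik)
    curl -sfL https://get.k3s.io | sh -

    # Confirm the node is ready
    sudo k3s kubectl get nodes

K3s ships its own kubectl, so the same skopeo-and-label workflow from the demo works against it unchanged.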

Published Date : Sep 28 2020


Paul Cormier, Red Hat | Red Hat Summit 2020


 

>> From around the globe it's theCUBE with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of Red Hat Summit 2020. Of course this year the event is virtual. We're bringing all the people on theCUBE from where they are, and really happy to bring back to the program one of our CUBE alumni, Paul Cormier, who is the president and CEO of Red Hat. Of course the keynote, and you and I spoke ahead of the show. Paul, great to see you and thanks so much for joining us. >> My pleasure, always great to see you Stu. My pleasure. >> All right, so Paul, lots have changed since last time we got together for Summit. One thing's stayed the same though. So, you know, the big theme I heard in your keynote, you talked about open hybrid cloud of course. We've been talking about cloud for years when you ran the product team, you know, making Red Hat go everywhere is something that we've watched, you know, that move. Is anything different when you're talking to customers, when you're talking to your product teams? You think about the times we're in, why is open hybrid cloud not a buzzword but hugely important in the times we're facing? >> Because the big premise of open hybrid cloud is that cloud has become part of people's infrastructure. I've seen very few if any true enterprise customers that are moving everything, every app, to one cloud. And so I think what people really realized once they started implementing clouds as part of their infrastructure was that you're going to always have applications that are running bare metal. Some are virtual machines, maybe on top of VMware, it might be in a private cloud. And not many people are saying, you know what, the public clouds are all so different from each other, I might want to run one application for whatever reason in one, and a different one in another. I think they started to realize the actual operational cost of that, the security cost of that, and even more the development cost of that from the application perspective, and now having five silos up there, how that's so costly. So now our whole premise since the beginning of open hybrid cloud has been to give you that level playing field, to have those things all the same no matter where the application runs, whether bare metal, virtual machine, private or multiple public clouds. And so in the long run, as customers start to really go to cloud first application development, they can still manage that under one platform in a common way, manage, develop and secure it, but at the same time they can manage, develop and secure their legacy applications that are also on Linux in the same way. So I think in the long run it really brings it together and saves money and efficiency in those areas. >> Yeah, it's, I always love, I look over time, we have certain words that we think we know what they mean, and then they mature over time. Let's just say we'll start with the first piece of what you're talking about, open. We lived through, those of us that have been through the real ascendancy of open source, in the early days open was free, and we joke it was free like puppies. >> Yeah. >> But today open source of course is very prevalent, we see it all over the place. But from an open hybrid cloud standpoint, why is open important today, and how do customers think about that today?
>> There's probably two most misunderstood things with open, so the first thing is that open source is a development model, first of all. I always say it's a verb not a noun, and I say that internally and externally. We're not an open source company, we're an enterprise software company with an open source development model. So you think about that, that's really important. Why is the open source development model so important? It's important because everyone has the same opportunity in terms of the features within the code, everyone has the same opportunity to contribute. The best technology wins, that's how it works in the upstream community. It's not a technology driven by one company that may have a one-company agenda. It's really a development process that allows the best technology to win, and I think that's one of the main things and one of the main reasons why you see all the innovation, frankly, in the last five years around infrastructure and development and the associated pieces and tools around that being in and around Linux, because Linux was available, it was powerful, it was open. When people wanted to develop Kubernetes, for example, they had to make changes to the Linux kernel in order to do that, and it worked because they could. And so those are the things that make it really important as a development model, and I think those are the things that get confused a lot. The other thing that gets confused is a lot of people think that, hey, if I have this great technology and I just open-source it, it'll all just work, everyone will come. Now that's not the case. The projects that really succeed from an open-source perspective are the ones solving problems that are common and horizontal across a big group of people, so they're trying to solve similar problems, and that's one of the things that we found. As you go further up the stack, typically the less community is involved. It's the horizontal layers where, whether you're in banking or retail or telco or whatever, they're all the same; those are the pieces where open source really fits well. >> Alright, so the second piece, you talk about hybrid. I think back to the early days, Paul, when cloud was first defined and we talked about public and private cloud, we had discussions of hybrid cloud and multi clouds, and the concern that I have is it was very much an infrastructure discussion, and it was pieces, and the vision that we always have is, where customers actually get value is, the total solution needs to be more valuable than the sum of its parts. So it's really about hybrid applications, about where my data lives. So do you agree with some of those things I'm saying, how does Red Hat look at it? And from your team I do get lots of the application and app dev discussion, which I always find even more meaningful than arguing over ontologies of how you build your cloud. >> Everything you said is all about the application. If you look at just where we started with Linux, what did Linux bring to the enterprise when we first started? Really, you and I talked about this earlier, that was the thing that really opened things up.
The enterprises started buying Linux, right, they started buying Linux for $29.95 at the bookstores, but when I first came on board we talked to some of the banking customers in there, and they said, well, we love this technology, but every time you guys change a release my applications break, or when I get new hardware it doesn't work, etc. So it's all about the application, and Linux got better about that all the time. From the beginning of time, what hybrid really means here is that I can run that seamlessly across wherever that footprint is going to live, and so I think that's also one of the things that gets confused a bit. When the cloud first started, the cloud vendors were telling people that every application was going to move to one cloud tomorrow, right? We knew that was not practical, that's the other thing from open-source developers, we look at it from a practical perspective. We look back in 2007, I just looked at it just to prepare for the note I just put up to the company. Back in 2007 at the Summit I talked about any application, anywhere, anytime. That's really the essence of what hybrid is here, so what we found here is that it's impractical for every application to move to one cloud, and so cloud is powerful, but it's become part of people's development and operations and security environment. So now, as we stitch that in and make that common for those three things, for the operations, security and development, the application development world, that's where the power is. So I see the day where application developers and application users won't know or care what platform the back-end data is coming from for whatever applications they're writing, they shouldn't care, that should just happen seamlessly under the covers. But having said that, that complicates things, and that's why management needs to be retooled with it as well. Sorry on that, but I could talk about that for three days, right? >> Yeah, so as an industry we kind of argue about these, and everybody feels that they understand the way the future should look. So Paul, for a number of years it was, "we're going to build this stack and let's have the exact same stack here and there." There were some of the big iron companies that did that a few years ago, and now you see some of your public cloud partners saying, "we can give you that same experience, that same hardware, all the way down to the chip level, things are going to be the same." When I look at software companies, there's two that come to mind that live across dispersed environments. One is very much from a virtualization standpoint, they design themselves to live on any hardware out there. Red Hat has a slightly different way of looking at things, so what's your take on kind of the stack, and why is hybrid in that hybrid cloud model that you're building probably going to look and sound and feel different than I think almost anybody else out there? >> Well the cloud guys, they all have similar technologies underneath, I mean most of it, not all of it, is based on Linux, but they're all different. I mean, remember the UNIX days? I'm old enough to remember the UNIX days.
That was the goal back then, but like each hardware vendor did, each cloud vendor is now taking that Linux, or the associated pieces with it, and they have to make their changes to adapt to their environment, and some of those changes don't allow for applications to be portable outside that environment. That's exactly like the OEM world of the past, and so, I know some people hate it when I say this and make this a comparison, but I really look at the cloud guys as a mainframe, and certainly the mainframe has, and still does, bring a ton of value to a certain customer base. And so if you're going to keep your application in that one place, a mainframe, an all-on-one mainframe mentality will always stitch it together better, but that's not the reality of what customers are trying to do out there. So I really think you have to look at it that way, it's not that much different in concept anyways to the OEM days from when they started running Linux. And the thing that Red Hat's done that some of the others haven't, VMware for example, VMware, they have no pieces that touch the application. I mean, they have some now, they had Photon, they had some of the other pieces that sort of tried to touch the application, but at the end of the day we always concentrated on Linux, and especially from a Red Hat perspective, on keeping the environment the same, both from an application perspective and from a hardware perspective. Certainly when an application runs in the cloud, we don't have to worry about the hardware anymore, but we still have to worry about the application, and businesses are all about the application, and so we always took that tack from both sides of that. I think that's one of VMware's weaknesses frankly, is that applications don't run on hypervisors, they run on operating systems, including, when I say operating systems I mean containers, because that is a Linux operating system. >> Yeah Paul, a lot of good points you brought up there, and it's interesting, the mainframe analogy, in the early days of cloud there were some that would throw stones and say, right, you're rebuilding the mainframe and you're going to be locked in, this is going to be an environment. So I'd love to get your thought, you think about what's happening in application development, the rise of, as you talked about, containers and Kubernetes, serverless is out there, there's that, we want to enable the application developers but we don't want to get locked into some platform there. Talk about Red Hat's role, how your products are helping the shift, help customers make sure that they can take advantage of some of these new ways of building, maintaining and changing without being stuck on any specific platform or technology. >> Well, in the first place, I believe, I'm sure I will be corrected on this, but we really are the only company that I can think of at this moment that is a hundred percent open source. Everything we do when our products go out is open source based and goes back upstream to the community for everyone to take advantage of, so that's the first thing. I mean, the second thing we do, one of the big fallacies is, open source has become so popular that people are confusing upstream projects with downstream products, and so for us, I'll use us as an example, I'll use Linux and I'll use Kubernetes as an example. The Linux kernel, we all build from the Linux kernel, us, SUSE, Ubuntu, we all build from the Linux kernel, but at the end of the day we all make choices when we bring that upstream work down to become a product.
In our case we go upstream to RHEL, we go from Fedora to CentOS to RHEL. We all make choices, which file systems we're going to package, what development environment we're going to package, what packages we're gonna package, and so when we get down to what gets deployed in the enterprise, those choices are what makes the difference of why RHEL is slightly different than SUSE Linux, which is slightly different than Canonical's Ubuntu, but they all come from the same heritage. The same is the case with Kubernetes. There's this sort of fallacy that Kubernetes, the last time I checked there were 127 different Kubernetes vendors out there, they're all just going to magically work together. Yes, they all come from the same place, but we have to touch the user space, we have to touch the kernel, and so how do you line that up in the life cycle of what the customers get is going to be different. We might be able to take different pieces from those 127 and make it work at one point, but the first time any of us makes a change, if it's not coordinated with the other side, it's probably going to break. Any one of our life cycles goes out 10 plus years, and so engineering that all together is something that makes it all work together as you upgrade, whether it be hardware or your applications, and so some people confuse that with not being 100 percent open. When we find a bug in RHEL, a RHEL that's been out there for five years maybe, we give that fix back to the upstream community, that's open, it's out there. And so I think that's the part that, this has become so accepted now and so much part of the mainstream now, that we very much confuse projects with products, and so that's one of the biggest confusion points out there. >> Yeah, really good points there Paul. So when I think about some of the things we've heard over the years, in the original days it was, "Oh well, public cloud, Paul? I'm not going to need RHEL anymore, they've got Linux." Then Kubernetes has come along and Red Hat's had a really strong position, but you look at it and you say, okay, well if I'm most customers, if I'm doing Amazon, if I'm doing Google, if I'm doing Microsoft, I'm probably going to end up using some of their native services that they've got built in. Talk about how the role of Red Hat kind of continues to change as you live in this multi cloud environment, and I think it's kind of that intersection that you were talking about, open and compatibility as opposed to, you're not saying that Red Hat's going to conquer the world and take down all the other options. >> Well, cloud providers bring a ton of value. I mean, the users have to be smart on how and when they use that value. If you truly are going to have a hundred percent of your applications in one public cloud, then you probably will get the best solution from that one public cloud. Serverless is a great example. If you're in Amazon and you spin up, via their services, serverless, that container that gets spun up is never going to run outside that cloud. If that's okay with you, that's okay with you. (Voice scrambles) The way we've gone about this, as I said, is to give you that seamless environment all the way across.
If you want to run just containers, (voice scrambles) on one particular cloud vendor, and you want it under their Kubernetes, and it's never going to run in any other place, that's okay too. But if you're going to have an environment with applications that are in multiple cloud vendors' infrastructure or even on your own, you're now going to have to spin up these different silos of that technology, even though the technology has the same heritage. So that's a huge operational and development cost as you grow bigger in order to do that, and so our strategy is very simple, it's give the developers, operations and security people that common environment to work across, and over time (voice scrambles) they shouldn't care where the services are coming from. It should just all work, and that's why you've seen things like automation being so important now. I mean, automation is our biggest growing business with Ansible right now, and part of the reason is, as people spread out to a container based environment, applications may now spread across those different footprints. Maybe you want to have your front (voice scrambles), we have one of the RHEL customers in Europe that has the front facing customer side of their ticketing system up in the public cloud, and they've got the backend financial transaction database pieces that clear credit cards behind their firewall. That's really one application spread across containers. Do you want to have to manage the front end of that with one Kubernetes and the backend of that with a different Kubernetes? Probably not, and so that's really what we bring to the table as we've really grown in with this new technology. >> Alright, so final question I have for you Paul, I'm actually going to get away a little bit from your background on the product piece, I want to talk a little bit about just Red Hat going forward. So you talked about, we know for many years Red Hat has been much more than the Linux piece, you talk about automation, I've got some great interviews this week talking about some of the latest in application development, lots of open source projects, and so many open source projects (laughing) nobody can keep them all straight there. So as customers look at strategic partnerships, what is the role of Red Hat, and with now being under IBM, Jim Whitehurst steps over to become president there, Arvind of course had a long relationship and was the architect behind the Red Hat acquisition, what's the same and what's different as we think about Red Hat 2020 under your leadership? >> I think it's a lot of the same. I mean, I think the difference becomes, in the world we're in right now, it's sort of how we can help our customers come out of this and back into re-entry, right, and so how that's going to be different than the past. (voice scrambles) We're working through that with many of our customers, and we think we can be a big help here because we run their business, and today they run their business over the platforms, and that's not going to go away for them. In fact, if anything, that's going to get even more critical for them because they've got to get more automation to get just more efficiency out of it. So in terms of what we do as a company, that's not going to change at all. I mean, we've been on this path that we're on for a long time. I stand up in front of our sales kickoffs every year, in person and virtual as well, and I say, "we'll talk to you about the strategy." Guess what?
It hasn't changed much from last year, and that's a good thing, because these technology rollouts are multi-year rollouts, so we're going to continue on that. I mean, the other thing too is, our customers are moving many more of their workloads closer to the Linux environment, and so I think we can help them expand that as well. And I think from an IBM perspective, (voice scrambles) one of the big premises here from our perspective is to help us scale, because they're in the process of helping their customers move to these next generation architectures and at the same time be able to support the current architectures, and that's what we do well, and so they can just help us get to places that we just wouldn't have had the time and the resources maybe to get to on our own, so we can expand that footprint even more quickly with IBM. So that's the focus right now, is to really help our customers move to the next phase of this in terms of re-entry. >> Yeah, as I've heard you and many other Red Hatters say, Red Hat is still Red Hat, and definitely it's something that we can see loud and clear at Red Hat Summit 2020. Thank you so much Paul. >> Thank you Stu, nice to see you again. >> All right, lots of coverage from Red Hat Summit 2020, be sure to check out theCUBE.net for the whole back catalogue that we have of Paul, their customers, their partners, and thank you for watching theCUBE. [Music]

Published Date : Apr 28 2020


Dustin Kirkland, Apex | CUBE Conversation, April 2020


 

>> Announcer: From the CUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Welcome to this special CUBE conversation. I'm John Furrier here in Palo Alto, California. In our remote studio, we have a quarantine crew here during this COVID-19 crisis. Here talking about the crisis and the impact to business and overall work. Joined by a great guest Dustin Kirkland, CUBE alumni, who's now the chief product officer at Apex Clearing. This COVID-19 has really demonstrated to the mainstream world stage, not just inside the industry that we've been covering for many, many years, that the idea of at-scale means something completely different, and certainly DevOps and Agile is going mainstream to survive, and people are realizing that now. No better guest than have Dustin join us, who's had experiences in open source. He's worked across the industry from Ubuntu, Open Stack, Kubernetes, Google, Canonical. Dustin, welcome back to the CUBE here remotely. Looking good. >> Yeah, yeah, thanks, John. Last time we talked, I was in the studio, and here we are talking over the internet. This is a lot of fun. >> Well, I really appreciate it. I know you've been in your new role since September. A lot's changed, but one of the things why I wanted to talk with you is because you and I have talked many times around DevOps. This has been the industry conversation. We've been inside the ropes. Now you're starting to see, with this new scale of work-at-home forcing all kinds of new pressure points, giving people the realization that the entire life with digital and with technology can be different, doesn't have to be augmented with their existing life. It's a full-on technology driven impact, and I think a lot of people are learning that, and certainly, healthcare and finance are two areas, in particular, that are impacted heavily. Obviously, people are worried about the economy, and we're worried about people's lives. These are two major areas, but even outside that, there's new entrepreneurs right now that I know who are working on new ventures. You're seeing people working on new solutions. This is kind of bringing the DevOps concept to areas that quite frankly weren't there. I want to get your thoughts and reaction to that. >> Yeah, without a doubt, I mean, the whole world has changed in 30 short days. We knew something was amiss in China. We knew that there was a lot of danger for people. The danger for business, though, didn't become apparent until vast swathes of the work force got sent home. And there's a number of businesses and industries that are coping relatively well with this. Certainly those who have previously adopted, or have experienced, doing work remotely, doing business by video, teleconference, having resources in the cloud, having people and expertise who are able to continue working at nearly 100% capacity in 100% remote environments. There's a lot of technology behind that, and there are some industries, and in particular, some firms, some organizations, that were really adept and were able to make that shift almost overnight. Maybe there were a couple bumps along the way, some VPN settings needed to be tweaked, and Zoom settings needed to be changed a little bit, but for many, this was a relatively smooth transition, and we may be doing this for a very long time. >> Yeah, I want to get your thoughts, before we get into some of the product stuff that you guys are working on and some other things. 
What's your general reaction to people in your circles, inside industry and tech industry, and outside, what are you seeing a reaction to this new scale, work from home, social distancing, isolation, what are your observations? >> Yeah, you know, I think we're in for a long haul. This is going to be the new normal for quite some time. I think it's super important to check on the people you care about, and before we get into dev and tech, check on the people you care about, especially people who either aren't yet respecting the social distancing norms and impress upon them the importance that, hey, this is about you, this is about the people you care about, it's about people you don't even know, because there are plenty of people who can carry this and not even know. So definitely check on the people that you care about. And reach out to those people and stay in touch. We all need one another more than ever, right? I manage a team, and it's super important, I think, to understand how much stress everyone is under. I've got over a dozen people that report to me. Most of them have kids and families. We start out our weekly staff meeting now, and we bring the kids in. They're curious, they want to know what's going on. First five, 10 minutes of our meeting is meet the family. And that demystifies some of what we're doing, and actually keeps the other 50 minutes of the meeting pretty quiet in our experience. But it's really humanized an aspect of work from home that's always been a bit taboo. We laugh about the reporter in Korea whose kid and his wife came in during the middle of a live on-air interview. There's certainly, I've worked from home for almost 12 years, like, those are really uncomfortable situations. Until about a month ago, when that just became the norm. And from that perspective, I think there's a humanization that we're far more understanding of people who work from home now than ever before. >> It's funny, I've heard people say, you know, my wife didn't know what I did until I started working at home. And comments to seeing people's family, and saying, wow, that's awesome, and just bringing a personal connection, not just this software mechanism that connects people for some meeting, and we've all been on those meetings. They go long, and you're sitting there, and you're turning the camera off so you can sneeze. All those things are happening. But when you start to think about, beyond it being a software mechanism, that it's a social equation right now. People have shared experiences. It's been an interesting time. >> Yeah, and just sharing those experiences. We do a think internal on our Slack channel every day. We try to post a picture. We call it hashtag recess, and at recess we take a picture of walking the dogs, or playing with the kids, or gardening, or whatever it is, going for a run. Again, just trying to make the best of this, take advantage of, you know, it's hard working from home, but trying to take advantage of some of those once in a lifetime opportunities we have here. And my team has started pub quiz on Fridays, so we're mostly spread across, in the U.S., so we're able to do this at a reasonable hour, but the last couple of Fridays, we've jumped on a Zoom, downloaded a pub trivia game, most of us a crack a beer, or glass of wine, or a cocktail, and you know, it's just, it actually puts a punctuate mark on the end of the week, puts a period on the end of the week. 
Because that's the other thing about this, man, if you don't have some boundaries, it's easy to go from an eight or nine hour normal day to 10, 12, 14, 16 hour days, Saturday bleeds into Sunday bleeds into Monday, and then the rat race takes over. >> You got to get the exercise. You have a routine. That's my experience. What's your advice for people who are working at home for the first time? Do you have any best practices? >> I actually had a blog post on this about two weeks ago and put up almost a shopping list of some of the things that I've assembled here in the work from home environment. It's something I've been doing since 2008, so it's been there for a good long while. It's a little bit hard to accumulate all the technology that you need, but I would say, most important, have a space, some kind of space. Some people have more room or less, but even just a corner in a master bedroom with a standup desk, some space that is your own, that the family understands and respects. The other best practice is set some time boundaries. I like to start my day early. I'll try to break more a little bit for that recess, see the family some, and then knock off at a reasonable hour, so establish those boundaries. Yeah, I've got a bunch of tips in that blog post I can shoot you after this, but it's the sort of thing that, be a bit understanding, too, of other people in this situation for the first time, perhaps. So you know, offer whatever help and assistance you can, and be understanding that, man, things just aren't like they used to be. >> That's great advice. Thanks for the insights. Want to get to something that I see happening, and this always kind of happens when you see these waves where there's a downturn, or there's some sort of an event. In this case it's catastrophic in the way it vectored in like this and the impact that we just discussed. But what comes out of it is creativity around entrepreneurial activity, and certainly reinvention, businesses reforming, retrenching, resetting, whatever word, pivot, digital transformation, there's plenty of words for it. But this is the time where people can actually get a lot done. I always comment, in my last interview I did, you know, Shakespeare wrote Macbeth when he was sheltering in place, and Isaac Newton invented calculus, so you can actually get some work done. And you're starting to see people look at the new technology and start disrupting old incumbent markets, because now more than ever, things are exposed. The opportunity of recognition becomes clearer. So I wanted to get your thoughts on this. You're a product person, you've got a lot of product management skills, and you're currently taking this DevOps to financial market with fintech and your business, so you're applying known principles and software and tech and disrupting an existing industry. I think this is going to be a common trend for the next five years. >> Yeah, so on that first note, I think you're exactly right. There will be a reckoning, and there will be a ton of opportunities that come out of this for the already or the rapidly transformed digital native, digital focused business. There will be some that survive and thrive here. I think you're seeing a lot of this with the popularity of Zoom that has spiked recently. I think you're going to see technologies like DocuSign being used in places that, some of those places that still require wet signatures, but you just can't get to the notary and sign a, I don't know, a refi on your mortgage or something like that. 
And so I think you're going to see a bunch of those. The biggest opportunities are really around our education system. I've got two kids at home, and I'm in a pretty forward thinking school district in Austin, Texas, you know, but that's not the norm where our teachers are conducting classes and assignments over Zoom. I've got a kindergartener and a second grader. There's somewhat limits to what they can do with technology. I think you're going to see a lot of entrepreneurial solutions that develop in that space, and that's going to go from K through 12, and then into college. You think about how universities have had to shift and cancel classes, and what's happening with graduation. I've got a six and an eight year old, and I've been told I need to save $200,000 apiece for each of them to go to college, which is just an astounding number, especially to someone like me, who went to an inexpensive public university on a scholarship. Saving that kind of money for college, and just thinking about how much more efficient our education system might be with a lot more digital, a lot more digital education, digital testing and classes, while still maintaining the college experience, what that's going to look like in 10 years. I think we're going to see a lot of changes over these next 18 months to our educational system. >> Dustin, talk about the event dynamics. Physical events don't exist currently. Certainly, when they do come back, they should, and they will, the role of the virtual space is going to be highlighted and new opportunities will emerge. You mentioned education. People learn, not just for school, whether they're kids, whether they're professionals, learning and collaboration, work tools are going to reshape. What's your take on that marketplace, because we got to do virtual events. You can't just replicate a physical event and move it to digital. It's a complex system. >> Yeah, you're talking about an entire industry. We saw the Google Events, Google Next, Google IO, the Microsoft Events, just across the, I'm here in Austin, Texas, all of South by Southwest was canceled, which is just, it's breathtaking. When does that come back, and what does it look like? Is it a year or two or more from now? Events is where I spend my time, and when I get on a plane, and I fly somewhere, I'm usually going to a conference or trade show. Think about the sports industry. People who get on a plane, they go to an NFL game. John, I don't have all the answers, man, but I'm telling you, that entire industry is rapidly, rapidly going to evolve. I hope and pray that one day we're back to a, I can go back to a college football game again. I hope I can sit in a CUBE studio at a CUBE Con or an Open Stack or some other conference again. >> Hey, we should do a rerun, because I was watching the Patriots game last night, Tom Brady beating the Chiefs, October from last year. It was one of the best games of the season, went down to the wire, and I watched it, and I'm like, okay, that's Tom Brady, he's still in the Patriot uniform on the TV. Do we do reruns? This is the question. Right now, there's a big void for the next three months. What do we do? Do we replay the highlights from the CUBE? Do we have physical get togethers with Zoom? What's your take on how people should think about these events? >> Yeah, you know, the reruns only go so far, right? I'm a Texas Aggie, man. I could watch Johnny Football in his prime anytime. 
But I know what happened, and those games are just not as exciting as something that's a surprise. I'm actually curious about e-sports for the first time. What would it look like to watch a couple of kids who are really good at Madden football on a PlayStation go at it? What would other games that I've never seen look like? In our space, it's a lot more about, I think, podcasts and live content and staying connected and apprised of what's going on, making-- Oh, we locked up there for a second. It's, I think it's going to be really interesting. I'm still following you guys. I certainly see you active on social media. I'm sort of more addicted than ever to the live news, and in fact, I'm ready to start seeing some stuff that doesn't involve COVID-19, so from that perspective, man, keep churning out good content, and good content that's pertinent to the rest of our industry. >> That's great stuff. Well, Dustin, take a minute to explain what you're doing at Apex Clearing, your mission, and what are you guys excited about. >> Yeah, so Apex Clearing, we're a fintech. We're a very forward-focused, digitally-focused fintech. We are well positioned to continue servicing the needs of our clients in this environment. We went fully remote the first week of March, long before it was mandatory, and our business shifted pretty seamlessly. We worked through a couple of hiccups, provisioning extra VPN IP addresses and upgrading a couple of service plans on some of the software as a service we buy, but besides that, our team has done just a marvelous job transitioning to remote. We are in the broker-dealer and registered advisor space, so we provide the clearing services, which handle stock trades, equity trades, on the back end, and the custodial services. We actually hold and safeguard the equities that our correspondents, we call our clients correspondents, their retail customers end up holding. So we've been around in our current form since about 2012. This was a retread of a previous company that was bought and retooled as Apex Clearing in 2012. Very shortly after that, we helped Robinhood, Wealthfront, Betterment, a whole bunch of really forward-looking companies reinvent what it meant to buy and sell and trade securities online, and to hold assets in a robo advisor like Betterment. Today, we are definitely well-known, well-respected for how quickly and seamlessly our APIs can be used by our correspondents in building really modern e-banking and e-brokerage experiences. >> So you guys-- >> So that went-- >> Are you guys like a DevOps platform-- >> We're more like software as a service for fintech and brokerage. So our products are largely APIs that our correspondents use their own credentials to interact with, and then using our APIs, they can open accounts, which means get an account number from the systems, that allows them to then fund that account, connect via ACH and other bank connectivity platforms, transfer cash into those accounts, and then start conducting trades. Some of our correspondents have that down to a 60-second experience in a mobile app. From a mobile app, you can register for that account, if you need to, take a picture of an ID, have all of that imported, add your tax information, have that account number associated with your banking account, move a couple hundred dollars into that banking account, and then if the stock market's open, start buying and selling stock in that same window.
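To make that account-opening flow concrete, here is a rough sketch of what a correspondent-side integration of that kind could look like. Apex's actual API is not shown in this conversation, so the base URL, endpoints, field names, and helper functions below are all hypothetical; the sketch only illustrates the sequence Kirkland describes: open an account, link a bank over ACH, fund it, then place a trade.

```python
# Hypothetical sketch of the flow described above: open an account, link a
# bank over ACH, fund the account, then place a trade. Every endpoint and
# field name here is invented for illustration; this is NOT Apex's real API.
import requests

BASE = "https://api.example-clearing.com/v1"   # placeholder, not a real service
session = requests.Session()
session.headers.update({"Authorization": "Bearer <correspondent-credential>"})

def open_account(holder):
    """Create a brokerage account and return its account number."""
    r = session.post(f"{BASE}/accounts", json=holder)
    r.raise_for_status()
    return r.json()["account_number"]

def link_bank_and_fund(account_number, routing, bank_account, amount_usd):
    """Attach an ACH relationship, then transfer cash into the account."""
    session.post(f"{BASE}/accounts/{account_number}/ach-relationships",
                 json={"routing_number": routing,
                       "bank_account": bank_account}).raise_for_status()
    session.post(f"{BASE}/accounts/{account_number}/transfers",
                 json={"direction": "incoming",
                       "amount": amount_usd}).raise_for_status()

def place_order(account_number, symbol, qty):
    """Submit a simple market order once the account is funded."""
    r = session.post(f"{BASE}/accounts/{account_number}/orders",
                     json={"symbol": symbol, "quantity": qty, "type": "market"})
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    acct = open_account({"name": "Pat Example", "tax_id": "000-00-0000"})
    link_bank_and_fund(acct, routing="021000021", bank_account="12345678",
                       amount_usd=200)
    print(place_order(acct, "ACME", qty=1))
```

The point is the sequencing, account number first, then ACH funding, then trading, which is what lets a correspondent compress the whole thing into a roughly 60-second mobile experience.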
>> Great, well, I wanted to talk about this, because to the earlier bigger picture, I think people are going to be applying DevOps principles, younger entrepreneurs, but also, reborn, if you will, professionals who are old school IT or whatever, moving faster. And you wrote a blog post I want to get your thoughts on. You wrote it on April second. How we've adapted Ubuntu's time-based release cycles to fintech and software as a service. What is that all about? What's the meaning behind this post? You guys are doing something new, unique, or-- >> To this industry and to many of the people around me, even our clients and customers around me, this is a whole new world. They've never seen anything like it. To those of us who have been around Linux, open source, certainly Ubuntu, Open Stack, Kubernetes, it's just standard operating procedures. There's nothing surprising about it, necessarily. But either it's some combination of the financial services world, just the nature of proprietary software, but also the concept of software as a service, SaaS, which is very different than Ubuntu or Kubernetes or Open Stack, which is released software, right. We ship software at the end of an Ubuntu cycle or a Kubernetes cycle. It's very different when you're a software as a service platform, and it's a matter of rolling out to production some changes, and those changes then going live. So, I wrote a post mainly to give some transparency, largely to our clients, our correspondents. We've got a couple hundred customers that use the Apex platform. I've met with many of them in a sort of one-on-many, one-to-one, one-on-many basis, where I'll show up and deliver the product road map, a couple of product managers will come and do a deep dive. Part of what we communicate to those customers is around, now, around our release cycles, and to many of them, it's a foreign concept that they've just never seen or heard before, and so I put together the blog post. We shared it internally, and educated the teams, and it was well-received. We shared it externally privately with a number of customers, and it was well-received, and a couple of them, actually a couple of the Silicon Valley based customers said, hey, why don't you just put this out there on Medium or on your blog or under an Apex banner, because this actually would be really well-received by others in the family, other partners in the family. So I'm happy to kind of dive into a couple of the key principles here, and we can sort of talk through it if you're interested, John. >> Well, I think the main point is you guys have a release cycle that is the speed of open source to SaaS, and fintech, which again, proprietary stuff is slower, monolithic. >> Yeah, the key principle is that we've taken this, and we've made it predictable and transparent, and we commit to these cycles. You know, most people maybe familiar with Ubuntu releasing twice a year, right, April and October, Ubuntu has released every April and October since 2004. I was involved with Ubuntu between 2008 and 2018 as an engineer, an engineering manager, and then a product manager, and eventually a VP of product at Canonical, and that was very much my life for 10 years, oriented around that. In that time, I spent a lot of time around Open Stack, which adopted a very similar model. Open Stack's released every six months, just after the Ubuntu release. 
A number of the members of the technical team and the committee that formed Open Stack came out of either Ubuntu or Canonical or both, and really helped influence that community. It's actually quite similar in Kubernetes, which developed independently, generally, of Ubuntu. Kubernetes releases on a quarterly basis, about every three months, and again, it's the sort of thing where it's just a cycle. It happens like clockwork every three months. So when I joined Apex and took a look at a number of the needs that we had, our correspondents had, our relationship managers, our sales team, the client-facing people in the organization, one of the biggest items that bubbled straight to the top is our customers wanted more transparency into our road maps, tighter commitments on when we're going to deliver things, and the ability to influence those. And you know what, that's not dissimilar from any product manager's plight anywhere in the industry. But what I was able to do is take some of those principles that are common around Ubuntu and Kubernetes and Open Stack, which by the way, are quite familiar. We use a lot of Ubuntu and Kubernetes inside of Apex, and many of our correspondents are quite familiar with those cycles, but they'd never really seen or heard of a software as a service, a SaaS vendor, using something like that. So that's what's new. >> You've got some cycles going now. You've got schedules, so just looking here, just to get this out there, 'cause I think it's data. You did it last year in October, November, mid-cycle in January of this year. You've got a couple summits coming up? >> Yeah, that's right, we've broken it down into three cycles per year, three 16-week cycles per year. So it's a little bit more frequent than the twice-a-year Ubuntu, not quite as frenetic as the quarterly Kubernetes cycles. 16 weeks times three is 48. That leaves us four weeks of slack, really to handle Thanksgiving and Christmas and end-of-year holidays, Chinese New Year, whatever might come up. I'll tell you from experience, that's always been a struggle in the Ubuntu and Open Stack and Kubernetes world, it's hard to plan around those cycles, so what we've done here is we've actually just allocated four weeks of a slush fund to take care of that. We're at three 16-week cycles per year. We version them according to the year and then an iterator. So 20A, 20B, 20C are our three cycles in 2020, and we'll do 21A, B, and C next year. Each of those cycles has three summits. So to your point about how we get together, back before everyone stopped traveling, we very much enjoyed twice a year getting together for KubeCon. We very much enjoyed the Open Stack summits and the various Ubuntu summits. Inside of a small company like ours, these were physical. We'd get together in Dallas or New York or Chicago or Portland, which are the four places we have offices. We were doing that basically every six weeks or so for one of these summits. Now they're all virtual. We handle them over Zoom. When they were physical, we'd do the summit in about three days of packed agendas, Tuesday, Wednesday, Thursday. Now that we've gone to virtual, we've actually spread it a little bit thinner across the week, and so we've poked some holes in the day, which has been an interesting learning experience, and I think we're all much happier with the most recent summit we did, spreading it over the course of the week, accounting for time zones, giving ourselves, everyone, lunch breaks and stuff.
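The arithmetic behind that cadence is simple enough to sketch out. The snippet below is a toy illustration of the scheme described here, three 16-week cycles labeled with the year plus a letter and four weeks held back as slack; the start date it uses is made up, not Apex's actual calendar.

```python
# Toy sketch of the cycle scheme described above: three 16-week cycles per
# year (20A, 20B, 20C, then 21A ...), with 52 - 48 = 4 weeks left over as
# slack for holidays. The start date below is illustrative only.
from datetime import date, timedelta

CYCLE_WEEKS = 16
CYCLES_PER_YEAR = 3

def cycles(year, first_start):
    """Yield (label, start, end) for each 16-week cycle in a given year."""
    start = first_start
    for i in range(CYCLES_PER_YEAR):
        label = f"{year % 100}{chr(ord('A') + i)}"   # e.g. 20A, 20B, 20C
        end = start + timedelta(weeks=CYCLE_WEEKS) - timedelta(days=1)
        yield label, start, end
        start = end + timedelta(days=1)

slack_weeks = 52 - CYCLE_WEEKS * CYCLES_PER_YEAR     # the four-week "slush fund"

for label, start, end in cycles(2020, date(2020, 1, 6)):   # hypothetical start
    print(label, start, "->", end)
print("slack weeks:", slack_weeks)
```

Labeling by year plus an iterator, rather than by exact dates, is what lets customers talk about "20B" the way open-source users talk about a release, without pinning the conversation to a specific calendar week.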
>> Well, we'll have to keep checking in. I want to certainly collaborate with you on the virtual, digital events, check your progress. We're all learning, and iterating, if you will, on the value that you can get with these digital ones. Trying to get the success you have with physical, not always easy. Appreciate it, and you're looking good, looking good and safe. Stay safe, and great to check in with you, and congratulations on the new opportunity. >> Yeah, thanks, John. >> Appreciate it. Dustin Kirkland, chief product officer at Apex Clearing. I'm John Furrier with theCUBE, checking in with a remote interview during this time when we are getting all the information and best practices on how to deal with this new at-scale shift to digital, that is impacting, and opportunities are there, certainly a lot of challenges, and hopefully the healthcare, the finance, and the business models of these companies can continue and get back to work soon. But certainly, the people are still sheltered in place, working hard, being creative, and we're bringing you the coverage here on theCUBE. I'm John Furrier, thanks for watching. (bright electronic music)

Published Date : Apr 6 2020


Bryan Liles, VMware | KubeCon + CloudNativeCon NA 2019


 

>> Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to San Diego. I'm Stu Miniman and my cohost is Justin Warren. And coming back to our program, one of our Cube alumni and the co-chair of this KubeCon CloudNativeCon, Bryan Liles, who is also a senior staff engineer at VMware. Bryan, thanks so much for joining us. >> Thanks for having me on. >> And we do want to have a shout out, of course, to Vicky Chung, who is your co-chair. She has been doing a lot of work. She came to our studio ahead of it to do a preview, and unfortunately she's supposed to be sitting here but is a little under the weather. And we know there's nothing worse than, you know, doing travel and, you know, fighting an illness. But she's a little sick today, but, um, uh, she knows that we'll, we'll, we'll still handle it. All right, so Bryan, 12,000 people here in attendance, uh, more keynotes than most of us can keep track of. So, first of all, um, congratulations. Uh, things seem to be going well, other than maybe, uh, choosing the one day of the year that it rained in, uh, you know, San Diego, uh, which we can't necessarily plan for. Um, I'd love you to bring us a little bit of insight as to some of the, the, the goals and the themes that, uh, you know, you and Vicky and the, the, the community were, were looking at for, for this KubeCon. >> So you're right, twelve thousand people and so many sponsors and so many ideas and so many projects, it's really hard to have a singular theme. But a few months ago what we came up with was, well, if, if Kubernetes and this cloud software make us better, or basically advance, then we can do more advanced things. And then our end users can be more advanced. And it was like a three-prong thing. And if you go back and look at our keynotes, you would see, hey, we're looking at our software. Hey, we're looking at amazing things that we did, especially capped by that 5G keynote yesterday. And the keynotes that we had, it was me talking about how we could look forward, and then, and then the ones we had talking about security, and then we had Walmart and Target talking about how they're using it, and, and that was all on purpose. It's trying to tell a story that people can go back and look at. >> Yeah, I liked the, the message that you were, you were trying to put out there around how we need to make Kubernetes a little bit easier, but how we need to change the way that we talk about it as well. So maybe you could, uh, fill us in a little bit more. >> Let's say, unfortunately, Kubernetes is not going to get any easier, um, that's like saying we wish Linux was easier to use. Um, Linux has a huge ABI and API interface. It's not going to get easier. So what we need to do is start doing what we did with Linux, and Linux is the kernel. Um, there have been some wars over the years, and you notice some distributions are easier to use than another. So if you use the current Fedora or you use the current Ubuntu or even like Mint, it's getting really easy to use. And I'm not suggesting that we need Kubernetes distributions. That's actually the furthest thing, but we do need to work on building our ecosystem on top of Kubernetes, because, as I mentioned, like CI/CD, um, observability, security, audit management, and who knows what else. We need to start thinking about those things as pretty much first-class items.
Yeah. Um, in the keynotes, there's, as you said, there's such a broad landscape here. Uh, uh, I've heard some horror stories that people like, Oh, Hey, where do I start? And they're like, Oh, here's the CNCF landscape. And they're like, um, I can't start there. There's too much there. Uh, you, you picked out and highlighted, um, some of the lesser known pieces. Uh, th there's some areas that are a little bit mature. What, what are some of the more exciting things that you've seen going on right now, your system and this ecosystem? >> Um, I'm not even gonna. I highlighted open policy agent as a, as an interesting product. I don't know if it's the right answer, actually. I kind of wish there was a competitor just so I could determine if it was the right answer. >>But things like OPA and then like open telemetry, um, two projects coming together and having even bigger goals. Uh, let's make a severability easy. What I would also like to see is a little bit more, more maturity and the workflow space. So, you know, the CII and CD space. And I know with Argo and flux merging to Argo flux, uh, that's very interesting. And just a little bit of a tidbit is that I, I also co-chair the CNCF SIG application delivery, uh, special interest group, but, uh, we're thinking about that, that space right there. So I would love to see more in the workflow space, but then also I would like to see more security tools and not just old school check, check, check, but, um, think about what Aqua security is doing. And I'm, I don't know if they're now Snick or S, I don't know how to say it, but, um, there's, there's companies out there rethinking security. >>Let's do that. Yeah. I spoke to Snick a couple of days ago and it's, I'm pretty sure it's sneak. Apparently it stands for, so now you know, which that was news to me that, so now I know interesting. But they have a lot of good projects coming up. Yeah. You mentioned that the ecosystem and that you like that there's competitors for particular projects to kind of explore which way is the right way of doing things. We have a lot of exhibitors here and we have a lot of competitors out there trying to come into this ecosystem. It seems to actually be growing even bigger. Are we going to see a period of consolidation where some of these competing options, we decided that actually no, we don't want to use that. We want to go over here. I mean according to crossing the chasm, yes, but we need to figure out where we are on the maturity chart for, for the whole ecosystem. >>So I think in a healthy, healthy ecosystem, people don't succeed and products go away, but then what we see is in maybe six months or a year or two later, those same founders are out there creating new products. So not everyone's going to win on their first shot. So I think that's fine because, you know, we've all had failures in the past, but we're still better for those failures. Yeah, I've heard it described as a kind of Cambridge and explosion at the moment. So hopefully we don't get an asteroid that comes in and, uh, and hopefully it is out cause yeah. Um, one of the things really, really noticed is, uh, if you went back a year or even two years ago, we were talking about very much the infrastructure, the building blocks of what we had. Uh, I really noticed front and center, especially in the keynote here, talking a lot about the workload. >>You're talking about the application. 
We're talking about, uh, you know, much more up the stack and uh, from kind of that application, uh, uh, piece down, even, uh, some friends of mine that were new to this ecosystem was like, I don't understand what language they're talking. I'm like, well, they're talking to the app devs. That's why, you know, they're not speaking to you. Is that, was that intentional? >> Well, I mean for me it is because I like to speak to the app devs and I realized that infrastructure comes and goes. I've been doing this for decades now and I've seen the rise of Cisco as, as a networking platform and I've seen their ups and downs. I've worked in security. But what I know is fundamentals are, are just that. And I would like to speak to the developers now because we need to get back to the developers because they create the value. >>I mean the only people who win at selling via our selling Kubernetes are vendors of Kubernetes. So, you know, I work for one and then there's the clouds and then there's other companies as well. So the thing that stays constant are people are building applications and ultimately if Kubernetes and the cloud native landscape can't take care of those application developers remember happened, remember, um, OpenStack, and not in like a negative way, but remember OpenStack, it got to be so hard that people couldn't even focus on what gave value. >> Unlike obvious fact leaves on it. It's still being used a lot in, in service providers and so on. So technology never really goes away completely. It just may fade off and live in a corner and then we move on to whatever's the next newest and greatest thing and then end up reinventing ourselves and having to do all of the same problems again. >>It feels a little bit like that with sometimes the Kubernetes way where haven't we already sold this? Linux is still here, Linux is still, and Linux is still growing. I mean Linux is over Virgin five right now and Linux is adapting and bringing in new things in a Colonel and moving things out to the user land. Kubernetes needs to figure out how to do that as well. Yeah, no Brian, I think it's a great point. You know, I'm an infrastructure guy and we know the only reason infrastructure exists is to serve up that application. What Matt managed to the business, my application, my data. Um, you and your team have some open source projects that you're involved in. Maybe give us a little bit about right? So oxen is a, so let me tell you the quick story. Joe Beda and I talked about how do we approach developers where they are. >>And one thing came up really early in that conversation was, well, why don't we just tell developers where things are broken? So come to find out using Kubernetes object model and a little bit of computer science, like just a tiny little bit. You can actually build this graph where everything is connected and then all you need to do then is determine if for any type of object, is it working or is it not working? So now look at this. Now I can actually show you what's broken and what's not broken. And what makes octane a little bit different is that we also wrapped it with a dashboard that shows everything inside of a Kubernetes cluster. And then we made it extensible. And just, just a crazy thing. I made a plugin API one weekend because I'm like, Oh, that would be kind of cool. And just at this conference alone, nine to 10 people to walk up to me and said, Oh, um, we use oxygen and we use your plugin system. 
>> And now we've done things that I can't imagine, and I think I might've said this, I know I've said it somewhere recently, but the hallmark of a good platform is when people start creating things you could never imagine on it. And that's what Linux did. That's what Kubernetes is doing. And Octant is doing it in the small right now. So kudos to, not really me, but my team, that's really exciting. >> So, for Octant, Kubernetes and Tanzu both are seven-sided. Uh, was that, that, uh, moving to, uh, to eight? >> Uh, so, no marketing, okay, and I don't profess to understand what marketing is. Someone just named it. And I said, you know what, I'm a developer. I don't really mind, as long as you can call it something, that's fine. I do like the idea that we should evolve the number of platonic solids. There's another answer too. So if you think about what seven is, it, um, people were thinking ahead and said, well, someone could actually take that and use it as another connotation. So I was like, all right, we'll just get out of that. That's why it's called Octant, but still a nautical theme. >> Okay, great. Bryan, so much going on. You know, even outside of this facility, there's things going on. Uh, any hidden gems that, just, you know, our audience that's watching, or people that will look back at this event and say, hey, you know, here's some cool little things there. I mean, they hit the Twitters, I'm sure they'll see the therapy dogs and whatnot, but you know, for the people geeking out, some of those hidden gems that you'd want to share. >> Um, some of the hidden gems, I'll, I'll throw out two. Um, watch what these end-user companies are doing, and watch what, like, the advanced companies like Walmart and Target and Capital One are doing. I just think there's a lot of lessons to be learned, and think about this. They have a crazy amount of money. They're actually investing time in this. It might be a good idea. And other hidden gems are, are companies that are embracing the, the extension model of Kubernetes through custom resource definitions and building things. So the other day I had Vitess on, on the stage, and they're not the only example of this, but running MySQL on Kubernetes, and it pretty much works, so, well, let's see what we can run with this. So I think that there's going to be a lot more companies that are going to invest in this space and, and, and actually deliver on these types of products. And, and I think that's a very interesting space. >> Yeah. We, we spoke to Bloomberg just before, and uh, we talked to Vitess, we spoke to Sugu from Vitess yesterday. Uh, seeing how people are using Kubernetes to build these systems, which can then be built upon themselves. Right. I think that's, that's probably, for me, one of the more interesting things, is that we end up with a platform and then we build more platforms on top of it. But we, we're creating these higher levels of abstraction, which actually gets us closer to just being able to do the work that we want to do as developers. I don't need to think about how all of the internals work, which, again, to your keynote today, is like, I don't want to write machine code, I just want to solve this sort of business problem. If we can embed that into the, into this ecosystem, then it just makes everyone's lives much, much easier. >> So basically, that is my secret. I'm really, I know people say they hate abstractions, and they say they will, but no one hates an abstraction.
You don't actually turn the crank in your motor to make the car run. You press the accelerator and it goes. Yeah. Um, so we need to figure out the correct abstractions, and we do that through iteration and failure, but I'm liking that people are pushing the boundaries, and uh, like Joe Beda and Kelsey Hightower said, Kubernetes is a platform of platforms. It is basically an API for writing APIs. Let's take advantage of that and write APIs. >> All right. Well, Bryan, thank you. Thank Vicky. Uh, please, uh, you know, share, congratulations to the team for everything done here. And while you might be stepping down as co-chair, we do hope you'll come and join us back on theCUBE at a future event. >> No, I enjoyed talking to you all, so thank you. >> All right, thanks so much, Bryan. For Justin Warren, we'll be back with more of our wall-to-wall coverage of KubeCon CloudNativeCon here in San Diego. Thanks for watching theCUBE.

Published Date : Nov 21 2019


Breaking Analysis: Dell Technologies Financial Meeting Takeaways


 

>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! Now here's your host, Dave Vellante. >> Hi, everybody, welcome to this Cube Insights, powered by ETR. In this breaking analysis I want to talk to you about what I learned this week at Dell Technology's financial analyst meeting in New York. They gathered all the financial analysts, Rob Williams hosted it, he's the head of IR, Michael Dell of course was there. They had Dennis Hoffman who is the head of strategic planning, Jeff Clarke who basically runs the business and Tom Sweet, of course, who was the star of the show, the CFO, all the analysts want to see him. Dell laid out its longterm goals, it provided much clearer understanding of its strategic direction, basically focused on three areas. Dell believes that IT is getting more complex, we know that, they want to capitalize on that by simplifying IT. We'll talk about that. And then they want to position for the wave of digital transformations that are coming and they also believe, Dell believes, that it can capitalize on the consolidation trend, consolidating vendors, so I'll talk about each of those. And so let me bring up the first slide, Alex, if you would. The takeaways from the Dell financial analyst meeting. Let me share with you the overall framework that Tom Sweet laid out. And I have to say, the messaging was very consistent, these guys were very well-prepared. I think Dell is, from a management perspective, very well-run company. They're targeting three to 5% growth on what they're saying is a 4% GDP forecast. Or sorry, 4%, I have GDP here, it's really 4% industry growth. GDP's a little lower than that obviously. So this is IDC data, Gartner data, 4% industry growth. So that's an error on my part, I apologize. The strategies to grow relative to their competition. So grow share on a relative basis. So whatever the market does, again, not GDP, but whatever the market does, Dell wants to grow faster than the market. So it wants to gain share, that's its primary metric. From there they want to grow operating income and they want to grow that faster than revenue, that's going to throw off cash. And then they're going to also continue to delever the balance sheet. I think they paid down 17 billion in debt since the EMC acquisition. They want to get to a two X debt to EBITA ratio within 18 months. And what they're saying is, you know, they talked about, Tom Sweet talked about this consistent march toward investment-grade rating. They've been talkin' about that for awhile. He made the comment, we don't need to have a triple A rating but we want to get to the point where we can reduce our interest expense, and that will, 'cause they'll drop right into the bottom line. So they talked about these various levers that they can turn, some of them under the P and L, gaining share, some are their operating structure and their organizational structure, and one big one is obviously their debt structure. The other key issue here is will this cut the liquidity discount that Dell faces? What do I mean by that? Well, VMware has about a $60 billion valuation. Dell owns about 80% of VMware, which would equate to 48 billion. But if you look at Dell's market cap, it's only 37 billion. So it essentially says that Dell's core business is worth minus 11 billion. We used to talk about this when EMC owned VMware. Its core business only comprised about 40% of the overall value of the company, in this case because of the high debt, Dell has a negative value. 
And it's not just the high debt. Michael Dell has control over the voting shares, it's essentially a conglomerate structure, there's very high debt, and it's a relatively low margin business, notwithstanding VMware. And so as a result, Dell trades at a discount relative to what you would think it should trade at, given its prominence in the market, $92 billion company, the leader in every category under the sun. So that's the big question is can Dell turn these levers, drop EBITA or cash to the bottom line, affect operating income, and then ultimately pay down its debt and affect that discount that it trades at? Okay, bring up, if you would, Alex, the next slide. Now I want to share with you the takeaways from the Dell line of business focus. This really was Jeff Clarke's presentations that I'm going to draw from. Servers, we know, they're softer demand, but the key there is they're really faced tough compares. Last year, Dell's server business grew like crazy. So this year the comparisons are lessened. But there's less spending on servers. I'll share with you some of the ETR data. Storage, they call it holding serve, you saw last quarter I did an analysis, I took the ETR data and the income statement, it showed Pure was gaining share at like 22% growth from the income statement standpoint. Dell was 0% growth but is actually growing faster than its competitors. With the exception of Pure. It's growing faster than the market. So Dell actually gained share with 0% growth. Dell's really focused on consolidating the portfolio. They've cut the portfolio down from 80, I think actually the right number is 88 products, down to 20 by May of 2020. They've got some new mid-range coming, they've just refreshed their data protection portfolio, so again, by May of next year, by Dell Technologies World they'll have a much, much more simplified portfolio. And they're gaining back share. They've refocused on the storage business. You might recall after the acquisition, EMC was kind of a mess. It was losing share before the acquisition, it was so distracted with all the Elliott Management stuff goin' on. And kind of took its eye off the ball, and then after the acquisition it took awhile for them to get their act together. They gained back about 375 basis points in the last 18 months. Remember a basis point is 1/100th of 1%. So gaining share and their consistent focus on trying to do that. Their PC business, which is actually doin' quite well, is focused on the commercial segment and focused on higher margins. They made the statement that the PCs are kind of undersupply right now so it's helping margins. There's a big focus in Jeff Clarke's organization on VMware integration. To me this makes a lot of sense. To the extent that you can take the VMware platform and make Dell hardware run VMware better, that's something that is an advantage for Dell, obviously. And at the same time, VMware has to walk the fine line with the ecosystem. But certainly it's earned the presence in the market now that it can basically do what I just said, tightly integrate with Dell and at the same time serve the ecosystem, 'cause frankly, the ecosystem has no choice. It must serve VMware customers. The strategy, essentially, is to, as I say, capitalize on vendor consolidation, leverage value across the portfolio, so whether it's pivotal, VMware integration, the security portfolio, try to leverage that and then differentiate with scale. And Dell really has the number one supply chain in the tech business. 
Something that Dave Donatelli at HP, when he was at HP, used to talk about. HPE doesn't really talk about that supply chain advantage anymore 'cause essentially it doesn't have it. Dell does. So Jeff Clarke's reorganization, he came in, he streamlined the organization, really from the focus on R and D to product to collaboration across the organization and the VMware integration. I actually was quite impressed, when I first met Jeff Clarke, I guess two years ago now, with what he and the organization have accomplished since then. No BS kind of person. And you can see it's starting to take effect. So we'll keep an eye on that. The next slide I want to show you, I want to bring in the ETR data. We've been sharing with you the ETR spending intention surveys for the last couple of weeks and months. ETR, Enterprise Technology Research, they have a data platform that comprises 4,500 practitioners that share spending data with them. CIOs, IT managers, et cetera. What I'm showing here is a cut of the server sector. So I'm going to drill down into server and storage. So these are spending intentions from the July survey asking about the second half of 2019 relative to the first half of 2019. And this is a drill-down into the giant public and private firms. Why do I do that? Because, in meeting with ETR, this is the best indicator. So it's big, big public companies and big private companies. Think Uber. Private companies that spend a ton of dough on IT. UPS before it went public, for example. So those companies are in here. And they're, according to ETR, the best indicators. What this chart shows, so the bars show, and I've shared this with you a number of times, the lime green is we're adding, we're new to this platform, we're new adoption. The evergreen is we're spending more, the gray is we're spending the same, the light red or pink is we're spending less, and the dark red is we're leaving the platform. So if you subtract the red from the green you get what's called a net score, and that's that blue line. And this is the overall server spending intentions from that July survey. The N is about 525 respondents out of the 4,500. And this is, again, those that just answered the question on server. So you can see the net score on server spend is dropping. And you can see the market share on server is dropping. The takeaway here is that servers, as a percentage of overall IT spend, are on a downward slope, and have been for quite some time, back to the January '16 survey. Okay, so that's servers. Let's take a look at the same data for storage. So if, Alex, if you bring up the storage sector slide, you can see kind of a similar trend. And I would argue what's happening here is a couple of things. You've got the cloud effect, I'll talk about that some more, and you've also got, in this case, the flash, all-flash array effect. What happened was you had all-flash arrays and flash come into the data center, and that gave performance a huge headroom. Remember, spinning disk was the last bastion of mechanical movement and it was the main bottleneck in terms of overall application performance. IO was the problem. Well, you put a bunch of flash into the system and it gives a lot of headroom. People used to over-provision capacity just for performance reasons. So flash has had the effect of customers saying, hey, my performance is good, I don't need to over-provision anymore, I don't need to buy so much. So that combined with cloud, I think, has put downward pressure on the storage business as well.
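For readers who want the net score mechanic spelled out, here is a small sketch of the calculation as it is described above: the green responses (new adoption plus spending more) minus the red responses (spending less plus leaving), taken as a share of all respondents. The survey categories are ETR's; the response counts in the example are invented purely to show the arithmetic.

```python
# Net score as described above: subtract the red (spending less + leaving)
# from the green (new adoption + spending more), as a share of all
# respondents. The counts below are made up for illustration only.
def net_score(adding, spending_more, flat, spending_less, leaving):
    total = adding + spending_more + flat + spending_less + leaving
    green = adding + spending_more
    red = spending_less + leaving
    return 100.0 * (green - red) / total

# Hypothetical cut of ~525 respondents answering on a single sector
print(round(net_score(adding=40, spending_more=180, flat=220,
                      spending_less=60, leaving=25), 1))   # -> 25.7
```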
Now the next slide, Alex, that I want you to bring up is the vendor net scores, the server spending intentions. And what I've done is I've highlighted Dell EMC. Now what's happening here in the slide, and I realize it's an eye chart, but basically where you want to be in this chart is in the left-hand side. What it shows is the spending intentions and the momentum from the October '18, which is the gray, the April '19, which is the blue, and then the July '19 which is the most recent one. Again, the end is 525 in the servers for the July '19 survey. And you can see Dell's kind of in the middle of the pack. You'd love to be in the left-hand side, you know, Docker, Microsoft, VMware, Intel, Ubuntu. And you don't want to be on the right-hand side, you know, Fujitsu, IBM, is sort of below the line. Dell's kind of in the middle there, Dell EMC. The next slide I want to show you is that same slide for storage. And again, you can see here is that on-- So this is vendor net scores, the storage spending intentions. On the left-hand side it's all the high growth companies. Rubrik, Cohesity, Nutanix, Pure, VMware with vSAN, Veeam. You see Dell EMC's VxRail. On the right-hand side, you see the guys that are losing momentum. Veritas, Iron Mountain, Barracuda, HitachiHDS, Fusion-io still comes up in the survey after the acquisition by Western Digital. Again, you see Dell EMC kind of holding serve in the middle there. Not great, not bad. Okay, so that's kind of just some other ETR data that I wanted to share. All right, next thing we're going to talk about is the macros market summary. And Alex, I've got some bullet points on this, so if you bring up that slide, let me talk about that a little bit. So five points here. First, cloud continues to eat away at on-prem, despite all this talk about repatriation, which I know does happen. People try to throw everything to the cloud and they go, whoa! Look at my Amazon bill, yeah, I get that. That's at the margin. The main trend is that cloud continues to grow. That whole repatriation thing is not moving the on-prem market. On-prem is kind of steady eddy. Storage is still working through that AFA injection. Got a lot of headroom from performance standpoint. So people don't need to buy as much as they used to because you had that step function in performance. Now eventually the market will catch up, all this digital transformation is happening, all this data is flowing through the system and it will catch up, and the storage market is elastic. As NAN prices fall, people will, I predict, will buy more storage. But there's been somewhat of a lull in the overall storage market. It's not a great market right now, frankly, at the macro level. Now ETR does these surveys on a quarterly basis. They're just about to release the October survey, and they put out a little glimpse on Friday about this survey. And I'll share some bullet points there. Overall IT spending clearly is softening. We kind of know that, everybody kind of realizes that. Here's the nuance. New adoptions are reverting to pre-2018 levels, and the replacements are rising. What does this mean? So the number of respondents that said, oh yes, we're adopting this platform for the first time is declining, and the replacements are actually accelerating. Why is that? Well I was at ETR last week and we were talking about this and one of the theories, and I think it's a good one, is that 2016, 2017 was kind of experimentation around digital transformation. 
2018, people started to put things into production or closer to production, they were running systems in parallel, and now they're making their bets, they're saying, hey, this test worked, let's put this heavy into production in 2019, and now we're going to start replacing. So we're not going to adopt as much stuff 'cause we're not doing as much experimentation. We're going to now focus and narrow in on those things that are going to drive our business, and we're going to replace those things that aren't going to drive our business. We're going to start unplugging them. So that's some of what's happening. Another big trend is Microsoft. Microsoft is extending its presence throughout. They're goin' after collaboration, you saw the impact that they had on Slack and Slack stock recently. So Slack Box, Dropbox, are kind of exposed there. They're goin' after security, they've just announced a SIM product. So Splunk and IBM, they're kind of goin' after that base. The application performance management vendors. For instance, New Relic. Microsoft goin' after them. Obviously they got a huge presence in cloud. Their Windows 10 cycle is a little slower this time around, but they've got other businesses that are really starting to click. So Microsoft is one of the few vendors that really is showing accelerated spending momentum in the ETR data. Financial services and telcos, which are always leading spender indicators, are actually very weak right now. That's having a spillover effect into Europe, which is over-banked, if I can use that term. Banking heavy, if you will. So right now it's not a pretty picture, but it's not a disaster. I don't want to necessarily suggest this as like going back to 2007, 2008, it's not. It's really just a matter of things are softening and it's, you know, maybe taking a little breath. Okay, so let me summarize the meeting overall. Again, it was a very well-run meeting. Started at 9:00, ended at 12:00, bagged lunch, go home. Nice and crisp. So these guys are very well-prepared. I think, again, Dell is a extremely well-managed company. They laid out a much clearer vision for Wall Street of its strategy, where it's headed. As they say, they're going after IT complexity. I want to make a comment on this. You think about Legacy EMC. Legacy EMC was not the company that you would expect to deal with complexity. In fact, they were the culprit of complexity. One of the things that Jeff Clarke did when he came in, he said, this portfolio's too complex, needs to be simplified. Joe Tucci used to say, overlap is better than gaps. Jeff Clarke said we got too much overlap. We don't have a lot of gaps so let's streamline that portfolio. Taking advantage of vendor consolidation, this is an interesting one. Ever since I've been in this business, which has been quite a long time now, I've been hearing that buyers want to consolidate the number of vendors that they have. They've really not succeeded in doing that. Now can they do that now 'cause there are less vendors? Well, in a sense, yes, there are less sort of on-prem big vendors. EMC's no longer in the market, you don't have companies like Sun and Digital anymore, Compact is gone. HP split in two, but still. You're not seeing a huge number of new vendors, at scale, come into the market. Except you've got AWS and Google as new players there. 
So I think that injects sort of a new dynamic that a lot of people like to put cloud aside and kind of ignore it and talk about the old on-prem business, but I think that you're going to see a lot of experimentations and workload ins and outs, particularly with AWS and Google and of course Azure, which is in itself, their cloud is almost a separate force. So we'll see how that shakes up. As I say, servers right now, Dell's got a very tough compare. I think Dell will be fine in the server space. Storage, it's all about simplifying the portfolio, they've got a refreshed portfolio focused on regaining share. They've rebranded everything Power, so their whole line is going to be Power by, if it's not already, by May of next year, Dell Technologies World. It's a much more scalable portfolio. And I think Dell's got a lot of valuation levers. They're a $92 billion company, they've got their current operations, their current P and L, their share gains, their cross-company synergies, particularly with VMware, they can expand their TAM into cloud with partnerships like they're doing with AWS and others, Google, Microsoft. The Edge is a TAM expansion opportunity to them. And also corporate structure. You've seen them. VMware acquired Pivotal. They're cleaning that up. I'm sure they could potentially make some other moves. Secureworks is out there, for example. Maybe they'll do some things with RSA. So they got that knob to turn and they can delever. Paying down the debt to the extent that they can get back to investment grade, that will lower their interest rates, that'll drop right to the bottom line, and they'll be able to reinvest that. And Tom Sweet said, within 18 months, we'll be able to get there with that two X ratio relative to EBITA, and that's when they're going to start having conversations with the rating agencies to talk about you know, hey, maybe we can get a better rating and lower our interest expense. Bottom line, did Wall Street buy the story? Yes. But I don't think it's going to necessarily change anything in the near term. This is a show me from Missouri, prove it, execute, and then I think Dell will get rewarded. Okay, so this is Dave Vellante, thanks for watching this Cube Insights powered by ETR. We'll see ya next time. (electronic music)

Published Date : Sep 27 2019


Mark Shuttleworth, Canonical | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE. Covering KubeCon + CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back to theCUBE coverage here at KubeCon + CloudNativeCon. I'm Stu Miniman, my co-host is Corey Quinn. And happy to welcome back to the program Mark Shuttleworth, who is the CEO of Canonical. Of course, the orange shirts of Ubuntu are seen all throughout the show. Mark, thank you so much for joining us, great to see you. >> Great to see you. >> All right, so for years, actually, we've had these conversations at the OpenStack Summit. It's interesting that every time you mention it around this show you get snark online, as like, it is dead, Kubernetes killed it, and it's like, wait, no, no, you know, we're talking about a couple of open-source projects. I've been talking to people, especially in the telco space, that's like, oh yeah, well no, we just run OpenStack underneath and Kubernetes on top and put all things together. Give us a little bit of your broad view of some of these big trends, and open-source monoliths and microservices and all these pieces, how they all kind of fly together. >> Yeah, I think if you're in the Reddit subchannels, then you know it can feel a bit like turf war, and gangster-type, free software riffing, right. But the reality is, OpenStack solves business problems for people. They want large scale, virtualized infrastructure that's cheaper than VMware. We are deploying OpenStacks in enterprise environments at double the scale and double the speed, in other words, like twice as many every month, as we were a year ago. I think people have gotten comfortable with the idea that Kubernetes is an application operations construct. I think we will see virtualization blur into the Kubernetes world, but mainly for security reasons. So I want deeper isolation of applications that come from third-party vendors, for example. And I'm willing to trade performance for isolation, in circumstances where I am bringing in third-party code into my private infrastructure. After we see a couple of significant security compromises, I mean, we saw the GitHub compromise. If you shave that yak, it gets to a very uncomfortable place of, what are we actually running as root all over our data centers with Docker and Docker Hub? So, people are going to want that kind of isolation of containers, and the Kata Containers work is going to bring that. But that's very different to the proposition of, essentially, give me large scale machine virtualization, which OpenStack addresses. OpenStack hasn't done itself any favors, don't need to go into that here. But nonetheless, as far as we're concerned, it's straightforward to deliver large scale, low cost, enterprise virtualization infrastructure for telcos or IT use cases. >> Let's get into this ecosystem here. I want to say the Cloud Native ecosystem, and I say that specifically because there are some that look at this and they say, oh, there's dozens of projects now, Kubernetes is a platform against platform. Somebody even mentioned the word big tent once. We've seen some projects merging, we've seen some various pieces. >> I saw "making a bigger tent" in the keynote and I was like, not my favorite choice of words. >> I seem to remember a certain article that you wrote poking a hole in the big tent thing. What's the same, what's different? What's your take on this? Is it an ecosystem? Is it Kubernetes and friends, as Corey has liked to say here? What's your take?
>> Look, I think we're still trying to figure out what are the appropriate labels to attach to this kind of forum, it is a forum, right. There is a tremendous amount of value attached to being here, to the ideas that are getting bounced about. But I wouldn't call it a simple community in the sort of, traditional open-source sense. The reality is there's very serious money behind every, sort of, project that's been framed as a community project. This is a new kind of consortium. And that brings with it certain, delicate, political posturing and so on. But, nonetheless, it's a valuable place to be. It's definitely staking out important concepts and operational platforms, ideas, regimes, whatever you want to call it. This is going to be a fun week. >> I started off my career in the Linux world as a grumpy Unix administrator because there really wasn't any other kind. Then I started dipping my toes into the Linux world and something struck me, almost immediately, about Ubuntu: it was how welcoming everyone was in the community. There was no such thing as a stupid question. I asked the kind of questions you would expect from someone working on a computer, wearing a suit. People were very eager to embrace newcomers into that. It was one of the absolute best things that I saw coming out of Canonical, in addition to the software itself. I love that you're here as a part of this. What is the larger picture? What do you see in the Cloud Native ecosystem that's resonating with what Canonical's doing? >> So, the big thing that we do is, essentially, try to figure out what's possible with open-source that's hard to do, and then make it really straightforward so that more people can do the important stuff easily. That doesn't stop people from doing all the crazy stuff at the periphery that you can do with Ubuntu. It's generally easier with Ubuntu than any other platform. But we try to make the really most important things really easy for everybody. That's the first thing. The second thing is, we're a little non-judgemental about the fact that there are different perspectives on the same stuff. In the Ubuntu ecosystem, we make a point of including the GNOME guys, and the KDE guys, and the LXQt guys, and the MATE guys. The Ubuntu ecosystem is where they actually meet to hash out how they can do stuff in a way that means users get a real choice between those. There's a very similar role for us to play in an environment like this. It's kind of acronym soup out there. Like 50 new projects every KubeCon. They're all interesting, they're all important, there's a lot of overlap between them. There's work for us to do in figuring out which ones are going to be really more important in the tent. We did that very effectively with OpenStack. The people who rode the OpenStack wave with us haven't had to abandon their OpenStacks. Because the stuff that we really chose to make central and easy turned out to be the stuff that was the important poles in the tent. And we'll do exactly the same stuff here with Kubernetes. So, to put that into context, it's been real fun to be on the booth. We had just tons of people coming up and saying thank you for MicroK8s. MicroK8s is a single package of Kubernetes that works on lots of Linux distributions. It gives you, in about a minute, a standard Kubernetes environment that's pure upstream. That, for a developer, just lets you get productive immediately. Figure out these new development and application operations constructs.
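To make the "pure upstream in about a minute" point concrete, here is a minimal sketch, assuming the official kubernetes Python client is installed and the local kubeconfig points at the MicroK8s cluster (for example via the config MicroK8s can export). Because MicroK8s is conformant upstream Kubernetes, the same few lines work unchanged against an EKS, AKS, or GKE cluster.

```python
from kubernetes import client, config

# Load whatever cluster the local kubeconfig currently points at: a MicroK8s
# snap on a laptop, or a managed cloud cluster. The code does not change.
config.load_kube_config()

version = client.VersionApi().get_code()
print(f"Server version: {version.git_version}")

# List the worker nodes the same way on any conformant cluster.
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```

The portability discussed next is exactly this property: the client code and the container images target the Kubernetes API, not the distribution underneath.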
You can use it on an airplane, you can use it on a train. Of course, it's compatible with all of the public clouds, so that's the second thing that we're doing. We work with Amazon, with the EKS team, I spoke at their event on Monday. We work with Azure, the AKS team, we work with Google, we work with Oracle, we work with IBM. Essentially making sure that all of them offer Ubuntu worker nodes for their Kubernetes SaaS offerings. That means that the developer who's doing stuff on their workstation with MicroK8s can take those containers straight to any of the public clouds. So, we're not trying to force people to use a particular solution, we're saying, in all of those environments, there are going to be choices people have. We want to make that as easy as possible for them. We want to avoid unnecessary friction in that process. That kind of underlying culture is coming through in this forum, as well. >> We've had many conversations about how you've always tried to make the job of that developer really easy. One of the things we always look at on this show is how much of it is the infrastructure people, or the platform underneath, and the developer, and how much are they coming together. Anything different about this ecosystem? >> Very much so, yeah. >> Or your customers here that you can share? >> Kubernetes is an application construct. You can think of it as a next generation message bus. It's how components of an application find each other, communicate with each other, essentially, coordinate with each other. That makes it very tightly woven into the developer experience. By contrast, you can be sitting writing a Java application inside a bank and not know or care whether it's going to be running on a physical machine, a virtual machine or an OpenStack cloud. You just don't know, you don't care. It's too far away from the application. Kubernetes is right there. I think that's one of the really interesting things, is that it's bringing those infrastructure brains together with the application, app dev brains, in a very interesting way. It's going to be challenging. I wouldn't underestimate it, there are a lot of people, sort of, wandering around here, feeling a little confused, but that's okay. Do you know what I mean, the stuff shakes out. >> So, something that's been a recurring theme here has been the idea of going in a multi-cloud direction. Where people are talking about wanting to build workloads that they can seamlessly deploy across different providers. People talk about that, periodically, as a strategic goal but I'm not seeing people do it very often in the real world. You're in a much better position than a lot of us to see that. Is that something you're seeing people moving towards as an adoption? >> Well, yes. Because we work with all of the major public clouds to optimize Ubuntu there, in a way that I don't think any other Linux does. You get an optimized Amazon Ubuntu on Amazon. You get an optimized Azure Ubuntu on Azure, and so on. >> Going very deep in the Amazon ecosystem. Most of my customers are using Ubuntu far ahead of anything else out there. >> That's right. >> And it's the right answer for what they're doing. >> That's right. It gives them, essentially, the best of what Amazon's offering, and it still gives them the ability to feel like if they want to go somewhere else, they can. And that actually works well for Amazon.
In the early days, I think there was a little tension between us and the cloud guys, because they were saying, look, if people use Ubuntu then they can go somewhere else. Yes, but in a sense, that makes them more likely to be more relaxed about starting wherever they choose to start. We don't advise enterprises as to which cloud to use. We advise them to engage with those clouds and figure out their differences, they are different. Amazon's really good at some things that are different to what Microsoft is good at. Oracle is really good at some things which are different too. And what we're starting to see is the level of maturity in the enterprise governance process. They know they want to work with multiple clouds. They initially thought that was a straight kind of commodity exchange, competition thing. They now realize that it's a bit richer than that. That there are actually business reasons to have deeper relationships with particular clouds, based on what those clouds are prioritizing, and what they are prioritizing. So, we're not going to say you should use this cloud, you should use that cloud. Obviously, we can draw a distinction between the clouds where we're deeply engaged and the clouds where, you know, you just don't have the benefit of that. But, more importantly, we can say, you know, here is the set of practices that you can adopt internally that will give you comfort that you're getting the best out of those clouds, the ones that you've chosen. And you have the portability that you really need. The key turns out to be enabling your developers to use multiple clouds, and challenging the developers to do different phases of the development life cycle on different clouds. Develop on your private cloud or your workstation, use MicroK8s, for example. Do tests on one cloud. Do staging and production on a different cloud. Now you already know that that whole, seamless ecosystem works. If you want to go use a high value, proprietary function, effectively, on a cloud, that's a business decision and it's not a bad business decision. There are some spectacular capabilities from Amazon that are unique to Amazon. Or from Microsoft that are unique, or from Oracle that are unique to Oracle. They're spectacular. Those are business decisions to use them. There's other stuff that effectively you can give yourself optionality on. I wouldn't be black and white about that, put yourself in a position to make smart choices. And our best customers are getting there. PayPal, they're operating on Ubuntu in a very sophisticated way, across multiple public clouds and private infrastructure. >> All right, so Mark, we're five years into Kubernetes now. We've seen adoption grow, people feel there's a certain level of maturity here. There's always that concern that we've reached that peak and we're about to fall off the cliff. What do we need to worry about? What does the ecosystem need to do to make sure we continue along the stability and security that customers are looking for? >> There will be an overshoot regardless. I don't think there's any sort of leadership or governance approach that could avoid that. It's a little bit like, if your stock is going crazy. On the one hand, you're kind of happy. On the other hand, if you feel it's overvalued, it's a difficult sort of thing to say. You need to say, guys, you know what I mean, we're humans too. We've got our challenges to work through.
And no one likes volatility, but to a certain extent, there's always speculation and overshoot, and over-enthusiasm, and hype. Kubernetes will overshoot. There's a bunch of emperors walking around here that, frankly, have no clothes. My job, our job, is very calmly to sort the wheat from the chaff. Make sure that it's possible for people to experiment with everything. But that the stuff that we think has legs, effectively, is nicely integrated for people, that they have that for the long term, they won't regret things. We have a good track record of doing that. We've done it in the Linux desktop. We did it in OpenStack, we're doing it in public cloud. We've done it here in the Cloud Native world. I'd say things like AI are going in the same direction. Again, tons of complexity, tons of new options. Helping people effectively navigate through that is what we do very well. >> Yeah, one of the questions that I started to see as well, as we look at the way that these technologies continue to evolve, has been that, for better or worse, when developers are writing applications now, and even when infrastructure people are working, a lot of the things they care about, what operating system, let alone what distribution they're using, are increasingly slipping beneath the waves. People don't think about that as a primary area of focus anymore. And as, I guess, one of the foundational Linux vendors in this space, how are you seeing that evolving? And how does Canonical remain relevant in a world where suddenly, for people in a serverless future, "I just throw some code over somewhere else and it runs" is the limit of where most companies get involved? >> Yes, of course, we can point to the servers. And on the servers, we can point to the operating systems, and inside the containers, we can point to the operating systems, and underneath the serverless code, we can point to the language runtimes. So, the reality is that those things matter less and less to the developer. >> Yes. >> They still matter to the institution. So, I'm super comfortable with the language that says the OS doesn't matter. What it means is that that whole tangle is getting professionalized and abstracted. But to be confident in the abstractions, someone needs to do a lot of work. I know how much work we do with Google, with Amazon, with Microsoft, with Oracle, with IBM, to make sure that nobody else has to feel like the OS matters. That that stuff essentially just works. You can extend that out to what we do with VMware, what we do, essentially, on bare metal, what we do on developer workstations, what we do with the Windows crowd, effectively, and Windows Subsystem for Linux, so that developers really can just build on Windows Subsystem for Linux, on Ubuntu, effectively, and ship that container straight to Amazon EKS and have it just work. There are a ton of little lies that have to line up. Containers are all kind of a fiction. The fiction breaks if those pieces don't line up. So, being Ubuntu, effectively, and being able to be consistent in all of those places, is a ton of work to enable it not to matter for anybody upstairs. That's allowing developers to go faster. It's allowing them to be more productive. It's allowing them to be more heroic. And it's allowing the people who do worry about the middleware to have far fewer nights scratching their heads as to why didn't this version of this library tie up to that driver with that kernel. All of those things are still there.
When you drop that container onto Amazon, we've got to connect the GPGPU in the hardware, through the hypervisor, to the guest OS, up into the container. And there's code getting injected all the way up. It's only the fact that we can typically have Ubuntu everywhere there that, essentially, allows those pieces to line up without some spectacular fireworks. It satisfies me when people say they don't have to worry about that. >> It's a victory condition. >> Mark, I want to give you the final word. What should we be looking for, from Canonical, through the rest of the year? >> So, for us, this has been a big year in terms of visibility in the enterprise. In terms of penetration, Ubuntu's everywhere in the Fortune 500, everywhere in the Global 2000. What's changed this year, is the CIO suddenly is seeing Ubuntu on their desk. For two reasons, one is IBM Red Hat. The CIO suddenly wants to know, okay, what does this mean? What else are we running? Where else can we get 24/7 SLAs? Where else can we get long term commitments to Linux and so on? And the fact is Ubuntu's already in the building so that's one, sort of, easy connect. The other thing is, there's really interesting, new workloads that Ubuntu leads in the enterprise. Obviously the container story, the multi-cloud story, edge. It's not just telcos. Every retailer, every logistics company, anybody that has physical distribution is now trying to say, well how can I automate compute in my physical world, effectively. So, edge is super interesting and IoT beyond that. People transforming businesses through taking a Raspberry Pi with Ubuntu and putting a snap on it is really, really cool. Which of those is going to drive the biggest headlines or the scariest headlines, I can't tell you. We're just trying to take care of security, performance and operations across all of them. >> All right, well, Mark Shuttleworth, always a pleasure to catch up, thank you so much for the updates. >> Great to see you. >> All right, for Corey Quinn, I'm Stu Miniman. We'll be back with lots more coverage here from KubeCon + CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching theCUBE. (upbeat music)

Published Date : May 22 2019


Stephan Fabel, Canonical | KubeCon 2018


 

>> Live from Seattle, Washington. It's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome back everyone. We're live here in Seattle for theCUBE's exclusive coverage of KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman. Our next guest is Stephan Fabel, who is the Director of Product Management at Canonical. CUBE alumni, welcome back. Good to see you. >> Thank you. Good to see you too. Thanks for having me. >> You guys are always in the middle of all the action. It's fun to talk to you guys. You have a pulse on the developers, you have a pulse on the ecosystem. You've been deep in it for many, many years. Great value. What's hot here, what's the announcement, what's the hard news? Let's get the hard news out of the way. What's happening? What's happening here at the show for you guys? >> Yeah, we've had a great number of announcements, a great number of threads of work that came to fruition over the last couple of months, and now just last week, where we announced hardware reference architectures with our hardware partners, Dell and SuperMicro. We announced ARM support, ARM64 support, for Kubernetes. We released version 1.13 of our Charmed Distribution of Kubernetes last week. And we also released, very proud to release, MicroK8s: Kubernetes in a single snap for your workstation, in the latest release, 1.13. >> Maybe explain that, 'cause we often talk about scale, but there is big scale, and then we're talking about edge, we're talking about so many of these things. >> That's right. >> That small scale is super important, so- >> It really is, it really is. So, MicroK8s came out of this idea that we want to enable a developer to just quickly stand up a Kubernetes cluster on their workstation. And it really came out of this idea to really enable, for example, AI/ML workloads, locally from development on the workstation all the way to on-prem and into the public cloud. So that's kind of where this whole thing started. And it ended up being quite obvious to us that if we do this in a snap, then we actually can also tie this into appliances and devices at the edge. Now we're looking at interesting new use cases for Kubernetes at the edge as an actual API endpoint. So it's quite nice. >> Stephan talk about ... I want to take a step back. There's kind of dynamics going on in the Kubernetes wave, which by the way is phenomenal, 8000 people here at KubeCon, up from 4000. It's got that hockey stick growth. It's almost like a Moore's Law, if you will, for the events. You guys have been around, so you have a lot of existing big players that have been in the space for a while, doing a lot of work around cloud, multi-cloud, whatever ... That's the new word, but again, you guys have been there. You got like the Ciscos of the world, you guys, big players actively involved, a lot of new entrants coming in. What's your perspective of what's happening here? A lot of people looking at this scratching their head saying: Okay, I get Kubernetes, I get the magic. Kubernetes enables a lot of things. What's the impact to me? What's in it for me as an enterprise or a developer? How do you guys see this marketplace developing? What's really going on here? >> Well I think that the draw to this conference and to the technology and all the different vendors, et cetera, it's ultimately a multi-cloud experience, right?
It is about enabling workload portability and enabling the operator to operate Kubernetes, independently of where that is being deployed. That's actually also the core value proposition of our charmed Kubernetes. The idea that a single operational paradigm allows you to experience, to deploy, lifecycle manage and administer Kubernetes on-prem, as well as any of the public clouds, as well as on other virtual substrates, such as VMware. So ultimately I think the consolidation of application delivery into a single container format, such as Docker and other compatible formats, OCI formats right? That was ultimately a really good thing, 'cause it enabled that portability. Now I think the question is, I know how to deploy my applications in multiple ways, 'cause it's always the same API, right? But how do I actually manage a lot of Kubernetes clusters and a lot of Kubernetes API end points all over the place? >> So break down the hype and reality, because again, a lot of stuff looks good on paper. Love the soundbites of people saying, "Hey, Kubernetes," all this stuff. But people admitting some things that need to be done, work areas. Security is a big concern and people are working on that. Where is the reality? Where does the rubber meet the road when it comes down to, "Okay, I'm an enterprise. What am I buying into with Kubernetes? How do I get there?" We heard Lyft take an approach that's saying, "Look, it solved one problem." Get a beachhead and take the incremental approach. Where's the hype, where's the reality? Separate that for us. >> I think that there is certainly a lot of hype around the technology aspect of Kubernetes. Obviously containerization is invoked. This is how developers choose to engage in application development. We have Microservices architecture. All of those things we're very well aware of and have been around for quite some time and in the conversation. Now looking at container management, container orchestration at scale, it was a natural fit for something like Kubernetes to become quite popular in this space. So from a technology perspective I'm not surprised. I think the rubber meets the road, as always, in two things: In economics and in operations. So if I can roll out more Kubernetes clusters per day, or more containers per day, then my competitor ... I gain a competitive advantage, that the cost per container is ultimately what's going to be the deciding factor here. >> Yeah, Stephan, when I think about developers how do I start with something and then how do I scale it out in the economics of that? I think Canonical has a lot of experience with that to share. What are you seeing ... What's the same, what's different about this ecosystem, CloudNative versus, when we were just talking about Linux or previous ways of infrastructure? >> Well I think that ultimately Kubernetes, in and of itself, is a mechanism to enable developers. It plays one part in the whole software development lifecycle. It accelerates a certain part. Now it's on us, distributors of Kubernetes, to ensure that all the other portions of this whole lifecycle and ecosystem around Kubernetes, where do I deploy it? How do I lifecycle manage it? If there's a security breach like last Monday, what happens to my existing stack and how does that go down? That acceleration is not solved by Kubernetes, it's solved for Kubernetes. >> Your software lives in lots and lots of environments. 
Maybe you can help clarify for people trying to understand how Kubernetes fits, and when you're playing with the public cloud, your Kubernetes versus their Kubernetes. The distinction I think is, there's a lot of nuance there that people may need help with. >> That's true, yeah. So I think that, first of all, we always distance ourselves from the notion of having our Kubernetes. I think we have a distribution of Kubernetes. I think there are conformance tests that are in place, and they're in place for a reason. I think it is the right approach, and we won't install a fourth version of Kubernetes anytime soon. Certainly, that is one of the principles we adhere to. What is different about our distribution of Kubernetes is the operational tooling and the ability to really cookie-cutter out Kubernetes clusters that feel identical, even though they're distributed and spread across multiple different substrates. So I think that is really the fundamental difference of our Kubernetes distribution versus others that are out there on the market. >> The role of developers now, 'cause obviously you're seeing a lot of different personas emerging in this world. I'm just going to lay them out there and I want to get your reaction. The classic application developer, the ones who are sitting there writing code inside a company. It could be a consumer company like Lyft or an enterprise company that needs ... They're rebuilding inside, so it's clear that CIOs or enterprises, CXOs or whatever the title is, they're bringing more software in-house, bringing that competitive advantage under application development. You have the IT pro expert, practitioner kind of role, classic IT, and then you got the opensource community vibe, this show. So you got these three things inter-playing with each other, this show, to me feels a lot like an opensource show, which it is, but it also feels a lot like an IT show. >> Which it also is. >> It also is, and it feels like an app development show, which it also is. So, opportunity, challenge, is this a marketplace condition? What are your thoughts on these kinds of personas? >> Well I think it's really a question of how far you are willing to go in your implementation of devops cultural change, right? If you look at that notion of devops and that movement that has really taken ahold in people's minds and hearts over the last couple of years, we're still far off in a lot of ways and a lot of places, right? Even the places who are saying they're doing devops, they're still quite early, if at all, on that adoption curve. I think bringing operators, developers and IT professionals together in a single show is a great way for the community and for the market to actually engage in a larger devops conversation, without the constraint of the individual enterprise that those teams find themselves in. If you can just talk about how you should do something better and how would that work, and there are other kinds of personas and roles at the same table, it is much better that you have the conversation without the constraint of like a deadline or a milestone, or some outage somewhere. Something is always going on. Being able to just have that conversation around a technology and really say, "Hey, this is going to be the one, the vehicle that we use to solve this problem and further that conversation," I think it's extremely powerful. >> Yeah, and we always talk about who's winning and who's losing. It's what media companies do. We do it on theCUBE, we debate it.
At the end of the day we always like ... There's no magic quadrant for this kind of market, but the scoreboard can be customers. Amazon's got over 5000 reputable customers. I don't know how many CNCF has. It's probably a handful, not 5000. The customer implications are really where this is going. Multi-cloud equals choice. What are your conversations like with customers? What do you see on the customer landscape in terms of appetite, IQ, or progress for devops? We were talking, not everyone's on serverless yet, and that's so obvious that's going to be a big thing. Enterprises are hot right now and they want the tech. Seeing the cloud growth, where's your customer base? What are those conversations like? Where are they in the adoption of CloudNative? >> It's an extremely interesting question actually, because it really depends on whether they started with PaaS or not. If they ever had a PaaS strategy then they're mostly disillusioned. They came out, they thought it was going to solve a huge problem for them and save them a lot of money, and it turns out that developers want more flexibility than any PaaS approach really was able to offer them. So ultimately they're saying, "You know what, let's go back to basics." I'll just give you a Kubernetes API endpoint. You already know how to deal with everything else beyond that, and actually you're not cookie-cuttering out PostgreSQL- >> Kubernetes is a reset to PaaS. >> It really does. It kind of disrupted that whole space, and took a step back. >> All right, Stephan, how about Serverless. So a lot of discussion about Knative here. We've been teasing out where that fits compared to functions from AWS and Azure. What's the canonical take on this? What are you hearing from your customers? >> So Serverless is one of those ... Well it's certainly a hot technology and a technology of interest to our customers, but we have longstanding partnerships with Galactic Fog and others in place around Serverless. I haven't seen real production deployments of that yet, and frankly it's probably going to take a little bit longer before that materializes. I do think that there's a lot of effort right now in containerization. Lots of folks are at that point where they are ready to, and are already running, containerized workloads. I think they're busy now implementing Kubernetes. Once they have done that, I think they'll think a little bit more about Serverless. >> One of the things that interests me about this ecosystem is the rise of Kubernetes, the rise of choice, the rise of a lot of tools, a lot of services, trying to fend off the tsunami wave that's hit the beach out of Amazon. I've always said on theCUBE that that's ... They're going to take as much inland territory on this tsunami unless someone puts up a sea wall. I think this is this community here. The question is, is that ... And I want to get your expert opinion on this, because the behemoths, the big guys, are getting richer. The innovation's coming from them, they have scale. You mentioned that as a key point in the value of Kubernetes, is scale. As one of those players, I would consider in the big size, not like a behemoth like an Amazon, you got a unique position. How can the industry move forward with disruption and innovation, with the big guys dominating? What has to happen? Is it going to change the size of certain TAMs? Is there going to be new service providers emerging?
Something's got to give, either the big guys get richer at the expense of the little guys, or the market expands with new categories. How do you guys look at that? Developers are out there, so is it promising to look to new categories, but your thoughts. >> I think it's ... So a technology perspective certainly would be, there could be a disruptive technology that comes in and just eats their lunch, which I don't believe is going to happen, but I think it might actually be more of a market functionality actually. If it goes down to the economics, and as they start to compete, there will be a limit to the race to the bottom. So if I go in on an economical advantage point as a public cloud, then I can only take that so far. Now, I can still take it a lot further, but there's going to be a limit to that ultimately. So, I would say that all of the public clouds, and we see that increasingly happening, are starting to differentiate. So they're saying, "Come to me for AI/ML." "Come to me for a rich service catalog." "Come to me for workload portability," or something like that, right? And we'll see more differentiation as time goes on. I think that will develop in a little bit of a bubble, to the point where actually other players who are not watching, for example, Chinese clouds, right? Very large, very influential, very rich in services, they can come in and disrupt their market in a totally different way than a technology ever could. >> So key point you mentioned earlier, I want to pivot on that and get to the AI conversation, but scale is a competitive advantage. We've seen that on theCUBE, we see it in the marketplace. Kubernetes by itself is great, but at scale it gets better, it's got knobs and policy. AI is a great example of where a dormant computer science concept that has not yet been unleashed ... Well, it gets unleashed by cloud. Now that's proliferating. AI, what else is out there? How do you see this trend around just large-scale Kubernetes, AI and machine learning coming on around the corner? That's going to be unique, and is new. So you mentioned the Chinese cloud could be a developer here. It's a lever. >> Absolutely, we've been involved with kubeflow since the early days. Early days, it's barely a year, so what early days? It's a year old. >> It's yesterday. >> So a year ago we started working with kubeflow, and we published one of the first tutorials of how to actually get that up and running and started on Ubuntu, and with our distribution of Kubernetes, and it has since been a focal point of our distribution. We do a couple of things with kubeflow. So the first thing, something that we can bring as a unique value proposition, is, because we're the operating system for almost all of GKE, all of AKS, all of EKS, such a strong standing as an operating system, and have strong partnerships with folks like NVIDIA, it was kind of one of the big milestones that we tried to achieve and we've since completed, actually as another announcement since last week, is the full automatic deployment of GPU enablement on Kubernetes clusters, and have that identical experience happen across the public clouds. So, GPGPU enablement on Kubernetes, as one of the key enablers for projects like kubeflow, which gives you machine learning stacks on demand, right?
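From the workload side, that GPU enablement surfaces as an ordinary schedulable resource. As a rough illustration only, assuming the official kubernetes Python client, a cluster where the NVIDIA device plugin already exposes nvidia.com/gpu, and an illustrative CUDA image tag, a pod can request a GPU like this:

```python
from kubernetes import client, config

config.load_kube_config()

# A throwaway pod that asks the scheduler for one GPU and prints what it sees.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:10.0-base",  # illustrative image, not prescriptive
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The point being made here is that the same request behaves the same way whether the GPU nodes sit under GKE, AKS, EKS, or an on-prem cluster, because the enablement work happens below the API.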
And then in parallel, we've been working with kubeflow in the community, very active, formed a steering committee to really get the industry perspective into the needs of kubeflow as a community, and work with everybody else in that community to make sure that kubeflow releases on time, and hopefully soon, a 1.0, which is due this summer, but right now they're focused on 0.4. That's a key area of innovation though, opportunity. >> Oh, absolutely. >> I see Amazon's certainly promoting that. What else is new? I've got one last question for you. What's next for you guys? Get a quick plug in for Canonical. What's coming around the corner, what's up? >> We're definitely happy to continue to work on GPGPU enablement. I think that is one of the key aspects that needs to stay ... That we need to stay on top of. We're looking at Kubernetes across many different use cases now, especially with our IoT, Ubuntu Core operating system, which we'll release shortly, and here actually having new use cases for AI/ML inference. For example, out at the edge looking at drones, robots, self-driving cars, et cetera. We're working with a bunch of different industry partners as well. So increased focus on the devices side of the house can be expected in 2019. >> And that's key, these devices and data, in a way that's really relevant. >> Absolutely. >> All right, Stephan, thanks for coming on theCUBE. I appreciate it, Canonical. Great insight here, bringing in more commentary to the conversation here at KubeCon, CloudNativeCon. Large-scale deployments as a competitive advantage. Kubernetes really does well there: data, machine learning, AI, all a part of the value, and above and below Kubernetes. We're seeing a lot of great advances. CUBE coverage here in Seattle. We'll be back with more after this short break. (digital music)

Published Date : Dec 13 2018


Diane Mueller & Rob Szumski, Red Hat | KubeCon 2018


 

>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back everyone, live here in Seattle for theCUBE's coverage of KubeCon and CloudNativeCon 2018. I'm John Furrier, theCUBE, with Stu Miniman, breaking down all the action. Three days of coverage, we're in day two. A lot of action in Open-source. 8,000 attendees, up from 4,000 in North America, they were in China, they were all over Europe. The community's growing in a massive way. We have two great guests from Red Hat, all making it happen, part of the community. We've got Diane Mueller, who is a theCUBE alumni, director of community development, many times on theCUBE, good to see you, and Rob Szumski, principal product manager, both at Red Hat. Guys, thanks for coming on. Great to see you again. >> Yeah, glad to be here. - Great to be here. >> So the world's changing a lot, and there was some news recently around Red Hat. I can't remember what it was. Recently, something big news, but you guys have been big players in Open-source for years. We always cover it, we always wax on about the origination of it and how the evolution, but the CloudNative piece has gotten so real, and your role in it particularly, we've had many conversations, going maybe back to the OpenStack days of how OpenShift was developing, then the bet on Kubernetes that you made, the Core OS acquisition, those two things I think, to me, at least from my perspective, really catalyzed a lot of things at the right time, right? So, from there, just a lot of things has just been happening really in a good way. Big tail wind for you guys, CloudNative app developers are using Open-source, CI/CD pipeline, and then also policy based up under the hood, completely big shift in moving the game down the field. So big congratulations first of all. But what's new? What's the update? >> The update is Operators. I think the next big thing that we are really focusing on, and that's a game changer for all the second day operations type things, and we'll make Rob talk about it in detail, is the rise of Kubernetes Operators. It's not a scary thing, it's not like terminator day, or anything like that, but it is really the thing that helps us make the service catalogs, the Kubernetes marketplaces, really accessible to all of the databases as a service, and all of the other things, and takes out some of the complexity of delivering applications and databases as a service to anybody running Kubernetes anywhere. >> Take a minute to explain Operator, real quick, and then we can jump into it, because I think this is a fundamental trend that we're seeing. Developer trend is pretty obvious, it's been that word for awhile, CloudScale, ML, machine learning, and all the goodness around application development, but the Operator side of it has been an IT thing. But now you guys have a different, a new approach that's winning. What is it? What is Operator? >> Well, it's Kubernetes that has the approach, and I'll let you-- >> Yeah, so it's basically like, the rise of containers was great, because you could take a single container and package an application and give it to somebody, and know that they can run it successfully. And an Operator does that for a distributed system in the exact same way. So you're using all the Kubernetes primitives, so you're not reinventing service discovery, and secret management, and all that.
And you can give somebody an entire Kafka stack, or a machine learning stack, or whatever it is, these very complex distributed systems, and have them run it without having to be an expert. They need to know Kafka at a high level, but not exactly all the underpinnings of it, because that's all baked into the software. >> And the benefit and the impact to the organization is what? >> And just to clarify, so this was added in, I believe, Kubernetes 1.7, it's something that's in there, it's not something Red Hat specific- >> Yeah, it's like-- >> So you're extending Kubernetes so that you have a custom resource definition, which is an extensible mechanism for saying, hey, I've got a deployment or a stateful set, but what if I want to have a new object called a MongoDB? That knows how to deploy, and manage, and upgrade MongoDB. So that's the extension mechanism that we're using. >> Yeah, so you got to think, there are certain applications that this is going to make just a lot easier, how I manage them, deploy them, things like that. Any specific examples you want to share as to-- >> All the clustered databases. >> There's a lot of the application side in this model that have been very excited about this. >> So it's all the vendors and partners that want a hybrid Cloud story, just targeting Kubernetes, and we're using Kubernetes under the hood, and then everybody wants to run like a stateful database tier, whether that's Mongo and Couchbase, and Cassandra, whatever. And these are all distributed systems.
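To sketch what that extension mechanism looks like from the consuming side: once an operator has registered a custom resource definition, asking for, say, a replicated database becomes a single API object. The group, version, kind, and spec fields below are illustrative stand-ins, not the schema of any real MongoDB operator, and the example assumes the official kubernetes Python client.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical operator-owned resource; field names are illustrative only.
mongodb = {
    "apiVersion": "databases.example.com/v1alpha1",
    "kind": "MongoDB",
    "metadata": {"name": "orders-db"},
    "spec": {"members": 3, "version": "4.0.6", "storage": "10Gi"},
}

# The API server just stores the object; the operator watching this CRD does
# the actual deploy, manage, and upgrade work described above.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="databases.example.com",
    version="v1alpha1",
    namespace="default",
    plural="mongodbs",
    body=mongodb,
)
```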
It's like, everybody needs these things, and everybody wants to figure out that, and when you ask people in the room whose building one, half the room raises their hands. It's just crazy. This thing crept up on us really, maybe not on Core OS, okay, it crept up on me very quickly, and it's very rapid adoption. We have a Kubernetes Operators workshop on Friday, so not only do we have pre-conference days of like OpenShift Cons that are huge now, but now we're starting to book end, CNCF events and put on other things, just because, and that, we had 100 seats that we were hoping we would fill, and it sold out in like minutes once it got in there, and there's a waiting list of like 300 people. It is like one of, aside from Knative, and all the other wonderful hot things too, it is one of the most interesting developments I think right now. >> Thirst for the content. Would it impact? >> Yeah, and you can get all of the documentation is out there now, and people are already building them. We have a list of 50 community Operators. It's just, it's phenomenal how quickly it's growing. >> You know, Diane and Rob, it's funny because you know, we do so many of these theCUBE interviews, and this is our 10th year doing theCUBE coming up, and I remember the conversations going back in the OpenStack days, we would ask questions like, if you had a magic wand, what would you like, hope to have happened, right? And you know, those are parts of the evolution, where it's like, it's aspirational, things are being built. It seems now with Kubernetes, it's almost like, wait a minute, it's actually, this is like the goodness is so compelling, above and below Kubernetes that it's almost like uncomprehendible. You think about, oh this is actually happening. Finally the kinds of steady state kind of operational things that have been a pain in the butt for years-- >> Yeah, the toil, it's gone, for the most part. >> Yeah. >> So Rob, I've been having a lot of just thinking back to, you're employee number two at Core OS, when I first talked to Core OS, it was, we're going to build all of these individual tools, and we're going to Open-source them, and it's going to be good. We watched this just rising ecosystem and the CNCF, and it feels like what's nice and what's different that I see, compared to some previous things, is it's not one product or even a small group of companies. It's, I have this tool kit, and some of them work together, but many of them are independently used. We've talked to your peers earlier about it, etCD. etCD is totally stand alone, doesn't need to be Kubernetes. What have you seen, if you go back to that original vision, would Core OS just been, part of this whole ecosystem, and done it, if this was available, and has this delivering on a promise that your team had hoped to work on? >> Yeah, so we've always filled in where we see gaps, and so something like etCD, the concept is not new, and it comes from Google, and they have a system internally, and as Brandon got up on stage and said, we needed that coordinate, reboot, to grow out, to cluster of machines. It didn't exist so we had to build it. Same thing with how we wanted to manage Linux. There was no distro that even resembled what we were doing. Wanted to do automatic upgrades, people thought that was crazy, so we had to go build it. And so, but we always adopted the best of breed technology, when it existed. In our early bet Kubernetes, we just saw, this is the thing, and went for it. 
I don't even remember what version, but it was months and months before it was zero point oh, or one point oh, so it was, we've been doing it forever. And you just see the right thing, and it's the little nugget that you need, and if you don't see it, then you build it. >> What are you surprised about Rob, in terms of the ecosystem now, you mentioned some goodness is happening, still a lot more to do, visibility around value creation, you're starting to see spots where value can be created in the ecosystem, which is great. Still more work areas, but what's surprising you? What do you see as opportunities, challenges? Your thoughts, because this vision of ease of use and programmability, is happening, right? So there's still more work to do. What's your vision there? What's your thoughts? >> I mean, I think self service is key, so this is like the rise of the Cloud comes from self service for developers, and Kubernetes gives you the right abstraction, where self service for VM's, like OpenStack, which is not quite at the level of what you want. You don't want a VM, you actually wanted a place to deploy an application, you wanted load balancing, you wanted service discovery, you didn't want like a bare Ubuntu VM, and so Kubernetes raises you up to where you're productive, and then it's about building stuff on top. But what's interesting, in the space is, we're still kind of competing on Kubernetes installers, and stuff like that, so we're not even really into like the phase where people are being super productive on the platform, other than these leading companies. So I think we'll democratize that, and we'll have a whole new landscape. >> And so 2019 you see as what being a key theme for Kubernetes? >> I think it'll be Core stuff built on top, like all the serverless frameworks, a bunch of container natives storage solutions, solving some of these problems that folks are reaching out to external machine learning, but bringing that onto the cluster, GPU support, that type of stuff. It's all about the workloads. >> And tradition end users, you have a huge install base, with Red Hat, well documented, as the end users start coming in and looking at CloudNative, and doing a reimagine of their environment, whether it's IT span, IT investments, to have a run their coding and the deployments. It's going to change. 2019's going to have an impact on what I call mainstream enterprise, for lack of a better description. What's the impact of those guys, 'cause now, they now have head room, they can do more, what's the main stream enterprise look like right now with the impact of Kubernetes? >> I think they're going to start deploying applications and get like lower the time to business value, much, much lower. And I was just talking to a customer, and they ordered bare metal machines like a year ago, and they're still not racked and in the data center. And so people are still getting over that type of stuff, but once you have like a shared Kubernetes layer, you can onboard teams like crazy. I mean, name spaces are free, quote, unquote, and you can get 35 engineering teams on a Kubernetes cluster super easy. >> So they can ramp up in development teams basically, as they bring value in-house, versus outsourcing everything. They start getting development teams, this is where the action is. >> I think you're also going to see the rise of those end users contributing back things, to the Kubernetes community and as Lyft, and Uber, and everybody are great examples of that. 
Uber with Jaeger, and Lyft is, we were just in the Operators thing, and they raised their hand that they are about to Open-source it, a few Operators that they're building and stuff, and you're just going to see people that you didn't normally see. Often these large foundation driven things are vendor driven, but I think what you see here, is the end user community is now embracing the Open-source, is getting the legal teams there, allowing them to share their things, because one, they get more people to maintain them, and more people working on them, but it's really I think the rise of the end user we'll see, as they start participating more and more in here. And that's the promise of Open-source. >> And that's where CNCF really made it's bones. It wasn't really vendor led per se, it was really end users, the guys building out their stuff for the first time. You see Lyft for instance, great example, you guys did a Core OS, this is like the new generational model. Final question before we break. I want to get this out there. Get a plug in for Red Hat. What are you guys, what's the focus for the show? What's the news? What's the big story for Red Hat here at KubeCon this year? >> I think it's Operators, that's what we're here talking about. It's a really big push to once again get smarter workloads onto the cluster. We've got a really great hybrid story, we've got a really great over the air upgrade story that we're bringing from some of the Core OS technology, and then the next thing is, once it's easy to run 35 clusters, we need a bunch of workloads to put on there. And so we want to save folks from the toil of running all those workloads as well, just like we did at the cluster level. >> Awesome. >> Well put. I couldn't add more. One of the things that Core OS did, you hit the nail on the head earlier, is when there was something missing, they helped us build it, and with the Operator SDK, and the Lifecycle Management, and the metering, and whatever else the tooling is, they have really been inspirational inside of Red Hat. And so they filled a number of gaps, and it's just been all Operators all the time right now. >> It's great when a plan comes together. You guys got a great tail wind. Congratulations on all the success, and it's just the beginning of the wave. It's theCUBE, covering the wave of innovation here at KubeCon CloudNativeCon 2018, we'll be back with more live coverage. Day two of Three days of Kube Coverage. We'll be right back. (upbeat music)

Published Date : Dec 13 2018


Kelsey Hightower, Google Cloud Platform | KubeCon 2018


 

>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hello everyone, welcome back to the live Cube coverage here, three days at Seattle's KubeCon and CloudNativeCon. It's a conference put on by the Linux Foundation. Cube's been there from the beginning, breaking down all the action. 8,000 people, doubling attendance from the last one, now global, on a global scale, seeing great traction in China and other areas around the world. It's about the cloud going global. I'm John Furrier with Stu Miniman, our next guest, Kelsey Hightower with Google. Former program co-chair, now out in the wild on his own, super dope, playing with all kinds of new technology, it's great to see you, thanks for coming on. >> Props, you said the word dope, by the way, so congratulations there. I'm an attendee, I still have a keynote on Thursday but I do get to enjoy the floor like everyone else. >> So what's new, so you're now, again, there's a lot of pressure now every year. It's more and more people here, so it's a lot of pressure to kind of get all the action packed in, but the growth has been pretty phenomenal. You've been looking at serverless, we saw some tweets, again you mention it's super dope, serverless is. You've got serverless, you've got a lot of stuff going on within the CNCF, you've got Kubernetes at the core. A lot of people like calling it the Kubernetes stack or the CNCF stack. Is it really a stack, is it really more of an operating model, because there's stacks involved, but how do you describe it, because this is a point of clarification. I mean, Kubernetes isn't necessarily a stack. Is it, how do people use it, what's the current state? >> I think when people say stack, you think about the LAMP stack, right? Linux, Apache, MySQL, it's a way of pre-packaging these ideas. This is something that worked for me, it may work for you, you say that enough times and then you say things like the Kubernetes stack. It's a quick shorthand for Kubernetes and building on top of it. I think from the engineering perspective, when you look at Kubernetes and all the gaps that the CNCF is trying to fill these days, it's all this stuff you're probably building yourself, someone else is building it, and now we kind of have an outlet now. If you're working on a service mesh like Lyft was, you have an outlet to give it to the rest of the world, open governance, and get some contributors. I think what we're seeing now is that hey, CNCF is kind of the place people go to figure out, is someone building the thing that I've already started building, and can I stop and just download that and go off? >> It's been a very successful open source community, obviously, it's been end user leverage, it's been great and it's been open source, community led. Not so much vendor led, but vendors have been participating, so it's been great, but now as Kubernetes is going mainstream, the rise of Kubernetes is undeniable. No one can really deny that. Other end users are now coming in either to participate or to consume Kubernetes. How is that going in your mind? What's going on in the landscape, because people want multicloud, they want hybrid, they want choice. How are end users coming into the ecosystem to consume Kubernetes and the variety of goodness around it, and what's going on there? Can you give some color around that?
>> I think regardless of the industry buzzwords like multicloud and hybrid and all that, Kubernetes is good on its own. It solves a lot of problems that your previous tools didn't solve, so people are gravitating towards it regardless, in that direction. When you start to talk about portability, yes, it's nice to have two different environments and have the same tools work in a similar way between those environments, that's working well. The people that started three years ago that were doing it themselves, they're finding value in treating that as a service. We saw this happen to DNS, e-mail, so people are saying maybe the value isn't running it myself, so now you kind of see the vendor ecosystem understand what the value is. For a lot of the cloud providers, it's running Kubernetes, patching it, updating it, upgrading it, so that you can go focus on the other parts on top. That's where I think we are as an industry, and then there's gaps to fill, so that's where you see things like Knative, people building CI-CD tools on top, that's just where the new opportunities are, so I think we've kind of matured. People kind of know what Kubernetes is, they know where their value line is for Kubernetes, now they're looking for their partners or vendors or community to just layer the new stuff on top. >> Kelsey, you bring up a great point there, because understanding that line of what I should do myself and what I have to do, versus what I can buy, consume as a service, is really tough for people, you know. I always say, ask IT departments, what do you really suck at? Because there's somebody else that probably does it better. A year ago, when I talked to users at this show, they were really downloading stuff, putting their things together, and when you asked them why, it was well, the Azure stuff hasn't matured. It just released, Amazon, I'm not sure where they're going with it. It feels like a lot has changed in the last year. You did Amazon the hard way a little over a year ago. What has changed over the last year, you know? >> We saw this with Linux, right? >> Are we ready for that, yeah. >> In Linux everyone used to build their own Linux distro, you took pride in it, using Gentoo and Slackware, and then you're like, I'm tired of that, so you go get Red Hat or Ubuntu and call it good, and then you go focus on the other things. Naturally, Kubernetes is an early project, has lots of gaps, you can fill those gaps by gluing together open source yourself, but now most of the managed services fill in the gaps by default. You click a button in GKE and a thing comes up, it's secure, has most of the pieces you need, it's integrated, you're like alright, I'm done with that part. >> The other thing, we talked a year ago. There's lots of companies here that are involved in Kubernetes. We've got over 70 that are compliant, and then you've got the service providers. From what I hear, it's people aren't trying to differentiate with Kubernetes, and that's probably a good thing. It's something that's going to be baked into the platform, it's something you're going to consume with the other services that I offer, what do you say? >> If you make it different, then it won't work. >> Right. >> It'll be a different thing, so if you make it too different then you lose most of the benefits that we're all talking about here. The ability to learn a set of abstractions once, kind of like we did on Linux; if you start changing the system calls on Linux, then it's not Linux anymore, it's a different thing.
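As a small, hedged illustration of the "value line" point above: the managed provider owns patching and upgrading of the control plane and nodes, and from the consumer side you mostly just observe versions. Nothing GKE-specific is assumed here; this uses the official Kubernetes Python client against whatever kubeconfig is present.

```python
# Sketch: inspect what the managed service keeps patched for you -- node OS,
# container runtime, and kubelet versions -- from the user side of the value line.
from kubernetes import client, config

config.load_kube_config()          # works against GKE, AKS, EKS, or a local cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    print(node.metadata.name, info.os_image,
          info.container_runtime_version, info.kubelet_version)
```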
>> Just to clarify though, if I'm running in one cloud that has their Kubernetes and I want to go to another, is it similar enough? Can I make that move? Do I need a vendor-independent version? >> So I think up to this value line, I've run this container, ship the log somewhere, give me a way to secure access, that's pretty standard. Give me a load balancer. What isn't standard is how do I do CI-CD on top of that, that's not standard. There's different opinions on how to do that. If I'm in Google Cloud, we have IAM one way, Azure has IAM a different way, and same thing for Amazon. There's things around networking, security, that are going to be different based on the environment you're in. Same for on-prem, and that's where you start to look for help. If I go to Google, I'm going to use GKE maybe instead of running it myself on just a bunch of VMs, so that's where you kind of see that little divide. >> Is that going to be custom work, that's a great point, security for instance, we'll just pull that out there. Is that going to automate and be seamless, or is that going to be a work area that's always going to have to be differentiated or coded? >> So for example, we had the big vulnerability recently in the Kubernetes world, right? >> It's a big CVE, it affected everyone running Kubernetes. That's a thing, as a vendor, for us GKE people, we upgraded automatically for them and said hey, there's a CVE, it's going to be really scary when you read about it but hey, you're patched. We've taken care of you, so I think people will still look for that relationship. Will it always be custom? At the app level, that is a different story. When you run your container and you want to access the things in your environment, so if you're in Google Cloud you may want to talk to Spanner, you're going to need an IAM set of credentials. That's a little out of scope of Kubernetes, so that's going to be integration work that the provider will do. >> So the holy trinity of the computing industry has always been storage, network, and compute, and it changes certainly with cloud and all the goodness that comes out from serverless and whatnot, so containers is interesting. We always love containers, but I've heard conversations recently where it's like hey, I want to treat containers not as a first class citizen because it doesn't meet my security boundary. I'm going to put a VM around that and run that under the covers with say, Lambda. Is that feasible, is that an option? I've heard talk about it, is anyone doing that? Is that an alternative, is this going to introduce new elements? >> Let's put it right, in Kubernetes by default we chose to build on top of Docker. Industry momentum, great developer workflow, but you're right, it made a security trade off. We know VMs are a much tighter security boundary that people are comfortable with. In that world, at that time, they were too slow for what we needed to happen. Thanks to Intel and others who pulled the thread of let's make VMs faster. Recently you heard the announcement of Firecracker, right, it's partly a derivative of the Chrome OS VM, and that thing is optimized for these kinds of workloads, containers and serverless workloads. Now we go from 10, 20 seconds to a hundred milliseconds. Now it makes sense to probably have this become an underlying thing. Now that we have the speed, maybe people say hey, we can maybe take the security without sacrificing the performance. >> That's the trade off. >> Pulled on the thread, you mentioned Firecracker.
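Before continuing, a hedged sketch of the app-level integration Kelsey mentions above: a workload talking to Spanner with IAM-supplied credentials. The instance, database, and query are invented placeholders, and it assumes the google-cloud-spanner library plus Application Default Credentials (for example via GKE Workload Identity) are already set up.

```python
# Sketch: a containerized workload reading from Cloud Spanner. The IAM piece is
# handled outside the code -- Application Default Credentials supply the identity.
# Instance, database, and SQL below are invented placeholders.
from google.cloud import spanner

client = spanner.Client()                       # picks up ADC from the environment
instance = client.instance("demo-instance")
database = instance.database("demo-db")

with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT 1"):
        print(row)
```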
There's still this tension between what's happening in Kubernetes and serverless. We saw Knative is a hot topic. It's probably natural that there's some tension there because it's like oh wait, why do you need to learn any of this stuff, because if serverless will just make it as a service and make it easy, you don't need to learn all that container stuff and everything, what do you say? >> If you're a Kubernetes user, if you really think about the very broad definition of serverless, meaning I'm not managing the database, I'm using a managed database, serverless database. Storage, I'm using S3 or Google Cloud storage, serverless. Your load balancer, also serverless. So most people in the Kubernetes ecosystem, networking, serverless, storage, serverless, their database, serverless. The only thing that you can say isn't serverless is this compute component, everything else is. Now people are looking at serverless as this spectrum. How serverless are you? If you're on-prem and you buy a server and you rack it and install Kubernetes, you're less serverless, you're probably not serverless at all, no matter what you do. Now, if you put a lot of work in, you can probably put a serverless interface on top. This is what Knative is designed to do for people. Maybe you have an organization that supports multiple businesses inside of your org. They may not know anything about Kubernetes. You just tell them hey, put your code here, it will run, oh, that feels serverless. You can provide a serverless experience. The delta then becomes what can we do between a container and a function, so the foundation of my keynote is exactly that. What does it mean to take a container and put it into Lambda? What do you have to change? In my presentation, I don't even rewrite the code. There's a small shim between the two worlds because you're already using managed services around it. We're not talking about throwing away Kubernetes and then starting over our entire architecture. We're swapping out the compute layer. One is a subset of the other. Lambda is about events and functions, Kubernetes is about containers, and run it however you want. You want to run it when an event comes in, that's Knative. You want to run it as a batch job, run it as a job. You want to run it as a long running service, run it as a deployment, so that's all we're really talking about here. When we break it down, you're just talking about compute. >> You talk a lot about automation in the CI-CD areas, that differentiation where the value is. In a world where automation goes faster, what does Kubernetes look like when it becomes automated away? Because I don't want to manage anything, why even have managed Kubernetes? It should just be automatic, you mentioned the patching. In an automated world, is Kubernetes just running under the covers, how does Kubernetes look down the road in your mind, in terms of when automation comes in? >> I've been in this game maybe over 15 years and one thing holds true: most developers want to focus on the business logic. We hire them because that's their skillset. When they check in code, it would be really nice if you can take it from there and get it where it needs to be. That's been the holy grail. We see it in mobile, you build an app, you put it on the App Store, Apple gets it to every device on the planet, done. Now it's the server side's turn to do this.
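A hedged sketch of the "small shim between the two worlds" idea above: the same business logic exposed once as an HTTP container (deployable to Kubernetes or Knative) and once as a Lambda-style handler. The Flask dependency and every name here are illustrative assumptions, not the code from the keynote.

```python
# Sketch: one piece of business logic wrapped for two compute targets.
import json
from flask import Flask, request

def handle(event: dict) -> dict:
    # The actual business logic -- identical for both targets.
    return {"greeting": f"hello {event.get('name', 'world')}"}

# Target 1: a plain HTTP container, deployable to Kubernetes or Knative.
app = Flask(__name__)

@app.route("/", methods=["POST"])
def http_entrypoint():
    return json.dumps(handle(request.get_json(force=True)))

# Target 2: an AWS Lambda-style entry point -- the "small shim".
def lambda_handler(event, context):
    return handle(event)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```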
Whether you're doing serverless functions, Kubernetes, VMware, or Linux, if you have CI-CD in front of any of that, the developer can still have the same experience. I check in code and you're picking a different deploy target. If you did that five years ago, and you understood it, and you were using, let's say, maybe Mesos or just VMs, you bring in Kubernetes, you don't even have to change this part of the equation. This is why I tell most people, just focus on this endgame. My keynote last year was about this is the endgame, because this is your culture, this is your change management process, this is your discipline, and this is just a target where that compute goes. >> Alright, we've got two minutes left. I want to get your thoughts and share with the audience who's not here, a big waiting list, I know there's some lobby con going on all around Seattle, people flew in. Great place too to actually have some good lobby con meetings around the lobby area. So what's happening here, in your mind's eye, now you're not in the throes of all the events, you're kind of in the wild here with us, everyone else. What's the top story, what's going on, what's the vibe, what are you extracting out of all this activity as a top story, top level stories here? >> I think everyone's finding their place. If you're a security vendor, you kind of know where your line is, right? I've got this Twistlock shirt on. They want to play in a world where they need to integrate closer to the developer workflow, not just on the infrastructure side. If you're selling load balancers, service mesh is a thing, where do you fit in? The lines are getting a lot clearer. Kubernetes is starting to say maybe we should stop here. Maybe service meshes should take it from here, and that's where Istio comes in. Traditional vendors can now play in this well-defined space. On the storage side, what are you integrating? Now we have the storage interface, like the container storage interface. Now, if you're a NetApp, you know where you fit into the puzzle. You don't need to have your own Kubernetes distro. Two years ago, everyone was trying to come out with their own Kubernetes distro so they could actually have an anchor. Now you're like, ah, now I know where to play, and now we also know what's missing. After years of doing this, people look back and say there's a lot of stuff missing. It's OK now to go create something new. >> That's clear visibility into the landscape. What about the impact to end users? What is notable in your mind in terms of highlights, impact to end user organizations really going through this quote digital transformation, which is very cloud-based of course, but they're certainly changing and impacting, what's your thoughts on the end user? >> We're using some of the same words now. Forget the technology piece, now we can all start to talk about the same things, so when we say container, we kind of now are talking about the same thing. When we start to talk about sidecars, whether that's a service mesh, Envoy sidecar, or something that adapts your existing code to the new world, now that we're using the same language, we can actually talk. Traditional enterprise can talk to the startups and have a meaningful conversation. >> That's awesome, any other observations here in terms of the size of the show? Got a lot more activity, feels a little bit like re:Invent, I'm bumping into people, swimming through the crowds, the swag's hot.
>> It's 8,000 people here and it feels like there's more users that know nothing about Kubernetes so even though we're about five years in, it reminds me of when we were just getting started. >> Lot more work to do but great, congratulations on all the work you've done Kelsey. Really appreciate you taking the time every year to come on theCUBE. We love having you on, great commentary, great keynotes, very entertaining. Thanks for coming on, appreciate it. >> Awesome, thank you. >> I'm John Furrier, Cube here with Kelsey Hightower telling us about all the breakdown of KubeCon, CloudNativeCon, the beginning of the cloud tsunami is happening, certainly changing businesses, changing open source, it's changing, it's on a global scale. We're here with coverage for three days. We'll be right back with more after this short break.

Published Date : Dec 11 2018


Arturo Suarez, Canonical & Eric Sarault, Kontron | OpenStack Summit 2018


 

>> Narrator: Live from Vancouver, Canada it's theCUBE covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back to theCUBE. I'm Stu Miniman here with my cohost John Troyer. And we're at the OpenStack Summit 2018, here in Vancouver. One of the key topics we've been discussing, actually for a few years but under new branding, and it's really matured a bit, is Edge Computing. So, we're really happy to welcome to the program two first time guests. We have Arturo Suarez, who's a program director with Canonical. We also have first time Kontron employee on, Eric Sarault, who's a product manager of software and services, with, I believe, Montreal as the headquarters. >> That's correct. >> Stu: So, thank you for allowing all of us to come up to Canada and have some fun. >> It's a pleasure. >> But we were all working during Victoria Day, right? >> Yeah. >> All right. Arturo, we know Canonical. So, we're going to talk about where you fit in. But, Eric, let's start with Kontron. I've got a little bit of background with them. I worked in really kind of the TelCo space back in the 90s. But for people that don't know Kontron, maybe give us some background. So, basically, the entity here today is representing the communications business unit. So, what we do on that front is mostly telcos, service providers. We also have a strong customer base in the media vertical. But right now the OpenStack, what we're focusing on, is really on the Edge, mixed messages as well. So, it's really about delivering the true story about Edge, because everybody has their own version of Edge. Everybody has their own little precisions about it. But down the road, it's making sure that we align everyone towards the same messaging so that we deliver a unified solution so that everybody understands what it is. >> Yeah. So, my filter on this has been Edge depends who you are. If you're a telecommunications vendor, when we've talked about the Cohen, it's the Edge of where they sit. If I'm an enterprise, the Edge is more like the IOT devices, and sometimes there's an aggregation box in between. So, there's somewhere between two and four Edges out there. It's like cloud. We spent a bunch of years discussing it and then we just put the term to the side and got on with things. When you're talking Edge at Kontron, what does that mean? You actually have devices. >> We do. >> So, who's your customer? What does the Edge look like? >> So, we do have customers on that front. Right now we're working with some big names out there. Basically delivering solutions for 12-inch-depth racks at the bottom of radio towers or near cell sites. And ultimately working our way up closer to what would look like, what I like to call a "closet" data center, if you will. Where we also have a platform with multiple systems that's able to be hosted in the environment. So, that's really about not only having one piece of the equation but really being able to get closer to the data center. >> All right. And Arturo, help bring us in because we know Canonical's a software company. What's the Edge mean to your customers and where does Canonical fit? >> So, Canonical, we take pride in being a ubiquitous platform, right? So, it doesn't matter where the Edge, or what the Edge is, right? There is an Ubuntu platform. There is an Ubuntu operating system for every single domain of compute, going from the very end of the Edge.
That device that sits on your house or that drone that is flying around. And you need to do some application business on it, or to host an application business with it, all the way to the core, right? Our OpenStack story starts at the core. But it's interesting as it goes farther from that core, how the density, it's an important factor in how you do things, so. We are able, with Kontron, to provide an operating system and tooling to tackle several of those compute domains that are part of the cloud where real estate is really expensive, right. >> Eric, so you all are a systems developer? Is that a fair two-word phrase? It's hardware and software? >> Basically, we do our original design. >> Okay. I know where I am. >> Manufacturing. >> So, I'm two steps away from hardware. So, I think of those as all systems. But you build things? >> Eric: Correct. >> And you work with software. I think for folks that have been a little more abstract, you tend to think, "Well, in those towers, there must be some bespoke chips and some other stuff but nothing very sophisticated." At this point we're running, or that your customers are running, full OpenStack installations on your system hardware. >> Eric: Correct. >> That's in there and it's rugged and it's upgradable. Can you talk a little bit about the business impact of that sort of thing, as you go out and work with your customers? >> Certainly. So, one of the challenges that we saw there was really that, from a hardware perspective, people didn't really think about making sure that, once the box is shipped, how do you get the software on it, right. Typically, it's a push and forget approach. And this is where we saw a big gap, that it doesn't make any sense for folks to figure that out on their own. A lot of those people out there are actually application developers. They don't have the networking background. They don't have a hardware engineering background. And the last thing they want to be doing is spending weeks, if not months, figuring out how to deploy OpenStack, or Kubernetes, or other solutions out there. So, that's where we leverage Canonical's tools, including MAAS and Juju, to really deploy that easily, at scale, and automated. Along with packaging some documentation, some proper steps on how to deploy the environment quickly in a few hours instead of just sitting there scratching your head and trying to figure it out, right. Because that's the last thing they want. The minute they have the box in their hand they already want to consume the resources and get up and running, so. That's really the mission we want to tackle that you're not going to see from most hardware vendors out there. >> Yeah, it's interesting. We often talk about scale, and in our terms, it's a very different scale when you talk about how fast it's deployed. We're not talking about tens or hundreds of thousands of cores for one environment. It's way more distributed. >> Yeah. It's a different type of scale. It's still a scale but the building block is different, right. So, there are orders of magnitude more points of presence than there are data centers, right. At that scale, and the farther you go again from the core, the larger the scale it is. But the building block is different. And the ability to pay, the price of the compute, is different. It goes much higher, right? So, going back again, that ability to condense an OpenStack, the ability to deliver a Kubernetes within that little space, is pretty unique, right?
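A rough, hedged sketch of the Juju-driven automation Eric mentions above (MAAS and Juju). This uses the python-libjuju client purely for illustration; the bundle name is an assumption, and in practice this is usually driven from the juju CLI against an already-bootstrapped controller.

```python
# Rough sketch: driving Juju from Python (python-libjuju) to stand up a small
# Kubernetes bundle, in the spirit of the MAAS/Juju tooling mentioned above.
# The "kubernetes-core" bundle name is an assumption; a bootstrapped Juju
# controller and selected model are required beforehand.
import asyncio
from juju.model import Model

async def deploy():
    model = Model()
    await model.connect()            # connects to the currently selected Juju model
    await model.deploy("kubernetes-core")
    await model.disconnect()

asyncio.run(deploy())
```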
And while we're still figuring out what technology goes on the Edge, we still need to account for, as Eric said, the economics of that Edge; they play a big, big part of that game, right. So, there is a scale, it's in the thousands of points of presence, in the hundreds of thousands of points of presence, or different buildings where you can put an Edge cloud, and the use-cases are still being defined, but it's scaled on a different building block. >> Well, Arturo, just to clarify for myself, sometimes when you're looking at an OpenStack component diagram, there's a lot of components and I don't know how many nodes I'm going to have to run. And they're all talking to each other. But at the Edge, even though there's powerful hardware there, there's an overhead consideration, right? >> Yes. Absolutely, and that's going to be there. And OpenStack might evolve or might not evolve. But this is something we are tackling today, right. That's why I love the fact that Kontron has also a Kubernetes cluster, right. That multi-technology, the real multi-cloud, is a multi-technology approach to the Edge, right. There are all the things that we can put in the Edge, and the exact set is not defined. We need to know exactly how much room you have, how you make the most out of each of your cores or each of the gigs of RAM out there. So, OpenStack obviously is heavy for some parts of the Edge. Kontron, with our help, has pushed that to the minimum viable OpenStack that allows you not to roll a truck when you need to do something on that location, right. That is as effective as it can get today. >> Eric, can you help put this in a framework of cloud, in general. When I think of Edge, a lot of the data's going to need to go back to data centers or a public cloud, multiple public cloud providers. How do your customers deal with that? Are you using Kubernetes to help them span between public cloud and the Edge? >> So, it's a mix of both. Right now we're doing some work to see how you can utilize idle processing time, along with Kubernetes scheduling and orchestration capabilities. But also OpenStack really caters to the more traditional SDN and NFV use-cases out there, to run your traditional applications. So, that's two things that we get out of the platform. But it's also understanding how much data you want to go back to the data center and making sure that most of the processing is as close as possible. That goes along with 5G, of course. You literally don't have the time to go back to the data centers. So, it's really about putting those capabilities, whether it's FPGAs, GPUs, on those platforms, and really enabling that as close as possible to the Edge, or the end user, should I say. >> Eric, I know you're in the carrier space. Can you talk a little, maybe Kontron in general? And maybe how you, in your career, as you go the next decades looking at embeddable technology everywhere, what do you all see as the vision of where we're headed? >> Oh, wow. That's a hell of a question. >> That's a big question to throw on you. >> I think it's very interesting to see where things are going. There's a lot of consolidation. And you have all these open source projects that need to work together. The fact that OpenStack is embracing the reality that Kubernetes is going to be there to drive workloads. And they're not stepping on each other's throats, not even near.
So, this is where the collaboration between what we're seeing from the OpenStack Foundation, along with the projects from the Linux Foundation, is really, really interesting to see moving forward. Other projects are upcoming, like ONAP and Akraino; it's going to be very interesting for the next 24 months, to see what it's going to shape into. >> One of the neat things, you mentioned 5G and we've been watching what's available, how that roll-out's going to go into the various pieces. Is this ecosystem ready for that? Going to take advantage of it? And how soon until it is real for customers? >> The hardware is ready. That's for sure. It's really going to be about making sure, if you have a split environment that's based on X86, or a split with ARM, it's going to be about making sure that these environments can interact with each other. The service chaining is probably the most complicated aspect there is to what people want to be doing there. And there's a bit of rope-pulling from one side to another still, but it's finally starting to come into play. So, I think that the fact that Akraino, which is going to bring a version of OpenStack within the Linux Foundation, this is going to be really unlocking the capabilities that are out there to deploy the solution. And tying along with that, with hardware that has a single purpose, that's able to cater to all the use-cases, and not just think about one vertical. "And then this box does this and this other box does another use-case." I think that's the pitfall that a lot of vendors have fallen into. Instead of just, "Okay, for a second think outside the box. How many applications could you fit in this footprint?" And there are probably going to be big data and multiple use-cases that are nowhere near each other. So, don't try to do this very specific platform and just make sure that you're able to cater to pretty much everyone. It's probably going to do the job, right, so. >> There's over 40 sessions on Edge Computing here. Why don't we just give both of you the opportunity to give us some closing remarks on the importance of Edge, what you're seeing here at the show, and final takeaways. >> From our side, from the Canonical side again, the Edge is whatever is not core. That really has different domains of compute. There is an Ubuntu for each one of those domains. As Eric mentioned, this is important because you have a common platform, not only from the hardware perspective, but also the orchestrating technologies and their needs, which are evolving fast. And we have the ability, because of how we are built, to accommodate or to build on all of those technologies. And be able to allow developers to choose what they want to do or how they want to do it. Try and try again, in different types of technologies, and finally get to that interesting thing, right. There is that application layer that still needs to be developed to make the best use out of the existing technologies. So, it's going to be interesting to see how applications and the technologies evolve together. And we are in a great position as a common platform to all of those compute domains on all of those technologies from the economic perspective. >> On our side, what we see, it's really about making sure it's a density play. At the Edge, and the closer you go to these more wild environments, it's not data centers with 30 kilowatts per rack. You don't have the luxury of putting in what I like to call white boxes, 36 inch servers or open-compute systems.
So, we really want to make sure that we're able to cater to that. We do have the products for it along with the technologies that Canonical are bringing in on that front. We're able to easily roll-out multiple types of application for those different use-cases. And, ultimately, it's all going to be about density, power efficiency, and making sure that your time to production with the environment is as short as possible. Because the minute they'll want access to that platform, you need to be ready to roll it out. Otherwise, you're going to be lagging behind. >> Eric and Arturo, thanks so much for coming on the program and giving us all the updates on Edge Computing here. For John Troyer, I'm Stu Miniman. Back with lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (exciting music)

Published Date : May 22 2018


Stephan Fabel, Canonical | OpenStack Summit 2018


 

(upbeat music) >> Announcer: Live from Vancouver, Canada. It's The Cube covering Openstack Summit, North America, 2018. Brought to you by Red Hat, The Open Stack Foundation, and its ecosystem partners. >> Welcome back to The Cube's coverage of Openstack Summit 2018 in Vancouver. I'm Stu Miniman with cohost of the week, John Troyer. Happy to welcome back to the program Stephan Fabel, who is the Director of Ubuntu product and development at Canonical. Great to see you. >> Yeah, great to be here, thank you for having me. Alright, so, boy, there's so much going on at this show. We've been talking about doing more things and in more places, is the theme that the Open Stack Foundation put into place, and we had a great conversation with Mark Shuttleworth, and going to dig in a little bit deeper in some of the areas with you. >> Stephan: Okay, absolutely. >> So we have the Cube, and we're going to go into all of the Kubernetes, Kubeflow, and all those other things that we'll mispronounce as we go. >> Stephan: Yes, yes, absolutely. >> What's your impression of the show first of all? >> Well I think that it's really, you know, there's a consolidation going on, right? I mean, we really have the people who are serious about open infrastructure here, serious about OpenStack. They're serious about Kubernetes. They want to implement, and they want to implement at a speed that fits the agility of their business. They want to really move quick with the upstream release. I think the time for enterprise hardening delays and inertia there is over. I think people are really looking at the core of OpenStack, that's mature, it's stable, it's time for us to kind of move, get going, get success early, get it soon, then grow. I think most of the enterprises, most of the customers we talk to, adopt that notion. >> One of the things that sometimes helps is help us lay out the stack a little bit here, because we actually commented that some of the base infrastructure pieces we're not talking as much about because they're kind of mature, but OpenStack is very much at the infrastructure level, your compute, storage, and network you need to understand. But then when we start doing things like Kubernetes as well, I can either do one or the other, or on top of, and things like that, so give us your view as to where you'd put things, what Canonical's seeing, and what customers-- how you lay out that stack? >> I think you're right, I think there's a little bit of path-finding here that needs to be done on the Kubernetes side, but ultimately, I think it's going to really converge around OpenStack being operator-centric and operator-friendly, working and operating the infrastructure, scaling that out in a meaningful manner, providing multitenancy to all the different departments. Having Kubernetes be developer-centric and really help to on-board and accelerate the workload adoption of the next gen initiatives, right? So, what we see is absolutely a use case for Kubernetes and OpenStack to work perfectly well together, be an extension of each other, possibly also sit next to each other without being too encumbering there. But I think that ultimately having something like Kubernetes' container-based developer APIs providing that orchestration layer is the next thing, and they run just perfectly fine on Canonical OpenStack. >> Yeah, there certainly has been a lot of talk about that here at the show. Let's see, let's go a level above that, things we run on Kubernetes, I wanted to talk a little bit about ML and AI and Kubeflow.
It seems like we're, I'd almost say that we're, this is like, if we were a movie, we're in a sequel like AI-5; this time, it's real. I really do see real enterprise applications incorporating these technologies into the workflow for what otherwise might be kind of boring, you know, line of business, can you talk a little bit about where we are in this evolution? >> You mean, John, only since we've been talking about it since the mid-1800s, so yeah. >> I was just about to point that out, I mean, AI's not new, right? We've seen it for about 60 years. It's been around for quite some time. I think that there is an unprecedented amount of sponsorship of new startups in this area, in this space, and there's a reason why this is heating up. I think the reason why ultimately it's there is because we're talking about a scale that's unprecedented, right? We thought the biggest problem we had with devices was going to be the IP addresses running out, and it turns out, that's not true at all, right? At a certain scale, and at a certain distributed nature of your rollout, you're going to have to deal with just such complexity and interaction between the underlying, the under-cloud, the over-cloud, the infrastructure, the developers. How do I roll this out? If I spin up 1000 VMs over here, why am I experiencing dropped calls over there? It's those types of things that need to be self-correlated. They need to be identified, they need to be worked out, so there's a whole operator angle just to be able to cope with that whole scenario. I think there's projects that are out there that are trying to ultimately address that, for example, Acumos (mumbles) Then, there is, of course, the new applications, right? Smart cities, connected cars, all those car manufacturers who are, right now, faced with the problem: how do I deal with mobile, distributed inference rollout on the edge while still capturing the data continually, train my model, update, then again, distribute out to the edge to get a better experience. How do I catch up to some of the market leaders here that are out there? As the established car manufacturers come and catch up, put more and more miles autonomously on the asphalt, we're going to basically have to deal with a whole lot more productization of machine-learning applications that just have to be managed at scale. And so we believe, and we're in certainly good company in that belief, that for managing large applications at scale, containers and Kubernetes are a great way to do that, right? They did that for web apps. They did that for the next generation applications. This is one example where with the right operators in mind, the right CRDs, the right frameworks on top of Kubernetes managed correctly, you are actually in a great position to just go to market with that. >> I wonder if you might have a customer example that might walk us through kind of where they are in this discussion; we talk to many companies, you know, and even the whole IoT piece was early in this. So what's actually real today, how much is planning, is it years we're talking before some of these really come to fruition?
>> So yeah, I can't name a customer, but I can say that every single car manufacturer we're talking to is absolutely interested in solving the operational problem of running machine-learning frameworks as a service, making sure those are up and running and up to speed at any given point in time, spin them up in a multitenant fashion, make sure that the GPU enablement is actually done properly at all layers of the virtualization. These are real operational challenges that they're facing today, and they're looking to solve them with us. Pick any large car manufacturer you want. >> John: Nice. We're going down to something that I can type on my own keyboard then, and go to GitHub, right? One of the places to go to run TensorFlow, the machine-learning framework, on Kubernetes is Kubeflow, and you touched on that a little bit yesterday on stage, you want to talk about that maybe? >> Oh, absolutely, yes. That's the core of our current strategy right now. We're looking at Kubeflow as one of the key enablers of machine-learning frameworks as a service on top of Kubernetes, and I think they're a great example because they can really show how that as-a-service can be implemented on top of a virtualization platform, whether that be KVM, pure KVM, on bare metal, on OpenStack, and actually provide machine-learning frameworks such as TensorFlow, PyTorch, Seldon Core. You have all those frameworks being supported, and then basically start mixing and matching. I think ultimately it's so interesting to us because the data scientists are really not the ones that are expected to manage all this, right? Yet they are the core of having to interact with it. In the next generation of the workloads, we're talking to PhDs and data scientists that have no interest whatsoever in understanding how all of this works on the back end, right? They just want to know, this is where I'm going to submit my artifact that I'm creating, this is how it works in general. Companies pay them a lot of money to do just that, and to just do the model, because that's where, until the right model is found, that is exactly where the value is. >> So Stephan, does Canonical go talk to the data scientists, or is there a class of operators who are facilitating the data scientists? >> Yes, we talk to the data scientists to understand their problems, we talk to the operators to understand their problems, and then we work with partners such as Google to try and find solutions to that. >> Great, what kind of conversations are you having here at the show? I can't imagine there's too many of those, great to hear if there are, but where are they? I think everybody here knows containers, very few know Kubernetes, and how far up the stack of building new stuff are they? >> You'd be surprised, I mean, we put this out there, and so far, I want to say the majority of the customer conversations we've had took an AI turn and said, this is what we're trying to do next year, this is what we're trying to do later in the year, this is what we're currently struggling with. So glad you have an approach, because otherwise, we would spend a ton of time thinking about this, a ton of time trying to solve this in our own way that then gets us stuck in some deep end that we don't want to be in. So, help us understand this, help us pave the way. >> John: Nice, nice. I don't want to leave without also talking about MicroK8s, that's a Kubernetes snap, a quick download, can we talk a little bit about that? >> Yeah, glad to.
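A hedged sketch of what "machine-learning frameworks as a service" can look like from the user side: submitting a TensorFlow training job to a Kubeflow-enabled cluster as a TFJob custom resource, with a GPU requested for the worker (the GPU-enablement point above). The kubeflow.org group and version, the spec layout, and the image are assumptions to check against the installed Kubeflow release.

```python
# Sketch: submit a training job to a Kubeflow-enabled cluster as a TFJob custom
# resource. Group/version, spec layout, image, and the GPU resource name are
# assumptions that depend on the installed Kubeflow release and device plugin.
from kubernetes import client, config

config.load_kube_config()
tfjob = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "TFJob",
    "metadata": {"name": "mnist-train", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 2,
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",
                            "image": "example.com/mnist-train:latest",
                            "resources": {"limits": {"nvidia.com/gpu": "1"}},
                        }]
                    }
                },
            }
        }
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="kubeflow",
    plural="tfjobs", body=tfjob,
)
```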
This was an idea that we conceived that came out of this notion of, alright, well if I do have, talking to a data scientist, if I do have a data scientist, where does he start? >> Stu: Does Kubernetes have a learning curve today? >> It does, yeah, it does. So here's the thing, as a developer, what options do you have right when you get started? You can either go out and get a Kubernetes stood up on one of the public clouds, but what if you're on the plane, right? You don't have a connection, you want to work on your local laptop. Possibly, that laptop also has a GPU, and you're a data scientist and you want to try this out because you know you're going to submit this training job now to a (mumbles) that runs on-prem behind the firewall with a limited training set, right? This is the situation we're talking about. So ultimately, the motivation for creating MicroK8s was we want to make this very, very equivalent. Now you can deploy Kubeflow on top of MicroK8s today, and it'll run just fine. You get your TensorBoard, you have your Jupyter notebook, and you can do your work, and you can do it in a fashion that will then be compatible with your on-prem and public machine-learning frameworks. So that was the original motivation for why we went down this road, but then we noticed, you know what, this is actually a wider need. People are thinking about local Kubernetes in many different ways. There are a couple of solutions out there. They tend to be cumbersome, or more cumbersome than developers would like. So we actually said, you know, maybe we should turn this into a more general purpose solution. So hence, MicroK8s. It works like a snap on your machine; you kick that off, you have the Kubernetes API in under 30 seconds, or a little longer if your download speed plays a factor here, you enable DNS and you're good to go. >> Stephan, I just want to give you the opportunity, is there anything in the Queens release that your customers have been specifically waiting for, or any other product announcements before we wrap? >> Sure, we're very excited about the Queens release. We think the Queens release is one of the great examples of the maturity of the code base and really the nod towards the operator, and that, I think, was the big challenge back in the olden days of OpenStack, where it took a long time for the operators to be heard and to establish that conversation. We'd like to say, and to see, that OpenStack Queens has matured in that respect, and we like things like Octavia. We're very excited about (mumbles) as a service taking on its own life and being treated as a first-class citizen. I think that it was a great decision of the community to get on that road. We're supporting it as a part of our distribution. >> Alright, well, appreciate the update. Really fascinating to hear about all, you know, everybody's thinking about it and really starting to move on all the ML and AI stuff. Alright, for John Troyer, I'm Stu Miniman. Lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching The Cube. (upbeat music)
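A hedged sketch of the local MicroK8s workflow described above: install the snap, enable DNS, then talk to the local API server from Python. The exact snap commands and the kubeconfig export step vary by MicroK8s version, so treat them as assumptions.

```python
# Sketch of the local MicroK8s workflow: install the snap, enable DNS, then point
# the Python client at the local cluster. Commands and config path are assumptions
# that vary by MicroK8s version.
import subprocess
from kubernetes import client, config

subprocess.run(["sudo", "snap", "install", "microk8s", "--classic"], check=True)
subprocess.run(["sudo", "microk8s.enable", "dns"], check=True)

# Export the cluster's kubeconfig and load it into the Python client.
with open("/tmp/microk8s.kubeconfig", "w") as f:
    subprocess.run(["sudo", "microk8s.config"], check=True, stdout=f)

config.load_kube_config(config_file="/tmp/microk8s.kubeconfig")
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```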

Published Date : May 22 2018


George Mihaiescu, OICR | OpenStack Summit 2018


 

>> Narrator: Live from Vancouver, Canada, it's theCUBE, covering OpenStack Summit North America 2018, brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> The sun has come out, but we're still talking about a lot of the cloud here at the OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with my co-host John Troyer. Happy to welcome to the program the 2018 Super User Award winner, George Mihaiescu, who's the senior cloud architect with the Ontario Institute for Cancer Research or OICR. First of all, congratulations. >> Thank you very much for having me. >> And thank you so much for joining us. So cancer research, obviously is, one of the things we talk about is how can technology really help us at a global standpoint, help people. So, tell us a little about the organization first, before we get into the tech of it? >> So OICR is the largest cancer research institution in Canada, and is funded by government of Ontario. Located in Toronto, we support about 1,700 researchers, trainees and clinician staff. It's focused entirely on cancer research, it's located in a hub of cancer research in downtown Toronto, with Princess Margaret Hospital, Sick Kids Hospital, Mount Sinai, very, very powerful research centers, and OICR basically interconnects all these research centers and tries to bring together and to advance cancer research in the province, in Canada and globally. >> That's fantastic George. So with that, sketch out for us a little bit your role, kind of the purview that you have, the scope of what you cover. >> So I was hired four years ago by OICR to build and design cloud environment, based on a research grant that was awarded to a number of principal investigators in Canada to build this cloud computing infrastructure that can be used by cancer researchers to do large-scale analysis. What happens with cancer, because the variety of limitations happening in cancer patients, researchers found that they cannot just analyze a few samples and draw a conclusion, because the conclusion wouldn't be actually valid. So they needed to do large-scale research, and the ICGC, which is International Cancer Genome Consortium, an organization that's made of 17 countries that are donating, collecting and analyzing data from cancer patients, okay, they decided to put together all this data and to align it uniformly using the same algorithm and then analyze it using the same workflows, in order to actually draw conclusion that's valid across multiple data sets. They are focusing on the 50 most common types of cancer that affect most people in this world, and for each type of cancer, at least two countries provide and collect data. So for brain cancer, let's say we have data sets from two countries, for melanoma, for skin, and this basically gives you better confidence that the conclusion you draw is valid, and then the more pieces of the puzzle you throw on the table, the easier to see the big picture that's this cancer. >> You know George, I mean, I'm a former academic, and you know, the more data you get right, the more infrastructure you're going to have to have. I'm just reading off the announcement, 2,600 cores, 18 terabytes of RAM, 7.3 petabytes of storage, right, that's a lot of data, and it's a lot of... accessed by a lot of different researchers. When you came in, was the decision to use OpenStack already made, or did you make that decision, and how was the cloud architected in that way? >> The decision was basically made to use open source. 
We wanted basically to spend the money on capacity, on hardware, on research and not on licensing and support. >> John: Good use of everybody's tax dollars. >> Exactly, so you cannot do that if you have to spend money for paying licensing, then you probably have only half of the capacity that you could. So that means less large-scale analysis, it takes longer, and it's more costly. So Ceph for storing the data sets and OpenStack for the infrastructure as a service offering was a no-brainer. My specialty was in OpenStack and Ceph, I started OpenStack seven years ago, so I was hired to design and build, and I had a chance to actually do alignment and mutation calling for some of the data sets, so I was able to monitor the kind of stress that these workflows put on the system, so when I designed it, I knew what was important, and what to focus on. So it's a cloud environment, it's customized for cancer research. We have a very good ratio of RAM per CPU, we have very large local disks for the VMs, for the virtual machines to be able to download very large data sets. We built it so if one compute node fails, you only impact the few workflows running there, you don't have single points of failure. That's another tuning that we applied to the system. >> George, can you walk us through a little bit of the stack? What do you use, do you build your own OpenStack, or do you get it from someone? >> So basically, we use commodity hardware, just high-density chassis, currently from Super Micro, Ubuntu for the operating system, no licensing there, OpenStack from the Ubuntu packages. We focus more on stability, scalability and support costs, internal support costs, because it's just myself and I have a colleague, Gerard Baker, who's a cloud engineer, and you have to support all this environment, so we try to focus on the features that are most useful to our users, as well as less strain on our time and support resources. >> I mean that's, let's talk about the scalability, right? You said the team is you and a colleague. >> George: Yes. >> But mostly, right. And you know, in the olden days, right, you would be taking care of maybe a handful of machines, and maybe some disk arrays in the lab. Now you're basically servicing an entire infrastructure for all of Canada, right? At how many universities? >> Well basically, it's global, so we have 40 research projects from four continents. So we have from Australia, from Israel, from China, from Europe, US, Canada. So approved cancer researchers that can access the data open up an account with us, and they get a quota, and they start their virtual machines, they download the data sets from the S3 API of Ceph to their VMs, and they do analysis and we charge them for the time used, and because everything we use is open source, and we don't pay any licensing fees, we are able to, and we don't run for profit, charge them just what it costs us to be able to replenish the hardware when it fails. >> Nice, nice. And these are actually the very large machines, right? Because you have to have huge, thick data sets, you've got big data sets you have to compare all at once. >> Yeah, the average size of a file that has the normal DNA of the patient, and they need also the tumor DNA from the biopsy, an average whole genome sequence is about 150 gigabytes. So they need at least 300 gigabytes, and depending on the analysis, if they find mutations, then the output is usually five, 10 gigabytes, so much smaller.
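To make the sizing point concrete, here is a hedged sketch of how a RAM-heavy, large-local-disk flavor could be defined with the standard openstack CLI. The flavor name and the numbers are hypothetical illustrations of "a very good ratio of RAM per CPU" and "very large local disks", not OICR's actual values.

```python
# Illustrative only: hypothetical numbers in the spirit of the sizing described.
import subprocess

subprocess.run([
    "openstack", "flavor", "create",
    "--vcpus", "8",
    "--ram", "61440",        # RAM in MB, roughly 7.5 GB per vCPU
    "--disk", "40",          # root disk in GB
    "--ephemeral", "500",    # large local scratch space for genome downloads
    "genomics.large",        # hypothetical flavor name
], check=True)
```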
For other workflows, you have to actually align the data, so you input 150 gigabytes and the output is 150 or a bit more with metadata. And so nevertheless, you need very large storage for the virtual machines, and these are virtual machines that run very hard, in terms of you cannot do CPU oversubscription, you cannot do memory oversubscription, when you have a workflow that runs for four days at a hundred percent CPU. So it's different than other web-scale environments, where you have a website running at 10%, or you can do 10-to-one oversubscription, and then you go much cheaper or with different solutions. Here you have to only provide what you have physically. >> John: That's great. >> George, you've said you participated in the OpenStack community for about seven years now. >> George: Yes. >> What kind of, do you actually contribute code, what pieces are you active in within the community? >> Yeah, so I'm not a developer. My background is in networking, system administration and security, but I was involved in OpenStack since the beginning, before it was a foundation. I went to the first OpenStack public conference in Boston seven years ago, at the International Intercontinental Hotel, and over time I was involved in discussions on the IRC channel, mailing list support, reporting bugs. Even recently we had a very interesting package affected as well. The cloud-utils package that is supposed to resize the disk of the VM as it boots was not using more than two terabytes because of a bug, okay. So we reported this, and Scott Moffat, who's the maintainer of the cloud-utils package, worked on the bug, and two days later we had a fix, and they built a package, it's in the latest Ubuntu cloud image, and with that happening, everybody else is going to use the same Ubuntu package, so somebody who now has larger-than-two-terabyte VMs, when they boot, they'll be able to resize and use the entire disk. And that's just an example of how with open source we can achieve things that would take much longer with a commercial distribution, where even if you pay, it doesn't necessarily mean that the response... >> Sure. Also George, any lessons learned? You've been with us a long time, right, and like Ceph. One thing we noticed today in the keynote is actually a lot of the storage, networking and compute wasn't really talked about, those projects were maybe de-emphasized a bit, as they talked about all the connectivity to everything else. So, I mean any lessons, so you... My point is, the infrastructure side of OpenStack is stable, but any lessons learned along the journey? >> I think the lessons are that you can definitely build very affordable and useful and scalable infrastructure, but you have to get your expectations right. We only use the OpenStack projects that we consider are stable enough, so we can support them confidently without spending, like if a project adds 5% value to your offering, but eats 80% of your time debugging and trying to get it working, and doesn't have packages and is missing documentation and so on, that's maybe not a good fit for your environment if you don't have the manpower. And if it's not absolutely needed. Another very important lesson is that you have to really stay up to date, like go to the conferences, read the emails from the mailing list, be active in the community. At the OpenStack meetups in Toronto in 2018, we present there, we talk to other members. In these seven years I've read tens of thousands of emails, so I learn from other users' experiences, and I try to help where I can.
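For context on the resize story George tells, cloud-init's growpart step (from the cloud-utils package) expands the root partition and filesystem at first boot so the VM can use whatever disk it was given. A rough sketch of the equivalent manual steps is below; the device name is an assumption, and in practice cloud-init runs this for you.

```python
# Rough sketch of the growpart/resize flow; device names are assumptions.
import subprocess

DEVICE = "/dev/vda"       # assumed virtio root disk
PARTITION = "1"           # assumed root partition number

# Grow partition 1 to fill the available disk (growpart comes from cloud-utils).
subprocess.run(["growpart", DEVICE, PARTITION], check=True)

# Resize the ext4 filesystem to match the enlarged partition.
subprocess.run(["resize2fs", DEVICE + PARTITION], check=True)
```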
You have to be involved with the developers, I know the Ceph core developers, Sage and other people. So, you can't do this just by staying on the side and looking, you have to be involved. >> Good, George what are you looking for next from this community? You talked about the stability, are there pieces that you're hoping reach that maturity threshold for yourselves, or new functionalities that you're looking for down the road? >> I think what we want to provide to our researchers, 'cause they don't run web scale applications, so their needs are a little bit different. We want to add Magnum to our environment, to allow them deploy Kubernetes cluster easily. We want to add Octavia to expose the services, even though they don't run many web services, but you have to find a way to expose them when they run them. Maybe, Trove, database as a service, we'll see if we can deploy it safely and if it's stable enough. Anything that OpenStack comes up with, we basically look, is it useful, is it stable, can you do it, and we try it. >> George, last thing. Your group is the Super User of the Year. Can you just walk us through that journey, what led to the nomination, what does it mean to your team to win? >> I think we are a bit surprised, because we are a very small team, and our scale is not as big as T-Mobile or the other members, but I think it shows that again, for a big company to be able to deploy OpenStack at scale and make it work, it's maybe not very surprising 'cause yes, they have the resources, they have a lot of manpower and a lot of... But for a small institution or organization, or small company to be able to do it, without involving a vendor, without involving extra costs, I think that's the thing that was appreciated by the community and by the OpenStack Foundation, and yeah, we are pretty excited to have won it. >> All right, George, let me give you the final word, as somebody that's been involved with the community for a while. What would you say to people if they're, you know, still maybe looking from the outside or played with it a little bit. What tips would you give? >> I think we are living proof that it can be done, and if you wait until things are perfect, then they will never be, okay. Even Google has services in beta, Amazon has services in beta. You have to install OpenStack, it's much more performant and stable than when I started with OpenStack, where there was just a few projects, but definitely they will get help from the community, and the documentation's much better. Just go and do it, you won't regret it. >> George, as we know, software will eventually work, hardware will eventually fail. >> Absolutely. >> So, George Mihaiescu, congratulations to OICR on the Super User of the Year award, for John Troyer, I'm Stu Miniman, we're getting towards the end of day one of three days of wall to wall coverage here at OpenStack Summit 2018 in Vancouver. Thanks so much for watching theCUBE.
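George mentions adding Magnum so researchers can stand up Kubernetes clusters themselves. Below is a hedged sketch of what that self-service flow looks like with the openstack CLI; it assumes the Magnum service is deployed, and the image, flavor, keypair and cluster names are hypothetical placeholders.

```python
# Hedged sketch of Magnum-based self-service Kubernetes; names are placeholders.
import subprocess

def osc(*args):
    # Thin wrapper around the openstack CLI.
    subprocess.run(["openstack", *args], check=True)

# Define a reusable Kubernetes cluster template (requires the Magnum service).
osc("coe", "cluster", "template", "create",
    "--coe", "kubernetes",
    "--image", "fedora-atomic",        # assumed Magnum-supported image
    "--external-network", "public",    # assumed external network name
    "--flavor", "m1.large",
    "--keypair", "researcher-key",
    "k8s-template")

# Launch a three-node cluster from that template for a research project.
osc("coe", "cluster", "create",
    "--cluster-template", "k8s-template",
    "--node-count", "3",
    "research-k8s")
```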

Published Date : May 22 2018

SUMMARY :

brought to you by Red Hat, the OpenStack Foundation, at the OpenStack Summit 2018 in Vancouver. one of the things we talk about is how can technology So OICR is the largest cancer research the scope of what you cover. that the conclusion you draw is valid, and you know, the more data you get right, The decision was basically made to use open source. and invitation calling for some of the data sets, and you have to support all this environment, You said the team is you and a colleague. and maybe some disk arrays in the lab. and because the use, everything is open source, Because you have to have huge, thick data sets, and then you go much cheaper or different solutions. the OpenStack community for about seven years now. and that happen, everybody else is going to is actually a lot of the storage networking and looking, you have to be involved. but you have to find a way to expose them Your group is the Super User of the Year. or the other members, but I think it shows that again, What would you say to people if they're, and if you wait until things are perfect, George, as we know, software will eventually work, congratulations to OICR on the Super User of the Year award,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
GeorgePERSON

0.99+

George MihaiescuPERSON

0.99+

OICRORGANIZATION

0.99+

CanadaLOCATION

0.99+

80%QUANTITY

0.99+

John TroyerPERSON

0.99+

Gerard BakerPERSON

0.99+

Ontario Institute for Cancer ResearchORGANIZATION

0.99+

JohnPERSON

0.99+

BostonLOCATION

0.99+

Red HatORGANIZATION

0.99+

TorontoLOCATION

0.99+

hundred percentQUANTITY

0.99+

USLOCATION

0.99+

EuropeLOCATION

0.99+

150QUANTITY

0.99+

Scott MoffatPERSON

0.99+

18 terabytesQUANTITY

0.99+

2,600 coresQUANTITY

0.99+

10QUANTITY

0.99+

40 research projectsQUANTITY

0.99+

7.3 petabytesQUANTITY

0.99+

AmazonORGANIZATION

0.99+

ICGCORGANIZATION

0.99+

150 gigabytesQUANTITY

0.99+

two countriesQUANTITY

0.99+

International Cancer Genome ConsortiumORGANIZATION

0.99+

5%QUANTITY

0.99+

OpenStack FoundationORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

fiveQUANTITY

0.99+

GoogleORGANIZATION

0.99+

VancouverLOCATION

0.99+

10%QUANTITY

0.99+

Sick Kids HospitalORGANIZATION

0.99+

four daysQUANTITY

0.99+

AustraliaLOCATION

0.99+

CephORGANIZATION

0.99+

Princess Margaret HospitalORGANIZATION

0.99+

T-MobileORGANIZATION

0.99+

Vancouver, CanadaLOCATION

0.99+

IsraelLOCATION

0.99+

todayDATE

0.99+

seven years agoDATE

0.99+

four years agoDATE

0.99+

17 countriesQUANTITY

0.99+

two days laterDATE

0.98+

OpenStackTITLE

0.98+

each typeQUANTITY

0.98+

ChinaLOCATION

0.98+

2018DATE

0.98+

about 1,700 researchersQUANTITY

0.98+

UbuntuTITLE

0.98+

three daysQUANTITY

0.98+

10 gigabytesQUANTITY

0.97+

OpenStack Summit North America 2018EVENT

0.97+

seven yearsQUANTITY

0.97+

four continentsQUANTITY

0.97+

OneQUANTITY

0.97+

International Intercontinental HotelLOCATION

0.96+

Super MicroORGANIZATION

0.96+

OpenStack Summit 2018EVENT

0.96+

more than two terabytesQUANTITY

0.96+

firstQUANTITY

0.95+

50 most common types of cancerQUANTITY

0.95+

one subscriptionQUANTITY

0.95+

oneQUANTITY

0.95+

Mark Shuttleworth, Canonical | OpenStack Summit 2018


 

(soft electronic music) >> Announcer: Live from Vancouver, Canada, it's theCUBE. Covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and it's ecosystem partners. >> Welcome back, I'm Stu Miniman here with my cohost John Troyer and you're watching theCUBE's exclusive coverage of OpenStack Summit 2018 in Vancouver. Happy to welcome you back to the program, off the keynote stage this morning, Mark Shuttleworth, the founder of Canonical. Thank you so much for joining us. >> Stu, thanks for the invitation. >> Alright, so you've been involved in this OpenStack stuff for quite a bit. >> Right, since the beginning. >> I remember three years ago we were down in the other hall talking about the maturity of the platform. I think three years ago, it was like this container thing was kind of new and the basic infrastructure stuff was starting to get, in a nice term, boring. Because that meant we could go about business and be on the buzz of there's this cool new thing and we're going to kill Amazon, kill VMware, whatever else things that people thought that had a misconceived notion. So bring us forward to where we are 2018, what you're hearing from customers as you look at OpenStack and this community. >> Well, I think you pretty much called it. OpenStack very much now is about solving a real business problem, which is the automation of the data center and the cost parody of private data centers with public data centers. So I think we're at a time now where people understand the public cloud is a really good thing. It's great that you have these giant companies dueling it out to deliver better quality infrastructure at a better price. But then at the same time, having your own private infrastructure that runs cost-effectively is important. And OpenStack really is the only approach to that that exists today. And it's important to us that the conversation is increasingly about what we think really matters, which is the economics of owning it, the economics of running it, and how people can essentially keep that in line with what they get from the public cloud providers. >> Yeah, one of the barometers I use for vendors these days is in this multi-cloud world, where do you sit? Do you play with the HyperScalers? Are you a public cloud denier? Or, like most people you're, most people are somewhere in-between. In your keynote this morning, you were talking a bit about all of the HyperScalers that use your products as well as-- >> Ubuntu is at the heart of all of the major public cloud operations at multiple levels. So we see them as great drivers of innovation, great drivers of exposure of Ubuntu into the enterprise. We're still, by far, the number one platform used in public cloud by enterprises. It's hard to argue that public cloud is testing Dev now. It really, really isn't and so most of that is still Ubuntu. And now we're seeing that pendulum swing, all of those best practices, that consumption of Ubuntu, that understanding of what a leaner, meaner Enterprise Linux looks like. Bringing that back to the data center is exciting. For us, it's an opportunity to help enterprises rethink the data center to make it fully automated from the ground up. OpenStack is part of that, Kubernetes is part of that and now the cherry on top is really AI where people understand they have to be able to do it on public cloud, on private infrastructure and at the Edge. >> Mark, I wanted to talk about open source. Marketing open source, for a minute. 
We are obviously here, we're part of an open source community. Open source, defacto, has won the cloud technology stack wars. So there's one way of selling OpenStack where you pound on open a lot. >> I'm always a bit nervous about projects that put open. It sounds like they're sort of trying to gloss over something or wash over something or prove a point. They shouldn't have to. >> There's one about the philosophy of open source, which certainly has to stay there, right. Because that's what drove the innovation but I was kind of impressed about on the stage today, you talked about the benefits. You didn't say, well the venture's open. You said, well, we're facilitating these benefits. Speed to market, cost, et cetera. Can you talk about your approach, Canonical's approach to talking about this open source product in terms of its benefits? >> Sure, look, open source is a license. Under that license, there's room for a huge spectrum of interest and opinions and approaches. And I'd say that I certainly see an enormous amount of value in what I would call the passion-based open source story. Now, OpenStack is not that. It's too big, too complicated, to be one person's deep passion. It really isn't. But there's still a ton of innovation that happens in our world, across the full spectrum of what we see with open source, which is really experts trying to do something beautiful and elegant. And I still think that's really important in open source. You also have a new kind of dimension, which is almost like industrial trench warfare with open source. Which is huge organizations leveraging effectively their ability go get something widespread, widely adopted, quickly and efficiently by essentially publishing it as open source. And often, people get confused between these two ends of the spectrum. There's a bunch in between. What I like about OpenStack is that I think it's over the industrial trench warfare phase. You know, you just don't see a ton of people showing up here to throw parties and prove to everyone how cool they are. They've moved on to other open source projects. The people who are here are people who essentially have the real problem of I want to automate my data center, I want to have, essentially, a cloud that runs cost-effectively in my data center that I can use as part of a multi-cloud strategy. And so now I think we're in to that sort of, a more mature place with OpenStack. We're not either sort of artisan or craftsmen oriented, nor are we a guns blazing brand oriented. It's kind of now just solving the problems. >> Mark, there's still some nay-sayers out in the marketplace. Either they say that this never matured, there's a certain analyst firm that put out a report a couple of months ago that, it kind of denigrated what's happening here. And then there's others that, as you said, off chasing that next big wave of open source. What are you hearing from your customers? You've got a good footprint around the globe. >> So that report is nonsense, for a start. They're always wrong, right. If they're hyping something, they're wrong and if they're dissing something then they're usually wrong too. >> Stu: They have a cycle for that, I believe. (chuckling) >> Exactly. Selling gold at the barroom. Here's how I see it. I think that enterprises have a real problem, which is how do they create private cloud infrastructure. OpenStack had a real problem in that it had too many opinions, too many promises. Essentially a governing structure not a leadership structure. 
Our position on this has always been focus on the stuff that is really necessary. There was a ton of nonsense in OpenStack and that stuff is all failing. And so what? It was never essential to the mission. The mission is stand up a data center in an automated way, provide it, essentially, as resources, as a service to everybody who you think is authorized to be there, effectively. Segment and operate that efficiently. There's only a small part of OpenStack that was ever really focused on that. That's the stuff that's succeeding, that's the stuff we deliver. That's the stuff, we think very carefully about how to automate it so that, essentially, anybody can consume it at reasonable prices. Now, we have learned that it's better for us to do the operations almost. It's better for us actually to take it to people as a solution, say look, explain your requirements to us then let us architect that cloud with you then let us build that cloud then let us operate that cloud. Until it's all stable and the economics are good, then you can take over. I think what we have seen is that you ask every single different company to build OpenStack, they will make a bunch of mistakes and then they'll say OpenStack is the problem. OpenStack's not the problem. Because we do it again and again and again, because we do it in many different data centers, because we do it with many different industries, we're able to essentially put it on rails. When you consume OpenStack that way it's super cheap. These aren't my numbers, analysts have studied the costs of public infrastructure, the cost of the established, incumbent enterprise, virtualization solutions and so on. And they found that when you consume OpenStack from Canonical it is much, much cheaper than any of your other options in your own private data center. And I think that's a success that OpenStack should be proud of. >> Alright, you've always done a good job at poking at some of the discussions happening in the industry. I wouldn't say I was surprised but you were highlighting AI as something that was showing a lot of promise. People have been a little hot and cold depending on what part of the market you're at. Tell us about AI and I'd love to hear your thoughts in general. Kubernetes, Serverless, and ask you to talk about some of those new trends that are out there. >> Sure, the big problem with data science was always finding the right person to ask the right question. So you could get all the data in the world in a data lake but now you have to hire somebody who instinctively has to ask the right question that you can test out of that data. And that's a really hard problem. What machine learning does is kind of inverts the problem. It says, well, why don't we put all that data through a pattern matching system and then we'll end up with something that reflects the underlying patterns, even if we don't know what they are. Now, we can essentially say if you saw this, what would you expect? And that turns out to be a very powerful way to deal with huge amounts of data that, previously, you had to kind of have this magical intuition to kind of get to the bottom of. So I think machine learning is real, it's valuable in almost every industry, and the challenges now are really about standardizing underlying operations so that the people who focus on the business problems can, essentially, use them. So that's really what I wanted to show today is us working with, in that case it was Google, but you can generalize that. 
To standardize the experience for an institution who wants to hire developers, have them effectively build machine-driven models if they can then put those into production. There's a bunch of stuff I didn't show that's interesting. For example, you really want to take the learnings from machine-learning and you want to put those at the Edge. You want to react to what's happening as close to where it's happening as possible. So there's a bunch of stuff that we're working on with various companies. It's all about taking that AI outcome right to the Edge, to IOT, to Edge Cloud but we don't have time to get in to all of that today. >> Yeah, and Ubuntu is at the Edge, on the mobile platform. >> So we're in a great position that we're on the Cloud. Now you see what we're doing in the data center for enterprises, effectively recrafting the data center has a much leaner, more automated machine. Really driving down the cost of the data center. And yes, we're on the higher-end things. We're never going to be on the LightBulb. We're a full general-purpose operating system. But you can run Ubuntu on a $10 board now and that means that people are taking it everywhere. Amazon, for example, put Ubuntu on the DeepLens so that's a great example of AI at the edge. It's super exciting. >> So the Kubernetes, Serverless-type applications, what are your thinkings around there? >> Serverless is a lovely way to think about the flow of code in a distributed system. It's a really nice way to solve certain problems. What we haven't yet seen is we haven't seen a Serverless framework that you can port. We've seen great Serverless experiences being built inside the various public clouds but there's nothing consistent about them. Everything that you invest in a particular place is very useful there but you can't imagine taking that anywhere else. I think that's fine. >> Stu: Today's primarily Lando. >> And I think the other clouds have done a credible job of getting there quickly. But kudos to Amazon for kind of pioneering that. I do think we'll see generalized Serverless, it just doesn't exist at the moment and as soon as it does we'll be itching to get it into people's hands. >> Okay, yeah? >> Well, I just wanted to pull out something that you had said in case people miss it, you talked about managed OpenStack. And that, I think, managed Kubernetes has been a trend over the last year. Managed OpenStack now. Has been trans-- >> With these complex pieces of infrastructure, you could easily drown in learning it all and if you're only ever going to do one, maybe it makes sense to have somebody else do it for a while. You can always take it over later. So we're unusual in that we will essentially standup something complex like an OpenStack or a Kubernetes, operate it as long as people want and then train them to take over. So we're not exclusively managed and we're not exclusively arms-length. We're happy to start the one way and then hand over. >> I think that's an important development, though, that's been developing as the systems get more complicated. One UNIX admin needs a whole new skill set or broader skill set now that we're orchestrating a whole cloud so that's, I think that's great. And that's interesting. Anything else you're looking forward to, in terms of operation models. I guess we've said, Ubuntu everywhere from the edge to the center and now managed, as well. Anything else we're looking at in terms of operators should be looking at? 
>> Well, I think it just is going to stay sort of murky for a while simply because each different group inside a large institution has a boundary of their authority and to them, that's the edge. (chuckling) And so the term is heavily overloaded. But I would say, ultimately, there are a couple of underlying problems that have to be solved and if you look at the reference architectures that the various large institutions are putting out, they all show you how they're trying to attack these patterns using Ubuntu. One is physical provisioning. The one thing that's true with every Edge deployment is there are no humans there. So you can't kind of Band-Aid over the idea that when something breaks you need to completely be able to reset it from the ground up. So MAAS, Metal as a Service, shows up in the reference architectures from AT&T and from SoftBank and from Deutsche Telekom and a bunch of others because it solves their problem. It's the smallest piece of software you can use to take one server or 10 servers or 100 servers and just reflash them with Windows or CentOS or whatever you need. That's one thing. The other thing that I think is consistently true in all these different Edge Cloud permutations or combinations is that overhead's really toxic. If you need three nodes of overhead for a hundred node OpenStack, it's 3%. For a thousand node OpenStack, it's .3%. It's nothing, you won't notice it. If you need three nodes of OpenStack for a nine node Edge Cloud, well then that's 30% of your infrastructure costs. So really thinking through how to get the overhead down is kind of a key for us. And all the projects with telcos in particular that we're working on, that's really what we bring: that underlying understanding and some of those really lightweight tools to solve those problems. On top of that, they're all different, right. Kubernetes here, LXD there, OpenStack on the next one. AI everywhere. But those two problems, I think, are the consistent things we see as a pattern in the Edge. >> Alright, so Mark, last question I have for you. Company update. So last year we talked a little bit about focusing, where the company's going, talked a bit about the business model and you said to me, "Developers should never have to pay for anything." It's the governance people and everything like that. Give us the company update, everything from rumors from hey, maybe you're IPO-ing to what's happening, what can you share? >> Right, so the twin areas of focus, IOT and cloud infrastructure. IOT continues to be an area of R and D for us so we're still essentially underwriting an IOT investment. I'm very excited about that. I think it's the right thing to be doing at the moment. I think IOT is the next wave, effectively, and we're in a special position. We really can get down, both economically and operationally, into that sort of small niche kind of scenario. Cloud, for us, is a growth story. I talked a little bit about taking Ubuntu and Canonical into the finance sector. In one year, we closed deals with 20% of the top 20 banks in the world to build Ubuntu-based, open infrastructure. That's a huge shift from the traditional dependence exclusively on VMware and Red Hat. Now, suddenly, Ubuntu's in there, Canonical's in there. I think everybody understands that telcos really love Ubuntu and so that continues to grow for us. Commercially, we're expanding both in EMEA and here in the Americas. I won't talk more about our corporate plans other than to say I see no reason for us to scramble to cover any other areas.
I think cloud infrastructure and IOT is plenty for one company. For me, it's a privilege to combine that kind of business with what happens in the Ubuntu community. I'm still very passionate about the fact that we enable people to consume free software and innovate. And we do that without any friction. We don't have an enterprise version of Ubuntu. We don't need an enterprise version of Ubuntu, the whole thing's enterprise. Even if you're a one-person startup. >> Mark Shuttleworth, always a pleasure to catch up. Thank you so much for joining us. >> Mark: Thank you, Stu. >> For John Troyer, I'm Stu Miniman. Back with lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (soft electronic music)
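As a concrete illustration of the MAAS (Metal as a Service) provisioning pattern Mark describes in this interview, here is a rough sketch using the MAAS 2.x command line: authenticate, allocate a machine, and deploy an operating system onto bare metal with nobody on site. The profile name, URL, API key and system ID are placeholders, and exact command shapes vary between MAAS releases.

```python
# Rough sketch only; MAAS CLI details differ between releases.
import subprocess

def maas(*args):
    subprocess.run(["maas", *args], check=True)

# Authenticate a CLI profile against the MAAS region controller (placeholders).
maas("login", "admin", "http://maas.example.com:5240/MAAS/", "<api-key>")

# Allocate an available machine to this user; MAAS returns its system_id as JSON.
maas("admin", "machines", "allocate")

# Deploy Ubuntu onto the allocated machine, identified by its system_id.
maas("admin", "machine", "deploy", "<system_id>", "distro_series=bionic")
```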

Published Date : May 21 2018

SUMMARY :

Brought to you by Red Hat, the OpenStack Foundation, Happy to welcome you back to the program, in this OpenStack stuff for quite a bit. and be on the buzz of there's this cool new thing And OpenStack really is the only approach a bit about all of the HyperScalers that use your products Ubuntu is at the heart of all of the major the cloud technology stack wars. I'm always a bit nervous about projects that put open. There's one about the philosophy of open source, It's kind of now just solving the problems. And then there's others that, as you said, So that report is nonsense, for a start. Stu: They have a cycle for that, I believe. to us then let us architect that cloud with you happening in the industry. so that the people who focus on the business problems so that's a great example of AI at the edge. a Serverless framework that you can port. it just doesn't exist at the moment something that you had said in case people miss it, of infrastructure, you could easily drown from the edge to the center and now managed, as well. that the various large institutions are putting out, about the business model and you said to me, really love Ubuntu and so that continues to grow for us. Thank you so much for joining us. from OpenStack Summit 2018 in Vancouver.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Mark ShuttleworthPERSON

0.99+

John TroyerPERSON

0.99+

MadridLOCATION

0.99+

60QUANTITY

0.99+

JeffPERSON

0.99+

Dorich TelecomORGANIZATION

0.99+

CanonicalORGANIZATION

0.99+

VodafoneORGANIZATION

0.99+

$10QUANTITY

0.99+

AmazonORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

Miguel PerezPERSON

0.99+

SpainLOCATION

0.99+

10 serversQUANTITY

0.99+

two questionsQUANTITY

0.99+

CarrefourORGANIZATION

0.99+

45QUANTITY

0.99+

North CarolinaLOCATION

0.99+

MiguelPERSON

0.99+

AmericasLOCATION

0.99+

SoftBankORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

25 yearsQUANTITY

0.99+

2021DATE

0.99+

VancouverLOCATION

0.99+

AT&TORGANIZATION

0.99+

20%QUANTITY

0.99+

MarkPERSON

0.99+

100 serversQUANTITY

0.99+

30%QUANTITY

0.99+

JavaTITLE

0.99+

2018DATE

0.99+

OpenStack FoundationORGANIZATION

0.99+

2020DATE

0.99+

GoogleORGANIZATION

0.99+

last yearDATE

0.99+

PowerPointTITLE

0.99+

StuPERSON

0.99+

one serverQUANTITY

0.99+

15 yearsQUANTITY

0.99+

North AmericaLOCATION

0.99+

64%QUANTITY

0.99+

JeffreyPERSON

0.99+

next yearDATE

0.99+

3%QUANTITY

0.99+

LinkedInORGANIZATION

0.99+

todayDATE

0.99+

11QUANTITY

0.99+

CentOSTITLE

0.99+

Vancouver, CanadaLOCATION

0.99+

.3%QUANTITY

0.99+

two wordsQUANTITY

0.99+

120QUANTITY

0.99+

sixQUANTITY

0.99+

oneQUANTITY

0.99+

KaleenaPERSON

0.99+

three years agoDATE

0.99+

PythonTITLE

0.99+

OpenStackORGANIZATION

0.99+

two problemsQUANTITY

0.99+

yesterdayDATE

0.99+

bothQUANTITY

0.99+

Keynote Analysis | OpenStack Summit 2018


 

>> Announcer: Live, fro-- >> Announcer: Live from Vancouver, Canada it's theCUBE! Covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and it's ecosystem partners. >> Hi and welcome to SiliconANGLE Media's production of theCUBE here at OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with my cohost, John Troyer. We're here for three days of live wall-to-wall coverage at the OpenStack Foundation's show they have it twice a year John, pleasure to be with you again, you and I were together at the OpenStack show in Boston, a year ago, little bit further trip for me. But views like this, I'm not complaining. >> It's a great time to be in Vancouver, little bit overcast but the convention center's beautiful and the people seem pretty excited as well. >> Yeah so if you see behind us, the keynote let out. So John, we got to get into the first question of course for some reason the last month people are always Hey Stu where are you, what're you doing and when I walk through the various shows I'm doing when it comes to this one they're like, why are you going to the OpenStack show? You know, what's going on there, hasn't that been replaced by everything else? >> I got the same thing, there seems to be kind of a almost an antireligious thing here in the industry maybe more emotional perhaps at other projects. Although frankly look, we're going to take the temperature of the community, we're going to take the temperature of the projects, the customers, we got a lot of customers here, that's really the key here is that our people actually using this, being productive, functional, and is there enough of a vendor and a community ecosystem to make this go forward. >> Absolutely, so three years ago, when we were actually here in Vancouver, the container sessions were overflowing, people sitting in the aisles. You know containers, containers, containers, docker, docker, docker, you know, we went through a year or two of that. Then Kubernetes, really a wave that has taken over, this piece of the infrastructure stack, the KubeCon and CloudNativeCon shows, in general, I think have surpassed this size, but as we know in IT, nothing ever dies, everything is always additive, and a theme that I heard here that definitely resonated is, we have complexity, we need to deal with interoperability, everybody has a lot of things and that's the, choose your word, hybrid, multi-cloud world that you have, and that's really the state of opensource, it's not a thing, it's there's lots of things you take all the pieces you need and you figure out how to put 'em together, either buy them from a platform, you have some integrator that helps, so somebody that puts it all together, and that's where, you know, we live here, which is, by they way, I thought they might rename the show in the open, and they didn't, but there's a lot of pieces to discuss. 
>> Definitely an open infrastructure movement, we'll probably talk about that, look I loved the message this morning that the cloud is not consolidating, in fact it's getting more complicated, and so that was a practical message here, it's a little bit of a church of opensource as well, so the open message was very well received and, these are the people that are working on it, of course, but yeah, the fact that, like last year I thought in Boston, there was a lot of, almost confusion around containers, and where containers and Kubernetes fit in the whole ecosystem, I think, now in this year in 2018 it's a lot more clear and OpenStack as a project, or as a set of projects, which traditionally was, the hit on it was very insular and inward facing, has at least, is trying to become outward facing, and again that's something we'll be looking at this week, and how well will they integrate with other opensource projects. >> I mean John, you and I are both big supporters of the opensource movements, love the community at shows like this, but not exclusively, it's, you know, Amazon participating a little bit, using a lot of opensource, they take opensource and make it as a service, you were at Red Hat Summit last week, obviously huge discussion there about everything opensource, everything, so a lot going on there, let me just set for, first of all the foundation itself in this show, the thing that I liked, coming into it, one of the things we're going to poke at is, if I go up to the highest level, OpenStack is not the only thing here, they have a few tracks they have an Edge computer track, they have a container track, and there's a co-resident OpenDev Show happening a couple floors above us and, even from what the OpenStack Foundation manages, yes it OpenStack's the main piece of it, and all those underlying projects but, they had Katacontainers, which is, you know, high level project, and the new one is Zuul, talking about CI/CD, so there are things that, will work with OpenStack but not exclusively for OpenStack, might not even come from OpenStack, so those are things that we're seeing, you know, for example, I was at the Veeam show last week, and there was a software company N2WS that Veeam had bought, and that solution only worked on Amazon to start and, you know, I was at the Nutanix show the week before, and there's lots of things that start in the Amazon environment and then make their way to the on-premises world so, we know it's a complex world, you know, I agree with you, the cloud is not getting simpler, remember when cloud was: Swipe the credit card and it's super easy, the line I've used a lot of times is, it is actually more complicated to buy, quote, a server equivalent, in the public could, than it is if I go to the website and have something that's shipped to my data center. >> It's, yeah, it's kind of ironic that that's where we've ended up. 
You know, we'll see, with Zuul, it'll be very interesting, one of the hits again on OpenStack has been reinvention of the wheel, like, can you inter-operate with other projects rather than doing it your self, it sounds like there's some actually, some very interesting aspects to it, as a CI/CD system, and certainly it uses stuff like Ansible so it's, it's built using opensource components, but, other opensource components, but you know, what does this give us advantage for infrastructure people, and allowing infrastructure to go live in a CI/CD way, software on hardware, rather than, the ones that've been built from the dev side, the app side. I'm assuming there's good reasons, or they wouldn't've done it, but you know, we'll see, there's still a lot of projects inside the opensource umbrella. >> Yeah, and, you know, last year we talked about it, once again, we'll talk about it here, the ecosystem has shifted. There are some of the big traditional infrastructure companies, but what they're talking about has changed a lot, you know. Remember a few years ago, it was you know, HP, thousand people, billion dollar investment, you know, IBM has been part of OpenStack since the very beginning days, but it changes, even a company like Rackspace, who helped put together this environment, the press release that went was: oh, we took all the learnings that we did from OpenStack, and this is our new Kubernetes service that we have, something that I saw, actually Randy Bias, who I'll have on the show this week, was on, the first time we did this show five years ago, can't believe it's the sixth year we're doing the show, Randy is always an interesting conversation to poke some of the sacred cows, and, I'll use that analogy, of course, because he is the one that Pets vs Cattle analogy, and he said, you know, we're spending a lot of time talking about it's not, as you hear, some game, between OpenStack and Kubernetes, containers are great, isn't that wonderful. If we're talking about that so much, maybe we should just like, go do that stuff, and not worry about this, so it'll be fun to talk to him, the Open Dev Show is being, mainly, sponsored by Mirantis who, last time I was here in Vancouver was the OpenStack company, and now, like, I saw them a year ago, and they were, the Kubernetes company, and making those changes, so we'll have Boris on, and get to find out these companies, there's not a lot of ECs here, the press and analysts that are here, most of us have been here for a lot of time so, this ecosystem has changed a lot, but, while attendance is down a little bit, from what I've heard, from previous years, there's still some good energy, people are learning a lot. >> So Stu, I did want to point out, that something I noticed on the stage, that I didn't see, was a lot of infrastructure, right? OpenStack, clearly an infrastructure stack, I think we've teased that out over the past couple years, but I didn't see a lot of talk about storage subsystems, networking, management, like all the kind of, hard, infrastructure plumbing, that actually, everybody here does, as well as a few names, so that was interesting, but at the end of the day, I mean, you got to appeal to the whole crowd here. >> Yeah, well one of the things, we spent a number of years making that stuff work, back when it was, you know, we're talkin' about gettin' Cinder, and then all the storage companies lined up with their various, do we support it, is it fully integrated, and then even further, does it actually work really well? 
So, same stuff that went through, for about a decade, in virtualization, we went through this in OpenStack, we actually said a couple years ago, some of the basic infrastructure stuff has gotten boring, so we don't need to talk about it anymore. Ironic, it's actually the non-virtualized environments, that's the project that they have here, we have a lot of people who are talking bare metal, who are talking containers, so that has shifted, an interesting one in the keynote is that you had the top level sponsors getting up there, Intel bringing around a lot of their ecosystem partners, talking about Edge, talking about the telecommunications, Red Hat, giving a recap of what they did last week at their summit, they've got a nice cadence, the last couple of years, they've done Red Hat Summit, and OpenStack Summit, back-to-back so that they can get that flow of information through, and then Mark Shuttleworth, who we'll have on a little bit later today, he came out puchin', you know, he started with some motherhood in Apple Pi about how Ubuntu is everywhere but then it was like, and we're going to be so much cheaper, and we're so much easier than the VMwares and Red Hats of the world, and there was a little push back from the community, that maybe that wasn't the right platform to do it. >> Yeah, I think the room got kind of cold, I mean, that's kind of a church in there, right, and everyone is an opensource believer and, this kind of invisible hand of capitalism (laughs) reached in and wrote on the wall and, you know, having written and left. But at the end of the day, right, somebody's got to pay for babies new shoes. I think that it was also very interesting seeing, at Red Hat Summit, which I covered on theCUBE, Red Hat's argument was fairly philosophical, and from first principles. Containers are Linux, therefore Red Hat, and that was logically laid out. Mark's, actually I loved Mark's, most of his speech, which was very practical, this, you know, Ubuntu's going to make both OpenStack and containers simpler, faster, quicker, and cheaper, so it was clearly benefits, and then, for the folks that don't know, then he put up a couple a crazy Eddy slides like, limited time offer, if you're here at the show, here's a deal that we've put together for ya, so that was a little bit unusual for a keynote. >> Yeah, and there are a lot of users here, and some of them'll hear that and they'll say: yeah, you know, I've used Red Hat there but, you can save me money that's awesome, let me find out some more about it. Alright, so, we've got three days of coverage here John, and we get to cover this really kind of broad ecosystem that we have here. You talked about what we don't discuss anymore, like the major lease was Queens, and it used to be, that was where I would study up and be like oh okay, we've got Hudson, and then we got, it was the letters of the alphabet, what's the next one going to be and what are the major features it's reached a certain maturity level that we're not talking the release anymore, it's more like the discussions we have in cloud, which is sometimes, here's some of the major things, and oh yeah, it just kind of wraps itself in. Deployments still, probably aren't nearly as easy as we'd like, Shuttleworth said two guys in under two weeks, that's awesome, but there's solutions we can put, stand up much faster than that now, two weeks is way better than some of the historical things we've done, but it changes quite a bit. 
So, telecommunications still a hot topic, Edge is something, you know what I think back, it was like, oh, all those NFE conversations we've had here, it's not just the SDN changes that are happening, but this is the Edge discussion for the Telcos, and something people were getting their arms around, so. >> It's pretty interesting to think of the cloud out on telephone poles, and in branch offices, in data centers, in closets basically or under desks almost. >> No self-driving cars on the keynote stage though? >> No, nothing that flashy this year. >> No, definitely not too flashy so, the foundation itself, it's interesting, we've heard rumors that maybe the show will change name, the foundation will not change names. So I want to give you last things, what're you looking for this week, what were you hearing from the community leading up to the show that you want to validate or poke at? >> Well, I'm going to look at real deployments, I'd like to see how standard we are, if we are, if an OpenStack deployment is standardized enough that the pool of talent is growing, and that if I hire people from outside my company who work with OpenStack, I know that they can work with my OpenStack, I think that's key for the continuation of this ecosystem. I want to look at the general energy and how people are deploying it, whether it does become really invisible and boring, but still important. Or do you end up running OpenShift on bare metal, which I, as an infrastructure person, I just can't see that the app platform should have to worry about all this infrastructure stuff, 'cause it's complicated, and so, I'll just be looking for the healthy productions and production deployments and see how that goes. >> Yeah, and I love, one of the things that they started many years ago was they have a super-user category, where they give an award, and I'm excited, we have actually have the Ontario Institute for Cancer Research is one of our guests on today, they won the 2018 super-user group, it's always awesome when you see, not only it's like, okay, CERN's here, and they're doing some really cool things looking for the Higgs boson, and all those kind of things but, you know, companies that are using technology to help them attack the battle against cancer, so, you know, you can't beat things like that. We've got the person from the keynote, Melvin, who was up on stage talking about the open lab, you know, community, ecosystem, definitely something that resonates, I know, one of the reasons I pulled you into this show in the last year is you're got a strong background there. >> Super impressed by all the community activity, this still feels like a real community, lots of pictures of people, lots of real, exhortations from stage to like, we who have been here for years know each other, please come meet us, so that's a real sign of also, a healthy community dynamic. 
>> Alright, so John first of all, I want to say, Happy Victoria Day, 'cause we are here in Vancouver, and we've got a lot going on here, it's a beautiful venue, hope you all join us for all of the coverage here, and I have to give a big shout out to the companies that allowed this to happen, we are independent media, but we can't survive without the funding of our sponsors so, first of all the OpenStack Foundation, helps get us here, and gives us this lovely location overlooking outside, but if it wasn't for the likes of our headline sponsor Red Hat as well as Canonical, Kontron, and Nuage Networks, we would not be able to bring you this content so, be sure to checkout thecube.net for all the coverage, for John Troyer, I'm Stu Miniman, thanks so much for watching theCUBE. (bubbly music)

Published Date : May 21 2018

SUMMARY :

the OpenStack Foundation, and it's ecosystem partners. at the OpenStack Foundation's show they have it twice a year and the people seem pretty excited as well. for some reason the last month people are always I got the same thing, there seems to be kind of a and that's really the state of opensource, it's not a thing, so the open message was very well received and, one of the things we're going to poke at is, one of the hits again on OpenStack has been and he said, you know, that something I noticed on the stage, that I didn't see, an interesting one in the keynote is that you had But at the end of the day, right, it's more like the discussions we have in cloud, It's pretty interesting to think of the cloud the foundation will not change names. I just can't see that the app platform I know, one of the reasons I pulled you into this show Super impressed by all the community activity, the companies that allowed this to happen,

Karl Rautenstrauch, Microsoft | VeeamON 2018


 

>> Announcer: Live from Chicago, Illinois, it's theCUBE, covering VEEAMON 2018. Brought to you by Veeam. >> Welcome back to VEEAMON 2018 in Chicago, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. I'm here with my cohost, Stu Miniman. Karl Rautenstrauch is here, Karl Rautenstrauch, Senior Program Manager for Azure Storage at Microsoft. Karl, thanks for coming on. >> It's a pleasure, guys. Thank you for having me. >> You've got a beautiful picture of your family. You got three boys at home, is that right? >> Karl: Three boys. >> Alright. >> They keep me out of trouble. They get into it, they keep me out of it. >> I'm one of three boys. My mom, you know, kept us going. You must have a strong woman at home. >> She is a saint. >> At any rate, thanks for coming on. We love talking Microsoft Azure, Cloud and storage. Let's start with your role. >> Karl: Sure. What do you have? >> What do you do at Microsoft? >> Absolutely. So for the last year I've been program manager with the storage team, and I've kind of a unique role. Usually you see program managers who focus on features, right? You are championing a new feature in your service, your platform. For me, I get to work with our partner ecosystem. So I spend a lot of time with our great partners, like Veeam, and our channel partners, like SHI, CDW, Softchoice, Insight. I'll tell you, I've got the best job in the business. I can't complain. I get to work with great, smart people everyday. >> So is your role transferring knowledge to those partners, assisting those partners, acting as a catalyst, gathering information from them and feeding it back to the product teams? >> Yeah, really all of the above. Helping to make sure that we've got a combined solution, an end-to-end solution, that's the best thing for our customers. So everything from upfront assessment through implementation through health check afterwards, our goal is to have the happiest customers in the public Cloud, and we can't do that without our partners. >> How should we think about the Azure Storage portfolio? Can you paint a picture for us? >> Oh boy, it has grown drastically just in the last couple of months. So not only do we have our first party offerings in the disk, traditional VM disk as we all know it, you're going to attach to a server, we have hosted file infrastructures where we provide file shares that don't require a server to manage, our partnership with NetApp where we are going to be operating NetApp systems in our data centers and offering their native services. And we just continue to expand with big data solutions, with Avere, our new acquisition, that is really aimed at high performance compute environments like we see in genomics and media and entertainment. It's just a portfolio that continues to grow. We all joke that storage is boring, right? Nobody cares about storage, but honestly, it's one of the most interesting and fastest growing and evolving platforms in Azure. >> We joke, sometimes we call it snore-age, but Stu and I are kind of boring people, so we love talking about it. >> I like that. >> So you got file, you got object, you got block, you got big data solutions, you got high performance file solutions. Okay, like you say, this expanding portfolio. >> Karl, I look back at my career and Microsoft's had a long partnership, not only on the compute side, but really on the storage side, maybe isn't as well known as shipping on every PC and server out there. 
A lot has changed, when you talk about Azure and Azure Stack coming out. Maybe explain a little bit, I believe you called it the first party versus the second party. How Microsoft does it versus Microsoft partners, how those mesh together. >> Yeah, absolutely. Well I'll tell you. So I joined the company about five years ago, and I've been on the storage team for the last year. I was a field specialist, a subject matter expert, before that working very, very closely with customers. And what I love that I've seen over this period through the Satya Nadella era, is just this open Microsoft that says, we don't have to do everything. We don't have to try to provide everything to the customer. We really believe in, and I think we just diffuse that best of breed attitude going forward. Our partners feel that. Whether we're working with Veeam in Azure Public Cloud as a target, or them offering protection of VMs in public cloud, which is necessary by the way. I think that's a huge fallacy in the industry, that you place your app, you place your machine in a public cloud, and it's magically protected by pixies. It's not. >> Backup and security aren't a concern, wherever you put it, right? >> Absolutely, wherever they are. So we rely on our partners like Veeam to provide that. And really where Azure Stack comes in, is providing that consistent experience, not just to our customers, but also to our partners. So Veeam is able to protect Azure Public assets, in the same manner they're able to protect Azure Private, for Azure Stack resources. So really it's just offering customers choice to use best of breed solutions, and allowing our partners to have an easy means to support both on-premises and public Cloud. >> So it's like a service catalog that you guys offer, and then you advise customers or they pick and choose what they want? How's that all work? >> Yeah, so really what we do, and that's a great way to put it. We have what we call the Azure Marketplace that's present in the Azure Public Cloud, and we extend that to Azure Stack. So if I'm a customer who wants to deploy Veeam, per se, in either infrastructure, I go to this catalog of apps. I mean it literally is a catalog of apps. Search for Veeam, there it is, and I can single click deploy in either Azure Stack or Azure Public. >> Microsoft is unique in the sense of its hybrid strategy, in terms of what you have in the cloud you have on-prem. You're trying to, wherever possible, make it identical. >> Karl: Absolutely. >> Microsoft and Oracle are really the only two companies that have a stated strategy to do that. Let's talk about Microsoft in terms of where you're at, in terms of getting that substantially similar capability in on-prem and in the public Cloud. >> Yeah, absolutely. That's a great, great topic to discuss. Azure Stack, I always like to tell folks, full disclosure, and we don't try to hide this at all, that's not who we are, but it will always lag a little bit behind Azure Public. When you think about the controls in customers' data centers for rolling out code updates and new versions of software, new capabilities, there's always an adoption curve. You have folks who are a little more hesitant to release quickly and adopt quickly. So Azure Stack offers them the capability to defer some of those updates for a period of time. So there will be a lag. We have to qualify for multiple vendor platforms, we've chosen to go to market in a hyperconverged model with our partners, like Dell EMC, HP, Lenovo and Cisco. 
Whereas Azure Public, that's a completely controlled infrastructure, and we're able to deploy very quickly. And we do; we're constantly iterating and releasing new features. So I think that's the biggest difference between the two. >> So Karl, you gave a session here at the show called Migrating to Azure. That whole move is pretty challenging. >> Karl: Oh yes. Am I lift and shifting? Am I transforming? Am I building new? What are you hearing from customers? And give our audience a taste of some of the key takeaways that you were talking about. >> Yeah, absolutely. So that's one of the biggest concerns that we've had over the last couple of years. I said earlier, we want the happiest customers in Public Cloud, and no Cloud regret or remorse. So what we talked about in our session was a tool that we released recently called Azure Migrate, that is all about assessing and setting expectations for customers around what can and cannot migrate, how much it will cost to run that infrastructure in Public Cloud, either as is or optimized, and then suggestions for optimizing their infrastructure to get the best bang for their buck. So there are great opportunities to save cost when platforms are adopted, like Azure SQL, platform as a service offerings. When I've got that time-sharing concept, when I take away maintenance activities around operating systems and software releases, there are significant cost savings versus a lift and shift, which can quite honestly be more expensive than what that customer is doing on-premises today. So Azure Migrate is meant to help customers avoid that, no regrets. >> I wonder what you're hearing from customers cuz there's some concern. Maybe I should just do infrastructure as a service. Cuz if I get into those platform as a service, am I locked in? Microsoft is used for lots of business-critical applications. I see Microsoft strongly in the Kubernetes ecosystem, getting into the functions as a service, which those things are trying to give me a little bit more portability and flexibility. Maybe discuss some of that. >> Yeah, that's great, and I'm glad you brought that back around. So there is always that concern about the Cloud Hotel California, right? And that said, I like to half jokingly refer to it as you get in, you can never leave. And there is that jeopardy with any provider. That if you're using some proprietary platform that you can be locked-in, and really we try to promote the use of containers extensively with those customers who have that concern. And even with our hosted analytics and hosted database infrastructures, we make sure to provide those portable cross-Cloud platforms, like Postgres, MySQL. Our analytics is all Ubuntu based. Really we don't want that lock-in to be there, we don't want that to be a concern. So continuing support for open platforms and ecosystems is really something we're committed to. >> The lock-in, openness choice, it's a spectrum. I've been in this business for a long time, and Unix used to be the open system. And then today, you can't get more locked-in than a Unix platform. So I feel as though, and I wonder if you guys can comment, the Cloud has transparent pricing and transparent billing. And so lock-in is if I have a customer and they're trying to move and they're up for a contract renewal or something or a maintenance, I'm going to jack their maintenance. But you can't just do that across the board, if you have transparent billing. So there's the pricing aspect. 
There's certainly a lock-in with the processes and procedures that you choose, but no matter what you choose, whether it's open source, a Cloud provider like Amazon, an on-prem provider like the many that we know out there, you're going to be locked-in to your processes and procedures. So it's a matter of degree. I personally see it, because of the Cloud, as a lot less onerous than it used to be. Do you guys agree with that? >> I mean Dave, it's that the application is the long pole in the tent from what I see. What I've been using and if I go to something new, if I go build this new architecture, cloud native or whatever, that's a pretty big bet. So depending on how deep and tied that is to a specific platform, even if I'm choosing a database, migrating databases aren't easy. >> But that's the issue. It's the bet that you're making. It's more so than the lock-in because lock-in, you're going to be locked in to whatever bet you make, so you've got to make the right bet. To me, it's a way for consultants to act like an advocate for the customer. What's more important in my view, is negotiation strategies, how you place that bet, how you architect your Cloud strategy. >> And I mean Dave, just quickly, I remember four years ago you and I interviewed Brad Anderson with Microsoft, and we were poking him on licensing. I don't hear that discussion about Microsoft as much, of course we always want it cheaper, and everything like that, but Microsoft's done a great job. In the Cloud communities, they're known as participating in those communities, and giving customers- >> Well that's our take, what's your take? >> No, I love it. And I think what I'm seeing is customers are hedging their bets. So you do, and it is a bet. You do have to not go all in with somebody, with any Cloud provider, but you got to put your chips with some proprietary platforms. And what I'm seeing is that multi-Cloud that we're all talking about is really becoming the reality. I can think of very few customers that I've worked with who have had Azure as their single public Cloud. And really that's how they avoid that z-series down the road, right? Where you're locked-in, you got one provider that platform. They're saying, look, I'm going to deploy on the best service in the best public Cloud for that application instance, as Stu mentioned. That's happening. >> Horses for courses, as they say in England. >> Karl: There you go. >> So we're here at VeeamON. Your relationship with Veeam, they've obviously partnered up with you guys in a big way. Your thoughts on the partnership? >> Yeah, love working with these guys. I'm very fortunate in that I get to work with some of the best that we have, and everything from the relationship that we have on a marketing level, an engineering level, a field level, they're really ingrained in our ecosystem at all levels. Just a very, very easy partner to work with, very responsive to their customer needs. And that's what we look for. We want to work with the partners that customers love. So I'm just thrilled to be part of this relationship. >> Karl, thanks so much for coming on theCUBE. I think you embody the new open Microsoft, and you guys are making great progress. Congratulations and thanks so much for coming on. >> Thank you Dave, it was a pleasure. Stu, thank you very much. >> Alright, keep it right there everybody. We'll be back with our next guest. VeeamON live from Chicago, you're watching theCUBE. (upbeat music)
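The Azure Marketplace flow Karl describes above, searching for Veeam and single-click deploying it into Azure Public or Azure Stack, can also be scripted. What follows is a rough, hypothetical sketch assuming the Azure CLI is installed and logged in; the resource group, VM name, admin user, and image URN are placeholders rather than values from the interview, and some marketplace offers require accepting license terms first.

# Hedged sketch: locate a Veeam marketplace image and deploy it with the Azure CLI
# instead of the portal's single-click flow. All names below are placeholders.
az vm image list --publisher veeam --all --output table

# Some marketplace offers require accepting terms before first use.
az vm image terms accept --urn <publisher>:<offer>:<sku>:latest

az group create --name veeam-demo-rg --location eastus
az vm create \
  --resource-group veeam-demo-rg \
  --name veeam-backup-01 \
  --image <publisher>:<offer>:<sku>:latest \
  --admin-username azureuser \
  --generate-ssh-keys

On Azure Stack the same CLI can be pointed at the local management endpoint, which is what gives the consistent experience discussed in the interview.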
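On the lock-in point, the portable cross-cloud platforms Karl mentions, Postgres and MySQL, are easy to demonstrate with containers. This is a minimal, hypothetical example; the password, image tag, and port mapping are placeholders. The same upstream Postgres image runs unchanged on a laptop, an Azure VM, or any other provider, which is the portability being argued for.

# Illustrative only: the identical container runs anywhere Docker runs,
# which keeps the data tier from being tied to a single cloud.
docker run -d \
  --name pg-portable \
  -e POSTGRES_PASSWORD=example-password \
  -p 5432:5432 \
  postgres:10

# Quick check that the database answers, regardless of where the host lives.
docker exec -it pg-portable psql -U postgres -c "SELECT version();"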

Published Date : May 16 2018

Stephan Fabel, Canonical | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's the CUBE, covering KubeCon and Cloud Native Con Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. (busy music) >> Welcome back, everyone, live here in Copenhagen, Denmark, it's the CUBE's coverage of KubeCon 2018. I'm John Furrier, the host of the CUBE, along with Lauren Cooney, who's the founder of Spark Labs. She's been co-host with me two days, two days of wall to wall coverage. Stephan Fabel, Product Strategy Lead at Canonical, is here inside the CUBE, and from San Francisco. Again, welcome to the CUBE, thanks for coming. >> Thank you, thanks so much for having me. >> I've got to, you guys have been around the block, you know about open source software platforms, you get and do it for a while. Interesting time here at KubeCon. Kubernetes, Istio, Kubeflow, Cloud Native, they've still got the brand name CloudNativeCon and KubeCon. Modern application architecture's now in play. I see this notion of an interoperability model coming in that's certainly going to be a de facto standard. People are already kind of declaring it a de facto standard. It really shows a path to multi-cloud, but also frees up developers from a lot of the heavy lifting. Lou Tucker from Cisco was saying they don't want to do networking. Let's just have that be infrastructure as code, that's DevOps, that's what we want. >> Stephan: That is exactly right. >> What are you guys doing here? What's the story with Canonical and how does that fit into the megatrends? >> Yeah, I mean, there's a couple of things that we at Canonical always believe to be one of the core sort of tenets in our distribution of Kubernetes. As you know, we've been very active in this space fairly early on, and have been an active distributor of Kubernetes and a certified distributor of our version of Kubernetes. Pure upstream, remain conformant to the main public clouds, such as to enable that workload migration and mobility from on prem up to any of the other providers to accommodate all kinds of use cases, right. >> You guys made a bet on Kubernetes, obviously, good call. >> Stephan: Right. >> Right. What's the progress now, what's next? Because that's, the bets are paying off. I saw Red Hat had a great bet with what they did with Kubernetes, changed what OpenShift became. You guys had a bet in Kubernetes, what has that become for Canonical? >> Yeah, so based on the pure upstream distribution that we have, we really feel that enabling the ecosystem in a standards compliant way so that all of the landscape projects that are part of the CNCF can be deployed on top of Kubernetes, on top of our distribution of Kubernetes in just the same way that they would be developed or deployed in any of the large containers of service offerings that are out there is one of the big benefits that our customers would gain from using our Kubernetes. >> What's your differentiator for the distribution of Kubernetes that you have versus others? >> Well, there's two. The first one, I think, is the notion that deploying Kubernetes on premise is something that you want to do in a repeatable fashion, operationally efficient with the right capex opex mix, so we believe that there is a place for Kubernetes as a product, just deploy it, it works on any substrate that you've got available to you. But then also, for mainstream America, right, you may want to have a managed service on top of Kubernetes as well. 
We offer that, too, just a way to get started and kick the tires and see where that takes you as far as the developers are concerned. Now, on prem, you will find that there are a couple of challenges when deploying Kubernetes that are really the key differentiator. The first one, I would say, is things like integration into the storage that's local, integration into the network that's local, and integration into all of those services that should be available in the Cloud Native microservices architecture platform, such as load balancers, right, elasticity, object store, etc. The second, and most importantly, because it is a key enabler for those next generation workloads, is the GPGPU enablement work that we're doing with partners such as NVIDIA. When you deploy the Canonical distribution of Kubernetes, you actually get the NVIDIA acceleration out of the box the way that NVIDIA envisions this on top of Kubernetes and the way that it is, by the way, being deployed on the public clouds. >> You bring a lot of your goodness to the table inside the Kubernetes distribution. OK, what are some customers doing? Give some use cases of some customers' Kubernetes, what are some of the things that they're doing with it, what's the early indication? What's the feedback? >> Sure. We have a ton of customers that are using our version of Kubernetes to do the machine learning applications and the AI of the next gen workloads in use cases such as smart cities or connected cars, where, when you look at self-driving cars, right, as the next gen that's coming out of the valley, they put in 300,000, 150,000, 400,000 miles a year on the road these days just optimizing the models that are being used to actually take over one day. Enabling those kinds of workloads in a distributed fashion requires DevOps expertise. Now, the people who are actually writing those applications are not DevOps people, they're data scientists, right. They shouldn't have to learn how to deploy Kubernetes, how to create a container and all those things. They should just be able to deploy the application on top of an attractive substrate that actually supports that distributed application use case, and so that is where we come in. >> This is interesting, because what you're basically doing is making an application developer a DevOps developer overnight. >> Stephan: That's exactly right. >> That's really important. I was just talking with the co-chair of CNCF. We're talking about, Liz Rice and I were talking about why everyone's so, like, excited here. One of the things I said was, because people who are doing DevOps were hardcore, and they had to build everything from scratch, and all the scar tissue. But the benefits, once you got through the knothole there, the benefits were amazing, right. You go, okay, you don't want to do that again, but now there's a way to make it easier. There's kind of a shared experience even though no one's met each other, so there's kind of a joint community. >> I agree. I think it is increasingly about enabling developers who are experts in their field to actually leverage Kubernetes and the advantages that it brings in a more intuitive fashion. Just take it up a notch. >> How did the Kubernetes vibe integrate in with Canonical? I'm sure, given the background of the company, it probably was a nice fit, people embraced it. You guys were early. >> Stephan: Yeah. >> What's the internal scuttlebutt on the vibe with Kubernetes? >> Oh, we love Kubernetes as a technology. 
Ubuntu was always close to the developer and close to where the innovation happens. It was a natural fit to actually support all that workflow now in this new world of Kubernetes. We embraced OpenStack for the same reason, and in a similar fashion, Kubernetes has really driven the point home, containerist applications with a powerful orchestration framework such as Kubernetes are the next step for all the developers that are out there, and so as a consequence, this was a perfect match. >> It's also a no-brainer if you think about it, software methodology moving to the next level. This is total step up function for productivity for developers. That's really a key thing. What's your observation of that trend? Because at the end of the day, there's now Kubernetes, which does a lot of great things, but one of the hottest areas is Istio service meshes, and then you've got Kubeflow orchestration, a lot of other things that are happening around Kubernetes. What are you guys seeing that's important for Canonical's customers, what you're doing product wise. Where's the order of operations, what's next? What are you guys focused on, what's the priorities? >> Well, our biggest priority right now is enabling things like Kubeflow, which, by the way, are also using Istio internally, right, to actually enable those data scientists who actually deploy their I workload. We work very closely with Google to try and enable this in an on prem fashion out of the box which is something you can actually do today. >> John: You guys are doing this now inside this. >> We're doing this right now. This is also where we're going to double and triple down. >> This is actually your best practice, too, if you think about it, you want to take it in house, and then get a feel for it. What's the internal vibe on that, positive? >> Oh, absolutely. I mean, we always saw infrastructure as code and actually as intelligent infrastructure as something that we wanted to build our conceptual framework around, so very concretely, right. We've always had this notion of composable building blocks adding up to, sum of one being greater than two, right, like those types of scenarios. Actually using things like Kubernetes as an effective building block to then build out web applications that use things like machine learning algorithms underneath, that's a perfect use case for a next gen workload, and also something that we might use ourselves internally. >> Well, hey, that whole building block thing, it's happening. >> Stephan: Yeah. >> News flash. >> Stephan: Exactly, right? >> I mean, it's almost a pinch me moment for the people in the industry like, oh my god, it's going to go to a whole other level. How do you guys envision that next level going? Beyond the building blocks, is it, I mean, what's the vision that you guys have? Obviously, infrastructure as code programmability, but now, you're talking about infrastructure as code was great, but now you've got microservices growth coming on top of it, it's a services market now. >> It is, it is. I think that the biggest challenge will be the distribution of the workloads, right. You have edge compute coming along in the telco space, you have, like I said, smart cities, right, the sensors will be everywhere, and they will feed data back, and how do you manage that at scale, right? How do you manage that across various different hardware perspectives? 
We have hardware platforms such as ARM 64 picking up, right, and actually playing a very significant role at the edge, and increasingly, even in the core. We've always believed that providing that software and the distribution of IS such as Kubernetes and others on top of those additional architectures would make a huge difference, and that is clearly paying off. What we see is, the increased need of managing hybrid workloads across multi-cloud scenarios that could be composed of different architectures, not just x86, the future is not homogeneous at all. It'll be all over the place. All those use cases and all those particular situation require that building block principle, like all the way from the OS up to the application. >> John: That's a great use case for containers. Kubernetes, Istio, Kubeflow. >> Absolutely. >> All stacking in line beautifully from an evolution standpoint. I've got to ask you a personal question. I mean, I was at Canonical, great company, I want to thank Canonical for being a sponsor of the CUBE over the years. We've had Mark Shuttleworth on the CUBE had an OpenStack going way back when. You guys are a great participant in the community as a company and the people there been phenomenal. You're new. >> I'm new. >> What attracted you to Canonical? What was the motivating force? What drew you in? You're now running Products, a big job. You've got a lot in front of you. Obviously, it's a great market, so you're a great company. Just share, just color and why Canonical, what attracted you there? >> I've always been a user of Ubuntu, I've been a user since the first hour. I've used Ubuntu in my research. I did robotics based on Ubuntu way before it was cool. I built all kinds of things on top of Ubuntu throughout my entire career. Working for Canonical, which is a company that always exhibited great vision into the future and great predictions into trends that would prove to become true was just, for me, something that was very attractive. >> Their leadership has a good eye on the prize. They had good 20 mile stare, as we say, they can see the roadmap ahead and then make either course corrections or tweaks. >> Yeah. >> Great, awesome. Well, I mean, what's new there? What's your, take a minute to explain what's new at Canonical, role here at KubeCon, what are some of the conversations you're having? >> Yeah, so I mean, for us at KubeCon, it's always been an important part of our outreach to the community, great opportunity for us to have great conversations with our partners in the field. I think it is really about enabling the ecosystem in a more straightforward way. There's no better place to have those types of conversations than here, where everybody comes together and really establishes those relationships. For us, it is about, again, enabling the developer and really staying close to that innovation and supporting that in an optimal way. Yes, I mean, that, to us, is the role that we play. You've got a lot of end users here who are building stuff. >> Oh, absolutely, yeah. They, I mean, I had a talk today about Kubeflow with Google, and after the talk, lots of folks came up to me and said, hey, how can I use this at home, right? >> Sometimes with, whether it's timing, technology, all the above, Kubernetes really hit it strong with the timing, industry was ready for it. Containers had a nice gestation period. People know about containers. >> Stephan: Absolutely. >> Engineers know containers, know about those kinds of concepts. 
Now we're at a whole other operating environment. >> Stephan: Absolutely. >> You guys are at the forefront. Thanks for coming on the CUBE. >> Oh, thank you, I appreciate it. >> Stephan sharing the perspective, Stephan Fabel. Running Product and Strategy for Canonical, building stuff, this is what's going on in Kubernetes in KubeCon, end users are actually building and orchestrating workloads. Multi-cloud is what people are talking about and the tech to make it happen is here. I'm John Furrier with the CUBE. Stay with us for more live coverage here at KubeCon 2018, part of the CNCF CUBE coverage. We'll be right back after this short break. (busy music)
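For readers who want to try the distribution Stephan describes, Canonical's Distribution of Kubernetes (CDK) was typically stood up with Juju at the time of this interview. The commands below are a hedged sketch rather than an official runbook: they assume Juju is installed with credentials for a target cloud already added, and the controller and model names are placeholders.

# Hedged sketch: deploying CDK with Juju on a public cloud account.
juju bootstrap aws cdk-controller      # stand up a controller on the target cloud
juju add-model k8s-demo                # placeholder model name
juju deploy canonical-kubernetes       # the CDK bundle from the charm store
juju status                            # repeat until all units report active

# Pull the cluster credentials from the master unit and verify access.
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
kubectl get nodes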
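The GPGPU enablement mentioned in the interview ultimately surfaces to application teams as an ordinary Kubernetes resource request. The snippet below is a generic Kubernetes example rather than anything specific to CDK: it assumes the NVIDIA device plugin is running on the cluster, and the pod name and image tag are placeholders.

# Hedged sketch: schedule a pod onto a GPU node by requesting the
# nvidia.com/gpu resource exposed by the NVIDIA device plugin.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base        # placeholder image tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1              # ask the scheduler for one GPU
EOF

kubectl logs gpu-smoke-test            # once complete, prints the nvidia-smi table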

Published Date : May 3 2018

Dustin Kirkland, Canonical | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by: Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Hey, welcome back everyone. And we're live here in Austin, Texas. This is theCUBE's exclusive coverage of the Cloud Native conference and KubeCon for Kubernetes Conference. This is for the Linux Foundation. This is theCUBE. I'm John Furrier, the co-founder of Silicon ANGLE Media. My co, Stu Miniman. Our next guest is Dustin Kirkland Vice-President of product. The Ubuntu, Canonical, welcome to theCUBE. >> Thank you, John. >> So you're the product guy. You get the keys to the kingdom, as they would say in the product circles. Man, what a best time to be-- >> Dustin: They always say that. I don't think I've heard that one. >> Well, the product guys are, well all the action's happening on the product side. >> Dustin: We're right in the middle of it. >> Cause you got to have a road map. You got to have a 20 mile steer on the next horizon while you go up into the pasture and deliver value, but you always got to be watching for it always making decision on what to do, when to ship product, not you got the Cloud things are happening at a very accelerated rate. And then you got to bring it out to the customers. >> That's right. >> You're livin' on both sides of the world You got to look inside, you got to look outside. >> All three. There's the marketing angle too. which is what we're doing here right now. So there's engineering sales and this is the marketing. >> Alright so where are we with this? Because now you guys have always been on the front lines of open source. Great track record. Everyone knows the history there. What are the new things? What's the big aha moment that this event, largest they've had ever. They're not even three years old. Why is this happening? >> I love seeing these events in my hometown Austin, Texas. So I hope we keep coming back. The aha moment is how application development is fundamentally changing. Cloud Native is the title of the Cloud Native Computing Foundation and CloudNativeConference here. What does Cloud Native mean? It's a different form of writing applications. Just before we were talking about systems programing right? That's not exactly Cloud Native. Cloud Native programming is writing to API's that are Cloud exposed API's, integrating with software as a service. Creating applications that have no intelligence, whatsoever, about what's underneath them, Right? But taking advantage of that and all the ways that you would want and expect in a modern application. Fault tolerance, automatic updates, hyper security. Just security, security, security. That is the aha moment. The way applications are being developed is fundamentally changing. >> Interesting perspective we had on earlier. Lew Tucker from Cisco, (mumbles) in the (mumbles) History Museum, CTO at Cisco, and we have Kelsey Hightower co-chair for this conference and also very active in the community. Yet, in the perspective, and I'll over simplify and generalize it, but basically was: Hey, that's been going on for 30 years, it's just different now. Tell us the old way and new way. Because the old way, you kind of describing it you're going to build your own stuff, full stack, building all parts of the stack and do a lot of stuff that you didn't want to do. And now you have more, especially time on your hands if DevOps and infrastructure as code starts to happen. 
But doesn't mean that networking goes away, doesn't mean storage goes away, that some new lines are forming. Describe that dynamic of what's new and the new way, what changes from the old way? >> Virtualization has brought about a different way of thinking about resources. Be those compute resources, chopping CPU's up into virtual CPU's, that's KVM ware. You mentioned network and storage. Now we virtualized both of those into software defined storage and software defined networking, right? We have things like OpenStack that brings that all together from an infrastructure perspective. And we now have Kubernetes that brings that to bear from an application perspective. Kubernetes helps you think about applications in a different way. I said that paradigm has changed. It's Kubernetes that helps implement that paradigm. So that developers can write an application to a container orchestrator like Kubernetes and take advantage of many of the advances we've made below that layer in the operating system and in the Cloud itself. So from that perspective the game has changed and the way you write your application is not the same as the monolithic app we might have written on an IBM or a traditional system. >> Dustin, you say monolithic app versus oh my gosh the multi layered cake that we have today. We were talking about the keynote this morning where CNCF went from four projects to 14 projects, you got Kubernetes, you got things like Istio on top. Help us tease that out a little bit. What are the ones that, where's Canonical engaged? What are you hearing from customers? What are they excited about? What are they still looking for? >> In a somewhat self-serving way, I'll use this opportunity to explain exactly what we do in helping build that layered cake. It starts with the OS. We provide a great operating system, Ubuntu, that every developer would certainly know and understand and appreciate. That's the kernel, that's the systemd, that's the hypervisor, that's all the storage and drivers that makes an operating system work well on hardware. Lots of hardware, IBM, Dell, HP, Intel, all the rest. As well as in virtual machines, the public Clouds, Microsoft, Amazon, Google, VMware and others. So, we take care of that operating system perspective. Within the CNCF and within the Kubernetes ecosystem, it really starts with the Kubernetes distribution. So we provide a Kubernetes distribution, we call it Canonical's Distribution of Kubernetes, CDK, which is open source Kubernetes with security patches applied. That's it. No special sauce, no extra proprietary extensions. It is open source Kubernetes. The reference platform for open source Kubernetes, 100% conformant. Now, once you have Kubernetes as you say, "What are you hearing from customers?" We hear a lot of customers who want a Kubernetes. Once they have a Kubernetes, the next question is: "Now what do I do with it?" If they have applications that their developers have been writing to Google's Kubernetes Engine GKE, or Amazon's Kubernetes Engine, the new one announced last week at re:Invent, AKS. Or Microsoft's Kubernetes Engine, Microsoft-- >> Microsoft's AKS, Amazon's EKS. A lot of TLAs out there, always. >> Thank you for the TLA dissection. If you've written the applications already having your own Kubernetes is great, because then your applications simply port and run on that. And we help customers get there. However, if you haven't written your first application, that's where actually, most of the industry is today. 
They want a Kubernetes, but they're not sure why. So, to that end, we're helping bring some of the interesting workloads that exist, open source workloads, and putting those on top of Canonical Kubernetes. Yesterday, we press released a new product from Canonical, launched in conjunction with our partners at Rancher Labs, which is the Cloud Native platform. The Cloud Native platform is Ubuntu plus Kubernetes plus Rancher. That combination, we've heard from customers and from users of Ubuntu inside and out. Everyone's interested in a developer work flow that includes open-source Ubuntu, open-source Kubernetes and open-source Rancher, which really accelerates the velocity of development. And that end solution provides exactly that and it helps populate that Kubernetes with really interesting workloads. >> Dustin, so we know Sheng, Shannon and the team, they know a thing or two about building stacks with open source. We've talked with you many times, OpenStack. Give us a little bit of compare and contrast, what we've been doing with OpenStack with Canonical, very heavily involved, doing great there versus the Cloud Native stacking. >> If you know Shannon and Sheng, I think you can understand and appreciate why Mark, myself and the rest of the Canonical team are really excited about this partnership. We really see eye-to-eye on open source principles first. Deliver great open source experiences first. And then taking that to market with a product that revolves around support. Ultimately, developer adoption up front is what's important, and some of those developer applications will make their way into production in a mission critical sense, which opens up support opportunities for both of us. And we certainly see eye-to-eye from that perspective. What we bring to bear is the Ubuntu ecosystem of developers. The Ubuntu OpenStack infrastructure as a service is where we've seen many of the world's largest organizations deploying their OpenStacks. Doing so on Ubuntu and with Ubuntu OpenStacks. With the launch of Kubernetes and Canonical Kubernetes, many of those same organizations are running their own Kubernetes alongside OpenStack. Or, in some cases, on top of OpenStack. In a very few cases, instead of OpenStack, in very special cases, often at the Edge or in certain tiny Cloud or micro Cloud scenarios. In all of these we see Rancher as a really, really good partner in helping to accelerate that developer work flow. Enabling developers to write code, commit code to a GitHub repository, with full GitHub integration. Authenticate against an active directory with full RBAC controls. Everything that you would need in an enterprise to bring that application to bear from concept, to development, to test into production, and then the life cycle, once it gains its own life in production. >> What about the impact of customers? So, I'm an IT guy or I'm an architect and man, all this new stuff's comin' at me. I love my open source, I'm happy with space. I don't want to touch it, don't want to break it, but I want to innovate. This whole world can be a little bit noisy and new to them. How do you have that conversation with that potential customer or customer where you say, Look, we can get there. Use your app team here's what you want to shape up to be, here's service meshes and plugable, Whoa plugable (mumbles)! So, again, how do you simplify that when you have conversations? What's the narrative? What's the conversation like? 
>> Usually our introduction into the organization of a Fortune 500 company is by the developers inside of that company who already know Ubuntu. Who already have some experience with Kubernetes or have some experience with Rancher or any of those other-- >> So it's a bottoms up? >> Yeah, it's bottoms up. Absolutely, absolutely. The developer network around Ubuntu is far bigger than the organization that is Canonical. So that helps us with the intro. Once we're in there, and the developers write those first few apps, we do get the introductions to their IT director who then wants that comfy blanket. Customer support, maybe 24 by seven-- >> What's the experience like? Is it like going to the airport, go through TSA, and you got to take your shoes off, take your belt off. What kind of inspection, what is kind of is the culture because they want to move fast, but they got to be sure. There's always been the challenge when you have the internal advocate saying, "Look, if we want to go this way "this is going to be more the reality for companies." Developers are now major influencers. Not just some, here's the product we made a decision and they ship it to 'em, it's shifted. >> If there's one thing that I've learned in this sort of product management assignment, I'm an engineer by trade, but as a product manager now for almost five years, is that you really have to look at the different verticals and some verticals move at vastly different paces than other verticals. When we are in the telco space, we're in RFIs, requests for a quote or a request for information that may last months, nine months. And then go through entering into a procurement process that may last another nine months. And we're talking about 18 months in an industry here that is spinning up, we're talking about how fast this goes, which is vastly different than the work we do in Silicon Valley, right? With some of the largest dot-coms in the world that are built on Ubuntu, maybe on AWS or elsewhere. Their adoption curve is significantly different and the procurement angle is really different. What they're looking to buy often on the US West Coast is not so much support, but they're looking to guide your roadmap. We offer for customers of that size and scale a different set of products, something we call feature sponsorships, where those customers are less interested in 24 by seven telephone support and far more interested in sponsoring certain features into Ubuntu itself and helping drive the Ubuntu roadmap. We offer both of those as products and different verticals buy in different ways. We talked to media and entertainment, and the conversation's completely different. Oil and gas, conversation's completely different. >> So what are you doing here? What's the big effort at CloudNativeCon? >> So we've got a great booth and we're talking about Ubuntu as a pretty universal platform for almost anything you're doing in the Cloud. Whether that's on-prem infrastructure as a service, OpenStack. People can coo coo OpenStack and pit OpenStack versus Kubernetes against one another. We cannot see it more differently-- >> Well no I think it's more that it's got clarity on where the community's lines are because apps guys are moving off OpenStack, that's natural. It's really found the home, OpenStack very relevant, huge production flow, I talk to Jonathan Bryce about this all the time. There's no coo cooing OpenStack. It's not like it's hurting. 
Just to clarify OpenStack is not going anywhere its just that there's been some comments about OpenStack refugees going to (mumbles), but they're going there anyway! Do you agree? >> Yeah I agree, and that choice is there on Ubuntu. So infrastructure is a service, OpenStack's a fantastic platform, platforms as a service or Cloud Native through Cloud Native development Kubernetes is an excellent platform. We see those running side by side. Two racks a systems or a single rack. Half of those machines are OpenStack, Half of those are Kubernetes and the same IT department manages both. We see IT departments that are all in OpenStack. Their entire data center is OpenStack. And we see Kubernetes as one workload inside of that Openstack. >> How do you see Kubernetes impact on containers? A lot of people are coo cooing containers. But they're not going anywhere either. >> It's fundamental. >> The ecosystem's changing, certainly the roles of each part (mumbles) is exploding. How do you talk about that? What's your opinion on how containers are evolving? >> Containers are evolving, but they've been around for a very long time as well. Kubernetes has helped make containers consumable. And doctored to an extent, before that the work we've done around Linux containers LXE LEXT as well. All of those technologies are fundamental to it and it take tight integration with the OS. >> Dustin, so I'm curious. One of the big challenges I have the U face is the proliferation of deployments for customers. It's not just data center or even Cloud. Edge is now a very big piece of it. How do you think that containers helps enable the little bit of that Cloud Native goes there, but what kind of stresses does that put on your product organization? >> Containers are adding fuel to the fire on both the Edge and the back end Cloud. What's exciting to me about the Edge is that every Edge device, every connected device is connected to something. What's it connected to, a Cloud somewhere. And that can be an OpenStack Cloud or a Kubernetes Cloud, that can be a public Cloud, that could be a private implementation of that Cloud. But every connected device, whether its a car or a plane or a train or a printer or a drone it's connected to something, it's connected to a bunch of services. We see containers being deployed on Ubuntu on those Edge devices, as the packaging format, as the application format, as the multi-tendency layer that keeps one application from DOSing or attacking or being protected from another application on that Edge device. We also see containers running the micro services in the Cloud on Ubuntu there as well. The Edge to me, is extremely interesting in how it ties back to the Cloud and to be transparent here, Canonical strategy and Canonical's play is actually quiet strong here with Ubuntu providing quite a bit of consistency across those two layers. So developers working on those applications on those devices, are often sitting right next to the developers working on those applications in the Cloud and both of them are seeing Ubuntu helping them go faster. >> Bottom line, where do you see the industry going and how do you guys fit into the next three years, what's your prediction? >> I'm going to go right back to what I was saying right there. That the connection between the Edge and the Cloud is our angle right there, and there is nothing that's stopping that right now. >> We were just talking with Joe Beda and our view is if it's a shoot and computing world, everything's an Edge. >> Yeah, that's right. 
That's exactly right. >> (mumbles) is an Edge. A light in a house is an Edge with a processor in it. >> So I think the data centers are getting smarter. You wanted a prediction for next year: The data center is getting smarter. We're seeing autonomous data centers. We see data centers using metals as a service mask to automatically provision those systems and manage those systems in a way that hardware look like a Cloud. >> AI and IOT, certainly two topics that are really hot trends that are very relevant as changing storage and networking those industries have to transform. Amazon's tele (mumbles), everything like LAN and serverless, you're starting to see the infrastructure as code take shape. >> And that's what sits on top of Kubernetes. That's what's driving Kubernetes adoption are those AI machine learning artificial intelligence workloads. A lot of media and transcoding workloads are taking advantage of Kubernetes everyday. >> Bottom line, that's software. Good software, smart software. Dustin, Thanks so much for coming theCube. We really appreciate it. Congratulations. Continued developer success. Good to have a great ecosystem. You guys have been successful for a very long time. As the world continues to be democratized with software as it gets smarter more pervasive and Cloud computing, grid computing, Unigrid. Whatever it's called it is all done by software and the Cloud. Thanks for coming on. It's theCube live coverage from Austin, Texas, here at KubeCon and CloudNativeCon 2017. I'm John Furrier, Stu Miniman, We'll be back with more after this short break. (lively music)

Published Date : Dec 7 2017

Dustin Kirkland, Canonical | AWS re:Invent


 

>> Narrator: Live from Las Vegas, it's theCUBE covering AWS re:Invent 2017 presented by AWS, Intel, and our ecosystem of partners. >> We are live back here in Las Vegas at the Sands Expo as we continue our coverage here on theCUBE of re:Invent, AWS here on the fourth day of what has been a very successful show. I still hear a lot of buzz, a lot of activity on the show floor. It's certainly indicative of what happened here in terms of bringing this community together in a very positive way. I'm with Justin Warren, I'm John Walls. We go from Justin to Dustin, Dustin Kirkland, who is the vice president of product development for Ubuntu at Canonical. It's good to see you again. >> Likewise, John. >> I should let the two of you probably chat about Australia. We heard these great diving stories about your adventures, your home, your native country. >> Yep. >> Maybe afterwards we'll get a little photos, travel thing going on. >> Yeah that's right. (laughing) >> All right, 17 years you have been diving. We're going to have to get into that a little bit later on. First off, let's talk about Ubuntu, and maybe the footprint within AWS. Maybe not only what brings you here, but what gets you there? What are you doing there? >> First of all, this is a fantastic conference. Hundreds of these organizations here are involved in Ubuntu, using Ubuntu in AWS and taking advantage of open source, using it for lots of scale out services. To date this year in 2017, over 125 million instances of Ubuntu have launched in AWS alone just this year, and the year is not even over yet. We see anything from media and entertainment. Netflix is here. I spent some time with them. One of Netflix's performance engineers gave a talk yesterday about how Netflix tunes their Ubuntu instances in Amazon, to the tune of 100,000 instances of Ubuntu running in Amazon, to deliver the Netflix experience that I'm sure all of us have. >> John: 100,000? >> Yeah. >> That's amazing. >> It's crazy, yeah. >> I'm a big fan of Ubuntu because I am a mad person. I've been running it as my primary desktop for something like 10 years. >> All right! >> I run it on a laptop. >> Okay. >> I love it, it's great. >> Good. >> People use Ubuntu all the time, but it's like it just became the de facto, it seems like overnight; pretty much, hey, if you want to run Linux in cloud, you just spin up Ubuntu. Just run it up, so what is it about Ubuntu itself, where are you taking the product for people who are using it in cloud? We are hearing a lot about all these different services, and we are hearing about serverless, so how does Ubuntu fit into that AWS world? >> That's a great question. First of all, it's not overnight. We have been doing this since 2004, so we are going on 14 years of building the thing that is Ubuntu. We brought Ubuntu into Amazon in about 2008, which is right when I got involved at Canonical. I was working on Ubuntu before that, but working for Canonical, and that was relatively early in the entire Amazon adventure. You said Ubuntu on the desktop. That's certainly where Ubuntu got its start, but it was Amazon that really busted Ubuntu out into the server space, and so now as you said, if you are starting a new company or a new technology, you almost by default start on Ubuntu. Now where are we taking that? Here we are talking about cloud, but the other half of cloud is the edge. The edge being embedded devices, embedded IOT connected devices. The thing about every IOT device, the I in IOT is Internet.
The connected part of a connected device means it has to be connected to something, and what is it going to be connected to? The cloud. Every smart autonomous driving vehicle, every oil rig out in West Texas, every airplane, every boat, every ship, every place where you are going to find compute in these next couple of years as we move into the 5G revolution, they are connected to services on the backend, the majority of those hosted in Amazon, and the majority of those running Ubuntu. >> When you talk about IOT though, what kind of challenges does that bring into your world? Because you are talking about this, I mean, I can't even think about the order of growth. >> Yeah, billions, literally billions. >> It's just massive connectivity, and in a mobile environment, throw that on top of that, so what does that do for you then in terms of what you are looking at down the road and the kind of capabilities that you have got to build in? >> Security, I mean it starts with security. When we think about devices in our homes accompanying our kids to school, devices that are inside of buses and hospitals, it's all about security, and security is first and foremost. We put a lot of effort into securing Ubuntu. We've created new features as part of where we are taking Ubuntu. Many of the new features we created around Ubuntu are about updates, security updates, being able to make those updates active without rebooting the system, so zero downtime kernel updates is something we call a live patch service, which we deliver in Amazon for Ubuntu Amazon users. Extended security maintenance: security for Ubuntu after end-of-life. Say you've been using Ubuntu for a long time: each Ubuntu release has basically a five year lifecycle, but some enterprises actually need to run Ubuntu for much longer than five years, and for those enterprises, we provide security updates after the end of life, after that five-year end-of-life, and in many cases, that helps them bridge that gap until the next release of Ubuntu. We've also worked with IBM and the US government to provide FIPS certified cryptography for Ubuntu, also available in Amazon, so Department of Defense contractors, many federal contractors, are required to use FIPS bits, and this actually allows them to bring their Ubuntu usage into compliance with what's required for government regulation. >> I'm so glad that you went from IOT to security in, like, a nanosecond. That was going to be my next question. >> Well that is the only answer to that. It's the only right answer to that question in my mind. >> Not enough companies put that much focus on security and you follow it up with specific concrete examples of things that are going to work. The live kernel patching without rebooting things so that you can have the-- I mean, services in the cloud, it has to be always on. You can't take a maintenance window when something is down four hours or a weekend. That's just not acceptable in the cloud world anymore. >> Especially in the retail season. We are just now getting into the retail-- you know, Black Friday was last week, Cyber Monday this week, and the roll up all the way to Christmas. Canonical works quite a bit with the largest retailers in the world, Walmart, Best Buy, other ones like that, and downtime is just not acceptable. At the same time, security is of the utmost importance. When you are taking people's credit cards, you are placing large amounts of money on the line every time these transactions take place.
Security has to be utmost, and being able to do that without impacting the downtime. Downtime is seriously hundreds of thousands of dollars per second on some of these sites during the major holiday rush. >> You just mentioned some of the big names you're working with, so what kind of assurance can you give them that you can sleep with both eyes closed? You don't have to keep that one eye open. Don't worry, if there is an incident of some kind, we are going to take care of it. If you have a problem, rest assured, we are going to be there because, as you pointed out, with the volume involved and the issues of security infiltrations being what they are today, it's hard to rest. >> Right, the return on value, the return on investment of the live patch is easily apparent. Any time someone does the math and realizes, "Let's actually look at how much it costs us to reboot a data center, or how much it costs us to wake up the dev ops team on a Saturday and have them work through a weekend to roll out this update," whereas with the live patch, at least the patch is applied in milliseconds without downtime, and then we get back on Monday and we roll out a comprehensive plan as to what we actually need to do about this going forward. That is for the kernel side of things. The other half of it is the user space side of Ubuntu. On the user space side of Ubuntu, we continue to make Ubuntu smaller, smaller and smaller. That might be one of the reasons you were attracted to Ubuntu on your laptop early on: we really did a good job of making a Linux that was consumable, usable, but also very small and secure. We've actually taken that same approach in the cloud, where we continue to minimize the footprint of Ubuntu. That has a security impact in that if you simply leave software out of the default image, you are not vulnerable to problems in that software, so we've heard that quite a bit around the container space, the work we do in the container space. We will be in Austin next week for KubeCon talking about containers. I will save the container talk for next week, but minimizing Ubuntu is an important part of that security story as well. >> All right, just reducing that attack surface is fabulous. It also means that when you are actually doing this patching, it's less things to patch, there are less opportunities for downtime, there are less things that can go wrong and cause outages in the rest of the place. Simple is better. >> Dustin: That's right, that's exactly right. >> What else are you doing? We've talked about security a lot here. What are some of the other things that you are doing around supporting the services that we are hearing here at AWS? We've heard a lot about things like serverless. We've heard a lot about high performance computing. We've had guests here on theCUBE talking about what they are doing around data analytics and machine learning, so maybe you could give us a little bit of color around that. >> Let's start with that last point, machine learning and data analytics. We work very closely with both Amazon and Nvidia to enable the GPGPUs, the general-purpose graphics processing units that Nvidia produces, which go into servers and which Amazon exposes in some of the largest machine learning type instances.
Those instances powered by Ubuntu are working directly with that GPU out of the box by default, and that's something that we've worked very hard on, and closely with both Amazon and Nvidia, to make sure the Ubuntu experience when using the graphics accelerated instance types just works, and just works out of the box. Those are important for the machine learning and the data analytics because many of those algorithms take advantage of CUDA. CUDA is a set of libraries that allows developers to write applications that scale very, very wide across the CUDA cores, so a given Nvidia GPU may have several thousand Nvidia CUDA cores. Each of those is running little process bits, and then the answers are summed up, basically, at the end. That is at the heart of everything that's happening in the AI space, and I will tie that back to our IOT space in that for those connected devices, where memory, disk, CPU, power are very constrained, part of the important part of that connection is that they are talking to a cloud that has essentially infinite resources, infinite data at its disposal, enough memory to load those entire data sets and crunch those, the fastest CPUs and the fastest GPUs that can crunch that data. So to really take that and make that real, that's exactly what's powering every autonomous vehicle in the world, essentially: a little unit inside of the car. A majority of those autonomous vehicles are running Ubuntu on the auto driving unit. Tesla, Google, Uber, all running Ubuntu inside of that car. Every one of those cars is talking to a cloud. Some clouds are Amazon; others, in Google's case, certainly the Google cloud, but they are talking to Nvidia GPU powered AI instances that are crunching all the data that these Tesla cars and GM and Ford cars are sending to the cloud and constantly making the inference engine better. What gets downloaded to the car is an updated inference engine. That inference engine comes down to the car, and that's how that car decides, is it safe to change lanes right now or not? That answer has to be determined inside of the car, not in the cloud, but you can understand why data training and modeling in the cloud is powerful, far more powerful than what can happen inside of a little CPU in the car. >> John: Let's just keep it on the right side of the road. Can we do that? (laughing) >> Well, you need to talk to this gentleman about that. >> Yeah, I drive on the left side. (laughing) >> Or the left side of the road. >> Don't cross the streams. >> How about the correct side of the road? >> Don't cross the streams. >> Dustin, thanks for the time. >> Thank you, John. >> Always good seeing you. >> Likewise. >> And we'll see you next week as well. Down in your hometown, a little barbecue in Austin. >> That sounds good. >> All right, back with more here at re:Invent. We are live in Las Vegas, back with more on theCUBE in just a bit.
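The scale Kirkland describes, from Netflix's 100,000 instances to the 125 million Ubuntu launches in 2017, starts with something very ordinary: programmatically finding and launching an official Ubuntu image. The snippet below is a hedged sketch rather than anything shown in the interview; it uses boto3 and assumes the commonly documented Canonical AWS owner account (099720109477) and the standard Ubuntu 16.04 AMI naming pattern, with the region and instance type as placeholders you would change.

    import boto3

    # Region and instance type are placeholders; swap in your own.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask EC2 for Canonical-published Ubuntu 16.04 (Xenial) images.
    # 099720109477 is the owner ID Canonical documents for its official AMIs.
    images = ec2.describe_images(
        Owners=["099720109477"],
        Filters=[
            {"Name": "name",
             "Values": ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )["Images"]

    # Pick the most recently published image.
    latest = max(images, key=lambda img: img["CreationDate"])
    print("Latest Ubuntu 16.04 AMI:", latest["ImageId"], latest["Name"])

    # Launch a single instance from it (requires valid AWS credentials
    # and incurs normal EC2 charges).
    reservation = ec2.run_instances(
        ImageId=latest["ImageId"],
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", reservation["Instances"][0]["InstanceId"])

Multiplying that run_instances call across fleets and automation pipelines is, in miniature, how launch counts like the ones quoted above accumulate.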

Published Date : Nov 30 2017

SUMMARY :

Dustin Kirkland of Canonical joins John Walls and Justin Warren on theCUBE at AWS re:Invent 2017 in Las Vegas. He notes that over 125 million Ubuntu instances launched on AWS in 2017, with Netflix alone tuning roughly 100,000 of them, and explains how Canonical secures those workloads through zero-downtime kernel live patching, Extended Security Maintenance beyond a release's five-year lifecycle, FIPS-certified cryptography for government contractors, and ever smaller images. The conversation closes on GPU-powered machine learning and autonomous vehicles, where models are trained in the cloud and updated inference engines are pushed down to cars running Ubuntu.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Walmart | ORGANIZATION | 0.99+
Justin Warren | PERSON | 0.99+
John | PERSON | 0.99+
John Walls | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
Dustin Kirkland | PERSON | 0.99+
Justin | PERSON | 0.99+
Ford | ORGANIZATION | 0.99+
Austin | LOCATION | 0.99+
Dustin | PERSON | 0.99+
Tesla | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Best Buy | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
14 years | QUANTITY | 0.99+
last week | DATE | 0.99+
GM | ORGANIZATION | 0.99+
Canonical | ORGANIZATION | 0.99+
five year | QUANTITY | 0.99+
Australia | LOCATION | 0.99+
17 years | QUANTITY | 0.99+
next week | DATE | 0.99+
West Texas | LOCATION | 0.99+
four hours | QUANTITY | 0.99+
Monday | DATE | 0.99+
two | QUANTITY | 0.99+
Ubuntu | TITLE | 0.99+
billions | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
Christmas | EVENT | 0.99+
five-year | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Linux | TITLE | 0.99+
Intel | ORGANIZATION | 0.99+
2004 | DATE | 0.99+
Department of Defense | ORGANIZATION | 0.99+
fourth day | QUANTITY | 0.98+
Hundreds | QUANTITY | 0.98+
Each | QUANTITY | 0.98+
2017 | DATE | 0.98+

Dustin Kirkland, Canonical | AWS Summit 2017


 

>> Announcer: Live from Manhattan, it's theCube, covering AWS Summit, New York City, 2017. Brought to you by Amazon Web Services. >> Welcome back to the Big Apple as we continue our coverage here on theCube of AWS Summit 2017. We're at the Javits Center. We're in midtown. A lot of hustle and bustle outside and inside there, good buzz on the show floor with about 5,000 strong attending and some 20,000 registrants also for today's show. Along with Stu Miniman, I'm John Walls, and glad to have you here on theCube. And Dustin Kirkland now joins us. He's at Ubuntu, the product and strategy side of things at Canonical, and Dustin, good to see you back on theCube. >> Thank you very much. >> You just threw a big number out at us when we were talking off camera. I'll let you take it from there, but it shows you about the presence, you might say, of Ubuntu and AWS, what that nexus is right now. >> Ubuntu easily leads as the operating system in Amazon. About 70%, seven zero, 70% of all instances running in Amazon right now are running Ubuntu. And that's actually despite the fact that Amazon have their own Amazon Linux and there are other alternatives: Windows, RHEL, SUSE, Debian, Fedora, others. Ubuntu still represents seven out of 10 workloads in Amazon running right now. >> John: Huge number. >> So, Dustin, maybe give us a little insight as to what kind of workloads you're seeing. How much of this was people that, Ubuntu has a great footprint everywhere and therefore it kind of moved there. And how much of it is new and interesting things, IOT and machine learning and everything like that, where you also have support. >> When you're talking about that many instances, that's quite a bit of both, right? So if you look at just EC2 and the two types of workloads, there are the long-running workloads. The workloads that are up for many months, years in some cases. I met a number of customers here this week that are running older versions of Ubuntu like 12.04, which are actually end of life, but for customers of Canonical we continue providing security updates. So we have a product called Extended Security Maintenance. There's over a million instances of Ubuntu 12.04 which are already end of life, but Canonical can continue providing security updates, critical security updates. That's great for the long-running workloads. The other thing that we do for long-running workloads are kernel live patches. So we're able to actually fix vulnerabilities in the Linux kernel without rebooting, using entirely upstream and open source technology to do that. So for those workloads that stay up for months or years, the combination of Extended Security Maintenance, covering it for a very long time, and the kernel live patch, ensuring that you're able to patch those vulnerabilities without rebooting those systems, it's great for hosting providers and some enterprise workloads. Now on the flip side, you also see a lot of workloads that are spikey, right. Workloads that come and go in bursts. Maybe they run at night or in the morning or just whenever an event happens. We see a lot of Ubuntu running there. It's really, a lot of that is focused on data and machine learning, artificial intelligence workloads, that run in that sort of bursty manner. >> Okay, so it was interesting, when I hear you talk about some things that have been running for a bunch of years, and on the other side of the spectrum is serverless and the new machine learning stuff where it tends to be there, what's Canonical doing there?
What kind of exciting, any of the news, Macie, Glue, some of these other ones that came out, how much do those fit into the conversations you're having? >> Sure, they all really fit. When we talk about what we're doing to tune Ubuntu for those machine learning workloads, it really starts with the kernel. So we actually have an AWS-optimized Linux kernel. So we've taken the Ubuntu Linux kernel and we've tuned it, working with the Amazon kernel engineers, to ensure that we've carved out everything in that kernel that's not relevant inside of an Amazon data center and taken it out. And in doing so, we've actually made the kernel 15% smaller, which actually reduces the security footprint and the storage footprint of that kernel. And that means smaller downloads, smaller updates, and we've made it boot 30% faster. We've done that by adding support, turning on, configuring some parameters that enable virtualization or virtio drivers, or specifically the Amazon drivers, to work really well. We've also removed things like floppy disk drives and Bluetooth drivers, which you'll never find in a virtual machine in Amazon. And when you take all of those things in aggregate and you remove them from the kernel, you end up with a much smaller, better, more efficient package. So that's a great starting point. The other piece is we've ensured that the latest and greatest graphics adapters, the GPUs, GPGPUs from Nvidia, that the experience on Ubuntu out of the box just works. It works really well, and well at scale. You'll find almost all machine learning workloads are drastically improved inside of GPGPU instances. And for the dollar, you're able to compute sometimes hundreds or thousands of times more efficiently than a pure CPU type workload. >> You're talking about machine learning, but on the artificial intelligence side of life, a lot of conversation about that at the keynotes this morning. A lot of good services, whatever, again, your activity in that and where that's going, do you think, over the next 12, 16 months? >> Yes, so artificial intelligence is a really nice place where we see a lot of Ubuntu, mainly because of the nature of how AI is infiltrating our lives. It has these two sides. One side is at the edge, and those are really fundamentally connected devices. And for every one of those billions of devices out there, there are necessarily connections to an instance in the cloud somewhere. So if we take just one example, right, an autonomous vehicle. That vehicle is connected to the internet. Sometimes well, when you're at home, parked in the garage or parked at Whole Foods, right? But sometimes it's not. You're in the middle of the desert out in West Texas. That autonomous vehicle needs to have a lot of intelligence local to that vehicle. It gets downloaded opportunistically. And what gets downloaded are the results of that machine learning, the results of that artificial intelligence process. So we heard in the keynotes quite a bit about data modeling, right? Data modeling means putting a whole bunch of data into Amazon, which Amazon has made really easy to do with things like Snowball and so forth. Once the data is there, then the big GPGPU instances crunch that data, and the result is actually a very tight, tightly compressed bit of insight that then gets fed to devices.
So an autonomous vehicle that every single night gets a little bit better by tweaking its algorithms, when to brake, when to change lanes, when to make a left turn safely or a right turn safely, those are constantly being updated by all the data that we're feeding it. Now why I said that's important from an Ubuntu perspective is that we find Ubuntu in both of those locations. So we opened this by saying that Ubuntu is the leading operating system inside of Amazon, representing 70% of those instances. Ubuntu is, across the board, right now in 100% of the autonomous vehicles that are running today. So Uber's autonomous vehicle, the Tesla vehicles, the Google vehicles, a number of others from other manufacturers are all running Ubuntu on the CPU. There's usually three CPUs in a smart car. The CPU that's running the autonomous driving engine is, across the board, running Ubuntu today. The fact that it's the same OS makes it, makes life quite nice for the developers. The developers who are writing that software that's crunching the numbers in the cloud and making the critical real-time decisions in the vehicle. >> You talk about autonomous vehicles, I mean, it's about a car in general, thousands of data points coming in, in continual real time. >> Dustin: Right. >> So it's not just autonomous -- >> Dustin: Right. >> operations, right? So are you working in that way, diagnostics, navigation, all those areas? >> Yes, so what we catch as headlines are a lot of the hobbyist projects, the fun stuff coming out of universities or the startup space. Drones and robots and vacuum cleaners, right? And there's a lot of Ubuntu running there, anything from Raspberry Pis to smart appliances at home. But it's actually, I think, really where those artificially intelligent systems are going to change our lives, is in the industrial space. It's not the drone that some kids are flying around in the park, it's the drone that's surveying crops, that's coming to understand what areas of a field need more fertilizer or less water, right. And that's happening in an artificially intelligent way as smarter and smarter algorithms make their way onto those drones. It's less about running Pandora and Spotify having to choose the right music for you when you're sitting in your car, and a lot more about every taxicab in the city taking data and analytics and understanding what's going on around them. It's a great way to detect traffic patterns, potentially threats of danger or something like that. That's far more industrial and less interesting than the fun stuff, you know, the fireworks that are shot off by a drone. >> Not nearly as sexy, right? It's not as much fun. >> But that's where the business is, you know. >> That's right. >> One of the things people have been looking at is how Amazon's really maturing their discussion of hybrid cloud. Now, you said that data centers, public cloud, edge devices, lots of mobile, we talked about IOT and everything, what do you see from customers, what do you think we're going to see from Amazon going forward to build these hybrid architectures and how does that fit in to autonomous vehicles and the like? >> So in the keynote we saw a couple of organizations who were spotlighted as all-in on Amazon, and that's great. And actually almost all of those logos that are all-in on Amazon are all-in on Amazon on Ubuntu, and that's great. That's a very small number of logos compared to the number of organizations out there that are actually hybrid.
Hybrid is certainly a ramp to being all-in, but for quite a bit of the industry, that's the journey and the destination, too, in fact. There's always going to be some amount of compute that happens local and some amount of compute that happens in the cloud. Ubuntu helps provide an important portability layer. Knowing something runs well on Ubuntu locally, it's going to run well on Ubuntu in Amazon, or vice versa. The fact that it runs well in Amazon, it will also run well on Ubuntu locally. Now we have a support -- >> Yeah, I was just curious, you talked about some of the optimization you made for AWS. >> Dustin: Right. >> Is that now finding its way into other environments or do we have a little bit of a fork? >> We do, it does find its way back into other environments so, you know, the Amazon hypervisors are usually Xen-based, although there are some interesting other things coming from Amazon there. Typically what we find on-prem is usually more KVM or VMware based. Now, most of what goes into that virtual kernel that we build for Amazon actually applies to the virtual kernel that we built for Ubuntu that runs in Xen and VMware and KVM. There are some subtle differences, a few things that we've done very specifically for Amazon, but for the most part it's perfectly compatible all the way back to the virtual machines that you would run on-prem. >> Well, Dustin, always a pleasure, >> Yeah. >> to have you here on theCube. >> Thanks, John. >> You're welcome back any time. >> All right. >> We appreciate the time and wish you the best of luck here the rest of the day, too. >> Great. >> Good deal. >> Thank you. >> Glad to be with us. Dustin Kirkland from Canonical joining us here on theCube. Back with more from AWS Summit 2017 here in New York City right after this.
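One concrete angle on the portability Kirkland describes is that the same Ubuntu image can work out where it is running and adapt, for example picking up the AWS-optimized kernel in EC2 and the generic virtual kernel under KVM or VMware. The sketch below is a hedged illustration, not anything Canonical ships: it probes the classic, unauthenticated EC2 instance metadata endpoint (newer instances that enforce IMDSv2 would need a session token first) and falls back gracefully when running on-prem.

    import platform
    import urllib.error
    import urllib.request

    # Link-local metadata service; it only answers from inside EC2.
    IMDS_URL = "http://169.254.169.254/latest/meta-data/instance-id"

    def ec2_instance_id(timeout=0.5):
        """Return the instance ID when running in EC2, else None."""
        try:
            with urllib.request.urlopen(IMDS_URL, timeout=timeout) as resp:
                return resp.read().decode()
        except (urllib.error.URLError, OSError):
            # On-prem (KVM, VMware, bare metal) the request just times out.
            return None

    if __name__ == "__main__":
        instance_id = ec2_instance_id()
        where = f"EC2 instance {instance_id}" if instance_id else "outside EC2"
        print(f"Kernel {platform.release()}, running {where}")

On an Ubuntu guest in EC2 the reported kernel string would typically end in -aws, while the same check on-prem would usually show a -generic or -kvm build, which is the fork-versus-shared-kernel question the hosts raise above.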

Published Date : Aug 14 2017

SUMMARY :

Dustin Kirkland of Canonical joins John Walls and Stu Miniman on theCube at AWS Summit 2017 at the Javits Center in New York City. He explains that roughly 70% of all instances running on Amazon are Ubuntu, describes the AWS-optimized Ubuntu kernel that is about 15% smaller and boots about 30% faster, and covers Extended Security Maintenance and kernel live patching for long-running workloads alongside GPU-enabled instances for machine learning. The discussion ranges across artificial intelligence at the edge, autonomous vehicles and industrial IoT running Ubuntu, and Ubuntu as a portability layer for hybrid deployments spanning on-prem hypervisors and AWS.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Stu Miniman | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
John Walls | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
Canonical | ORGANIZATION | 0.99+
Dustin Kirkland | PERSON | 0.99+
70% | QUANTITY | 0.99+
Dustin | PERSON | 0.99+
100% | QUANTITY | 0.99+
New York City | LOCATION | 0.99+
30% | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
Ubuntu 12.04 | TITLE | 0.99+
Ubuntu | TITLE | 0.99+
hundreds | QUANTITY | 0.99+
two sides | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Invidia | ORGANIZATION | 0.99+
One side | QUANTITY | 0.99+
Tesla | ORGANIZATION | 0.99+
two types | QUANTITY | 0.99+
Xen | TITLE | 0.99+
15% | QUANTITY | 0.99+
Pandora | ORGANIZATION | 0.99+
10 workloads | QUANTITY | 0.99+
Spotify | ORGANIZATION | 0.99+
one example | QUANTITY | 0.98+
12.04 | TITLE | 0.98+
Javits Center | LOCATION | 0.98+
both | QUANTITY | 0.98+
Google | ORGANIZATION | 0.98+
West Texas | LOCATION | 0.98+
Debian | TITLE | 0.98+
seven | QUANTITY | 0.97+
AWS Summit 2017 | EVENT | 0.97+
AWS Summit | EVENT | 0.97+
EC2 | TITLE | 0.96+
Big Apple | LOCATION | 0.96+
billions of devices | QUANTITY | 0.96+
about 5,000 strong | QUANTITY | 0.96+
Whole Foods | ORGANIZATION | 0.96+
this morning | DATE | 0.96+
thousands of times | QUANTITY | 0.95+
About 70% | QUANTITY | 0.95+
Windows | TITLE | 0.95+
this week | DATE | 0.95+
Vmware | TITLE | 0.95+

Carlos Carrero, Veritas - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE


 

>> Narrator: Live from Boston, Massachusetts, it's the Cube covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, RedHat, and additional ecosystem support. >> Hi. I'm Stu Miniman here with my cohost John Troyer. Happy to welcome to the program Carlos Carrero, who's a senior principal product manager with Veritas. Carlos, great to see you. >> Yeah, thank you very much. >> Stu: Alright. >> Great to be here. >> So, so many of the things we talk about here in OpenStack and the Cloud world are relatively short-lived. The average lifetime of the average Cloud deployment is like 1.7 years. You've been at Veritas a little bit longer than that. I had an opportunity to have a conversation with you about some of your history, so we're going to have to take the abbreviated format of that, but give us a little bit about, you know, your time at Veritas, some of the ebbs and flows of your career. >> Yeah, well, again, thank you for having me here. It's great. I've had 16 years with Veritas; as I mentioned before to you, you know, back in 1994, 1995 we created the first file system and volume manager, right. A lot of things happened since then, right. At that point in time, the term software defined storage was not yet there. Back, many years ago, we got some piece of software running on top of any kind of hardware, and we were able to help customers to move workloads from one place to another, in a very agnostic point of view, right. And then we moved into clouds, and now, three years ago, we started looking into what do we do with OpenStack clouds, because this is going to define... It's going to need something very new, something different. So today, this week, we are very happy because we finally announced HyperScale for OpenStack, which is a software defined storage solution that has been built for OpenStack clouds. >> When I look at the industry these days, the term lately is storage services. How we're doing things in software more, OpenStack is the open source infrastructure piece. You guys are the hipster player in this space. You were doing software defined storage and software services not attached to everything else beforehand, so it sounds like OpenStack's a natural fit. Tell us a little bit more about how Veritas fits into that. >> Well, I think that again, it was a perfect fit, but we had to review what we were doing. Okay, because again, I've been many years... I was working with traditional legacy architectures in the past. We had a world-class file system that today can work with 128 nodes. But we revisit... Is this what we really need for the new OpenStack clouds, are they going to scale? And as you said, is that what I need, the storage services? So what do we have to rethink? What do we have to do to provide those storage services to the OpenStack clouds? So three years ago, we had this, we called it the open flame project, that today is HyperScale. It has been built from scratch. New product, what we call an emerging product at Veritas, and finally we got separated from Symantec, and we got all the visibility on the storage game. And using all the knowhow that we have in history, as I say, we're a very big startup, right? But now, emerging with new products, we need new solutions that have been designed for OpenStack from scratch. >> Could you drill down on the product itself? Is this file, block, object storage? Is this sitting on top of servers, laid out in a server-based way? How does it interact with OpenStack drivers? That sort of thing.
>> Yeah, that's a good question. So it is Cinder storage. What we provide is block storage for OpenStack. Something key: it is based on commodity hardware of your choice, so you decide what is the hardware that you want to use. Really, it's x86 servers that you can choose in the market, whatever you want. And one of the key differentiators is that we provide block storage, but we separate the compute plane and the data plane. And this is an architectural decision we had to take three years ago. We said we cannot scale, we cannot provide the storage services that you need, in a single layer of storage. Because that is what most of the software defined storage solutions on the market are doing today. And then they're having problems with things like noisy neighbor. They have problems with things like the scalability, like the quality of service, and of course they're having problems with protection. How do I protect my cloud environments with OpenStack? And we, as a NetBackup company, we have our leading NetBackup solution, we hear that from our customers. So it is not that we're bringing another solution that is going to bring another noisy neighbor, so we really have to separate two layers. Compute plane, where you have your first copy, and the data plane, where you use cheaper and deeper storage to keep the second, third copy, and do all the data mining operations. >> That's interesting what you just said there too. Two copies, so you do have a copy that's close to the compute. But then you have another. >> Correct. Because, again, if you take a look at what you have in the market, typically it's one-size-fits-all. So, do you need three copies for everything? And today, you have emerging technologies. You can have things like MySQL, where you need high performance, or you can have things like Cassandra, where you end up with nine copies, because the application itself is giving you the resiliency. So if you use a standard solution where for each OpenStack instance you have three copies, that means you have three copies, three copies, three copies. So nine copies. And it's not only the number of copies. It's that when you make a write, you're writing nine times. And you're writing on the single layer. So we said, we have to separate that. The first thing is, what is the workload? Stop thinking about the storage. Stop thinking this is a pool of SSDs or a pool of HDDs, and then start thinking about the workload. And then we connected that very well with OpenStack, because in OpenStack you have the definition of flavors, right? That is, how many CPUs do you need? How much memory? But also we extend those flavors to say, what do you need in terms of storage? What is the resiliency level that you need? What is the number of copies? What is the minimum performance that you need? What is the maximum performance? It's not only about solving the noisy neighbor with a maximum performance limit; it's about guaranteeing that you are going to have a minimum number of IOs per second. At the end, what you can get, you can have a MySQL with high performance needs running with web servers on the same box without fighting each other. >> Carlos, can you speak a little bit about how customers consume this, how do they buy it, how's it priced? How do you get it to market? We've talked before with Veritas. Storage used to always be in an appliance or an array or things like that, and the software cloud world is a little bit different. How does that fit? >> So today, it's software only.
So you make that decision about what hardware to use. We try to simplify the go to market model, so it's based on subscription. You just pay for the max capacity that you have, and you only pay for what you have at the compute plane. So I think it's the simplest model that we could find to go into the open source projects and be able to attach to that. >> Okay, could you speak to... When you talk about go to market from a partnership standpoint, it's a big market out there. Veritas, well-known name for many years, but what partners are involved in this? Any certifications that are needed? >> We're working with our typical partners that have some expertise with OpenStack, and helping with them. We are now also working with hardware providers. We are working with Supermicro and creating reference architectures with them, so at the end we can explain to the customers what they can get from different hardware. So we're working with them. And we're also working with new partners. For example, yesterday with us on the stage, we had Verbanks. Verbanks is an OpenStack ambassador in the Netherlands. They have been working with us from the very beginning of the project, on the validation. They understand OpenStack. They understand the issues, and they have been doing all the validation with us about, yes guys, this is the right thing. You have to do it from the very beginning. >> Is this product tuned specifically for OpenStack or will it be available for other kinds of private cloud applications?
As we get towards the end of the event, I'm sure you've had plenty of interesting customer conversations. Any one, I'm sure you can't mention names, but any interesting anecdote or just a general feel of the community? >> I feel that my anecdote for yesterday, when I had to work presentation, we had a customer on the room. We had been working on a POC with them. We have been very, very helpful customer. We finished. "Do you have any questions?" This guys stands up, went to the microphone and I was thinking, what is he going to ask? He knows everything about the product. And he said, he guys, you are doing the right thing. This is great. I'm fantastic, you are bringing a lot of value here. So I was like, wow. >> In my understanding, it was a big brand name customer who actually said where he was from, which is great validation, something we've heard all week is there's that sharing here with the community, so financial companies who, in the past, wouldn't have done that, TelCos who do that in the past, great to see. Give me the final word, Carlos. >> Yeah, the thing, again, is as you said validation is a key thing. I've been a lot of years in the company. I got this project eight months ago, and all the things I've been doing is validation, talking to customers to I don't know how many analysts I've been talking to in this week. And I love Dan said, yeah, you guys are doing the right thing. This is that direction that we have to move, so happy that finally, emerging again from Veritas, being back here with the community on OpenStack. >> Well, the speed of change, constant learning on new things and helping customers move forward. Big theme we've seen in the show. Carlos Carrera. I appreciate you joining us here. For John and Stu, thanks for watching The Cube here at OpenStack Summit. (mid-tempo electronic music)

Published Date : May 10 2017

SUMMARY :

Carlos Carrero, senior principal product manager at Veritas, joins Stu Miniman and John Troyer on theCube at OpenStack Summit 2017 in Boston. He discusses HyperScale for OpenStack, a software-defined block storage solution built for OpenStack on commodity x86 hardware that separates the compute plane from the data plane, ties resiliency and minimum and maximum IOPS guarantees to per-workload flavors, and integrates with NetBackup for protection. He also covers the subscription pricing model, partnerships with Supermicro and community validation, plans to add Red Hat support and a community edition by the end of the year, and an upcoming beta for containers.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Carlos Carrera | PERSON | 0.99+
Veritas | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Carlos | PERSON | 0.99+
nine copies | QUANTITY | 0.99+
1994 | DATE | 0.99+
Verbanks | ORGANIZATION | 0.99+
Dan | PERSON | 0.99+
John Troyer | PERSON | 0.99+
first copy | QUANTITY | 0.99+
Stu | PERSON | 0.99+
nine times | QUANTITY | 0.99+
Two copies | QUANTITY | 0.99+
Semantec | ORGANIZATION | 0.99+
three copies | QUANTITY | 0.99+
second | QUANTITY | 0.99+
Netherlands | LOCATION | 0.99+
86 servers | QUANTITY | 0.99+
OpenStack | ORGANIZATION | 0.99+
16 years | QUANTITY | 0.99+
yesterday | DATE | 0.99+
third copy | QUANTITY | 0.99+
Ninety percent | QUANTITY | 0.99+
1.7 years | QUANTITY | 0.99+
Supermicro | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
this week | DATE | 0.99+
two layers | QUANTITY | 0.99+
RedHat | ORGANIZATION | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
today | DATE | 0.99+
one | QUANTITY | 0.99+
eight months ago | DATE | 0.99+
three years ago | DATE | 0.99+
TelCos | ORGANIZATION | 0.98+
each | QUANTITY | 0.98+
1995 | DATE | 0.98+
first version | QUANTITY | 0.98+
OpenStack Summit | EVENT | 0.98+
OpenStack | TITLE | 0.98+
mySQL | TITLE | 0.98+
single layer | QUANTITY | 0.98+
both | QUANTITY | 0.98+
many years ago | DATE | 0.97+
OpenStack Summit 2017 | EVENT | 0.97+
#OpenStackSummit | EVENT | 0.97+
Carlos Carrero | PERSON | 0.96+
first thing | QUANTITY | 0.96+
open stack | TITLE | 0.95+
128 notes | QUANTITY | 0.94+
RedHat | TITLE | 0.93+
Ubuntu OpenStack | TITLE | 0.92+
first file | QUANTITY | 0.92+