

Long Live Swarm: On-Demand Swarm on K8s | Mirantis


 

>>Welcome to the session, Long Live Swarm. With containers and Kubernetes everywhere, we have this increasing cloud complexity at the same time that we're facing economic uncertainty, and of course, to navigate this, for most companies it's a matter of focusing on speed and on shipping and iterating their code faster. Now, for many Mirantis customers, that means using Docker Swarm rather than Kubernetes to handle container orchestration. We really believe that the best way to increase your speed to production is choice, simplicity and security. So we wanted to bring you a couple of experts to talk about the state of Swarm and Docker Enterprise and how you can make the best use of both. So let's get to it. Well, good afternoon or good morning, depending on where you are, and welcome to today's session, Long Live Swarm. I am Nick Chase, I'm head of content here at Mirantis, and I would like to introduce you to our two panelists today. Manzini, why don't you introduce yourself? >>I'm a solutions architect here at Mirantis. I work primarily with Docker Enterprise. I have a long history of working with the support team at what used to be Docker Enterprise, part of Docker Inc. >>Okay, great. And Don Bauer. >>Yeah, I'm Don Bauer, Docker Captain and Docker community leader. Right now I run our DevOps team for Citizens Bank out of Nashville, Tennessee, and I'm happy to be here. >>All right, excellent. So thank you both for coming. Now, before we say anything else, I want to go ahead and name the elephant in the room. There's been a lot of talk about the >>future. Yeah, that's right. Swarm, as it stands right now: we have a very vested interest in keeping Swarm functional and keeping it a viable alternative or complement to Kubernetes for our customers who want to continue using it, however you see the orchestration war playing out, as it were. >>Okay. It's hardly a war at this point, but they do work together, and so that's >>absolutely right. Yeah, I definitely consider them more complementary services, in a using-the-right-tool-for-the-job sort of sense. They both had different design goals when they were originally created, so I definitely don't see it as a completely one-or-the-other kind of decision; they can both be used in the same environment, and in similar clusters, to run whatever workload you have. >>Excellent. And we'll get into the details of all that as we go along. So that's terrific. Now, I have not really been involved in the Swarm area, so set the stage for us for where we started out with all of this. Don, I know that you were involved, so set the stage for us. >>Sure. I mean, I've been a heavy user of Swarm in my past few roles. Professionally, we've been running containers in production with Swarm for coming up on about four years now. In our case, we looked at what was available at the time, and of course you had Kubernetes as your biggest contender out there, but like I just mentioned, one of the things that really led us to Swarm is that its design goals were very different from Kubernetes'. Kubernetes tries to have an answer for absolutely every scenario, where Swarm tries to have an answer for, let's say, the 80% of problems or challenges that you might come across, 80% of the workloads.
I had a better way of saying that, but I think I got my point across. >>Yeah, I think you hit the nail on the head. Kubernetes in particular, with the way that Kubernetes itself is an API: I believe that Kubernetes was written as a toolkit. It wasn't really intended to be used by end users directly; it was really a way to build platforms that run containers. And because it's this really extensible API, you can extend it to manage all sorts of resources. Swarm doesn't have that extensibility aspect, but what it was designed to do, it does very well and very easily, in a very simple sort of way. It's highly opinionated about the way that you should use the product, but it works very effectively. It's very easy to use; not low effort, but a low barrier to entry. >>Yes, absolutely. I was going to touch on the same thing. It's very easy for someone to come in and pick up Swarm. They don't have to know anything about the orchestrator on day one. Most people that are getting into this space are very familiar with Docker Compose, and going from Docker Compose to Swarm is changing one command that you would run on the command line. >>Yeah, very trivial. If you are already used to building Dockerfiles and using Compose to organize your deployment into stacks of related components, it's trivial to turn on swarm mode and then deploy your container set to a cluster. >>Well, excellent. So answer this question for me: is the Swarm of today the same as the original Swarm? Like, when Swarm first started, is that the same as what we have now? >>It's kind of a complicated story with the Swarm project, because it's changed names and forms a few times. It originated somewhere around 2014 in the first version, and it was a component that you really had to configure and set up separately from Docker. The way that it was structured, you would just have Docker installed on a number of servers or machines in your cluster, and then you would organize them into a swarm by bringing your own database and some of the tooling to get those nodes talking to each other and to organize your containers across all of your Docker engines. A few years later, the Swarm project was retooled and baked into the Docker engine, and this is where we sort of get the name change from. So originally it was a feature that we called Swarm. Then the SwarmKit project was released on GitHub and baked directly into the engine, where they renamed it swarm mode, because now it is a mode option that you just turn on as a button in the Docker engine. And because it's already there, the tuning knobs that you have in SwarmKit, with regard to what my timeouts are and some of these other sort of performance settings, are locked in place. It's part of the opinionated set of components that builds up the Docker engine: we bring in the SwarmKit project with a certain set of defaults and settings, and that is how it operates in today's version of the Docker engine. >>Okay, that makes sense. So, Don, I know you have pretty strong feelings about this topic, but is Swarm still viable in a world that's increasingly dominated by Kubernetes? >>Absolutely. And you're right, I'm very passionate about this topic.
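To make the Compose-to-Swarm point above concrete, here is a minimal sketch of what turning on swarm mode and deploying an existing Compose file as a stack might look like. It assumes a reasonably recent Docker engine and a docker-compose.yml already in the working directory; the stack name "myapp" and the service name "web" are placeholders for the example, not details from the session.

    # Turn this Docker engine into a single-node swarm (enables swarm mode)
    docker swarm init

    # Optionally join more nodes using the token printed by the command above:
    # docker swarm join --token <worker-token> <manager-ip>:2377

    # Deploy the existing Compose file across the swarm as a named stack
    docker stack deploy -c docker-compose.yml myapp

    # Check the services and replicas the stack created
    docker stack services myapp
    docker service ps myapp_web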
Where I work, we're doing almost all of our production workloads on Swarm. We've got something like 600 different services, between three and four thousand containers at any given point in time. Out of all of those projects, all of those services, we've only run into two or three that don't quite fit into the opinionated model of Swarm. So we are running those on Kubernetes in the same cluster, using Mirantis's Docker Enterprise offering. But that's a very small percentage of services that we didn't have an answer for in Swarm. The one case that really gets us just about every time is scaling stateful services, but you're going to have very few stateful services in most environments. For things like microservice architecture, which is predominantly what we build out, Swarm is perfect. It's simple, it's easy to use, and you don't end up going through miles of YAML files trying to figure out the one setting that you didn't get exactly right. The other big piece that really led us to adopting it so heavily in the beginning is the overlay network. Your networks don't have to span the whole cluster like they do with Kubernetes, so we could set up network isolation between service A and service B just by using the built-in overlay networks. That was a huge component that, like I said, led us to adopting it so heavily when we first got started. >>Excellent. You look like you're about to say something. >>Yeah, I think that speaks to the design goals for each piece of software. The way that I've heard this described before, with regard to the networking piece, is that the Docker networking under the hood feels like it was written by a network engineer. The way that the Docker engine overlay networks communicate uses VXLAN under the hood, which creates pseudo-VLANs for your containers, and if two containers aren't on the same VLAN, there's no way they can communicate with each other. As opposed to the design of Kubernetes networking, which is really left to the CNI implementation but still has the design philosophy of one big, flat subnet, where every IP can reach every other IP and you control what is allowed to access what by policy. So it's more of an application-focused design, whereas in Docker Swarm, on the overlay networking side, it's really a network engineering sort of focus. Right? >>Okay, got it. Well, so now how does all this fit in with Docker Enterprise? I understand there's been some changes in how Swarm is handled within Docker Enterprise coming with this new release. >>So Swarm inside Docker Enterprise is represented as both the Swarm classic legacy system that we shipped way back in 2014 and also the swarm mode that is currently used in the Docker engine. The Swarm classic back end gives us legacy support for being able to run unmanaged plain containers on a cluster. If you were to take Docker CE right now, you would find that you wouldn't be able to just do a very basic docker run against a whole cluster of machines; you can create services using the Swarm services API, but that legacy plain-container support is something that you have to set up external Swarm in order to provide. So right now the architecture of Docker Enterprise UCP is based on some of that legacy code from about five or six years ago.
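The overlay-network isolation Don describes can be sketched in a version 3 Compose/stack file like the one below; the images, service names and network names are invented for illustration and assume swarm mode is already enabled on the cluster.

    version: "3.8"
    services:
      service-a:
        image: nginx:alpine
        networks:
          - frontend            # attached only to the frontend overlay
      service-b:
        image: redis:alpine
        networks:
          - backend             # not reachable from service-a
      api:
        image: example/api:latest   # hypothetical image that bridges both networks
        networks:
          - frontend
          - backend
    networks:
      frontend:
        driver: overlay         # VXLAN-backed, spans only nodes that run attached tasks
      backend:
        driver: overlay

Deployed with "docker stack deploy -c stack.yml demo", service-a can resolve and reach api over the frontend network but has no route to service-b at all, which is the kind of isolation being described.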
That gives us the ability to deploy plain containers for use cases that require it, as well as swarm services for those kinds of workloads that might be better served by the built-in load balancing, HA and scaling features that Swarm provides. >>Okay, so now I know that at one point Kubernetes was deployed within Docker Enterprise by creating a swarm cluster and then deploying Kubernetes on top of Swarm. >>Correct, that is how the current architecture works. >>Okay. All right. And then where are we going with this? Are we going to be running Swarm on top of Kubernetes? >>The design goals for the future of Swarm within Mirantis Docker Enterprise are that we will start deploying Kubernetes cluster features as the base, and SwarmKit on top of Kubernetes. So it is, like you mentioned, just a reversal of the roles. I think we're finding that the ability to extend the Kubernetes API to manage resources is valuable at an infrastructure and platform level in a way that we can't do with Swarm. We still want to be able to run Swarm workloads, so we're going to keep the SwarmKit code, the SwarmKit orchestration features, to run swarm services as part of the platform. >>Got it. Okay, so if I'm a developer and I want to run Swarm, but my company's running Kubernetes, what are my options there? >>Well, I think he touched on it pretty well already: it depends on your design goals. One of the other things that's come up a few times is that the level of entry for Swarm is much, much simpler than Kubernetes. It's kind of hard to introduce anything new, so a company that's got most of their stuff in Kubernetes in production is going to have a hard time maybe looking at Swarm. That resistance is going to be higher up, not the boots on the ground but upper management, because at some point you have to pay for the support for all of it. What we did in our approach, because there was one team already using Kubernetes: we went ahead and stood up a small swarm cluster and taught the developers how to use it and how to deploy code to it, and they loved it. They thought it was super simple. As time went on, the other teams took notice and saw how fast these guys were getting code deployed, getting services up, getting things usable, and they would look over at what the innovation team was doing and say, hey, I want to do that too. So there's a bunch of different approaches; that's the approach we took, and it worked out very well. It looks like you wanted to say something too. >>Yeah, I think that if you're having to make this kind of decision, there isn't a wrong choice; it's really about what role Swarm plays in your organization. If you're an individual and you're using Docker on your workstation or your laptop, but your organization wants to standardize on Kubernetes, there are still tools like Kompose, and Kubernetes manifests, if you need to deploy Kube resources. And if you are running Docker Enterprise, the SwarmKit code will still be there, and you can run swarm services as regular Swarm workloads on that component. So I don't want people to think that they're going to be locked into one or the other orchestration system.
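For readers who want to see the difference between the plain containers and the swarm services discussed above, a rough sketch follows; the service name, image and replica counts are illustrative, not taken from the session.

    # A plain container: runs once on one engine, with no orchestrator watching it
    docker run -d --name web-standalone nginx:alpine

    # A swarm service: the orchestrator keeps 3 replicas running, reschedules
    # failed tasks, and load-balances port 80 across replicas via the routing mesh
    docker service create --name web --replicas 3 \
      --publish published=80,target=80 nginx:alpine

    # Scale up or down without touching individual containers
    docker service scale web=5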
The way we want to enable developer choice is so that however the developer wants to do their work, they can get it done. Docker Desktop ships with a Kubernetes distribution bundled in it, so if you're using a Mac or Windows and that's your development system, you can run Docker Desktop, turn on Kubernetes mode and run the Kubernetes bits. So you have the choices; you have the tools to deploy to either system. >>And that's one of the things that we were super excited about when they introduced Kubernetes into the Docker Enterprise offering. We were able to run both, so we didn't have to have that, I don't want to call it a battle or argument, but we didn't have to make anybody choose one or the other. We gave them both options just by having Docker Enterprise. >>Excellent. So speaking of having both options, let's just say for developers who need to make a decision: should I go Swarm, or should I go Kubernetes? What are some of the things that they should think about? >>So I think that certain elements of containers are going to be agnostic. Designing a Dockerfile and building a container image, you're going to need that skill for either system that you choose to operate on. Some of the Swarm advantage comes in that you don't have to know anything beyond that. You don't have to learn a whole new API, a whole new domain-specific language using YAML to define your deployment. Chances are that if you've been using Docker for any length of time, you probably have a whole stack of Compose files that are related to things that you've worked on, and again, the barrier to entry to getting those running on Swarm is very low: you just turn it on, docker stack deploy, and you're good to go. So if you're trying to make that choice, if you have a use case that doesn't require you to manage new resources, if you don't need the extensible-resources part, Swarm is a great, viable option. >>Absolutely. The recommendation I've always made to people that are just getting started is: start with Swarm and then move into Kubernetes, and in going through the two of them you're going to figure out what fits your design principles, what fits your goals, which one is going to work best for you. And there's no harm in choosing one or the other, or using both; each one is very tailor-fit for various types of use cases. Like I said, Kubernetes is great at some things, but for a lot of other stuff I still want to use Swarm, and vice versa. >>In my home lab, for all my personal services that I run on my home network, I use Swarm. For things that I might deploy into a business environment, a lot of the ones that I'm using right now are mainly tailored for Kubernetes. I think some of the tools that are out there in the open source community, as well as in Docker Enterprise, help to bridge that gap; for example, there's a translator that can take your Compose file and turn it into Kubernetes YAMLs. If you're trying to decide, on the business side, whether to standardize on Swarm or Kubernetes, I think the question is what functionality you're looking to get out of your system, for example, whether you need tight integration into an infrastructure vendor such as AWS, Azure or VMware that might have plug-ins for Kubernetes.
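The "translator" mentioned above is commonly Kompose, an open source tool that converts Compose files into Kubernetes manifests; the sketch below shows typical usage, assuming the kompose and kubectl CLIs are installed and that the Compose file defines a service called web (an assumption for the example, not something stated in the session).

    # Convert an existing Compose file into Kubernetes manifests
    kompose convert -f docker-compose.yml
    # Typically emits one Deployment and one Service per Compose service,
    # e.g. web-deployment.yaml and web-service.yaml

    # Apply the generated manifests to a Kubernetes cluster
    kubectl apply -f web-deployment.yaml -f web-service.yaml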
With that, you're getting into the area where you're managing resources of the infrastructure with your orchestration API. With Kube, things like persistent volumes can talk to your storage device, carve off chunks of storage and assign those to pods. If you don't have that need or that use case, Kubernetes is bringing in a lot of features that you're maybe just not taking advantage of. Similarly, if you want to take advantage of things like autoscaling to scale horizontally (let's say you have a message queue system and a number of workers, and you want to start scaling up your workers when your CPU hits a certain metric), that is something that Kubernetes has built right into it. So if you want that, I would probably suggest that you look at Kubernetes. If you don't need that, or if you want to write some of that tooling yourself, note that Swarm doesn't have an object built into it that will do automatic horizontal scaling based on some kind of metric. So I always consider this decision as: which features that you and your business need are available to you? >>All right, excellent. Well, fortunately, of course, they're both available on Docker Enterprise, so aren't we lucky? All right, so I am going to wrap this up. I want to thank Don Bauer, Docker Captain, for coming here and spending some time with us, and Manzini, I would like to thank you as well. I know that the circumstances are less than ideal here for your recording today, but we appreciate you joining us. Both of you, thank you very much. And I want to invite all of you, first of all, to accept our thanks for joining us; we know your time is valuable. I also want to invite you all to take a look at Docker Enterprise. Follow the link that's on your screen, and we'll see you in the next session. Thank you all so much. >>Thank you, Nick.
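On the autoscaling point made just above: the built-in object being referred to on the Kubernetes side is the HorizontalPodAutoscaler. A minimal sketch is below; it assumes a Deployment named queue-worker already exists and that the cluster has a metrics server installed, both of which are assumptions for the example rather than details from the session.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: queue-worker-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: queue-worker          # hypothetical worker Deployment
      minReplicas: 2
      maxReplicas: 20
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add replicas when average CPU passes 70%

Swarm has no equivalent object; the closest approach there is external tooling that watches a metric and calls docker service scale, which is the "write some of that tooling yourself" option mentioned above.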

Published Date : Sep 14 2020



Philipp Pieper, Swarm Funds | Blockchain Week NYC 2018


 

>> Voiceover: From New York, it's theCUBE covering Blockchain Week. Now, here's John Furrier. >> Hello everyone, welcome back, I'm John Furrier here in the ground in New York City, Manhattan, for Blockchain Week New York, also day three of Consensus 2018; it's a huge event, everyone's here in all the action. Philipp Pieper's the CEO and co-founder of... he's with Swarm Funds; now, it's an interesting story, we've interviewed a couple of other companies: Polymath, Securitize, these guys got a unique value proposition. Philip, Swarm Funds, tell about what you guys are doing? >> Sure. >> What's the value proposition, where you guys are at? >> So we are the first security token framework that is live in the market. We launched, actually, end of January, only three months after the ICO, and we focus actually on tokenizing LP positions and funds, and we do that with a unique legal structure, governing structure, and obviously token infrastructure, that actually is meant to become a lingua franca that anyone in the market can collaborate on, so we even invite the previously named companies to actually collaborate with this because it's not a one-person or one organization sellout. >> And you got a shipping product. >> We have a shipping product. We actually have business on it, which means that there's funds that have tokenized on our platform, four of them actually. We have another 50 right now in the pipeline, so the next couple of weeks we're going to see at least nine to maybe 15 that are going to come to the market. >> So, I understand your value proposition. Are you guys operationalizing venture capital or equity partners? Or is it targeting entrepreneurs themselves or both? Who's the customer for you? >> So, on the project side, on the investment opportunity side, it's actually people that have something that they've done in the past that have existing business and where we just become another part of their capital structure. So, when you >> Give me an example. >> When you focus on a fund, so for example, we have a fund called Andra Capital that is a pre-IPO tech fund, so you can buy into a composite of Airbnb, Uber, and other tech companies where they buy secondaries off the market. They're an existing fund, they have existing LP's, they have existing business, and for them to open up to the crypto landscape, both for crypto investors as well as family offices, we're that conduit. >> Yeah. >> So, for them it's no change of legal structures, they can just do this in the existing way, and for us in the crypto community it's an excellent way to democratize access to that, so you can get into these kind of things that normally were only for the privileged investors. >> And so the benefit to them is that they don't have to unwind or mess with a tangled web of deals and LP's, relationships, because it's complicated, the side deals, all kinds of, not side deals, but you know what I'm saying, like one, there's a lot of moving parts, right, so? >> Well, yeah, and even more so, they don't have to put all their chips into this one thing that, you know, we all believe that is going to be big, but who knows whether it's going to pan out? So, you know, if I would approach one of those partners and say, "Well, your entire fund has to be tokenized." That's a pretty big deal with a lot of resistance. 
In this way, they can just open up a backdoor saying, "Okay, let's test this out, see how it works" and, by the way, they can actually push their existing investors to that direction, too, because it has a liquidity to it. That's the key element that is missing >> Yeah and they don't have to do anything different, so it's really smart. So, I've got to ask you, so, your advice or security token's been a pretty positive reaction from most folks. Hey, finally a security token, there are people are raising money, that's what we're doing, I mean that's what we're doing, no one has product. I mean, we have a product, some people have products, you have products. The thing is that there's very few people that have products so they're basically raising money. So call it what it is, it's a raising money token. Security tokens are now good, but as the entrepreneurs out there, they say, "Well, do I just pledge with my cashflow, or do I put equity against it?" What's your vision on how entrepreneurs should think about what they give up for the tokens, how they securitize it? >> Are you meaning that the entrepreneurs actually come to the space with their entrepreneurial efforts or? >> So, I'm an entrepreneur and I say I want to raise 15 million dollars or 10 million dollars on an issuing a security token and what do I get for that? So the investor wants security. >> Well, the investor wants actually something that is reliable in the most legal way possible, which means that it is something that they can, you know, have confidence that there's something on the other end, that there is a trustful asset that is underlying, that there's a legal stress that they can put this to and if things go sideways, that they have a voice that they can actually govern their ownership with. >> What is that now, what's the standard? Is there a standard evolving around what that is behind the security token? Is it cashflow, is it equity? >> Well, so, in our case we pay attention to actually having a vetting process that actually makes sure that things exist where actually, so this one token being the utilities, sort of like, it's a token to consider us as an AWS for fund operations, so, we incentivize existing players to help vet. We are working with some of the biggest servicing firms and auditing firms to, in the end, actually put the rubber stamp on stuff saying this actually is in existence and it's being, you know, looked at in detail, and the community in the end then can actually say, "We want this, too" or "We don't want this." So, there's multiple hoops that someone has to jump through before they can actually claim to be on a network like Swarm on this SRC-20 token that we have. >> What's interesting, too, is that what I like about your business model is that there's leverage, too, and, as you do things, you don't have to do it again, and, so, everyone has to sort of replicate and provision their company some way, right? So, it's complicated. >> Well, and, by the way, just to extend that also to the fact that there's only, there's one investor graph that is a qualified investor graph that basically anyone can chip in to, and it makes it incredibly easy for a qualified investor to move around on amongst different security tokens, and not just do that, like on a dedicated platform, but we are taking this into existing exchanges. 
You can even think of a model where this works with a decentralized exchange, where people can confidently trade with one another and they don't have to requalify with the decentralized exchange, which doesn't have an organization to qualify them. >>It sounds like cloud computing and devops in action. >>Yeah. >>Bringing in some crypto, so you probably bring great service. Okay, what else is going on? How much did you raise, how big is the team, what's going on with the company? >>Yeah. >>What's next? What's on the roadmap? >>So, we actually started thinking about this at the end of 2016, before this whole craziness started, so there's a lot of pen to paper that we had to put in place, and there was preparation going into the ICO that we did in September/October; we were very restrictive in the way that we did it, and we had a token liquidity release in order to appeal to some of the more US-focused investors. We raised 5.5 million dollars back then, valued in ether, pretty good. The foundation still holds half of the tokens, and we were just recently cleared to be not a security. In this realm, we clearly separated the security from the utility function, and we are off to the races with not just being listed on exchanges but also actually listing the security tokens on exchanges, with a clear mandate by the token issuers that that's something they are qualified to do. >>That's awesome. So, Philipp, I'm going to give you a use case: if I'm going to do a token offering, say for theCUBE, hypothetical, wink wink, what do I do? How do I engage with you? Would I use your service? How would I use your service? I'm going to issue tokens, you know, we're building the business, we're building the brand, we're going to open it up. I don't have time to deal with all those details. It's a lot of hassle. Do I do the Cayman Islands, special purpose vehicles, I mean, where is my entity, what's my domicile, what's the law here? Do I use you? I mean, would I use you guys and that would be the service, or would I have to go somewhere else? Who do I use? How would I use Swarm? >>Well, there's two parts to answering that question. One is, obviously, we have a lot of institutional organizations on the other end that have their own custom setup; they have existing things, and we make it incredibly easy for them to engage with us because we form these SPVs, which, so far, we've trialed in the BVI and Cayman and Estonia and Liechtenstein, and those entities become shareholders of the underlying assets. So, if someone wants to list something, they go to tokenize.swarm.fund; there's an intake form that allows them to supply their proposals, and their proposals get put through different layers of vetting, so we work with... >>From your team? >>Well, first our team, but we work with external people that vet that too, and then it goes to auditing firms that then say this is something real, because before we take it to market and actually offer it to the broader community, we really want to make sure that this is something that has validity to it, because, as you know, the market can be killed by the first inklings of something not being real.
Again, the analogy is the AWS, so it's basically, if someone wants to list, there's a gas for a fund listing that has to be paid, and that goes to both investor qualification as well as the auditing process. The same actually applies to the fund operations, so there's gas for fund operations, which goes to the technical nodes, the legal service providers we work with, accounting firms, people that want to do due diligence, like say I receive a nav report and that adds some value through it. >> It's coin-operated, literally. >> Exactly, but if I receive a net asset value report from one of the underlying assets, and I as an investor don't believe it, I can stake to say I want to have KPMG go off and actually validate that this is actually real and it's actually built on standards. >> You're bringing a lot of service providers together, you're also providing some base services, that's cool, what's next, what are you going to do this next year? What's next for you guys the second half of the year? >> I think we're just scratching the surface of what this is going to do. I mean, we're very happy that actually there's a very big focus by the market on actually security tokens, Wall Street is taking it extremely serious and legislators across the world are taking it seriously, so we're very, very fortunate to be in some of those conversations with legislators who want the security tokens base to be compliant with what they're thinking about. I think it's just going to be volume, on both ends, our target is to actually have a hundred thousand active investors engaged. We want to have at least a hundred funds that are live on the platform on the network, and we want to stitch partnerships with whoever wants to participate. That makes this a frictionless ecosystem such that everyone can continue doing their business. >> Well, we need more faster, better products out there. The SEC, you've seen some of the regulatory issues, slowing things down in the US and a lot of action going on outside the United States, so, the sooner the better, right? >> Yeah, but I think the SEC is taking the approach to say, "We're going to regulate the bad actors, but we're urging a self-regulatory position by the industry." And, so, efforts like all the ones that you mentioned and us actually going in the direction to be compliant, not shying away from having security tokens in a legal fashion is the good news because the more we show that the more actually they understand that this is not some kind of evasion strategy in many different directions. >> Yeah, and we need to move faster, cool. Well, great job Philipp, we've got a great job here, Swarm Fund, check it out, they're really making it easier for investors and limited partners, the Big Money, to actually move an encrypto, open up a door, put a toe in the water, and make money, get liquid, thanks for coming on. >> Thanks so much. >> We appreciate it, BlockChain Week New York City, I'm John Furrier, thanks for watching.

Published Date : May 25 2018



Haseeb Budhani, Rafay & Santhosh Pasula, MassMutual | KubeCon + CloudNativeCon NA 2022


 

>>Hey guys, welcome back to Detroit, Michigan. Lisa Martin and John Furrier here, live with theCUBE at KubeCon + CloudNativeCon North America. John, it's been a great day. This is day one of our three days of coverage. Kubernetes is growing up. Yeah, it's maturing. >>Yeah, we've got three days of wall-to-wall coverage, all about Kubernetes. We're talking about security, large scale, cloud native at scale. That's the big focus. This next segment's going to be really awesome. You have a fast-growing private company and a practitioner, a big-name, blue-chip practitioner, building out the next-gen cloud: first transforming, then building out the next level. This is a classic of what we call a supercloud-like interview. It's going to be great. I'm looking forward >>to this anytime we can talk about Supercloud. All right, please welcome back one of our alumni: Haseeb Budhani is here, the CEO of Rafay. Great to see you. Santhosh Pasula also joins us, the global head of Cloud SRE at MassMutual. Great to have you on the program. >>Thanks for having us. Thank you for having me. >>So Haseeb, you've been on theCUBE many times. You were on just recently. With the momentum that's around us today, with the maturation of Kubernetes, the collaboration of the community, the recognition of the community, what are some of the things that you're excited about on day one of the show? >>Wow, so many new companies. I mean, there are companies here that I don't know, and I live in this industry; I'm seeing companies that I don't know, which is a good thing. It means that the community's growing. But at the same time, I'm also seeing another thing, which is that I have met more enterprise representatives at this show than at other KubeCons. Like when we hung out in Valencia, for example, or other places, it hasn't been this many people. Which means, and this is a good thing, that enterprises are now taking Kubernetes seriously. It's not a toy. It's not just for developers. It's enterprises who are now investing in Kubernetes as a foundational component for their applications going forward, and that to me is very, very good. >>Definitely becoming foundational. >>Yep. Well, you guys have got great traction. We've had many interviews on theCUBE, and you've got a practitioner here with you. You guys are both pioneering kind of what I call the next-gen cloud. First you've got to get through gen one, which you guys have done at MassMutual extremely well. Take us through the story of your transformation, because you're at the front end now of that next inflection point. But take us through how you got here. You had a lot of transformation success at MassMutual. >>So I was actually talking about this topic a few minutes back, right? The whole cloud journey in big companies, large financial institutions, the healthcare industry, or our insurance sector: it takes generations of leadership to get to that perfection level. Ideally, the cloud-first strategy starts it off, and then how do you standardize and optimize cloud? That's the second gen altogether. And then operationalization of the cloud. And especially if you're talking about Kubernetes: in the traditional world, almost every company is running middleware and their applications in middleware, and then containerization is a topic that came in.
And Docker is basically the containerization runtime. So that came in first, and from Docker, eventually, when companies started adopting Docker, Docker Swarm is one of the technologies that they adopted. And eventually, when we were taking it to more complicated application implementations or modernization efforts, that's when Kubernetes played a key role. And as Haseeb was pointing out, you never saw so many companies working on Kubernetes. So that should tell you one story, right? How fast Kubernetes is growing and how important it is for your cloud strategy. >>And your success now, what are you thinking about now? What's on your agenda as you look forward? What's on your plate? What are you guys doing right now? >>So we are past the stage of proofs of concept, proofs of technology, pilot implementations. We are actually playing the real game now. In the past I used the quote, "hello world to real world." We are actually playing in the real world, not in the hello world anymore. Now, this is where the real-time challenges will pop up, right? If you're talking about standardizing and then optimizing the cloud: how do you put your governance structure in place? How do you make sure your regulations are met, the demands that come out of regulations are met? And how are you going to scale it, and while scaling, how do you keep up with all the governance and regulations that come with it? So we are in that stage today. >>Haseeb, Santhosh talked about the great evolution of what's going on at MassMutual, and you mentioned that one of the things surprising you about this KubeCon in Detroit is that you're seeing a lot more enterprise folks here. In your customer conversations, who's deciding in the organization? Who are the decision makers in terms of adoption of Kubernetes these days? Is that elevating? >>Hmm. Well, this guy. >>It's usually, you know, one of the things I'm seeing here, and John and I have talked about this in the past, is this idea of a platform organization in enterprises. Consistently what I'm seeing is somebody at the CTO or CIO level making a decision: I have multiple internal business units who are now modernizing applications; they're individually investing in DevOps, and this is not a good investment for my business. I'm going to centralize some of this capability so that we can all benefit together. And that team is essentially a platform organization, and they're making Kubernetes a shared-services platform so that everybody else can come and consume it. So what that means to us is that our customer is a platform organization, and their customer is a developer. So we have to make two constituencies successful: our customer, who's providing a multi-tenant platform, and then their customer, who's a developer. Both have to be happy. If you don't solve for both constituencies, you're not going to be >>successful. You're targeting the builder of the infrastructure and the consumer of that infrastructure. >>Yes sir, it has to be both. Exactly, right. So look, honestly, it takes iterations to figure these things out, right? But this is a consistent theme that I am seeing.
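The platform-team model described here, one shared multi-tenant Kubernetes platform serving many development teams, usually starts with basic tenancy primitives such as namespaces, quotas and RBAC. The sketch below is a generic illustration of that idea, not MassMutual's or Rafay's actual configuration; the team name, group name and quota values are invented.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-payments              # one namespace per tenant team
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-payments-quota
      namespace: team-payments
    spec:
      hard:
        requests.cpu: "20"             # illustrative guardrail values
        requests.memory: 64Gi
        pods: "200"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-payments-editors
      namespace: team-payments
    subjects:
      - kind: Group
        name: payments-devs            # assumed identity-provider group for the dev team
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: edit                       # built-in role: deploy workloads, no RBAC changes
      apiGroup: rbac.authorization.k8s.io

The platform team owns the cluster and these guardrails; the development team only ever sees its own namespace, which is the "two constituencies" split being described.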
In fact, what I would argue now is that every enterprise should really be stepping back and thinking about what their platform strategy is. Because if you don't have a platform strategy, you're going to have a bunch of different teams doing different things, and some will be successful and, look, some will not be. And that is not good for business. >>Yeah. And Santhosh, I want to get to you. You mentioned your transformation and what you look forward to, and your title, global head of Cloud SRE. Okay, so SRE, we all know, came from Google, right? Everyone wants to be like Google, but no one wants to be like Google, right? And no one is Google; Google's a unique thing, there's only one Google. But they had that power dynamic of one person to a large-scale set of servers or infrastructure. The concept can be portable, but the situation isn't. So Borg became Kubernetes; that's inside baseball. So you're essentially doing for MassMutual what Google did at their scale. That's kind of what's happening, is that how I see it? And you guys are playing in there, partnering. >>So I totally agree. Google introduced site reliability engineering. And if you take the traditional transformation of the roles, right: in the past it was called operations, and then DevOps came in, and then SRE is the new buzzword, and the future could be something like product engineering, right? And in this journey, here is what I tell folks on my side: what worked for Google might not work for a financial company, might not work for an insurance company. So it's okay to use the word SRE, but at the end of the day that SRE has to be tailored down to your requirements, and the customers that you serve, and the technology that you serve. Yep.
So that journey we have cut down, right? Technology is like Kubernetes. It makes, it makes, you know, an IT person's life so easy that, that they can, they can speed up the process in, in, in a traditional way. What used to take like an year or six months can be done in a month today or or less than that, right? So, so there's definitely the losses, speed, velocity, agility in general, and then flexibility. And then the automation that we put in, especially if you have to maintain like thousands of clusters, you know, these, these are today like, you know, it is possible to, to make that happen with a click off a button. In the past it used to take like, you know, probably, you know, a hundred, a hundred percent team and operational team to do it. And a lot of time. But, but, but that automation is happening. You know, and we can get into the technology as much as possible. But, but, you know, blueprinting and all that stuff made >>It possible. Well say that for another interview, we'll do it take time. >>But the, the end user on the other end, the consumer doesn't have the patience that they once had. Right? Right. It's, I want this in my lab now. Now, how does the culture of Mass Mutual, how is it evolv to be able to deliver the velocity that your customers are demanding? >>So if once in a while, you know, it's important to step yourself into the customer's shoes and think it from their, from their, from their perspective, business does not care how you're running your IT shop. What they care about is your stability of the product and the efficiencies of the product and, and, and how, how, how easy it is to reach out to the customers and how well we are serving the customers, right? So whether I'm implementing Docker in the background, Dr. Swam or es you know, business doesn't even care about it. What they really care about it is if your environment goes down, it's a problem. And, and, and if you, if your environment or if your solution is not as efficient as the business needs, that's the problem, right? So, so at that point, the business will step in. So our job is to make sure, you know, from an, from a technology perspective, how fast you can make implement it and how efficiently you can implement it. And at the same time, how do you play within the guardrails of security and compliance. >>So I was gonna ask you if you have VMware in your environment, cause a lot of clients compare what vCenter does for Kubernetes is really needed. And I think that's what you guys got going on. I I can say that you're the v center of Kubernetes. I mean, as a, as an as an metaphor, a place to manage it all is all 1, 1 1 paint of glass, so to speak. Is that how you see success in your environment? >>So virtualization has gone a long way, you know where we started, what we call bare metal servers, and then we virtualized operating systems. Now we are virtualizing applications and, and we are virtualizing platforms as well, right? So that's where Kubernetes basically got. >>So you see the need for a vCenter like thing for Uber, >>Definitely a need in the market in the way you need to think is like, you know, let's say there is, there is an insurance company who actually mented it and, and they gain the market advantage. Right? Now the, the the competition wants to do it as well, right? So, so, so there's definitely a virtualization of application layer that, that, that's very critical and it's, it's a critical component of cloud strategy as >>A whole. See, you're too humble to say it. 
I'll say you like the V center of Kubernetes, Explain what that means and your turn. If I said that to you, what would you react? How would you react to that? Would say bs or would you say on point, >>Maybe we should think about what does vCenter do today? Right? It's, it's so in my opinion, by the way, well vCenter in my opinion is one of the best platforms ever built. Like ha it's the best platform in my opinion ever built. It's, VMware did an amazing job because they took an IT engineer and they made him now be able to do storage management, networking management, VMs, multitenancy, access management audit, everything that you need to run a data center, you can do from a single, essentially single >>Platform, from a utility standpoint home >>Run. It's amazing, right? Yeah, it is because you are now able to empower people to do way more. Well why are we not doing that for Kubernetes? So the, the premise man Rafa was, well, oh, bless, I should have IT engineers, same engineers now they should be able to run fleets of clusters. That's what people that mass major are able to do now, right? So to that end, now you need cluster management, you need access management, you need blueprinting, you need policy management, you need ac, you know, all of these things that have happened before chargebacks, they used to have it in, in V center. Now they need to happen in other platforms. But for es so should do we do many of the things that vCenter does? Yes. >>Kind >>Of. Yeah. Are we a vCenter for es? Yeah, that is a John Forer question. >>All right, well, I, I'll, the speculation really goes back down to the earlier speed question. If you can take away the, the complexity and not make it more steps or change a tool chain or do something, then the devs move faster and the service layer that serves the business, the new organization has to enable speed. So this, this is becoming a, a real discussion point in the industry is that, oh yeah, we've got new tool, look at the shiny new toy. But if it doesn't move the needle, does it help productivity for developers? And does it actually scale up the enablement? That's the question. So I'm sure you guys are thinking about this a lot, what's your reaction? >>Yeah, absolutely. And one thing that just, you know, hit my mind is think about, you know, the hoteling industry before Airbnb and after Airbnb, right? Or, or, or the taxi industry, you know, before Uber and after Uber, right? So if I'm providing a platform, a Kubernetes platform for my application folks or for my application partners, they have everything ready. All they need to do is like, you know, build their application and deployed and running, right? They, they, they don't have to worry about provisioning of the servers and then building the middleware on top of it and then, you know, do a bunch of testing to make sure, you know, they, they, they iron out all the, all the compatible issues and whatnot. Yeah. Now, now, today, all I, all I say is like, hey, you have, we have a platform built for you. You just build your application and then deploy it in a development environment. That's where you put all the pieces of puzzle together, make sure you see your application working, and then the next thing that, that you do is like, you know, you know, build >>Production, chip, build production, go and chip release it. Yeah, that's the nirvana. But then we're there. I mean, we're there now we're there. So we see the future. Because if you, if that's the case, then the developers are the business. 
They have to be coding more features, they have to react to customers. They might see new business opportunities from a revenue standpoint that could be creatively built, got low code, no code, headless systems. These things are happening where this I call the architectural list environment where it's like, you don't need architecture, it's already happening. >>Yeah. And, and on top of it, you know, if, if someone has an idea, they want to implement an idea real quick, right? So how do you do it? Right? And, and, and you don't have to struggle building an environment to implement your idea and testers in real time, right? So, so from an innovation perspective, you know, agility plays a key role. And, and that, that's where the Kubernetes platforms or platforms like Kubernetes >>Plays. You know, Lisa, when we talked to Andy Chasy, when he was the CEO of aws, either one on one or on the cube, he always said, and this is kind of happening, companies are gonna be builders where it's not just utility. You need that table stakes to enable that new business idea. And so he, this last keynote, he did this big thing like, you know, think like your developers are the next entrepreneurial revenue generators. And I think that, I think starting to see that, what do you think about that? You see that coming sooner than later? Or is that in, in sight or is that still ways away? >>I, I think it's already happening at a level, at a certain level now. Now the question comes back to, you know, taking it to the reality, right? Yeah. I mean, you can, you can do your proof of concept, proof of technologies, and then, and then prove it out. Like, Hey, I got a new idea. This idea is great. Yeah. And, and it's to the business advantage, right? But we really want to see it in production live where your customers are actually >>Using it and the board meetings, Hey, we got a new idea that came in, generating more revenue, where'd that come from? Agile developer. Again, this is real. Yeah, >>Yeah. >>Absolutely agree. Yeah. I think, think both of you gentlemen said a word in, in your, as you were talking, you used the word guardrails, right? I think, you know, we're talking about rigidity, but you know, the really important thing is, look, these are enterprises, right? They have certain expectations. Guardrails is key, right? So it's automation with the guardrails. Yeah. Guardrails are like children, you know, you know, shouldn't be hurt. You know, they're seen but not hurt. Developers don't care about guard rails. They just wanna go fast. They also bounce >>Around a little bit. Yeah. Off the guardrails. >>One thing we know that's not gonna slow down is, is the expectations, right? Of all the consumers of this, the Ds the business, the, the business top line, and of course the customers. So the ability to, to really, as your website says, let's see, make life easy for platform teams is not trivial. And clearly what you guys are talking about here is you're, you're really an enabler of those platform teams, it sounds like to me. Yep. So, great work, guys. Thank you so much for both coming on the program, talking about what you're doing together, how you're seeing the, the evolution of Kubernetes, why, and really what the focus should be on those platform games. We appreciate all your time and your insights. >>Thank you so much for having us. Thanks >>For our pleasure. For our guests and for John Furrier, I'm Lisa Martin. You're watching The Cube Live, Cobe Con, Cloud Native con from Detroit. 
We'll be back with our next guest in just a minute, so stick around.

Published Date : Oct 27 2022

Haseeb Budhani & Santhosh Pasula, Rafay | KubeCon + CloudNativeCon NA 2022


 

(bright upbeat music) >> Hey, guys. Welcome back to Detroit, Michigan. Lisa Martin and John Furrier here live with "theCUBE" at KubeCon CloudNativeCon, North America. John, it's been a great day. This is day one of our coverage of three days of coverage. Kubernetes is growing up. It's maturing. >> Yeah, we got three days of wall-to-wall coverage, all about Kubernetes. We heard about Security, Large scale, Cloud native at scale. That's the big focus. This next segment's going to be really awesome. You have a fast growing private company and a practitioner, big name, blue chip practitioner, building out next-gen cloud. First transforming, then building out the next level. This is classic, what we call Super Cloud-Like interview. It's going to be great. I'm looking forward to this. >> Anytime we can talk about Super Cloud, right? Please welcome back, one of our alumni, Haseeb Budhani is here, the CEO of Rafay. Great to see you. Santhosh Pasula, also joins us, the global head of Cloud SRE at Mass Mutual. Guys, great to have you on the program. >> Thanks for having us. >> Thank you for having me. >> So, Haseeb, you've been on "theCUBE" many times. You were on just recently, with the momentum that's around us today with the maturation of Kubernetes, the collaboration of the community, the recognition of the community. What are some of the things that you're excited about with on day one of the show? >> Wow, so many new companies. I mean, there are companies that I don't know who are here. And I live in this industry, and I'm seeing companies that I don't know, which is a good thing. It means that the community's growing. But at the same time, I'm also seeing another thing, which is, I have met more enterprise representatives at this show than other KubeCons. Like when we hung out at in Valencia, for example, or even other places, it hasn't been this many people. Which means, and this is a good thing that enterprises are now taking Kubernetes seriously. It's not a toy. It's not just for developers. It's enterprises who are now investing in Kubernetes as a foundational component for their applications going forward. And that to me is very, very good. >> Definitely, becoming foundational. >> Haseeb: Yeah. >> Well, you guys got a great traction. We had many interviews at "theCUBE," and you got a practitioner here with you guys, are both pioneering, kind of what I call the next-gen cloud. First you got to get through Gen-One, which you guys done at Mass Mutual extremely well. Take us through the story of your transformation? 'Cause you're on at the front end now of that next inflection point. But take us through how you got here? You had a lot of transformation success at Mass Mutual? >> So, I was actually talking about this topic few minutes back. And the whole cloud journey in big companies, large financial institutions, healthcare industry or insurance sector, it takes generations of leadership to get to that perfection level. And ideally, the cloud for strategy starts in, and then how do you standardize and optimize cloud, right? That's the second-gen altogether, and then operationalization of the cloud. And especially if you're talking about Kubernetes, in the traditional world, almost every company is running middleware and their applications in middleware. And their containerization is a topic that came in. And Docker is basically the runtime containerization. 
So, that came in first, and from Docker, eventually when companies started adopting Docker, Docker Swarm is one of the technologies that they adopted. And eventually, when we were taking it to a more complicated application implementations or modernization efforts, that's when Kubernetes played a key role. And as Haseeb was pointing out, you never saw so many companies working on Kubernetes. So, that should tell you one story, right? How fast Kubernetes is growing, and how important it is for your cloud strategy. >> And your success now, and what are you thinking about now? What's on your agenda now? As you look forward, what's on your plate? What are you guys doing right now? >> So we are past the stage of proof of concepts, proof of technologies, pilot implementations. We are actually playing it, the real game now. In the past, I used the quote, like "Hello world to real world." So, we are actually playing in the real world, not in the hello world anymore. Now, this is where the real time challenges will pop up. So, if you're talking about standardizing it, and then optimizing the cloud, and how do you put your governance structure in place? How do you make sure your regulations are met? The demands that come out of regulations are met? And how are you going to scale it? And while scaling, how are you going to keep up with all the governance and regulations that come with it? So we are in that stage today. >> Haseeb talked about, you talked about the great evolution of what's going on at Mass Mutual. Haseeb talk a little bit about who? You mentioned one of the things that's surprising you about this KubeCon in Detroit, is that you're seeing a lot more enterprise folks here? Who's deciding in the organization and your customer conversations? Who are the decision makers in terms of adoption of Kubernetes these days? Is that elevating? >> Hmm. Well, this guy. (Lisa laughing) One of the things I'm seeing here, and John and I have talked about this in the past, this idea of a platform organization and enterprises. So, consistently what I'm seeing, is somebody, a CTO, CIO level, an individual is making a decision. I have multiple internal Bus who are now modernizing applications. They're individually investing in DevOps, and this is not a good investment for my business. I'm going to centralize some of this capability so that we can all benefit together. And that team is essentially a platform organization. And they're making Kubernetes a shared services platform so that everybody else can come and sort of consume it. So, what that means to us, is our customer is a platform organization, and their customer is a developer. So we have to make two constituencies successful. Our customer who's providing a multi-tenant platform, and then their customer, who's your developer, both have to be happy. If you don't solve for both, you know, constituencies, you're not going to be successful. >> So, you're targeting the builder of the infrastructure and the consumer of that infrastructure? >> Yes, sir. It has to be both. >> On the other side? >> Exactly, right. So that look, honestly, it takes iteration to figure these things out. But this is a consistent theme that I am seeing. In fact, what I would argue now, is that every enterprise should be really stepping back and thinking about what is my platform strategy? Because if you don't have a platform strategy, you're going to have a bunch of different teams who are doing different things, and some will be successful, and look, some will not be. 
And that is not good for business. >> Yeah, and Santhosh, I want to get to you. You mentioned your transformations, what you look forward, and your title, Global Head of Cloud, SRE. Okay, so SRE, we all know came from Google, right? Everyone wants to be like Google, but no one wants to be like Google, right? And no one is Google. Google's a unique thing. >> Haseeb: Only one Google. >> But they had the dynamic and the power dynamic of one person to large scale set of servers or infrastructure. But concept can be portable, but the situation isn't. So, Borg became Kubernetes, that's inside baseball. So, you're doing essentially what Google did at their scale, you're doing for Mass Mutual. That's kind of what's happening, is that kind of how I see it? And you guys are playing in there partnering? >> So, I totally agree. Google introduce SRE, Site Reliability Engineering. And if you take the traditional transformation of the roles, in the past, it was called operations, and then DevOps ops came in, and then SRE is the new buzzword. And the future could be something like Product Engineering. And in this journey, here is what I tell folks on my side, like what worked for Google might not work for a financial company. It might not work for an insurance company. It's okay to use the word, SRE, but end of the day, that SRE has to be tailored down to your requirements. And the customers that you serve, and the technology that you serve. >> This is why I'm coming back, this platform engineering. At the end of the day, I think SRE just translates to, you're going to have a platform engineering team? 'Cause you got to enable developers to be producing more code faster, better, cheaper, guardrails, policies. It's kind of becoming the, these serve the business, which is now the developers. IT used to serve the business back in the old days, "Hey, the IT serves the business." >> Yup. >> Which is a term now. >> Which is actually true now. >> The new IT serves the developers, which is the business. >> Which is the business. >> Because if digital transformation goes to completion, the company is the app. >> The hard line between development and operations, so that's thinning down. Over the time, that line might disappear. And that's where SRE is fitting in. >> Yeah, and then building platform to scale the enablement up. So, what is the key challenges? You guys are both building out together this new transformational direction. What's new and what's the same? The same is probably the business results, but what's the new dynamic involved in rolling it out and making people successful? You got the two constituents, the builders of the infrastructures and the consumers of the services on the other side. What's the new thing? >> So, the new thing, if I may go first. The faster market to value that we are bringing to the table, that's very important. Business has an idea. How do you get that idea implemented in terms of technology and take it into real time? So, that journey we have cut down. Technology is like Kubernetes. It makes an IT person's life so easy that they can speed up the process. In a traditional way, what used to take like an year, or six months, can be done in a month today, or less than that. So, there's definitely speed velocity, agility in general, and then flexibility. And then the automation that we put in, especially if you have to maintain like thousands of clusters. These are today, it is possible to make that happen with a click off a button. 
In the past, it used to take, probably, 100-person team, and operational team to do it, and a lot of time. But that automation is happening. And we can get into the technology as much as possible, but blueprinting and all that stuff made it possible. >> We'll save that for another interview. We'll do it deep time. (panel laughing) >> But the end user on the other end, the consumer doesn't have the patience that they once had, right? It's, "I want this in my lab now." How does the culture of Mass Mutual? How is it evolve to be able to deliver the velocity that your customers are demanding? >> Once in a while, it's important to step yourself into the customer's shoes and think it from their perspective. Business does not care how you're running your IT shop. What they care about is your stability of the product and the efficiencies of the product, and how easy it is to reach out to the customers. And how well we are serving the customers, right? So, whether I'm implementing Docker in the background, Docker Swam or Kubernetes, business doesn't even care about it. What they really care about, it is, if your environment goes down, it's a problem. And if your environment or if your solution is not as efficient as the business needs, that's the problem, right? So, at that point, the business will step in. So, our job is to make sure, from a technology perspective, how fast you can make implement it? And how efficiently you can implement it? And at the same time, how do you play within the guardrails of security and compliance? >> So, I was going to ask you, if you have VMware in your environment? 'Cause a lot of clients compare what vCenter does for Kubernetes is really needed. And I think that's what you guys got going on. I can say that, you're the vCenter of Kubernetes. I mean, as as metaphor, a place to manage it all, is all one paint of glass, so to speak. Is that how you see success in your environment? >> So, virtualization has gone a long way. Where we started, what we call bare metal servers, and then we virtualized operating systems. Now, we are virtualizing applications, and we are virtualizing platforms as well, right? So that's where Kubernetes plays a role. >> So, you see the need for a vCenter like thing for Kubernetes? >> There's definitely a need in the market. The way you need to think is like, let's say there is an insurance company who actually implement it today, and they gain the market advantage. Now, the the competition wants to do it as well, right? So, there's definitely a virtualization of application layer that's very critical, and it's a critical component of cloud strategy as a whole. >> See, you're too humble to say it. I'll say, you're like the vCenter of Kubernetes. Explain what that means in your term? If I said that to you, what would you react? How would you react to that? Would you say, BS, or would you say on point? >> Maybe we should think about what does vCenter do today? So, in my opinion, by the way, vCenter in my opinion, is one of the best platforms ever built. Like it's the best platform in my opinion ever built. VMware did an amazing job, because they took an IT engineer, and they made him now be able to do storage management, networking management, VM's multitenancy, access management, audit. Everything that you need to run a data center, you can do from essentially single platform. >> John: From a utility standpoint, home-run? >> It's amazing. >> Yeah. >> Because you are now able to empower people to do way more. 
Well, why are we not doing that for Kubernetes? So, the premise man Rafay was, well, I should have IT engineers, same engineers. Now, they should be able to run fleets of clusters. That's what people that Mass Mutual are able to do now. So, to that end, now you need cluster management, you need access management, you need blueprinting, you need policy management. All of these things that have happened before, chargebacks, they used to have it in vCenter, now they need to happen in other platforms but for Kubernetes. So, should we do many of the things that vCenter does? Yes. >> John: Kind of, yeah. >> Are we a vCenter for Kubernetes? >> No. >> That is a John Furrier question. >> All right, well, the speculation really goes back down to the earlier speed question. If you can take away the complexity and not make it more steps, or change a tool chain, or do something, then the Devs move faster. And the service layer that serves the business, the new organization, has to enable speed. This is becoming a real discussion point in the industry, is that, "Yeah, we got new tool. Look at the shiny new toy." But if it move the needle, does it help productivity for developers? And does it actually scale up the enablement? That's the question. So, I'm sure you guys are thinking about this a lot. What's your reaction? >> Yeah, absolutely. And one thing that just hit my mind, is think about the hoteling industry before Airbnb and after Airbnb. Or the taxi industry before Uber and after Uber. So, if I'm providing a platform, a Kubernetes platform for my application folks, or for my application partners, they have everything ready. All they need to do is build their application and deploy it, and run it. They don't have to worry about provisioning of the servers, and then building the Middleware on top of it, and then, do a bunch of testing to make sure they iron out all the compatible issues and whatnot. Now, today, all I say is like, "Hey, we have a platform built for you. You just build your application, and then deploy it in a development environment, that's where you put all the pieces of puzzle together. Make sure you see your application working, and then the next thing that you do is like, do the correction. >> John: Shipping. >> Shipping. You build the production. >> John: Press. Go. Release it. (laughs) That when you move on, but they were there. I mean, we're there now. We're there. So, we need to see the future, because that's the case, then the developers are the business. They have to be coding more features, they have to react to customers. They might see new business opportunities from a revenue standpoint that could be creatively built, got low code, no code, headless systems. These things are happening where there's, I call the Architectural List Environment where it's like, you don't need architecture, it's already happening. >> Yeah, and on top of it, if someone has an idea, they want to implement an idea real quick. So, how do you do it? And you don't have to struggle building an environment to implement your idea and test it in real time. So, from an innovation perspective, agility plays a key role. And that's where the Kubernetes platforms, or platforms like Kubernetes plays. >> You know, Lisa, when we talked to Andy Jassy, when he was the CEO of AWS, either one-on-one or on "theCUBE," he always said, and this is kind of happening, "Companies are going to be builders, where it's not just utility, you need that table stakes to enable that new business idea." 
And so, in this last keynote, he did this big thing like, "Think like your developers are the next entrepreneurial revenue generators." I think I'm starting to see that. What do you think about that? You see that coming sooner than later? Or is that an insight, or is that still ways away? >> I think it's already happening at a level, at a certain level. Now ,the question comes back to, you know, taking it to the reality. I mean, you can do your proof of concept, proof of technologies, and then prove it out like, "Hey, I got a new idea. This idea is great." And it's to the business advantage. But we really want to see it in production live where your customers are actually using it. >> In the board meetings, "Hey, we got a new idea that came in, generating more revenue, where'd that come from?" Agile Developer. Again, this is real. >> Yeah. >> Yeah. Absolutely agree. Yeah, I think both of you gentlemen said a word as you were talking, you used the word, Guardrails. We're talking about agility, but the really important thing is, look, these are enterprises, right? They have certain expectations. Guardrails is key, right? So, it's automation with the guardrails. Guardrails are like children, you know, shouldn't be heard. They're seen but not heard. Developers don't care about guardrails, they just want to go fast. >> They also bounce around a little bit, (laughs) off the guardrails. >> Haseeb: Yeah. >> One thing we know that's not going to slow down, is the expectations, right? Of all the consumers of this, the Devs, the business, the business top line, and, of course, the customers. So, the ability to really, as your website says, let's say, "Make Life Easy for Platform Teams" is not trivial. And clearly what you guys are talking about here, is you're really an enabler of those platform teams, it sounds like to me. >> Yup. >> So, great work, guys. Thank you so much for both coming on the program, talking about what you're doing together, how you're seeing the evolution of Kubernetes, why? And really, what the focus should be on those platform teams. We appreciate all your time and your insights. >> Thank you so much for having us. >> Thanks for having us. >> Our pleasure. For our guests and for John Furrier, I'm Lisa Martin. You're watching "theCUBE" Live, KubeCon CloudNativeCon from Detroit. We'll be back with our next guest in just a minute, so stick around. (bright upbeat music)

Published Date : Oct 27 2022
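One concrete way to picture the fleet-wide "blueprinting" and policy management described in the interview above is a loop that pushes the same declarative baseline to every cluster an operations team owns. The sketch below is not Rafay's or Mass Mutual's tooling; it is a minimal illustration using the Kubernetes Python client, and the namespace name, quota values, and kubeconfig-context handling are assumptions made for the example.

```python
# Rough sketch of fleet-wide "blueprinting" (not Rafay's implementation): push the
# same baseline objects -- here a namespace and a ResourceQuota -- to every cluster
# context in the local kubeconfig. Names and quota values are invented for the example.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE = "team-payments"  # assumed namespace the blueprint provisions

def apply_blueprint(api_client):
    core = client.CoreV1Api(api_client)
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=NAMESPACE))
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="baseline-quota", namespace=NAMESPACE),
        spec=client.V1ResourceQuotaSpec(hard={"requests.cpu": "8", "requests.memory": "16Gi"}),
    )
    for create, obj in ((core.create_namespace, ns),
                        (lambda o: core.create_namespaced_resource_quota(NAMESPACE, o), quota)):
        try:
            create(obj)
        except ApiException as err:
            if err.status != 409:  # 409 Conflict: already exists, treat as idempotent
                raise

contexts, _active = config.list_kube_config_contexts()
for ctx in contexts:
    # One API client per cluster; a real fleet manager parallelizes this and
    # reports drift centrally instead of looping from a laptop.
    apply_blueprint(config.new_client_from_config(context=ctx["name"]))
    print(f"blueprint applied to {ctx['name']}")
```

A platform product would layer access management, audit, and drift detection on top of this idea; the loop only shows why one team can operate many clusters once the baseline is expressed declaratively.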

Michael Cade, Veeam | VeeamON 2022


 

(calm music) >> Hi everybody. We're here at VeeamON 2022. This is day two of the CUBE's continuous coverage. I'm Dave Vellante. My co-host is Dave Nicholson. A ton of energy. The keynotes, day two keynotes are all about products at Veeam. Veeam, the color of green, same color as money. And so, and it flows in this ecosystem. I'll tell you right now, Michael Cade is here. He's the senior technologist for product strategy at Veeam. Michael, fresh off the keynotes. >> Yeah, yeah. >> Welcome. Danny Allen's keynote was fantastic. I mean, that story he told blew me away. I can't wait to have him back. Stay tuned for that one. But we're going to talk about protecting containers, Kasten. You guys got announcements of Kasten by Veeam, you call it K10 version five, I think? >> Yeah. So just rolled into 5.0 release this week. Now, it's a bit different to what we see from a VBR release cycle kind of thing, cause we're constantly working on a two week sprint cycle. So as much as 5.0's been launched and announced, we're going to see that trickling out over the next couple of months until we get round to Cube (indistinct) and we do all of this again, right? >> So let's back up. I first bumped into Kasten, gosh, it was several years ago at VeeamON. Like, wow this is a really interesting company. I had deep conversations with them. They had a sheer, sheer cat grin, like something was going on and okay finally you acquire them, but go back a little bit of history. Like why the need for this? Containers used to be ephemeral. You know, you didn't have to persist them. That changed, but you guys are way ahead of that trend. Talk a little bit more about the history there and then we'll get into current day. >> Yeah, I think the need for stateful workloads within Kubernetes is absolutely grown. I think we just saw 1.24 of Kubernetes get released last week or a couple of weeks ago now. And really the focus there, you can see, at least three of the big ticket items in that release are focused around storage and data. So it just encourages that the community is wanting to put these data services within that. But it's also common, right? It's great to think about a stateless... If you've got stateless application but even a web server's got some state, right? There's always going to be some data associated to an application. And if there isn't then like, great but that doesn't really work- >> You're right. Where'd they click, where'd they go? I mean little things like that, right? >> Yeah. Yeah, exactly. So one of the things that we are seeing from that is like obviously the requirement to back up and put in a lot of data services in there, and taking full like exposure of the Kubernetes ecosystem, HA, and very tiny containers versus these large like virtual machines that we've always had the story at Veeam around the portability and being able to move them left, right, here, there, and everywhere. But from a K10 point of view, the ability to not only protect them, but also move those applications or move that data wherever they need to be. >> Okay. So, and Kubernetes of course has evolved. I mean the early days of Kubernetes, they kept it simple, kind of like Veeam actually. Right? >> Yeah. >> And then, you know, even though Mesosphere and even Docker Swarm, they were trying to do more sophisticated cluster management. Kubernetes has now got projects getting much more complicated. So more complicated workloads mean more data, more critical data means more protection. 
Okay, so you acquire Kasten, we know that's a small part of your business today but it's going to be growing. We know this cause everybody's developing applications. So what's different about protecting containers? Danny talks about modern data protection. Okay, when I first heard that, I'm like, eh, nice tagline, but then he peel the onion. He explains how in virtualization, you went from agents to backing up of VMware instance, a virtual instance. What's different about containers? What constitutes modern data protection for containers? >> Yeah, so I think the story that Danny tells as well, is so when we had our physical agents and virtualization came along and a lot of... And this is really where Veeam was born, right, we went into the virtualization API, the VMware API, and we started leveraging that to be more storage efficient. The admin overhead around those agents weren't there then, we could just back up using the API. Whereas obviously a lot of our competition would use agents still and put that resource overhead on top of that. So that's where Veeam initially got the kickstart in that world. I think it's very similar to when it comes to Kubernetes because K10 is deployed within the Kubernetes cluster and it leverages the Kubernetes API to pull out that data in a more efficient way. You could use image based backups or traditional NAS based backups to protect some of the data, and backup's kind of the... It's only one of the ticks in the boxes, right? You have to be able to restore and know what that data is. >> But wait, your competitors aren't as fat, dumb and happy today as they were back then, right? So it can't... They use the same APIs and- >> Yeah. >> So what makes you guys different? >> So I think that's testament to the Kubernetes and the community behind that and things like the CSI driver, which enables the storage vendors to take that CSI abstraction layer and then integrate their storage components, their snapshot technologies, and other efficiency models in there, and be able to leverage that as part of a universal data protection API. So really that's one tick in the box and you're absolutely right, there's open source tools that can do exactly what we're doing to a degree on that backup and recovery. Where it gets really interesting is the mobility of data and how we're protecting that. Because as much as stateful workloads are seen within the Kubernetes environments now, they're also seen outside. So things like Amazon RDS, but the front end lives in Kubernetes going to that stateless point. But being able to protect the whole application and being very application aware means that we can capture everything and restore wherever we want that to go as well. Like, so the demo that I just did was actually a Postgres database in AWS, and us being able to clone or migrate that out into an EKS cluster as a staple set. So again, we're not leveraging RDS at that point, but it gives us the freedom of movement of that data. >> Yeah, I want to talk about that, what you actually demoed. One of the interesting things, we were talking earlier, I didn't see any CLI when you were going through the integration of K10 V5 and V12. >> Yeah. >> That was very interesting, but I'm more skeptical of this concept, of the single pane of glass and how useful that is. Who is this integration targeting? Are you targeting the sort of traditional Veeam user who is now adding as a responsibility, the management of protecting these Kubernetes environments? 
Or are you at the same time targeting the current owners of those environments? Cause I know you talk about shift left and- >> Yeah. >> You know, nobody needs Kubernetes if you only have one container and one thing you're doing. So at some point it's all about automation, it's about blueprints, it's about getting those things in early. So you get up, you talk about this integration, who cares about that kind of integration? >> Yeah, so I think it's a bit of both, right? So we're definitely focused around the DevOps focused engineer. Let's just call it that. And under an umbrella, the cloud engineer that's looking after Kubernetes, from an application delivery perspective. But I think more and more as we get further up the mountain, CIS admin, obviously who we speak to the tech decision makers, the solutions architects systems engineers, they're going to inherit and be that platform operator around the Kubernetes clusters. And they're probably going to land with the requirement around data management as well. So the specific VBR centralized management is very much for the backup admin, the infrastructure admin or the cloud based engineer that's looking after the Kubernetes cluster and the data within that. Still we speak to app developers who are conscious of what their database looks like, because that's an external data service. And the biggest question that we have or the biggest conversation we have with them is that the source code, the GitHub or the source repository, that's fine, that will get your... That'll get some of the way back up and running, but when it comes to a Postgres database or some sort of data service, oh, that's out of the CI/CD pipeline. So it's whether they're interested in that or whether that gets farmed out into another pre-operations, the traditional operations team. >> So I want to unpack your press release a little bit. It's full of all the acronyms, so maybe you can help us- >> Sure. >> Cipher. You got security everywhere enhance platform hardening, including KMS. That's key- >> Yeah, key management service, yeah. >> System, okay. With AWS, KMS and HashiCorp vault. Awesome, love to see HashiCorp company. >> Yeah. >> RBAC objects in UI dashboards, ransomware attacks, AWS S3. So anyway, security everywhere. What do you mean by that? >> So I think traditionally at Veeam, and continue that, right? From a security perspective, if you think about the failure scenario and ransomware's, the hot topic, right, when it comes to security, but we can think about security as, if we think about that as the bang, right, the bang is something bad's happen, fire, flood, blood, type stuff. And we tend to be that right hand side of that, we tend to be the remediation. We're definitely the one, the last line of defense to get stuff back when something really bad happens. And I think what we've done from a K10 point of view, is not only enhance that, so with the likes of being able to... We're not going to reinvent the wheel, let's use the services that HashiCorp have done from a HashiCorp vault point of view and integrate from a key management system. But then also things like S3 or ransomware prevention. So I want to know if something bad's happened and Kasten actually did something more generic from a Veeam ONE perspective, but one of the pieces that we've seen since we've then started to send our backups to an immutable object storage, is let's be more of that left as well and start looking at the preventative tasks that we can help with. 
Now, we're not going to be a security company, but you heard all the way through Danny's like keynote, and probably when he is been on here, is that it's always, we're always mindful of that security focus. >> On that point, what was being looked for? A spike in CPU utilization that would be associated with encryption? >> Yeah, exactly that. >> Is that what was being looked- >> That could be... Yeah, exactly that. So that could be from a virtual machine point of view but from a K10, and it specifically is that we're going to look at the S3 bucket or the object storage, we're going to see if there's a rate of change that's out of the normal. It's an abnormal rate. And then with that, we can say, okay, that doesn't look right, alert us through observability tools, again, around the cloud native ecosystem, Prometheus Grafana. And then we're going to get insight into that before the bang happens, hopefully before the bang. >> So that's an interesting when we talk about adjacencies and moving into this area of security- >> We're talking to Zeus about that too. >> Exactly. That's that sort of creep where you can actually add value. It's interesting. >> So, okay. So we talked about shift left, get that, and then expanded ecosystem, industry leading technologies. By the way, one of them is the Red Hat Marketplace. And I think, I heard Anton's... Anton was amazing. He is the head of product management at Veeam. Is been to every VeeamON. He's got family in Ukraine. He's based in Switzerland. >> Yeah. >> But he chose not to come here because he's obviously supporting, you know, the carnage that's going on in Ukraine. But anyway, I think he said the Red Hat team is actually in Ukraine developing, you know, while the bombs are dropping. That's amazing. But anyway, back to our interview here, expanded ecosystem, Red Hat, SUSE with Rancher, they've got some momentum. vSphere with Tanzu, they're in the game. Talk about that ecosystem and its importance. >> Yeah, and I think, and it goes back to your point around the CLI, right? Is that it feels like the next stage of Kubernetes is going to be very much focused towards the operator or the operations team. The CIS admin of today is going to have to look after that. And at the moment it's all very command line, it's all CLI driven. And I think the marketplace is OpenShift, being our biggest foothold around our customer base, is definitely around OpenShift. But things like, obviously we are a longstanding alliance partner with VMware as well. So their Tanzu operations actually there's support for TKGS, so vSphere Tanzu grid services is another part of the big release of 5.0. But all three of those and the common marketplace gives us a UI, gives us a way of being able to see and visualize that rather than having to go and hunt down the commands and get our information through some- >> Oh, some people are going to be unhappy about that. >> Yeah. >> But I contend the human eye has evolved to see in color for a very good reason. So I want to see things in red, yellow, and green at times. >> There you go, yeah. >> So when we hear a company like Veeam talk about, look we have no platform agenda, we don't care which cloud it's in. We don't care if it's on-prem or Google Azure, AWS. We had Wasabi on, we have... Great, they got an S3 compatible, you know, target, and others as well. 
When we hear them, companies like you, talk about that consistent experience, single pane of glass that you're skeptical of, maybe cause it's technically challenging, one of the things, we call it super cloud, right, that's come up. Danny and I were riffing on that the other day and we'll do that more this afternoon. But it brings up something that we were talking about with Zeus, Dave, which is the edge, right? And it seems like Kubernetes, and we think about OpenShift. >> Yeah. >> We were there last week at Red Hat Summit. It's like 50% of the conversation, if not more, was the edge. Right, and really true edge, worst cases, use cases. Two weeks ago we were at Dell Tech, there was a lot of edge talk, but it was retail stores, like Lowe's. Okay, that's kind of near edge, but the far edge, we're talking space, right? So seems like Kubernetes fits there and OpenShift, you know, particularly, as well as some of the others that we mentioned. What about edge? How much of what you're doing with container data protection do you see as informing you about the edge opportunity? Are you seeing any patterns there? Nobody's really talking about it in data protection yet. >> So yeah, large scale numbers of these very small clusters that are out there on farms or in wind turbines, and that is definitely something that is being spoken about. There's not much mention actually in this 5.0 release because we actually support things like K3s,(indistinct), that all came in 4.5, but I think, to your first point as well, David, is that, look, we don't really care what that Kubernetes distribution is. So you've got K3s lightweight Kubernetes distribution, we support it, because it uses the same native Kubernetes APIs, and we get deployed inside of that. I think where we've got these large scale and large numbers of edge deployments of Kubernetes and that you require potentially some data management down there, and they might want to send everything into a centralized location or a more centralized location than a farm shed out in the country. I think we're going to see a big number of that. But then we also have our multi cluster dashboard that gives us the ability to centralize all of the control plane. So we don't have to go into each individual K10 deployment to manage those policies. We can have one big centralized management multi cluster dashboard, and we can set global policies there. So if you're running a database and maybe it's the same one across all of your different edge locations, where you could just set one policy to say I want to protect that data on an hourly basis, a daily basis, whatever that needs to be, rather than having to go into each individual one. >> And then send it back to that central repository. So that's the model that you see, you don't see the opportunity, at least at this point in time, of actually persisting it at the edge? >> So I think it depends. I think we see both, but again, that's the footprint. And maybe like you mentioned about up in space having a Kubernetes cluster up there. You don't really want to be sending up a NAS device or a storage device, right, to have to sit alongside it. So it's probably, but then equally, what's the art of the possible to get that back down to our planet, like as part of a consistent copy of data? >> Or even a farm or other remote locations. The question is, I mean, EVs, you know, we believe there's going to be tons of data, we just don't.. You think about Tesla as a use case, they don't persist a ton of their data. 
Maybe if a deer runs across, you know, the front of the car, oh, persist that, send that back to the cloud. >> I don't want anyone knowing my Tesla data. I'll tell you that right now. (all laughing) >> Well, there you go, that one too. All right, well, that's future discussion, we're still trying to squint through those patterns. I got so many questions for you, Michael, but we got to go. Thanks so much for coming to theCUBE. >> Always. >> Great job on the keynote today and good luck. >> Thank you. Thanks for having me. >> All right, keep it right there. We got a ton of product talk today. As I said, Danny Allan's coming back, we got the ecosystem coming, a bunch of the cloud providers. We have, well, iland was up on stage. They were just recently acquired by 11:11 Systems. They were an example today of a cloud service provider. We're going to unpack it all here on theCUBE at VeeamON 2022 from Las Vegas at the Aria. Keep it right there. (calm music)

Published Date : May 18 2022
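The agentless, API-driven approach Michael Cade describes in the interview above, with K10 deployed inside the cluster and driving the Kubernetes and CSI snapshot APIs rather than per-machine agents, rests on primitives like the CSI VolumeSnapshot. The sketch below is not Kasten's code; it simply requests a snapshot of a PVC through the standard snapshot.storage.k8s.io API, and the namespace, PVC name, and snapshot class are invented for the example.

```python
# Minimal sketch (not Kasten K10's implementation): ask the cluster for a CSI
# VolumeSnapshot of a PVC through the standard snapshot.storage.k8s.io API --
# the kind of API-driven, agentless primitive an in-cluster backup tool builds on.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "postgres-data-snap", "namespace": "demo"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",                 # assumed class name
        "source": {"persistentVolumeClaimName": "postgres-data"},   # assumed PVC name
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="demo",
    plural="volumesnapshots",
    body=snapshot,
)
```

Because the snapshot request is just another Kubernetes object, the same call works against any conformant distribution, which is what makes application-aware backup and mobility across clusters possible without reaching into the guest.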

Exploring The Rise of Kubernetes With Two Insiders


 

>>Hi everybody. This is Dave Volante. Welcome to this cube conversation where we're going to go back in time a little bit and explore the early days of Kubernetes. Talk about how it formed the improbable events, perhaps that led to it. And maybe how customers are taking advantage of containers and container orchestration today, and maybe where the industry is going. Matt Provo is here. He's the founder and CEO of storm forge and Chandler Huntington hoes. Hoisington is the general manager of EKS edge and hybrid AWS guys. Thanks for coming on. Good to see you. Thanks for having me. Thanks. So, Jenny, you were the vice president of engineering at miso sphere. Is that, is that correct? >>Well, uh, vice-president engineering basis, fear and then I ran product and engineering for DTQ masons. >>Yeah. Okay. Okay. So you were there in the early days of, of container orchestration and Matt, you, you were working at a S a S a Docker swarm shop, right? Yep. Okay. So I mean, a lot of people were, you know, using your platform was pretty novel at the time. Uh, it was, it was more sophisticated than what was happening with, with Kubernetes. Take us back. What was it like then? Did you guys, I mean, everybody was coming out. I remember there was, I think there was one Docker con and everybody was coming, the Kubernetes was announced, and then you guys were there, doc Docker swarm was, was announced and there were probably three or four other startups doing kind of container orchestration. And what, what were those days like? Yeah. >>Yeah. I wasn't actually atmosphere for those days, but I know them well, I know the story as well. Um, uh, I came right as we started to pivot towards Kubernetes there, but, um, it's a really interesting story. I mean, obviously they did a documentary on it and, uh, you know, people can watch that. It's pretty good. But, um, I think that, from my perspective, it was, it was really interesting how this happened. You had basically, uh, con you had this advent of containers coming out, right? So, so there's new novel technology and Solomon, and these folks started saying, Hey, you know, wait a second, wait if I put a UX around these couple of Linux features that got launched a couple of years ago, what does that look like? Oh, this is pretty cool. Um, so you have containers starting to crop up. And at the same time you had folks like ThoughtWorks and other kind of thought leaders in the space, uh, starting to talk about microservices and saying, Hey, monoliths are bad and you should break up these monoliths into smaller pieces. >>And any Greenfield application should be broken up into individuals, scalable units that a team can can own by themselves, and they can scale independent of each other. And you can write tests against them independently of other components. And you should break up these big, big mandalas. And now we are kind of going back to model this, but that's for another day. Um, so, so you had microservices coming out and then you also had containers coming out, same time. So there was like, oh, we need to put these microservices in something perfect. We'll put them in containers. And so at that point, you don't really, before that moment, you didn't really need container orchestration. You could just run a workload in a container and be done with it, right? You didn't need, you don't need Kubernetes to run Docker. Um, but all of a sudden you had tons and tons of containers and you had to manage these in some way. 
>>And so that's where container orchestration came, came from. And, and Ben Heineman, the founder of Mesa was actually helping schedule spark at the time at Berkeley. Um, and that was one of the first workloads with spark for Macy's. And then his friends at Twitter said, Hey, come over, can you help us do this with containers at Twitter? He said, okay. So when it helped them do it with containers at Twitter, and that's kinda how that branch of the container wars was started. And, um, you know, it was really, really great technology and it actually is still in production in a lot of shops today. Um, uh, more and more people are moving towards Kubernetes and Mesa sphere saw that trend. And at the end of the day, Mesa sphere was less concerned about, even though they named the company Mesa sphere, they were less concerned about helping customers with Mesa specifically. They really want to help customers with these distributed problems. And so it didn't make sense to, to just do Mesa. So they would took on Kubernetes as well. And I hope >>I don't do that. I remember, uh, my, my co-founder John furrier introduced me to Jerry Chen way back when Jerry is his first, uh, uh, VC investment with Greylock was Docker. And we were talking in these very, obviously very excited about it. And, and his Chandler was just saying, it said Solomon and the team simplified, you know, containers, you know, simple and brilliant. All right. So you guys saw the opportunity where you were Docker swarm shop. Why? Because you needed, you know, more sophisticated capabilities. Yeah. But then you, you switched why the switch, what was happening? What was the mindset back then? We ran >>And into some scale challenges in kind of operationalize or, or productizing our kind of our core machine learning. And, you know, we, we, we saw kind of the, the challenges, luckily a bit ahead of our time. And, um, we happen to have someone on the team that was also kind of moonlighting, uh, as one of the, the original core contributors to Kubernetes. And so as this sort of shift was taking place, um, we, we S we saw the flexibility, uh, of what was becoming Kubernetes. Um, and, uh, I'll never forget. I left on a Friday and came back on a Monday and we had lifted and shifted, uh, to Kubernetes. Uh, the challenge was, um, you know, you, at that time, you, you didn't have what you have today through EKS. And, uh, those kinds of services were, um, just getting that first cluster up and running was, was super, super difficult, even in a small environment. >>And so I remember we, you know, we, we finally got it up and running and it was like, nobody touch it, don't do anything. Uh, but obviously that doesn't, that doesn't scale either. And so that's really, you know, being kind of a data science focused shop at storm forge from the very beginning. And that's where our core IP is. Uh, our, our team looked at that problem. And then we looked at, okay, there are a bunch of parameters and ways that I can tune this application. And, uh, why are the configurations set the way that they are? And, you know, uh, is there room to explore? And that's really where, unfortunately, >>Because Mesa said much greater enterprise capabilities as the Docker swarm, at least they were heading in that direction, but you still saw that Kubernetes was, was attractive because even though it didn't have all the security features and enterprise features, because it was just so simple. 
>>Well, it's interesting, because we said at the time: Amazon obviously invented the modern cloud. Microsoft has the advantage of this huge software estate; hey, just now run it in the cloud. Okay, great, so they had their entry point. Google didn't have an entry point. This was kind of a Hail Mary against Amazon, and I wrote a piece on the improbable rise of Kubernetes to become the OS of the cloud. But I asked, did it make sense for Google to do that? It never made any money off of it, but I would argue they'd be irrelevant if they hadn't done it, and it didn't really hurt. It certainly didn't hurt Amazon: EKS, and you do containers, and your customers, you've embraced it. I don't know what it was like in the early days, but I've talked to Amazon people about this, and it's like, okay, we saw it, and then we talked to customers: what are they doing? That's kind of the mindset, right? >>Yeah. I've been at Amazon a couple of years now, and you hear the stories: we're customer obsessed, we listen to our customers. Okay, okay. Every company has values, you get told them on your first day when you're hired and you never really think about them again, but at Amazon that really is preached every day. It really is. And we really do listen to our customers. So when customers started asking for Kubernetes, we said okay, and we built it for them. It's really that simple. And it's not as simple as just building them a Kubernetes service, either. Amazon has a big commitment now to getting involved more in the community and working with folks like StormForge, and really listening to customers and what they want. They want us working with folks like StormForge, and that's why we're doing things like this. >>It's interesting, because of course everybody looks at the ecosystem and says, oh, Amazon's going to kill the ecosystem. And then we saw an article the other day, I think it was CRN, great job by Amazon PR, about Snowflake and Amazon's relationship. I've said many times Snowflake probably drives more business for Amazon than just about any other ISV out there. So yeah, maybe the Redshift guys might not love Snowflake, but Amazon in general, they're doing great things. And I remember Andy Jassy said to me one time: look, we love the ecosystem, we need the ecosystem, and they have to innovate too. If they don't keep pace, they're going to be in trouble.
>>So that's actually a healthy kind of dynamic. I mean, as an ecosystem partner, how do you see it? >>Well, I'll go back to one thing: without the work that Google did to open-source Kubernetes, a StormForge wouldn't exist. But without the effort that AWS, and EKS in particular, provides and opens up for developers to innovate and to continue operationalizing the shift to Kubernetes, we wouldn't have nearly the opportunity that we do to actually listen to the users and be able to say, what do you want? Our entire reason for existence comes from asking users: how painful is this process? How much confidence do you have in the out-of-the-box defaults that ship with your database or whatever it is? And how much do you love manually tuning your application? Obviously nobody said, I love that. So I think as that ecosystem comes together and continues expanding, it opens up a huge opportunity, not only for existing EKS and AWS users to continue innovating, but for companies like StormForge to be able to provide that opportunity for them as well. That's pretty powerful. Without a lot of the moves they've made, the door wouldn't be nearly as open for companies that are growing quickly but are smaller to be able to exist. >>And I was saying earlier, I wrote about this, you're going to get better capabilities. You're clearly seeing that: cluster management, as we've talked about, better automation, security, the whole shift-left movement. So obviously there's a lot of momentum right now for Kubernetes. When you think about bare-metal servers and storage, and then VM virtualization, VMware really, and then containers, and then Kubernetes as another abstraction, I would expect we're not at the end of the road here. What's next? Is there another abstraction layer that you think is coming? >>For a while it looked like, and I remember even our board members and some of our investors asked, well, what about serverless? What's the next Kubernetes? As much as I love Kubernetes, which I do, and we do, nothing about what we particularly do is locked to it. We are purpose-built for Kubernetes, but from a core machine learning and problem-solving standpoint we could apply this elsewhere if we went that direction. So time will tell what will be next. There will be something that ends up expanding beyond Kubernetes at some point, but without knowing what that is, our job is to serve our customers and our users in the way they are asking. >>Well, serverless obviously is exploding. When you look, and we track the ETR survey data, at the services within Amazon and other cloud providers, the functions are off the charts. So that's kind of interesting and notable. Now, of course, Chandler, you've got edge in your title, you've got hybrid in your title. So this notion of the cloud expanding: it's not just a set of remote services only in the public cloud; now it's coming to on-premises.
>>You've actually got Andy Jassy in my head. He said one time, we just look at the data center as another edge location. Right, okay, that's a way to look at it. And then you've got edge. So the cloud is expanding, isn't it? The definition of cloud is evolving. >>Yeah, that's right. Customers want to run workloads in lots of places, and that's why we have things like Local Zones and Wavelength and Outposts and EKS Anywhere and EKS Distro, and probably lots more things to come. I always think of Amazon's Kubernetes strategy on a manageability scale. On one far end of the spectrum you have EKS Distro, which is just a collection of the core Kubernetes packages, and you could take those and stand them up yourself in a broom closet in a retail shop. On the other far end of the spectrum you have EKS on Fargate, where you can just give us your container and we'll handle everything for you. And then we've tried to solve everything in between, for your data center and for the cloud. So you can really ask Amazon: I want you to manage my control plane, I want you to manage this much of my worker nodes, et cetera, and oh, I actually want help on-prem. We're just trying to listen to customers and solve their problems where they're asking us to solve them.
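As a rough illustration of the managed middle of the spectrum Hoisington describes, where AWS runs the control plane and a managed node group supplies the workers, here is a minimal sketch of an eksctl cluster config. The cluster name, region, instance type and node count are invented for the example, and the fields assume eksctl's ClusterConfig schema rather than anything stated in the conversation.

```yaml
# Hypothetical eksctl config: AWS manages the Kubernetes control plane,
# and the managed node group below supplies the worker capacity.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster    # placeholder name
  region: us-east-1     # placeholder region
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
```

Sliding toward the Fargate end of that spectrum means handing even the node management in a file like this over to AWS, while the EKS Distro end means taking only the packages and operating everything yourself.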
>>Go ahead. >>No, I would just add that, in a more vertically focused orientation for us, we believe optimization capabilities should transcend the location itself. So whether that's part public, part private cloud, and that's part of what I love about EKS Anywhere, you should still be able to achieve optimal results that connect to your business objectives, wherever those workloads are living. >>So John and I coined this term called Supercloud, and people laugh about it, but it's different. People talk about multi-cloud, but that was really just vendor diversity: I'm running here, I'm running there, running anywhere, but individually. Supercloud is this concept of an abstraction layer that floats wherever you are, whether it's on-prem or across clouds, and you're taking advantage of those native primitives and then hiding that underlying complexity. That's what, at re:Invent, the ecosystem was so excited about. They didn't call it Supercloud, we called it that, but they're clearly thinking differently about the value they can add on top. Goldman Sachs, to me, is an example of a Supercloud: they're taking their on-prem data and their software tooling, connecting it to AWS, running it on AWS, but abstracting that complexity. And I think you're going to see a lot more of that. >>Yeah. So Kubernetes itself, in many cases, is being abstracted away. There's sort of a disappearing act for Kubernetes, and I don't mean that from an adoption standpoint, but Kubernetes itself is increasingly being abstracted away, which I think is actually super interesting. >>Kubernetes doesn't really do anything for a company by itself. Like, we run Kubernetes; how does that help your bottom line? At the end of the day, companies don't care that they're running Kubernetes. They're trying to solve a problem, which is: I need to be able to deploy my applications, I need to be able to scale them easily, I need to be able to update them easily. Those are the things they're trying to solve. So if you can give them some other way to do that, I'm sure that's what they want. It's not like a big bank is making more money because they're running Kubernetes. That's not the case. >>It gets subsumed. It just becomes invisible. >>Right, exactly. >>You guys back to the office yet? What's the situation? >>You know, I work from my house, and we go into the office a couple of times a week. It's a crazy time to be managing and hiring, and it's definitely a challenge, but there are a lot of benefits to working from home. I've got two young kids, so I get to see them grow up a little bit more, working out of my house. >>Nice. >>Even as a smaller startup, we're in 26, 27 states, plus Canada and Germany, and we've got a little bit of presence in Japan, so we're very much distributed. We have not gone back, and I'm not sure we will. >>Permanently remote, potentially. >>Yeah. For us, the timing of our Series B funding, which is when we started hiring a lot, was just before COVID started really picking up. So we thankfully made a pretty good strategic decision to say, we're going to go where the talent is. And yeah, talent was harder to find, for sure; it's incredibly competitive. But it was a good decision for us. We are very deliberate about getting the teams together in person as often as possible, and in the safest way possible, obviously. But it's been a pretty interesting journey for us, and something I'm not sure I would change, to be honest with you. >>Well, Frank Slootman moved Snowflake's HQ to Montana, and then you've got folks like Michael Dell saying, hey, same thing: wherever people want to work, bring yourself, and wherever you are is cool. Do you think that hybrid mode for your team is kind of the operating mode for the foreseeable future? >>No, I think there are a lot of benefits to working from the office too. I don't think you can deny the face-to-face interactions. It feels good just doing this interview face to face, and I can see your mouth move, so there are a lot of benefits to that over a Chime call or a Zoom call or whatever. That said, home also has advantages: you can be more focused at home. I think some version of hybrid is probably in the industry's future. I don't know what Amazon's exact plans are, that's above my pay grade, but in general the industry is definitely moving to some kind of hybrid model. And like Matt said, getting people together matters. At Mesosphere we ran a very diverse, remote workforce. We had a big office in Germany, but we'd get everybody together a couple of times a year for an engineering week or something like that, and you'd get a hundred people dedicated to spending time together at a hotel in Vegas or Hamburg or wherever.
And it's a really good time. I think that's a good model. >>Yeah. And just to add more ETR data: the current thinking is that hybrid is the number one model. CIOs believe about 36% of the workforce is going to be hybrid permanently, a couple of days in, a couple of days out. And the percentage that is fully remote is significantly higher than it used to be, probably high twenties, whereas historically it was maybe 15%. So these are permanent changes, and that changes the infrastructure you need to support it, the security models, how you communicate, everything. >>When COVID really started hitting in 2020, the big banks, for example, had to move fast. You want to talk about innovation and the ability to shift quickly: two of the bigger banks that have in fact adopted Kubernetes were able to shift pretty quickly, moving systems and things that historically lived in the office all the time. Some of that has obviously shifted back to a certain degree, but that ability was pretty remarkable to see, actually, for some of the larger banks and others operating in super-regulated environments. We saw it in government agencies and such as well. >>Well, without the cloud, this never would have happened. >>And I think it's funny. I remember some of the more old-school managers thinking people aren't going to work as much when they're working from home, they're going to be distracted. I think you're seeing the opposite, where people work too much and get burned out, because you're just running on your computer all day. So I think we're learning, the whole industry is learning, what it really means to work from home. It's a fascinating case study that we're all a part of right now. >>I was talking to my wife last night about this, and she's very thoughtful. When she was in the workforce she was at a PR firm, and a guest speaker came in, it might even have been the CEO of the company, asking: on average, who leaves the office by five o'clock? A few hands went up. Who stays until eight o'clock? And hands went up. And then he asked those people, why can't you get your work done in an eight-hour workday? That's interesting, because she's always looking at me like, why can't you get it done? And I'm saying the world has changed. It really has; people are just on all the time. I'm not sure it's sustainable, quite frankly. I think organizations have to think about that, and I see companies doing it, you guys probably do as well: take a four-day weekend, or a week, just for your head. But there's no playbook. >>Yeah, like I said, we're part of a case study. It's also hard because people are distributed now. So you have your meetings on the east coast, and you wake up at seven for those, and then you have meetings on the west coast and you stay until seven o'clock for those, so your day just stretches out. You've got to manage this, and I think we'll figure it out. We're good at figuring this stuff out. >>There's a rise in asynchronous communication.
So with things like Slack and other tools, as helpful as they are, in many cases it's an always-on mentality. People look for that little green dot, and if it's on, you're online. My kids have a term for me now, because my office at home is upstairs and I'll come down during the day, and they'll say, oh, Dad, you're going for a walk-and-talk. That was my way of getting away from the desk, getting away from Zoom, and even in Boston getting outside, trying to at least get a little exercise and get my head away from the computer screen. But even then it's often, oh, I'll get a Slack notification on my phone, or someone will call me even if it's not a scheduled walk-and-talk. So it is interesting. >>A lot of ways to get in touch, and productivity is presumably going to go through the roof. All right, guys, I'll let you go. Thanks so much for coming on theCUBE, really appreciate it. And thank you for watching this CUBE Conversation. This is Dave Vellante, and we'll see you next time.

Published Date : Mar 10 2022


Breaking Analysis: The Improbable Rise of Kubernetes


 

>> From theCUBE studios in Palo Alto and in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The rise of Kubernetes came about through a combination of forces that were, in hindsight, quite a long shot. Amazon's dominance created momentum for cloud-native application development and the need for newer and simpler experiences beyond just easily spinning up compute as a service. This wave crashed into innovations from a startup named Docker, and a reluctant competitor in Google that needed a way to change the game on Amazon and the cloud. Now add in the effort of Red Hat, which needed a new path beyond Enterprise Linux and, by the way, was just about to commit to a path of a Kubernetes alternative for OpenShift, plus the need to figure out a governance structure to herd all the cats in the ecosystem, and you get the remarkable ascendancy of Kubernetes. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we tap the back stories of a new documentary that explains the improbable events that led to the creation of Kubernetes. We'll share some new survey data from ETR and commentary from the many early innovators who came on theCUBE during the exciting period since the founding of Docker in 2013, which marked a new era in computing. Because we're talking about Kubernetes and developers today, the hoodie is on. The new two-part documentary I just referenced is out; it was produced by Honeypot, and parts one and two tell the story of how Kubernetes came to prominence and many of the players that made it happen. A lot of these players, including Tim Hockin, Kelsey Hightower, Craig McLuckie, Joe Beda, Brian Grant, Solomon Hykes, Jerry Chen and others, came on theCUBE during the formative years of containers going mainstream and the rise of Kubernetes. John Furrier and Stu Miniman were at the many shows we covered back then, and they unpacked what was happening at the time. We'll share the commentary from the guests they interviewed and try to add some context. Let's start with the concept of developer-defined infrastructure, DDI. Jerry Chen was at VMware and could see the trends that were evolving. He left VMware to become a venture capitalist at Greylock; Docker was his first investment. And he saw the future this way. >> What happens is, when you define infrastructure in software you can program it, you make it portable. And that's the beauty of this cloud wave, what I call DDI. Now, to your point, every piece of infrastructure, from storage and networking to compute, has an API, right? And at AWS there was an early trend where S3, EBS and EC2 had APIs. >> As building blocks too. >> As building blocks, exactly. >> Not monolithic. >> Not monolithic: every little building block has its own API. And just as Docker really is the API for this unit of the cloud, it enables developers to define how they want to build their applications, how to network them, how they want to secure them and how they want to store them. The beauty of this generation is that now developers are determining how apps are built, not just at the end-user, iPhone-app layer but at the data layer, the storage layer, the networking layer. Every single level is being disrupted by this concept of DDI, and how you build, use and actually purchase IT has changed. You're seeing the incumbent vendors like Oracle, VMware and Microsoft try to react, but you're seeing a whole new generation of startups.
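As a small, hedged illustration of the "infrastructure with an API" idea Chen is describing, here is a sketch of an infrastructure-as-code template; the resource names and property values are invented for the example. The point is that storage and compute become declarations a developer can keep in version control and have the cloud realize on demand.

```yaml
# Hypothetical CloudFormation template: storage and compute are declared
# as API-addressable resources rather than being provisioned by hand.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ArtifactBucket:              # an S3 bucket, i.e. storage behind an API
    Type: AWS::S3::Bucket
  BuildInstance:               # an EC2 instance, i.e. compute behind an API
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-12345678    # placeholder AMI
```

That programmability is what made it possible to layer further abstractions, containers and later Kubernetes, on top.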
>> Now, what Jerry was explaining is this new abstraction layer that was being built, and here's some ETR data that quantifies it and shows where we are today. The chart shows Net Score, or spending momentum, on the vertical axis and market share, which represents pervasiveness in the survey set, on the horizontal axis. As Jerry and the innovators who created Docker saw, the cloud was becoming prominent, and you can see it still has spending velocity elevated above that 40% red line, which is kind of a magic mark of momentum. And of course it's very prominent on the X axis as well. You see low-level infrastructure virtualization, and that even floats above servers, storage and networking. Back in 2013 the conversation with VMware, and I remember having this conversation deeply at the time with Chad Sakac, was: we're going to make this low-level infrastructure invisible, we intend to make virtualization invisible, i.e., simplified. And so you see, above the two arrows there, containers, container orchestration and container platforms, which are abstraction layers and services above the underlying VMs and hardware, and you can see the momentum they have right there with cloud, AI and RPA. So you had these forces that Jerry described taking shape, and this picture kind of summarizes how they came together to form Kubernetes. In the upper left, of course, you see AWS, and we inserted a picture from a post we did right after the first re:Invent in 2012; it was obvious to us at the time that the cloud gorilla was AWS, with all this momentum. Now, Solomon Hykes, the founder of Docker, you see there in the upper right. He saw the need to simplify the packaging of applications for cloud developers. Here's how he described it back in 2014 on theCUBE with John Furrier. >> A container is a unit of deployment, right? It's the format in which you package your application, all the files, all the executables, libraries, all the dependencies, in one thing that you can move to any server and deploy in a repeatable way. So it's similar to how you would run an iOS app on an iPhone, for example. >> Docker at the time was a 30-person company that had just changed its name from dotCloud. And back to the diagram: you have Google with a red question mark. So why would you need more than what Docker had created? Craig McLuckie, who was a product manager at Google back then, explains the need for yet another abstraction. >> We created a strong separation between infrastructure operations and application operations. Docker has created a portable framework to take basically a binary and run it anywhere, which is an amazing capability, but that's not enough. You also need to be able to manage that with a framework that can run anywhere. And so the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment, you could run on another major cloud provider. >> Now, Google had this huge cloud infrastructure but no commercial cloud business to compete with AWS, at least not one that was taken seriously at the time. So it needed a way to change the game.
And it had this thing called Borg, which is a container management system and scheduler, and Google looked at what was happening with virtualization and said, you know, we obviously could do better. Joe Beda, who was with Google at the time, explains their mindset going back to the beginning. >> Craig and I started up Google Compute Engine, VM as a service. And the odd thing to recognize is that nobody who had been at Google for a long time thought there was anything to this VM stuff, right? Because Google had been on containers for so long; that was their mindset, and Borg was the way that stuff was actually deployed. So my boss at the time, who's now at Cloudera, booted up a VM for the first time, and anybody in the outside world would be like, hey, that's really cool, but his response was, well, now what? You're sitting at a prompt. That's not super interesting. How do I run my app? Which is what everybody's been struggling with in cloud: it's not how do I get a VM up, it's how do I actually run my code? >> Okay, so Google never really did virtualization. They looked at the market and said, okay, what can we do to make Google relevant in cloud? Here's Eric Brewer from Google, talking on theCUBE about Google's thought process at the time. >> One interesting thing about Google is that it essentially makes no use of virtual machines internally. And that's because Google started in 1998, which is the same year that VMware started and kind of brought the modern virtual machine to bear. So Google infrastructure tends to be built really on classic Unix processes and communication, and scaling that up, you get a system that works a lot with just processes and containers. So when I saw containers come along with Docker, we said, well, that's a good model for us. We can take what we know internally, which was called Borg, a big scheduler, and turn that into Kubernetes, and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. >> Now, Eric Brewer gave us the bumper-sticker version of the story there. What he reveals in the documentary I referenced earlier is that initially Google was like, why would we open source our secret sauce to help competitors? So folks like Tim Hockin and Brian Grant, who were on the original Kubernetes team, went to management and pressed hard to convince them to bless open-sourcing Kubernetes. Here's Hockin's explanation. >> When Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on, we realized we know where this is going: once you embrace the Docker mindset, you very quickly need something to manage all of your Docker nodes once you get beyond two or three of them, and we know how to build that, right? We've got a ton of experience here. So we went to our leadership and said, please, this is going to happen with us or without us, and I think the world would be better if we helped. >> So the open source strategy became more compelling as they studied the problem, because it gave Google a way to neutralize AWS's advantage: with containers you could develop on AWS, for example, and then run the application anywhere, like Google's cloud. So it not only gave developers a path off of AWS; if Google could develop a strong service on GCP, it could monetize that play.
Now, focus your attention back to the diagram, which shows a smiling Alex Polvi from CoreOS, which was acquired by Red Hat in 2018. He saw the need to bring Linux into the cloud. After all, Linux was powering the internet; it was the OS for enterprise apps, and he saw the need to extend its path into the cloud. Here's how he described it at an OpenStack event in 2015. >> Similar to what happened with Linux: yes, there is still a need for Linux and Windows and other OSs out there, but by and large, on production web infrastructure, it's all Linux now. You were able to get onto one stack, and how were you able to do that? It was by having a truly open, consistent API and a commitment to not breaking APIs, and so on. That allowed Linux to really become ubiquitous in the data center. Yes, there are other OSs, but Linux, by and large, is what's being used for production infrastructure. And I think you'll see a similar phenomenon happen at this next level up, because we're treating the whole data center as a computer instead of treating one individual instance as the computer. That's the stuff that Kubernetes and the other orchestrators are doing, and I think there will be one that shakes out over time, and we believe that'll be Kubernetes. >> So Alex saw the need for a dominant container orchestration platform, and you heard him: they made the right bet, it would be Kubernetes. Now, Red Hat has been around since 1993, so it has a lot of on-prem, and it needed a future path to the cloud. So they rang up Google and said, hey, what do you guys have going on in this space? Google was kind of non-committal, but it did expose that it was thinking about doing something pre-Kubernetes, before it was called Kubernetes: hey, we have this thing and we're thinking about open sourcing it. But between Google's internal debates and some of the arm twisting from the engineers, it was taking too long. So Red Hat said, well, screw it, we've got to move forward with OpenShift; we'll do what Apple and Airbnb and Heroku are doing and build on an alternative. And so they were ready to go with Mesos, which was much more sophisticated than Kubernetes at the time and much more mature. But then Google at the last minute said, hey, let's do this. So Clayton Coleman with Red Hat, an architect, leaned in right away; he was one of the first committers from outside of Google. But you still had these competing forces in the market, and internally there were debates: do we go with simplicity or do we go with system scale? Chen Goldberg from Google explains why they focused first on getting simplicity right. >> We had to defend why we were only supporting 100 nodes in the first release of Kubernetes, and explain that we know how to build for scale, we've done that, we know how to do it, but realistically most users don't need large clusters. So why create this complexity? >> So Goldberg explains that rather than competing right away with, say, Mesos or Docker Swarm, which were far more baked, they made the bet to keep it simple and go for adoption and ubiquity, which obviously turned out to be the right choice. But the last piece of the puzzle was governance. Google promised to open source Kubernetes, but when it started to open up to contributors outside of Google, the code was still controlled by Google, and developers had to sign Google paperwork that said Google could still do whatever it wanted.
It could sublicense, et cetera. So Google had to pass the baton to an independent entity, and that's how the CNCF was started; Kubernetes was its first project. Let's listen to Chris Aniszczyk of the CNCF explain. >> CNCF is all about providing a neutral home for cloud-native technology. It's been almost two years since our first board meeting, and the idea was that there's a certain set of technologies out there that are essentially microservice-based, that live in containers and are essentially orchestrated by some process, right? That's essentially what we mean when we say cloud native. And CNCF was seeded with Kubernetes as its first project. As we've seen over the last couple of years, Kubernetes has grown quite well; it has a large community and a diverse contributor base, and it has done extremely well. It's actually one of the fastest, highest-velocity open source projects out there, maybe. >> Okay, so this is how we got to where we are today. This ETR data shows container orchestration offerings; it's the same XY graph we showed earlier, and you can see where Kubernetes lands, notwithstanding that Kubernetes is not a company. Respondents are doing Kubernetes; they maybe don't know whose platform, and it's hard, because the ETR taxonomy is a bit fuzzy in the survey data and Kubernetes is increasingly becoming embedded into cloud platforms, so IT pros may not even know which one specifically. The reason we've linked these two platforms, Kubernetes and Red Hat OpenShift, is that OpenShift right now is the dominant revenue player in the space and an increasingly popular PaaS layer. Yeah, you can download Kubernetes and do what you want with it, but if you're really building enterprise apps you're going to need support, and that's where OpenShift comes in. There's not much data on this, but we did find this chart from AMDA which shows the container software market, whatever that really is, and Red Hat has 50% of it. This is revenue, and we know the muscle of IBM is behind OpenShift, so that's really not hard to believe. Now, we've got some other data points that show how Kubernetes is becoming less visible and more embedded under the hood, if you will, as this chart shows. This is data from CNCF's annual survey, with 1,800 respondents, and it showed that 79% of respondents use certified hosted Kubernetes platforms. Amazon Elastic Container Service for Kubernetes was the most prominent at 39%, followed by Azure Kubernetes Service at 23% and the Azure AKS engine at 17%, with Google Kubernetes Engine, GKE, behind those three. Now, you have to ask: okay, Google's management initially had concerns; why are we open sourcing such a key technology? The premise was that it would level the playing field, and for sure it has. But you have to ask, has it driven the monetization Google was after? I would have to say no, it probably didn't. But think about where Google would have been if it hadn't open sourced Kubernetes: how relevant would it be in the cloud discussion? Despite its distant third position behind AWS and Microsoft, or even fourth if you include Alibaba, without Kubernetes Google would probably be much less prominent, or possibly even irrelevant, in enterprise cloud. Okay.
Let's wrap up with some comments on the state of Kubernetes and maybe a thought or two about where we're headed. Look, no shocker: Kubernetes, for all its improbable beginnings, has gone mainstream in the past year or so. We're seeing much more maturity and support for stateful workloads, and big ecosystem support with respect to better security and continued simplification. But it's still pretty complex. It's getting better, but it's not at VMware's level of maturity, for example, of course. Now, adoption has always been strong for Kubernetes among cloud-native companies who start with containers on day one, but we're seeing many more IT organizations adopting Kubernetes as it matures. It's interesting: Docker set out to be the system of the cloud, and Kubernetes has really kind of become that. Docker Desktop is where Docker's action really is; that's where Docker is thriving. It sold off Docker Swarm to Mirantis and has made some tweaks to its licensing model to be able to continue to evolve its business. You'll hear more about that at DockerCon. And as we said years ago, and Stu Miniman and I talked about this in one of our predictions posts, we expected Kubernetes to become less visible and really become more embedded into other platforms, and that's exactly what's happening here. But it's still complicated. Remember, go back to the early and mid cycle of VMware: to understand things like application performance you needed folks in lab coats to really remediate problems, dig in, peel the onion and scale the system. In some ways you're seeing that dynamic repeated with Kubernetes. Security, performance, scale, and recovery when something goes wrong are all made more difficult by the rapid pace at which the ecosystem is evolving Kubernetes. But it's definitely headed in the right direction. So what's next for Kubernetes? We would expect further simplification, and you're going to see more abstractions; we live in this world of almost perpetual abstractions. As Kubernetes improves support for multi-cluster, it will begin to treat those clusters as a unified group, kind of abstracting multiple clusters and treating them as one to be managed together, and this is going to create a lot of ecosystem focus on scaling globally. Okay, once you do that, you're going to have to worry about latency, and then you're going to have to keep pace with security as you expand the threat area, and then of course recovery: what happens when something goes wrong? More complexity makes it harder to recover, and that's going to require new services to share resources across clusters. So look for that. You should also expect more automation, driven by the cloud providers hosting Kubernetes: as Kubernetes supports more stateful applications and begins to extend its cluster management, cloud providers will inject as much automation as possible into the system. And finally, as these capabilities mature, we would expect to see better support for data-intensive workloads like AI, machine learning and inference. Scheduling these workloads becomes harder because they're so resource intensive, and performance management becomes more complex, so that's going to have to evolve.
I mean, frankly, many of the things that the Kubernetes team back-burnered early on, things you saw, for example, in Docker Swarm or Mesos, are going to start to enter the scene now with Kubernetes, as the community starts to prioritize some of those more complex functions. Now, the last thing I'll ask you to think about is what's next beyond Kubernetes. This isn't it, right? With serverless and IoT and the edge and new data-heavy workloads, there's something that's going to disrupt Kubernetes. And by the way, in that CNCF survey, nearly 40% of respondents were using serverless, and that's going to keep growing. So how is that going to change the development model? Andy Jassy once famously said that if they had to start over with Amazon retail, they'd start with serverless. So let's keep an eye on the horizon to see what's coming next. All right, that's it for now. I want to thank my colleagues: Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team, who also manages the Breaking Analysis podcast. Kristin Martin and Cheryl Knight help get the word out on socials, so thanks to all of you. Remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast. Don't forget to check out the ETR website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me: email me directly at david.villane@Siliconangle.com or DM me at D Vollante, and you can comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, everybody. Thanks for watching. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Feb 12 2022


George Watkins, AMD | AWS re:Invent 2021


 

(upbeat music) Welcome back to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, host of theCUBE. We have George Watkins, product marketing manager for cloud gaming and visual cloud at AMD. George, thanks for coming on theCUBE. >> Thank you for having me. >> Love this segment: accelerating game development. AWS cloud is a big topic, along with how the game developer environment is changing and how AMD is powering it. Let's get into it. So streaming remote, working remote, flexible collaboration, all powered by the G4ad virtual workstations: it's been a big part of the success. Take us through what's going on there. >> Yeah, certainly. So obviously, from a remote working perspective, there was a huge impact on collaboration and productivity for many industries out there, but for a collaborative environment like game design it was even more so. First off, having to ship these big, bulky workstations to local artists so they could actually carry on working was a massive nightmare for IT management: making sure they have the right hardware, the right resources, the right applications and security. So it was a really mean task. And on top of that, working remotely also brings in other inefficiencies when it comes to collaboration. For example, working on data sets: as I mentioned before, game development is a huge team collaboration effort, and using the same dataset happens very, very often. So if you're working remotely and an artist, for example, pulled a dataset from a server, worked on it, then pushed it back up into the cloud, I'll tell you now, it takes some time to do. And at the same time you might have one or two other artists trying to use that dataset. The big issue that comes up here is version control: because these artists are using the older version, they're creating errors and making that production time longer. So it's very, very inefficient. And this is where the cloud really comes into its own. The cloud, and in this case the AWS cloud with G4ad instances, really does bring the whole pipeline together. It brings the data sets, the virtual workstations (the G4ad instances, as I mentioned) and all the applications into one place. It's all centralized, and from an IT perspective that's fantastic. Sending out a workstation now is really simple: it's login details in an email to your new staff. And there are some really great benefits from a staff perspective as well. Not only are they not tethered to a local workstation, they have the flexibility to work where they need to and how they like to. It's also really interesting how they work on a day-to-day basis. A good example of this: if an artist is working on a very heavy dataset and the configuration of their VM, or virtual workstation, isn't up to snuff for such a large dataset, all they need to do is call up IT and say, I need more resource. And literally within a couple of minutes they can have that resource, again improving productivity and reducing that time. So it's really, really important. And just a final note here as well: with all that data and all that resource in the cloud, version control tools really do help bring that efficiency, as it's all built into the applications and the data sets. Really, really exciting stuff, and ultimately it brings productivity up and brings time and errors down.
>> I can see your point too, because when you don't bring it to the cloud, people are going to be bored waiting for things to happen, and they say, I want to take a shortcut, and shortcuts equal mistakes. So I can see that the G4ad focus for artists is cool, because it's purpose-built for what you're talking about. So take me through how you see the improved efficiencies in the development pipeline with cloud computing in this area, because obviously it makes a lot of sense: everything's in the cloud, you've got the instances there. Now what happens next? How does the coding all work? What's going on around the game development pipeline? >> 3D applications today, particularly those used in the game industry, I'll be honest, are still built for legacy hardware. What I mean by this is that the applications typically want higher CPU clock speeds; they're typically single-threaded, maybe with some multi-threaded functionality, but generally they're limited by what the traditional workstation has been. And obviously, why not? They've been built over the last 10 to 15 years to use that type of resource. Now, that is great, but it's not tapping into all the resources that are available in the cloud, and this is what's really, really exciting for my part. So ultimately what we're saying is that you have this great virtual workstation experience, with all your applications running on there, and you can be efficient, but then there are these really specific and really interesting use cases that aren't yet using the cloud. And I've got a couple of examples. First off, there's a feature inside Unreal Engine 4 called Unreal Swarm, and this feature helps reduce the time it takes to bake light maps into a game by auto-scaling the compiling in the AWS cloud. So for example, after making amendments to a light map, we're ready to essentially recompile. Doing this on the local workstation, using the traditional CPU and memory resources you would expect to see in a workstation, takes around about 50 minutes. When you actually use Unreal Swarm, the coordinator that's part of this functionality bursts the actual compiling into the cloud, in this case using something like 10 C5a instances, which are high-performance CPU compute instances. And because you have this ability to auto-scale, you essentially bring that time, that original 50 minutes, down to 4 minutes. This type of task, which you would typically see with a 3D artist or a programmer, basically happens multiple times a day. So when you start factoring in a saving of roughly 45 minutes multiple times a day, the amount of time saved really adds up, and obviously the amount of cost saved for that artist's time as well. So it's really, really exciting, and certainly something to talk about.
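To put rough numbers on that claim (an illustration only; the per-day figures are assumptions, not AMD's): if an artist kicks off three bakes in a day, saving roughly 45 minutes on each works out to about 3 x 45 = 135 minutes, a bit over two hours of artist time recovered per day, before counting teammates who are no longer blocked waiting on the same dataset.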
>> That's totally cool. I've got to ask you, since you're here, because it brings up a question that pops into my head: what are the state-of-the-art development trends that you're seeing? Because on the cloud side, in the non-gaming world, you see shift-left for security, and you start to see more agile kinds of methods around what used to be separate modules, right? You mentioned compiling and acceleration; what's going on in the actual workflows for the developers? What are some of the cool things you can share that people might not know about that are important? >> Well, certainly it's really about finding those bursty, computationally expensive and time-consuming processes and moving them to the cloud. From a compiling standpoint, those jobs are usually CPU-bound. Essentially, the GPU does all the work when it comes to the viewport, all that high-frame-rate rendering; that's what it's really designed for, and it does a very good job of that. But the compiling aspect, the compute aspect, is all done on the CPU side. The work we've been doing with AWS and the game tech team is finding ways to help reduce that compiling burden, because ultimately it's always restricted by the number of cores you actually have on a local device. So another example: there's a company out there called Incredibuild, and they specialize in accelerating the building of programming code, in this case the game code. If an artist kicked off a clean source build on Unreal Engine 4, it would take approximately 60 minutes on a local machine. However, using the Incredibuild solution to accelerate that type of workload, you can complete it in just 6 minutes, because again it auto-scales that compiling out to, in this case, 16 C5a.large instances, which essentially reduces all that time for the artist, freeing them up to do more. >> And more creativity is just the classic use case of the cloud; it's a beautiful thing. It reminds me how good this is, because when you think about what you guys are doing, pushing the envelope for cloud with the creators, gaming is such a state-of-the-art pressure point for making high performance better. It really puts a lot of pressure on AMD and everyone else to get faster and stronger, because it truly is pushing the state of the art in general. It's always been that way in the gaming world, and this is a whole 'nother level; you're starting to see that. What's your view on that? If you look at gaming as a tell for the trends on the tech side, better, faster, cheaper processors, speeds and feeds, and how code works across GPUs and CPUs, all this is cool, all kind of new, if you will. New patterns, new usage. What's your view? >> Well, certainly, cloud gaming is a really exciting topic, and we believe that cloud gaming, with the introduction of various key elements, is ready to revolutionize the way people play and interact with games. What I mean by this is, today we can do cloud gaming and it's a fantastic experience. You're usually hardwired, using a broadband connection to actually play those games, and you tend to try to be close to an actual data center to reduce that latency. However, this is only going to get better with the introduction of 5G coverage and, just as important, edge computing. Because of these two elements, what we're going to see is very high speeds wirelessly and, more importantly, low latency. And this is very important for those very dynamic, cinematic gaming experiences. But not only that: it can bring 4K and 8K gaming to people wirelessly.
It can also bring VR and AR experiences wirelessly, and also it can access, these new emerging technologies that are making higher fidelity gaming experiences like hardware retraces. All this can be done with these new technologies. And it's incredibly incredibly exciting. But more importantly, what's really great about this is, from a game publisher perspective, because it's actually helping them simplify their business processes, particularly from a game development standpoint. And actually what I mean by this is, if we take a typical example of what a game developer has to do for a mobile game, there's certain considerations that they need to think about when they actually comes to developing and validate. First off they'll have to understand what type of OS to account for. And actually what type of version of that OS to account for. What type of IPA they're going to be building on. And also finally, what type of resources, are actually on that end point device. So there's a lot of considerations here, and a lot of testing. So ultimately a lot of work to get that game out, to those gamers who might be on a couple of these different mobile platforms. However, when it comes to game streaming, it really does kind of change all this because ultimately what the game developer is actually doing is that they're developing and they're validating on one source. And that is going to be the server that is essentially pairing that game streaming service. Because how game streaming works is that we essentially trans code the actual game via H.264 to a software client on any end point device. So this could be those mobile devices I just mentioned. It can also be TVs, it could be consoles, it can be even low powered laptops. And what's very exciting is that, from an end user perspective, they're getting the ultimate in gaming experiences and usually these types of solutions are traditionally subscription-based. So you're actually reducing the requirement of this kind of high-end thousands of dollars gaming solution or simply a high-end next gen console. All of this is actually been given to you and delivered as part of a game streaming service. So it's very very exciting and, certainly we can see the adoption on both the game development side, as well as the gamer's side. That's a great way to put an end to this awesome segment. I think that business model innovation around making it easier, and making it better to develop environment, that's just how they work. So that's good, check. But really the business model here, the gaming as a service, you're making it possible for the developer and the artist to see an outcome faster. That's the cloud way. >> Thank you >> And they doubled down on success and they could do that. So again, this is all new and exciting and certainly the edge and having data being processed at the edge as well. Again, all this is coming in to create more good choice. Thank you so much for coming on and sharing that insight with us from the AMD perspective. And again, more power, more speed, we always say, no one's going to complain, they get more compute, that's what I always. >> Absolutely absolutely. >> Thanks for coming I appreciate it. >> Thank you. >> theCUBE coverage here at AWS re:Invent 2021. I'm John Furrier host of theCUBE. Thanks for watching. (upbeat music)
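The burst-to-cloud pattern described above, moving a CPU-bound bake or compile off the local workstation onto a short-lived fleet of C5a instances, can be sketched with the AWS SDK. This is a hedged, minimal sketch rather than how Unreal Swarm or Incredibuild actually orchestrate their workers; the AMI ID, instance count, and runs-per-day figure are placeholder assumptions.

```python
# Minimal sketch (not the actual Unreal Swarm or Incredibuild implementation):
# burst a CPU-bound bake/compile job onto a short-lived fleet of C5a instances
# with boto3, then estimate the artist time saved per day.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a temporary fleet of compute-optimized instances for the burst job.
fleet = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical worker AMI with the build agents installed
    InstanceType="c5a.4xlarge",        # AMD-powered, compute-optimized instance type
    MinCount=10,
    MaxCount=10,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "lightmap-bake-burst"}],
    }],
)
instance_ids = [i["InstanceId"] for i in fleet["Instances"]]
print(f"Launched {len(instance_ids)} burst workers: {instance_ids}")

# ... a build coordinator would distribute the work here, then the fleet is torn down ...
ec2.terminate_instances(InstanceIds=instance_ids)

# Back-of-the-envelope saving from the numbers quoted in the interview.
local_minutes, burst_minutes, runs_per_day = 50, 4, 4   # runs_per_day is an assumption
saved_per_day = (local_minutes - burst_minutes) * runs_per_day
print(f"~{saved_per_day} artist-minutes saved per day")
```

The point of the sketch is the shape of the workflow: the expensive step runs on instances that exist only for the duration of the job, so the artist pays for minutes of fleet time instead of an hour of idle waiting.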

Published Date : Nov 30 2021


Dr Eng Lim Goh, Vice President, CTO, High Performance Computing & AI


 

(upbeat music) >> Welcome back to HPE Discover 2021, theCUBE's virtual coverage, continuous coverage of HPE's Annual Customer Event. My name is Dave Vellante, and we're going to dive into the intersection of high-performance computing, data and AI with Doctor Eng Lim Goh, who's a Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Doctor Goh, great to see you again. Welcome back to theCUBE. >> Hello, Dave, great to talk to you again. >> You might remember last year we talked a lot about Swarm intelligence and how AI is evolving. Of course, you hosted the Day 2 Keynotes here at Discover. And you talked about thriving in the age of insights, and how to craft a data-centric strategy. And you addressed some of the biggest problems, I think organizations face with data. That's, you've got a, data is plentiful, but insights, they're harder to come by. >> Yeah. >> And you really dug into some great examples in retail, banking, in medicine, healthcare and media. But stepping back a little bit we zoomed out on Discover '21. What do you make of the events so far and some of your big takeaways? >> Hmm, well, we started with the insightful question, right, yeah? Data is everywhere then, but we lack the insight. That's also part of the reason why, that's a main reason why Antonio on day one focused and talked about the fact that we are in the now in the age of insight, right? And how to try thrive in that age, in this new age? What I then did on a Day 2 Keynote following Antonio is to talk about the challenges that we need to overcome in order to thrive in this new age. >> So, maybe we could talk a little bit about some of the things that you took away in terms of, I'm specifically interested in some of the barriers to achieving insights. You know customers are drowning in data. What do you hear from customers? What were your takeaway from some of the ones you talked about today? >> Oh, very pertinent question, Dave. You know the two challenges I spoke about, that we need to overcome in order to thrive in this new age. The first one is the current challenge. And that current challenge is, you know, stated is now barriers to insight, when we are awash with data. So that's a statement on how do you overcome those barriers? What are the barriers to insight when we are awash in data? In the Day 2 Keynote, I spoke about three main things. Three main areas that we receive from customers. The first one, the first barrier is in many, with many of our customers, data is siloed, all right. You know, like in a big corporation, you've got data siloed by sales, finance, engineering, manufacturing and so on supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos so that when you build applications above, they can be more intelligent. They can have access to all the different silos of data to get better intelligence and more intelligent applications built. So that was the first barrier we spoke about, you know? Barriers to insight when we are awash with data. The second barrier is that we see amongst our customers is that data is raw and disperse when they are stored. And you know, it's tough to get at, to tough to get a value out of them, right? And in that case, I use the example of, you know, the May 6, 2010 event where the stock market dropped a trillion dollars in terms of minutes. We all know those who are financially attuned with know about this incident but that this is not the only incident. 
There are many of them out there. And for that particular May 6 event, you know, it took a long time to get insight. Months, yeah; for months we had no insight as to what happened or why it happened. Right, and there were many other incidents like this, and the regulators were looking for that one rule that could mitigate many of these incidents. One of our customers decided to take the hard road and go with the tough data, right? Because data is raw and dispersed. So they went into all the different feeds of financial transaction information, took the tough road, and analyzing that data took a long time, to assemble it. And they discovered that there was quote stuffing, right? That people were sending a lot of trades in and then canceling them almost immediately, to manipulate the market. And why didn't we see it immediately? Well, the reason is that the reports that everybody sees had a rule in there that says all trades of less than a hundred shares don't need to be reported. And so what people did was send a lot of less-than-a-hundred-share trades to fly under the radar to do this manipulation. So here is the second barrier, right? Data could be raw and dispersed. Sometimes you just have to take the hard road to get insight. And this is one great example. And then the last barrier has to do with this: sometimes when you start a project to get answers and insight, you realize that all the data is around you, but you don't seem to find the right data to get what you need. You don't seem to get the right ones, yeah? Here we have three quick examples of customers. One was a great example, right? Where they were trying to build a machine language translator between two languages, right? Now to do that, they need to get hundreds of millions of word pairs, you know, of one language compared with the corresponding other. Hundreds of millions of them. They say, well, how am I going to get all these word pairs? Someone creative thought of a willing source, and a huge one: it was the United Nations. You see? So sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data, right? The second one has to do with the fact that sometimes you may just have to generate that data. An interesting one: we had an autonomous car customer that collects all this data from their cars, right? Massive amounts of data, lots of sensors, collecting lots of data. And, you know, sometimes they don't have the data they need even after collection. For example, they may have collected the data with the car in fine weather, and collected the car driving on the highway in rain and also in snow, but never had the opportunity to collect the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car can drive in hail, they built a simulation by taking the data the car collected in snow and simulating hail. So these are some of the examples where we have customers working to overcome barriers, right? You have barriers associated with data silos: they federated them. Barriers associated with data that's tough to get at: they just took the hard road, right? And sometimes, thirdly, you just have to be creative to get the right data you need. >> Wow! I tell you, I have about a hundred questions based on what you just said, you know? (Dave chuckles) And as a great example, the Flash Crash.
In fact, Michael Lewis, wrote about this in his book, the Flash Boys. And essentially, right, it was high frequency traders trying to front run the market and sending into small block trades (Dave chuckles) trying to get sort of front ended. So that's, and they chalked it up to a glitch. Like you said, for months, nobody really knew what it was. So technology got us into this problem. (Dave chuckles) I guess my question is can technology help us get out of the problem? And that maybe is where AI fits in? >> Yes, yes. In fact, a lot of analytics work went in to go back to the raw data that is highly dispersed from different sources, right? Assembled them to see if you can find a material trend, right? You can see lots of trends, right? Like, no, we, if humans look at things that we tend to see patterns in Clouds, right? So sometimes you need to apply statistical analysis math to be sure that what the model is seeing is real, right? And that required, well, that's one area. The second area is you know, when this, there are times when you just need to go through that tough approach to find the answer. Now, the issue comes to mind now is that humans put in the rules to decide what goes into a report that everybody sees. Now, in this case, before the change in the rules, right? But by the way, after the discovery, the authorities changed the rules and all shares, all trades of different any sizes it has to be reported. >> Right. >> Right, yeah? But the rule was applied, you know, I say earlier that shares under a hundred, trades under a hundred shares need not be reported. So, sometimes you just have to understand that reports were decided by humans and for understandable reasons. I mean, they probably didn't wanted a various reasons not to put everything in there. So that people could still read it in a reasonable amount of time. But we need to understand that rules were being put in by humans for the reports we read. And as such, there are times we just need to go back to the raw data. >> I want to ask you... >> Oh, it could be, that it's going to be tough, yeah. >> Yeah, I want to ask you a question about AI as obviously it's in your title and it's something you know a lot about but. And I'm going to make a statement, you tell me if it's on point or off point. So seems that most of the AI going on in the enterprise is modeling data science applied to, you know, troves of data. But there's also a lot of AI going on in consumer. Whether it's, you know, fingerprint technology or facial recognition or natural language processing. Well, two part question will the consumer market, as it has so often in the enterprise sort of inform us is sort of first part. And then, there'll be a shift from sort of modeling if you will to more, you mentioned the autonomous vehicles, more AI inferencing in real time, especially with the Edge. Could you help us understand that better? >> Yeah, this is a great question, right? There are three stages to just simplify. I mean, you know, it's probably more sophisticated than that. But let's just simplify that three stages, right? To building an AI system that ultimately can predict, make a prediction, right? Or to assist you in decision-making. I have an outcome. So you start with the data, massive amounts of data that you have to decide what to feed the machine with. So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it's seeing. It starts to evolve, right? 
To a point that using a test set of data that you have separately kept aside that you know the answer for. Then you test the model, you know? After you've trained it with all that data to see whether its prediction accuracy is high enough. And once you are satisfied with it, you then deploy the model to make the decision. And that's the inference, right? So a lot of times, depending on what we are focusing on, we in data science are, are we working hard on assembling the right data to feed the machine with? That's the data preparation organization work. And then after which you build your models you have to pick the right models for the decisions and prediction you need to make. You pick the right models. And then you start feeding the data with it. Sometimes you pick one model and a prediction isn't that robust. It is good, but then it is not consistent, right? Now what you do is you try another model. So sometimes it gets keep trying different models until you get the right kind, yeah? That gives you a good robust decision-making and prediction. Now, after which, if it's tested well, QA, you will then take that model and deploy it at the Edge. Yeah, and then at the Edge is essentially just looking at new data, applying it to the model that you have trained. And then that model will give you a prediction or a decision, right? So it is these three stages, yeah. But more and more, your question reminds me that more and more people are thinking as the Edge become more and more powerful. Can you also do learning at the Edge? >> Right. >> That's the reason why we spoke about Swarm Learning the last time. Learning at the Edge as a Swarm, right? Because maybe individually, they may not have enough power to do so. But as a Swarm, they may. >> Is that learning from the Edge or learning at the Edge? In other words, is that... >> Yes. >> Yeah. You do understand my question. >> Yes. >> Yeah. (Dave chuckles) >> That's a great question. That's a great question, right? So the quick answer is learning at the Edge, right? And also from the Edge, but the main goal, right? The goal is to learn at the Edge so that you don't have to move the data that Edge sees first back to the Cloud or the Call to do the learning. Because that would be the reason, one of the main reasons why you want to learn at the Edge. Right? So that you don't need to have to send all that data back and assemble it back from all the different Edge devices. Assemble it back to the Cloud Site to do the learning, right? Some on you can learn it and keep the data at the Edge and learn at that point, yeah. >> And then maybe only selectively send. >> Yeah. >> The autonomous vehicle, example you gave is great. 'Cause maybe they're, you know, there may be only persisting. They're not persisting data that is an inclement weather, or when a deer runs across the front. And then maybe they do that and then they send that smaller data setback and maybe that's where it's modeling done but the rest can be done at the Edge. It's a new world that's coming through. Let me ask you a question. Is there a limit to what data should be collected and how it should be collected? >> That's a great question again, yeah. Well, today full of these insightful questions. (Dr. Eng chuckles) That actually touches on the the second challenge, right? How do we, in order to thrive in this new age of insight? The second challenge is our future challenge, right? What do we do for our future? 
And in there is the statement we make is we have to focus on collecting data strategically for the future of our enterprise. And within that, I talked about what to collect, right? When to organize it when you collect? And then where will your data be going forward that you are collecting from? So what, when, and where? For what data to collect? That was the question you asked, it's a question that different industries have to ask themselves because it will vary, right? Let me give you the, you use the autonomous car example. Let me use that. And we do have this customer collecting massive amounts of data. You know, we're talking about 10 petabytes a day from a fleet of their cars. And these are not production autonomous cars, right? These are training autonomous cars, collecting data so they can train and eventually deploy commercial cars, right? Also this data collection cars, they collect 10, as a fleet of them collect 10 petabytes a day. And then when they came to us, building a storage system you know, to store all of that data, they realized they don't want to afford to store all of it. Now here comes the dilemma, right? What should I, after I spent so much effort building all this cars and sensors and collecting data, I've now decide what to delete. That's a dilemma, right? Now in working with them on this process of trimming down what they collected, you know, I'm constantly reminded of the 60s and 70s, right? To remind myself 60s and 70s, we called a large part of our DNA, junk DNA. >> Yeah. (Dave chuckles) >> Ah! Today, we realized that a large part of that what we call junk has function as valuable function. They are not genes but they regulate the function of genes. You know? So what's junk in yesterday could be valuable today. Or what's junk today could be valuable tomorrow, right? So, there's this tension going on, right? Between you deciding not wanting to afford to store everything that you can get your hands on. But on the other hand, you worry, you ignore the wrong ones, right? You can see this tension in our customers, right? And then it depends on industry here, right? In healthcare they say, I have no choice. I want it all, right? Oh, one very insightful point brought up by one healthcare provider that really touched me was you know, we don't only care. Of course we care a lot. We care a lot about the people we are caring for, right? But who also care for the people we are not caring for? How do we find them? >> Uh-huh. >> Right, and that definitely, they did not just need to collect data that they have with from their patients. They also need to reach out, right? To outside data so that they can figure out who they are not caring for, right? So they want it all. So I asked them, so what do you do with funding if you want it all? They say they have no choice but to figure out a way to fund it and perhaps monetization of what they have now is the way to come around and fund that. Of course, they also come back to us rightfully, that you know we have to then work out a way to help them build a system, you know? So that's healthcare, right? And if you go to other industries like banking, they say they can afford to keep them all. >> Yeah. >> But they are regulated, seemed like healthcare, they are regulated as to privacy and such like. So many examples different industries having different needs but different approaches to what they collect. But there is this constant tension between you perhaps deciding not wanting to fund all of that, all that you can install, right? 
But on the other hand, you know if you kind of don't want to afford it and decide not to start some. Maybe those some become highly valuable in the future, right? (Dr. Eng chuckles) You worry. >> Well, we can make some assumptions about the future. Can't we? I mean, we know there's going to be a lot more data than we've ever seen before. We know that. We know, well, not withstanding supply constraints and things like NAND. We know the prices of storage is going to continue to decline. We also know and not a lot of people are really talking about this, but the processing power, but the says, Moore's law is dead. Okay, it's waning, but the processing power when you combine the CPUs and NPUs, and GPUs and accelerators and so forth actually is increasing. And so when you think about these use cases at the Edge you're going to have much more processing power. You're going to have cheaper storage and it's going to be less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, it's a highly, again, another insightful question that we touched on our Keynote. And that goes up to the why, uh, to the where? Where will your data be? Right? We have one estimate that says that by next year there will be 55 billion connected devices out there, right? 55 billion, right? What's the population of the world? Well, of the other 10 billion? But this thing is 55 billion. (Dave chuckles) Right? And many of them, most of them can collect data. So what do you do? Right? So the amount of data that's going to come in, it's going to way exceed, right? Drop in storage costs are increasing compute power. >> Right. >> Right. So what's the answer, right? So the answer must be knowing that we don't, and even a drop in price and increase in bandwidth, it will overwhelm the, 5G, it will overwhelm 5G, right? Given the amount of 55 billion of them collecting. So the answer must be that there needs to be a balance between you needing to bring all of that data from the 55 billion devices of the data back to a central, as a bunch of central cost. Because you may not be able to afford to do that. Firstly bandwidth, even with 5G and as the, when you'll still be too expensive given the number of devices out there. You know given storage costs dropping is still be too expensive to try and install them all. So the answer must be to start, at least to mitigate from to, some leave most a lot of the data out there, right? And only send back the pertinent ones, as you said before. But then if you did that then how are we going to do machine learning at the Core and the Cloud Site, if you don't have all the data? You want rich data to train with, right? Sometimes you want to mix up the positive type data and the negative type data. So you can train the machine in a more balanced way. So the answer must be eventually, right? As we move forward with these huge number of devices all at the Edge to do machine learning at the Edge. Today we don't even have power, right? The Edge typically is characterized by a lower energy capability and therefore lower compute power. But soon, you know? Even with low energy, they can do more with compute power improving in energy efficiency, right? So learning at the Edge, today we do inference at the Edge. So we data, model, deploy and you do inference there is. That's what we do today. But more and more, I believe given a massive amount of data at the Edge, you have to start doing machine learning at the Edge. 
And when you don't have enough power then you aggregate multiple devices, compute power into a Swarm and learn as a Swarm, yeah. >> Oh, interesting. So now of course, if I were sitting and fly on the wall and the HPE board meeting I said, okay, HPE is a leading provider of compute. How do you take advantage of that? I mean, we're going, I know it's future but you must be thinking about that and participating in those markets. I know today you are, you have, you know, Edge line and other products. But there's, it seems to me that it's not the general purpose that we've known in the past. It's a new type of specialized computing. How are you thinking about participating in that opportunity for the customers? >> Hmm, the wall will have to have a balance, right? Where today the default, well, the more common mode is to collect the data from the Edge and train at some centralized location or number of centralized location. Going forward, given the proliferation of the Edge devices, we'll need a balance, we need both. We need capability at the Cloud Site, right? And it has to be hybrid. And then we need capability on the Edge side that we need to build systems that on one hand is an Edge adapter, right? Meaning they environmentally adapted because the Edge differently are on it, a lot of times on the outside. They need to be packaging adapted and also power adapted, right? Because typically many of these devices are battery powered. Right? So you have to build systems that adapts to it. But at the same time, they must not be custom. That's my belief. It must be using standard processes and standard operating system so that they can run a rich set of applications. So yes, that's also the insight for that Antonio announced in 2018. For the next four years from 2018, right? $4 billion invested to strengthen our Edge portfolio. >> Uh-huh. >> Edge product lines. >> Right. >> Uh-huh, Edge solutions. >> I could, Doctor Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of, certainly HPE, but the industry in general? >> Yeah, I think the excitement is the customers, right? The diversity of customers and the diversity in the way they have approached different problems of data strategy. So the excitement is around data strategy, right? Just like, you know, the statement made for us was so was profound, right? And Antonio said, we are in the age of insight powered by data. That's the first line, right? The line that comes after that is as such we are becoming more and more data centric with data that currency. Now the next step is even more profound. That is, you know, we are going as far as saying that, you know, data should not be treated as cost anymore. No, right? But instead as an investment in a new asset class called data with value on our balance sheet. This is a step change, right? Right, in thinking that is going to change the way we look at data, the way we value it. So that's a statement. (Dr. Eng chuckles) This is the exciting thing, because for me a CTO of AI, right? A machine is only as intelligent as the data you feed it with. Data is a source of the machine learning to be intelligent. Right? (Dr. Eng chuckles) So, that's why when the people start to value data, right? And say that it is an investment when we collect it it is very positive for AI. Because an AI system gets intelligent, get more intelligence because it has huge amounts of data and a diversity of data. >> Yeah. 
>> So it'd be great, if the community values data. >> Well, you certainly see it in the valuations of many companies these days. And I think increasingly you see it on the income statement. You know data products and people monetizing data services. And yeah, maybe eventually you'll see it in the balance sheet. I know Doug Laney, when he was at Gartner Group, wrote a book about this and a lot of people are thinking about it. That's a big change, isn't it? >> Yeah, yeah. >> Dr. Goh... (Dave chuckles) >> The question is the process and methods in valuation. Right? >> Yeah, right. >> But I believe we will get there. We need to get started. And then we'll get there. I believe, yeah. >> Doctor Goh, it's always my pleasure. >> And then the AI will benefit greatly from it. >> Oh, yeah, no doubt. People will better understand how to align, you know some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back in theCUBE. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. (Dave chuckles) (Dr. Eng laughs) >> Excellent. We'll leave it there. Thank you for spending some time with us and keep it right there for more great interviews from HPE Discover 21. This is Dave Vellante for theCUBE, the leader in Enterprise Tech Coverage. We'll be right back. (upbeat music)
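The three stages Dr. Goh walks through, training a model on prepared data, testing it against a held-out set you know the answers for, then deploying it for inference, plus the swarm idea of aggregating learning across edge nodes instead of shipping raw data back to the core, can be sketched in a few lines. This is a toy illustration on synthetic data with a hand-rolled logistic regression, not HPE's Swarm Learning product or its actual protocol.

```python
# Toy sketch of train -> test -> deploy, plus a naive "swarm"-style aggregation
# where several edge nodes train locally and only share model parameters.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Synthetic two-class data; each edge node sees a slightly different slice."""
    x = rng.normal(shift, 1.0, size=(n, 2))
    y = (x[:, 0] + x[:, 1] > 2 * shift).astype(float)
    return x, y

def train_logistic(x, y, epochs=200, lr=0.1):
    """Stage 1: fit a tiny logistic-regression model with gradient descent."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        grad_w, grad_b = x.T @ (p - y) / len(y), np.mean(p - y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

def accuracy(w, b, x, y):
    """Stage 2: check the model against held-out test data we know the answers for."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return np.mean((p > 0.5) == y)

# Each edge node learns on its own local data; the raw data never leaves the node.
nodes = [train_logistic(*make_data(500, shift)) for shift in (0.8, 1.0, 1.2)]

# Swarm-style aggregation: average parameters instead of moving data to the core.
w_swarm = np.mean([w for w, _ in nodes], axis=0)
b_swarm = np.mean([b for _, b in nodes])

# Stage 3: "deploy" the aggregated model and run inference on new data at the edge.
x_test, y_test = make_data(1000, 1.0)
print(f"swarm-averaged model accuracy: {accuracy(w_swarm, b_swarm, x_test, y_test):.2f}")
```

The design point mirrors the conversation: only the small parameter vectors move between nodes, which is what makes learning at the edge workable when bandwidth, not compute, is the constraint.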

Published Date : Jun 8 2021


Danny Allan & Niraj Tolia | VeeamON 2021


 

>> Welcome back to VeeamON 2021. You're watching theCUBE, and my name is Dave Vellante. You know, the last 10 years of cloud were largely about spinning up virtualized compute infrastructure and accessing cheap and simple object storage, and some other things like networking. The cloud was largely, though, a set of remote resources that simplify deployment and supported the whole spate of native applications that have emerged to power the activity of individuals and businesses. The next decade, however, promises to build on the troves of data that live in the cloud, make connections to on-premises applications, and support new application innovations that are agile, iterative, portable, and span resources in all the clouds: public clouds, private clouds, cross-cloud connections, all the way out to the near and far edge. And a linchpin of this new application development model is container platforms and container orchestration, which brings immense scale and capability to technology-driven organizations, especially as they have evolved from supporting stateless applications to underpinning mission-critical workloads. As such, containers bring complexities and risks that need to be addressed, not the least of which is protecting the massive amounts of data that are flowing through these systems. And with me to discuss these exciting and challenging trends are Danny Allan, who's the CTO of Veeam, and Niraj Tolia, the president of Kasten by Veeam. Gentlemen, welcome to theCUBE. >> Thank you, delighted to be here with you, Dave. >> Likewise, very excited to be here, Dave.
>> Okay, so Danny, big M&A move. Great little acquisition. You're now seeing others try to make similar moves. Why, what did you see in Kasten? What was the fit? Why'd you make that move? >> Well, I think you nailed it, Dave. We've seen an evolution in the infrastructure that's being used over the last two decades. So if you go back 20 years, there was a massive digital transformation to enable users to be self-service with digital applications. Around 2000 or so to 2010, everything started being virtualized. I know virtualization came along before that, but virtualization really started to take off because it gave return on investment and gave flexibility, all kinds of benefits. But now we're in a third wave, which is built on containers. And the amazing thing about containers is that, as you said, it allows you to connect multi-cloud, hybrid cloud, the edge to the core. And they're designed for the consumption world. If you think about the cloud, you can provision things, deprovision things. That's the way that containerized applications are designed, and so because they're designed for a consumption-based world, because they are designed for portability across all of these different infrastructures, it only made sense for us to invest in the industry's leading provider of data protection for Kubernetes. And that, of course, is Kasten.
>> So Niraj, I mean, take us back. I mean, you know, containers have been around forever. But then, you know, they started to go mainstream, and at first, you know, they were obviously ephemeral, stateless apps, kind of lightweight stuff. But you, at the time, you and the team said, okay, these are gonna become more complex microservices. Maybe not so micro, but you had to have the vision and you made a bet. Maybe take us back to sort of how you saw that and where containers have come from. >> Sure. So let's rewind the clock, right? As you said, containers are old technology; in the same way virtualization started with IBM mainframes, containers in different forms have been around for a while. But I think when the light bulb went off for me was very early days in 2015, when my engineering team at a previous company started complaining, and the reason they were complaining about other engineering groups was because the right things were coming together sooner; we were identifying things sooner. And that's when I said, this is going to be the next wave of infrastructure, the same way that virtualization revolutionized how people built and deployed apps. We saw that with containers, and in particular, in those days we made that bet on Kubernetes, right? So we looked at it from first principles, and that's where, you know, you had other things like Docker Swarm, Mesos, etcetera, and we said Kubernetes, that's going to be the way to go, because it is just so powerful, and, you know, at the end of the day, what we all do is infrastructure. But what we saw was that containers were optimizing for the developer; they were optimizing for the people that really build applications, deliver value to all of their end customers. And that is what made us see that, even though initially we only saw stateless applications, stateful was going to happen, because there's just so much momentum behind it, and the writing, for us at least, was on the wall. And that's how we started off on this journey in 2017.
>> What are the unique nuances and differences, really, in terms of protecting containers? From a technical standpoint, what's different? >> So there are a couple of subtle things. Right, again, the joke, as you know, I say, is that I'm a recovering infrastructure person; I have always worked on infrastructure systems in the past, and on recovering them. But in this case we really had to flip things around, right? I've come at it from the cloud, disks, volumes, VMs perspective; in this case, to do the right thing by the customer, we needed a clean-slate approach of coming at it from the application down. So what we look at is, what does the application look like? And that means protecting not just the stuff that sits on disk, but your secrets and networking information, all those hundreds of pieces that make up a cloud native application, and that involves scale challenges, workflow and visualization challenges for admins, KPIs. So all of that shifts in a very dramatic way.
>> So Danny, I mean, typically Veeam, you guys haven't done a ton of acquisitions, you've grown organically. So now you pop in Kasten; what does that mean for you from a platform perspective? You know, IBM has this term blue-washing when they buy a company; did you green-wash Kasten, and how did that all work? And again, what does it mean from the platform perspective? >> Well, so our platform is designed for this type of integration, and the first type of integration we do with any of our technologies, because we do have native technologies; if you think about what we do, Veeam Backup for AWS, for Azure, for GCP, we have backup for the Acropolis hypervisor. These are all native, purpose-built solutions for those environments, and we integrate with what we call Veeam platform services. And one of the first steps that we do, of course, is we take the data from those native solutions and send it into the repository, and the benefit that you get from that is that you have this portable, self-describing format that you can move around the Veeam platform. And so the platform was already designed for this. Now, we already showed this at VeeamON. You saw this on the main stage, where we have this integration at a data level, but it goes beyond that: Veeam platform services allow us to do not just day one operations, but day two operations. Think about updating the components of those infrastructures, or those software components. It also allows reporting. So for example, you can report on what is protected, what's not protected. So the platform was already designed for this integration model. But the one thing I want to stress is we will always have that standalone product for Kubernetes, for, you know, for the container world. And the reason for that is the administrator for Kubernetes wants their own purpose-built solution. They want it running on Kubernetes. They want to protect the uniqueness of their infrastructure. If you think about a lot of the container-based systems out there, they're using structured data, unstructured data, sure, but they're also using object-based storage, they're using message queues. And so they have their nuances, and we want to maintain that in a standalone product, but integrated back into the core Veeam platform.
>> So we do these, we have a data partner called ETR, Enterprise Technology Research. They do these quarterly surveys, and they have this metric called Net Score; it's a measure of spending momentum, and for the last, I don't know, 8, 10, 12 quarters, the big four have been robotic process automation, that's a hot space, cloud obviously is hot, and then AI of course, and containers and container orchestration right up there. Those are the big four that outshine everything else, even things like security and other infrastructure, etcetera. So that's good. I mean, you guys were skating to the puck back in 2015. Niraj, you've made some announcements, and I'm wondering sort of how they fit into the trends in the industry. What's significant about those announcements, and what's new that we need to know about? >> Sure. So let me take that one, Dave. So we've made a couple of big, interesting announcements. The most recent one of those was the 4.0 release of the Kasten by Veeam platform, right? We call it K10, right? As we've all known, since a couple of weeks ago, Colonial Pipeline ransomware has been in the news; in the US, gas prices are being driven up because of that. And that's really what we're seeing from customers, where we are seeing this increase in Kubernetes adoption today. We have customers from the world's largest banks all the way to weakly connected cruise ships that run Kubernetes on them. People's data is precious. People are running large fleets of nodes for Kubernetes, large numbers of clusters. So what we said is, how do you protect against these malicious attacks that want to lock people out? How do you bring in immutability, so that even someone with the keys to the kingdom can't go compromise your backups and restores, right? So this echoes a lot of what we hear from customers and what we hear about in the news, so we protected that. But we still held true to some of the original vision behind Kasten. And that is, it's not just saying, hey, I give you ransomware protection; we'll do it in such an easy way that the admin barely notices this new feature has been turned on if they want it. Do it in a way that gives them choice, right? If you're running in a public cloud, if you're running at the edge, you have choice of infrastructure available to you. And do it in a way that you have 100% automation; when you have 100 clusters, when you deploy on ships, right, you're not going to be able to have bespoke things. So how do you hook into CI/CD pipelines and make the job of the admin easier? That's what we focused on in that latest release.
>> And that's because you're basically doing this at the point of writing code, and it's essentially infrastructure as code. We always talk about, you know, you don't want to bolt on data protection as an afterthought, but that's what we've done forever. >> So in fact, I would say a step before that, Dave, right, are the most leading customers we work with. Take, for example, one of the U.S. Government's largest contractors: they do this before the first line of code is written, right there in the cloud, as an example. But with the whole shift left that we all hear theCUBE talk a lot about, we see this point where, as you bring up infrastructure, you bring up a complete development environment, a complete test environment. And within that, you want to deploy security, you want to deploy backup, you want to deploy protection at day zero, before the developer is in, so it's there from the first line of code. So you've protected every step of the journey, rather than trying to bolt it on afterwards. Seemingly, yes, I've stitched together a few pieces of technology, but it fundamentally impacts how we're going to build the next generation of secure applications.
>> Danny, I think I heard you say, or announce, that this is going to be integrated into Veeam Backup & Replication. Can you explain what that took? Why is that important? >> Yeah. So the timeline on this: when we do integrations from these native solutions into the core platform, typically it begins with the data integration; in other words, the data being collected by the backup tool is sent to a repository, and that gives us all the benefits, of course, of things like instant recovery and leveraging dedupe storage appliances and all of that. Step two typically is around day two operations, things like pushing out updates to those native solutions. So if you look at what we're doing with the backup for AWS and Azure, we can deploy the components, we can deploy the data proxies and data movers. And then lastly, there's also a reporting aspect to this, because we want to centralize the visibility for the organization across everywhere. So if your policy says, hey, I need two weeks of backups, and after two weeks I need weekly backups for X amount of time, this gives you the ability to see and manage across the organization. So what we've demonstrated already is this data-level integration between the two platforms, and we expect this to continue to go deeper and deeper as we move forward. The interesting thing right now is that the containers team often is different than the standard data center IT team, but we are quickly seeing them merge, and I think the speed of that merging will also impact how quickly we integrate them within our platform.
>> Well, I mean, obviously you see this for cloud developers, and now you're bringing this to any developers, and you know, if I'm a developer and I'm living in an insurance company, I've been, you know, writing COBOL code for a while, I want to, sign me up. I want to get trained on this, right? Because I'm gonna become more valuable. So this is where the industry is headed. You guys talk about modern data protection. I wondered if you could paint a picture for us of sort of what this new world of application development and deployment and data protection looks like, and how it's different from the old world. >> Mhm. So I think that you mentioned the most important word, which is developer. They come first, they are the decision makers in this environment, they're the people that have the most pull, and rightly so. So I think that's the biggest thing at the cultural level, that is, developers are saying, this is what we want and this is what we need to get the job done, we want to move quickly. So some of the things are: let's not slow them down, let's enable them, let's give them an API to work with, right? Now, the bulk of production use will be API-based versus UI-based. Let's transparently integrate into the environment, so for protection, for security, they need zero lines of changed code. So those are some of the ways we approach things. Now, when you go look at the requirements of the developers, they say, I have a CI/CD pipeline, integrate into that. I have a development pipeline, integrate into that. I deploy across multiple clouds sometimes; can you integrate into that and work seamlessly across all those environments? And we see those categories of asks coming up over and over again from people.
>> So the developer writes once and doesn't have to worry about where it's running. It's got the right security, the data protection, and those policies go with it, so that's definitely a different world. Okay, last question. Maybe you guys could each give your opinion on sort of where we're headed, what we can expect from the acquisition, the integration, what should we look forward to and what should we pay attention to? >> Well, the one obvious thing that you're going to see is tremendous growth on the company's side, and that's because Kubernetes is taking off, cloud is taking off, SaaS is taking off, and so there's obvious growth there. And one of the things that we're clearly doing is we're leveraging the power of, you know, a few thousand salespeople to bring this out to market. And so there is a merging of sales and marketing activities and leveraging that scale. But what you shouldn't expect to see anything different on is this obsessive focus on the product, on quality, on making sure that we're highly differentiated, that we have a product that our customers and companies actually need. Niraj? >> Yeah. So I'll agree with everything Danny said. But a couple of things excite me a lot, Dave. We've been roughly eight months or so since acquisition, and I particularly love how the last quarters have gone in terms of how we focus on solving customer problems. All right, so we'll always have that independent support for cloud-native customers, but I'm excited about not just working with the broadest set of customers, and as we scale the team that's going to happen, but providing a bridge to all the folks that grew up in the virtualization world, right? Grew up in the physical world of physical servers, etcetera, and saying, how do we make it easy for you to come over to this new containerization world? What are the on-ramps, bridging that gap, serving as the on-ramp? And we're doing a lot of work there, from the product integration and independent product features that just make it easy, right? And we're already seeing very good feedback for that from the field right now.
>> I really like your position. I just dropped my quarterly cloud update; I look at the Big Four, the Big Four last year spent $100 billion on capex. And I always say that is a gift to companies like yours, because you can be that connection point between the virtualization crowd, the on-prem cloud, any cloud. Eventually we'll be more than just talking about the edge, we'll actually be out there, you know, doing real work. And I just see great times ahead for you guys. So thanks so much for coming on theCUBE and explaining this really exciting new area. Really appreciate it. >> Thank you so much. >> Thank you everybody for watching. This is Dave Vellante for theCUBE and our continuous coverage of VeeamON 2021, the virtual edition. Keep it right there. >> Mm mm mm
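The "application down" view Niraj describes, where a cloud native application is much more than its volumes, can be made concrete with a small inventory script. This is only a hedged sketch of the idea using the official Kubernetes Python client, not Kasten K10's implementation; the namespace name is a placeholder, and a real backup tool would also snapshot the PVC data and ship everything to a repository.

```python
# Minimal sketch: enumerate the pieces of Kubernetes state that make up one
# application, i.e. the things an application-centric backup has to capture
# beyond what sits on disk. Requires the `kubernetes` package and a kubeconfig.
from kubernetes import client, config

def inventory(namespace: str) -> dict:
    """Collect the Kubernetes objects that make up one application's state."""
    config.load_kube_config()          # assumes a local kubeconfig is available
    core = client.CoreV1Api()
    apps = client.AppsV1Api()
    return {
        "deployments": [d.metadata.name for d in apps.list_namespaced_deployment(namespace).items],
        "secrets":     [s.metadata.name for s in core.list_namespaced_secret(namespace).items],
        "configmaps":  [c.metadata.name for c in core.list_namespaced_config_map(namespace).items],
        "pvcs":        [p.metadata.name for p in core.list_namespaced_persistent_volume_claim(namespace).items],
        "services":    [s.metadata.name for s in core.list_namespaced_service(namespace).items],
    }

if __name__ == "__main__":
    # "demo-app" is a hypothetical namespace used only for illustration.
    for kind, names in inventory("demo-app").items():
        print(f"{kind}: {names}")
```

Even this toy listing shows why a volume-only backup misses most of what is needed to restore a containerized application: the secrets, configuration, claims, and service definitions all have to travel together with the data.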

Published Date : May 26 2021


Derek Manky, Fortinet | CUBEConversation


 

>> From "The Cube studios" in Palo Alto and Boston, connecting with thought leaders all around the world. This, is a cube conversation. >> Welcome to this Cube Virtual conversation. I'm Lisa Martin and I'm excited to be talking to one of our cube alumni again, very socially distant, Derek Manky joins me the chief security insights and global for alliances, Fortinet's FortiGuard labs, Derek it's great to see you, even though virtually >> Yep, better safe better safe these days, right? But yeah, it's great to see you again and um I'm really looking forward to a great conversation, as always. >> Yeah! So Wow Has a lot changed since I last saw you? I-I think that's an epic understatement.. But each year we talk with you about the upcoming What's coming up in the threat landscape, what you guys are seeing Some of the attack trends. What are some of the things that you've seen in this very eventful year since we last spoke? >> Yeah.. a lot of a lot of things.. um.. Obviously.. uh.. with the pandemic there has been this big shift in landscape, right? So particularly uh Q3 Q4. So the last half of the year uh now we have a lot of things that were traditionally in corporate safeguards um you know, actual workstations, laptops that were sitting within networks and perimeters of-of organizations, that have obviously moved to work from home. And So, with that, comes a lot of new a-attack opportunities Um We track as, you know, threat until at 40 minutes, so 40 guard labs on a daily basis. And.. uh.. we are clearly seeing that and we're seeing a huge rise in things like um IOT targets, being the number one attacks, so consumer grade routers, um IOT devices, like printers and network attached storage. Those are um some of the most, favorite attack vehicles that cyber criminals are using to get into the-those devices. Of course, once they get in those devices, they can then move, laterally to compromise the..uh corporate laptop as an example. So those are-are very concerning The other thing has been that email that traditionally has been our number one um Another favorite attack platform always has! It's not going away but for the first time this year in.. um in about September, the second half, we saw a web based attacks taking priority for attackers and that's because of this new working environment. A lot of people I'm serving the websites from Again, these devices that were, not, were previously within Um you know, organizations email security is centralized a lot of the times but the web security always isn't. So that's another another shift that we've seen. We're now in the full-blown midst of the online shopping season um action and shopping season is almost every day now (laughter) since this summer >> Yep.. Yep.. >> And we've clearly seen that And we- Just from September up to October we saw over a trillion, not a billion, but a trillion new flows to shopping websites uh In just one month Um So that can- than number continues to rise and continues to rising quickly. >> Yeah. 
So, the expanding threat landscape. I've talked to a number of companies over the last few months that are in this situation where what was maybe a 100% onsite workforce is suddenly working from home, taking either desktops from their offices or using personal devices, and that was a huge challenge we were talking about with respect to endpoint and laptop security. But it's interesting that you're now seeing this web security issue. I know phishing emails are getting more personal, but the fact that website attacks are going up, what are some of the things that you think businesses should do, especially since, as you bring up, we are now in a maybe even more supercharged e-commerce season? How can businesses prepare and become proactive to defend against some of these things, now that the threat surface is even bigger? >> Yeah, a multi-pronged approach. You know, Lisa, like we always say, first of all, just like we have physical distancing, there is cyber distancing, just like we're doing now on this call. But the same thing goes for users. I think there's always a false sense of security, right? When you're just in the home office doing some browsing to a site, you really have to understand that with these sites, just by literally touching them, by going to the URL and clicking on that link, you can get infected that easily. We're seeing a lot of these attacks being driven that way. So, education. There are a lot of free programs; we have one at Fortinet, our information security awareness training. That is something we continually need in order to hone the skills of end users first of all, so that's an easy win, I would say, for organizations. But then there's this multi-pronged approach, right? Things like having EDR, endpoint detection and response, and being able to manage those end users while they're on their devices at home. Having security in place and making sure those devices are up to date in terms of patches, so centralized management is important. Two-factor authentication, or multi-factor authentication, is equally important. Doing things like network segmentation, for end users and for the devices too. So there are a lot of these things where, when you look at the risk that's associated, the risk is always way higher than the investment upfront, in terms of hours, in terms of security platforms. The good thing is there are a lot of solutions out there, and it doesn't have to be complicated. >> That's good, because we have enough complication everywhere else. But you bring up a point about humans, about education. We're kind of always that weakest link, and so many of us that are home now have distractions going on all around. So you might be thinking, "I've got to do some bill pay," and go onto your bank without thinking that that's now a threat landscape. What are some of the things that you're seeing that you think we're going to face in 2021, which is just around the corner? >> Yeah, so we were just talking about those IoT devices. They're the main culprit right now, and they will continue to be for a while. But we have a new class of threat around an emerging technology, which is edge computing. People have always talked about the perimeter, about the perimeter being dead, in other words not just building up a wall on the outside but understanding what's inside, right? That's been the case with IoT, but now edge computing is the emerging technology. The main difference, we say, is with the edge devices themselves; a virtual assistant is the best example I could give, right?
That's what users will be aware of in home networks. Because these devices traditionally have more processing power, they handle more data, and they have more access and privilege to devices like security systems and lights, as an example. Beyond home networks, these edge devices are also being put into military and defense, into critical infrastructure, into field units for oil and gas and electricity, as examples. So this is the new emerging threat: more processing power, more access and privilege, smarter decisions being made on those devices. Those devices are going to be targets for cybercriminals, and that's something I think we're going to see a lot of next year, because it's a bigger reward to the cybercriminal if they can get into them. So targeting the edge is going to be a big thing. I think there's going to be a new class of threats. I'm calling these, and I haven't heard this coined in the industry yet, "EATs", or Edge Access Trojans, because that's what it is: they compromise these devices, and they can then control them and get access to the data. If you think of a virtual assistant, and somebody who can actually compromise that device, think about that data, the voice data flowing through those devices, that they can then use in a cleverly engineered social engineering attack to phish a user, as an example. >> Wow, I never thought about it from that perspective before. Do you think, with all the talk about 5G and what's coming with 5G, that it's going to be an accelerator of some of these trends, of some of these EATs that you talk about? >> Yeah, definitely. So 5G is just a conduit. It's an accelerator, absolutely, a catalyst if you will. It's here; it's been deployed, not worldwide, but in many regions, and that's going to continue. 5G is all about speed, right? And so if you think about how swiftly these attacks are moving, you need to be able to keep up with that from a defense standpoint. Threats move without borders, and unfortunately without restriction a lot of the time. Cybercrime has no borders. They don't have rules, or if there are rules, they don't care about them (laughter), so they break those rules. So they are able to move quickly, right? And that's the problem with 5G, of course: these devices can now communicate quicker, and they can launch even larger-scale things like DDoS, distributed denial of service attacks. That is a very big threat. The other thing about 5G, Lisa, is that it allows peer-to-peer connectivity too, right? So it's like Bluetooth enhanced, in a sense, because now you have devices that interact with each other as well. And when they're interacting with each other, what are they talking about? What data are they passing? That's a whole new security inspection point that we need to look at. And that's what I mean about this: it just reconfirms that the perimeter is dead. >> Right, something we've been talking about, as you said, for a while, but that's some pretty hard-hitting evidence that it is indeed a thing of the past. Something that we've talked about with you in the past is swarm attacks. What's going on there? How are they progressing? >> Yeah, so this is a real threat, but there's good news and bad news. The good news is this is a long-progressing threat, which means we have more time to prepare.
The bad news is we have seen developments in terms of weaponizing this. It's like anything: swarm is a tool, and it can be used for good. DARPA, as an example, has invested a lot into this from the military research side, and it's all around us now in terms of good applications, things like redundancy and robotics, as examples. There are a lot of good things that come from swarm technology. But if it's weaponized, it can have some very scary prospects, and that's what we're starting to see. There's a new botnet that was created this year called HEH, and it's written in Golang. That's a language that basically allows it to infect any number of devices; it's not just your PC, right? It's the same virus, but it can morph onto all these different platforms and devices, whether it's an IoT device or an edge device. But the main characteristic of this is that it's able to actually have communication. They built a communication protocol into it, so the devices can pass files between each other and talk to each other. They don't have machine learning models yet, so in other words they're not quote-unquote "smart" yet, but that's coming. Once that intelligence starts getting baked in, then we have weaponized swarm technology. And what this means is that when you have those devices making decisions on their own and talking to each other, A, they're harder to kill: you take one down, another one takes its place. And B, they are able to move very swiftly, especially when they're piggybacking on and leveraging things like 5G. >> So, I'm just blown away by all these things that you're talking about. So talk about how companies, and even individuals, can defend against this and become proactive. As we know, one of the things we know about 2020 is all the uncertainty, and we're going to continue to see uncertainty, but we also know there's an expectation, globally, that a good amount of people are going to be working from home and connecting to corporate networks for a very long time. So how can companies and people become proactive against these threats? >> Yes: people, process, procedures and technology. I really look at this as a stacked approach. First of all, threats, as I said, are becoming quicker and the attack surface is larger, so you need threat intelligence and visibility. That comes down to security platforms on the technology side: security-driven networking, AI-driven security operations centers. These are new, but they're becoming, as you can imagine, critical to fill that gap, to be able to move as quickly as the attackers. You need to be able to use intelligent technology on your end, because people alone are just too slow. But we can still use people on the process side, trying to understand what the risk is. So, looking at threat intelligence reports; we put out weekly threat intelligence briefs, as an example, at FortiGuard Labs, to be able to understand what the threats are, how to respond to them, how to prioritize them, and then put the proper security measures in place. So there are absolutely relevant technologies that exist today, and in fact now, I think, is the time to really get those deployed before this becomes worse, as we're talking about. And then, as I said earlier, there are also free things that can just be part of our daily lives, right? So that we don't have this false sense of security.
So, understanding that the threat is real, following up on the threat, and doing education. There are also phishing services; again, phishing can be a good tool when it's used in a non-malicious way, to test people's skill sets, as an example. So it's all of that combined. But the biggest thing is definitely relying on things like machine learning and artificial intelligence, to be able to work at speed with these threats. >> Right. So, you also have global threat alliances under your portfolio. Talk to me about how Fortinet is working with global alliance partners to fight this growing attack surface. >> Yeah, so this is the ecosystem. Every organization, whether it's private or public sector, has a different role to play, in essence, right? So you look at the public sector: you have law enforcement, and they're focused on attribution. When we look at cybercrime, it's the hardest thing to do, but if we find out who these cybercriminals are, we can bring them to justice. Our whole goal is to make it more expensive for the cybercriminals to operate. So if we work with law enforcement and it leads to a successful arrest and prosecution, and we've done that in the past, it takes them offline and hits them where it hurts. Law enforcement will typically also work with intelligence leads to freeze assets, as an example, from, say, ransom attacks that are happening. So that's one aspect, but then you have other things, like working with national computer emergency response teams. For disrupting cybercrime we work with national CERTs: if we know that the bad guys are hosting stolen data or communication infrastructure on public servers, we can work with them to actually disrupt that, to take those servers offline. Then you have the private space. Fortinet is a founding member of the Cyber Threat Alliance, and I'm on the steering committee there. This is working even with competitors in our space, where we can quickly share up-to-date intelligence on attackers. We remain competitive on the technology itself, but we're working together to share as much as we know about the bad guys. And recently we also became a founding member of the Centre for Cybersecurity, C4C, with the World Economic Forum. This is another crucial effort that is basically trying to bridge all of that, to bring all of that together, right? Law enforcement, prosecutors, security vendors, intelligence organizations, all under one roof, because we really do need that. It's an entire ecosystem, to make this an effective fight. So it's interesting, because I don't think a lot of people see what's happening behind the scenes a lot of the time, but there is a tremendous effort globally that's happening between all the players. So that's really good news. And the industry piece is something close to my heart; I've been involved in it for a long time, and we continue to support it. >> That's exciting, and that's something that is, unfortunately, very much needed, and it will continue to be as emerging technologies evolve and we get to use them for good things; to your point, bad actors also get to take advantage of them for nefarious things. Derek, it's always great to have you on the program. Any particular things on the Fortinet website that you would point viewers to, to learn more about, say, the 2020 threat landscape? >> Sure.
You can always check out our blog at fortinet.com, under Threat Research. And as I said, on fortiguard.com we also have our playbooks, we have podcasts, and we have our updated threat intelligence briefs too. So those are always great to check out. And just rest assured that, with everything I've been talking about, we're doing a lot of that heavy lifting on the back end. By working with managed security service providers and having all this intelligence baked in, organizations don't have to go and take on a huge OPEX by trying to create a massive security operations center on their own. It's about this technology working together, and that's what we're here for at FortiGuard Labs. >> Awesome. Derek, thank you so much for joining me today in this Cube Conversation. Lots of exciting stuff going on at Fortinet and FortiGuard Labs, as always, which we expect. It's been great to have you. Thank you. >> It's a pleasure. Thanks, Lisa. >> For Derek Manky, I'm Lisa Martin. You're watching the Virtual Cube.

Published Date : Nov 17 2020


DOCKER CLI FINAL


 

>>Hello, my name is John John Sheikh from Mirantis. Welcome to our session on new extensions for Docker's CLI. As we all know, containers are everywhere, Kubernetes is coming on strong, and the CNCF cloud landscape slide has become a marvel to behold, its complexity about to surpass that of the photolithography used to fabricate the old Intel 286, with future generations of the diagram to be built out and up into multiple dimensions using extreme ultraviolet lithography. Meanwhile, complexity is exploding, and uncertainty about tools, platform details, processes and the economic viability of our companies in changing and challenging times is also increasing. Mirantis, as you've already heard today, believes that achieving speed is critical and that speed results from balancing choice with simplicity and security. You've heard about Docker Enterprise Container Cloud, a new framework built on Kubernetes that lets you deploy compliant, secure-by-default Kubernetes clusters on any infrastructure, providing a seamless, self-service-capable cloud experience to developers: get clusters fast, just as you need them, update them seamlessly, and scale them as needed, all while keeping workloads running smoothly. And you've heard how Docker Enterprise Container Cloud also provides all the day one and day two and observability tools, the integration APIs, and top-down security, identity and secrets management to run operations efficiently. You've also heard about Lens, an open source IDE for Kubernetes, aimed at speeding up the most demanding, tightest inner loop of Kubernetes application development. Lens beautifully meets the needs of a new class of developers who need to deal with multiple Kubernetes clusters and multiple apps and projects efficiently, developers who find themselves getting bogged down in CLI-only kubectl workflows and the context switches into and out of them. But what about Docker developers? They're working with the same core technologies all the time. They're accessing many of the same amenities, including Docker Engine - Enterprise, Docker Trusted Registry and so on. Sure, their outer loop might be different; for example, they might be orchestrating on Swarm. Many companies are; our future of Swarm session talks about the ongoing appeal of Swarm and Mirantis' commitment to maintaining and extending the capabilities of Swarm going forward. Docker Enterprise Container Cloud can, of course, deploy Docker Enterprise clusters with 100% Swarm orchestration on computes just as easily. It can provide Kubernetes orchestration, or mixed Swarm and Kubernetes clusters. The problem for Docker devs is that nobody has given them an easy way to use Kubernetes without a learning curve, and without getting familiar with new tools and workflows, many of which involve UIs and are somewhat tedious for people who live on the command line and like it that way. Until now. In a few moments you'll meet my colleagues Chris Price and Laura Powell, who enact a little skit to introduce and demonstrate our new extended Docker CLI plugin for Kubernetes. The plugin offers seamless new functionality, enabling easy context management between the Docker command line and Docker Enterprise clusters deployed by Docker Enterprise Container Cloud. We hope it will help devs work faster and help them adopt Kubernetes as they and their organizations manage platform coexistence or transition. Here's Chris and Laura, or, as we like to call them, developer A and developer B. >>Have you seen the new release of Docker Enterprise Container Cloud?
I'm already finding it easier to manage my collection of UCP clusters. >>I'm glad it's helping you. It's great that we can manage multiple clusters, but the user interface is a little bit cumbersome. >>Why is that? >>Well, if I want to use the Docker CLI with a cluster, I need to download a client bundle from UCP and use it to create a context. I like that I can see what's going on, but it takes a lot of steps. >>Let me guess. Are these the steps? First you have to navigate to the web UI for Docker Enterprise Container Cloud. You need to enter your username and password, and since the cluster you want to access is part of the demo project, you need to change projects. Then you have to choose a cluster, so you choose the first demo cluster here. Now you need to visit the UCP UI for that cluster; you can use the link in the top right corner of the page. Is that about right? >>Uh, yep. >>And this takes you to the UCP login page. Now you can enter your username and password again, but since you've already signed in with Keycloak, you can use that instead. So that's good. Finally, you've made it to the landing page. Now you want to download a client bundle, which you can do by visiting your user profile. You'll generate a new bundle called demo and download it. Now that you have the bundle on your local machine, you can import it to create a Docker context. First, let's take a look at the contexts already on your machine. I can see you have the default context here. Let's import the bundle and call it demo. If we look at our contexts again, you can see that the demo context has been created. Now you can use the context and you'll be able to interact with your UCP cluster. Let's take a look to see if any stacks are running in the cluster. I can see you have a stack called my stack in the default namespace, running on Kubernetes. We can verify that by checking the UCP UI, and there it is: my stack, in the default namespace, running on Kubernetes. Let's try removing the stack, just so we can be sure we're dealing with the right cluster, and it disappears, as you can see. It's easy to use the Docker CLI once you've created a context, but it takes quite a bit of effort to create one in the first place. Imagine... >>Yes, imagine if you had 10 or 20 or 50 clusters to work with. It's a management nightmare. >>Haven't you heard of the Docker Enterprise Container Cloud CLI plugin? >>No. >>I think you're going to like it. Let me show you how it works. It's already integrated with the Docker CLI. You start off by setting it up with your Container Cloud instance; all you need to get started is the base URL of your Container Cloud instance and your username and password. I'll set mine up right now. I have to enter my username and password this one time only, and now I'm all set up. >>But what does it actually do? >>Well, we can list all of our clusters. As you can see, I've got the cluster demo one in the demo project and the cluster demo two in the demo project. Taking a look at the web UI, these are the same clusters we're seeing there. >>Let me check. Looks good to me. >>Now we can select one of these clusters, but let's take a look at our contexts before and after, so we can understand how the plugin manages a context for us. As you can see, I just have my default context stored right now, but I can easily get a context for one of our clusters. Let's try demo two. The plugin says it's created a context called container cloud for me, and it's pointing at the demo two cluster.
Let's see what our contexts look like now, and there's the container cloud context, ready to go. >>That's great. But are you saying that once you've run the plugin, the Docker CLI just works with that cluster? >>Sure, let me show you. I've got a Docker stack right here that deploys WordPress. We'll deploy it to Kubernetes. Head over to the UCP UI for the cluster so you can verify for yourself. Are you ready? >>Yes. >>First I need to make sure I'm using the context, and then I can deploy. And now we just have to wait for the deployment to complete. It's as easy as ever. >>You weren't lying. Can you deploy the same stack to Swarm on my other cluster? >>Of course, and that should also show you how easy it is to switch between clusters. First, let's just confirm that our stack is reported as running: I've got a stack called WordPress demo in the default namespace, running on Kubernetes. To deploy to the other cluster, I first need to select it. That updates the container cloud context, so I don't even need to switch contexts, since I'm already using that one. If I check again for running stacks, you can see that our WordPress stack is gone. Bring up the UCP UI on your other cluster so you can verify the deployment. >>I'm ready. >>I'll start the deployment now. It should be appearing any moment. >>I see the services starting up. That's great. It seems a lot easier than managing contexts manually. But how do I know which cluster I'm currently using? >>Well, you can just list your clusters, like so. Do you see how this one has an asterisk next to its name? That means it's the currently selected cluster. >>I'm sold. Where can I get the plugin? >>Just go to github.com/Mirantis/container-cloud-cli and follow the instructions.
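For readers who want to try the two workflows from this demo at a keyboard, here is a rough sketch of both. These commands are illustrative only: the bundle filename, context, stack and cluster names are placeholders, and the container-cloud plugin's real subcommands and flags should be taken from the README at github.com/Mirantis/container-cloud-cli rather than from this sketch.

    # --- the manual route Laura describes ---
    docker context ls                                   # only the "default" context so far
    docker context import demo ucp-bundle-admin.zip     # create a "demo" context from the downloaded UCP client bundle
    docker context use demo                             # point the Docker CLI at that UCP cluster
    docker stack ls                                     # list stacks running there
    docker stack rm mystack                             # remove a stack to confirm we're on the right cluster

    # --- the plugin route Chris demonstrates (subcommand names are assumptions) ---
    docker container-cloud setup --base-url https://container-cloud.example.com --username dev
    docker container-cloud cluster list                 # shows demo one and demo two; an asterisk marks the selected cluster
    docker container-cloud cluster select demo-two      # updates the "container cloud" context automatically
    docker stack deploy -c wordpress.yml wordpress-demo # the ordinary Docker CLI now targets the selected cluster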

Published Date : Sep 15 2020


ON DEMAND SPEED K8S DEV OPS SECURE SUPPLY CHAIN


 

>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud like experience with the flexibility, speed and security of modern software delivering. Hi, I'm Matt Bentley and I run our technical pre-sales team here at Mirantis. I spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack from the infrastructure on up to the application layer. When building a secure supply chain for container focused platforms, I generally see two different mindsets in terms of where their responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure, yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach falls more of the DevOps philosophy, where a developer and application teams are the owners of their applications from the development through their life cycle, all the way to production. I would refer to this more of a self service model of application delivery and promotion when deployed to a container platform. This is fairly common, organizations where full stack responsibilities have been delegated to the application teams. Even in organizations where full stack ownership doesn't exist, I see the self service application deployment model work very well in lab development or non production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to the compliance and regulatory needs. Developers are responsible for their application development. This can either include dock at the development layer or be more traditional, throw it over the wall approach to application development. There's also quite a common experience around building a center of excellence with this approach where we can take container platforms and be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner of which application teams would consume it. Yeah when examining the two approaches, there are pros and cons to each. Process, controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycles of inefficient resource utilization. And a true infrastructure as a code experience, requires DevOps, related coding skills that teams often have in pockets, but maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this. 
Docker Enterprise Container Cloud provide the foundation for the cloud like experience on any infrastructure without all of the out of the box security and controls that our professional services team and your operations teams spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self service model. No matter if it is a full stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens to allow for multi-cluster visibility that is both developer and operator friendly. Lens provide immediate feedback for the health of your applications, observability for your clusters, fast context switching between environments and allowing you to choose the best in tool for the task at hand, whether it is the graphic user interface or command line interface driven. Combining the cloud like experience with the efficiencies of a secure supply chain that meet your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues and better code quality. As you can see from our clusters we have worked with, we're able to tie these processes back to real cost savings, real efficiency and faster adoption. This all adds up to delivering business value to end users in the overall perceived value. Now let's look and see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, where utilizing Docker desktop to help with consistency of developer experience, GitHub for our source control, Jenkins for our CACD tooling, the Docker trusted registry for our secure container registry and the Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience, no matter where our clusters are deployed. You work with our teams of developers and operators to design a system that provides a fast, consistent and secure experience. For my developers, that works for any application, Brownfield or Greenfield, Monolith or Microservice. Onboarding teams can be simplified with integrations into enterprise authentication services, calls to GitHub repositories, Jenkins access and jobs, Universal Control Plan and Docker trusted registry teams and organizations, Kubernetes namespace with access control, creating Docker trusted registry namespaces with access control, image scanning and promotion policies. So, now let's take a look and see what it looks like from the CICD process, including Jenkins. So let's start with Docker desktop. From the Docker desktop standpoint, we'll actually be utilizing visual studio code and Docker desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process through Docker container utilization at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository. In this case, we'll be using GitHub. Then when Jenkins picks up, it will check out that code from our source code repository, build our Docker containers, test the application that will build the image, and then it will take the image and push it to our Docker trusted registry. 
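As a minimal sketch of the build-and-push step described here, the commands below approximate what a Jenkins job might run once GitHub notifies it of a change. The registry hostname, repository namespace and image name are illustrative assumptions, not values from the demo; the scanning, signing and promotion steps come next and are driven by Docker Trusted Registry policies rather than by these commands.

    # illustrative CI build step; $BUILD_NUMBER is supplied by Jenkins
    git clone https://github.com/example-org/simple-nginx.git
    cd simple-nginx
    docker build -t dtr.example.com/dev/simple-nginx:${BUILD_NUMBER} .
    # quick smoke test of the freshly built image before it goes anywhere
    docker run -d --rm --name ci-smoke -p 8080:80 dtr.example.com/dev/simple-nginx:${BUILD_NUMBER}
    curl -fsS http://localhost:8080/ > /dev/null && echo "smoke test passed"
    docker stop ci-smoke
    # push to the Docker Trusted Registry; DTR scans the image on push
    docker login dtr.example.com
    docker push dtr.example.com/dev/simple-nginx:${BUILD_NUMBER}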
From there, we can scan the image and then make sure it doesn't have any vulnerabilities. Then we can sign them. So once we've signed our images, we've deployed our application to dev, we can actually test our application deployed in our real environment. Jenkins will then test the deployed application. And if all tests show that as good, we'll promote our Docker image to production. So now, let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make on our application. So our marketing team says we need to change containerize NGINX to something more Mirantis branded. So let's take a look at visual studio code, which we'll be using for our ID to change our application. So here's our application. We have our code loaded and we're going to be able to use Docker desktop on our local environment with our Docker desktop plugin for visual studio code, to be able to build our application inside of Docker, without needing to run any command line specific tools. Here with our code, we'll be able to interact with Docker maker changes, see it live and be able to quickly see if our changes actually made the impact that we're expecting our application. So let's find our updated tiles for application and let's go ahead and change that to our Mirantis sized NGINX instead of containerized NGINX. So we'll change it in a title and on the front page of the application. So now that we've saved that changed to our application, we can actually take a look at our code here in VS code. And as simple as this, we can right click on the Docker file and build our application. We give it a name for our Docker image and VS code will take care of the automatic building of our application. So now we have a Docker image that has everything we need in our application inside of that image. So, here we can actually just right click on that image tag that we just created and do run. This will interactively run the container for us. And then once our containers running, we can just right click and open it up in a browser. So here we can see the change to our application as it exists live. So, once we can actually verify that our applications working as expected, we can stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here, we're going to go ahead and make a commit message to say that we updated to our Mirantis branding. We will commit that change and then we'll push it to our source code repository. Again, in this case, we're using GitHub to be able to use as our source code repository. So here in VS code, we'll have that pushed here to our source code repository. And then, we'll move on to our next environment, which is Jenkins. Jenkins is going to be picking up those changes for our application and it checked it out from our source code repository. So GitHub notifies Jenkins that there's a change. Checks out the code, builds our Docker image using the Docker file. So we're getting a consistent experience between the local development environment on our desktop and then in Jenkins where we're actually building our application, doing our tests, pushing it into our Docker trusted registry, scanning it and signing our image in our Docker trusted registry and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed. 
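The local inner loop shown in the VS Code demo can also be reproduced on any machine with plain Docker commands; roughly speaking, the right-click actions do something like the following. Image, container and branch names here are placeholders, not values from the demo.

    # build the image from the project's Dockerfile
    docker build -t simple-nginx:dev .
    # run it locally and check the updated title
    docker run -d --rm --name simple-nginx-dev -p 8080:80 simple-nginx:dev
    curl -s http://localhost:8080/ | grep -i mirantis
    docker stop simple-nginx-dev
    # commit and push so the CI pipeline picks the change up
    git add .
    git commit -m "Update branding to Mirantis"
    git push origin main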
So, here we can see that our title has been updated on our application, so we can verify that it looks good in development. If we jump back here to Jenkins, we'll see that Jenkins go ahead and runs our integration tests for our development environment. Everything worked as expected, so it promoted that image for our production repository in our Docker trusted registry. We're then, we're going to also sign that image. So we're assigning that yes, we've signed off that has made it through our integration tests and it's deployed to production. So here in Jenkins, we can take a look at our deployed production environment where our application is live in production. We've made a change, automated and very secure manner. So now, let's take a look at our Docker trusted registry, where we can see our name space for our application and our simple NGINX repository. From here, we'll be able to see information about our application image that we've pushed into the registry, such as the image signature, when it was pushed by who and then, we'll also be able to see the results of our image. In this case, we can actually see that there are vulnerabilities for our image and we'll actually take a look at that. Docker trusted registry does binary level scanning. So we get detailed information about our individual image layers. From here, these image layers give us details about where the vulnerabilities were located and what those vulnerabilities actually are. So if we click on the vulnerability, we can see specific information about that vulnerability to give us details around the severity and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly we would remediate that in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of that and update it. One of the ways that we can help secure that as a part of the supply chain is to actually take a look at where we get our base layers of our images. Docker hub really provides a great source of content to start from, but opening up Docker hub within your organization, opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to the security of those images. The official images from Docker hub are curated by Docker, open source projects and other vendors. One of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than building your own and also trying to maintain them. Instead of just blindly trusting the content from Docker hub, we can take a set of content that we find useful such as those base image layers or content from vendors and pull that into our own Docker trusted registry, using our mirroring feature. Once the images have been mirrored into a staging area of our Docker trusted registry, we can then scan them to ensure that the images meet our security requirements. And then based off of the scan result, promote the image to a public repository where you can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment. 
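As a hedged sketch of that curation flow: an operator pulls a vetted upstream image once, retags it into a staging namespace in the Docker Trusted Registry and pushes it there, and DTR's scan-on-push and promotion policies take it from there. The registry hostname and namespaces below are assumptions; the mirroring, scanning and promotion rules themselves are configured in DTR, not by these commands.

    # pull the upstream official base image once, from a controlled host
    docker pull alpine:3.12
    # retag into the DTR staging namespace and push; DTR scans the image on push
    docker tag alpine:3.12 dtr.example.com/staging/alpine:3.12
    docker login dtr.example.com
    docker push dtr.example.com/staging/alpine:3.12
    # a DTR promotion policy then copies clean images to the curated public repository,
    # e.g. dtr.example.com/official/alpine:3.12, which is what internal Dockerfiles reference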
So from here, we can find our updated Docker image in our Docker trusted registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we can provide that secure content for our developers in our own Docker trusted registry. So in this case, we're taking a look at our Alpine image that we've mirrored into our Docker trusted registry. Here, we're looking at the staging area where the images get temporarily pulled because we have to pull them in order to actually be able to scan them. So here we set up mirroring and we can quickly turn it on by making it active. And then we can see that our image mirroring, we'll pull our content from Docker hub and then make it available in our Docker trusted registry in an automatic fashion. So from here, we can actually take a look at the promotions to be able to see how exactly we promote our images. In this case, we created a promotion policy within Docker trusted registry that makes it so that content gets promoted to a public repository for internal users to consume based off of the vulnerabilities that are found or not found inside of the Docker image. So our actual users, how they would consume this content is by taking a look at the public to them, official images that we've made available. Here again, looking at our Alpine image, we can take a look at the tags that exist and we can see that we have our content that has been made available. So we've pulled in all sorts of content from Docker hub. In this case, we've even pulled in the multi architecture images, which we can scan due to the binary level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities to be able to give developers a quick opinionated view that focuses around how they would want to view, manage and inspect applications deployed to a Kubernetes cluster. Lens integrates natively out of the box with Universal Control Plane clam bundles. So you're automatically generated TLS certificates from UCP, just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy to view manner. So in this case, let's actually filter down to the application that we just employed to our development environment. Here, we can see the pod for application. And when we click on that, we get instant detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch contexts between different clusters that we have access to. With that, we also have capabilities to be able to quickly deploy other types of components. One of those is helm charts. Helm charts are a great way to package up applications, especially those that may be more complex to make it much simpler to be able to consume and inversion our applications. In this case, let's take a look at the application that we just built and deployed. In this case, our simple NGINX application has been bundled up as a helm chart and is made available through Lens. Here, we can just click on that description of our application to be able to see more information about the helm chart. So we can publish whatever information may be relevant about our application. And through one click, we can install our helm chart. Here, it will show us the actual details of the helm charts. So before we install it, we can actually look at those individual components. 
So in this case, we can see this created an ingress rule. And then this will tell Kubernetes how did it create this specific components of our application. We'd just have to pick a namespace to deploy it to and in this case, we're actually going to do a quick test here because in this case, we're trying to deploy the application from Docker hub. In our Universal Control Plane, we've turned on Docker content trust policy enforcement. So this is actually going to fail to deploy. Because we're trying to employ our application from Docker hub, the image hasn't been properly signed in our environment. So the Docker content trust policy enforcement prevents us from deploying our Docker image from Docker hub. In this case, we have to go through our approved process through our secure supply chain to be able to ensure that we know where our image came from and that meets our quality standards. So if we comment out the Docker hub repository and comment in our Docker trusted registry repository and click install, it will then install the helm chart with our Docker image being pulled from our DTR, which then it has a proper signature. We can see that our application has been successfully deployed through our home chart releases view. From here, we can see that simple NGINX application and in this case, we'll get details around the actual deployed helm chart. The nice thing is, is that Lens provides us this capability here with helm to be able to see all of the components that make up our application. From this view, it's giving us that single pane of glass into that specific application, so that we know all of the components that is created inside of Kubernetes. There are specific details that can help us access the applications such as that ingress rule that we just talked about, gives us the details of that, but it also gives us the resources such as the service, the deployment and ingress that has been created within Kubernetes to be able to actually have the application exist. So to recap, we've covered how we can offer all the benefits of a cloud like experience and offer flexibility around DevOps and operations control processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators, more time designing systems that meet our security and compliance concerns.
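To tie the last demo step back to commands, here is a sketch of the failing and succeeding installs. The chart name and values keys are illustrative and assume the chart exposes image.repository and image.tag as values; the Docker Content Trust enforcement that rejects the unsigned Docker Hub image is a UCP admin setting, not anything passed to helm.

    # first attempt: the chart points at Docker Hub, and UCP's content trust policy
    # rejects the unsigned image, so the workload never starts
    helm install simple-nginx ./simple-nginx-chart --namespace dev \
      --set image.repository=docker.io/example/simple-nginx --set image.tag=1.0.0
    helm uninstall simple-nginx --namespace dev     # clean up the failed release
    # second attempt: point the chart at the signed image in our Docker Trusted Registry
    helm install simple-nginx ./simple-nginx-chart --namespace dev \
      --set image.repository=dtr.example.com/prod/simple-nginx --set image.tag=1.0.0
    # confirm the release and the Kubernetes objects it created
    helm list --namespace dev
    kubectl get deployment,service,ingress --namespace dev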

Published Date : Sep 14 2020


ON DEMAND SEB CONTAINER JOURNEY DEV TO OPS FINAL


 

>> So, hi, my name is Daniel Terry, I work as Lead Designer at SEB. So, today we will go through why we are why we are Mirantis' customer, why we choose Docker Enterprise, and mainly what challenges we were facing before we chose to work with Docker, and where we are today, and our keys to success. >> Hi, my name is Johan, I'm a senior developer and a Tech Lead at SEB. I was in the beginning with Docker for like, four years ago. And as Daniel was saying here, we are going to present to you our journey with Docker and the answers. >> Yeah, who are we? We are SEB group. So we are a classic, financial large institutions. So, classic and traditional banking services. In Sweden, we are quite a big bank, one of the largest. And we are on a journey of transforming the bank so it has to be online 24-seven. People can do their banking business every day, whenever they want, nothing should stop them to be online. So this is putting a lot of pressure on us on infrastructure to be able to give them that service. (drum fill) >> So our timeline here. Is look, we started out with how to facilitate the container technology it has to be. 2016. And, in 2018, we had the first Docker running in SEB in a standalone mode. You need that. We didn't have any swarm, or given up this cluster since a while. For 2019, we have our first Docker-prise enterprise cluster at SEB. And today, 2020, we have the latest and greatest version of Docker installed. We are running around approximately two and a fifth at 450 specs. Around a thousand services and around 1500 containers. So, developer challenges. As for me as a developer, previous to Docker was really, really hard to get things in production. Times. It took big things and ordering services and infrastructures was a pain in the... yeah, you know what I mean? So for me, it was all about processes. We use natural processes and meaning that I wasn't able to, to see maintaining my system in production. I was handing that over to our operations teams and operation teams in that time, they didn't know how the application works. They didn't know how to troubleshoot it and see, well, what's going wrong. They were experts on the infrastructure and the platforms, but not on our applications. We were working in silos, meaning that I as a developer, only did developing things. The operations side did their things, and the security side did their things. But we didn't work as a team. I mean, today we have a completely different way of working. We will not see shapes. I mean, we have persons that were really good in maybe MQ technologies, or in some programming language and so on, but we didn't have the knowledge in the team techs to solve things, as we should have. Long lead times. I mean, everything we were trying to do had to follow the processes as we had. I mean that we should fill in some forms, send it away, hopefully someone was getting, getting back to us and saying, yeah guys, we can help you out with these services or this infrastructure, but it takes a really long time to do that. I mean, ordering infrastructure is when you're not an expert on that really hard to do. And often the orders we made or placed were wrong. When we have forms to fill in, it wasn't possible for us to do things automatically. Meaning that we didn't have the code, or the infrastructure as code. 
Meaning that if we didn't get the right persons into the meetings the first time, we didn't have the possibility to do it the right way, meaning that we had to redo and redo, and hopefully sometimes we got the right. We didn't have consistent set ups between the environments. When we order, as for example, a test environment, we could maybe order it with some minor resources, less CPU, so less memory, less disc or whatever. Or actually less performance on the hardware, but then we moved up to production. We realized that we have different hardware, different discs, different memories, and that could actually cause some serious problems in applications, access-wise. I mean, everyone likes to have exercise, especially if you are the maintainer of the system. That was really, really hard to get. I mean, every system has their own services, their own service, and therefore they need to apply for access to those other services. But today there's a complete difference since we only have one class to produce. Since we don't have infrastructure as a code back then, there were really lots of human errors. I mean, everyone was doing things manually. When you're coming from the Windows perspective, everything is a UI. You tend to prefer that way of working, meaning that if you used to click something in between the environments, the environments will not look the same. Life cycles. I mean, just imagine. When we have the server installed, it's like a pet. You have everything configured all from certificates to port openings, cartels, install patches, you name it. And then imagine that Windows are terminating a version and you need to reinstall that. Everything needs to be redone from the beginning. So there was a really long time taking to, to do the LCM activities, General lack of support of Microservice architecture was really also, a thing that are driving us forward with the containers technology, since we can't scale our applications in the same way as for containers. We, for example, couldn't have two applications or two processes using the same TCP port. For example, if you'd like to scale a web server, you can't do that on the same hardware. You need to have two different servers. And just imagine replacing all the excesses, replacing all the orders again for more hardware, and then manually a setting up there. The low balancer in front is a really huge task to do. And necessarily if you don't have the knowledge how the infrastructure is where you're working, then it's also really hard for you as a developer to do things right. Traditionalist. I mean, the services for us are like pets. They were really, really hard to set up. It'd take maybe a week or so. And if something was wrong with them, we will try to fix them as a pet. I mean, we couldn't just kill them and throw them away. It will actually destroy the application as this, our, like a unit box where all our things are installed. >> So, coming in from the infrastructure part of this, we've also seen challenges. For my team, we're coming from a Windows environment. So doing like a DevOps journey, which we want to do, makes it harder due to our nature in our environments. We are not used to, maybe use API, so we are not used to giving open APIs to our developers to do changes on the servers. Since we are a bank, we don't allow users to log into the servers, which means we have to do things for them all the time. This was very time consuming. And a lot of the challenges we actually still are seeing is the existing infrastructure. 
You can't just put a container platform on top of it and think you're done. One of the biggest issues for us has been getting servers. Windows servers usually take like 15 minutes; Linux servers can take up to two weeks on a bad day. So we really lacked infrastructure as code. If we want a load balancer, that is also an order form. If we want a firewall opening, that's an order form, and hopefully they will not deny it, so it will go faster. So there are a lot of old processes that we need to go through. What we wanted to do is move all of these things to the developers, so they can do it themselves and own their problems, but with our old infrastructure that wasn't possible. We are a heavily ITIL-based organization, meaning that everything went through a CAB, and still does in some ways. We have one major service window every month where we take everything down. There are a lot of people involved in everything, so it's quite hard to know what will be done during the maintenance window. We lacked supporting tools, like good logging tools. We have a bunch of CI/CD tools, but the maturity level of the infrastructure team wasn't that good. Again, order forms and processes. If we want to do our procurement of a new storage system or a backup system, that's what we're talking about here. So for us, containers would solve a lot of problems, because we would, maybe not move the problems to the developers, but we would make it possible for them to own their own problems. So everything that we have talked about up till now boils down to business drivers. Management gave us some policies for how they want to change the company, so we can be this agile and fast-moving bank. One of the biggest drivers is cloud readiness, where containers come in perfectly. We can build it on premises and then move it to the cloud when we are ready, but we also need an exit strategy to move it back on premises if we need to, due to hard regulations. Maybe you can take it from here. >> Absolutely, I definitely can. You're absolutely right. We need to develop things in a certain way, so we can move from infrastructure to infrastructure regardless of the vendor. Meaning that if we are able to run it on-prem, we should be able to run it in the cloud, or vice versa. We should also be able to move between clouds and not be forced into one cloud provider. So that's really important for us at SEB. Short time to market is also a thing here. I mean, we are working with huge customers. I can't name them, but they're really huge, and they need us to keep moving forward. We need to be able to switch really fast from one technology to another; we are here for them. And it's really important for us to get new things out into production really fast. All right, maybe. Anything else? >> From the ops side, we are in a huge DevOps transition, or a forced DevOps transition, which means we need to start looking at new infrastructure solutions, maybe deploy our infrastructure parts inside containers, to be able to use them the same way in the cloud as we do here on premises. We have private clouds which are built on container technology today, so this fits quite well, with the Docker platform being one part of that. >> Yeah.
In addition, we are also working really, really actively with open source platforms, and open source is one of our drivers. We can see that we have a huge number of vendors at SEB, really huge ones, but we can also see that we can facilitate open source platforms and open source technology as well. Container technology will bring that to us. I mean, instead of having a SaaS platform and SaaS services, we can actually instantiate our own with containers. >> Also, since we are quite heavily regulated, the process of approving a SaaS service can take up to two years for us to go through, and by then, is that SaaS service even what we want to use anymore? So we want to develop things on our own premises and scale them to the cloud if we need to. And we also want to be an attractive employer. Maybe it's not the coolest thing for a young student to work on the mainframe. We have a mainframe, it's not going anywhere, but it's hard to get people, and we want to be an attractive employer, and everyone is talking Kubernetes and containers and clouds. So we need to transition into those technologies. >> Yeah, we need to be open-minded and facilitate the new technologies, so we can actually attract new employees. So it's really important for us to have an open mind. Our experience with Docker containers: I mean, as I said before, scalability is a really important thing for us today. When we are using more of a microservice architecture, we need to be able to scale, and we need to scale horizontally instead of vertically. For that, containers are perfect. As we said before, we had a huge problem with environments being set up differently, since it was often done manually. Today, since we have infrastructure as code, it's really, really nice to have things configured exactly the same in all environments. And we also have the same tooling, meaning that if I can run it on my machine, it's the same tooling I will be using to run it for test purposes or in production. That's a huge benefit for us as developers. Time to market. I mean, today we don't have to order servers; we are using the platform-as-a-service approach here. So we have a container cluster that is just sitting there waiting for our services to be hosted. So no more forms, no more calls, no more meetings before we can set up anything. We also own our problems. I mean, before, as I said, we had the processes, meaning that we shipped our applications to some server and then the operations side took over. That's not the case now. We are actually using this as we should in DevOps, meaning the teams are responsible for all their errors as well, even if it's on the infrastructure part. It's completely different if it's a platform problem, because then it's the platform team's problem, and we can use different service windows. We can try stuff out, we have an open mind, and that means I can download and try any container image I would like on my developer machine. It's maybe not okay to run it in production without having the security people look at it, but normally it's really, really much faster: instead of waiting maybe six months, we can maybe wait one week or so. And of course, next to no LCM activities. I mean, as I said before, it could take months to do an LCM activity on multiple servers. Today, our LCM activities are more or less just switching to a new version of the image from Docker Hub. That's all we have to do.
So that's actually handled by the processes we have in our CI/CD pipelines. >> And the last one. So, our keys to success: you should get a mandate from the managers and management that everything should be a container. All new development has to go through a container before you start ordering servers. Everything shall go through a CI/CD pipeline. Here at SEB, our developers build their own CI/CD pipelines. We just provide the platform for them to use it against, and the CI/CD systems, but they build everything themselves, because they know how their application works, how it should be deployed, and with what tools. We just provide them with a tool set. Build a cross team. So you should incorporate all the functions that you need, but you should focus on the developer part, because you are building a platform for the developers, not for operations or security. >> And then maybe >> A lot of... >> you'll be able to take flight. >> Yeah. Luck has nothing to do with it? Yes, it has. Of course luck has something to do with it, even if you're really passionate, even if you're really good at some things. I mean, we got some really nice help from Docker Inc. The technology came in at the right time for us, and we had really engaged people in these projects, and that's real luck for us. >> Yeah. And also, I want to thank our colleagues, because we have another container team who started before us, and they ran into a lot of organizational problems which they have solved, so we could piggyback on those solutions. Also, start small and scale it. This is where Docker swarm fits perfectly. So we actually started with swarm, and we are moving towards Kubernetes on this platform. We will not force-move anything; the developers should just show us what fits their needs. Thank you! >> Thank you very much.
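As an aside for readers following along, the horizontal scaling and "LCM by switching the image version" workflow the SEB team describes can be sketched with the Docker SDK for Python against a swarm-mode cluster. This is only a minimal, hypothetical illustration (the service name, image tags and replica counts are made up), not SEB's actual configuration:

    import docker

    client = docker.from_env()  # talks to the local Docker engine / swarm manager

    # Create a replicated service: horizontal scaling instead of bigger servers.
    service = client.services.create(
        "nginx:1.19",                                   # hypothetical image and tag
        name="web",
        mode=docker.types.ServiceMode("replicated", replicas=3),
        endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
    )

    # Scale out: more replicas behind the same published port, no new hardware order.
    service.scale(6)

    # An "LCM activity": roll to a new image version instead of patching a pet server
    # (the SDK reuses the rest of the current service spec).
    service.update(image="nginx:1.21")

A Docker Compose stack file expresses the same thing declaratively; the point either way is that capacity and upgrades become a code change rather than an order form.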

Published Date : Sep 14 2020


ON DEMAND BUILDING MULTI CLUSTER CONTAINER PLATFORM SPG FINAL 2


 

>> Hello, everyone. I'm Khalil Ahmad, Senior Director, Architecture at S&P Global. I have been working with S&P Global for six years now. Previously, I worked for Citigroup and Prudential. Overall, I have been part of IT industry for 30 years, and most of my professional career has been within financial sector in New York City metro area. I live in New Jersey with my wife and son, Daniel Khalil. I have a Master degree in software engineering from the University of Scranton, and Master in mathematics University of Punjab, Lahore. And currently I am pursuing TRIUM global Executive MBA. A joint program from the NYU Stern, LSE and HEC Paris. So today, I'm going to talk about building multi-cluster scalable container platform, supporting on-prem hybrid and multicloud use cases, how we leverage that with an S&P Global and what was our best story. As far as the agenda is concerned, I will go over, quickly the problem statement. Then I will mention the work of our core requirements, how we get solutioning, how Docker Enterprise helped us. And at the end, I will go over the pilot deployment for a proof of concept which we leverage. So, as far as the problem statement is concerned. Containers, as you all know, in the enterprise are becoming mainstream but expertise remains limited and challenges are mounting as containers enter production. Some companies are building skills internally and someone looking for partners that can help catalyze success, and choosing more integrated solutions that accelerate deployments and simplify the container environment. To overcome the challenges, we at S&P Global started our journey a few years back, taking advantage of both options. So, first of all, we met with all the stakeholder, application team, Product Manager and we define our core requirements. What we want out of this container platform, which supports multicloud and hybrid supporting on-prem as well. So, as you see my core requirements, we decided that we need first of all a roadmap or container strategy, providing guidelines on standards and specification. Secondly, with an S&P Global, we decided to introduce Platform as a Service approach, where we bring the container platform and provide that as a service internally to our all application team and all the Product Managers. Hosting multiple application on-prem as well as in multicloud. Third requirement was that we need Linux and Windows container support. In addition to that, we would also require hosted secure image registry with role based access control and image security scanning. In addition to that, we also started DevOps journey, so we want to have a full support of CI/CD pipeline. Whatever the solution we recommend from the architecture group, it should be easily integrated to the developer workstation. And developer workstation could be Windows, Mac or Linux. Orchestration, performance and control were few other parameter which we'll want to keep in mind. And the most important, dynamic scaling of container clusters. That was something we were also want to achieve, when we introduce this Platform as a Service. So, as far as the standard specification are concerned, we turn to the Open Container Initiative, the OCI. OCI was established in June 2015 by Docker and other leaders in the technology industry. And OCI operates under Linux Foundation, and currently contains two specification, runtime specification and image specification. So, at that time, it was a no brainer, other than to just stick with OCI. 
So, we are following the industry standards and specifications. Now the next step was, okay, the container platform: but what would be our runtime engine? What would be the orchestration? And how would we support it on our on-prem as well as multicloud infrastructure? When it comes to the runtime engine, we decided to go with Docker, which is the default runtime engine, and Kubernetes. And if I may mention, DataDog, in one of their public reports, said Docker is probably the most talked about infrastructure technology of the past few years. So sticking with the Docker runtime engine was a win, and we did not see it bringing any challenges or issues in the future. When it comes to orchestration, we preferred Kubernetes, but at that time there was a challenge: Kubernetes did not support Windows containers. We wanted something that worked with Linux containers but also had the ability to orchestrate Windows containers. So even though long term we wanted to stick with Kubernetes, we also wanted to have Docker swarm. When it comes to on-prem and multicloud, technically you can only support it, as of now (technology may change in the future), if you bring your own orchestration tool. So in our case, having control over orchestration and not being locked in with one cloud provider was the ideal situation. So, with all that research, R&D and findings, we found Docker Enterprise, which lets you securely build, share and run modern applications anywhere. When we came across Docker Enterprise, we were pleased to see that it met most of our core requirements. Whether it is on the developer machine, integrating with their workstation and building the application; whether it comes to sharing those applications in a secure way and collaborating in our pipeline; and lastly, when it comes to running, whether in hybrid or multicloud or edge, on Kubernetes, Docker Enterprise has the support all the way. So, there are three areas I would call out for Docker Enterprise: choice, flexibility and security. I'm sure there are a lot more features in Docker Enterprise as a suite, but we looked at these three words. Very quickly: simplified hybrid orchestration. Define application-centric policies and boundaries; once you define them, you're all set, then you just maintain those policies. Manage diverse applications across mixed infrastructure, with secure segmentation. Then it comes to the secure software supply chain: provenance across the entire lifecycle of apps and infrastructure through enforceable policy, and consistently managing all apps and infrastructure. And lastly, when it comes to infrastructure independence: it easily supported lift and shift, because at the same time our cloud journey was in flight. We were moving from on-prem to the cloud, so support for lift-and-shift applications was on our wish list, and Docker Enterprise did not disappoint us. It also supported both traditional and microservices apps on any infrastructure. So, here we are, Docker Enterprise. Why Docker Enterprise? Some of the items I mentioned in the previous slides, but in addition to those, it is an industry-leading platform, simplifying IT operations for running modern applications at scale, anywhere. Docker Enterprise also has developer tools, so the integration, as I mentioned earlier, was smooth. In addition to all these tools, the main two components, the Universal Control Plane and the Docker Trusted Registry, solved a lot of our problems.
When it comes to orchestration, we have our own Universal Control Plane, which under the hood manages both Kubernetes and Docker swarm clusters. So, guess what? We have Windows support through Docker swarm, and we have Linux support through Kubernetes. Now that paradigm has changed; as of today, Kubernetes supports Windows containers. So we are well placed with UCP, because we have our own orchestration tool, we manage the Kubernetes cluster on Linux, and we can now introduce Windows as well. Then comes the Docker Trusted Registry. Integrated security and role-based access control made for a very smooth transition from our existing artifact storage to DTR. In addition to that, binary-level image scanning was another good feature from the security point of view. So all these options, and our R&D, landed on Docker Enterprise as the way to go. And with Docker Enterprise, we can spin up multiple clusters on-prem and in the cloud, and we have one centralized location to manage those clusters. >> Khalil: So, with all that, now let's talk about our pilot deployment for the proof of concept. In this diagram, on the left side is our on-prem data center, and on the right side is AWS, US East Coast. We picked one region and three zones. And on-prem, we picked one of our data centers in the United States of America, and we started the POC. Our Universal Control Plane had a five-node cluster. The Docker Trusted Registry also had a five-node cluster, and both were in our on-prem data center. When it comes to the worker nodes, we started with an 18-node cluster on the Linux side and a four-node cluster on the Windows side, because the major footprint we had was on the Linux side and the Windows use cases were pretty small. Also, this was just a proof of concept. And in AWS, we mimicked the same worker nodes, similar to what we have on-prem: a 13-node cluster on Linux, and we started with a four-node cluster of Windows containers. And having Direct Connect from our data center to AWS, which already existed, we did not have any connectivity or latency issues. Now, if you look at this diagram, you have a centralized Universal Control Plane and your trusted registry, and we were able to spin up clusters on-prem as well as in the cloud. And we made this happen, end to end, in record time. So later, when we deployed this in production, we also added another cloud provider. What you see in the box on the right side, we just duplicated that box in another cloud platform. So now one orchestration tool manages on-prem and multicloud clusters. Now, in your use case, you may find this a little more in favor of on-prem, but that fit our use case. Later, we did expand the Universal Control Plane and DTR clusters into the cloud as well. And the clusters have grown to hundreds and thousands of worker nodes spanning two cloud providers, with a third being discussed. And this solution has been working very well so far. We did not see any downtime, not a single instance. And we were able to provide a multicloud container Platform as a Service for S&P Global. Thank you for your time. If you have any questions, I have put up my LinkedIn and Twitter account handles; you're welcome to ask any question.
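To make the mixed Linux/Windows scheduling concrete, here is a minimal, hypothetical sketch using the Docker SDK for Python against a swarm-mode cluster of the kind UCP manages. The image and service names are illustrative assumptions, not S&P Global's actual workloads:

    import docker

    client = docker.from_env()

    # A Windows workload pinned to Windows worker nodes via a placement constraint.
    client.services.create(
        "mcr.microsoft.com/windows/servercore/iis",   # illustrative Windows image
        name="legacy-iis",
        constraints=["node.platform.os == windows"],
    )

    # A Linux microservice pinned to Linux workers in the same cluster.
    client.services.create(
        "python:3.9-slim",                             # illustrative Linux image
        name="api",
        command="python -m http.server 8000",
        constraints=["node.platform.os == linux"],
    )

On the Kubernetes side, now that Windows nodes are supported, the equivalent split is usually expressed with a nodeSelector on the kubernetes.io/os label.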

Published Date : Sep 14 2020


Jeff Klink, Sera4 | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020, Virtual. Brought to you by Red Hat, The Cloud Native Computing Foundation and Ecosystem partners. >> Welcome back, I'm Stu Miniman and this is CUBEs coverage of KubeCon CloudNativeCon 2020 in Europe, the virtual edition and of course one of the things we love when we come to these conferences is to get to the actual practitioners, understanding how they're using the various technologies especially here at the CNCF show, so many projects, lots of things changing and really excited. We're going to talk about security in a slightly different way than we often do on theCUBE so happy to welcome to the program from Sera4 I have Jeff Klink who's the Vice President of Engineering and Cloud. Jeff, thanks so much for joining us. >> Thanks too, thanks for having me. >> All right so I teed you up there, give us if you could just a quick thumbnail on Sera4, what your company does and then your role there. >> Absolutely so we're a physical hardware product addressing the telco markets, utility space, all of those so we kind of differentiate herself as a Bluetooth lock for that higher end space, the highest security market where digital encryption is really an absolute must. So we have a few products including our physical lock here, this is a physical padlock, it is where door locks and controllers that all operate over the Bluetooth protocol and that people can just use simply through their mobile phones and operate at the enterprise level. >> Yeah, I'm guessing it's a little bit more expensive than the the padlock I have on my shed which is getting a little rusty and needs a little work but it probably not quite what I'm looking for but you have Cloud, you know, in your title so give us if you could a little bit you know, what the underlying technology that you're responsible for and you know, I understand you've rolled out Kubernetes over the last couple of years, kind of set us up with what were the challenges you were facing before you started using that? >> Absolutely so Stu We've grown over the last five years really as a company like in leaps and bounds and part of that has been the scalability concern and where we go with that, you know, originally starting in the virtual machine space and, you know, original some small customers in telco as we build up the locks and eventually we knew that scalability was really a concern for us, we needed to address that pretty quickly. So as we started to build out our data center space and in this market it's a bit different than your shed locks. Bluetooth locks are kind of everywhere now, they're in logistics, they're on your home and you actually see a lot of compromises these days actually happening on those kind of locks, the home security locks, they're not built for rattling and banging and all that kind of pieces that you would expect in a telco or utility market and in the nuclear space or so you really don't want to lock that, you know, when it's dropped or bang the boat immediately begins to kind of fall apart in your hands and two you're going to expect a different type of security much like you'd see in your SSH certificates, you know, a digital key certificate that arrives there. So in our as we grew up through that piece Kubernetes became a pretty big player for us to try to deal with some of the scale and also to try to deal with some of the sovereignty pieces you don't see in your shed locks. 
The data sovereignty meeting in your country or as close to you as possible to try to keep that data with the telco, with the utility and kind of in country or in continent with you as well. That was a big challenge for us right off the bat. >> Yeah, you know Jeff absolutely, I have some background from the telco space obviously, there's very rigorous certifications, there's lots of environments that I need to fit into. I want to poke at a word that you mentioned, scale. So scale means lots of things to lots of different people, this year at the KubeCon CloudNativeCon show, one of the scale pieces we're talking about is edge just getting to lots of different locations as opposed to when people first thought about, you know, scale of containers and the like, it was like, do I need to be like Google? Do I have to have that much a scale? Of course, there is only one Google and there's only a handful of companies that need that kind of scale, what was it from your standpoint, is it you know, the latency of all of these devices, is it you know, just the pure number of devices, the number of locations, what was what was the scale limiting factor that you were seeing? >> It's a bit of both in two things, one it was a scale as we brought new customers on, there were extra databases, there was extra identity services, you know, the more locks we sold and the more telcos we sold too suddenly what we started finding is that we needed all these virtual machines and sources in some way to tie them together and the natural piece to those is start to build shared services like SSO and single sign on was a huge driver for us of how do we unite these spaces where they may have maintenance technicians in that space that work for two different telcos. Hey, tower one is down could you please use this padlock on this gate and then this padlock on this cabinet in order to fix it. So that kind of scale immediately showed us, we started to see email addresses or other on two different places and say, well, it might need access into this carrier site because some other carrier has a equipment on that site as well. So the scale started to pick up pretty quickly as well as the space where they started to unite together in a way that we said, well, we kind of have to scale to parts, not only the individuals databases and servers and identity and the storage of their web service data but also we had to unite them in a way that was GDPR compliant and compliant with a bunch of other regulations to say, how do we get these pieces together. So that's where we kind of started to tick the boxes to say in North America, in Latin America, South America we need centralized services but we need some central tie back mechanism as well to start to deal with scale. And the scale came when it went from Let's sell 1000 locks to, by the way, the carrier wants 8000 locks in the next coming months. That's a real scalability concern right off the bat, especially when you start to think of all the people going along with those locks in space as well. So that's the that's the kind of first piece we had to address and single sign on was the head of that for us. >> Excellent, well you know, today when we talk about how do i do container orchestration Kubernetes of course, is the first word that comes to mind, can you bring us back though, how did you end up with Kubernetes, were there other solutions you you looked at when you made your decision? What were your kind of key criteria? 
How did you choose what partners and vendors you ended up working with? >> So the first piece was is that we all had a lot of VM backgrounds, we had some good DevOps backgrounds as well but nobody was yet into the the container space heavily and so what we looked at originally was Docker swarm, it became our desktop, our daily, our working environment so we knew we were working towards microservices but then immediately this problem emerged that reminded me of say 10, 15 years ago, HD DVD versus Blu-ray and I thought about it as simply as that, these two are fantastic technologies, they're kind of competing in this space, Docker Compose was huge, Docker Hub was growing and growing and we kind of said you got to kind of pick a bucket and go with it and figure out who has the best backing between them, you know from a security policy, from a usage and size and scalability perspective, we knew we would scale this pretty quickly so we started to look at the DevOps and the tooling set to say, scale up by one or scale up by 10, is it doable? Infrastructure as code as well, what could I codify against the best? And as we started looking at those Kubernetes took a pretty quick change for us and actually the first piece of tooling that we looked at was Rancher, we said well there's a lot to learn the Kubernetes space and the Rancher team, they were growing like crazy and they were actually really, really good inside some of their slack channels and some of their groups but they said, reach out, we'll help you even as a free tier, you know and kind of grow our trust in you and you know, vice versa and develop that relationship and so that was our first major relationship was with Rancher and that grew our love for Kubernetes because it took away that first edge of what am i staring at here, it looks like Docker swarm, they put a UI on it, they put some lipstick on it and really helped us get through that first hurdle a couple years ago. >> Well, it's a common pattern that we see in this ecosystem that you know, open source, you try it, you get comfortable with it, you get engaged and then when it makes sense to roll it into production and really start scaling out, that's when you can really formalize those relationships so bring us through the project if you will. You know, how many applications were you starting with? What was the timeline? How many people were involved? Were there, you know, the training or organizational changes, you know, bring us through under the first bits of the project. >> Sure, absolutely. So, like anything it was a series of VMs, we had some VM that were load balanced for databases in the back and protected, we had some manual firewalls through our cloud provider as well but that was kind of the edge of it. You had your web services, your database services and another tier segregated by firewalls, we were operating at a single DCs. As we started to expand into Europe from the North America, Latin America base and as well as Africa, we said this has got to kind of stop. We have a lot of Vms, a lot of machines and so a parallel effort went underway to actually develop some of the new microservices and at first glance was our proxies, our ingresses, our gateways and then our identity service and SSL would be that unifying factor. 
We honestly knew that moving to Kubernetes in small steps probably wasn't going to be an easy task for us, but moving the majority of services over to Kubernetes and leaving some legacy ones in VMs was definitely the right approach for us, because now we're dealing with ingress from around the world. Now we're dealing with security of the main core stacks. That was our core focus: secure the stacks up front, ingress from everywhere in the world through something like an Anycast technology, and then the gateways handle that and proxy across the globe, and we build up from there, exactly as we did today. So that was the key for us: we developed our microservices, our identity services for SSO, our gateways, and then our web services, all in containers to start, and then we started looking at complementary pieces like email notification mechanisms and text notification, any of those that could be containerized later. The single one-off RESTful services were moved at a later date. All right. >> So Jeff, yeah absolutely. I want to understand, okay, we went through all this technology, we did all these various pieces, what does this mean to your business projects? So you talked about needing to roll out 8000 devices, is that happening faster? You know, what's the actual business impact of this technology that you've rolled out? >> So here's the key part, and here's a differentiator for us: we have two major areas we differentiate in, and the first one is asymmetric cryptography. We do own the patents for that one, so we know our communication is secure, even when we're running over Bluetooth. So that's the biggest and foremost one: how do we communicate with the locks and how do we ensure we can, all the time. Two is offline access. Some of the major players don't have offline access, which means you can download your keys and assign your keys, go off-site, do a site visit to a nuclear bunker, wherever it may be, and we communicate directly with the lock itself. Our core technology is in the embedded controllers in the lock, so that's our key piece, and then the lock is a housing around it, the mechanical mechanism of it all. So knowing that we had offline technology really nailed down allowed us to do what many call the blue-green approach, which is: we're going down for four hours, heads up everybody globally, we really need to make this transition. But the transition was easy to make with our players, you know, these enterprise spaces, when we say we're moving to Kubernetes. It's something where it's kind of a badge of honor to them, and they're saying these guys, you know, they really know what they're doing. They've got Kubernetes on the back end. Some we needed to explain it to, but as soon as they started to hear the words Docker and Kubernetes they just said, wow, these guys are serious about enterprise, we're serious about addressing it, and not only that, they're at the forefront of other technologies. I think that's part of our security plan: we use asymmetric encryption, we don't use the Bluetooth security protocol, so every time that's compromised, we're not compromised, and it's a badge of honor we wear, much like the Kubernetes one.
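For readers who want a feel for what "asymmetric cryptography with offline access" can look like in practice, here is a minimal, hypothetical sketch using the Python cryptography library. It is not Sera4's patented protocol, just the general idea: the lock stores only a public key, and a signed, time-limited credential downloaded to the phone can be verified with no network connection at all:

    import time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Server side (phone is online): issue a time-limited, signed credential.
    server_key = Ed25519PrivateKey.generate()
    lock_trusted_pubkey = server_key.public_key()        # provisioned into the lock earlier

    credential = b"user=jane;lock=gate-12;expires=%d" % (int(time.time()) + 3600)
    signature = server_key.sign(credential)              # phone carries credential + signature

    # Lock side (fully offline): verify the signature and the expiry before actuating.
    def lock_accepts(credential: bytes, signature: bytes, now: float) -> bool:
        try:
            lock_trusted_pubkey.verify(signature, credential)
        except InvalidSignature:
            return False
        fields = dict(f.split(b"=") for f in credential.split(b";"))
        return now < int(fields[b"expires"])

    print(lock_accepts(credential, signature, time.time()))  # True until the credential expires

The user name, lock ID and one-hour expiry above are made-up illustration values; the design point is simply that the private key never leaves the server and no raw key material travels over Bluetooth.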
>> Alright, Jeff, the thing that we're hearing from a lot of companies out there is that that transition that you're going through, from VMs to containerization. I heard you say that you've got a DevOps practice in there; there are some skill set challenges, some training pieces, there's often, you know, maybe a bump or two in the road. I'm sure your project went completely smoothly, but what can you share about, you know, the personnel skill sets, any lessons learned along the way that might help others? >> There was a ton. Rancher took that first edge off of us, you know, kubectl, get things up, get things going, RKE in the Rancher space, the Rancher Kubernetes Engine. They were that first piece to say, how do I get this engine up and going, and then I'll work back and take away some of the UI elements and do it myself. From scheduling and making sure that nodes came up, to understanding a Deployment versus a DaemonSet, that first UI as we moved from a Docker swarm environment to the Rancher environment was really key for us to say, I know what these volumes are, I know the networking and all of these pieces, but I don't know how to put CoreDNS in and start to get them to connect, and all of those aspects. And so that's where the UI part really took over. We had guys that were good on DevOps, we had guys asking, hey, how do I hook it up to a back end, and when you have those UI clicks, like your pod security policy on or off, it's incredible. You turn it on, fine, turn on the pod security policy, and then from there we'll either use the UI or we'll go deeper as we get the skill sets to do that. So it gave us some really good assurances right off the bat. There were some technologies we really had to learn fast: we had to learn the kubectl command line, we had to learn Helm, new infrastructure pieces with Terraform as well. Those are kind of our back end now. Those are our repeatability aspects that we can get going with. So those are our core tools now: it's Rancher every day, it's kubectl from our command lines, and Terraform to make sure we're doing the same thing. But those are all practices where, you know, we cut our teeth with Rancher, we looked at the configs that it generated and said, alright, that's actually a pretty good config, you know, maybe there's a taint or a toleration or a tweak we could make there, but we worked backwards that way, having it give us some best practices and then verifying those.
The team there was healthy, they were growing all the time but sometimes that can just be a face on a company and just talking to the internals candidly as they've always done with us, it's been amazing. So I think that's a great part knowing that there's some great open source texts, Helm Kubernetes as well that have great backers towards them, it's nice to see part of the ecosystem getting back as well in a healthy way rather than a, you know, here's $10,000 Platinum sponsorship. To see them getting the backing from an open source company, I can't say enough for. >> All right, Jeff how about what's going forward from you, what projects you're looking at or what what additions to what you've already done are you looking at doing down the road? >> Absolutely. So the big thing for us is that we've expanded pretty dramatically across the world now. As we started to expand into South Africa, we've expanded into Asia as well so managing these things remotely has been great but we've also started to begin to see some latencies where we're, you know, heading back to our etcd clusters or we're starting to see little cracks and pieces here in some of our QA environment. So part of this is actually the introduction and we started looking into the fog and the edge compute. Security is one of these games where we try to hold the security as core and as tight as you can but trying to get them the best user experience especially in South Africa and serving them from either Europe or Asia, we're trying to move into those data centers and region as well, to provide the sovereignty, to provide the security but it's about latency as well. When I opened my phone to download my digital keys I want that to be quick, I want the administrators to assign quickly but also still giving them that aspect to say I could store this in the edge, I could keep it secure and I could make sure that you still have it, that's where it's a bit different than the standard web experience to say no problem let's put a PNG as close as possible to you to give you that experience, we're putting digital certificates and keys as close as possible to people as well so that's kind of our next generation of the devices as we upgrade these pieces. >> Yeah, there was a line that stuck with me a few years ago, if you look at edge computing, if you look at IoT, the security just surface area is just expanding by orders or magnitude so that just leaves, you know, big challenges that everyone needs to deal with. >> Exactly, yep. >> All right, give us the final word if you would, you know, final lessons learned, you know, you're talking to your peers here in the hallways, virtually of the show. Now that you've gone through all of this, is there anything that you say, boy I wish I had known this it would have been this good or I might have accelerated things or which things, hey I wish I pulled these people or done something a little bit differently. >> Yep, there's a couple actually a big parts right off the bat and one, we started with databases and containers, followed the advice of everyone out there either do managed services or on standalone boxes themselves. That was something we cut our teeth on over a period of time and we really struggled with it, those databases and containers they really perform as poorly as you think they might, you can't get the constraints on those guys, that's one of them. Two we are a global company so we operate in a lot of major geographies now and ETC has been a big deal for us. 
We tried to pull our etcd clusters farther apart for better resiliency, and no matter how much we tweaked and played with that thing: keep those things in a region, keep them in separate, I guess the right word would be availability zones, keep them as redundant as possible and protect those at all costs. As we expanded, we thought our best strategy would be some geographical distribution. The layout that you have in your Kubernetes clusters as you go global, hub-and-spoke versus centralized clusters and pods and pieces like that, look it over with an expert in Kubernetes, talk to them about latencies, and measure that stuff regularly. That is stuff that tore us apart early in the proof of concept and something we had to learn from very quickly. Whether it'll be hub-and-spoke, with centralized etcd and control planes and then workers abroad, or whether you spread the etcd and control planes a little more, that's a strategy that needs to be worked through if you're not just in North America: South America, Europe, Asia. Those are my two biggest pieces, because those were our big performance killers, as well as discovering PSPs, Pod Security Policies, early. Get those in, lock it down, get your environments out of root and off, you know, port 80, things like that on the security space. Those are just your basic housecleaning items to make sure that your latency is low, your performance is high and your security is as tight as you can make it. >> Wonderful. Well, Jeff, thank you so much for sharing the Sera4 story, congratulations to you and your team, and we wish you the best of luck going forward with your initiatives. >> Absolutely, thanks so much Stu. >> All right, thank you for watching. I'm Stu Miniman and thank you for watching theCUBE. (soft music)
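A small, hypothetical illustration of the "measure that stuff regularly" advice: timing TCP round trips to each etcd member (2379 is etcd's default client port; the hostnames below are made up) quickly shows whether members sit too far apart to keep a healthy quorum. etcd's own guidance is to keep peer round-trip times low, which is why stretching members across continents tends to hurt.

    import socket
    import statistics
    import time

    # Hypothetical member endpoints; substitute your own cluster's hosts.
    ETCD_MEMBERS = ["etcd-eu-1.example.internal",
                    "etcd-us-1.example.internal",
                    "etcd-ap-1.example.internal"]

    def tcp_rtt_ms(host: str, port: int = 2379, samples: int = 5) -> float:
        """Median time to open a TCP connection, a rough proxy for peer latency."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                pass
            timings.append((time.perf_counter() - start) * 1000)
        return statistics.median(timings)

    for member in ETCD_MEMBERS:
        print(f"{member}: {tcp_rtt_ms(member):.1f} ms")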

Published Date : Aug 18 2020


Dr. Eng Lim Goh, Joachim Schultze, & Krishna Prasad Shastry | HPE Discover 2020


 

>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi everybody, welcome back. This is Dave Vellante for theCUBE, and this is our coverage of Discover 2020, the virtual experience of HPE Discover. We've done many, many Discovers; usually we're on the show floor, but theCUBE has been virtualized. We talk a lot at HPE Discover about storage and servers and infrastructure and networking, which is great, but the conversation we're going to have now is really about helping the world solve some big problems. And I'm very excited to welcome back to theCUBE Dr. Eng Lim Goh. He's a senior vice president and CTO for AI at HPE. Hello, Dr. Goh. Great to see you again. >> Hello. Thank you for having us, Dave. >> You're welcome. And then our next guest is Professor Joachim Schultze, who is the Professor for Genomics and Immunoregulation at the University of Bonn, amongst other things. Professor, welcome. >> Thank you all. Welcome. >> And then Prasad Shastry is the Chief Technologist for the India Advanced Development Center at HPE. Welcome, Prasad. Great to see you. >> Thank you. Thanks for having me. >> So guys, we have a CUBE first. I don't believe we've ever had three guests in three separate time zones. I'm in a fourth time zone. (guests chuckling) So I'm in Boston. Dr. Goh, you're in Singapore, Professor Schultze, you're in Germany, and Prasad, you're in India. So we've got four different time zones, plus our studio in Palo Alto, which is running this program. So we've actually got five time zones, a CUBE first. >> Amazing. >> Very good. (Prasad chuckles) >> Such is the world we live in. So we're going to talk about some of the big problems. I mean, here's the thing: we're obviously in the middle of this pandemic, we're thinking about the post-isolation economy, et cetera. People compare it, no surprise, to the Spanish flu in the early part of the last century. They talk about the Great Depression, but the big difference this time is technology. Technology has completely changed the way in which we've approached this pandemic, and we're going to talk about that. Dr. Goh, I want to start with you. You've done a lot of work on this topic of swarm learning. If we could, (mumbles) my limited knowledge of this is that we're kind of borrowing from nature. You think about bees looking for a hive as sort of independent agents, but somehow they come together and communicate. Tell us what we need to know about swarm learning and how it relates to artificial intelligence, and we'll get into it. >> Oh, Dave, that's a great analogy, using a swarm of bees. That's exactly what we do at HPE. So let's use that analogy here. When deploying artificial intelligence, a hospital does machine learning on its own patient data, and that could be biased due to demographics and the types of cases they see more of. Sharing patient data across different hospitals to remove this bias is limited, given privacy or even sovereignty restrictions, right? Like, for example, across countries in the EU. HPE swarm learning fixes this by allowing each hospital to still continue learning locally, but at each cycle we collect the learned weights of the neural networks, average them, and send them back down to all the hospitals. And after a few cycles of doing this, all the hospitals will have learned from each other, removing biases, without having to share any private patient data. That's the key.
So, the ability to allow you to learn from everybody without having to share your private patient data, that's swarm learning. >> And part of the key to that privacy is blockchain, correct? I mean, you've been involved in blockchain and invented some things in blockchain, and that's part of the privacy angle, is it not? >> Yes, yes, absolutely. There are different ways of doing this kind of distributed learning, and many of the other distributed learning methods require you to have some central control, right? So Prasad and the team and us came up together with a method where, instead of central control, you use blockchain to do this coordination. So there is no more a central control or coordinator, which is especially important if you want to have a truly distributed, swarm-type learning system. >> Yeah, no need for a so-called trusted third party or adjudicator. Okay. Professor Schultze, let's go to you. You're essentially the use case of this swarm learning application. Tell us a little bit more about what you do and how you're applying this concept. >> I'm actually by training a physician, although I haven't seen patients for a very long time. I'm interested in bringing new technologies to what we call precision medicine. So, new technologies both from the laboratories and from computational sciences, marrying them, and then basically enabling precision medicine, which is a medicine built on many measurements of molecular phenotypes, as we call them. Basically, we measure these processes on different levels, for example the genome, or the genes that are transcribed from the genome. We have thousands of such data points and we have to make sense of this. This can only be done by computation. And as we discussed already, one of the hopes for the future is that with the new wave of developments in artificial intelligence and machine learning, we can make more sense out of this huge amount of data that we generate right now in medicine. And that's what we're interested in finding out: how can we leverage these new technologies to build new diagnostics, new therapy outcome predictors, so we know whether a patient benefits from a diagnostic or a therapy or not. That's what we have been doing for the last 10 years. The most exciting thing I have been through in the last three, four, five years is really when HPE introduced us to swarm learning. >> Okay, and Prasad, you've been helping Professor Schultze actually implement swarm learning for specific use cases that we're going to talk about, COVID, but maybe describe a little bit about your participation in this whole equation. >> Yep, thanks. As Dr. Eng Lim Goh mentioned, we have used blockchain as a backbone to implement the decentralized network, and through that we're enabling a privacy-preserving decentralized network without having any control points, as the Professor explained in terms of precision medicine. So, one of the use cases we are looking at is the blood transcriptomes. Think of it as different hospitals having different sets of transcriptome data, which they cannot share due to the privacy regulations. And now each of those hospitals will train the model on their local data, which is available in that hospital, and share the learnings coming out of that training with the other hospitals. And we iterate over several cycles to merge all these learnings and then finally get to a global model.
So, through that we are able to get to a model whose performance is equal to collecting all the data into a central repository and training on it. And when we are doing this, there can be multiple kinds of challenges. It's good to do decentralized learning, but what about if you have non-IID data? What about if there is a dropout in the network connections? What about if some of the compute nodes just crash, or are probably not seeing a sufficient amount of data? So that's something we tried to build into the swarm learning framework: handling the scenarios of having non-IID data. In a simple word, we could call it having biases. An example: for, let's say, tumors, one hospital might see a high number of cases, whereas another hospital might have a very small number of cases. So we have implemented some techniques in terms of doing the merging, or providing different kinds of weights or tunable parameters, to overcome this set of challenges in swarm learning.
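The merging step Prasad describes is, at its core, a weighted average of each hospital's locally trained parameters, and weighting by local sample counts is one simple way to compensate for nodes that see very different amounts of data. The following is only an illustrative numpy sketch of that idea, with made-up node names and sample counts; it is not HPE's swarm learning implementation, which additionally coordinates the exchange over a blockchain rather than through a central server:

    import numpy as np

    # Locally trained parameters (here, one weight matrix) and sample counts per node.
    local_updates = {
        "hospital_a": (np.random.randn(4, 3), 1200),   # hypothetical: many local cases
        "hospital_b": (np.random.randn(4, 3), 150),    # hypothetical: few local cases
        "hospital_c": (np.random.randn(4, 3), 900),
    }

    def merge(updates):
        """Sample-count-weighted average of parameters; only these, never raw patient data, are exchanged."""
        total = sum(n for _, n in updates.values())
        return sum(weights * (n / total) for weights, n in updates.values())

    global_weights = merge(local_updates)
    # Each node would now load global_weights and start the next local training cycle.
    print(global_weights.shape)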
But what we are analyzing is the middle step, the transcription stage, and the tens of thousands of these transcripts that are produced after the analysis of the blood. The question is, can we find in those tens of thousands of items, or biomarkers, a signature that tells us: this is COVID-19, and this is how serious it is for this patient? Now, the data is enormous for every patient. Then you have a collection of patients in each hospital with a certain demographic, and you also have a number of hospitals around. The point is, how do you get to share all that data in order to have good training of your machine? The issue, of course, is privacy of the data. How do you share that information if privacy restricts you from sharing the data? In this case, swarm learning shares only the learnings, not the private patient data. So we hope this approach will allow all the different hospitals to come together and unite, sharing the learnings and removing biases, so that we have high accuracy in our predictions while at the same time maintaining privacy. >> That's really well explained. And I would like to add, at least for the European Union, that this is extremely important, because the lawmakers and the governments have clearly stated that even under these crisis conditions they will not relax the privacy laws; compliance with privacy laws has to stay as high as outside of the pandemic. And I think there are good reasons for that, because if you lower the bar now, why shouldn't you lower the bar at other times as well? I think that was a wise decision. If you could see in the medical field how difficult it is to discuss how we share data fast enough, you would see that swarm learning is really an amazing solution to that, because that discussion basically goes away. Now we can discuss how we do learning together, rather than discussing what would be a lengthy procedure to move towards data sharing, which is very difficult under the current privacy laws. That's why I was so excited when I learned about it in the first place: we can do things faster that otherwise are either not possible or would take forever. And for a crisis, that's key. That's absolutely key. >> And as a byproduct, there is also the fact that all the data stays where it is, at the different hospitals, with no movement. >> Yeah. Yeah. >> Learn locally, but only share the learnings. >> Right. Very important in the EU, of course, and even in the United States people are debating: what about contact tracing, and using technology, cell phones and smartphones, to do that? And Prasad, I don't know what the situation is like in India, but nonetheless, Dr. Goh's point about just sharing the learnings, bubbling them up and trickling just the metadata, if you will, back down, protects us. But at the same time it allows us to iterate and improve the models. And that's a key part of this: the starting point, and the conclusions that we draw from the models. We've seen this with the pandemic, it changes daily, certainly weekly, but even daily. We continuously improve the conclusions and the models, don't we? >> Absolutely, as Dr. Goh explained well. So, we could look at the clinics or the testing centers in remote places or wherever, collect that data there, and then run it through the transcriptome sequencing.
And then, as we learn from these new samples, we can have that local data participate in swarm learning, not just within a state or a country but globally, to share all of this new incoming data, and also implement some kind of continuous learning to pick up the new signals or new insights as new sets of data come in, and then immediately deploy the model back into inferencing, into the practice of identification. To do this, I think one of the key things we have realized is the importance of making it very simple: making it simple to convert machine learning models to swarm learning, because we know that our subject matter experts are going to develop these models on their choice of platforms, and also making it simple to integrate into the complete machine learning workflow, from collecting the data and pre-processing, through model training, to putting it into inferencing and monitoring performance. We have kept that in mind from the beginning while developing it. So we developed it as pluggable microservices packaged with containers. The whole library can be delivered as a container, with decentralized management command controls, which help to manage the whole swarm network and to start, initiate, and enroll new hospitals or new nodes into the swarm network. At the same time, we also looked at the task of the data scientists and tried to make it very, very easy for them to take their existing models and convert them to the swarm learning framework, so they can enable their models to participate in decentralized learning. We have made it a set of callable REST APIs. And I could say that in the examples we are working on with the Professor, either in the case of leukemia or COVID, the neural network model, we're using a 10-layer neural network, and we could convert it into the swarm model with less than 10 lines of code changes. That's the kind of simplicity we are looking at, so that it helps to make it quicker and faster to get the benefits.
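To make the "less than 10 lines of code changes" point concrete, here is a hedged sketch of what hooking an existing Keras training loop into a swarm framework might look like. The `SwarmCallback` class below is a hypothetical stand-in, not HPE's documented API; in a real framework, that hook would ship the local weights out at a sync interval, participate in the decentralized merge, and load the merged global weights back before training continues.

```python
import numpy as np
from tensorflow import keras

class SwarmCallback(keras.callbacks.Callback):
    """Hypothetical stand-in for a swarm learning hook (illustration only).

    A real implementation would export the local weights at each sync point,
    take part in the blockchain-coordinated merge, and load the merged global
    weights back into self.model; here the exchange is only marked."""
    def __init__(self, sync_interval=1):
        super().__init__()
        self.sync_interval = sync_interval

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.sync_interval == 0:
            pass  # weight export, merge, and reload would be triggered here

# The existing local model is unchanged; only the fit() call gains a callback.
x = np.random.rand(256, 30)                    # placeholder features
y = np.random.randint(0, 2, size=256)          # placeholder labels
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(30,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=3, callbacks=[SwarmCallback(sync_interval=1)], verbose=0)
```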
>> So, the exciting thing here, Dr. Goh, is that this is not an R&D project. This is something that you're actually implementing in the real world, even though it's a narrow example, and there are so many other examples that I'd love to talk about. But please, you had a comment. >> Yes. The key thing here is that in addition to allowing privacy to be kept at each hospital, you also have the issue of different hospitals having data that is skewed differently, right? For example, the demographics could be such that this hospital is seeing a lot more younger patients, and another hospital is seeing a lot more older patients. And if you are doing machine learning in isolation, then your machine might be better at recognizing the condition in the younger population but not the older one, and vice versa. By using this approach of swarm learning, we have the biases removed, so that both hospitals can detect it in both the younger and older populations. So this is an important point, right? The ability to remove biases here. And you can see biases in the different hospitals because of the type of cases they see and the demographics. Now, the other point that's very important to re-emphasize is precisely what Professor Schultze mentioned: how we made it very easy to implement this, right? For example, each hospital starts out with its own neural network, training on its own. All you do is, as Prasad mentioned, change a few lines of code in the original machine learning model, and now you're part of the collective swarm. This is how easy we want it to be to implement, so that we can get, as I like to call it, the hospitals of the world uniting. >> Yeah. >> Without sharing private patient data. >> So, let's double-click on that, Professor. Tell us about your team and how you're taking advantage of this. Dr. Goh just described the simplicity, but what are the skills that you need to take advantage of this? What does your team look like? >> Yeah. So, we actually have a team that ranges from physicians to biologists, from medical experts up to computational scientists. We invested early on in having these interdisciplinary research teams so that we can actually span the whole spectrum. People know about the medicine, they know about the biological basics, but they also know how to implement such new technology. They are probably spearheading that a little bit, but this is the way to go in the future. And I see many institutions going this way, many other groups are going in this direction, because finally medicine understands that without computational sciences, without artificial intelligence and machine learning, we will not answer those questions with the large data that we're using. So I'm fine there. But I also realized that when we entered this project, we basically had our machine learning model from the leukemias, and it really took almost no effort to get this into the swarm. We were ready to go in a very short time. But I would also like to say, and this goes towards the bias that exists in medicine between different places, Dr. Goh said this very nicely: one aspect is the patients and so on, but there are also the techniques, how we do clinical assays. We're using different robots, different automation, to do the analysis. And we actually tried to find out what swarm learning does if we deliberately introduce such a bias ourselves. So I did the following thing. We know that there are different ways of measuring these transcriptomes. We simulated that two hospitals had an older technology and a third hospital had a much newer technology, which is better for understanding the biology and the diseases, but the new technology is prone to not being able to generate data that can be used to learn and then predict on the old technology. So basically it deteriorates: if you take the new one, build a classifier model, and try it on old data, it doesn't work anymore. That's a very hard challenge. We knew it didn't work in the old way, so we pushed it into swarm learning, and the swarm recognized that; we didn't have to take care of it, it didn't matter anymore, because the results were even better by bringing everything together. I was astonished. I mean, it's absolutely amazing: although we knew about the limitations of that one hospital's data, the swarm basically could deal with it. I think there's more to learn about these advantages, and I'm very excited. It's not only transcriptomes that people do. I hope we can very soon do it with imaging. The DZNE has 10 sites in Germany connected to 10 university hospitals.
There's a lot of imaging data, CT scans and MRIs, and this is the next domain in medicine that we would like to apply swarm learning to as well. Absolutely. >> Well, it's very exciting to be able to bring this to the clinical world and make it a sort of ongoing learning. I mean, you think about, again, coming back to the pandemic: initially we thought putting people on ventilators was the right thing to do. We learned, okay, maybe not so much. The efficacy of vaccines and other therapeutics, it's going to be really interesting to see how those play out. My understanding is that the vaccines coming out of China are built for speed, to get to market fast, while in the U.S. they may be trying to build vaccines that are more effective long term. Let's see if that actually occurs with some of those other biases and tests that we can do. That is a very exciting, continuous use case, isn't it? >> Yeah, I think so. Go ahead. >> Yes. In fact, we have another project ongoing to use transcriptome data and other data, like metabolic and cytokine data, all these biomarkers from the blood of volunteers during a clinical trial. The whole idea is to look at all those biomarkers, we're talking tens of thousands of them, the same thing again, and then see if we can streamline clinical trials by looking at that data and training with that data. So again, here you go, right? It's very good that we have many vaccine candidates out there right now; the next long pole in the tent is the clinical trial, and we are working on that also by applying the same concept, but for clinical trials. >> Right. And Prasad, it seems to me that this is a good example of sort of an edge use case, right? You've got a lot of distributed data, and I know you've spoken in the past about the edge generally, where data lives, and about moving data back to sort of a centralized model. But of course you don't want to move data if you don't have to; you want real-time AI inferencing at the edge. So, what are you thinking in terms of other edge use cases where swarm learning can be applied? >> Yeah, that's a great point. We could look at this both in the medical field and in other fields. As the Professor just mentioned about radiographs, think of using this with medical image data as a scenario in the future. If we could have an edge node sitting next to these medical imaging systems, very close to them, then whenever the system produces a medical image, it could be an X-ray, a CT scan, or an MRI scan, the system next to it, attached to it, with a model already built through swarm learning, can do the inferencing. And with the new data, if it sees some kind of outlier in the new images, or some new signals, it could use that new data to initiate another round of swarm learning with all the other involved medical imaging nodes across the globe. All of this can happen without really sharing any of the raw data outside of the systems, just doing the inferencing and then trying to make all of these systems come together and build a better model. >> So, the last question, >> Yeah. >> if I may, because we've got to wrap. I think we first heard about swarm learning, maybe read about it, probably 30 years ago, and then just ignored it and forgot about it.
And now here we are today. Blockchain, of course, we first heard about with Bitcoin, and you're seeing all kinds of really interesting examples. But Dr. Goh, let's start with you. This is really an exciting area, and we're just getting started. Where do you see swarm learning by, let's say, the end of the decade? What are the possibilities? >> Yeah. You could see this being applied in many other industries, right? We've spoken about life sciences and the healthcare industry, but you can imagine a scenario in manufacturing where, a decade from now, you have intelligent robots that can learn from watching a craftsman building a product and then replicate it, right? By just looking, listening, and learning. And imagine now you have multiple of these robots, all sharing their learnings across boundaries, across state boundaries, across country boundaries, provided you allow that, without having to share what they are seeing. They can share what they have learned. You see, that's the difference: without needing to share what they see and hear, they can share what they have learned across all the different robots around the world, all within the community that you allow. So even in manufacturing, you get intelligent robots learning from each other. >> Professor, I wonder, as a practitioner, if you could lay out your vision for where you see something like this going in the future. >> I'll stay with the medical field for the time being, although I agree it will be in many other areas. Medicine has two traditions, for sure. One is learning from each other. That's an old tradition in medicine, thousands of years old. But what's interesting, and even more so in modern times, is that we have no tradition of sharing data. It's just not really inherent to medicine. That's the mindset. So yes, learning from each other is fine, but sharing data is not so fine. Swarm learning deals with that: we can still learn from each other, we can help each other by learning, and this time by machine learning, and we don't have to deal with the data sharing anymore, because that stays with us. So for me, it's really a perfect situation. Medicine could benefit dramatically from it because it goes along with the traditions, and that's very often very important for getting something adopted. On top of that, what is also not seen very well in medicine is that there's a hierarchy, in the sense that certain institutions rule over others, and swarm learning is exactly helping us there, because it democratizes, onboarding everybody. Even if you're a small entity, a small institution or a small hospital, you can become a member in the swarm, and as a member you are important. And there is no central institution that actually rules everything. This democratization, I really love, I have to say. >> Prasad, we'll give you the final word. I mean, your job is all about helping to apply these technologies to solve problems. What's your vision for this? >> Yeah. I think the Professor mentioned one of the very key points, the democratization of AI, and I'd like to just expand on that a little bit. It has a very profound application. Dr. Goh mentioned manufacturing.
So, if you look at any field, it could be health science, manufacturing, autonomous vehicles, in addition to the democratization, and also using the blockchain, we are building a framework to incentivize the people who own certain sets of data to bring the insights from that data to the table for swarm learning. We could build some kind of alternative monetization or incentivization framework on top of the existing swarm learning stack, which we are working on, to enable the participants to bring their data or insights and then get rewarded accordingly. Eventually, we could make this a completely democratized AI, with a full monetization and incentivization system built into it, which would enable all the parties to seamlessly work together. >> So, I think this is just a fabulous example. We hear a lot in the media about the tech backlash, breaking up big tech, and how tech has disrupted our lives. But this is a great example of tech for good, and responsible tech for good. And if you think about this pandemic, if there's one thing it has taught us, it's that disruptions outside of technology, pandemics, natural disasters, climate change, et cetera, are probably going to be bigger disruptions than technology, yet technology is going to help us solve those problems and address those disruptions. Gentlemen, I really appreciate you coming on theCUBE and sharing this great example, and I wish you the best of luck in your endeavors. >> Thank you. >> Thank you. >> Thank you for having me. >> And thank you everybody for watching. This is theCUBE's coverage of HPE Discover 2020, the virtual experience. We'll be right back after this short break. (upbeat music)

Published Date : Jun 24 2020

Elton Stoneman & Julie Lerman | DockerCon 2020


 

>> Speaker: From around the globe, it's theCUBE with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >> Hello, how are you doing? Welcome to DockerCon. We're kind of halfway through now, I guess. Thank you for joining us on this session. So my name is Elton, I'm a Docker Captain, and I'm joined by Julie, who is also a Docker Captain. This session was actually Julie's idea. We were talking about this learning of Docker and how it's a light bulb moment for lots of people, and Julie came up with this great idea for a session on it. So I'll let Julie introduce herself and tell you a bit about what we're going to talk about. >> Thanks, Elton. So I'm Julie Lerman. I'm a software coach, I'm a developer, I've been a developer for over 30 years. I work independently and I'm a Docker Captain. Also a Microsoft Regional Director. I wouldn't let them put it on there, because it makes people think I work for Microsoft, but I don't. (he laughs) >> Yeah, so it's a weird title. The Microsoft Regional Director, it's like a kind of uber MVP. So I'm an MVP, and that's fine, that's just a community recognition, just like you get with a Docker Captain. So MVP is kind of like the micro version, Julie's an MVP too, but then you get the Regional Director, which is something that MVPs get. >> Doesn't matter. >> I'm not surprised, Julie. >> Stop, a humble man. (he laughs) >> We've been using Docker for years, 10 years between us. >> You probably, how long ago was your Docker aha moment? >> So 2014 was when I first started using Docker. I was working on a project where I was consulting for a team who were building an Android tablet, and they were building the whole thing, so they spec'd out the tablet, they got it built over in the Far East. They were building their own OS, their own apps to run on it, and of course all the stacks within it. But it was all talking to services that were running in the cloud; they wanted to use Azure for that, and .NET that had been on-prem, because that was their technology historically. So I came in to do the .NET stuff that was running in Azure, but I got really friendly with the Linux guys. It was very DevOps, it was one team who did the whole thing. And they were using Docker for their build tools and for the CI tools, and they were running their own Git server, and it was all in containers. >> Wow, already in 2014. That's pretty cool. >> Yeah, a pretty early introduction to it. And it was super cool. So I'd always been interested in Linux, but never really dug into it, because the entry bar was so high. You read about this great open source project, and then you go and look at the documentation, and you have to download the source code and build it, and it's like, well, I'm not going to be doing that stuff. And then Docker came along, and I just do docker run. (he laughs) >> Well, I would say mine was definitely delayed from that. I'm still thinking, wait, when you first started saying that this company was building their own Android system, you start thinking they're building software, but no, they weren't building everything, which is pretty amazing. So, I have to say it took me quite a while, but I was also behind on understanding virtual machines. (both laugh) So, Docker comes along, and I have lots of friends who are using it, I spent a lot of time with Michelle Noorali this Monday, and she's a big container person. And most of the people I hear talking about Docker are really doing DevOps, which is not my thing.
As a developer, I always just said, let somebody else do that stuff. I want to code and architect and do things like that. And I also do a lot of data work. I'm not a big data person doing analytics, and I'm not a DBA; I'm more involved in getting data in and out of applications. So my aha moment, I would say, was about four years ago, after Microsoft moved SQL Server over to Linux and then put it inside a Docker image. That was my very first experience, just saying, oh, what does this do, and I downloaded the image and did docker run. And then, literally, I was like, holy smokes. SQL Server is already installed, the container is up just like that, and then it runs a couple of bash and SQL scripts to get all the system tables and databases and things like that. So that's another 15 seconds. But that was literally, for me, not really an aha, it was more like OMG, and I'll keep the F out just to keep it clean here. It was my OMG moment with Docker. So getting that start, I then worked with the SQL Server image and container and did some different things with that in applications. And then eventually I expanded my knowledge out bit by bit, and got a deeper understanding of it and tried more things. So I get to a comfort level and then add to it and add to it. >> Yeah. And I think the great thing about that is that as you're going on that journey, the aha moments keep coming. We had another aha moment this week, with the new announcement that you can use your Docker Compose files and your Docker commands to spin stuff up running in Azure Container Instances. So that learning journey is there if you want to go down it: how do I take my monolithic application and break it up into pieces and run those in containers? Suddenly the fact that you can just glue all these things together and run it on one platform and manage everything in the same way, these light bulbs keep on coming. So, you've seen the modernization things that people are doing; that's a lot of the work that I do now: taking these big applications, you just write a Dockerfile, and you've got your 15-year-old .NET application running in a container. And you can run that in the cloud with no changes to code, without touching them. But that's super powerful for people. >> And I think one of the really important things, especially for people like you and I, who are also teachers, is to try to really remember that moment, because I know a lot of times, when people are deeply expert in something, they forget how hard it was, or what it felt like not to understand that context. So I still hold on to that. When I talk, I like to do an introduction, I like to help people get that aha moment, and then I say, okay, now go on to the really expert people, you're ready to learn more. That's really important, especially for those of us who are teachers, conference speakers, book authors, Pluralsight authors, etc., but also for lots of other people who are working on teams: they might already be somebody who's gotten there with Docker, and they want to help their teammates understand Docker. So I think it's really important, for everybody who wants to share that, to have a little empathy, and remember what that was like, and understand that sometimes it just takes explaining it a different way, maybe just tweaking your expression, or some of the words, or your analogies. >> Yeah, that's definitely true.
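For readers who want to reproduce the kind of aha moment Julie describes, here is a minimal sketch using the Docker SDK for Python (docker-py), roughly equivalent to the docker run command she ran. It assumes Docker is running locally; the image tag and environment variables follow Microsoft's commonly documented SQL Server container settings, so check the current docs before relying on them.

```python
import docker

client = docker.from_env()  # talks to the local Docker engine

# Pull and start the SQL Server image in one call; the container is up in seconds,
# then SQL Server runs its initialization scripts to create the system databases.
container = client.containers.run(
    "mcr.microsoft.com/mssql/server:2019-latest",
    environment={"ACCEPT_EULA": "Y", "SA_PASSWORD": "Example!Passw0rd"},
    ports={"1433/tcp": 1433},
    name="dev-sql",
    detach=True,
)

print(container.name, container.status)

# When the experiment is over, throw the environment away just as easily.
container.stop()
container.remove()
```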
And you often find this: it's a technology that people really become affectionate for, they have a real deep feeling for Docker once they start using it, and you get these internal champions in companies who say, "This is the stuff I've been using, I've been using this at home or whatever," and they want to bring it into their project. And it's pretty cool to be able to say to them: take me on the same journey that you've been on. You've been on a journey which was probably slightly more investment for you, because you had to learn from scratch, but now you can relay that back into your own project. You don't have to take everyone from scratch like you did. You can say, here's the Dockerfile for our own application, this is how it works. And bringing things into the terms that people are using every day, I think, is something that's super powerful. Why are you looking at me strangely? (he laughs) >> Oh, I was being really cool about your video. (both laugh) Maybe it's just how it's streaming back to me. I think the teacher thing again, like, we'll work a little harder and bump our knees and stub our toes, or tear our hair out, or whatever pain we have to go through with that learning, because it's also kind of obsessive. And you can steer people away from those things, although it's also helpful to let them be aware, like, this might happen, and if it does, it's because of this. But that's not the happy path. >> Yeah, absolutely. And I think it's really interesting, talking to people, to get to what problem they are trying to solve. It's interesting, you talk about DevOps there, and how that's sort of not an area that you've done a lot of stuff in. I'm working with a couple of organizations where they're really trying hard to move to that model and trying to break down the barriers between the team who build the software and the team who run the software. But they have those barriers, built up over 20 years, and it's really hard to break that stuff down. It's a big cultural shift, it needs a lot of investment. But if you can make a technological change as well, if you can get people using the same tools, the same languages, the same processes to do things, that makes it so much easier. Like, now my operators are using Dockerfiles, and the security team are going into the Dockerfile and examining it, or the DevOps team are building up my compose file, and everyone's using the same thing. It really helps a lot to bind people together to work on the same area. >> I also do a lot of work in domain-driven design, and that whole idea of collaboration, and bringing together teams that don't normally work together, and enabling them to find a way to collaborate, giving them tools for collaboration, just like what you're saying with having the same terms and using the same tools. So that's really powerful. You gave me a great example of one of your clients' aha moments with Docker. Do you remember which that was? The money one, yes, it's a very powerful aha. >> Yes. >> Cherish that. >> The company that I worked for before, when I was doing consulting, they knew I'd gone into containers; I was working for Docker at the time.
And I went in just as if I wasn't a sales pitch or anything, I was just as a favor to talk to them about what containers would look like if payments, their operation, big heavy Windows users, huge number of environment, lots of VMs that are all running stuff, to get the isolation, and give them what they needed. And I did this presentation of IT. So it wasn't a technical thing. It was very high level, it was about how containers kind of work. And I'm fundamentally a technical person, so I probably have more detail in there. And then you would get from a sales pitch, but it was very much about, you can take your applications, you can wrap them up the running these things for containers, you still get the isolation, you can run loads more of them on the same hardware that you've got, and you don't pay a Windows license each of those containers, you pay a license for the server that the right one. >> That's it, that's the moment. >> And the head of IT said that's going to save us millions of dollars. (he laughs) And that was his aha moment. >> I'm going to wrap that into my conference session, about getting to the Docker, for sure getting that aha moment. My experience is less that but wow, I mean, that's so powerful. When you're talking to come C level people about making those kinds of changes, because you need to have their buy in. So as a developer and somebody who works with developers, and that's kind of my audience, my experience more has been, when I'm giving conference presentations, and I'll start out in a room of people, and I have to say, when I'm at .NET focus conference, I find that the not there yet with Docker. Part of the audience is a big one. So I kind of do a poll at the beginning of the talk. Who's heard of Docker, obviously, they're in the room, but curious because you still don't really understand it. And that's usually a bulk of the room. And what I like to ask at the end is, of all of you that, that first group, like, do you feel like you get it now, like you just get what it is and what it does, as opposed to I don't know what this thing is. It's for rocket scientists. Is that's how I felt about it. I was like, I'm just a developer. It wasn't my thing. But now, I'm still not doing DevOps, I use Docker as a really important tool, during development and test and that's actually one of it I'm going to be talking about that. But it's my session a little later. Oh, like the next hour. It's about using Docker, that my aha Docker, SQL Server, in an image and but using that in Dave Vellante, it's not about the DevOps and the CI/CD and Kubernetes, I can spell it. (he laughs) Especially when I get to say k eight s, Like I even know the cool Lingo (mumbles) on Twitter. (he laughs) >> I think that's one of the cool things about this technology stack in particular, I think to get the most out of it, you need to dig in really light if you want to, if you're looking at doing this stuff in production, if you're attracted by the fact that I can have a managed container platform in anytime. And I can deploy my app, everywhere using the same set of things that compose files or humidity files or whatever. And if you really want to take advantage of that, you kind of have to get down to the principles understand all go on a proper kind of learning journey. But if you don't want to do that, you can kind of stop wherever it makes sense for you. So like even when I'm talking to different audiences, is a lot strangely enough, I did a pool size large bin this morning. 
It was quite a specific topic. It was about building applications in containers. So is about using containers, to compile your app and then package it, so you can build anywhere. But even a session like that, the first maybe two minutes, I give a lightning quick overview, of what containers are and how you use them. Here's exactly like you say, people will come to a session, if it's got Docker or humanities in the title. But if they don't have the entry requirements. They've never really used this stuff. And we were up here and it's a big dump for them. So I try and always have that introductory slide. >> I had to do that on the fly. >> Sorry. >> I've done that on the fly in conference, because yes, doing like, ASP.NET Core with Entity Framework and containers. And, 80% of the room, really didn't know anything about Docker. So, instead of talking like five minutes about Docker and then demoing the rest, I ended up spending more time talking about Docker, to make sure everybody was really you could tell that difference when they're like oh, like that they understood enough, in order to be follow along and understand the value of what it was that I was there to show, about it in that core, I'm also this is making me remember that first time I actually use Docker compose, because it was a while, I was just using the SQL Server, Docker image, in on my development machine for quite a while. And because I wasn't deploying, I was learning and exploring and so I was on my development machine, so I didn't need to do anything else. So the first time I really started orchestrating, that was yet another aha moment. But I was ready for it then. I think you know if you start with Docker compose and you don't haven't done the other, maybe I would write but I was ready, because I'd already gotten used to using the tooling and, really understanding what was going on with containers. Then that Docker compose was like yeah. (he laughs) >> It's just the next one, in the line is a great comment actually in the chat about someone in the chat. >> From chat? >> Yeah, from Steve saying, that he could see there would be an aha moment for his about security. And actually that's absolutely, it's so when security people, first want to get their head around containers, they get worried that if someone can compromise the app in the container, they might get a break out, and get to all the other containers. And suddenly, instead of having one VM compromised, you have 100 containers compromised. But actually, when you dig into it so much easier to get this kind of defense in depth, when you're building in containers, because you have your tape on an image that's owned by your team who produced the path, whether or not they will have their own images, that are built with best practices. You can sign your images, through your platform doesn't run anything that isn't signed, you have a full history of exactly what's in the source code is what's in production, there's all sorts of, ways you can layer on security that, attract that side of the audience. >> I've been looking at you this whole time, and like I forgot about the live chat. There's the live chat. (he laughs) There's Scott Johnston in live chat. >> Yes. >> People talking about Kubernetes and swarm. I'm scrolling through quickly to see if anybody's saying, well, my aha moment was. >> There was a good one. What was this one from Fatima earlier on, Maya was pointing out with almost no configuration onto a VM, and couldn't believe it never looked back on us. >> Yeah. 
>> That's exactly, on one command, if your image is mostly built, SaaS has some sensible defaults, it just all works. And everyone's (mumbles). >> Yeah, and the thing that I'm doing in my session is, what I love. the fact that for development team, Development Testing everybody on the team, and then again on up the pipeline to CI/CD. It's just a matter of, not only do you have your SaaS code, but in your SaaS code, you've got your Docker compose, and your Docker compose just makes sure, that you have the development environment that you need, all the frame, everything that you need is just there, without having to go out and find it and install it. >> There were no gap in a development environment with CI build the production. So I'm hearing, you don't hear but I can hear that we need to wrap up. >> Oh, yeah. >> Get yourself prepared for your next session, which everyone should definitely, I'll be watching everyone else do. So thanks everyone for joining. Thanks, Julie for a great idea for a conversation, was about 4050 we'll have a beer with and I would, I would Yeah. >> Yeah, we live many thousands of miles away from one another. >> Well, hopefully next year, there will be a different topic on how we can all meet some of you guys. >> And I do need to point out, the last time we were together, Elton, I got a copy of Alan's book and he signed it. (both laughs) And we took a picture of it. >> There are still more books on the stand >> Yeah, I know that's an old book, but it's the one that you signed. Thank you so much. >> Thanks everyone for joining and we'll enjoy the rest of the topic home. >> Bye. (soft music)

Published Date : May 29 2020

Amr Abdelhalem, Fidelity Investments | KubeCon + CloudNativeCon NA 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE! Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back. I'm Stu Miniman, with my cohost, John Troyer, and this is theCUBE's fourth year of coverage of KubeCon, CloudNativeCon 2019. We're here in San Diego and happy to welcome to the program a first-time guest, Amr Abdelhalem, who is the head of Cloud Platforms at Fidelity Investments. Of course, Fidelity, we love talking to an end user. Big financial company. Your boss was up on the main stage in front of 8000 people, just in that room, and there are over 12,000 here in person. Fidelity itself, you know, was founded in 1946, with its first computers in 1965. In the last year, you've now got over 500 applications running in the public cloud, and Fidelity also joined the CNCF. So let's start there, Amr, if we would. How does Fidelity look at Kubernetes and the CNCF? How does that fit into your company's mission? >> Absolutely, and thank you so much for inviting me here. Innovation at Fidelity is a big part of the process. We're very focused at this time on cloud computing and machine learning and AI technology. We had the first financial robot in 2015, I believe. We have the first augmented reality financial advisor, which was actually released this year as a prototype. So as part of that innovation, we're seeing CNCF and cloud computing and Cloud Native as keys to the strategy for our innovation. >> All right, maybe if you could, give us a little bit of the breadth and depth of your team, what they cover in cloud platforms. What does that mean inside of Fidelity? >> Sure, so Fidelity has over 10,000 people in IT. Hundreds and hundreds of development teams, thousands of applications. It's globally distributed, it has all kinds of workloads that you can imagine, and it's in a highly regulated environment as well. And that's where we're seeing that we are all looking for this autonomy between teams, agility, and improved time to market and customer experience. The key for that is Cloud Native. We're seeing Kubernetes and CNCF and Cloud Native technology as a key player for us as we go from multicloud to a hybrid cloud model. >> Can you talk a little bit more about that portfolio of technologies? You know, there's a lot of talk about public cloud versus on-prem, as if one thing, one knife, is going to be the only thing you need in your kitchen. >> Amr: Right. >> So you have a portfolio of platforms, a portfolio of destinations, and a portfolio of applications. Can you talk a little bit both about what you're using and maybe how you're organized to address all those needs? >> Absolutely. So, I think 2019, I would say, is the year of the multicloud-hybrid cloud model, right? Actually, I would say that 2020 is going to be more about distributed cloud, where you can distribute your workload across multiple cloud providers. We're not there yet. I don't think anyone is there yet, but at least we should start somewhere, and we already have this multicloud provisioning. Distributing the workload itself, I mean, it's a journey to move thousands of applications, thousands of workloads, and data as well, from on-premises data centers to a public cloud. You need to move through this journey of hybrid cloud models and be able to move apps slowly and progressively. >> All right. Amr, I want to dig into what you talked about there, multicloud. >> Sure.
>> So when you talk about multiple clouds, yes, everybody has that. Walk us through a little bit where you have workloads and how many public clouds you use, but I want to set you up with a premise. You know, we've really said that for multicloud to really be a reality-- >> Amr: Right. >> The value that you extract should be greater than the sum of its parts. And most of us lived through the multi-vendor years, and that wasn't necessarily happiness and joy when I had to span between those environments. So how do we make sure that multicloud doesn't become the least common denominator, or a detriment to what I need to do with my data, my applications, the value that the company has? >> And that's why we are here. That's actually why we are at KubeCon, for that reason. This is where we see the abstraction layer that guarantees you the portability for moving your application from one cloud provider to another: the ability to deploy the same workload into multiple clouds, the ability to have the workload itself managed with different characteristics alongside the services that you will find in AWS, or Azure, or Google Cloud, or the others. That's where we need that flexibility, and with Kubernetes and Cloud Native itself, the ability to have the same deployable structure for your application, the same ecosystem around that construct, around that artifact, and the ability to move all of that, as-is, from one cloud provider to another cloud provider, is a big, big key. And that you can only find with Cloud Native. >> All right, Amr, can you share which cloud or clouds you're working on today, and what is your roadmap? Do you have a timeline for when that vision becomes reality? >> At this moment, we're with the major cloud providers; you guys can name them, all the colors. >> Stu: You're using all of them, okay. >> All the colors. >> And how are you using Kubernetes today? Where are you in that journey? >> So with Kubernetes, I would say the majority is still running on premises. We are moving very intensively to the public cloud on the Kubernetes side. At this moment, actually, we're building an offering inside my team, which is the cloud platform team. That offering will guarantee portability between all the cloud providers, so for a development team porting to our platform, it will be kind of seamless for them where it's going to land, whether that's in AWS or Azure or on premises. >> Okay, joining the CNCF as a member, bring us inside. I understand the journey. Are there any specific goals you have? How do you measure the investment, and what are you hoping, both as a company as well as part of the community, to get out of it? >> So we have a big goal right now to open source one of our projects, a little project about multicloud, and our focus is mainly on the highly regulated side. We're very focused on compliance and security, and in that way, I think, we can contribute back to the open source community around that. >> So Amr, you talked about, you know, we talked about the platforms here, and Kubernetes, but that goes hand-in-hand with the culture, and the up-skilling, and the organization and the processes. What intrigued me is you said, well, we put some things on Kubernetes on-prem, and some things in the cloud, but then we're going to move some of those apps over time, we'll move them to other appropriate homes.
So that implies that you've changed processes, and that you've changed how you build cloud native apps, and that was actually separate, in some cases, from being in the public cloud. Is that the case? Can you talk a little bit about how you've approached this, from the perspective of people who are listening or watching, who are IT admins, and wondering how a company, a major organization like your org, gets there? >> Right, and this is the main challenge. The challenge is not in the technology side itself, or the tools; those are mostly there in the ecosystem at this moment. The challenge is mainly building that culture inside teams. So we're building many, like, starting points or COEs across all of our business units and all of our teams. And again, to build a culture across 10,000-plus developers, that's a major effort. >> And it's funny, because sometimes people go, well, COE is a dirty word, right, don't do a COE, but you said multiple COEs distributed across. >> So it's like a nuclear reaction: our COEs, the first one will communicate with a few COEs, each one of them will work with other COEs, and that's how that chain will go and expand quite quickly. >> All right. >> And this is happening at this moment. >> So, Amr, I have a few friends for whom this is the first time they've come, and they go into the keynote, or they look at the schedule, and they're a bit overwhelmed. >> Amr: Right. >> They say, it's not just Kubernetes, there are dozens and dozens of projects. The ecosystem is sprawling. If you could, give us a little walkthrough as to the projects you're using, and any key partners that you're allowed to talk about that are useful in helping you to achieve your mission. >> So, we're very focused at this moment, actually, on the Kubernetes project itself. We've started exploring some of the open source projects in the CI/CD part. In addition to that, we are starting to use a few frameworks like Flux; this is one of the frameworks for GitOps in general, building this culture of GitOps deployment and moving toward, like, more of a GitOps style of deployment. That's one of the areas that we are very invested in. We're exploring service mesh at this time, and I hope, like, maybe next year we can talk about service mesh more. >> Yeah, is there something that's holding you back on service mesh? 'Cause there are a few options out there at various maturity levels, and it matters who's driving them. What will some of your criteria be? >> I would say it's mainly that I'm waiting a little bit more. It feels like 2014 to me: if we had had this discussion then, instead of sitting here, in 2014 you would be discussing Mesos versus Kubernetes versus Swarm. So I think we are still at that stage with service mesh as well. >> Any partners that you can speak to from a technology standpoint that are helping you, that you're allowed to talk about? >> Amr: Well, I mean, first of all, CNCF. >> Yeah. >> I greatly appreciate all their help in that. Most of the public cloud providers are helping us in these areas as well, yeah. >> I'll be interested in catching you after the show and seeing what you thought. I mean, in some ways this was a science project a few years ago, and now it's this robust thing. I'm curious, did you bring mostly engineers, mostly managers, a mix of the two? >> Amr: Mostly engineers, yeah, mostly engineers. >> Hands on? >> All hands on. I mean, this is like another change in culture right now, where most of our engineers are in innovation, like, they are full stack engineers.
We're using a VDI process at this moment to move forward. All our roadmaps are published internally, and it's being used as an evolving process, with continuous deployment and continuous feature enhancement for the teams. So it's fantastic, honestly, yeah. >> Okay, Amr, what things does your team hope to achieve this week? Anything that is on your roadmap, or on the public open source roadmap, that you're waiting on? We talked a little bit about service mesh. >> We're definitely exploring OPA at this moment. I think there's big potential there, so that's one of them, yeah. And I think going through the showroom and trying to see what options we have as well, that's an area where we're going to be very interested. >> OPA, the Open Policy Agent. I mean, you talked about compliance before. >> Yeah. >> A few years ago, with folks in the financial industry, you would have some arguments, some discussions, sometimes heated discussions, about security in the cloud, et cetera, in a highly regulated industry. Yet, maybe ironically, or maybe surprisingly for some, the whole industry is very advanced in many areas; that's well known if you're in it. Do you still have to have discussions about compliance and security in the cloud? Maybe, I guess, more when you talk about data locality and international borders? >> Right, and that's why we already have our own policy management tool, which we built ourselves, and that's where I see the potential: moving from building it yourself to using an open source project, reusing it, and contributing back to that open source community, with something like OPA, for example. So that's the next generation, where I can see it will help us as well. >> Amr, any advice you'd give your peers out there if they're new to the community? Things you've learned along the journey so far? >> I would say start small, don't boil the ocean. Start with small COEs, small pilot programs. Look for success, look for goals. Technology is great, but don't just move toward technology, because it's a moving target, it will never end. Try to set business goals, targets for your project, and that's how you can achieve success. >> Well, Amr, really appreciate you sharing Fidelity's update. >> Thank you. >> Wish you and your team the best of luck here at the show and beyond, and we definitely hope to catch up soon. >> Thank you, I appreciate it. >> All right, for John Troyer, I'm Stu Miniman. Be sure to check out theCUBE.net for all of the coverage of this event, as well as all the cloud, Cloud Native, and other shows that we have. Thank you for watching theCUBE. (upbeat electronic music)

Published Date : Nov 19 2019

SUMMARY :

Brought to you by Red Hat, and Fidelity also joined the CNCS. Innovation in Fidelity is, a big part of the process. All right, maybe if you could, It had all kind of workloads, that you can imagine. you need in your kitchen. So you have a portfolio of platforms, where you can distribute your workload Amr, I want to dig into what you talked about there, So when you talk about multiple clouds, and that wasn't necessarily happiness and joy, And that you can only find with script native. that, you guys can name them, all the colors. in the Kubernates side. How do you measure the investment, and in that way we can, I think, we can contribute back Is that the case, can you talk a little bit about how in the ecosystem at this moment. but you said multiple COEs distributed across. the first one, that will communicate with few COEs, So, Amr, I have a few friends that this is the first time in helping you to achieve your mission. and in the CICD part, additional to that, Yeah, is there something that's holding you back on you will be discussing Mesos via Kubernetes via Swarm. Most of the public cloud providers are helping us and seeing how you thought, I mean this is, and continues feature enhancement for the teams. that's on the area where we going to be very interested at. in the cloud and et cetera and highly regulated industry, So that's the next generation, and that's how you can achieve success. Well, Amr, really appreciate you sharing Wish you and your team the best of luck here at the show and more shows that we have.


Sanjay Poonen, VMware | Dell Technologies World 2019


 

>> Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners. >> Welcome to the special CUBE live coverage here in Las Vegas at Dell Technologies World 2019. I'm John Furrier with Dave Vellante, breaking down day one of three days of wall-to-wall coverage, two CUBE sets. Uh, big news dropping here today, Dell Technologies World's series of announcements: cloud capability, unified workspaces, and then multi-cloud with, uh, a watershed announcement with Microsoft, support for VMware with Azure. Our guest here, theCUBE alumni, the COO and senior leader of VMware, Sanjay Poonen. It's great to see you. >> John and Dave, always a pleasure to be on your show. >> So before we get into the hardcore news around Microsoft, because you and Satya have a relationship, you also know Andy Jassy very well, you've been following the cloud game in a big way, but also as a senior leader in the industry and leading VMware, um, the evolution of the end user computing kind of genre, that whole area has just completely transformed with mobility and cloud kind of coming together with data and all these new kinds of applications. The modern applications are different. It's changing the game on how end users, employees, normal people use computing, because of some announcements here on that. What's your take on the ever-changing role of cloud and end user software? >> Yeah, John, I think that our vision, as you know, it was the first job I came to do at VMware almost six years ago, to run end user computing. And the vision we had at that time was that you should be able to work at the speed of life, right? You and I happened to be on a plane at the same time yesterday coming here; we should be able to pick our apps up on our devices. You often have Internet now even up at thirty thousand feet. In the consumer world, you don't lug around your CDs; your music, your movies come to you. So the vision of any app on any device was what we articulated with the digital workspace. We had Apple and Google very well figured out: iOS, later on Mac; Android, later on Chrome. The Microsoft relationship in end user computing was contentious because we overlapped. They had a product, EMS and Intune. But we always dreamed of a day. I tweeted out this morning that for five and a half years I competed with these guys. It was always my dream to partner with Microsoft. Um, you know, a wonderful person whom I respect there, Brad Anderson. He's a friend, but we were like LeBron and Steph Curry; we were competing against each other. Today everything changed. We are now partners. Uh, Brad and I were friends, we'll still be friends, we're actually partners now. Why? Because we want to bring the best of the digital workspace solution, VMware brings Workspace ONE, to the best of what Microsoft brings in Microsoft 365, Active Directory, E3 capabilities around EMS and Intune, and combine those together to help customers get the best for any device: Apple, Google, and Microsoft. That's a game changer. >> Tell us about the impact of the real issue of Microsoft on this one point, because is there overlap, are there gaps? As Joe Tucci used to say, you can't have any. There's no, there's no overlap if you have overlap. That's not a-- >> Better to have overlap, it seems, right, than gaps. >> So where's the gaps, where's the overlap? In the cloud? Next, in the end user world? >> There is a little bit of overlap.
But the much bigger picture is the complementarity. We are, for example, not trying to be a directory in the Cloud That's azure active directory, which is the sequel to Active Directory. So if we have an identity access solution that connect to active directory, we're gonna compliment that we've done that already. With Octo. Why not do that? Also inactive Directory Boom that's clear. Ignored. You overlap. Look at the much bigger picture. There's a little bit of overlap between in tune and air Watch capabilities, but that's not the big picture. The big picture is combining workspace one with E. M s. to allow Office 365 customers to get conditional access. That's a game, so I think in any partnership you have to look past, I call it sort of these Berlin Wall moments. If the U. S and Soviet Union will fighting over like East Germany, vs West Germany, you wouldn't have had that Berlin wall moment. You have to look past the overlaps. Look at the much bigger picture and I find the way by which the customer wins. When the customer wins, both sides are happy. >> Tearing down the access wall, letting you get seamless. Access the data. All right, Cloud computing housely Multi cloud announcement was azure something to tell on stage, which was a surprise no one knew was coming. No one was briefed on this. It was kind of the hush hush, the big news Michael Delll, Pat Girl singer and it's nothing to tell up there. Um, Safia did a great job and really shows the commitment of Microsoft with the M wear and Dell Technologies. What is this announcement? First, give us your take an analysis of what they announced. And what does it mean? Impact the customers? >> Yeah, listen, you know, for us, it's a further That's what, like the chess pieces lining up of'Em wars vision that we laid up many years for a hybrid cloud world where it's not all public cloud, it isn't all on premise. It's a mixture. We coined that Tom hybrid loud, and we're beginning to see that realize So we had four thousand cloud providers starting to build a stack on VM, where we announced IBM Cloud and eight of us. And they're very special relationships. But customers, some customers of azure, some of the retailers, for example, like Wal Mart was quoted in the press, released Kroger's and some others so they would ask us, Listen, we're gonna have a way by which we can host BMO Workloads in there. So, through a partnership now with Virtue Stream that's owned by Dell on DH er, we will be able to allow we, um, where were close to run in Virtue Stream. Microsoft will sell that solution as what's called Azure V M, where solutions and customers now get the benefit of GMO workloads being able to migrate there if they want to. Or my great back on the on premise. We want to be the best cloud infrastructure for that multi cloud world. >> So you've got IBM eight of us Google last month, you know, knock down now Azure Ali Baba and trying you. Last November, you announced Ali Baba, but not a solution. Right >> now, it's a very similar solutions of easy solution. There's similar what's announced with IBM and Nash >> So is it like your kids where you loved them all equally or what? You just mentioned it that Microsoft will sell the VM wear on Azure. You actually sell the eight of us, >> so there is a distinction. So let me make that clear because everything on the surface might look similar. We have built a solution that is first and preferred for us. Called were MacLeod on a W s. 
It's a V m er manage solution where the Cloud Foundation stack compute storage networking runs on a ws bare metal, and V. Ember manages that our reps sell that often lead with that. And that's a solution that's, you know, we announced you were three years ago. It's a very special relationship. We have now customer attraction. We announce some big deals in queue, for that's going great, and we want it even grow faster and listen. Eight of us is number one in the market, but there are the customers who have azure and for customers, one azure very similar. You should think of this A similar to the IBM ah cloud relationship where the V C P. V Partners host VM where, and they sell a solution and we get a subscription revenue result out of that, that's exactly what Microsoft is doing. Our reps will get compensated when they sell at a particular customer, but it's not a solution that's managed by BM. Where >> am I correct? You've announced that I think a twenty million dollars deal last quarter via MacLeod and A W. And that's that's an entire deal. Or is that the video >> was Oh, that was an entirely with a customer who was making a big shift to the cloud. When I talked to that customer about the types of workloads, they said that they're going to move hundreds off their APs okay on premise onto via MacLeod. And it appears, so that's, you know, that's the type of cloud transformation were doing. And now with this announcement, there will be other customers. We gave an example of few that Well, then you're seeing certain verticals that are picking as yours. We want those two also be happy. Our goal is to be the undisputed cloud infrastructure for any cloud, any cloud, any AP any device. >> I want to get your thoughts. I was just in the analysts presentation with Dell technology CFO and looking at the numbers, the performance numbers on the revenue side Don Gabin gap our earnings as well as market share. Dell. That scales because Michael Delll, when we interviewed many years ago when it was all going down, hinted that look at this benefits that scale and not everyone's seeing the obvious that we now know what the Amazon scale winds so scale is a huge advantage. Um, bm Where has scale Amazon's got scale as your Microsoft have scales scales Now the new table stakes just as an industry executive and leader as you look at the mark landscape, it's a having have not world you'd have scale. You don't If you don't have scale, you're either ecosystem partner. You're in a white space. How do companies compete in this market? Sanjay, what's your thoughts on I thinkit's >> Jonah's? You said there is a benefit to scale Dell, now at about ninety billion in revenue, has gone public on their stock prices. Done where Dellvin, since the ideal thing, the leader >> and sir, is that point >> leader in storage leader inclined computing peces with Vienna and many other assets like pivotal leaders and others. So that scale VM, Where about a ten billion dollar company, fifth largest software company doing verywell leader in the softer to find infrastructure leader, then use a computing leader and softer, defined networking. I think you need the combination of scale and speed, uh, just scale on its own. You could become a dinosaur, right? And what's the fear that every big company should have that you become ossified? And I think what we've been able to show the world is that V M wear and L can move with scale and speed. It's like having the combination of an elephant and a cheetah and won and that to me special. 
And for companies like us that do have scaled, we've to constantly ask ourselves, How do we disrupt ourselves? How do we move faster? How do we partner together? How do we look past these blind spots? How do we pardon with big companies, small companies and the winner is the customer. That's the way we think. And we could keep doing that, you'll say so. For example, five, six years ago, nobody thought of VMware--this is going before Dell or EMC--in the world of networking, quietly with ten thousand customers, a two million dollar run rate, NSX has become the undisputed leader and software-defined networking. So now we've got a combination of server, storage and a networking story and Dell VMware, where that's very strong And that's because we moved with speed and with scale. >> So of course, that came to an acquisition with Nice Sarah. Give us updates on the recent acquisitions. Hep C e o of Vela Cloud. What's happening there? >> Yeah, we've done three. That, I think very exciting to kind of walk through them in chronological order about eighteen months ago was Velo Cloud. We're really excited about that. It's sort of like the name, velocity and cloud fast. Simple Cloud based. It is the best solution. Ston. How do we come to deciding that we went to talk to our partners like t other service providers? They were telling us this is the best solution in town. It connects to the data center story to the cloud story and allows our virtual cloud network to be the best softer. To find out what you can, you have your existing Mpls you might have your land infrastructure but there's nobody who does softer to find when, like Philip, they're excited about that cloud health. We're very excited about that because that brings a multi cloud management like, sort of think of it like an e r P system on top of a w eso azure to allow you to manage your costs and resource What ASAP do it allows you to manage? Resource is for materials world manufacturing world. In this world, you've got resources that are sitting on a ws or azure. Uh, cloud held does it better than anybody else. Hefty. Oh, now takes a Cuban eighty story that we'd already begun with pivotal and with Google is you remember at at PM world two years ago. And that's that because the founders of Cuban eighties left Google and started FTO. So we're bringing that DNA we've become now one of the top two three contributors to communities, and we want to continue to become the de facto platform for containers. If you go to some of the airports in San Francisco, New York, I think Keilani and Heathrow to you'LL see these ads that are called container where okay, where do you think the Ware comes from Vienna, where, OK, and our goal is to make containers as container where you know, come to you from the company that made vmc possible of'Em where So if we popularized PM's, why not also popularised the best enterprise contain a platform? That's what helped you will help us do >> talk about Coburn at ease for a minute because you have an interesting bridge between end user computing and their cloud. The service is micro. Services that are coming on are going to be powering all these APS with either data and or these dynamic services. Cooper, Nettie sees me the heart of that. We've been covering it like a blanket. Um, I'm gonna get your take on how important that is. Because back Nelson, you're setting the keynote at the Emerald last year. Who burn it eases the dial tone. Is Cooper Netease at odds with having a virtual machine or they complimentary? 
How is that evolving? Is it a hedge? What are the thoughts there? >> Yeah, first off, listen, I think the world has begun to realize it is a world of containers and VMs. If you look at the company that's done the most with containers, Google, they run their containers in VMs in their cloud platform, so it's not one or the other, it's both. There may be a world where some parts of containers run on bare metal, but the bulk of containers today run in VMs. And then I would say, secondly, you know, five, six years ago people all thought that Docker was going to obliterate VMware. But what happened was Docker has become a very good container format, but the orchestration layer from that has not become Docker. In fact, Kubernetes is kind of taking a little of the head of steam off Docker Swarm and Docker Enterprise, and it is Kubernetes that took the steam completely away. So in some senses we waited for the right time to embrace containers, because the obvious choice initially would have been some part of the Docker stack. We waited as Borg became Kubernetes. You know the story of how that came out of Google. We've embraced that big time, and we've made a very important bet with Heptio. All these moves are all part of our goal to become the undisputed enterprise container platform, and we think in a multi-cloud world that's ours to lose. Who else can do multi-cloud better than VMware? Maybe the only company that could have done that was Red Hat, not so much now, inside IBM. I think we have the best chance of doing that relative to anybody else. >> Sanjay, we were talking about this on our intro this morning, keynote analysis, talking about the stock price of Dell Technologies, comparing the stock price of VMware; clearly the analysis shows that VMware was a big part of the Dell Technologies value. How would you summarize what VMware is today? Because on the keynote there was a Bank of America customer. She said she was the CTO, and she says, never mind how we got here, it's how we go forward. VMware is in a similar situation where you've got so much success, you're always fighting for that edge. But as you go forward as a company, there's all these new opportunities; you outlined some of them. What should people know about VMware going forward? What is the vision, in your words? What is VMware? >> I think Pat, myself, and all of the key people among the twenty-five thousand employees of VMware are trying to create the best infrastructure company of all time. We're twenty-one years young, OK, and I think we have an opportunity to create an incredible brand. We just have to, to use his point from the beginning of the show, create platforms. vSphere was a platform. NSX is a platform. Workspace ONE is a platform. vSAN, and the hyperconverged stack with VxRail, becomes a platform that we keep doing. That Kubernetes stuff will become a platform. Then you get platforms upon platforms. Once you create that foundation stone, now with Dell, I think it's a better-together message. You take VxRail; we should be, together, the best option relative to smaller companies like Nutanix. If you take, you know, VMware together with Workspace ONE and laptops, now put Microsoft in the mix, there's nobody else. There are small companies like Citrix and MobileIron trying to do it. We should be better than them in a multi-cloud world. Then maybe you've got the companies like Red Hat; we should do better than them. That said, VMware needs to also have a focus when customers don't have Dell infrastructure.
Some people may have HP servers and emcee storage or Dell Silvers and netapp storage or neither. Dellery emcee in that case, usually via where, And that's the way we roll. We want to be relevant to a multi cloud, multi server, multi storage, any hardware, any cloud. Any AP any device >> I got. I gotta go back to the red hat. Calm in a couple of go. I could see you like this side of IBM, right? So So it looks like a two horse race here. I mean, you guys going hard after multi cloud coming at it from infrastructure, IBM coming at it with red hat from a pass layer. I mean, if I were IBM, I had learned from VM where leave it alone, Let it blossom. I mean, we have >> a very good partisan baby. Let me first say that IBM Global Services GTS is one about top sai partners. We do a ton of really good work with them. Uh, I'm software re partner number different areas. Yeah, we do compete with red hat with the part of their portfolios. Relate to contain us. Not with Lennox. Eighty percent plus of their businesses. Lennox, They've got parts of J Boss and Open Stack that I kind of, you know, not doing so well. But we do compete with open ship. That's okay, but we don't know when we can walk and chew gum so we can compete with Red Hat. And yet partner with IBM. That's okay. Way just need to be the best at doing containing platform is better than open shifter. Anybody, anything that red hat has were still partner with IBM. We have to be able to look at a world that's not black and white. And this partnership with Microsoft is a good example. >> It's not a zero sum game, and it's a huge market in its early days. Talk >> about what's up for you now. What's next? What's your main focus? What's your priorities? >> Listen, we're getting ready for VM World now. You know in August we want to continue to build momentum on make many of these solutions platforms. So I tell our sales reps, take the number of customers you have and add a zero behind that. OK, so if you've got ten thousand customers of NSX, how do we get one hundred thousand customers of insects. You have nineteen thousand customers of Visa, which, by the way, significantly head of Nutanix. How do we have make one hundred ninety thousand customers? And we have that base? Because we have V sphere and we have the Delll base. We have other partners. We have, I think, eighty thousand customers off and use of computing tens of millions of devices. How do we make sure that we are workspace? One is on billion. Device is very much possible. That's the vision. >> I think that I think what's resonating for me when I hear you guys, when you hear you talk when we have conversations also in Pat on stage talks about it, the simplification message is a good one and the consistency of operating across multiple environments because it sounds great that if you can achieve that, that's a good thing. How you guys get into how you making it simple to run I T. And consistent operating environment. It's all about keeping the customer in the middle of this. And when we listen to customs, all of these announcements the partnership's when there was eight of us, Microsoft, anything that we've done, it's about keeping the customer first, and the customer is basically guiding up out there. And often when I sit down with customers, I had the privilege of talking hundreds of thousands of them. Many of these CEOs the S and P five hundred I've known for years from S athe of'Em were they'LL Call me or text me. 
They want us to be a trusted advisor to help them understand where and how they should move in their digital transformation and compared their journey to somebody else's. So when we can bring the best off, for example, of developer and operations infrastructure together, what's called DEV Ops customers are wrestling threw that in there cloud journey when we can bring a multi device world with additional workspace. Customers are wrestling that without journey there, trying to figure out how much they keep on premise how much they move in the cloud. They're thinking about vertical specific applications. All of these places where if there's one lesson I've learned in my last ten twenty years of it has become a trusted advisor to your customers. Lean on them and they will lean on you on when you do that. I mean the beautiful world of technology is there's always stuff to innovate. >> Well, they have to lean on you because they can't mess around with all this infrastructure. They'LL never get their digital transformation game and act together, right? Actually, >>= it's great to see you. We'Ll see you at PM, >> Rollo. Well, well, come on, we gotta talk hoops. All right, All right, All right, big. You're a big warriors fan, right? We're Celtics fan. Would be our dream, for both of you are also Manny's themselves have a privileged to go up against the great Warriors. But what's your prediction this year? I mean, I don't know, and I >> really listen. I love the warriors. It's ah, so in some senses, a little bit of a tougher one. Now the DeMarcus cousins is out for, I don't know, maybe all the playoffs, but I love stuff. I love Katie. I love Clay, you know, and many of those guys is gonna be a couple of guys going free agents, so I want to do >> it again. Joy. Well, last because I don't see anybody stopping a Celtics may be a good final. That would be fun if they don't make it through the rafters, though. That's right. Well, I Leonard, it's tough to make it all right. That sounds great. >> Come on. Sanjay Putin, CEO of BM Wear Inside the Cube, Breaking down his commentary of you on the landscape of the industry and the big news with Microsoft there. Other partner's bringing you all the action here Day one of three days of coverage here in the Cubicle two sets a canon of cube coverage out there. We're back with more after this short break.

Published Date : Apr 29 2019



Derek Manky, Fortinet | Fortinet Accelerate 2019


 

>> Live from Orlando, Florida, it's theCUBE, covering Fortinet Accelerate 19. Brought to you by Fortinet. >> Hey, welcome back to theCUBE. We are live at Fortinet Accelerate 19 in Orlando, Florida. I am Lisa Martin with Peter Burris, and Peter and I are pleased to welcome one of our alumni back to the program, Derek Manky, the Chief of Security Insights for Fortinet. Derek, it's great to have you back on the program. >> So it's always a pleasure to be here. It's always good conversations. I really look forward to it, and it's never a boring day in my office, so we're more than happy to talk about this. >> Fantastic. Excellent. Well, we've been here for a few hours, talking with a lot of your leaders, partners as well. The keynote this morning was energetic, talked a lot about the innovation, talked a lot about the evolution of not just security and threats, but obviously of infrastructure, the multi-cloud hybrid environment in which we live. You have been with FortiGuard Labs for a long time. Talk to us about the evolution that you've seen of the threat landscape and where we are today. >> Sure, yeah. So, you know, I've been fifteen years now at FortiGuard. So I flashed back, even to 2004; it was a vastly different landscape back then, in terms of the Internet and even in terms of our security technology, in terms of what the attack surface was like back then. You know, Ken Xie was talking about edge computing, right? Because that's where, you know, seventy percent of data is not going to be making it to the cloud in the future. A lot of processing is happening on the edge, and threats are migrating that way as well, right? But there's always this mirror image that we see with the threat landscape. Back in 1989, we started with the Morris worm; it was very simple instructions. It took down about eighty percent of the Internet at the time, but it was very simple. It wasn't, quote unquote, intelligent, right? Of course, if we look through the 2000s, we had a lot of these big worms that hit the scene, like Conficker, I Love You, Anna Kournikova, Blaster, Slammer, all these famous worms. They started to become peer to peer, right? So they were able to actually spread from network to network throughout organizations, take down critical services and so forth. That was a big evolutionary piece at the time. Of course, we saw fake antivirus and ransomware come on stage. Lastly, wipers as I called it, which was destructive malware; that was a big shift that we saw, right? So actually physically wiping out data on systems; these are typically, like, state-backed, warfare-based attacks. And that takes us up to today, right? And what we're seeing today, of course, we're still seeing a lot of ransom attacks, but we're starting to see a big shift in technology because of this edge computing use case. So we're seeing now things like swarm networks I've talked about before. So these are not only, like we saw in the 2000s, threats that can shift very quickly from network to network and talk to each other, right, in terms of worms and so forth. We're also seeing now intelligence baked in. And that's a key difference in technology, because these threats are actually able, just like machine-to-machine communication happens through APIs, protocols and so forth, threats are able to do this as well. So they're able to understand their own local environment, and how to adapt to that local environment and capitalize on that.
That's a very, very big shift in terms of technology that we're seeing now the threat landscape. >> So a lot of those old threats were depending upon the action of a human being, right? So in many respects, the creativity was a combination of Can you spook somebody make it interesting so that they'll do something that was always creativity in the actual threat itself. What you're describing today is a world where it's almost like automated risk. We're just as we're trying to do automation to dramatically increase the speed of things, reduce the amount of manual intervention. The bad guy's doing the same thing with the swarms there, introducing technology that is almost an automated attack and reconfigures itself based on whatever environment, conditions of encounters. >> Yeah, and the interesting thing is, what's happening here is we're seeing a reduction in what I call a t t be a time to breach. So if you look at the attack lifecycle, everything does doesn't happen in the blink of an instant it's moving towards that right? But if you look at the good, this's what's to come. I mean, we're seeing a lot of indications of this already. So we work very closely with Miter, the minor attack framework. It describes different steps for the attack life cycle, right? You start with reconnaissance weaponization and how do you penetrator system moving the system? Collect data monetize out as a cyber criminal. So even things like reconnaissance and weaponization. So if you look at fishing campaigns, right, people trying to fish people using social engineering, understanding data points about them that's becoming automated, that you sought to be a human tryingto understand their target, try toe fish them so they could get access to their network. There's tool kits now that will actually do that on their own by learning about data points. So it's scary, yes, but we are seeing indications of that. And and look, the endgame to this is that the attacks were happening much, much quicker. So you've got to be on your game. You have to be that much quicker from the defensive point of view, of course, because otherwise, if successful breach happens, you know we're talking about some of these attacks. They could. They could be successful in matter of seconds or or minutes instead of days or hours like before. You know, we're talking about potentially millions dollars of revenue loss, you know, services. They're being taken out flying intellectual properties being reached. So far, >> though. And this is, you know, I think of health care alone and literally life and death situations. Absolutely. How is Fortinet, with your ecosystem of partners poised to help customers mitigate some of these impending risk changing risk >> coverage? Strengthen numbers. Right. So we have, ah, strong ecosystem, of course, through our public ready program. So that's a technology piece, right? And to end security, how we can integrate how we can use automation to, you know, push security policies instead of having an administrator having to do that. Humans are slow a lot of the time, so you need machine to machine speed. It's our fabric ready program. You know, we have over fifty seven partners there. It's very strong ecosystem. From my side of the House on Threat Intelligence. I had up our global threat alliances, right? So we are working with other security experts around the World Cyberthreat Alliance is a good example. We've created intelligence sharing platforms so that we can share what we call indicators of compromise. 
So basically, blueprints are fingerprints. You can call them of attacks as they're happening in real time. We can share that world wide on a platform so that we can actually get a heads up from other security vendors of something that we might not see on. We can integrate that into our security fabric in terms of adding new, new, you know, intelligence definitions, security packages and so forth. And that's a very powerful thing. Beyond that, I've also created other alliances with law enforcement. So we're working with Interpol that's attribution Base work right that's going after the source of the problem. Our end game is to make it more expensive for cyber criminals to operate. And so we're doing that through working with Interpol on law enforcement. As an example, we're also working with national computer emergency response, so ripping malicious infrastructure off line, that's all about partnership, right? So that's what I mean strengthen numbers collaboration. It's It's a very powerful thing, something close to my heart that I've been building up over over ten years. And, you know, we're seeing a lot of success and impact from it, I think. >> But some of the, uh if you go back and look at some of the old threats that were very invasive, very problematic moved relatively fast, but they were still somewhat slow. Now we're talking about a new class of threat that happens like that. It suggests that the arrangement of assets but a company like Ford and that requires to respond and provide valued customers has to change. Yes, talk a little about how not just the investment product, but also the investment in four guard labs is evolving. You talked about partnerships, for example, to ensure that you have the right set of resources able to be engaged in the right time and applied to the right place with the right automation. Talk about about that. >> Sure, sure. So because of the criticality of this nature way have to be on point every day. As you said, you mentioned health care. Operational technology is a big thing as well. You know, Phyllis talking about sci fi, a swell right. The cyber physical convergence so way have to be on our game and on point and how do we do that? A couple of things. One we need. People still way. Can't you know Ken was talking about his his speech in Davos at the World Economic Forum with three to four million people shortage in cyber security of professionals There's never going to be enough people. So what we've done strategically is actually repositioned our experts of forty guard labs. We have over two hundred thirty five people in forty guard lab. So as a network security vendor, it's the largest security operation center in the world. But two hundred thirty five people alone are going to be able to battle one hundred billion threat events that we process today. Forty guard lab. So so what we've done, of course, is take up over the last five years. Machine learning, artificial intelligence. We have real practical applications of a I and machine learning. We use a supervised learning set so we actually have our machines learning about threats, and we have our human experts. Instead of tackling the threat's one on one themselves on the front lines, they let them in. The machine learning models do that and their training the machine. Just it's It's like a parent and child relationship. It takes time to learn a CZ machines learn. Over time they started to become more and more accurate. 
The only way they become more accurate is by our human experts literally being embedded with these machines and training them >> apart for suspended training. But also, there's assortment ation side, right? Yeah, we're increasing. The machines are providing are recognizing something and then providing a range of options. Thie security, professional in particular, doesn't have to go through the process of discovery and forensics to figure out everything. Absolution is presenting that, but also presenting potential remedial remediation options. Are you starting to see that become a regular feature? Absolutely, and especially in concert with your two hundred thirty five experts? >> Yeah, absolutely. And that's that's a necessity. So in my world, that's what I refer to is actionable intelligence, right? There's a lot of data out there. There's a lot of intelligence that the world's becoming data centric right now, but sometimes we don't have too much data. Askew Mons, a CZ analysts administrators so absolutely remediation suggestions and actually enforcement of that is the next step is well, we've already out of some features in in forty six two in our fabric to be able to deal with this. So where I think we're innovating and pioneering in the space, sir, it's it's ah, matter of trust. If you have the machines O R. You know, security technology that's making decisions on its own. You really have to trust that trust doesn't happen overnight. That's why for us, we have been investing in this for over six years now for our machine learning models that we can very accurate. It's been a good success story for us. I think. The other thing going back to your original question. How do we stack up against this? Of course, that whole edge computing use case, right? So we're starting to take that machine learning from the cloud environment also into local environments, right? Because a lot of that data is unique, its local environments and stays there. It stays there, and it has to be processed that such too. So that's another shift in technology as we move towards edge computing machine learning an artificial intelligence is absolutely part of that story, too. >> You mentioned strengthen numbers and we were talking about. You know, the opportunity for Fortinet to help customers really beat successful here. I wanted to go back to forty guard labs for a second because it's a very large numbers. One hundred billion security events. Forty Guard labs ingests and analyzes daily. Really? Yes, that is a differentiator. >> Okay, that that's a huge huge differentiator. So, again, if I look back to when I started in two thousand four, that number would have been about five hundred thousand events today, compared to one hundred billion today. In fact, even just a year ago, we were sitting about seventy five to eighty billion, so that numbers increased twenty billion and say twenty percent right in in just a year. So that's that's going to continue to happen. But it's that absolutely huge number, and it's a huge number because we have very big visibility, right. We have our four hundred thousand customers worldwide. We have built a core intelligence network for almost twenty years now, since for Deena was founded, you know, we we worked together with with customers. So if customers wish to share data about attacks that are happening because attackers are always coming knocking on doors. Uh, we can digest that. We can learn about the attacks. 
We know you know what weapons that these cybercriminals they're trying to use where the cybercriminals are. We learned more about the cyber criminals, so we're doing a lot of big data processing. I have a date, a science team that's doing this, in fact, and what we do is processes data. We understand the threat, and then we take a multi pronged approach. So we're consuming that data from automation were pushing that out first and foremost to our customers. So that's that automated use case of pushing protection from new threats that we're learning about were contextualizing the threat. So we're creating playbooks, so that playbook is much like football, right? You have to know your your your offense, right? And you have to know how to best understand their tactics. And so we're doing that right. We're mapping these playbooks understanding, tactics, understanding where these guys are, how they operate. We take that to law enforcement. As I was saying earlier as an example, we take that to the Cyber Threat Alliance to tow our other partners. And the more that we learn about this attack surface, the more that we can do in terms of protection as well. But it's it's a huge number. We've had a scale and our data center massively to be able to support this over the years. But we are poised for scale, ability for the future to be able to consume this on our anti. So it's it's, um it's what I said You know the start. It's never a boring day in my office. >> How can it be? But it sounds like, you know, really the potential there to enable customers. Any industry too convert Transport sees for transform Since we talked about digital transformation transformed from being reactive, to being proactive, to eventually predictive and >> cost effective to write, this's another thing without cybersecurity skills gap. You know this. The solution shouldn't be for any given customer to try. Toe have two hundred and thirty people in their security center, right? This is our working relationship where we can do a lot of that proactive automation for them, you know, by the fabric by the all this stuff that we're doing through our investment in efforts on the back end. I think it's really important to and yeah, at the end of the day, the other thing that we're doing with that data is generating human readable reports. So we're actually helping our customers at a high level understand the threat, right? So that they can actually create policies on their end to be able to respond to this right hard in their own security. I deal with things like inside of threats for their, you know, networks. These air all suggestions that we give them based off of our experience. You know, we issue our quarterly threat landscape report as an example, >> come into cubes. Some of your people come in the Cuban >> talk about absolutely so That's one product of that hundred billion events that were processing every day. But like I said, it's a multi pronged approach. We're doing a lot with that data, which, which is a great story. I think >> it is. I wish we had more time. Derek, Thank you so much for coming by. And never a dull moment. Never a dull interview when you're here. We appreciate your time. I can't wait to see what that one hundred billion number is. Next year. A forty nine twenty twenty. >> It will be more. I can get you. >> I sound like a well, Derek. Thank you so much. We appreciate it for Peter Burress. I'm Lisa Martin. You're watching the Cube?
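Derek's description of the supervised learning setup, human experts labeling threats and training the models that then handle the front line, maps onto a very standard pattern. A minimal sketch of that pattern is below; the feature names, data, and model choice are invented for illustration and are not FortiGuard's actual pipeline.

# Sketch of the supervised-learning pattern described above: analysts label a
# sample of events, a model is trained on those labels, and the model then
# scores the incoming stream. All features and numbers here are synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Toy feature vectors per event: [bytes_out, distinct_ports, payload_entropy, failed_logins]
benign = rng.normal(loc=[2000, 3, 3.5, 0], scale=[500, 1, 0.5, 1], size=(500, 4))
malicious = rng.normal(loc=[9000, 40, 7.5, 6], scale=[2000, 10, 0.5, 3], size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # labels supplied by human analysts

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))

# Score a new event; anything above a chosen threshold is queued for an analyst.
new_event = np.array([[8500, 35, 7.2, 4]])
print("malicious probability:", model.predict_proba(new_event)[0][1])

The threshold at the end reflects the division of labor Derek describes: the models score the bulk of the event stream at machine speed, and only the uncertain or high-risk cases come back to a human analyst for review.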

Published Date : Apr 9 2019



Dan Kohn, CNCF | KubeCon 2018


 

>> Live from Seattle, Washington it's the CUBE covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back, everyone. We are here live with CUBE coverage at KubeCon, CloudNativeCon 2018 in Seattle. I'm John Furrier with Stu Miniman your hosts all week, three days of coverage. We're in day two. 8,000 attendees, up from 4,000, spanning to China, in Europe, everywhere, the CNCF is expanding. The Linux Foundation, and the ecosystems expanding, we're here with Dan Kohn who's the executive director of the CNCF. Dan, great to see you. I know you work hard. (laughs) I see you out in China. You've done the work. You guys and the team have taken this hockey stick as it's described on the Twittersphere, really up and to the right, you've doubled, it's almost like Moore's law for attendance. (laughs) Doubling every six months. It's really a testament of how it's structured, how you guys are managing it, the balances that you go through. So congratulations. >> So thank you very much, and I'm thrilled that you guys have been with us through that whole ride, that we met here in Seattle two years ago at the first KubeCon we ran with 1,000 attendees. And here we are eight times higher two years later. But I absolutely do need to say it is the community that's growing, and we try and organize them a little bit and harness some of that excitement and energy and then there is a ton of logistics and effort that it takes to go from 28 members to 349 and to put on an event like this, but we do have an amazing team at the Linux Foundation and this is absolutely an all hands on deck where the entire events team is out here and working really hard. >> You guys are smart, you know what you're doing, and you have the right tone and posture, but you set it up right, so it's end user driven, it's open-source community as the core of the event, and you're seeing end users that have contributed, they're now consuming, you have vendors coming in, but you set the nice playbook up, and the downstream benefits of that open-source core has impacted IT, developers, average developers, and this is the magic. And you guys don't take too many hard stands on things, you take a good enough stand on the enablement piece of it. This is a critical piece. Explain the rationale because I think this is a success formula. You don't go too far and say, here's the CNCF stack. >> Right. >> You pull back a little bit on that and let the ecosystem enable it. Talk about that rationale because I think this is an important point. >> Sure and I would say that one of the huge advantages that CNCF has had is that we came later after a lot of other projects. So our parent, the Linux Foundation, has been around for 15 years. We've been able to leverage all of their expertise. We've looked at some of the mistakes that OpenStack, and Apache, and IETF, and other giants who came before us did, and our aspiration has always been to make entirely new mistakes rather than to replicate the old ones. But as you mentioned end user is a key focus, so when you look at our community, how CNCF is set up, we have a governing board that's mainly vendors, it does have developer and other reps on it. We have our technical oversight committee of these nine experts, kind of like our supreme court, and then we have this end user community that is feeding requirements and feedback back to the other group. 
>> I want to ask you about the structure, and I think this is important because you guys have a great governance model, but you have this concept of graduation. You have Kubernetes, and it's really solid, people are very happy with it, and there's always debates in open-source as you know, but there's a concept of graduating. Anyone can have projects, and explain that dynamic. 'Cause that's, I've heard people say, oh that's part of the CNCF, and well it hasn't graduated, but it's a project. It's important as a laddering there, explain that concept. I think this is important for people to understand that you're open, but there's kind of a model of graduation. What does it mean? >> Sure and it, people have said, oh you mean they've graduated, so they've left now, right? Like the kids leaving the home. And it's definitely not that model. Kubernetes is still very much part of CNCF. We're happy to do it. But we think that one of CNCF's functions is as a signaling and a marketing to enterprise users. And we like the cliche of crossing the chasm where we talk about 2018 was really the year that Kubernetes crossed the chasm. Went from as early adopters who'd been using it for years and were thrilled with it but they actually jump over now to the early majority. I will say though that the late majority, the laggards, the skeptics, they're not using these technologies yet. We still have a ton of opportunity for years to come on that. So we say the graduated projects, which today is not just Kubernetes but also Prometheus and Envoy. Those are the ones that are suitable for really any enterprise company, and that they should feel confident these are very mature, serious technologies for companies of all size. The majority of our projects are incubating. Those are great projects, technically capable, companies should absolutely use them if the use case fits, but they're less mature. And then we have this other category of the Sandbox, 11 projects in there, and we say look, these are incredibly promising. If you are technical enough and you have the use cases, you absolutely should consider it, but they are less mature. And then our hope is to help the projects move along that graduation phase. >> And that's how companies start. Bloomberg's plan, I thinking jumping into Sandbox, they'll start getting some code in there that'll attract some people, they get their code, they don't have to come back after the fact and join in. So you have the Sandbox, you've got projects, you've got graduation, so. >> Now Bloomberg's a little bit unusual, and I like them as an example where they have, I don't know if they mentioned this, but almost a philosophy not to spend money on software. And of course that's great. All of our projects are free and open-source, and they're willing to spend money on people, and they hire a spectacular group of engineers, and then they support everything in-house. But in reality, the vast majority of end users are very happy to work with the vendor, including a lot of our members, and pay for some of that support. And so a Bloomberg can be a little bit more adventurous than many, I think. >> Dan, I wonder if you can provide a little bit of context. I hear some people look at really kind of the conformance and certification that the CNCF does. And I think in many ways learn from the mistakes of some of the things we've done in the past because they'll see there's so many companies, it's like, well there's too many distributions. 
Maybe you could help explain the difference between a distribution-- >> Sure. >> And what's supported and how that makes sense. >> And I think when you look back at, and we just had, CNCF just had our three-year birthday this week, we have a little birthday cake on Twitter and everything. But if you look at all the activities we've been involved in over those three years, KubeCon, CloudNativeCon, we have a service provider program, we've done a lot of marketing, helping projects, I think the certification and the software conformance is the single thing we've done that's had the biggest impact on the community. And the idea here is that we wanted a way for individual companies to be able to make changes to Kubernetes because they all want to, but to still have confidence that you could take the same workload and move it between the different public clouds, between the different enterprise distros or just vanilla Kubernetes that you download or different installers out there. And so the solution was an open-source software conformance project where anyone can download these tests and run them, and then a process where people upload the test results and say, yes my implementation is still conformant. I've made these changes, but I haven't broken anything. And we really have some amazing cases of our members, some of our biggest members, who had turned off APIs, maybe in their public cloud for good reasons. They said, oh this doesn't apply or we don't, but that's exactly the kind of thing that can cause incompatibility. >> Yeah, I mean that's critically important, and the other thing that is, what I haven't heard, is there's so many projects here. And we go to the Amazon show and it's like, I'm overwhelmed and I don't know what to do, and I can't keep up with everything. I'm actually surprised I don't hear that here because there are pockets, and this is multiple communities, not like a single monolithic community, so you've got, you know Envoy has their own little separate show and Operators has a thing on Friday that they're doing, and there's the Helm community and sometimes I'm putting many of the pieces together, but oftentimes I'm taking just a couple of the pieces. How do you manage this loosely coupled, it's like a distributed architecture? >> Loosely coupled is a key phrase. I think the big advantage we have is our anchor tenant of Kubernetes has its own gravitational field. And so from a compatibility standpoint, we have this, excuse me, certification program for Kubernetes and then all of the other projects essentially ensure they're orbiting around and they ensure that they're compatible with Kubernetes, that also ensures they're compatible with each other. Now it's definitely the case that our projects are used beyond just Kubernetes. We were thrilled with Amazon's announcement two weeks ago of commercial support for Envoy and talking about how one of the things they loved about Envoy is that it doesn't just work on Kubernetes, they can use it on their proprietary ECS platform or their regular EC2 environment as well. And that's true for almost all of our projects. Prometheus is used in Mesos, is used in Docker Swarm, is used in VMs, but I do think that having so much traction and momentum around Kubernetes just is a forcing function for the whole community to come together and stay compatible. >> Well you guys did a great job. That happened last year.
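For readers who want to see what the conformance workflow described above looks like in practice, the CNCF conformance tests are commonly driven with the open-source Sonobuoy tool. The sketch below simply wraps that CLI from Python; the tool choice and flags are assumptions about typical tooling rather than anything stated in the conversation, and it assumes a working kubeconfig pointed at the cluster under test.

```python
import subprocess

# Run the certified-conformance plugin against the current kubectl context
# and block until it finishes (this can take an hour or more on a real cluster).
subprocess.run(
    ["sonobuoy", "run", "--mode=certified-conformance", "--wait"],
    check=True,
)

# Pull the results tarball off the cluster; this archive is the artifact a
# vendor would submit when claiming conformance for their distribution.
retrieve = subprocess.run(
    ["sonobuoy", "retrieve", "."],
    check=True, capture_output=True, text=True,
)
print("results archive:", retrieve.stdout.strip())
```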
It's really, to me, an example of a historic moment in the computer industry because this is a modern version of enabling technology that's going to enable a lot of value creation, a lot of wealth creation, a lot of customer value, and it's all in a new way, so I think you guys really cracked the code on that and continued success. You've obviously had China going gangbusters, you're expanding, China by the way is one of the largest areas we've reported on Siliconangle.com and the CUBE in the past. China has emerged as one of the largest contributors and consumers of open-source given the rise of all the action going on in China. >> And we've been thrilled to see that, and I mean there was just the example yesterday where etcd is now the newest project, the newest incubating project in CNCF, and the co-creator of that and really the lead maintainer for it left CoreOS when it was acquired by Red Hat and is now with Alibaba. And he's originally from China. He is helping Alibaba, who's a platinum member of CNCF, who's been offering a certified Kubernetes service, but they're now looking at how they can move much more of their internal workloads over to it. JD.com has 25,000 servers. That's the second biggest retailer in China. >> It's a constituent. >> I was there six times last year. >> I know you were. >> I ran into you once in a hotel lobby. (laughing) >> What are you doing in China? It's huge, we're here. This is a big dynamic. This is new. I mean this is a big forcing function. >> And to have so much energy, and I do also want to really emphasize the two-way street, that it's not just Chinese companies adopting these technologies that started in the US. >> They're contributing. >> We were thrilled a month ago to have Harbor come in as an incubating project and that started in China and is now being used across the world. >> Dan, 2019, you've got three shows again, Barcelona, Shanghai, and San Diego. >> Exactly. >> Of course the numbers are going to be up and to the right, but what else should we be looking for? >> So I think the two, so definitely China, we're going to continue doing it there, the second relates to serverless, we're thrilled with the progress of our serverless working group. They have this new cloud event spec, we have all of the different major clouds participating in it. The third area that I think you're going to see us in, that is somewhat new, is looking at telcos. And our vision is that you can take a lot, most networking code today is done in virtual machines called virtual network functions. We think those should evolve to become cloud native network functions. The same networking code running in containers on Kubernetes. And so this is actually going to be our first time with a booth at Mobile World Congress in Barcelona in February. And we're going to be talking about-- >> Makes a lot of sense. IoT, over the top, a lot of enablement there. There are a lot of inefficiencies in those stacks. >> Yeah, and on the edge as well. >> Dan, thanks for coming out, I appreciate it. Again, you've done the work, hard work, and continue it, great success, congratulations. I know it's early days still but. >> I hope it is. At some date Kubernetes is going to plateau. But it really doesn't feel like it'll be 2019. >> Yeah, it definitely is not boring. (laughing) Even though we had much more, Dan. >> Dan Kohn, executive director of the CNCF. Here inside the CUBE, breaking it all down, again, another successful show.
Just the growth, this is the tsunami, it's the rise of Kubernetes and the ecosystem around it, creating value, the CUBE coverage, live here in Seattle. I'll be back with more coverage after this short break. I'm John Furrier with Stu Miniman. Be right back. (upbeat music)

Published Date : Dec 13 2018



Nirmal Mehta & Bret Fisher, Booz Allen Hamilton | DockerCon 2018


 

>> Live, from San Francisco, it's The Cube! Covering DockerCon '18. Brought to you by Docker and its ecosystem partners. >> Hey, welcome back to The Cube. We are live at DockerCon 2018 on a beautiful day in San Francisco. We're glad you're not playing hooky though if you're in the city because it's important to be here watching John Troyer and myself, Lisa Martin, talk to some awesome, inspiring guests. We're excited to welcome two Docker captains, that's right, to The Cube. We've got Nirmal Mehta, you are the chief technologist of Booz Allen. Welcome back to The Cube. And, we've got Bret Fisher, the author of Docker Mastery. Both of you, Docker captains. Can't wait to dig into that. But you're both speakers here at the fifth annual DockerCon. So Bret, let's talk, you just came off the stage basically. So, thank you for carving out some time for us. Talk to us about your session. What did you talk about? What was some of the interaction with the attendees? >> Well the focus is on Docker Swarm and I'm a assist admin at heart so I focus on ops more than developer but I spend my life helping developers get their stuff into production. And so, that talk centers around the challenges of going in and doing real work that's for a business with containers and how do you get what seems like an incredible amount of new stuff into production all at the same time on a container ecosystem. So, kind of helping them build the tools they need, and what we call a stack, a stack of tools, that ultimately create a full production solution. >> What were some of the commentary you heard from attendees in terms of... Were these mostly community members, were there users of container technology, what was sort of the dynamic like? >> Well you have, there's all sorts of dynamics, right? I mean you have startups, I think I took a survey in the room because it was packed and like 20% of the people in the room about were a solo DevOps admin. So they were the only person responsible for their infrastructure and their needs are way different than a team that has 20 or 30 people all serving that responsibility. So, the talk was a little bit about how do they handle their job and do this stuff. You know, all this latest technology without being overwhelmed and, then, how does it grow in complexity to a larger team and how do they sustain that. So, yeah. >> Bret, it's nice that the technology is mature enough now that people are in production, but what are some of the barriers that people hit when they try to go into production the first time? >> Yeah, great question. I think the biggest barrier is trying to do too much new at the same time. And, I don't know why we keep relearning this lesson in IT, right? We've had that problem for decades of projects being over cost, over budget, over timed, and I think with so much exciting new stuff in containers it's susceptible to that level of, we need all these new things, but you actually don't, right? You can actually get by with very small amounts of change, incrementally. So, we try to teach that pattern of growing over time, and, yeah. >> You mentioned like the one person team versus the multi-person team kind of DevOps organization. Does that same problem of boiling the ocean, do you see that in both groups? >> Yeah, I mean you have fundamentally the same needs, the same problem that you have to solve, but different levels of complexity is really all it has to do with and different levels of budget, obviously, right? 
So, usually the solo admin doesn't have the million dollar budget for all the tools and bells and whistles, so they might have to do more on their own, but, then, they also have less time so it's a tough row to hoe, you know, to deal with, because you've got those two different fundamental problems of time and money and people are using the most expensive thing. So, no matter what the tool is you're trying to buy, it's usually your time that's the most valuable thing. So how do we get more of our time back? And that's really what containers were all about originally was just getting more of our time back out of it and so we can put back into the business instead of focusing on the tech itself. >> Nirmal, your talk tomorrow is on empathy. >> Yes. >> Very provocative, dig into that for us. >> Sure, so it was actually inspired by a conversation I had with John a couple years ago on Geek Whisperers podcast and he asked the folks on that show, yourself included, asked if there was an event in my past that I kind of regret or taught me a lot. And it was about basically neglecting someone on my team and just kind of shoving them away. And, that moment was a big change in how I felt about the IT industry. And, what I had done was pushed someone who probably needed that help and built up a lot of courage to talk to me and I kind of just dismissed him too quickly. And, from there, I was thinking more and more about game theory and behavioral economics and seeing a lot of our clients and organizations struggle to go through a digital transformation, a DevOps transformation, a cultural transformation. So, to me, culture is kind of the core of what's happening in the industry. And so, the idea of my talk is a little bit of behavioral economics, a little bit of game theory, to kind of set the stage for where your IT organization is probably kind of is right now and how to use empathy to get your organization to that DevOps and to a more efficient place and resolve those conflicts that happen inherently. And, somehow tie that all together with Docker. So, that's kind of what my talk is all about. >> Nice, I mean what's interesting to me, Lisa, is that we do Cubes and there are many Cubes actually all across the country during conference season, right? And we talk to CEOs and VPs of very large companies and even today, at DockerCon, the word 'culture' and the talking about culture and process and people has come up every single interview. So, it's not just from the techies up that this conversation is going... this DevOps and empathy conversation is going on, it seems to be from the top down as well. Everyone seems to recognize that, if you really are going to get this productivity gain, it's not just about the tech, you gotta have culture. >> Absolutely, a successful transformation of an organization is both grassroots and top down. Can't have it without either. And, I think we inherently want to have a... Like, we want to take a pill to solve that problem and there's lots of pills: Docker or cloud or CICD or something. But, those tools are the foundational safety net for a cultural transformation, that's all that it is. So, if you're implementing Docker or Jenkins or some CICD pipeline or automation, that's a safety blanket for providing trust in an organization to allow that change in the culture to happen. But, you still need that cultural change. Just adopting Docker isn't going to make you automatically a more effective organization. 
Sorry, but it's just one piece and it's an important piece but you have to have that top down understanding of where you are now as an organization and where you want to be in the future. And understanding that this kind of legacy, siloed team mindset is no longer how you can achieve that. >> You talked about trust earlier from a thematic perspective as something that comes up. You know we were at SAP Sapphire last week and trust came up a lot as really paramount. And that was in the context of a vendor/customer relationship. But, to your point, it's imperative that it's actually coming from within organizations. We talk a lot about, well stuff today: multi-cloud--multi-cloud, silos-- but, there's also silos with people and without that cultural shift and probably that empathy, how successful, how big of an impact can a technology make? Are you talking with folks that are at the executive level as well as the developer level in terms of how they each have a stake and need to contribute to this empathy? >> Yeah, absolutely. So, the talk I'm doing is basically the ammunition a lower level person would need to go up to management and say, hey, you know this is where the organization is, this is what the IT department kind of looks like, these are the conflicts, and we have to change in order to succeed. And a lot of folks don't. They see the technology changes that they need. You know, adopting the new javascript framework or the new UX pattern. But, they might not have the ammunition to understand the business strategy, the organizational issues. But, they still need that evidence to actually convince a CTO or a CEO or a COO for the need to change. So, I've talked to both groups. From the C-level side, I think it comes from the inherent speed of the industry, the competitive landscape, those are all the pressures that they see and the disruptions that they are tackling. Maybe it's incumbent disruption or new startups that they may have to compete with in the future. The need for constant innovation is kind of the driver. And, IT is kind of where all that is, these days. >> That's great. Building on the concept of trust and this morning at the keynote, Matt Mckesson where they talked about trusting Docker, trusting Docker the company, trusting Docker the technology. Almost the very first words out of Steve Singh's mouth this morning were about community. And, I think community is one of the big reasons people do trust Docker and one of the things that brings them along. You guys are both Docker captains, part of a program of advocacy, community programs. I don't know, Bret, can you tell us a little bit about the program and what's involved in it? >> Yeah, sure. So, it's been around over two years now and it actually spawned out of Docker's pre-existing programs were focusing on speakers and bloggers and supporting them as well as community leaders that run meetups. And they kind of figured out that a key set of people were kind of doing two or three of those things all at once. And so, they were sort of deciding how do we make like super-groups of these people and they came up with the term Docker captain It really just means you know something about Docker, you share it constantly, something about a Docker toolset, something about the container tools. And that you're sort of... And you don't work for Docker. 
You're a community person that is, maybe you're working for someone that is a partner of Docker or maybe you're just a meetup volunteer that also blogs a lot about patterns and practices of Docker or new Docker features. And so, they kind of use the engineering teams at Docker to kind of pick through people on the internet and the people they see in the community that are sort of rising out of all the noise out there. And they ask them to be a part of the program and then, of course, we get nice jackets and lots of training. And, it's really just a great group of people, we're about 70 people now around the world. >> And yeah, this is global as well, right? >> Oh yeah, yep. It's one of my favorite aspects is the international aspect. I work for Booz Allen which is a more US government focused and I don't get to interact with the global community much. But, through the Docker captain program got friendships and connections almost on every continent and a lot of locations. I just saw a post of a Docker meetup in like, I think it was like Tunisia. Very, very out there kind of places. There was a Cuban one, recently, in Havana. The best connections to a global community that I've ever seen. I think one of the biggest drivers is the rapid adoption and kind of industry trend of containerization and the Docker brand and what it is basically gave rise to a ton of folks just beginners, just wanting to know what it's all about. And, we've been identified as folks that are approachable and have kind of a mandate to be people that can help answer those initial questions, help align folks that have questions with the right resources, and also just make it like a soft, warm, fuzzy kind of introduction to the community. And engage on all kinds of levels, advanced to beginner levels. >> It was interesting, again, this morning, I think about half the people raised their hands to the question, "is it their first year?" So, it still seems like the Docker, the inbound people interested in Docker is still growing and millions of developers all over the world, right? I don't know, Bret, you have a course, Docker Mastery, you also do meetups, and so I'm curious like what is the common pathway or drivers for new folks coming in, that you see and talk with? >> Yeah, what's the pathways? >> Yeah, the pathway, what's driving them? What are they trying to do? Again, are they these solo folks? >> Yeah, it's sort of a little bit of everything. We're very lucky in the course. We actually just crossed 55,000 students worldwide, 161 countries on a course that is only a year old. So, it kind of speaks to the volume of people around the world that really want to learn containers and all the tools around them. I think that the common theme there is I think we had the early adopters, right, and that was the first three or four years of Docker was people that were Silicon Valley, startups, people who were already on the bleeding edge of technology, whether it was hobbyist or enterprise. It was all people, but it was sort of the Linux people. Now, what we're getting is the true enterprise admins and developers, right. And that means, Microsoft, IBM mainframes, .Net, Java, you're getting all of these sort of traditional enterprise technologies but they all have the same passion, they're just coming in a few years later. So, what's funny is, you're meetups don't really change. They're just growing. 
Like what you see worldwide, the trend is we're still on the up-climb of all the groups, we have over 200 meetups worldwide now that meet once a month about Docker. It's just a crazy time right now. Everything's growing and it's like you wonder if it's ever going to stop, right How big are we gonna get, gonna take over the world with containers? >> Yeah, about 60% or more of all our meetups are completely new to Docker. And, it ranges from, you know, my boss told me about it so I gotta learn it or I found it and I want to convince other people in my organization to use it so I need to learn it more so I can make that case or, it's immediately solving a problem but I don't know how to take it to the next level, don't know where it's going, all that. It's a lot of new people. >> I get students a lot, college students that want to be more aggressive when they get in the marketplace and they hear the word 'DevOps' a lot and they think DevOps is a thing I need to learn in order to get a job. They don't really know what that is. And, of course, we don't even. At this point, it's so watered down, I don't know if anyone really knows what it is. But eventually, they search that and they come up with sort of key terms and I think one of those the come up right away is Docker. And they don't know what that is. But, I get asked the question a lot, If I go to this workshop or if I go the meetup or whatever, can I put that on my resume so I can get my first job out of school? They're always looking for something else beyond their schooling to make them a better first resume. So, it's cool to see even the people just stepping into the job market getting their feet wet with Docker even when they don't even know why they need it. >> It sounds like a symbiotic thought leadership community that you guys are part of and it sounds like the momentum we heard this morning in the general session is really carried out through the Docker captains and the communities. So, Nirmal, Bret, thanks so much for stopping by bringing your snazzy sweatshirts and sharing what you guys are doing as Docker captains. We appreciate your time. >> Thank you. >> Thank you. >> We want to thank you for watching The Cube. I'm Lisa Martin with John Troyer. We're live at DockerCon 2018. Stick around, John and I will be right back with our next guest.

Published Date : Jun 13 2018



Chris Brown, Nutanix | DockerCon 2018


 

>> Live from San Francisco, it's theCUBE! Covering DockerCon 18, brought to you by Docker and its ecosystem partners. >> Welcome back to theCUBE, I'm Lisa Martin with John Troyer, we are live from DockerCon 2018 on a sunny day here in San Francisco at Moscone Center. Excited to welcome to theCUBE Chris Brown, the Technical Marketing Manager at Nutanix. Chris, welcome to theCUBE! >> Thank you so much for having me. >> So you've been with Nutanix for a couple years, so we'll talk about Nutanix and containers, you have a session, Control and Automate Your Container Journey with Nutanix. Talk to us about what you're gonna be talking about in the session, what's Nutanix's role in helping the customers get over this trepidation of containers? >> Yeah, definitely, and it's, it's a 20 minute session, so we've got a lot of information to cover there 'cause we wanna go over a little bit about, you know, who Nutanix is from the beginning to end but, the main part I'm gonna be focusing on in that session is talking about how we, with our Calm product, can automate VMs and containers together and how we're moving towards being able to, you know, define your application in a blueprint and understand what you're trying to do with your application. You know, one of the things I always say is that nobody runs SQL because they love running SQL, they run SQL to do something, and our goal with Calm is to capture that something, what it depends on, what it relies on. Once we understand what this particular component is supposed to do in your application, we can change that, we can move that to another cloud, or we can move it to containers without losing that definition, and without losing its dependence on the other pieces of the infrastructure and exchange information back and forth. So we're talking a little bit about what we're doing today with Calm and where we're going with it to add Kubernetes support. >> Chris, we're sitting here in the ecosystem expo at DockerCon and your booth is busy, there's a lot of good activity. Are people coming up to you and asking, do they know Nutanix, do they understand who you are, do they just say oh you guys sell boxes? You know you're both a, you're a systems provider, you're a private cloud provider, and a hybrid-cloud provider, do people understand that, the crowd here, and what kinda conversations are you having? >> It's actually really interesting 'cause we're seeing a broad range of people, some customers are comin' up, or some people are coming up that they don't reali--they don't know that other pieces, places their company uses Nutanix, but they wanted to learn more about us, so they've got some sort of initiative that you know, a lot of times it is around containers, around understanding, you know, they're starting to figure out, you know, how do we deploy this, how do we connect? You know, we've got something we wanna deploy here and there, how do we do that in a scalable way? But we also have some that have no idea who we are and just comin' up like so you've got a booth and some awesome giveaways, (laughing) what do I have to do to get that, and what do you do? And you know, I really kinda summarize it as two main groups of people that I've seen is, one of 'em is, the people who've been doing containers for forever, they know it, they've been doing it, they're very familiar with the command line, they're ret-- any GUI is too much GUI for them.
And then we've got the people who are just getting started, they've kinda been told hey, containers are coming, we need to figure out how to do this, or we've got, we need to start figuring out our container strategy. And so they're here to learn and figure out how to begin that. And so it's really interesting because those, the ones that are just getting started or just learning, we obviously help out a ton because the people who came before had to go through all the fire, all the configuration, all of the challenges, and figure out their own solutions, whereas we can, now we kinda come in, there's a little bit more opinionated example of how to do these things. >> So DockerCon, this year is the fifth DockerCon, they've got between five thousand and six thousand people, I was talking with John earlier and Steve Singh as well, about how impressed I was when I was leaving the general session, it was standing room only, a sea of heads so they've got, obviously developers here right, sweet spot, IT folks, enterprise architects, and execs, you talked about Nutanix getting those two polar opposite ends of the spectrum, the container lovers, the ones who are the experts, and the ones going I know I have to do this. I'm curious, what target audience are you talking to that goes hey I'm tasked with doing this, are those developers, are those IT folks, are you talking with execs as well, give us that mix. >> For the most part they are IT folks, your traditional operators who are trying to figure out this new shift in technology and we have to talk to some developers, and it's actually been interesting to speak with developers because you know, in general that's not, that hasn't been Nutanix's traditional audience, we've sold this product called infrastructure to date. But developers, the few developers I've talked to have gotten really receptive and really excited about what we can do and how we can help them do their job faster by getting their IT people on board but for the most part it'd be traditional IT operators who're looking at this new technology and you know, givin' it kind of a little squinty eye, trying to figure out where it's going, because at the end of the day, with any shift in IT, there's never a time where something is completely sunset, I mean people are still using mainframes today, people will be using mainframes forever, people are just starting their virtualization journey today, they're just going from bare metal to VMs, so, and then even with that shift, there's always something that gets left behind, so, they're trying to figure out how can we get used to this new container shift because at the end of the day not everything is gonna be containerized because there's just simply some things that won't be able to or they'll scope out the project and then it'll end up falling by the wayside or budget will go somewhere else. So they're trying to figure out how they can understand the container world from the world that they come from, the VM-centric world, and then, you know, it's really interesting to talk to them and show them how we're able to bring those two together and give you, not only bring the container journey up another step, but also carry your VMs along the way as well. >> Chris, Nutanix is at a, the center of several different transitions, right, both old school hardware to kind of hyperconverged, but now also kind of private hybrid-cloud to more kind of multi-cloud, hybrid-cloud.
When we're not at DockerCon, so when you're out in the field, how real is multi-cloud, how real are containers in a normal enterprise? >> Definitely, so, multi-cloud is a very hot topic for sure, everyone, there's no company, no IT department that doesn't have some sort of cloud strategy or analyzing it or looking at it. The main way that we get there, or one of the core tools we have is Calm once again, so, and I'm obviously biased because that's my wheelhouse, right, in marketing, so I talk about that day in day out, but, with Calm you can add, we support today AHV and ESXi both on and off Nutanix, as well as AWS, AWS gov cloud and GCP, and Azure's coming in down the line, that's where Kubernetes will come in as well, so we see a lot of people looking at this and saying hey you know, we do wanna be able to move into AWS, we do wanna be able to move into GCP and use those clouds or unify them together, and so Calm lets us do that. There's a couple of other prongs to that as well, one of them is Beam, Nutanix Beam, which is a product we announced at .NEXT last month, which is around multi-cloud cost optimization, Beam came from an acquisition of Botmetric--the company was called Minjar, I'm probably saying that horribly wrong, but they made a product called Botmetric which we've rebranded and are integrating into the platform as Nutanix Beam. So what that allows you to do is, you can, it's provided as a SaaS service, so you can go use it today, there's a trial available, all that, you give it AWS credentials and it reaches out and takes a look at your billing account and says hey, we noticed that these VMs are running 50% of the time at no capacity, or they're not being used at all, you can probably cut that down, shrink these and save it, or hey we noticed that in general you're using this level, this baseline level, you should buy these in reserved instances to save this much per month. And it presents all that up in a really easy to use interface, and then, depending on how you wanna use it, you can even have it automatically go and resize your VMs for you, so it can say, hey you've got a T2 medium or an M2 medium running, it really would make a lot more sense as a, you know, M2 small. You can, it'll give you the API call, you can go make it on your own, or you can have, if you give crede-- authorization of course, it can go ahead and run that for you and just downsize those and start saving you that money, so that's another fork of that, the multi-cloud strategy. And the last one is one of the other announcements we made around last month which was around--excuse me, Xtract for VMs, so Xtract is a portfolio of products, we've got Xtract for DBs where we can scan your SQL databases and move them into ESXi or AHV, both from bare metal, or wherever the SQL database is running, Xtract for VMs allows us to scan the ESXi VMs, and move them over to AHV. And then, we're taking Xtract for VMs to the next step and being able to scan your AWS VMs and pull them on, back on-prem, if that's what you're looking for as well, so that's right now in beta and they're working on fine-tuning that. Because at the end of the day, it's not just enough to view and manage, we really need to get to someplace where we can move workloads between, and put the workload in the right place.
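To make the kind of right-sizing check described above concrete, here is a minimal sketch of the underlying idea: walk the running instances in an AWS account and flag the ones whose average CPU has been close to idle. This is only an illustration of the pattern, not Beam's actual implementation; the region, the two-week lookback, and the 5% threshold are arbitrary assumptions.

```python
import datetime
import boto3

# Assumes AWS credentials are available in the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < 5.0:  # arbitrary "mostly idle" threshold
            print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
                  f"average CPU {avg_cpu:.1f}% over 14 days, candidate to downsize")
```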
Because really with IT, it's always a balance of tools, there's never one golden bullet that solves every problem, every time a new project comes out you're trying to choose the right tool based on the expertise of the team, based on what tools are already in use, based on policy. So, we wanna be able to make sure that we have the tool sets across, that you can choose and change those choices later on, and always use the right thing for the particular application you're running. >> Choice was a big theme this morning during the general session where Docker was talking about choice, agility and security. I'm curious with some of the things that were announced, you know they're talking about the only multi-cloud, multi-OS, multi-Linux, they also were talking about, they announced this federated, containerized application management saying hey, containers have always been portable but management hasn't been. I'm curious what your perspectives are on some of the evolution that Docker is announcing today, and how will that help Nutanix customers be able to successfully navigate this container journey? >> Definitely. And--(clears throat) you know federation's critical, being able to, container management in general is always a challenge, one of the things that I've heard time and time again is that getting RBAC to work for Kubernetes has always been very difficult. (laughs) And so, getting that in there, getting, that is such a basic feature that people expect, getting the ability to properly federate roles or federate out authentication is huge. There's a reason that SAML took the world by storm, it's that nobody wants to manage passwords, you wanna rely on some external source of truth, being able to pull that in, being able to use some cloud service and have it federated against, having Docker federated against other pieces, is very important there. I might've gone way off there, but whatever. (laughing) >> No, no, absolutely. >> And then, the other piece of it is that we, with a multi-cloud, with the idea of it doesn't matter whether you're running on-prem or in the cloud or, that is what people need, one of the true promises of containers has always been the portability, so seeing the delivery of that is huge, and being able to provision it on-prem, on Nutanix obviously because that's who I'm here from. (laughing) But being able to provision to the cloud and bring those together, that's huge. >> Chris, you've talked about Kubernetes a couple times now, obviously a big topic here, seems to be kind of the emerging de facto application deployment configuration for multi-cloud. What's Nutanix doing with Kubernetes? >> Yeah, so I've definitely, Kubernetes is, it's really in many ways winning that particular battle, I mean don't get me wrong Swarm is great, and the other pieces are great, but, Kubernetes is becoming the de facto standard. One of the things we're working on is bringing containers as a service through Kubernetes, natively on Nutanix, to give you an easy way to manage, through Prism, containers just the way you manage VMs, manage Kubernetes clusters, and you know it's, it's really important that that's, that is just one solution, because we, there's as many different Kubernetes orchestration engines as you can name, every, any name you bring in, so that's my-- >> It's like Linux, back in the day, there are a lot of different distributions or there are a lot of different ways to consume Kubernetes. >> Exactly.
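The RBAC pain point mentioned above is easiest to see with a small example. Below is a minimal sketch, using the official Kubernetes Python client, of the two objects teams typically end up writing by hand: a Role that allows read-only access to pods, and a RoleBinding that grants it to a group asserted by an external identity provider. The dev namespace and dev-team group name are made-up placeholders.

```python
from kubernetes import client, config

# Assumes a local kubeconfig with rights to manage RBAC in the "dev" namespace.
config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Role: read-only access to pods in the "dev" namespace.
rbac.create_namespaced_role(namespace="dev", body={
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],
    }],
})

# RoleBinding: grant that Role to a group coming from SSO/SAML federation.
rbac.create_namespaced_role_binding(namespace="dev", body={
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "dev"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Role",
        "name": "pod-reader",
    },
    "subjects": [{
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Group",
        "name": "dev-team",  # hypothetical group name from the identity provider
    }],
})
```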
And so, we wanna be able to bring an opinionated way of consuming Kubernetes to the platform natively, just as a, so it's a couple of clicks away, it's very easy to do. But that's not the only way that we're doing it, we also do have a partnership with Docker where we're doing things like deploying Docker EE through Calm, or Docker, it's of course all sorts of legalese but, they're working on that so it's natively in everyone's Prism Central, you can just one-click deploy Docker EE, we have a demo running at our booth deploying Rancher using Calm as well, because we wanna be able to provide whatever set of infrastructure makes the most sense for the customer based on, this is what they've used in the past, this is what they're familiar with, or this is what they want. But we also want to offer an opinionated way to deliver containers as a service so that those of you that don't know, or are just trying to get started, or that's what they're looking for, this, when you've got a thousand choices to make, everyone's gonna make slightly different ones. So we can't ever offer one, no one can offer the true, this is the only way to do Kubernetes, we need to offer flexibility across as well. >> One of the words we hear all the time at trade shows is flexibility. So, love customer stories, as a customer marketing person, I think there's no greater brand validation you can get than the voice of the customer, and I was looking on the Docker website recently and they were saying: customers that migrate to Docker Enterprise Edition are actually reducing costs by 50%, so, you're a marketing guy, what're some of your favorite examples of customers where Nutanix is really helping them to just kill it on their container journey? >> Yeah, so, there's a, wish I'd thought of this sooner, I shoulda. (laughing) No, but we have a, one of our customers actually, I, this always brings a smile to my face 'cause they came and saw us last year at the booth, they're one of our existing long time customers, and they're looking to adopt Docker. They came up and we gave 'em a demo, showed them how all the pieces were doing all of the, and he's just looking at it and he's like man, I need this in my life right now, and it was mostly a demo around Docker EE, using the unified control plane, and showing off, using Nutanix drivers showing how we can back up the data and protect individual components of the containers in a very granular fashion. He's like man I need this in my life, this is incredible, and he went and grabbed his friend, ran him over, and was like dude, we're already using Nutanix, look what they can do! And the perfect example of the two kinds of customers, this guy goes like hold on a second, jumps on the command line, like oh yeah I do this all the time from there. (laughing)
I've been tryin' to hand off some of the gear that I manage off to another person, and was like oh you just type out all these commands, and they're like I have no idea what's going on here. (laughing) And so, seeing the customers be able to, to understand what they're more in depth coworkers have done in a gooey fashion, that's just really, that makes a lot of sense to me and it's, I like that a lot. >> It's great. >> Are you seeing any, and the last question is, as we wrap up, some of the, one of the stats actually that was mentioned in the Docker press release this morning about the new announcements was, 85% of enterprise organizations have multi-cloud, and then we were talking with Scott Johnston, their Chief Product Officer, that said, upwards of 90% of IT budgets are spent on keeping the lights on for existing applications, so, there's a lot of need there for enterprises to go this road. I'm wondering, are you seeing at Nutanix, any particular industries that are really leading edge here saying hey we have a lot of money that we're not able to use for innovation, are you seeing that in any specific industries, or is it kinda horizontal? >> I, to be honest, I've seen it kind of horizontally, I mean I've had, I've spoken to many different customers, mostly around com because, but, and they come from all different walks of life. I've seen, I've talked to customers from sled, who've been really excited about their ability to start better doing hadoop, because they do thousands of hadoop clusters a year for their researchers. I've talked to, you know in the cloud or on-prem, or across. I've talked to people in governments, I've talked to people in hospitals and, you know, all sorts of-- >> I can imagine oil and gas, some of those industries that have a ton of data. >> Yeah and it's actually, the oil and gas is really fascinating because a lot of times they, for in a rig, they wanna be able to use compute, but they can't exactly get to a cloud, so how do you, how do you innovate there and on the edge, without, how do you make a change in the core without making it on the edge, and how do you bring those together? So it's, there's really a lot of really fascinating things happening around that, but, I haven't noticed any one industry in particular it's, it's across, it's that everyone is, but then again, by the time they get to me, it's probably self selected. (laughing) But it's across horizontally, is that everyone is looking at how can we use this vast storage, I just found out this is already being used in my environment because it's super easy, how do I, how do I keep a job? (chuckles) Or how do I adopt this and free up my investments in keeping the lights on into innovation, how do I save time, how do I-- Because one of the things that I've noticed with all of this cloud adoption or container adoption all of that is that many times a customer will start making this push, not always from a low level, maybe from a high level, but, they start making this push because they hear it's faster and better and that it'll just solve all their problems if they just start using this. And, because they rush into they don't often they don't solve the fundamental problems that gave 'em the issue to begin with, and so they're just hoping that this new technology fixes it. So, now there's, I am seeing some customers shift back and say hey, I do wanna adopt that, but I need to do it in a smart way, 'cause we just ran to it and that caused us problems. 
>> Well it sounds like with all the momentum, John, that we've heard in the keynote, the general session this morning, and with some of the guests, you know, I think even Steve Singh was saying only about half of the audience is actually using containers so it's sounds like, with what you're talking about, with what we've heard consistently today, it's sort of the tip of the iceberg, so lots of opportunity. Chris thank you so much for stopping by theCUBE and sharing with us all the exciting things that are going on at Nutanix with containers and more. >> Thank you so much for having me, it was a lot of fun. >> And we wanna thank you for watching theCUBE, Lisa Martin with John Troyer, from DockerCon 2018 stick around we will be right back with our next guest. (bubbly music)

Published Date : Jun 13 2018



Steve Singh, CEO, Docker | DockerCon 2018


 

>> Live from San Francisco, it's theCUBE, covering DockerCon 18. Brought to you by Docker and it's ecosystem partners. >> Welcome back to theCUBE's coverage of DockerCon 2018 in beautiful San Francisco. It's a stunning day here. We're at Moscone West, I'm Lisa Martin with John Troyer. Very honored to welcome to theCUBE, for the first time, the CEO of Docker Inc., Steve Singh. Welcome, Steve. >> Hi Lisa, very nice to meet you. John, how are you? >> So the general session this morning, standing room only between five and six thousand people. I gotta say a couple things that jumped out at me. One, coolest stage entrance I've ever seen with this great, if you haven't seen it from the livestream, this, like, 3D Golden Gate Bridge and I loved that and I loved the demo of Docker Desktop that your kids did, fueled by Mountain Dew, which actually single handedly got me through college here in San Francisco. So, the momentum that you guys, it was kicking off with a bang. >> Yeah, I, look, I've got a great team and one of the things we wanted to communicate this morning is that you're seeing a massive transformation in the world of software. And this transformation is enabling every company in the world to think about their business in a new light. To think about how their business meets customer needs in a way that's much more personal, in a way that delivers more value. And this is the beauty of where Docker is, right, we have a chance to help literally every company in the world. And that's the part, honestly, that gets me excited, is, like, how do you help other people go create amazing businesses? And so this is, I couldn't be more happy to be at Docker. >> Steve, keying on that, one of the customers on stage today, McKesson. >> Yeah. >> And I loved Rashmi Kumar came out and talked about future-proofing for applications, their infrastructure, their applications in partnership with Docker. >> Yeah. And that implies a certain amount of trust that they have in Docker and Docker's technology platform and in partnering with you. You come from a, so you've been at Docker for about a year now, right? Came in as CEO. Docker is still a small company, a couple hundred folks but punching way above its weight with a huge community impact. How do you, and, you know, you've worked with the biggest companies in the world, how do you come in and establish that trust and help reassure them that you're gonna be a good partner for them and, kinda, what are you seeing with your customers? >> It's a great question, John, and look, there's maybe two or three pieces of how we think about that. The first thing, trust is very human, right? You've gotta know that you're walking into a situation as a vendor and as a customer but really as partners. And you're trying to solve a problem together. Because the reality is, this transformation that companies are going through is the first time in 40 years that this kind of transformation has happened. Second is, the technology stack is still in the early stages. Now, it's incredible and it enables amazing things, but it's still in the early stages. So both of us have to walk into the relationship knowing that, you know what, sometimes it won't go perfect, but guess what? We're gonna be, you know, if it doesn't go perfect we're gonna honor everything we ever committed to you and the same thing on the customer's side. They look at it and say, "I may have actually described my needs differently than what they actually are." And that's what a real partnership is. That's number one. 
Number two is, trust is driven by culture. And one of the things that I love about Docker is that we see our place in the world but we wanna make sure the customer always has choice. We wanna make sure that if we do a great job the customer will choose to work with us. If we don't, they should have the choice to go somewhere else. And that's what our platform enables, is the choice to be able to work with anybody you'd like to work with, whether you're the developer or you're an operator or you're an IT, I'm sorry, an architect, or the executive. The other piece around this is that part of the value of Docker is it's not just the 400 people of our company, right? There's 5,000 members of our community that are adding value to our community. One of the things that I wanna make sure we do for our community is help them not just innovate on this incredible platform but how do we help them take their innovations to market? And so that's part of the ethos of our company. >> One of the things that you talked about this morning that I thought was really compelling was, you said software innovation used to be, for the last 40 years, it's been driven by tech companies. That's changing. You talked about distributed innovation and distributed consumption. How is Docker helping to, culturally, I don't wanna say instill, but helping to influence, maybe, organizations to be able to distribute innovation and be able to share bi-directionally? >> Yeah, so, a great question, Lisa. So, first of all, is there's a cultural change within companies. When you think about the next generation or the next 40 years being, software being driven from non-technology companies. First of all, we're seeing that. Second is that it requires a cultural change within the business but that change is critical 'cause in the absence of becoming more of a software company your business is gonna be under threat, right? From the competing business. Look at what Netflix has done in media compared to every other media company. That same example applies in every single industry. Now, the way that we help enable that software transformation is to provide a platform that is so easy to use that it doesn't require a lot of training. Now this is complicated platforms, so, yes you have to be a fantastic developer or an IT professional but our job is to take complicated technology like container management software, orchestration layers like Swarm or Kubernetes, service mesh, storage networking, all of those, and make it so simple and easy to use that your IT department can say, "I can use this platform to effectively future-proof your company," right? So, how do you have a platform that you can build every application on, take all of you legacy applications on, run it, and then run it anywhere you like. >> I think that's been one of the through lines for Docker since the very beginning, that developer experience, right? >> Yes. >> And what's been interesting in Docker's development was, I think for both inside and outside, is kind of, what is Docker Inc, and the project versus the company, what is it selling, what's the commercial aspect here? I think, I kind of think back to my experience at BMWare, where there was an enterprise side and then a huge install base of workstation folks. And it's even stronger with Docker because actually now with Docker Desktop as an application development environment or a, you know, I don't wanna, not quite development environment but, you know, the one you announced today with Docker Desktop. 
That's an even more valuable through line into the Enterprise Edition. >> Yeah. >> But I don't, so, I guess where I'm heading, Steve, is, can you talk a little bit about the commercial situation? Docker EE as the flagship platform. >> Yeah, of course. >> And, kind of, where we are in the maturity journey with customers right now, it's real and important. >> Absolutely, John, but you're bringing up a great point within this. Look, we're both, we're an enterprise software company and we're this incredible community where innovation is being brought in by every member of the community. And there's nothing in the world that says you can't do both. This idea that you're one company versus another, this is nonsense, alright? It's a very narrow view of the world. In fact, I would argue that, more and more, companies have to think about that they have multiple people that they serve. Multiple constituents that they serve. In our case we serve the Enterprise IT organization and we also serve developers. And developers are a critical part, not just of our community, that is the life of every company going forward. Which is why we're so excited about this. That's the life of every company. So, Docker Desktop, the reason we're so excited about it is, first of all, it is the easiest way to engage with Docker, to build applications. And then we feel like there's a lot more innovation that we can actually deliver within Docker Desktop. Alright, so a million new developers joined on Docker Desktop this year. In fact, we're growing about seven or eight percent month over month on that. And so you should expect over the next year another million will be on Docker Desktop. But it's incumbent upon us to say, the only way that we continue to earn the trust of that portion of our constituents, that of the developer community, is to make sure we're innovative, to make sure we're open to allow others to innovate on top of us. >> I'd love to, kind of, explore that a little bit. So, in terms of innovation, you know, we know that the companies that have the ability to aggressively innovate, and to do that they have to have the budget, are the ones that stay relevant and that are the most competitive. But I think I saw some stats, and I think Scott Johnson said that close to 90 percent of IT budgets are spent keeping the lights on. So you have very little dollars to actually drive innovation. So when you're talking with customers, and you said you just met with 25 of Docker Inc.'s biggest customers just this morning, are you talking to both the developer guys and girls as well as the C-suite? >> Yeah. >> What is, how are you connecting and then, maybe, is it a conversation to enable the developers to be able to sell the value up the stack or is it vice versa? >> A couple of things here, so, first of all, John, I didn't answer part of your question, which is the growth in our Enterprise customer base. We've literally doubled it year over year, right? So, more than 500 Global 10,000 companies that are using Docker to run their applications and to manage their applications. The way that we engage with our customers is literally across the entire constituents of that organization, right? A developer by themselves, as genius as that group of people are, you can't deliver the application. And delivering the application is just as important as building it. And so the IT organization, the ops organization is critical. And then there's gotta be an overriding objective. What is it we're trying to do?
How do we transform ourselves into a software company? You think about, think about just for example, Tesla, right? When you have a company, and I realize Tesla's stock goes up and down, they're always in the news, but when you have a company that's worth more than some of the biggest automotive companies in the world, you have to ask yourself why. Well, part of the reason why isn't just the fact that we've got an electric vehicle that's better for the environment. Part of it is, it's really as much a software company as it is an automotive company. They have incredible amounts of data about how we use our cars, where we go, and in fact the Tesla cars are actually interconnected. And so, that brings a perspective in how you build cars and how they're gonna be used and how they're gonna be consumed that's radically different than if you're just an auto manufacturer. Now, look, Ford and GM and Volvo are all really smart, great companies and they're quickly moving toward themselves being software companies. >> Steve, can you talk a little bit about ecosystems? Microsoft, on stage this morning, a long partnership with them but also here at the show, right, enterprise folks, Dell and Accenture and I'm just looking down the list as well as Google and Amazon, right? So, you need to be partnering with a lot of folks to make all this work. How are you approaching that? >> John, part of the reason for that is, let's start with a simple premise, is something this large, alright, you can't possibly innovate fast enough on your own, alright? There's seven billion amazing people on this planet. The only way you can really drive mass-scale global innovation is you have to be open, right? I'm literally a guy that was born in a mud house in India, so I certainly appreciate the opportunity to participate in the rest of the world's economy. So we have to be open to say, anybody that wants to contribute, can. Now, obviously we think that contribution has to be within an ethos, right? If your definition of contribution is how do you help your own business, that's not good enough. You have to look at this and say, there has to be choice, in our view, choice, security and agility. So, how do we deliver those values or that ethos to our customers? And if you're willing to do that, man, we want to partner with everybody in this space. >> Yeah, I, sometimes I despair of the tech press, although I consume a lot of it, and if I never have to read another Swarm versus Kubernetes article again I would be happy. But Kubernetes is all over the keynote, and it seems like Docker, you all have embraced it and in fact are supporting it in very innovative ways with the cloud providers. In terms of ecosystem can you talk a little bit about-- >> Yeah, well, part of the value of Docker is we simplify very complex things and make it available to our customers to consume with little training, little understanding of the underlying deep technology. And the other part is that it comes back to this idea that innovation will happen everywhere. Why should we view the world as it's our solution or, you know, nobody's? That's nonsense, right? Kubernetes is a fantastic orchestration entity. Why shouldn't it be integrated into the Docker container platform? And so, as we did that, guess what happened? Our customers, all they saw was, instead of conflict they saw the opportunity to work together. In fact it's been amazing for the growth in our business, that's why we doubled year over year.
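Singh's point about Kubernetes being integrated into the Docker platform rather than competing with Swarm is easier to see with a concrete sketch. In the Docker EE 2.0 and Docker Desktop builds of that era this surfaced as an orchestrator you could select per stack; the sketch below assumes a CLI from that period that still accepts the --orchestrator flag (it was removed in later releases), and the stack name "web" and the nginx image are illustrative, not anything mentioned in the interview.

# docker-compose.yml (illustrative): one stack definition for either orchestrator
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 3

# Deploy to Swarm, the default orchestrator:
docker stack deploy -c docker-compose.yml web

# Deploy the identical stack to the bundled Kubernetes instead, assuming a
# Docker EE 2.x / 18.x-era build that still supports orchestrator selection:
docker stack deploy --orchestrator kubernetes -c docker-compose.yml web

Either way, the artifact the developer owns is the same compose file, which is the practical meaning of customers seeing choice rather than conflict between the two orchestrators.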
Now, collaboration is essential, and we were talking with Scott Johnson a little bit earlier today about the internal collaboration but also the external collaboration with customers. You talked about partnerships, I think that the MTA program, the Modernization of Traditional Apps, launched about a year ago with Avanade, Cisco, HPE and Microsoft. Tell us a little bit about that, probably around the same time that you came to the helm. You're seeing, you know, customers like Visa, PayPal as part of this program, be able to transform and go to the container journey. >> Yeah, and Lisa, this speaks to an observation you made a few minutes ago about the fact that, you know, 85, 90% of IT budgets are fixed before you even walk into the year. So, look, the Docker platform can be used for any kind of application. Legacy apps, next generation apps that run in the data center, next generation apps that run on edge devices. But if you accept that 90% of the apps that sit within a company are all legacy apps, well, guess what, that's where their cost is. And then if you marry that to the fact that every CIO has this problem that I don't have a lot of money that's free in my budget. Well, how do we help solve that? And the way we chose to solve it is this Docker MTA solution. Modernizing Traditional Apps. Take your traditional apps, run 'em on the Docker platform, run 'em on any infrastructure you like, cut your app and infrastructure management costs in half. Now, then take that savings and then apply it towards innovation. This is why it resonates with CIOs. I mean, as much as they may love Docker and they may love us, they have a business to serve and they're very, very practical in how they think about, you know, going about their business. >> So with that approach, thanks John, how receptive were those enterprise CIOs to going, "You're right, we've gotta start with our enterprise apps." They don't have the luxury of time, of ripping out old infrastructure and building them on containers or microservices architectures. And these are often mission-critical applications. Was that an easy sell, was that, tell me about that. >> (laughs) Well, nothing's easy, but the reality is, is that, they got it quickly, right? Because it speaks directly to their pain point. And what I'm very proud of with my team is not only were we able to deliver a great product for MTA but we're also helping our customers actually make sure they can migrate these apps over. But what's been really a positive, you know, kind of a signal we've seen, that's still the early stages, is that as our customers are moving their legacy apps to Docker and running 'em on new infrastructure, sometimes public cloud, and cutting costs, they're starting to take that cost savings and actually applying it to their next generation apps. So they're now using Docker for new apps. And so that is, that's the benefit of when you really try to solve the problem the way the customer wants to consume it. >> So, Steve, the user conference, very energizing, right. >> Yeah. >> Already the energy's been good here, you've been doing trainings and certifications, there's people behind us, everyone's talking, so that kind of in some ways sets the tone for the year, so as you and your team go back to the office after this week, you know, what are you looking to do and what can we expect out of Docker? >> I'll just speak to two things. First of all, there's so much innovation we still have to deliver.
If anything, you know, I would say my team will tell me I might be pushing a little hard. But you know what, this is the fun, you only have x number of years in life and you should make the most of it. So we're really excited about new apps, we're excited about secure edge apps. We're excited about, I don't know if you saw the demo this morning, of Armada, which allows you to run any app on any operating system, on any infrastructure, all from a single pane of glass. Our customers love that and they're very excited about that. That said, you know, this is a, it's a big test. We have a huge opportunity to welcome a lot of other companies, so when you walk around and see 5,000 people that see amazing opportunity, not just for Docker, for themselves, right? That's the secret part of Docker that I love. We're creating jobs that didn't exist before, right? I mean, you see kids coming out of college now getting Docker skills and they're using that to grow their IT profession. In fact, I was just at i.c.stars, this is an amazing organization in Chicago that helps individuals who've been displaced in the workforce learn the IT skills required to come back to the workforce and really help run internal IT organizations. Guess what they're learning? They're learning Docker. So that's, these are the kind of things that get us excited. >> And that's essential for enterprise organizations who, that's one of the challenges they face, was, you know, modernizing the data center, which they have to do, but then it requires new skill sets, maybe upskilling, so it's exciting to hear that you're seeing this investment in people that have an opportunity, the proclivity to actually learn this technology. >> Yeah, this is, we are happy because we help customers but we also create amazing new jobs that, you know, certainly our community can still benefit from. >> So, last question, the three themes that came out of your session, and really the general session this morning, were choice, agility and security. Are those the three pillars upon which you believe Docker sits as really competitive differentiators? >> Amen, amen, number one, but it's also our values, right? This is rooted in our values, and when a company performs best is when their values show up in their products. Because then you're never lost, you'll always know what you're focused on. And you know, when I ran Concur, we had this vision, north star, called The Perfect Trip. And our objective was to always go create a delightful business trip experience. And for Docker I wanna make sure that we have a north star. And our north star is our values, and they have to translate directly to what actually helps the customer. >> Love that, the north star. Well, hopefully theCUBE is the north star of modern tech media. Steve, thanks so much for stopping by. >> Thank you, it's wonderful to meet you. >> It was great to meet you as well, and congratulations on the big success. >> Thank you. >> We look forward to hearing-- >> Thank you, Lisa, thank you, John. >> What's coming out in the next year. >> Thank you. >> And we wanna thank you for watching theCUBE, I'm Lisa Martin with John Troyer, today live from San Francisco at DockerCon 2018. Stick around, we'll be back after a short break. (upbeat music)
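For readers who want to see what the MTA motion Singh describes above looks like in practice, here is a minimal, hypothetical sketch: a legacy Java application packaged as a WAR is wrapped in a container image without touching its code, then run on whatever infrastructure the Docker platform manages. The Tomcat base image, file names and registry address are assumptions for illustration, not details from the interview.

# Dockerfile: containerize the legacy app as-is, no code changes
FROM tomcat:9.0
COPY legacy-app.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080

# Build once, then run it on any Docker host, on-prem or public cloud:
docker build -t registry.example.com/legacy-app:1.0 .
docker push registry.example.com/legacy-app:1.0
docker run -d -p 8080:8080 registry.example.com/legacy-app:1.0

The cost argument in the interview comes from consolidating many such containers onto shared infrastructure, not from changing the applications themselves.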

Published Date: Jun 13, 2018
