
Search Results for Kubernetes Service:

Chris Rosen, IBM Kubernetes Service | KubeCon 2018


 

(upbeat techno music) >> Covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, The Cloud Native Computing Foundation, and its ecosystem partners. (upbeat techno music) >> Okay welcome back everyone, we're live here in Seattle for KubeCon 2018, CloudNativeCon, I'm John Furrier with theCUBE coverage, three days. Our next guest is Chris Rosen, who's the Program Director for Offering Management for Kubernetes, IBM's Kubernetes Service. Chris, welcome to theCUBE, thanks for joining us. >> Thank you very much, glad to be here. >> We always love covering IBM. IBM Think is coming up this year. It's going to be in San Francisco. Want to get that out there because we're psyched it's in our backyard. It's always been in Vegas. We've been covering IBM's events for a long time. We've seen the evolution of Cloud, you know, Bluemix, SoftLayer all coming together. Kubernetes, actually the timing of Kubernetes couldn't have been better. >> Absolutely. >> With all the software investments in Bluemix, all the customers that you guys have, now with scale and choice with CNCF. Kind of a perfect storm for you guys, explain kind of what's going on, your role and how it's all kind of clicking together. >> Sure, so it is, you're exactly right, it's an exciting time to be there. There's a lot of change. Everyone here at the conference is so excited, there is so much new going on. About 2 1/2 years ago, IBM went all in on Kubernetes for our Cloud as well as for on-prem offerings to leverage and provide flexibility, portability, eliminating vendor lock-in, all those things that our customers asked us for, and then adding capabilities on top of it. So, we are really excited to kind of grow and participate in the ecosystem. >> So, I hear a lot of people talk about Kubernetes. First of all, we love covering it, but the language around what is Kubernetes, they're even doing children's books, stories, trying to break it down. The rise of Kubernetes kind of has gone mainstream, but I hear things like the Kubernetes stack, the CNCF stack. I mean, it's not necessarily a stack per se. Could you break down, 'cause a lot of people are going to CNCF for a variety of other things. >> Right. >> With Kubernetes at the core, describe how you talk to customers, how do you explain it. Unpack the positioning of Kubernetes at the core, and the CNCF offerings, or what do people call it? The stack, the CNCF stack? Or, how does this all break down? >> Yeah, so you're right. It's a very complex stack, and that's where the complexity comes in that we're trying to eliminate for our customers, is to simplify managing that stack. So, at the top of the stack, of course, we've got Kubernetes for the orchestration layer. Below that, we've got the engine. We're using containerd now, but we also have Prometheus, Fluentd, Calico, it's a very complex stack. And, when you think about managing that and a new version comes out from Kubernetes, how does that affect anything else in that stack? >> Chris, wonder if you can explain a little bit what IBM's doing here, because some people I've heard, they've said, ah, there's like over 70 different, you know, platforms with Kubernetes, oh they're all trying to sell me a Kubernetes distribution. >> Right. >> I don't believe that's the case. So, maybe you just explain what bakes into your products, what IBM bakes into the community. >> Right. >> And your role, yeah. >> Well, you're exactly right. So we're not forking and doing anything IBM-esque with Kubernetes. 
>> Right. We have core maintainers that live out there. That's their job, is to focus upstream. We think that's very important, to be agnostic and to participate in these communities. Now, what we do is, we build our solutions on top of these open source projects, adding value, simplifying the management of those solutions. So, you think about the CNCF conformance testing, IBM participates. We typically are the first public cloud to add support for a new version of Kubernetes. So we're really excited to do that, and the only way we can do that is by actively participating in the community. >> The upstream dynamic is important. Just talk about that for a second, because this is, I think, why one of the reasons it's been so successful is the upstream contribution is not your IBM perspective, it's just pure contribution for the benefit of the community, then downstream, you guys are productizing that piece. >> Right. >> That is kind of, that is the purpose of open source. >> Exactly, exactly, and you hear time and time again at these conferences that the power of the community is so much greater than one individual company. So, let's work together as a community, build that solid foundation at the open source level, and then IBM's going to add things that we think are differentiating and unique to our offering. >> What's the number one end-user conversation, problem that's being solved with the evolution of CNCF and Kubernetes at the core? Obviously, choice is one, but specifically as you talk to customers, what is the big need? What are the conversations like? Can you share some input into, insight into the customer equation? >> Probably the biggest request is around security, and that's on a couple of fronts. One, maybe this is my first step into public cloud, so how do I ensure, in a multi-tenant world, that I am secure in isolation and all of those things. But then also, thinking about maybe I'm just starting with containers and microservices. So, this is a completely different mental paradigm in how I'm developing code, running code, and to explain to them how IBM is helping simplify that security aspect along that entire journey. >> So talk about the auto-scaling security piece, because, again, the touch points, it's interesting about Cloud, the entry point is multiple avenues for a customer, could be workload portability. It could be for a native application in the Cloud. Where's the scale come in? How do you guys see the scale picture developing? >> Right, so again, scaling comes in kind of two factors. One, Pod Autoscaling from Kubernetes. So, you can define, let your application scale out when it needs to, but then there is also the infrastructure side. So, I need to be able to set parameters to scale up when I need to and then scale back down to kind of meet my requirements as well as managing my cost. >> Well, IBM Think's coming up on February 15th, just a plug for theCUBE. We'll be there, obviously register, but IBM Think is a big conference. How much of Kubernetes will be at the center of IBM Think? >> Kubernetes will be a huge part at Think. We encourage everyone listening to come sign up and join us. There will be a range from hands-on for your Developer focus or your Operators. We'll have much larger business benefits for our C-level participants. So, a lot of activities, a lot of fun, a lot to learn at IBM Think 2019 in San Francisco. 
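
The pod autoscaling Rosen mentions is the standard Kubernetes HorizontalPodAutoscaler mechanism. The sketch below is only a generic illustration of that mechanism, not an IBM Kubernetes Service feature; the deployment name, namespace, and thresholds are invented for the example.

```python
# Rough illustration only: create a HorizontalPodAutoscaler for a deployment
# named "web" so it scales between 2 and 10 replicas at ~70% average CPU.
# The names and numbers here are invented; nothing is IBM-specific.
import subprocess

def autoscale_deployment(name="web", namespace="default",
                         min_replicas=2, max_replicas=10, cpu_percent=70):
    # "kubectl autoscale" creates the HorizontalPodAutoscaler object.
    subprocess.run(
        ["kubectl", "autoscale", "deployment", name,
         "--namespace", namespace,
         f"--min={min_replicas}", f"--max={max_replicas}",
         f"--cpu-percent={cpu_percent}"],
        check=True,
    )
    # Show the autoscaler Kubernetes just created.
    subprocess.run(
        ["kubectl", "get", "hpa", name, "--namespace", namespace],
        check=True,
    )

if __name__ == "__main__":
    autoscale_deployment()
```

The cluster-level scaling he pairs it with, adding and removing worker nodes, is a separate control and is usually configured on the cluster or worker pool rather than in the application manifest.
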
>> What's the biggest story here at KubeCon, CloudNativeCon, for the folks not here, or watching, or maybe are wait-listed in the lobby-con (Chris laughs) that's happening in Seattle? What's the biggest story? >> The biggest story is the vibrant ecosystem. When you look at the amount of people that are here, the chatter, the booths are packed, the sessions are packed, the keynotes are packed. It's great, everyone wants to share a story, learn from each other. It's a fantastic community to be a part of. >> I've got to ask you about the programmability piece, because one of the things that people look for is virtual private networks, they're using VPNing, they want to take VPNs to the next level, SD-WAN, super-hot trend that's kicking back up, people want to program networks. >> Right. >> They don't want to have to actually provision networks anymore. This is DevOps, but now it's also the network layer. Storage and compute looking good? >> Right. >> Network is evolving, how do you guys see that picture? Can you comment on that, it's a hot area. I just want to get your perspective. >> Yeah, definitely evolving just like the rest of the space. So, we are excited to work with various vendors here. IBM has our own point of view of what virtual private cloud means, supporting bring your own IP, private endpoints, private clusters, so that way, if I only want connectivity inside my backbone network, I can configure my networks that way, creating a VPN tunnel back to my resources on-prem, and just have it completely isolated from the rest of the world. >> You see a lot of on-premises activity, Azure Stack, Amazon announces this Outposts cloud that's supposed to be about a year away, and their whole message is latency. >> Right. >> Workloads need certain things, some of them need low latency. >> Right. >> Some need more security. Just, is that just a course of business now, that customers have to have these diverse sets of needs met? >> Absolutely, so IBM has two offerings, IBM Cloud Private for on-prem with multi-cloud manager, that's really focused at managing in that hybrid or multi-cloud world. How do we simplify resources that are running on-prem, IBM Cloud, other Clouds, and how do we do so efficiently? So, we definitely see a lot of hybrid, hybrid architectures, whether that's on-prem to IBM Cloud, IBM Cloud to other Clouds, and latency really becomes a minimal factor. >> And what's your to-do list on Kubernetes as you look at this event, obviously continuing to grow, the international piece is pretty compelling as well, growth in China, we're seeing that. What are your plans for the IBM Kubernetes offering, what's the roadmap look like, what can you share, some insight into what's next for you guys? >> Absolutely, so we're definitely focused on security, that continues to be paramount, even though we think we are a very secure offering already, but continuing to expand on that. The private endpoints that I mentioned, the private connectivity, isolating network traffic is a huge piece of it, staying compliant and up to date with Kubernetes versions as they come out, making sure that they're scalable, performant, upgradeable, and then making those available to our users. >> IBM continuing to transform, obviously the big news we saw with the Red Hat acquisition, you know, obviously you've been in the Cloud for a while, everyone knows that with Bluemix, maybe people don't know as much about the work that went into Bluemix, for instance, a lot of great stuff. You guys have built, you know, the Developer side within Cloud. 
IBM Think is February 15th, it's going to be in San Francisco. theCUBE will be there. Check these guys out. They're going to have a lot of workshops. We're excited to see how the evolution of IBM and IBM Cloud continues. Chris, thanks for coming on theCUBE, appreciate it. >> Thank you very much. >> theCUBE coverage, I'm John Furrier, Stu Miniman, stay with us for more coverage, here in Seattle, after this short break. (upbeat techno music)

Published Date : Dec 11 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Chris | PERSON | 0.99+
Chris Rosen | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Seattle | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
February 15th | DATE | 0.99+
Amazon | ORGANIZATION | 0.99+
Vegas | LOCATION | 0.99+
China | LOCATION | 0.99+
first | QUANTITY | 0.99+
two offerings | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+
RedHat | ORGANIZATION | 0.99+
CloudNativeCon | EVENT | 0.99+
KubeCon 2018 | EVENT | 0.99+
one | QUANTITY | 0.98+
first step | QUANTITY | 0.98+
Kubernetes | TITLE | 0.98+
three days | QUANTITY | 0.98+
two factors | QUANTITY | 0.98+
this year | DATE | 0.97+
CloudNativeCon North America 2018 | EVENT | 0.96+
IBM Think 2019 | EVENT | 0.95+
One | QUANTITY | 0.95+
About 2 1/2 years ago | DATE | 0.94+
First | QUANTITY | 0.94+
Azure | TITLE | 0.93+
IBM Think | EVENT | 0.92+
CloudNative | EVENT | 0.92+
IBM Kubernetes Service | ORGANIZATION | 0.91+
CNCF | ORGANIZATION | 0.91+
Kubernetes | ORGANIZATION | 0.9+
Prometheus | TITLE | 0.88+
second | QUANTITY | 0.88+
Think | ORGANIZATION | 0.87+
Bluemix | ORGANIZATION | 0.87+
over 70 different | QUANTITY | 0.87+
one individual | QUANTITY | 0.86+
Think | EVENT | 0.84+
a year | QUANTITY | 0.83+
The Cloud Native Computing Foundation | ORGANIZATION | 0.83+
theCUBE | ORGANIZATION | 0.78+
Cloud | TITLE | 0.77+
Calico | ORGANIZATION | 0.75+
Cloud Private | TITLE | 0.66+

Steve George, Weaveworks & Steve Waterworth, Weaveworks | AWS Startup Showcase S2 E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two of the ongoing series. We're covering exciting start startups in the AWS ecosystem to talk about open source community stuff. I'm your host, Dave Nicholson. And I'm delighted today to have two guests from Weaveworks. Steve George, COO of Weaveworks, and Steve Waterworth, technical marketing engineer from Weaveworks. Welcome, gentlemen, how are you? >> Very well, thanks. >> Very well, thanks very much. >> So, Steve G., what's the relationship with AWS? This is the AWS Startup Showcase. How do Weaveworks and AWS interact? >> Yeah sure. So, AWS is a investor in Weaveworks. And we, actually, collaborate really closely around EKS and some specific EKS tooling. So, in the early days of Kubernetes when AWS was working on EKS, the Elastic Kubernetes Service, we started working on the command line interface for EKS itself. And due to that partnership, we've been working closely with the EKS team for a long period of time, helping them to build the CLI and make sure that users in the community find EKS really easy to use. And so that brought us together with the AWS team, working on GitOps and thinking about how to deploy applications and clusters using this GitOps approach. And we've built that into the EKS CLI, which is an open source tool, is a project on GitHub. So, everybody can get involved with that, use it, contribute to it. We love hearing user feedback about how to help teams take advantage of the elastic nature of Kubernetes as simply and easily as possible. >> Well, it's great to have you. Before we get into the specifics around what Weaveworks is doing in this area that we're about to discuss, let's talk about this concept of GitOps. Some of us may have gotten too deep into a Netflix series, and we didn't realize that we've moved on from the world of DevOps or DevSecOps and the like. Explain where GitOps fits into this evolution. >> Yeah, sure. So, really GitOps is an instantiation, a version of DevOps. And it fits within the idea that, particularly in the Kubernetes world, we have a model in Kubernetes, which tells us exactly what we want to deploy. And so what we're talking about is using Git as a way of recording what we want to be in the runtime environment, and then telling Kubernetes from the configuration that is stored in Git exactly what we want to deploy. So, in a sense, it's very much aligned with DevOps, because we know we want to bring teams together, help them to deploy their applications, their clusters, their environments. And really with GitOps, we have a specific set of tools that we can use. And obviously what's nice about Git is it's a very developer tool, or lots and lots of developers use it, the vast majority. And so what we're trying to do is bring those operational processes into the way that developers work. So, really bringing DevOps to that generation through that specific tooling. >> So Steve G., let's continue down this thread a little bit. Why is it necessary then this sort of added wrinkle? If right now in my organization we have developers, who consider themselves to be DevOps folks, and we give them Amazon gift cards each month. And we say, "Hey, it's a world of serverless, "no code, low code lights out data centers. "Go out and deploy your code. "Everything should be fine." What's the problem with that model, and how does GitOps come in and address that? >> Right. I think there's a couple of things. 
So, for individual developers, one of the big challenges is that, when you watch development teams, like deploying applications and running them, you watch them switching between all those different tabs, and services, and systems that they're using. So, GitOps has a real advantage to developers, because they're already sat in Git, they're already using their familiar tooling. And so by bringing operations within that developer tooling, you're giving them that familiarity. So, it's one advantage for developers. And then for operations staff, one of the things that it does is it centralizes where all of this configuration is kept. And then you can use things like templating and some other things that we're going to be talking about today to make sure that you automate and go quickly, but you also do that in a way which is reliable, and secure, and stable. So, it's really helping to bring that run fast, but don't break things kind of ethos to how we can deploy and run applications in the cloud. >> So, Steve W., let's start talking about where Weaveworks comes into the picture, and what's your perspective. >> So, yeah, Weaveworks has an engine, a set of software, that enables this to happen. So, think of it as a constant reconciliation engine. So, you've got your declared state, your desired state is declared in Git. So, this is where all your YAML for all your Kubernetes hangs out. And then you have an agent that's running inside Kubernetes, that's the Weaveworks GitOps agent. And it's constantly comparing the desired state in Git with the actual state, which is what's running in Kubernetes. So, then as a developer, you want to make a change, or an operator, you want to make a change. You push a change into Git. The reconciliation loop runs and says, "All right, what we've got in Git does not match "what we've got in Kubernetes. "Therefore, I will create story resource, whatever." But it also works the other way. So, if someone does directly access Kubernetes and make a change, then the next time that reconciliation loop runs, it's automatically reverted back to that single source of truth in Git. So, your Kubernetes cluster, you don't get any configuration drift. It's always configured as you desire it to be configured. And as Steve George has already said, from a developer or engineer point of view, it's easy to use. They're just using Git just as they always have done and continue to do. There's nothing new to learn. No change to working practices. I just push code into Git, magic happens. >> So, Steve W., little deeper dive on that. When we hear Ops, a lot of us start thinking about, specifically in terms of infrastructure, and especially since infrastructure when deployed and left out there, even though it's really idle, you're paying for it. So, anytime there's an Ops component to the discussion, cost and resource management come into play. You mentioned this idea of not letting things drift from a template. What are those templates based on? Are they based on... Is this primarily an infrastructure discussion, or are we talking about the code itself that is outside of the infrastructure discussion? >> It's predominantly around the infrastructure. So, what you're managing in Git, as far as Kubernetes is concerned, is always deployment files, and services, and horizontal pod autoscalers, all those Kubernetes entities. Typically, the source code for your application, be it in Java, Node.js, whatever it is you happen to be writing it in, that's, typically, in a separate repository. 
You typically don't combine the two. So, you've got one set of repositories, basically, for building your containers, and your CI will run off that, and ultimately push a container into a registry somewhere. Then you have a separate repo, which is your config repo, which declares what version of the containers you're going to run, how many you're going to run, how the services are bound to those containers, et cetera. >> Yeah, that makes sense. Steve G., talk to us about this concept of trusted application delivery with GitOps, and frankly, it's what led to the sort of prior question. When you think about trusted application delivery, where is that intertwinement between what we think of as the application code versus the code that is creating the infrastructure? So, what is trusted application delivery? >> Sure, so, with GitOps, we have the ability to deploy the infrastructure components. And then we also define what the application containers are, that would go to be deployed into that environment. And so, this is a really interesting question, because some teams will associate all of the services that an application needs within an application team. And sometimes teams will deploy sort of horizontal infrastructure, which then all application teams' services take advantage of. Either way, you can define that within your configuration, within your GitOps configuration. Now, when you start deploying at speed, particularly when you have multiple different teams doing these sorts of deployments, one of the questions that starts to come up will be from the security team, or someone who's thinking about, well, what happens if we make a deployment which is accidentally incorrect, or if there is a security issue in one of those dependencies, and we need to get a new version deployed as quickly as possible? And so, in the GitOps pipeline, one of the things that we can do is to put in various checkpoints to check that the policy is being followed correctly. So, are we deploying the right number of applications, the right configuration of an application? Does that application follow certain standards that the enterprise has set down? And that's what we talk about when we talk about trusted policy and trusted delivery. Because really what we're thinking about here is enabling the development teams to go as quickly as possible with their new deployments, but protecting them with automated guard rails. So, making sure that they can go fast, but they are not going to do anything which destroys the reliability of the application platform. >> Yeah, you've mentioned reliability and kind of alluded to scalability in the application environment. What about looking at this from the security perspective? There've been some recent, pretty well publicized breaches. Not a lot of senior executives in enterprises understand that a very high percentage of the code that their businesses are running on is coming out of the open source community, where developers and maintainers are, to a certain degree, what they would consider to be volunteers. That can be a scary thing. So, talk about why an enterprise struggles today with security, policy, and governance. And I toss this out to Steve W. or Steve George. Answer appropriately. >> I'll try that at a high level, and Steve W. can give more of the technical detail. I mean, I'll say that when I talk to enterprise customers, there's two areas of concern. 
One area of concern is that we're in an environment with DevOps where, as we started this conversation, we're trying to help teams to go as quickly as possible. But there's many instances where teams accidentally do things, but, nonetheless, that is a security issue. They deploy something manually into an environment, they forget about it, and that's something which is wrong. So, helping with this kind of policy-as-code pipeline, ensuring that everything goes through a set of standards, could really help teams. And that's why we call it developer guard rails, because this is about helping the development team by providing automation around the outside, that helps them to go faster and relieves them from that mental concern of have they made any mistakes or errors. So, that's one form. And then the other form is the form where you are going, David, which is really around security dependencies within software, a whole supply chain of concern. And what we can do there is, by, again, having a set of standard scanners and policy checking, ensure that everything is checked before it goes into the environment. That really helps to make sure that there are no security issues in the runtime deployment. Steve W., anything that I missed there? >> Yeah, well, I'll just say, I'll just go a little deeper on the technology bit. So, essentially, we have a library of policies, which get you started. Of course, you can modify those policies, write your own. The library is there just to get you going. So, as a change is made, typically via, say, a GitHub action, the policy engine then kicks in and checks all those deployment files, all that YAML for Kubernetes, and looks for things that are outside of policy. And if that's the case, then the action will fail, and that'll show up on the pull request. So, things like, are your containers coming from trusted sources? You're not just pulling in some random container from a public registry. You're actually using a trusted registry. Things like, are containers running as root, or are they running in privileged mode, which, again, could be a security risk? But it's not just about security, it can also be about coding standards. Are the containers correctly annotated? Is the deployment correctly annotated? Does it have the annotation fields that we require for our coding standards? And it can also be about reliability. Does the deployment script have the health checks defined? Does it have a suitable replica count? So, a rolling update. We'll actually do a rolling update. You can't do a rolling update with only one replica. So, you can have all these sorts of checks and guards in there. And then finally, there's an admission controller that runs inside Kubernetes. So, if someone does try and squeeze through, and do something a little naughty, and go directly to the cluster, it's not going to happen, 'cause that admission controller is going to say, "Hey, no, that's a policy violation. I'm not letting that in." So, it really just stops. It stops developers making mistakes. I know, I know, I've done development, and I've deployed things into Kubernetes, and haven't got the config quite right, and then it falls flat on its face. And you're sitting there scratching your head. And with the policy checks, then that wouldn't happen. 'Cause you would try and put something in that has a slightly iffy configuration, and it would spit it straight back out at you. >> So, obviously you have some sort of policy engine that you're relying on. 
But what is the user experience like? I mean, is this a screen that is reminiscent of the matrix with non-readable characters streaming down that only another machine can understand? What does this look like to the operator? >> Yeah, sure, so, we have a console, a web console, where developers and operators can use a set of predefined policies. And so that's the starting point. And we have a set of recommendations there and policies that you can just attach to your deployments. So, set of recommendations about different AWS resources, deployment types, EKS deployment types, different sets of standards that your enterprise might be following along with. So, that's one way of doing it. And then you can take those policies and start customizing them to your needs. And by using GitOps, what we're aiming for here is to bring both the application configuration, the environment configuration. We talked about this earlier, all of this being within Git. We're adding these policies within Git as well. So, for advanced users, they'll have everything that they need together in a single unit of change, your application, your definitions of how you want to run this application service, and the policies that you want it to follow, all together in Git. And then when there is some sort of policy violation on the other end of the pipeline, people can see where this policy is being violated, how it was violated. And then for a set of those, we try and automate by showing a pull request for the user about how they can fix this policy violation. So, try and make it as simple as possible. Because in many of these sorts of violations, if you're a busy developer, there'll be minor configuration details going against the configuration, and you just want to fix those really quickly. >> So Steve W., is that what the Mega Leaks policy engine is? >> Yes, that's the Mega Leaks policy engine. So, yes, it's a SaaS-based service that holds the actual policy engine and your library of policies. So, when your GitHub action runs, it goes and essentially makes a call across with the configuration and does the check and spits out any violation errors, if there are any. >> So, folks in this community really like to try things before they deploy them. Is there an opportunity for people to get a demo of this, get their hands on it? what's the best way to do that? >> The best way to do it is have a play with it. As an engineer, I just love getting my hands dirty with these sorts of things. So, yeah, you can go to the Mega Leaks website and get a 30-day free trial. You can spin yourself up a little, test cluster, and have a play. >> So, what's coming next? We had DevOps, and then DevSecOps, and now GitOps. What's next? Are we going to go back to all infrastructure on premises all the time, back to waterfall? Back to waterfall, "Hot Tub Time Machine?" What's the prediction? >> Well, I think the thing that you set out right at the start, actually, is the prediction. The difference between infrastructure and applications is steadily going away, as we try and be more dynamic in the way that we deploy. And for us with GitOps, I think we're... When we talk about operations, there's a lots of depth to what we mean about operations. So, I think there's lots of areas to explore how to bring operations into developer tooling with GitOps. So, that's, I think, certainly where Weaveworks will be focusing. >> Well, as an old infrastructure guy myself, I see this as vindication. Because infrastructure still matters, kids. 
And we need sophisticated ways to make sure that the proper infrastructure is applied. People are shocked to learn that even serverless application environments involve servers. So, I tell my 14-year-old son this regularly, he doesn't believe it, but it is what it is. Steve W., any final thoughts on this whole move towards GitOps and, specifically, the Weaveworks secret sauce and superpower. >> Yeah. It's all about (indistinct)... It's all about going as quickly as possible, but without tripping up. Being able to run fast, but without tripping over your shoe laces, which you forgot to tie up. And that's what the automation brings. It allows you to go quickly, does lots of things for you, and yeah, we try and stop you shooting yourself in the foot as you're going. >> Well, it's been fantastic talking to both of you today. For the audience's sake, I'm in California, and we have a gentleman in France, and a gentlemen in the UK. It's just the wonders of modern technology never cease. Thanks, again, Steve Waterworth, Steve George from Weaveworks. Thanks for coming on theCUBE for the AWS Startup Showcase. And to the rest of us, keep it right here for more action on theCUBE, your leader in tech coverage. (upbeat music)
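
Waterworth's "constant reconciliation engine" from earlier in the conversation can be pictured with a short sketch. The Python below is only an illustration of the GitOps idea under stated assumptions, not the Weaveworks agent: it assumes a local clone of a config repo full of Kubernetes YAML, kubectl access to the target cluster, and invented paths, whereas the real agent runs in-cluster and talks to the Kubernetes API directly.

```python
# Minimal sketch of a GitOps reconciliation loop (not the Weaveworks agent).
# Assumes a local clone of the config repo containing Kubernetes YAML and
# kubectl access to the cluster. Paths and the interval are invented.
import subprocess
import time

CONFIG_REPO = "/srv/gitops/config-repo"   # hypothetical clone of the config repo
MANIFESTS = "clusters/prod"               # hypothetical directory of YAML manifests

def reconcile_once():
    # 1. Refresh the desired state: pull the latest commit from Git.
    subprocess.run(["git", "-C", CONFIG_REPO, "pull", "--ff-only"], check=True)

    # 2. Compare desired state (Git) with actual state (cluster).
    #    "kubectl diff" exits non-zero when the live objects differ from the files.
    diff = subprocess.run(
        ["kubectl", "diff", "-R", "-f", f"{CONFIG_REPO}/{MANIFESTS}"],
        capture_output=True, text=True,
    )

    # 3. If anything drifted (or is missing), apply Git's version back,
    #    reverting manual changes made directly against the cluster.
    if diff.returncode != 0:
        print(diff.stdout)
        subprocess.run(
            ["kubectl", "apply", "-R", "-f", f"{CONFIG_REPO}/{MANIFESTS}"],
            check=True,
        )

if __name__ == "__main__":
    while True:              # Git stays the single source of truth
        reconcile_once()
        time.sleep(60)
```

In the layout Waterworth describes, the application source lives in a separate repo whose CI builds and pushes container images; only the config repo above is what the reconciler watches.
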
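The policy checks Waterworth lists (trusted registries, no root or privileged containers, required annotations, health checks, a sensible replica count) can likewise be pictured as code. The sketch below is a hand-rolled, generic policy-as-code check over a Deployment manifest, not the Weaveworks policy library or its admission controller; the trusted-registry prefix and file name are invented for the example.

```python
# Illustrative policy-as-code checks over a Kubernetes Deployment manifest.
# This is a hand-rolled sketch, not the Weaveworks policy engine; the rules
# and the trusted-registry prefix are invented for the example.
import yaml

TRUSTED_REGISTRY = "registry.example.com/"   # hypothetical trusted registry

def check_deployment(manifest: dict) -> list[str]:
    violations = []
    spec = manifest.get("spec", {})
    pod = spec.get("template", {}).get("spec", {})

    if spec.get("replicas", 1) < 2:
        violations.append("fewer than 2 replicas: rolling updates need headroom")

    for c in pod.get("containers", []):
        name = c.get("name", "<unnamed>")
        if not c.get("image", "").startswith(TRUSTED_REGISTRY):
            violations.append(f"{name}: image is not from the trusted registry")
        sec = c.get("securityContext", {})
        if sec.get("privileged"):
            violations.append(f"{name}: privileged mode is not allowed")
        if sec.get("runAsNonRoot") is not True:
            violations.append(f"{name}: must set runAsNonRoot: true")
        if "livenessProbe" not in c or "readinessProbe" not in c:
            violations.append(f"{name}: health checks (probes) are missing")
    return violations

if __name__ == "__main__":
    with open("deployment.yaml") as f:        # hypothetical manifest path
        findings = check_deployment(yaml.safe_load(f))
    for finding in findings:
        print("policy violation:", finding)
    raise SystemExit(1 if findings else 0)    # fail the CI step on violations
```

In the workflow described above, checks like these would run twice: once in CI so the pull request fails early, and once in-cluster via an admission controller so nothing slips in directly.
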

Published Date : Jan 26 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Steve Waterworth | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
David | PERSON | 0.99+
Steve George | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Steve G. | PERSON | 0.99+
France | LOCATION | 0.99+
Steve W. | PERSON | 0.99+
California | LOCATION | 0.99+
30-day | QUANTITY | 0.99+
Weaveworks | ORGANIZATION | 0.99+
Git | TITLE | 0.99+
UK | LOCATION | 0.99+
GitOps | TITLE | 0.99+
Java | TITLE | 0.99+
two | QUANTITY | 0.99+
Node.js | TITLE | 0.99+
one advantage | QUANTITY | 0.99+
two guests | QUANTITY | 0.99+
Mega Leaks | TITLE | 0.99+
Mega Leaks | TITLE | 0.99+
both | QUANTITY | 0.99+
today | DATE | 0.99+
each month | QUANTITY | 0.99+
DevOps | TITLE | 0.98+
Netflix | ORGANIZATION | 0.98+
one set | QUANTITY | 0.98+
DevSecOps | TITLE | 0.98+
one form | QUANTITY | 0.98+
EKS | TITLE | 0.98+
one | QUANTITY | 0.97+
One area | QUANTITY | 0.97+
Kubernetes | TITLE | 0.97+
two areas | QUANTITY | 0.97+
one replica | QUANTITY | 0.96+
GitHub | ORGANIZATION | 0.95+

Clive Charlton and Aditya Agrawal | AWS Public Sector Summit Online


 

(upbeat music) >> Narrator: From around the globe, it's theCUBE, with digital coverage of AWS public sector online, (upbeat music) brought to you by Amazon Web Services. >> Everyone welcome back to theCUBE's virtual coverage of the AWS Public Sector Summit Online. I'm John Furrier, your host of theCUBE. Normally we're in person, out on Asia-Pacific, and all the different events related to public sector. But this year we have to do it remote, and we're going to do the remote virtual CUBE, with the Data Virtual Public Sector Online Summit. And we have two great guests here to talk about the Digital Earth Africa project: Clive Charlton, Head of Solutions Architecture, Sub-Saharan Africa with AWS, Clive thanks for coming on, and Aditya Agrawal, founder of D4DInsights, and also the advisor for the Digital Earth Africa project with AWS. So gentlemen, thank you for coming on. Appreciate you coming on remotely. >> Thanks for having us. >> Thank you for having us, John. >> So Clive, take us through real quickly. Just take a minute to describe what is the Digital Earth Africa Project. What are the problems that you're aiming to solve? >> Well, we're really aiming to provide actionable data to governments and organizations around Africa, by providing satellite imagery in an easy-to-use format, and doing that on the cloud, in a way that serves countries throughout Africa. >> And just from a cloud perspective, give us a quick taste of what's going on, just with the tech, it's on Amazon. You got a little satellite action. Is there ground station involved? Give us a little bit more color around, you know, what's the scope of the project. >> Yeah, so, historically speaking you'd have to process satellite imagery, downlink it, and then do some heavy, heavy lifting around the processing of the data. Digital Earth Africa was built from the experiences of Digital Earth Australia, originally developed by Geoscience Australia, and they use the container service for Kubernetes called Elastic Kubernetes Service to spin up virtual machines, which are required to process the raw satellite imagery into a format called a Cloud Optimized GeoTIFF. This format is used to store very large volumes of data in a format that's really easy to query. So, organizations can just use an HTTP GET range request to query just the part of the file that they're interested in, which means the results are served much, much quicker, for a much better overall experience. Under the hood, the data is stored in the Amazon Simple Storage Service, which is S3, and the metadata index in a Relational Database Service that runs the Open Data Cube library, which allows Digital Earth Africa to store this data in both space and time. >> It's interesting. I just did some interviews last week, on a symposium on space and cybersecurity, and we were talking about the impact of satellites and GPS and just the overall infrastructure shift. And it's just another part of the edge of the network. Aditya, I want to get your thoughts on this, and your reaction to Digital Earth, 'cause you're an advisor. Let's zoom out. What's the impact on people's lives? Give us a quick overview of how you see it playing out, because, explaining to someone who doesn't know anything about the project, like, okay what is it about, and how does it actually impact people? >> Sure. 
So, you know, as, as Clive mentioned, I mean there's, there's definitely a, a digital infrastructure behind Digital Earth Africa, in a way that it's going to be able to serve free and open satellite data. And often the, the issue around satellite data, especially within the context of Africa, and other parts of the world is that there's a level of capacity that's required, in order to be able to use that data. But there's also all kinds of access issues, because, traditionally satellite data is heavy. There's the old model of being able to download the data and then being able to do something with it. And then often about 80% of the time, that you spend on satellite data is spent, just pre processing the data, before you can actually, do any of the fun analysis around it, that really sort of impacts the kinds of decisions and actions that you're looking for. And so that's why Digital Earth Africa. And that's why this partnership, with Amazon is a fantastic partnership, because it really allows us, to be able, to scale the approach across the entire continent, make it easy for that data to be accessed and make it easier for people to be able to use that data. The way that Digital Earth Africa is being operationalized, is that we're not just looking at it, from the perspective of, let's put another infrastructure into Africa. We want this program, and it is a program, that we want institutionalized within Africa itself. One that leverages expertise across the continent, and one that brings in organizations across the continent to really sort of take the leadership and ownership of this program as it moves forward. The idea of it is that, once you're able to have this information, being able to address issues like food security, climate change, coastal resilience, land degradation where illegal mining is, where is the water? We want to be able to do that, in a way that it's really looking at what are the national development priorities within the countries themselves, and how does it also then support regional and global frameworks like Africa's Agenda 2063 and the sustainable development goals. >> No doubt in my mind, obviously, is that huge benefits to these kinds of technologies. I want to also just ask you, as a follow up is a huge space race going on, right now, explosion of availability of satellite data. And again, more satellites going up, There's more congestion, more contention. Again, we had a big event on that cybersecurity, and the congestion issue, but, you know, satellite data was power everyone here in the United States, you want an Uber, you want Google Maps you've got your everywhere with GPS, without it, we'd be kind of like (laughing), wondering what's going on. How do we even vote these days? So certainly an impact, but there's a huge surge of availability, of the use of satellite data. How do you explain this? And what are some of the challenges, from the data side that's coming, from the Digital Earth Africa project that you guys hope to resolve? >> Sure. I mean, that's a great question. I mean, I think at one level, when you're looking at the space race right now, satellites are becoming cheaper. They're becoming more efficient. There's increased technology now, on the types of sensors that you can deploy. 
There's companies like Planet that are really revolutionizing how even small countries are able to deploy their own satellites, and the constellations that they're putting forward, in terms of the frequency by which you're able to get data for any given part of the earth on a daily basis. Coupled with that, and you know, this is really sort of in Clive's purview, the cloud computing capabilities and overall computing power that you have today versus what you had 10 years, 15 years ago is so vastly different. What used to take weeks to do before, for any kind of analysis on satellite data, which is heavy data, now takes, you know, minutes or hours to do. So when you put all that together, again, you know, I think it really speaks to the power of this partnership with Amazon and really, what that means for how this data is going to be delivered to Africa, because it really allows for the scalability for anything that happens through Digital Earth Africa. And so, for example, one of the approaches that we're taking is, we identify what the priorities and needs are at the country level. Let's say that it's land degradation, there's often common issues across countries. And so we can take one particular issue, test it with additional countries, and then we can scale it across the whole continent, because the infrastructure is there for the whole continent. >> Yeah. That's a great point. So many storylines here. We'll get to Clive in a second on sustainability. And I want to talk about the Open Data Platform. Obviously, open data, having data is one thing, but now trusted data, and having more trusted data, becomes a huge issue. Again, I want to dig into that for a second, but, Clive, I want to ask you, first, what region are we in? I mean, is this, you guys actually have a great, first of all, we've been covering the region expansion from Bahrain all the way, as it moves around the world, probably soon in space. There'll be an Amazon space station region probably, someday in the future, but, what region are you running the project out of? Can you, and why is it important? Can you share the update on the regional piece? >> Well, we're very pleased that Digital Earth Africa is using the new Africa region in Cape Town, in South Africa, which was launched in April of this year. It's one of 24 regions around the world, and we have another three new regions announced. What this means for users of Digital Earth Africa is, they're able to use the region closest to them, which gives them the best user experience. It's the, it's the quickest connection for them. But more importantly, we also wanted to use an African solution for African people, and using the Africa region in Cape Town really aligned with that thinking. >> So, localization on the data, latency, all that stuff is kind of within the region, within country here. Right? >> That's right, yeah. >> And why is that important? Is there any other benefits? Why should someone care? Obviously, this failover option, I mean, in any other countries to go to, but why is having something in that region important for this project? >> Well, it comes down to latency for the, for the users. So, being as close to the data as possible is, is really important for the user experience. Especially when you're looking at large data sets, and big queries. You don't want to be, you don't want to be waiting a long lag time for that query to go backwards and forwards, between the user and the region. 
So, having the data, in the Africa region in Cape Town is important. >> So it's about the region, I love when these new regions rollout from Amazon, Cause obviously it's this huge buildup CapEx, in this huge data center servers and everything. Sustainability is a huge part of the story. How does the sustainability piece fit into the, the data initiative supported in Africa? Can you share some updates on that? >> Well, this, this project is also closely aligned with the, Amazon Sustainability Data Initiative, which looks to accelerate sustainability research. and innovation, really by minimizing the cost, and the time required to acquire, and analyze large sustainability datasets. So the initiative supports innovators, and researchers with the data and tools, and, and technical experience, that they need to move sustainability, to the next level. These are public datasets and publicly available to anyone. In addition, to that, the initiative provides cloud grants to those who are interested in exploring, exploring the use of AWS technology and scalable infrastructure, to serve sustainability challenges, of this nature. >> Aditya, I want to hear your thoughts, on this comment that Clive made around latency, and certainly having a region there has great benefits. You don't need to hop on that. Everyone knows I'm a big fan of the regional model, but it brings up the issue, of what's going on in the country, from an infrastructure standpoint, a lot of mobility, a lot of edge computing. I can almost imagine that. So, so how do you see that evolving, from a business standpoint, from a project standpoint data standpoint, can you comment and react to that edge, edge angle? >> Yeah, I mean, I think, I think that, the value of an open data infrastructure, is that, you want to use that infrastructure, to create a whole data ecosystem type of an approach. And so, from the perspective of being able. to make this data readily accessible, making it efficiently accessible, and really being able to bring industry, into that ecosystem, because of what we really want as we, as the program matures, is for this program, to then also instigate the development of new businesses, entrepreneurship, really get the young people across Africa, which has the largest proportion of young people, anywhere in the world, to be engaged around what you can do, with satellite data, and the types of businesses that can be developed around it. And, so, by having all of our data reside in Cape Town on the continent there's obviously technical benefits, to that in terms of, being able to apply the data, and create new businesses. There's also a, a perception in the fact that, the data that Digital Earth Africa is serving, is in Africa and residing in Africa which does have, which does go a long way. >> Yeah. And that's a huge value. And I can just imagine the creativity cloud, if you can comment on this open data platform idea, because some of the commentary that we've been having on The CUBE here, and all around the world is data's great. We all know we're living with a lot of data, you starting to see that, the commoditization and horizontal scalability of data, is one thing, but to put it into software defined environments, whether, it's an entrepreneur coding up an app, or doing something to share some transparency, around some initiatives going on within the region or on the continent, it's about trusted data. It's about sharing algorithms. AI is also a consumer of data, machines consume data. 
So, it's not just the technology data, is part of this new normal. What's this Open Data Platform, And how does that translate into value in your opinion? >> I, yeah. And you know, when, when data is shared on, on AWS anyone can analyze it and build services on top of it, using a broad range of compute and data to data analytics products, you know, things like Amazon EC2, or Lambda, which is all serverless compute, to things like Amazon Elastic MapReduce, for complex extract and transformation processes, but sharing data in the cloud, lets users, spend more time on the data analysis, rather than, than the data acquisition. And researchers can analyze data shared on AWS, without needing to pay to store their own copy, which is what the Open Data Platform provides. You only have to pay for the compute that you use and you don't need to purchase storage, to start a new project. So the registry of the open data on AWS, makes it easy to find those datasets, but, by making them publicly available through AWS services. And when you share, share your data on AWS, you make it available, to a large and growing community of developers, and startups, and enterprises, all around the world. And you know, and we've been talking particularly around, around Africa. >> Yeah. So it's an open source model, basically, it's free. You don't, it doesn't cost you anything probably, just started maybe down the road, if it gets heavy, maybe to charging but the most part easy for scientists to use and then you're leveraging it into the open, contributing back. Is that right? >> Yep. That's right. To me getting, getting researchers, and startups, and organizations growing quickly, without having to worry about the data acquisition, they can just get going and start building. >> I want to get back to Aditya, on this skill gap issue, because you brought up something that, I thought was really cool. People are going to start building apps. I'm going to start to see more innovation. What are the needs out there? Because we're seeing a huge onboarding of new talent, young talent, people rescaling from existing jobs, certainly COVID accelerated, people looking for more different kinds of work. I'm sure there's a lot of (laughing) demand to, to do some innovative things. The question I always get, and want to get your reaction is, what are the skills needed to, to get involved, to one contribute, but also benefit from it, whether it's the data satellite, data or just how to get involved skill-wise >> Sure. >> Yes. >> Yeah. So most recently we've created a six week training course. That's really kind of taken users from understanding, the basics of Earth Observation Data, to how to work, with Python, to how to create their own Jupyter notebooks, and their own Use cases. And so there's a, there's a wide sort of range of skill sets, that are required depending on who you are because, effectively, what we want to be able to do is get everyone from, kind of the technical user, that might have some remote sensing background to the developer, to the policy maker, and decision maker, to understand the value of this infrastructure, whether you're the one who's actually analyzing the data. If you're the one who's developing new applications, or you're taking that information from a managerial or policy level discussion to actually deliver the action and sort of impact that you're looking for. 
And so, you know, in, in that regard, we're working with ITC in the Netherlands and again, with institutions across Africa, that already have a mandate, and expertise in this particular area, to create a holistic capacity development program, that will address all of those different factors. >> So I guess the follow up question I want to have is, how do you ensure the priorities of Africa are addressed, as part of this program? >> Yeah, so, we are, we've created a governance model, that really is both top down, and bottom up. At the bottom up level, We have a technical advisory committee, that has over 15 institutions, many of which are based across Africa, that really have a good understanding of the needs, the priorities, and the mandate for how to work with countries. And at the top down level, we're developing a governing board, that will be inclusive, of the key continental level institutions, that really provide the political buy-in, the sustainability of the program, and really provide overall guidance. And within that, we're also creating an operational models, such that these institutions, that do have the capacity to support the program, they're actually the ones, who are also going to be supporting, the implementation of the program itself. >> And there's been some United Nations, sustained development projects all kinds of government involvement, around making sure certain things would happen, within the country. Can you just share, some of the highlights, or some of the key initiatives, that are going on, that you're supporting, to make it a better, better world? >> Yeah. So this is, this program is very closely aligned to a sustainable development agenda. And so looking after, looking developing methods, that really address, the sustainable development goals as one facet, in Africa, there's another program looking overall, overall national development priorities and sustainability called the Agenda 2063. And really like, I think what it really comes down to this, this wouldn't be happening, without the country level involvement themselves. So, this started with five countries, originally, Senegal, Ghana, Kenya, Tanzania, and the government of Kenya itself, has really been, a kind of a founding partner for, how Digital Earth Africa and it's predecessor of Africa Regional Data Cube, came to be. And so without high level support, and political buying within those governments, I mean, it's really because of that. That's why we're, we're where we are. >> I need you to thank you for coming on and sharing that insight. Clive will give you the final word, for the folks watching Digital Earth Africa, processes, petabytes of data. I mean the satellite data as well, huge, you mentioned it's a new region. You're running Kubernetes, Elastic Kubernetes Service, making containers easy to use, pay as you go. So you get cutting edge, take the one minute to, to share why this region's cutting edge. Does it have the scale of other regions? What should they know about AWS, in Cape Town, for Africa's new region? Take a minute to, to put plugin. >> Yeah, thank you for that, John. So all regions are built in the, in the same way, all around the world. So they're built for redundancy and reliability. They typically have a minimum of three, what we call Availability Zones. And each one is a contains a, a cluster of, of data centers, and all interconnected with fast fiber. So, you know, you can survive, you know, a failure with with no impact to your services. 
And the Cape Town region is built in exactly the same way. We have most of the services available in the, in the Cape Town region, like most other regions. So, as a user of AWS, you, you can have the confidence that you can deploy your services and workloads into AWS and run them in the same way, with the same kind of speed, and the same kind of support and infrastructure that's backing any region, anywhere else in the world. >> Well great. Thanks for that plug. Aditya, thank you for your insight. And again, innovation follows cloud computing, whether you're building on top of it as a startup, a government or enterprise, or the big society better, in this case, the Digital Earth Africa project. Great. A great story. Thank you for sharing. I appreciate it. >> Thank you for having us. >> Thank you for having us, John. >> I'm John Furrier with theCUBE, virtual remote, not in person this year. I hope to see you next time in person. Thanks for watching. (upbeat music) (upbeat music decreases)
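
Charlton's point earlier about Cloud Optimized GeoTIFFs and HTTP GET range requests comes down to reading only the bytes you need from S3 instead of downloading whole scenes. A minimal sketch with boto3 follows; the bucket, key, and byte range are placeholders, not the actual Digital Earth Africa layout.

```python
# Sketch of the ranged-read pattern that Cloud Optimized GeoTIFFs enable:
# fetch only a slice of a large object from S3 instead of the whole file.
# Bucket, key, and byte offsets are placeholders, not Digital Earth Africa's.
import boto3

s3 = boto3.client("s3", region_name="af-south-1")   # the Cape Town region

def read_byte_range(bucket: str, key: str, start: int, end: int) -> bytes:
    # An HTTP GET with a Range header returns just the requested slice,
    # so a tiling client can read one window of a huge GeoTIFF.
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

if __name__ == "__main__":
    header = read_byte_range("example-earth-obs-bucket",
                             "scenes/2020/example_scene.tif", 0, 16383)
    print(f"fetched {len(header)} bytes of the file header")
```

Geospatial tooling such as GDAL or rasterio performs these windowed reads automatically; the snippet just makes the underlying range-request mechanic visible.
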
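Charlton also notes that shared datasets on AWS can be analyzed without keeping a private copy. Many buckets in the Registry of Open Data allow anonymous reads, which a client can request by disabling request signing. The bucket name and prefix below are placeholders; the real names are listed in each dataset's registry entry.

```python
# Sketch: list objects in a public Registry of Open Data bucket anonymously,
# i.e. without AWS credentials and without storing your own copy of the data.
# The bucket name and prefix are placeholders for this example.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(Bucket="example-open-data-bucket",
                          Prefix="example-product/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```
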

Published Date : Oct 20 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Aditya Agrawal | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Clive | PERSON | 0.99+
Cape Town | LOCATION | 0.99+
John | PERSON | 0.99+
Africa | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
United States | LOCATION | 0.99+
six week | QUANTITY | 0.99+
Agenda 2063 | TITLE | 0.99+
Clive Charlton | PERSON | 0.99+
Python | TITLE | 0.99+
Aditya | PERSON | 0.99+
Netherlands | LOCATION | 0.99+
South Africa | LOCATION | 0.99+
five countries | QUANTITY | 0.99+
United Nations | ORGANIZATION | 0.99+
last week | DATE | 0.99+
one minute | QUANTITY | 0.99+
Digital Earth Africa | ORGANIZATION | 0.99+
earth | LOCATION | 0.99+
both | QUANTITY | 0.98+
D4DInsights | ORGANIZATION | 0.98+
April of this year | DATE | 0.98+
10 years | QUANTITY | 0.98+
this year | DATE | 0.98+
Uber | ORGANIZATION | 0.98+
Bahrain | LOCATION | 0.98+
S3 | TITLE | 0.97+
15 years ago | DATE | 0.97+
over 15 institutions | QUANTITY | 0.97+
each one | QUANTITY | 0.97+
Data Virtual Public Sector Online Summit | EVENT | 0.97+
one | QUANTITY | 0.96+
first | QUANTITY | 0.96+
about 80% | QUANTITY | 0.96+
three | QUANTITY | 0.96+
Earth | LOCATION | 0.96+
Africa Regional Data Cube | ORGANIZATION | 0.96+
Google Maps | TITLE | 0.95+
one level | QUANTITY | 0.94+

Deepak Singh, AWS | DockerCon 2020


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of DockerCon LIVE 2020, brought to you by Docker and its ecosystem partners. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of DockerCon LIVE 2020. Happy to welcome back to the program one of our CUBE alumni, Deepak Singh. He's the vice president of compute services at Amazon Web Services. Deepak, great to see you. >> Likewise, hi, Stu. Nice to meet you again. >> All right, so for our audience that hasn't been in your previous times on theCUBE, give us a little bit about, you know, your role and your organization inside AWS? >> Yeah, so I'm, I've been part of the AWS compute services world from, for the last 12 years in various capacities. Today, I run a number of teams, all our container services, our Linux teams, I also happen to run a high performance computing organization, so it's a nice mix of all the computing that our customers do, especially some of the more new and large scale compute types that our customers are doing. >> All right, so Deepak, obviously, you know, the digital events, we understand what's happening with the global pandemic. DockerCon was actually always planned to be an online event but I want to understand, you know, your teams, how things are affecting, we know distributed is something that Amazon's done, but you have to cut up those two pizza and send them out to the additional groups or, you know, what advice are you giving the developers out there? >> Yeah, in many ways, obviously, how we operate has changed. We are at home, maybe I think with our families. DockerCon was always going to be virtual, but many other events like AWS Summits are now virtual so, you know, in some ways, the teams, the people that get most impacted are not necessarily the developers in our team but people who interact a lot with customers, who go to conferences and speak and they are finding new ways of being effective and being successful and they've been very creative at it. Our customers are getting very good at working with us virtually because we can always go to their site, they can always come to Seattle, or run of other sites for meeting. So we've all become very good at, and disciplined at how do you conduct really nice virtual meetings. But from a customer commitment side, from how we are operating, the things that we're doing, not that much has changed. We still run our projects the same way, the teams work together. My team tends to do a lot of happy things like Friday happy hours, they happen to be all virtual. I think last time we played, what word, bingo? I forget exactly what game we played. I know I got some point somewhere. But we do our best to maintain sort of our team chemistry or camaraderie but the mission doesn't change which is our customers expect us to keep operating their services, make sure that they're highly available, keep delivering new capabilities and I think in this environment, in some ways that's even more important than ever, as customer, as the consumer moves online and so much business is being done virtually so it keeps us on our toes but it's been an adjustment but I think we are all, not just us, I think the whole world is doing the best that they can under the circumstances. >> Yeah, absolutely, it definitely has humanized things quite a bit. From a technology standpoint, Deepak, you know, distributed systems has really been the challenge of you know, quite a long journey that people have been going on. 
Docker has played, you know, a really important role in a lot of these cloud native technologies. It's been just amazing to watch; one of the things I point to in my career is watching from those very, very early days of Docker to the Cambrian explosion of container-based services that we've seen. You've been part of it for quite a number of years and AWS has many services out there. For people that are getting started, what guidance do you give them? What should they understand about containerization in 2020? >> Yeah, containerization in 2020 is quite a bit different from when Docker started in 2013. I remember speaking at DockerCon, I forget, 2014, 2015, and it was a very different world. People were just trying to figure out what containers are and that they could package code into them. Today, containers are mainstream; most customers, or at least many customers, are starting to build new applications, probably starting them either with containers or with some form of serverless technology. At least that's the default starting point, but increasingly we also see customers with existing applications starting to think about how they adapt. And containers are a means to an end. The end is how can we move faster? How can we deliver more quickly? How can our teams be more productive? And how can we do it less expensively, at lower cost? Containers are a big, important, and critical piece of that puzzle, both in how customers are operating their infrastructure and in the whole ecosystem of schedulers and orchestration and security tools and all the things that an enterprise needs to deliver applications using containers. Over the last few years, you know, we have multiple container services that meet those needs. And I think that's been the biggest change, that there's so much more. Which also means that when you're getting started, you're faced with many more options. When Docker started, it was this cute whale: Docker run, Docker build, Docker push. It was pretty simple, you could get going really quickly. And today you have 500 different options. My guidance to customers really boils down to, what are you trying to achieve? If you're an organization that's trying to corral infrastructure and trying to use existing VMs more effectively, for example, you probably do want to invest in becoming expert at schedulers and understanding how orchestration technologies like ECS and EKS work. But if you just want to run applications, you probably want to look at something like Fargate, or, going further, you could go towards Lambda and just run code. But I think it all boils down to where you're starting your journey. And by the way, understanding Docker run, Docker build, and Docker push is still a great idea. It helps you understand how things work. >> All right, so Deepak, you've already brought up a couple of AWS services. Talk about the options out there that you can run on top of AWS; you have a lot of native services, ECS, EKS, you mentioned Fargate there, and a very broad ecosystem in this space. Obviously there are entire breakout sessions to talk about the various AWS services, but give us the one-on-one level of what to understand about container services from AWS.
>> Yeah, and these services evolved organically. We launched the Amazon Elastic Container Service, or ECS, in preview in November, or whenever re:Invent was that year, in 2014, which seems ages ago in the world of containers. But in the end, our goal is to give our customers the most choice, so that they can solve problems the way they want to solve them. So Amazon ECS is our native container orchestration service, and it's designed to work with the rest of the AWS ecosystem. It uses VPC for networking, it uses IAM for identity, it uses ALB for load balancing; those are just some examples of how it works. But it became pretty clear over time that there were a lot of customers who were investing in Kubernetes, very often starting in their own data centers. And as they migrated onto the cloud, they wanted to continue using the same toolchain, but they also wanted to not have to manage the complexity of Kubernetes control planes and upgrades. And they also wanted some of the same integrations that they were getting with ECS, and so that's where the Amazon Elastic Kubernetes Service, or EKS, comes in, which is: okay, we will manage the control plane for you. We will manage upgrades and patches for you. You focus on building your applications in a Kubernetes way, so it embraces Kubernetes. It works with all the Kubernetes tooling and gives you a Kubernetes-native experience, but then it also ties into the broad AWS ecosystem and allows us to take care of some of the muck that many customers quite frankly don't and shouldn't have to worry about. But then we took it one step further with something we actually launched at the same time as EKS, and that's AWS Fargate. Fargate came from a recognition that we had, actually, a long time ago, which is that one of the beauties of EC2 was that customers didn't have to worry about racking and stacking and where a server was running anymore. And the idea was, how can we apply that to the world of containers? We also learned a little bit from what we had done with Lambda. And we took that, took the server layer, and took it out of the way. Then, from a customer standpoint, all you're launching is a pod or a task or a service, and you're not worrying about which machines or what types of machines you need to get. And the operational simplicity that comes with it is quite remarkable, and, quite frankly, not that surprisingly, our customers want us to keep pushing the boundary of the kind of operational simplicity we can give them. Fargate serves as a critical building block and part of that, and we're super excited because, you know, today, by far, when a new customer comes and runs a container on AWS for the first time, they pick Fargate, usually using ECS, because EKS on Fargate is much newer, but that is the default starting point for any new container customer on AWS, which is great. >> All right, well, you know, Docker the company really helped a lot with that democratization of container technologies, along with all those services that you talked about from AWS. I'm curious now about the partnership with Docker here; how do some of the AWS services fit in with Docker? I'm thinking Docker Desktop probably has some place there, or some connection? >> Yeah, I think one of the things that Docker has always been really good at, as a company, as a project, is understanding the developer and the fact that they start off on a laptop.
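Before Deepak continues with the laptop-first story, his description of Fargate, where you launch a task or service without ever choosing a server, can be sketched with boto3 roughly as follows. The cluster name, task definition, subnet, and security group IDs are placeholders, not values from the interview.

```python
# Hypothetical sketch: run a task on AWS Fargate through ECS. All resource
# names and IDs below are made-up placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",        # no EC2 instances to rack, stack, or choose
    taskDefinition="web-app:1",  # family:revision registered beforehand
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)

for task in response["tasks"]:
    print(task["taskArn"], task["lastStatus"])
```

The same run_task call with launchType set to EC2 would instead schedule onto self-managed container instances, which is the operational difference Deepak is drawing.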
That's where the original Docker experience took off, and Docker Desktop since then, and we see a ton of Docker Desktop customers who use AWS. We also learned very early on, because originally the ECS CLI supported Docker Compose, that that ecosystem is also very rich; people like building Dockerfiles and Compose files and just being able to launch them. So we continue to learn from what Docker is doing with Docker Desktop. We continue working with them on making sure that customers using Docker Compose and Docker Desktop can run all their services and applications on AWS. And we'll continue working with Docker, the company, on how we make that a lot easier for our customers, they are our mutual customers, and how we can learn from the simplicity that Docker brings and the sort of ease of use Docker brings for the developer and the developer experience. We learn from that for our own services, and we love working with them to make sure that the customer that's starting with Docker Desktop or the Docker CLI has a great experience as they move towards a fully orchestrated experience in the cloud, for example. There are a couple of other areas where Docker has turned out to have had foresight and driven some of our thinking. So a few years ago, Docker released this thing called containerd, where they took their container runtime out of the bigger Docker engine. And containerd has become a very important project for us; it's the underpinning of Fargate now, and we see a lot of interest from customers that want to keep building on containerd as well. And it's going to be very interesting to see how we work with Docker going forward and how we can continue to give our customers a lot of value, starting from the laptop and then ending up with large-scale services in the cloud. >> Very interesting stuff. Anytime we have a conversation about Docker, there's Docker the technology and Docker the company, and that leads us down the discussion of open-source technologies. You were just talking about containerd; I believe that connects us to Firecracker. What are you and your team involved in, what's your viewpoint, what are you seeing from open-source, and how does Amazon think about that? And what else can you share with the audience on this topic? >> Yeah, as you've probably seen over the last few years, both from our work in Kubernetes and with things like Firecracker and more recently Bottlerocket, AWS gets deeply involved with open-source in a number of ways. We are involved heavily with a number of CNCF projects, whether it be containerd, whether it be things like Kubernetes itself, projects in the Kubernetes ecosystem, or the service mesh world with Envoy and with the containerd project. So where containerd fits in really well with AWS is in a project that we call firecracker-containerd. Effectively, for Fargate, as we move Fargate towards Firecracker, firecracker-containerd becomes the layer through which you run containers; it's effectively the equivalent of runC in a traditional Docker engine world. And, you know, one of the first things we did when Firecracker got rolled out was open-source the firecracker-containerd project. It's a Go project, and the idea was that it's a great way for people to build VM-like isolation and then build sort of these serverless container architectures like we want to do with Fargate. And, you know, I think Firecracker itself has been a great success.
You see, you know, folks like Libvirt integrating with Firecracker. I've seen a few other examples, sometimes unbeknownst to us, of people picking up Firecracker and using it for very, very interesting use cases, and not just on AWS, in other places as well. And we learned a lot from that; that's kind of why Bottlerocket was released the way it was. It is both a product and a project. Bottlerocket, the operating system, is an open-source project. It's on GitHub, it has all the build tooling, you can take it and do whatever you want with it. And then on the AWS side, we will build and publish Bottlerocket AMIs, Amazon Machine Images, we will support them on AWS, and there it's a product. But then Bottlerocket the project is something that anybody in the world who wants to run a minimal operating system can choose to pick up. And I think we've learned a lot from these experiences: how we deal with the community, how we work with other people who are interested in contributing. And you know, the Docker open-source pieces and Docker the company are both part of the growing open-source ecosystem around AWS, especially in the container world. So it's going to be very interesting. And I'll end with this: containerization has started impacting other parts of AWS as well. Our other services are being built, very often, on ECS and EKS, but they're also influencing how we think about what capabilities we need to build into the broader container ecosystem. >> Yeah, Deepak, you know, you mentioned that some of the learnings from Lambda have impacted the services you're doing on the containerization side. You know, we've been watching some of the blurring of the lines between the serverless world and the containerization world. You know, there are some open-source projects out there, the CNCF working on things, you know, what's the latest, as you see containerization and serverless, and where do you see them going forward? >> I always say that crystal balls are not my strong suit. But we hear customers, and customers often want the best of both worlds. What we see very often is that customers don't actually choose just Fargate or just Lambda, they'll choose both, where for different pieces of their architecture they may pick a different solution. And sometimes that's driven by what they know, sometimes it's driven by what fits their need. Some of the lines blur, but they're still quite different. Lambda, for example, is a very event-driven architecture; it is one process at a time, and it has all these event hooks into the rest of AWS that are hard to replicate. And if that's the world you want to live in or benefit from, you're going to use Lambda. If you're running long-running services, or you want a particular size that you don't get in Lambda, or you want to take a more traditional application and convert it into a more modern application, chances are you're starting on Fargate, and it fits in really well if you have an existing operational model that fits into it. So we see applications evolving very interestingly. It's one reason why, when we built our service mesh, we thought ahead about this. It is almost impossible that we will have a world that's 100% containers, 100% Lambda, or 100% EC2. It's going to be some mix of all of these. We have to think about it that way.
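A short aside on the Bottlerocket AMIs Deepak just mentioned: AWS publishes the image IDs through public SSM parameters, so a lookup might look like the sketch below. The exact parameter path, the variant and Kubernetes version in particular, is an assumption to verify against the Bottlerocket documentation.

```python
# Hypothetical sketch: look up a published Bottlerocket AMI via SSM.
# The parameter path below is an assumed example; confirm the real path
# (variant, Kubernetes version, architecture) in the Bottlerocket docs.
import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

param = ssm.get_parameter(
    Name="/aws/service/bottlerocket/aws-k8s-1.24/x86_64/latest/image_id"
)
print("Latest Bottlerocket AMI:", param["Parameter"]["Value"])
```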
And it's something that we constantly think about: how can we do things in a way that companies aren't forced to pick one way of doing it, and, "Oh, I'm going to build on Fargate," and then months later they're like, "Yeah, we should have probably done Lambda." And I think that is something we think a lot about, whether it's from a developer-experience side or whether it's from service meshes, which allow you to move back and forth across the mesh. And I think that is the area where you'll see us do a lot more going forward. >> Excellent, so the last question for you, Deepak, is just to give us a little bit as to what industry watchers should be looking at in container services going forward, the next 12 to 18 months? >> Yeah, so I think one of the great things of the last 18 months has been that, for the types of applications we see customers running, I don't think there's any bound to it. We see everything from people running microservices, or whatever you want to call decoupled services these days, but they are services in the end, to people doing a lot of batch processing, machine learning, and artificial intelligence work with containers. But I think where the biggest changes are going to come is, as companies mature, as companies make containers not just things that they build greenfield applications with but also start thinking about migrating legacy applications in much more volume, a few things are going to happen. Containers come with a lot of complexity right now. If you've seen my last two talks at re:Invent, along with David Richardson from the Lambda team, you'll hear that we talk a lot about the fact that we've made customers think about more things than they used to in the pre-container world. I think you'll see now that the early-adopter, techie part is done; cloud has adopted containers, and the next wave of mainstream users is coming in, so you'll see more abstractions come along as well, and you'll see more governance. I think service meshes have a huge role to play here: how identity works, how this fits into things like Control Tower and more sort of enterprise-focused tooling around how you put guardrails around your containerized applications. You'll see it go in two or three different directions. I think you'll see a lot more on the serverless side, just from the fact that so many customers start with Fargate; they're going to make us do more. You'll see a lot more on the ease of use, developer experience, production side, because we started off with the folks who like to tinker and now you're getting more and more customers that just want to run. And that's actually a place where Docker, the company and the project, have a lot to offer, because that's always been their differentiator. And then on the other side, you have the governance guardrails: how it's going to work in a compliant environment, how am I going to migrate all these applications over. So that work will keep going on, and you'll see more and more of that. So those are the three buckets I'll use; the world can surprise us and you might end up with something completely, radically different, but that seems like what we're hearing from our customers right now. >> Excellent, well, Deepak, always a pleasure to catch up with you. Thanks so much for joining us again on theCUBE. >> No, always a pleasure, Stu, and hopefully we get to do this again someday in person. >> Absolutely, I'm Stu Miniman, thanks as always for watching theCUBE. >> Deepak: Yep, thank you. (gentle music)
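As a footnote to Deepak's advice that understanding Docker build, run, and push is still a great idea, here is a minimal sketch of that loop using the Docker SDK for Python. The image tag, registry, and port mapping are made-up placeholders.

```python
# Hypothetical sketch of the build / run / push loop with the Docker SDK.
import docker

client = docker.from_env()

# Build an image from a Dockerfile in the current directory.
image, _ = client.images.build(path=".", tag="registry.example.com/myapp:latest")

# Run it locally as a quick sanity check.
container = client.containers.run(image.tags[0], detach=True, ports={"8080/tcp": 8080})
print(container.logs())
container.stop()

# Push it to a registry so an orchestrator such as ECS or EKS can pull it.
for line in client.images.push("registry.example.com/myapp", tag="latest",
                               stream=True, decode=True):
    print(line)
```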

Published Date : May 29 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amazon Web Services | ORGANIZATION | 0.99+
David Richardson | PERSON | 0.99+
Deepak Singh | PERSON | 0.99+
Deepak | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Seattle | LOCATION | 0.99+
2013 | DATE | 0.99+
November | DATE | 0.99+
Stu Miniman | PERSON | 0.99+
2020 | DATE | 0.99+
Lambda | TITLE | 0.99+
2014 | DATE | 0.99+
two | QUANTITY | 0.99+
Docker | ORGANIZATION | 0.99+
DockerCon | EVENT | 0.99+
2015 | DATE | 0.99+
12 | QUANTITY | 0.99+
18 months | QUANTITY | 0.99+
today | DATE | 0.99+
Today | DATE | 0.99+
Stu | PERSON | 0.99+
Docker Desktop | TITLE | 0.99+
both | QUANTITY | 0.99+
Docker | TITLE | 0.98+
Firecracker | TITLE | 0.98+
Docker Desktop | TITLE | 0.98+
Kubernetes | TITLE | 0.98+
ECS | TITLE | 0.98+
Fargate | ORGANIZATION | 0.98+
one reason | QUANTITY | 0.98+
100% | QUANTITY | 0.98+
three buckets | QUANTITY | 0.98+
500 different options | QUANTITY | 0.97+
first time | QUANTITY | 0.97+
one | QUANTITY | 0.97+
two pizza | QUANTITY | 0.97+
Libvirt | ORGANIZATION | 0.97+

Daniel Berg, IBM | KubeCon 2018


 

>> Narrator: Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Okay, welcome back everyone, this is live coverage here from theCUBE at KubeCon and CloudNativeCon, here in Seattle for the 2018 event. 8,000 people, up from 4,000 last year. I'm John Furrier with Stuart Miniman, my cohost. Our next guest is Daniel Berg, distinguished engineer at IBM Cloud Kubernetes Service. Daniel, great to have you on. >> Thank you. >> Thanks for joining us. Good to see you. I'll say you guys know a lot about Kubernetes. You've been using it for a while. >> Yes, very much. >> Bluemix, you guys did a lot of cloud, a lot of open source. What's going on with the service? Take a minute to explain your role, what you guys are doing, how it all fits into the big picture here. >> Yeah, so I'm the distinguished engineer over the architecture and everything around the Kubernetes Service. I'm backed by a crazy, wicked, awesome team. They are amazing. They're the real wizards behind the curtain, right? I'm basically just the curtain. But we've done a phenomenal amount of work on IKS. We've delivered it. We've delivered some amazing HA capabilities, highly reliable, but what's really great about it is the service that we provide to all of our customers. We're actually running all of IBM Cloud on it, so all of our services, the Watson services, the cloud database services, our Key Protect service, identity management, billing, all of it, it's all moving to containers and Kubernetes and it's running on our managed service. >> So just to make sure I get it all out there, I know we've talked to a lot of other folks at IBM, I want to make sure we table it: you guys are contributing heavily to the upstream. >> Daniel: Yes. >> As well as running your workload and other customers' workloads on Kubernetes within the IBM cloud. >> Unmodified, right? I mean, we're taking upstream and we're packaging it, and the key thing that we're doing is we're providing it as a managed service, with our extensions into it. But yeah, we're running it, and we've hit problems over the last 18, 20 months, right? There are lots of problems. >> Take us inside; people always wonder what happens when this reaches real scale. So what experiences, what can you share with us? >> So when you really start hitting real scale, real scale being like 500, 1,000, a couple thousand nodes, right, then you're hitting real scale there. And we're dealing with tens of thousands of clusters, right? You start hitting different pressure points inside of Kubernetes, things that most customers are not going to hit, and they're gnarly problems, right? They're really complicated problems. One of the most recent ones that we hit is scaling problems with CRDs. Now that we've been heavily promoting CRDs, customizing Kubernetes, which is a good thing, it starts to hit another pressure point that you then have to work through: scaling of Kubernetes, scaling of the master, dealing with scheduling problems. Once you start getting into these larger numbers, that's when you start hitting these pressure points, and yes, we are making changes, and then we're contributing those back up to the upstream.
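For readers less familiar with the CRDs Daniel mentions, the sketch below registers a toy CustomResourceDefinition with the Kubernetes Python client. The group, kind, and schema are invented purely for illustration and have nothing to do with IBM's own CRDs; it simply shows the kind of object whose proliferation created the scaling pressure points he describes.

```python
# Hypothetical sketch: create a minimal CRD with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

crd = client.V1CustomResourceDefinition(
    metadata=client.V1ObjectMeta(name="widgets.example.com"),
    spec=client.V1CustomResourceDefinitionSpec(
        group="example.com",
        scope="Namespaced",
        names=client.V1CustomResourceDefinitionNames(
            plural="widgets", singular="widget", kind="Widget"
        ),
        versions=[
            client.V1CustomResourceDefinitionVersion(
                name="v1",
                served=True,
                storage=True,
                schema=client.V1CustomResourceValidation(
                    open_apiv3_schema=client.V1JSONSchemaProps(
                        type="object",
                        x_kubernetes_preserve_unknown_fields=True,
                    )
                ),
            )
        ],
    ),
)

client.ApiextensionsV1Api().create_custom_resource_definition(crd)
```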
>> One of the things we've been hearing in the interviews here and obviously in the coverage is that the maturation of Kubernetes, great, check, you guys are pushing these pressure points, which is great cause you're actually using it. What are the key visibility points that you're seeing where value's being created, and two what're some of the key learnings that you guys have had? I mean, so you're starting to see some visibility around where people can have value in the stack. Well, or not stack, but in the open source and create value and then learnings that you guys have had. >> Right, right, right. I mean for us the key value here is first of all providing a certified Kubernetes platform, right? I mean, Kubernetes has matured. It has gotten better. It's very mature. You can run production workloads on it no doubt. We've got many many examples of it so providing a certified managed solution around that where customers can focus on their application and not so much the platform, highly valuable right? Because it's certified, they can code to Kubernetes. We always push our teams both internal and external focus on Kubernetes, focus on building a Kube native experience cause that's going to give you the best portability ability moving whether you're using IBM cloud or another cloud provider right? It's a fully certified platform for that. >> Dan, you know, it's one thing if you're building on that platform but what experience do you have of taking big applications and moving it on there? I remember a year or two ago it seemed like it was sexy to talk about lift and shift and most people understand it's like really you just can't take what you had and take advantage of it. You need to be, it might be part of the journey but I'm sure you've got a lot of experiences there. >> Yeah we've got, I mean, we've seen almost every type of workload now cause a lot of people were asking Well, what kind of workloads can you containerize? Can you move to Kubernetes? Based on what we've seen pretty much all of them can move so and we do see a lot of the whole lifT and shift and just put it on Kubernetes but they really don't get the value and we've seen some really crazy uses of Kubernetes where they're on Kubernetes but they're not really, like what I say Kube native. They're not adhering to the Kubernetes principles and practices and therefore they don't get the full value so they're on Kubernetes and they get some of the okay we're doing some health checking but they don't have the proper probes right? They don't have the proper scheduling hints. They don't have the proper quotas. They don't have the proper limits. So they're not properly using Kubernetes so therefore they don't get the full advantage out of it. So what we're seeing a lot though is that customers do that lift and shift, but ultimately they have to, they have to rewrite a lot of what they're doing. To get the most value, and this is true of cloud and cloud native, ultimately at the end of the day if you truly want to get the value of cloud and cloud native you're going to do a rewrite eventually and that will be full cloud native. You're going to take advantage of the APIs and you're going to follow the best practices and the concepts of the platform. 
>> Containers give you some luxury to play with workloads that you don't maybe have time to migrate over but this brings up the point of the question that we hear a lot and I want to get your thoughts on this because the world's getting educated very fast on cloud native and rearchitecting, replatforming, whatever word you want to use, reimagining their infrastructure. How do you see multicloud driving the investment or architectural thinking with customers? What're they, what're some of the things that you see that are important for 2019 as people are saying you know what? My IT is transforming, we know that, we're going to be a multicloud world. I've got to make investments. >> You definitely have to make those. >> What are those investments architecturally, how should they lay those out? What're your thoughts? >> So my thought there is ultimately, you've got focus on a standardized platform that you're going to use across those because multicloud it's here. It's here to stay whether it's just on premises and you're doing off premises or you're doing on premises and multiple cloud vendors and that's where everybody's going and it's going to be give it another six, 12 months. That's going to be the practice. That's going to be what everybody does. You're not one cloud provider, you're multiple. So standardization, community, massive. Do you have a community around that? You can't vendor lock in if you're going to be doing portability across all of these cloud providers. Standardization governance around the platform the certification so Kubernetes you have a certified process that you certify every version so you at least know I'm using a vendor that's certified. Right? I have some promise that my application's going to run on that. Now is that as simple as well I picked a certified Kubernetes and therefore I should be able to run my application. Not so simple. >> And operationally, they're running CICD, you got to run that over the top. >> You've got to have a common, yeah, You've got to have a common observability model across all of that, what you're logging, what're you're monitoring, what's your CICD process. You've got to have a common CICD process that's going to go across all of those cloud providers, right, all of your cloud environments. >> Dan, take us inside. How're we doing with security? It's one of those sort of choke points. Go back to containers when they first started through to Kubernetes. Are we doing well on security now and where do we need to go? >> Are we doing well on it? Yes we are. I think we're doing extremely well on security. Do we have room for improvement? Absolutely everybody does. I've just spent the last eight months doing compliance and compliance work. That's not necessarily security but it dips into it quite often right? Security is a central focus. Anybody doing public cloud, especially providers, we're highly focused on security and you've got to secure your platforms. I think with Kubernetes and providing first of all proper isolation and customers need to understand what levels of isolation am I getting? What levels of sharing am I getting? Are those well documented and I understand what my providers providing me. But the community's improving. Things that we're seeing around like Kubernetes and what they're doing with secrets and proper encryption, encryption, notary with the image repositories and everything. All that plays into providing a more secure platform so we're getting there, things are getting better. 
>> Well there was a recent vulnerability that just got patched rather fast. >> Daniel: There was. >> It seemed like it moved really quick. What do we learn from that? >> Well we've learned that Kubernetes itself is not perfect, right? Actually I would be a little bit concerned if we didn't find a security hole because then that means there's not enough adoption, where we just haven't found the problems. Yes we found a security hole. The thing is the community addressed it, communicated it, and all of the vendors provided a patch very quickly and many of them like with IKS we rolled out the patch to all of our clusters, all of our customers, they didn't have to do anything and I believe Google did the same thing so these are things that the community is improving, we're maturing and we're handling those security problems. >> Dan, talk about the flexibility that Kubernetes provides. Certainly you mentioned earlier the value that can be extracted if you do it properly. Some people like to roll their own Kubernetes or they want the managed service because it streamlines things a bit faster. When do I want management? When do I want to roll my own? Is there kind of a feel? Is it more of a staffing thing? Is it more scale? Is it more application, like financial services might want to roll their own? We're starting to maybe see a different industry. What's your take on this? >> Well obviously I'm going to be super biased on this. But my belief there is that I mean obviously if you're going to be doing on premises and you need a lot of flexibility. You need flexibility of the kernel you may need to roll your own right? Because at that point you can control and drive a lot of the flexibility in there, understanding that you take on the responsibility of deploying and managing and updating your platform, which means generally that's an investment you're going to make that takes away from your critical investment of your developers on your business so personally I would say first and foremost... >> It's a big investment. >> It's a massive investment. I mean look at what the vendor, look at IKS. I've got a large team. They live and breathe Kubernetes. Live and breathe every single release, test it, validate it, roll updates. We're experts at updating Kubernetes without any down time. That's a massive investment. Let the experts do it. Focus on your business. >> John: And that's where the manage piece shines. >> That's where the mange piece absolutely shines. >> Okay so the question about automation comes up. I want to get your thoughts on the future state of Kubernetes because you know we go down the cloud native devops model. We want to automate away things. >> Daniel: Yes. >> Kubernetes is that some differentiation but I don't want to manage clusters. I don't want to manage it. I want it automated. >> Daniel: Yeah. >> So is it automating faster? Is it going to be automated? What's your take on the automation component? When and where and how? >> Well, I mean through the manage services I mean it's cloud native. It's all API driven, CLIs. You've got one command and you're scaling up a cluster. You get a cluster with one command, you can go across multiple zones with one command. Your cluster needs to be updated you call one command and you go home. >> John: That sounds automated to me. >> I mean that's fully and that's the only way that we can scale that. We're talking about thousands of updates on a daily basis. We're talking about tens of thousands of clusters fully automated. 
A lot of people have been talking the past couple of weeks around this notion of, well, all containers might have security boundary issues, so let's put a VM around them. Is that maybe just more of a fix? Because why do I want to have a VM, or is it better to just keep it native? Is that a real conversation or is that FUD? >> I mean, it is a real conversation, because people are starting to understand what the proper isolation levels are for their clusters. My personal belief around that is that you really only need that level of isolation, those mini VMs, around your containers in certain cases. Running a single container in a single VM seems overkill to me. However, if you're running a multitenant cluster with untrusted content, you had better be taking extra precautions. First and foremost I would say don't do it, because you're adding risk, right? But if you're going to do it, yes, you might start looking at those types. But if you're running an isolated cluster with full isolation levels all the way down to the hardware, in a trusted environment, trusted meaning it's your organization, it's your code, I think it's overkill then. >> Future of Kubernetes, what happens next? People are hot on this. You've got service meshes, a lot of other goodness. People are really trying to keep up with the pace, a lot of change, and again a lot of education. But it's not a stack; I hear words like the Kubernetes stack, and the CNCF has a stack. So it's not necessarily a stack per se. >> Right, it's not. >> Clarify the language around what we're talking about here. What's a stack? What's not a stack? It's all services. >> Look at it this way. Kubernetes has done a phenomenal job as a project in the community to state exactly what it's trying to achieve, right? It is a platform. It is a platform for running cloud native applications. That is what it is, and it allows vendors to build on top of it. It allows customers to build on it, and it's not trying to grow larger than that. It's just trying to improve that overall platform, and that's what's fantastic about Kubernetes, because that allows us, when you see the stack, it's really cloud native: what pieces am I going to add to that awesome platform to make my life even better? Knative, Istio, a service mesh; I'm going to put that on because I'm evolving, I'm doing more microservices, I'm going to build that on top of it. Inside of IBM we did Cloud Foundry Enterprise Environment, CFEE, Cloud Foundry on Kubernetes. Why not, right? It's a perfect combination. It's just going up a level, and it's providing more usability and different prescriptive uses of Kubernetes, but Kubernetes is the platform. >> When I think about the composability of services, it's not a stack. It's Lego blocks. >> Daniel: Yeah, it's pieces. I'm using different pieces here, there, everywhere. >> All right, well, Daniel, thanks for coming on and sharing great insight. Congratulations on your success running major workloads within IBM for you guys and the customers. Again, it's just the beginning, Kubernetes is just the beginning. Congratulations. Here inside theCUBE we're breaking down all the action. Three days of live coverage. We're at day one at KubeCon and CloudNativeCon. We'll be right back with more coverage after this short break.

Published Date : Dec 12 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Stuart Miniman | PERSON | 0.99+
Daniel | PERSON | 0.99+
Daniel Berg | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
John | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Seattle | LOCATION | 0.99+
Three days | QUANTITY | 0.99+
Dan | PERSON | 0.99+
2019 | DATE | 0.99+
One | QUANTITY | 0.99+
one command | QUANTITY | 0.99+
8,000 people | QUANTITY | 0.99+
First | QUANTITY | 0.99+
KubeCon | EVENT | 0.99+
500 | QUANTITY | 0.99+
two | QUANTITY | 0.99+
1,000 | QUANTITY | 0.98+
last year | DATE | 0.98+
both | QUANTITY | 0.98+
CloudNativeCon | EVENT | 0.98+
IKS | ORGANIZATION | 0.98+
IBM Cloud Kubernetes | ORGANIZATION | 0.97+
tens of thousands of clusters | QUANTITY | 0.97+
Kubernetes | TITLE | 0.96+
Seattle, Washington | LOCATION | 0.96+
one | QUANTITY | 0.96+
first | QUANTITY | 0.96+
CloudNativeCon North America 2018 | EVENT | 0.94+
couple thousand nodes | QUANTITY | 0.93+
Kubernetes | ORGANIZATION | 0.93+
one thing | QUANTITY | 0.91+
past couple of weeks | DATE | 0.89+
single VM | QUANTITY | 0.89+
2018 | DATE | 0.87+
two ago | DATE | 0.87+
KubeCon 2018 | EVENT | 0.87+
single container | QUANTITY | 0.85+
six, 12 months | QUANTITY | 0.85+
20 months | QUANTITY | 0.84+
CNCM | ORGANIZATION | 0.84+
a year or | DATE | 0.83+
day one | QUANTITY | 0.82+
18 | QUANTITY | 0.79+
thousands | QUANTITY | 0.78+
about tens of thousands | QUANTITY | 0.78+
4,000 | QUANTITY | 0.77+
one cloud | QUANTITY | 0.77+
last eight months | DATE | 0.76+
Kubernetes Service | ORGANIZATION | 0.75+

Tarun Thakur, Rubrik Datos IO | CUBEConversation, Sept 2018


 

(uplifting music) >> Hello and welcome to this special CUBE Conversation. I'm John Furrier, here in Palo Alto at theCUBE studios for a special conversation with Tarun Thaker, general manager of Datos IO, part of Rubrik. Last time I interviewed you, you were the CEO. You guys got acquired, congratulations. >> Thank you, thank you, John. Very happy to be here. >> How'd that go? How'd the acquisition go? >> Excellent, excellent. I Met Bipul about August of last year and it was sort of perfect marriage waiting to happen. We were both going after the broader irresistible opportunity of data management. >> I've enjoyed our previous conversations because you guys were a hot, growing start up and then you look at Rubrik, if you look at the success that they've been having, just the growth in data protection, the growth in cloud, you guys were on with from the beginning with Datos. Now you got a management team, you got all this growth, it is pretty fun to watch and I'll see you locally in Palo Alto so it's been interesting to see you guys. Huge growth opportunity. Cloud people are realizing that this is not a side decision. >> No. >> It's got to be done centrally. The customers are re architecting to be cloud native. The on premises, we saw big industry movements happening with Amazon at VMworld announcing RDS on VMware on premises. >> Correct. >> Which validates that the enterprises want to have a cloud operation, both on premise. >> Yes. >> And in cloud. How has this shaped you guys? You have big news, but this is a big trend. >> No, absolutely. John, I think you rightly said, the pace of innovation at Rubrik and the pace of market adoption is beyond everybody's imagination, right? When I said that it was sort of a marriage waiting to be happened, is if you look at the data management tam it's close to 50 billion dollars, right? And you need to build a portfolio of products, right? You need to sort of think about the classical data center applications because on prem is still there and on premises is still a big part of spending. But if you look at where enterprises are racing to the cloud. They're racing given digital transformation. They're racing customer 360 experience. Every organization, whether it be financials, maybe healthcare, maybe commerce, wants to get closer to the end customers, right? And if you look underneath that macro trend, it's all this cloud native space. Whether it be Kubernetes and Docker based containers or it could be RDS which is natively built in the cloud or it could be, hey I want to now run Oracle in the cloud, right? Once you start thinking of this re architecting stack being built in the cloud, enterprises will not leap and spend those top dollars that they spend on prem if they don't get a true, durable data management stack. >> And one of the things I really was impressed when you Datos, now it's part of Rubrik, is you were cloud up and down the stack. You were early on cloud, you guys thought like cloud native. Your operations was very agile. >> Thank you. >> Everything about you, beyond the product, was cloud. This is a critical success now for companies. They have to not just do cloud with product. >> Correct. >> Their operational impact has to be adjusted, how they do business, the supply chains, the value chains. These things are changing. >> The licensing, the pricing. >> This is the new model. >> Yes. >> This is where the data comes in. This is where the support comes in. You guys have some hard news, Datos IO 3.0. What's the big news? 
John, as you said, we've been very squarely focused on what we call the NoSQL big data market, right? If you look at, you know, you talked about Amazon RDS, if you go to the Amazon business, the Amazon database business is about four billion dollars today, right? Just think about that. If you take a guess at the number one database in Amazon, natively, it's not Oracle, it's MySQL. Number two, it's not SQL Server, it's MongoDB. So if you look at the cloud native stack, we made this observation four years ago, as you said, that underneath this it was all NoSQL. We really found that blue ocean, as we call it, the greenfield opportunity, and went to build the next Veritas for that space. You know, with 3.0, Bipul likes to call it, in keeping with his leadership, consolidate your gains. Once you find an island full of gold coins, you don't leave that island. (laughing) You go double down, triple down, right? You don't want to distract your focus, so 3.0 is all about us focusing. The announcements are really rooted around three vectors, as we call it. Number one, if you look at why Rubrik was so successful, you know, it went into a pretty gorilla market of backup, but why Rubrik has been successful, at the heart, is this ease of use and simplicity. And we wanted to bring that culture into not only the Datos team but also into our product, right? So that was simplicity. Large-scale distributed systems are difficult to deploy and manage, so that was the first part. The second part was all about, you know, if you look at Mongo: Mongo has gone from zero to four billion dollars in less than 10 years. Every Fortune 2000, 500, Global 2000 customer is using Mongo in some critical way. >> Why is that? I mean, people were always, personally we love MongoDB, but people were predicting their demise every year. "Oh, it's never going to scale," I've heard people say, and again, this is the competition. >> Correct. >> We know who they are. But why is the success there? Obviously NoSQL and unstructured data are a big tsunami and there's more data coming in than ever before. Why are they successful? >> Excellent. That's why I enjoy being here; you go to the why, not the what and the how. And the why, the reason MongoDB is so successful, is application developers. We've all read this book: developers are the kingmakers of IT, not your IT and storage admins, right? And Mongo found that niche, that if I can go build a database which is easier for an application developer, I will build a company. And that was the trend they built the company around. Fast forward, its stock is trading at $80 apiece. >> Yeah. >> To four billion plus in market. >> Yeah, and I think the other thing I would just add, just riffing on that, is that cloud helps. Because where MongoDB horizontally scales-- >> Elastic. >> The old critics were saying, thinking vertical scale. >> Correct. >> Cloud really helps that. >> Absolutely, absolutely. Cloud is elastic resources, right? You turn it up and you turn it down. What we found, as you know, in the last two to three years' journey of 1.0 and 2.0, was that we were having a great reception with MongoDB deployments, and again, consolidate your gains towards Mongo, so that was the second vector: making Datos scale out for MongoDB deployments. Number three, which is really my favorite, was really around multi-cloud; it's here, right? No enterprise is going to really bet only on one form of Amazon or one form of Google Cloud; they're going to bet across these multiple clouds, right?
We were always on Amazon and Google. We now announced Datos natively available on Azure, so now if you have enterprise customers doing NoSQL applications in Azure, you can protect that data natively in that cloud, being the Azure cloud. >> So which clouds are you guys supporting now with 3.0? Can you just give the list? >> Yep, yep. We've supported Amazon, AWS, from very early days. The majority of customers are on Amazon. Number two is Google Cloud; we have a great relationship with the Google Cloud team, very entrepreneurial people also. And number three is Azure. The fourth, which is sort of a hidden Trojan horse, is Oracle Cloud. We also announced Datos on Oracle Cloud. Why, you may ask? Because if you look at, again, NoSQL and DataStax Cassandra, we saw a very healthy ecosystem building for Cassandra on Oracle Cloud, for obvious reasons. It was very good for us to follow that tailwind. >> Interestingly, I was just at Oracle yesterday for a briefing, and I'm not going to reveal any confidential information, because it's all on the record: they're heavily getting into cloud native. They have to. >> They have to. There's no choice. They cannot be tiptoeing; they have to go all in. >> And microservices are a big thing. This is something that you guys now have focus on. Talk about the microservices. How does that fit in? Because you look at Kubernetes; Kubernetes is becoming that kind of TCP/IP moment for the cloud world, the way TCP/IP powered networking and created interworking, the inter-cloud or the multi-cloud relationship. >> Correct, all the cloud native. >> Kubernetes is becoming that core catalyst. You've got containers on one side, service meshes on the other. This brings in the data equation, stateful applications, stateless applications; this is going to change the game for developers. >> Absolutely. >> And now you have a backup equation: how do you know what to back up? >> Correct. >> What's the data? >> Correct. >> What's the impact? >> Yeah. So the announcement that we made, just to cover that quickly, is that we were seeing that trend. If you look at these developers or these DBAs or database admins who are going to the cloud, racing to the cloud, they're not deploying OVA files. They're not deploying, as you said, IP network files, right? They want to deploy these as containerized applications. So running Mongo as a Docker container, or Cassandra as a Docker container, or Couch as a Docker container, and you cannot go to them as a data management product with an age-old mechanism of various bits and bytes. So we announced two things. Datos is now available as a Docker container, so you can just get a Docker file and run it your way. And number two is we can also protect your NoSQL applications that are Dockerized, that are containerized, right? And that's really our first step into what you're seeing with Amazon EKS, right, Elastic Kubernetes Service. You saw NetApp announced yesterday the acquisition of a Kubernetes-as-a-service company, right? And so our next step, now that we've enabled Datos as a Docker container, is how do we bring Kubernetes as a service on top of Docker, because deploying, orchestrating, and managing Docker by itself is really still a challenge. >> Yeah, containers are the stepping stone to orchestration. >> Correct, correct. >> You need Kubernetes to orchestrate the containers. >> That is correct, that is correct. >> Alright, so summarize the announcements. If you had to boil this down, what's the 3.0?
>> So if I were to sort come back and give you sort of the headline message, it is really our release to go crack open into the Fortune 500, Global 2000 enterprises. So if you remember, 60% of our customers are already what we call it internally, R2K, global 2000 customers so Datos, 60% of our customers who are large Fortune 500 customers. >> They're running mission critical? >> They're mission critical, no support applications. >> So you're supporting mission critical applications? >> Absolutely, some of our biggest customers, ACL Worldwide, one of the largest financial leading organization. Home Depot, that we have talked about in the past, right? Palo Alto Networks, the worlds largest cloud security networking company, right? If you look at these organizations they are running cloud native applications today. And so this release is really our double down into cracking open the Global 2000 enterprises and really staying focused at that market. >> And multi cloud is critical for you guys? >> Oh, absolutely. Any enterprise software company without, especially a data company, right? At the end of the day, it's all about data. >> Tarun, talk about why multi cloud, at some point. I'd love to get your expert opinion on this because you know Kubernetes, you see what's coming around the corner with service meshes and all this cool stuff because it impacts the infrastructure. With multi cloud, certainly what everyone's asking about, hybrid and multi cloud. Why is multi cloud important? What's the impact of multi cloud? >> Great question, John. You know, I think it's rooted in sort of three key reasons, right? Number one, if you look at what enterprises did back in the day, right history repeats itself, right? They never betted only on IBM servers. They bought Dell servers, they bought HP servers. Never anybody betted only on ESX as the virtual hypervisor platform. They betted on KBM and others, right? Similarly if you look at these enterprises, the ones that we talked about, Palo Alto Networks, they're going to run some of the applications natively on Amazon but they want DR in Google Cloud so think about a business use case being across clouds. So that's the one, right? I want to run some applications in Amazon because of elasticity, ease of use, orchestration but I want to keep my DR in a different site but I don't want to a colo, right? I want to do another cloud, so that's one. Number two is some of your application developers are, you know, in different regions, right? You want to enable sort of different cloud sites for them, right? So it's just locality, would be more of a reason and number three which is actually, probably I think the most important, is if you look at Amazon and what they have done with the book business, what they've done with others, e-commerce organizations like eBay, like Home Depot, like Foot Locker, they're very wary of betting the farm on a retail organization. Fundamentally Amazon is a retail organization, right? So they will go back, their use cases on Google cloud, they'll go back their use cases on Azure cloud so it's like vertical. Which vertical is prone or more applicable to a particular cloud, if that make sense? >> And so having multi vendors been around for a while in the enterprise, so multi vendor just translates to multi cloud? >> There you go, yes, yes. >> How about what's goin' on with you guys? Next week is Microsoft Ignite, their big cloud show from Microsoft. You guys have a relationship with them. In November you announced a partnership. 
>> Correct. >> Rubrik and you guys are doing that, so what's going on with them? You're co-selling together? Are they jointly developing? What's the update? >> Ignite, so Microsoft, I'll give an update on Microsoft and then Ignite. As you know, John Thompson is on our board, and fundamentally, with the product that we have built and the Azure team working with us, we have come to realize that it's a great product to bring data to the cloud. >> Right. >> And we have a very good, strong product relationship with Microsoft. We have a co-sell, meaning their reps can sell Rubrik and get quota retirement. That's massive, right? Think for both the companies, right? And companies don't make those decisions, John, lightly. Those decisions are made very strictly. >> Quota relief is great. >> It's huge. >> It's a sales force for you guys. >> Exactly, yep. For us, specifically on Ignite, with this release we announced Azure. We worked very closely with the Azure storage division. When we pitched them, hey, Datos is now available on Azure, the respect that we got was amazing. We had a Microsoft quote in our press release. At Ignite next week we have dedicated sessions talking about NoSQL backups on Microsoft, natively being protected on Azure Cloud. It's good for them, good for us, huge announcement next week. >> That's good. You guys have done the work in the cloud, and it's interesting, early cloud adopters get some dividends on that. Just to summarize the chat here, if you had to talk to a customer who's watching or interested and sees all this competition out there, a lot of noise in the industry, how would you summarize your value proposition? What's the value that you're bringing to the table? How do you guys compete on that value? Why Datos? >> Perfect, thank you. It's, again, simple, in order, one to three. Number one, we're helping you accelerate the journey to the cloud. Right, you want to go to the cloud; we understand Fortune 500 enterprises want to race to the cloud. You don't want to race without protection, without data management. It's your data, it needs to be in your control, so that's one. We're helping you race to the cloud, yet keeping your data in your hands. Number two, you are buying truly cloud native software, not software that was built 20 years ago and shrink-wrapped into the cloud. This is a product built on technologies which are cloud native, right? Elasticity, you can scale up Datos, you can scale down Datos, just like Amazon resources, so you're truly buying a data management product rooted in elastic technologies. And number three, you know, if you really look at cloud, cloud to you as a customer is all about, hey, can I build cloud native, not lift and shift. And as you're adopting these new technologies, you don't want to not think about protection, management, DR, those critical business use cases. >> And thinking differently about cloud operations is critical. Great to see you, Tarun. Thanks for coming on and sharing the news on Datos 3.0, appreciate it. I'm John Furrier, here in Palo Alto Studios with the general manager of Datos IO, now part of Rubrik, formerly the CEO of Datos, Tarun Thakur, thanks for watching. I'm John Furrier, thanks for watching theCUBE. (uplifting music)
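To ground the pattern Tarun describes in the transcript, where DBAs deploy Mongo, Cassandra, or Couch as Docker containers and then need their data protected, here is a minimal, illustrative sketch. It assumes the Docker SDK for Python (pip install docker), the official cassandra image, and Cassandra's own nodetool snapshot as a deliberately simplified stand-in for what a purpose-built product like Datos does (cluster-consistent, incremental, cloud-native backups); it is not Datos's actual implementation.

import docker

# Connect to the local Docker daemon (requires Docker running and the Docker SDK for Python).
client = docker.from_env()

# Run Cassandra as a container, the way the "racing to the cloud" DBAs deploy it.
cassandra = client.containers.run(
    "cassandra:3.11",          # official Cassandra image on Docker Hub
    name="cassandra-demo",
    detach=True,
    ports={"9042/tcp": 9042},  # expose the CQL port
)

# Once the node is up and holds data, take a snapshot inside the container.
# nodetool snapshot is a crude stand-in for a real data-management product.
exit_code, output = cassandra.exec_run("nodetool snapshot --tag demo-backup")
print(exit_code, output.decode())

Note that the snapshot lands on the container's own filesystem, so a data-management layer still has to catalog it and move it to durable cloud object storage; that gap is what the containerized Datos offering discussed above is aimed at.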

Published Date: Sep 20, 2018

