Keynote Enabling Business and Developer Success | Open Cloud Innovations
(upbeat music)
>> Hello, and welcome to this startup showcase. It's great to be here and talk about some of the innovations we are doing at AWS, how we work with our partner community, and especially our open source partners. My name is Deepak Singh. I run our compute services organization, which is a very vague way of saying that I run a number of things that are connected together through compute. Very specifically, I run our container services organization, so for those of you who are into containers: ECS, EKS, Fargate, ECR, App Runner, those are all teams within my org. I also run the Amazon Linux and BottleRocket teams, so anything AWS does with Linux, both externally and internally, as well as our high performance computing team. And, perhaps very relevant to this discussion, I run the Amazon open source program office. I've been at AWS for over 13 years, almost 14, involved with compute in various ways, including EC2. That has given me a vantage point for seeing how our customers use the services we build for them, how they leverage various partner solutions, and, along the way, how AWS itself has gotten involved with open source. I'll try to talk to you about some of those factors and how they impact how you consume our services. So why don't we get started?
For many of you, there are two ways to look at AWS and open source, and Amazon in general. One is the number of contributors we have, and the other is the number of repositories we contribute to. Those are just a couple of measures, and there are people I work with on a regular basis who will remind you that they are not perfect measures; sometimes you could contribute to just one thing and have outsized impact because of the nature of that thing. But be that as it may, increasingly we look at different ways in which we can help contribute to and enhance open source, because we consume a lot of it as well. I'll talk about it very specifically from the space I work in, the container space in particular, where we've worked a lot with people in the Kubernetes community and the broader CNCF community, as well as with small projects our customers might have gotten started with. One example I like talking about is Argo CD from Intuit. We were very actively involved with helping them figure out what to do with it, and it was great to see how Intuit and others came together to think about GitOps at the Kubernetes level. And while those are their projects, we've always been involved with them. So we try to figure out what's important to our customers and how we can help, and then take action from there.
Let's talk about that a little bit more. Here are some examples of the kinds of open source projects that Amazon and AWS contribute to. They range from OpenJDK (we even have our own distribution of Java now, the Corretto open source project) to projects like Rust, where we are very active in the Rust Foundation, in a leadership role as well, and the Robot Operating System, just to pick a few. We collaborate with Facebook and are actively involved with the PyTorch project, and there are many others. You can see all the logos here; we participate either because the projects are important to us as AWS, in the services that we run, or because they're important to our customers, in the services they consume or the open source projects they care about.
How we make those decisions often depends on the importance of that particular project at that point in time, how much impact it is having for AWS customers, or sometimes the feeling that us contributing to that project is super critical because it helps us build more robust services. Let me give a somewhat different example: you may have heard us talk about our next generation of Amazon Linux, Amazon Linux 2022, which is based on Fedora as its upstream. One of the reasons we made this decision was that it allows us to go and participate in the Fedora project and make sure the upstream project is robust and stays robust. What that ends up meaning is that Amazon Linux 2022 will be a robust operating system with the kinds of capabilities our customers are asking for. That's just one example of how we think about it. Similarly, the Python Software Foundation is something we work with very closely because so many of our customers use Python (I happen to be a Ruby developer myself, but lots of our customers use Python). Helping the Python project be robust by making sure PyPI is available to everybody is something we provide credits for and support in other ways. So it's not just code; it can mean many different ways of contributing. But in the end, code and operations are where we hang our hat.
Good examples of this are projects we create and open source because it makes sense to open source some of the core primitives or foundations that are part of our own services. There are many examples, whether things we open source or things we contribute to, and I'll talk about both; I've picked two that are near and dear to my heart and that I like talking about. The first of these is Firecracker. Many of you have heard about it. Firecracker, for those of you who don't know, is a very lightweight virtual machine manager which allows you to run micro VMs. Why was this important? Many years ago, when we started Lambda and, quite honestly, Fargate (and Fargate still runs quite a bit in that mode), we used to have to run on VMs like everything else, and finding the right VM for the size of task or the size of function somebody asks for requires us to provision capacity ahead of time. It also wastes a lot of capacity, because a Lambda function is small; even if you find the smallest VM possible, that can be challenging, and a lot of resources end up being wasted. And VMs start at a particular speed, because they have to do a whole bunch of things before the operating system and the virtual machine spin up. We asked ourselves: can we do better, and come up with something that allows us to create right-sized, very lightweight, very fast-booting virtual machines? Micro virtual machines, as we ended up calling them. That's what led to Firecracker, and we open sourced the project. Today Firecracker is used not just by AWS Lambda and Fargate but by a number of other folks: there are companies like Fly.io using it, and we know of people using Firecracker to run Kubernetes on premises on bare metal, as an example. So we've seen a lot of other folks embrace it and use it as the foundation for building their own serverless services and their own container services.
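To make the micro VM idea a little more concrete, here is a rough sketch of what driving Firecracker looks like from code. Firecracker exposes a REST API over a Unix socket; the socket path, kernel image, rootfs file, and the use of the third-party requests-unixsocket helper below are illustrative assumptions rather than anything from the talk, so check the Firecracker documentation before relying on the details.

```python
# pip install requests-unixsocket
# Assumes a Firecracker process was started separately, e.g.:
#   firecracker --api-sock /tmp/firecracker.socket
import requests_unixsocket

session = requests_unixsocket.Session()
api = "http+unix://%2Ftmp%2Ffirecracker.socket"  # URL-encoded socket path

# Describe the guest machine: 1 vCPU, 128 MiB of memory.
session.put(f"{api}/machine-config",
            json={"vcpu_count": 1, "mem_size_mib": 128})

# Point the microVM at an uncompressed kernel and a root filesystem image.
session.put(f"{api}/boot-source",
            json={"kernel_image_path": "vmlinux",
                  "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"})
session.put(f"{api}/drives/rootfs",
            json={"drive_id": "rootfs",
                  "path_on_host": "rootfs.ext4",
                  "is_root_device": True,
                  "is_read_only": False})

# Boot the microVM.
session.put(f"{api}/actions", json={"action_type": "InstanceStart"})
```

The point of the sketch is the shape of the workflow: a tiny amount of configuration followed by a start action, which is what makes booting large numbers of right-sized micro VMs for Lambda-style workloads practical.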
Coming back to Firecracker: we think there's a lot of value and learnings we can bring to the table, because we get the experience of operating at scale, but other people bring things to the table too, because they may have specific requirements that we may not find as important from an AWS perspective. So that's Firecracker. Containerd is an example of a project where we contribute because we feel it's fundamentally important to us. We've been involved with containerd from the beginning. Today we have a whole team that does nothing else but contribute to containerd, because containerd underlies Fargate, it underlies our Kubernetes offerings, and it's increasingly being used by customers directly, where they're running containerd instead of a full-on Docker or similar container engine. What that has allowed us to do is focus on what's important, so that we can operate containerd at scale, keep it robust and secure, and add capabilities to it that AWS customers need, often manifested through Fargate and Kubernetes. In the end, it's a win-win for everybody: it makes containerd better, and if you want to use containerd for yourself on AWS, you still benefit from all the work we're doing. The decision we took was that, since it's so important to us and our customers, we wanted a team that lived and breathed containerd and made sure it stays super robust. And there are many, many examples like that, where we end up participating either by adopting a project that exists or by open sourcing our own.
Here are some examples of open source projects that we have created, from an AWS and Amazon perspective, and there are quite a few. When I was looking at this list I was quite surprised; well, not quite surprised, I've seen the reports before, but every time I do, I have to recount and say that's a lot more than one would have thought, even though I've been looking at it for such a long time. Examples in my world alone are things like the work we do with Amazon Linux and BottleRocket, which is a container host operating system that's been open sourced from day one. Firecracker we've talked about. We have a project called AWS ParallelCluster, which allows you to spin up high performance computing clusters on AWS using the kinds of schedulers you may be used to, like Slurm, and that's an open source project. We have plenty of open source projects in the web development space and in the security space, and more recently things like the Open 3D Engine, which we are very excited about and which we open sourced a few months ago. So there are a number of these projects that cover everything from tooling to developer and application frameworks, all the way to database, analytics, and machine learning. And you'll notice that in a few areas, containers as an example, machine learning as an example, our default is to go with the open source option: where we can open source, where it makes sense for us to do so, and where we feel the broader community might benefit from it, that's our default stance.
The CNCF, the Cloud Native Computing Foundation, is something we've been involved with quite a bit. We contribute to Kubernetes, we contribute to Envoy, and I talked about containerd a bit. We've also contributed projects like cdk8s, which marries the AWS Cloud Development Kit with Kubernetes and is now a CNCF sandbox project. Those are some of the areas, and CNCF is such a wide surface area.
We don't contribute to everything, but we definitely participate actively in the CNCF, with projects like etcd that are critical to EKS for us. We are very, very active in how those projects evolve, but we also try to see which projects are important to our customers who are running Kubernetes on AWS, maybe by themselves or through some other provider. Envoy is a good example. Kubernetes itself is a good example, because in the end we want to make sure that people running Kubernetes on AWS, even if they are not using our services, are successful, and that we can help them or work on the projects that are important to them. That's how we think about the world, and it's worked pretty well for us. We've done a bunch of work on the Kubernetes side to make sure we can integrate and solve customer problems: everything from the work we've done with Graviton, our Arm processor, to a virtual GPU plugin that allows you to share NVIDIA GPU resources, to the Elastic Fabric Adapter, our network device for high performance computing, all of which you can use with Kubernetes on AWS, along with things that directly impact Kubernetes customers, like the cdk8s project I talked about, the work we do on the container networking interface, and the AWS Controllers for Kubernetes, an open source project that allows you to manage other AWS services directly from Kubernetes clusters. Again, you'll notice I said Kubernetes, not EKS, which is our managed Kubernetes service, because we want you to be successful with Kubernetes on AWS whether you're using our managed service, running your own, or using some third-party service. Similarly, we worked with Prometheus; we now have a managed Prometheus service. And at re:Invent last year we announced the general availability of this thing called Karpenter, a provisioning and auto-scaling engine for Kubernetes, which is also an open source project. Here's the beauty of Karpenter: you don't have to be using EKS to use it. Anyone running Kubernetes on AWS can leverage it. We focus on the AWS provider, but we've built it in such a way that if you wanted to take Karpenter and implement it on premises or on another cloud provider, that would be completely okay. That's how it's designed, and we anticipated that people may want to do that.
I talked a little bit about BottleRocket. It's our Linux-based open source operating system, and what we've done with BottleRocket is focus on security and the needs of customers who want to run orchestrated containers; it is very focused on that problem. So, for example, BottleRocket only has the essential software needed to run containers. SELinux (I just noticed the slide writes it a bit differently, but I'm sure Linus Torvalds will be pretty happy to see it) is enabled by default. We use things like dm-verity, and it has a read-only root file system with no shell; you can't SSH into it or install one if you wanted to. We also allow you to create different build types, variants as we call them, and you can create a variant for a non-AWS environment as well; if you have your own homegrown container orchestrator, you can create a variant for that. It's designed to be used in many different contexts, and all of that is open sourced. And then we use The Update Framework to publish to a secure repository and give it a transactional way of updating the software. That's something we didn't invent, but we have embraced it wholeheartedly.
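To make Karpenter a bit more concrete, here is a rough sketch of registering a provisioning policy with it from Python. The karpenter.sh/v1alpha5 group, the Provisioner kind, and the field names reflect roughly how the CRD looked around its general availability launch and are assumptions to verify against the current Karpenter documentation; the cluster itself, and a Karpenter controller already installed in it, are assumed as well.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl already points at a cluster running Karpenter

# A minimal Provisioner: let Karpenter pick arm64 or amd64 capacity,
# cap the total CPU it may launch, and reclaim nodes that sit empty for 30 seconds.
provisioner = {
    "apiVersion": "karpenter.sh/v1alpha5",
    "kind": "Provisioner",
    "metadata": {"name": "default"},
    "spec": {
        "requirements": [
            {"key": "kubernetes.io/arch", "operator": "In", "values": ["arm64", "amd64"]},
        ],
        "limits": {"resources": {"cpu": "1000"}},
        "ttlSecondsAfterEmpty": 30,
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh",
    version="v1alpha5",
    plural="provisioners",
    body=provisioner,
)
```

The same manifest could just as well be applied with kubectl; the point is that the provisioning policy is one small declarative object rather than a set of pre-sized node groups.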
So BottleRocket is completely open source, and we have partners like Aqua, who develop security tools for containers. For them, something like BottleRocket is a natural partnership: people are running a container host operating system, and they can use Aqua's tooling to make sure they have a secure end-to-end environment. And we see many more examples like that.
You may think serverless at AWS is all about AWS proprietary technology, because Lambda is a proprietary service. But if you peek under the covers, that's not necessarily true. Lambda runs on top of Firecracker, as we've talked about, and Firecracker is an open source project, so the foundation of Lambda, in many ways, is open source. What that also allows people to do, because Lambda runs at such extreme scale, is benefit from one of the things Firecracker is really good at: running at scale. If you want to build your own Firecracker-based at-scale service, you can have confidence that, as long as your workload fits the design parameters of Firecracker, the battle hardening and robustness are being proven out day to day by services running at scale, like Lambda and Fargate.
For those of you who don't know serverless: in the end, our goal with serverless is to make sure you don't have to think about all the infrastructure your applications run on and can focus on business logic as much as possible. That's how we think about it. And serverless has become its own, quote-unquote, sort of environment. The number of partners and open source frameworks and tools that have spun up around serverless, by which I mostly mean Lambda, API Gateway, and services like that, is pretty high. There are open source projects like Zappa and the Serverless Framework; so many have come up that make it easier for our customers to consume AWS services like Lambda and API Gateway. We've also done some of our own tooling and frameworks: the Serverless Application Model, AWS Chalice if you're a Python developer, and open source runtimes for Lambda, Rust among other options. There are a number of tools that we have open sourced. In general, you'll find that the tooling and runtimes we build will almost always be open sourced, and we will often take some of the guts of the things we use to build our systems, like Firecracker, and open source them, while the control planes of AWS services may end up staying proprietary, which is the case with Lambda.
Increasingly, our customers build their applications and leverage the broader AWS Partner Network. The AWS Partner Network is a network of trusted partners we've built; when you go to the APN website and find a partner, you know that partner meets a certain set of criteria that AWS has developed, and you can rely on them for your own business. So whether you're a tiny business that needs some function fulfilled that you don't have the resources for, or a large enterprise that wants to keep leveraging in the cloud the applications you've been using on premises for a long time, you can go to the APN, find that partner, and bring their solution in as part of your cloud infrastructure. It could even be a systems integrator, for example, to help you solve a specific development problem. Increasingly, one of the things we like to do is work with a partner community that is full of open source providers.
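As an illustration of the open source serverless tooling being described, this is roughly what a minimal AWS Chalice app looks like. The app and function names are made up for the example, and Chalice's own CLI (chalice deploy) is what turns it into Lambda functions and an API Gateway endpoint.

```python
# pip install chalice
from chalice import Chalice

app = Chalice(app_name="hello-serverless")

@app.route("/")
def index():
    # Chalice turns this handler into a Lambda function behind API Gateway.
    return {"hello": "world"}

@app.lambda_function()
def worker(event, context):
    # A plain Lambda function with no HTTP route attached.
    return {"processed": True}
```

Frameworks like Zappa, the Serverless Framework, and SAM play in the same space: describe the functions and events in a small amount of code or configuration, and let the tooling handle packaging and deployment.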
There are so many great ones (and we have a panel discussion with many of these partners as well) who make it easier for you to build applications on AWS, all open source or built on open source, but I'd like to call out a couple of them. The first is Tidelift. Tidelift, for those of you who don't know, is a company that provides SaaS-based tools to curate, track, and manage open source catalogs. They have a whole network of maintainers and providers. If you're an independent open source developer or a small team, you should probably get to know Tidelift. They provide benefits and capabilities as a developer and maintainer that are pretty unique and really helpful, and I've seen a number of people in our open source community embrace Tidelift, quite honestly, even before they were part of the APN. But as part of the partner network, they get to participate in things like ISV Accelerate, and they're officially an advanced tier partner because they migrated their SaaS offering onto AWS. In the end, if you're part of the open source supply chain, whether as a maintainer or a developer, I would recommend working with Tidelift, because their goal is making all of you who are developing open source solutions, especially on AWS, more successful. That's why I enjoy this partnership with them, and I'm looking to do a lot more, because as a company we want to make sure open source developers don't feel unsupported; all you have to do is read various forums to see how challenging it often is to be a maintainer, especially of a small project. So helping with licensing and license management, security identification and remediation, helping these maintainers, is a big part of what Tidelift does, and it was great to see them join the partner network.
Another partner I'd like to call out is Sysdig. I actually got introduced to them many years ago when they first launched, and one of the things that happened is they were super interested in some of our serverless stuff, and we had been trying to figure out how we could work together, because a lot of our customers are interested in the capabilities Sysdig provides. Over the last few years we've found a number of areas where we can collaborate. I know Sysdig primarily as a security company: people use Sysdig to secure their builds, do threat detection and response, continuously validate their posture, get a continuous analytics signal on how they're doing, and monitor performance. At the end of it, it's a SaaS platform, and they have a very nice open source security stack. The one I'm most familiar with, and I think most of you are probably familiar with, is Falco. Falco, a CNCF project, has been super popular; it's at, what, 37, 40 million downloads by now, so that's pretty cool. And they have been a great partner, because we had to make sure their solution works on Fargate, which is not a natural place for their software to run, but there was enough demand and interest from our customers that both companies leaned in to make sure they could be successful. So last year Sysdig earned the security competency (we have a number of specific competencies for our partners), and their integration with Security Hub is great. Partners who lean in the way Sysdig has, into making our customers successful and working with us, are the best partners that we have.
And there are a number of open source companies out there, built on open source, where their entire portfolio is built on open source software or where they're active participants like we are, that we love working with on a day-to-day basis. So the thing I'd like to leave you with as we wind down this presentation is that AWS is constantly looking for partnerships, because our partners enable our customers. Those could be with companies like Redis, with MongoDB, with Confluent, with Databricks. Your default reaction might be, "Hey, these are companies that maybe compete with AWS," but no, I think we are partners as well. For somebody like me, at the lower end of the stack, where people run on top of the services I own, on Linux, containers, EC2, these partners are just as important customers as any AWS service or any third-party external customer. It's not a zero-sum game. We look forward to working with all these companies and open source projects. From an AWS perspective, a big part of where my open source program office spends its time is making it easy for our developers to contribute to open source and making it easy for AWS teams to decide when to open source software or participate in open source projects. Over the last few years we've made significant changes to reduce that friction, and I think you can see it in the results I showed you earlier in this talk. And the last point is one of the most important things I say, and I'll keep saying it: what we do as AWS is carry the pager. There are a lot of open source projects out there, and operationalizing them, running them at scale, is not easy; for whatever reason, that may have nothing to do with the software itself. Our core competency is taking that software, becoming really good at operating it, becoming experts at operating it, and then ideally taking that expertise and experience and contributing it back upstream, because that makes it better for everybody. I think you'll see us do a lot more of that going forward; we've been doing it for the last few years, in the container space we do it every day, and I'm excited about the possibilities. With that, thank you very much, and I hope you enjoy the rest of the showcase.
>> Okay, welcome back. We have Deepak Singh here; we just had his closing keynote. He's the vice president of compute services. Deepak, great keynote, great wisdom and insight from that session, very notable highlights and cutting-edge trends and product information. Thanks for sharing.
>> No, anytime. It's always good to be here. It's too bad that we're still doing this virtually, but it's always good to talk to you, John.
>> Hopefully we'll get through this pretty quickly. I want to jump right in, because we don't have a lot of time, and get some quick questions in. You brought up some good things: open source innovation going to the next level. You've seen the rise of superclouds and super apps developing in open source. You're seeing big companies contributing; you mentioned Argo and Intuit. You're seeing that dynamic where companies are forming around these projects. This is a rising tide; this is actually real. It's not the old school of, okay, here's a project, and then someone manages support and commercialization of it. It's actually platforms at cloud scale. This is next gen.
>> Yeah, and actually I think it started a few years ago.
We can talk about a company that you're very familiar with as part of this event, which is Armory. Many years ago, Netflix spun off this project called Spinnaker. Spinnaker is a CI/CD system that was developed at Netflix for their own purposes, but they chose to open source it, and since then it's become very popular with customers who want to use it, even on premises, and you have a company that has spun up around it. I think what's making this world very unique is that you have very large companies like Facebook building things for themselves, like Vitess, or Netflix with Spinnaker, and open sourcing them. You can have a lot of discussion about why they chose to do so, but increasingly that's becoming the default: when Amazon or Netflix or Facebook (or Meta, I guess we call them these days) build something for themselves, for their own needs, the first question they ask is, should it be open source? And increasingly we are all saying yes. And here's what happens because of that. It gives an opportunity, depending on how you open source it, for innovation through commercial deployments, so you get SaaS companies that are going to take that product and make it relevant and useful to a very broad number of customers. You build partnerships with cloud providers like AWS, because our customers love this open source project and they need help, and they may choose an AWS managed service or they may end up working with this partner on a day-to-day basis, and we want to work with that partner because they're making our customers successful, which is one reason all of us are here. So you're having this innovation come from large companies, whether they're consumer companies like Meta or infrastructure companies like us, or from innovation happening inside an open source project, which ends up in companies being spun up, and that fosters the innovation flywheel that's happening right now. And as you said, this is unique; you never saw this happen before from so many different directions.
>> It really is a nice progression on the business model side as well. You mentioned Argo, which is a great organic thing that Intuit developed. We just interviewed Codefresh; they presented here in the showcase as well. You're seeing the formation around these projects develop in the community at a different scale. I mean, look at Codefresh: Intuit did Argo, and Codefresh is not just supporting it, they're building a platform. So you're seeing the dynamics of tools, and now the platforms emerging from them. You mentioned Lambda, which is proprietary for AWS and yet, as your talk showed, powered by open source. So again, open source combined with cloud scale allows for new potential super applications, or superclouds, to develop. This is a new phenomenon. This isn't just lift-and-shift and host on the cloud; this is an actual production developer workflow.
>> Yeah, and you're seeing it with consumers, large companies, enterprises, startups. It used to be that startups were the ones comfortable adopting these solutions, but now you see companies of all sizes doing so. And as I said, it's not just software; it's software and services, and services are increasingly becoming the way these are delivered to customers. I actually think the innovation is just getting going, which is why we have this.
We have so many partners here who are all inventing and innovating on top of open source, whether it's developed by them or by a broader community.
>> Yeah, and I like the container work you guys have driven; you've seen a lot of changes there. And again, with cloud scale and open source you're seeing the dynamics change, both where you're enabling it and where you see really big change happen. So let's take Snowflake, a big customer of AWS. They started out as a startup too, but they weren't just another data warehouse: they were bringing data-warehouse-like functionality, doing everything differently and making it consumable for the cloud, and hence they're huge. So that's a disruption of an incumbent leader in a sector, and then you've got new capabilities emerging. What are your thoughts, Deepak? Can you share your vision on the disruption of existing leaders, the old guard as you guys call them, and then the new capabilities as these new platforms emerge with net new functionality? How do you see that playing out?
>> Yeah, I'll speak from my side of the world, the one I've lived in over the last few years, which is containers and serverless. If you go to any enterprise and ask them, do you want to modernize your infrastructure, do you want to take advantage of automated software delivery, continuous delivery, infrastructure as code, modern observability, all of them will say yes. But they are also still large enterprises with enterprise-level requirements. I'm using the word enterprise a lot, and it's usually a trigger word for me because so many customers have similar requirements, but I'm using it here to mean a large company with a lot of existing software and existing practices. I think the innovation that's coming, and I see a lot of companies doing this, is saying: we understand the problems you want to solve, we understand the world you live in, which could be regulated, and you want to use all these new modalities. How do we allow you to use all of them, keep the advantages of switching to Lambda or to a service running on Fargate, and still give you the same capabilities? And I'll bring up Sysdig here, because we work so closely with them on Falco, as an example; I just talked about them in my keynote. They could have just said, "Oh, we'll just support EC2 and be done with it." Instead they said, "No, we're going to make sure serverless containers in particular are something we're really good at, because our customers want to use them, but it requires us to think differently." And they ended up developing new things, like the Falco work, that are born in this new world but understand the requirements of the old world, if you get what I'm saying. I think that's a real example.
>> Yeah. Well, first of all, they're smart, so it was pretty obvious to them; most people who know the space can see you can connect the dots on serverless, which is a great point, but not everyone can see that. Again, this is what's new. And Sysdig, as I found out in my interview with their founder, a great, great founder, they'll go do the new thing. So it was very easy to connect the dots there. Again, that's the trend.
Well, I've got to ask: if they're doing that for serverless, you mentioned Graviton in your speech, and what came out of re:Invent this past year was all the innovation going on at the compute level with Graviton, at many levels in the silicon. How should companies and open source developers think about how to innovate with Graviton?
>> Yeah, I mean, you've seen examples of people blogging and tweeting about how fast their applications run on Graviton and the price-performance benefits they get, whether it's in observability or other places. Graviton is something AWS is embracing across the compute portfolio. Obviously you can go find EC2 instances, the Graviton2 instances, and run on them, and that'll be great. But we know that many of our customers are building new applications on serverless and containers, and for containers increasingly with things like Fargate, where they don't want to operate the underlying infrastructure. So a big part of what we're doing is making sure Graviton is available to you on every compute modality. You've been able to run it on EC2 for a while; you've been able to use ECS and EKS and run on Graviton almost since launch. But we wanted to take it a step further. Elastic Beanstalk customers (Elastic Beanstalk has been around for a decade) can now use it with Graviton. People running ECS on Fargate can now use Graviton. Lambda customers can pick Graviton as well. So we're taking the price-performance benefits you get from Graviton and putting them across the entire compute portfolio, which means every high-level service built on that compute infrastructure gets the price-performance benefits and the lower power consumption of Arm processors. So I'm personally excited like crazy, and this is Graviton2; Graviton3 is coming.
>> That's incredible. It's an opportunity like serverless was; it's pretty obvious, and hopefully everyone will jump on it. Final question, as the time's ticking here, and I want to get your thoughts quickly. If you look at what's happened with containers over the past, say, eight years, since the founding of Docker and the first Docker release, if you will, to how that's evolved, and then the introduction of Kubernetes and the cloud native wave we're seeing now: how would you describe the relationship between the success of Docker and what we're seeing now with Kubernetes and the cloud native construct? What's different, and why is this combination so successful?
>> Yeah. I often say that containers would have... let me rephrase that. What I say is that people would have adopted the modern way of running applications whether containers came around or not, but the fact that containers came around made that migration and that journey so much more efficient for people. Right from the first talk that Solomon gave when he announced Docker (I still remember it), and customers starting to use it and get interested, all the way to the more advanced orchestration we have now for containers across the board, and there are so many ways to do that, Kubernetes being the most well-known one. Here's the thing that I think has changed.
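To make the "Graviton on every compute modality" point concrete, here is roughly what opting a Lambda function into Graviton looks like with boto3. The function name, role ARN, and zip file are placeholders rather than real resources, so treat this as a sketch, not a copy-paste deployment.

```python
# pip install boto3
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_function(
    FunctionName="hello-arm",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder role
    Handler="app.handler",
    Code={"ZipFile": open("function.zip", "rb").read()},
    Architectures=["arm64"],  # run on Graviton instead of the x86_64 default
)
```

The same kind of single switch shows up elsewhere in the portfolio, for example picking an Arm-based instance type for ECS, EKS, or Elastic Beanstalk.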
I think what Kubernetes and Docker, and the whole modern way of building applications, have done is take people who would have spent years adopting these practices and bring those practices right to their fingertips, built into the APIs, and in the case of Kubernetes, with an entire software ecosystem built around it. The number of decisions people have to take has gotten smaller in many ways; there are so many options that the number of choices can feel higher, but the time it takes to get to a result, to a production version of an application that works for them, is way lower. I have not seen anything like what I've seen in the last six, seven, eight years in terms of how quickly a company you would think would never adopt modern technology has been able to go from "this is interesting" to getting into production really quickly. And I think it's because the tooling makes it so, from the fact that you could do docker run and docker build so easily back in the day, all the way to all the advanced orchestration you can do with container orchestrators today, which takes a lot of that work away as well. There's never been a better time to be a developer, independent of whatever you're trying to build, and I think containers are a big, central part of why that's happened.
>> It's like the recipe: the combination of cloud scale, the timing of Kubernetes, and the containerization concepts just exploded as a beautiful thing. And it creates more opportunities, and new challenges, which are opportunities that are net new, but it solves the automation piece, and again, it only makes things go faster.
>> Yes.
>> And that's the key trend. Deepak, thank you so much for coming on. We're seeing tons of open cloud innovations, thanks to the success of your team at AWS and your being great participants in the community. We're seeing innovations from startups; you guys are helping enable that. Of course, they want to live on their own and be successful and build their superclouds and super apps. So thank you for spending the time with us. Appreciate it.
>> Yeah, anytime, and thank you. This is a great event, so I look forward to people running software and building applications using AWS services and all these wonderful partners that we have.
>> Awesome, great stuff. Great startups, great next generation leaders emerging. When startups get successful, they become the modern software application platforms out there, powering business and changing the world. This is theCUBE; you're watching the AWS Startup Showcase, season two, episode one, Open Cloud Innovations. I'm John Furrier, your host; see you next time.
How Open Source is Changing the Corporate and Startup Enterprises | Open Cloud Innovations
(gentle upbeat music)
>> Hello, and welcome to theCUBE presentation of the AWS Startup Showcase: Open Cloud Innovations. This is season two, episode one of an ongoing series covering exciting startups from the AWS ecosystem and talking about innovation; the theme here is open source. We do this every episode: we pick a theme and have a lot of fun talking to the leaders in the industry and the hottest startups. I'm your host, John Furrier, here with Lisa Martin in our Palo Alto studios. Lisa, great series, great to see you again.
>> Good to see you too. Great series, always such spirited conversations with very empowered and enlightened individuals.
>> I love the episodic nature of these events; we get more stories out there than ever before. These are the hottest startups in the AWS ecosystem, which is dominating the cloud sector, and there are a lot of them really changing the game on cloud native and the enablement around it. The stories coming out here are pretty compelling, and not just from startups: they're actually penetrating the enterprise, and the buyers are changing their architectures. It's just really fun to catch the wave here.
>> They are, and one of the things about the open source community is these companies embracing it and how that's opening up their entry, to your point, into the enterprise. I was talking with several customers, companies who said 70% of their pipeline comes from the open source community and then converts to the premium version of the technology. So it's really been a very smart, strategic way into the enterprise.
>> Yeah, and I love the format too. We get the keynotes; we're doing the opening keynote now with some great guests. We have Serge on from the AWS Global Startup program; he is the global startups lead. We've got Swami coming on, and then a closing keynote with Deepak Singh, who has really grown in the Amazon organization from containers to now compute services, which spans how modern applications are being built. And I think the big trend these startups are riding is cloud native driving the modern architecture for software development, and not just for startups: existing large ISVs and software companies are rearchitecting, and the customers who buy their products and services in the cloud are rearchitecting too. So it's a whole new growth wave coming in, the modern era of cloud, some say, and it's exciting; a small startup could be the next big name tomorrow.
>> One of the things that was kind of a theme throughout the conversations I had with these different guests was, from a modern application security perspective, that security is key, but it's not just about shifting left. It's about doing so while empowering the developers. They don't have to be security experts; they need to have a developer brain and a security heart, and those two organizations within companies can work better together, more collaboratively, ultimately empowering those developers, which goes a long way.
>> Well, for the folks who are watching this, the format is very simple. We have a keynote with editorial keynote speakers coming in, and then we're going to have a bunch of companies present their stories and their showcases. We've interviewed them: myself, you, Dave Vellante, and Dave Nicholson from theCUBE team. They're going to tell their stories, and between the companies and the AWS Heroes, 14 companies are represented, some of them with new business models, and Deepak Singh, who leads the AWS team, is going to have the closing keynote.
He talks about the changing business models in open source, not just the tech (there's a lot of tech), but how companies are being started around new business models built on open source. It's really, really amazing.
>> I bet. And does he see any specific verticals that are taking off?
>> Well, he's seeing the contributions from big companies like AWS, the Facebooks of the world, and large companies like Netflix and Intuit, all contributing content to open source, and then startups forming around them. So Netflix does some great work: they donate to open source, and the next thing you know a small group of entrepreneurs get together, form a company, and create a platform around it with unification and scale. So the cloud is enabling this new super application environment, superclouds as we call them, and these new superclouds and super applications are scaling data-driven machine learning and AI. That's the new formula for success.
>> The new formula for success also has to have the velocity that developers expect, and the consumerization of tech has driven all of us to expect things very quickly.
>> Well, let's bring Serge Shevchenko of the AWS Global Startup program into the program. Serge is our partner; he is the leader at AWS who has been working on this program. Serge, great to see you. Thanks for coming on.
>> Yeah, likewise, John, thank you for having me. Very excited to be here.
>> We've been collaborating on this together for over a year. Again, this is season two of this innovative program, which is a combination of a CUBE media partnership and AWS getting the stories out, and it has been a real success because there's a real hunger to discover content, and in the marketplace these new solutions coming from startups are the next big thing. So I have to ask you, first and foremost: what's the AWS Startup Showcase about? Can you explain, in your terms, your team's vision behind it, and why the startup focus?
>> Yeah, absolutely. You know, John, we curated the AWS Startup Showcase really to bring meaningful and oftentimes educational content to our customers and partners, highlighting innovative solutions within these themes, and ultimately to help customers find the best solutions for their use cases, which is a combination of AWS and our partners. And really, from pre-seed to IPO, John, the world's most innovative startups build on AWS. From leadership downward, we're very intentional about cultivating a vigorous AWS community, and since the launch of the AWS Global Startup program at re:Invent 2019, we've helped hundreds of startups accelerate their growth through product development support, go-to-market, and co-sell programs.
>> So Serge, a question for you on the theme of today. John mentioned our showcases have themes; today's theme is going to cover open source software. Talk to us about how Amazon thinks about open source.
>> Sure, absolutely. I'll just touch on it briefly, and I'm very excited for the keynote at the end of today, which will be delivered by Deepak, the VP of compute services at AWS. We here at Amazon believe in open source. In fact, Amazon contributes to open source in multiple ways, whether that's through directly contributing to third-party project repos, or significant code contributions to Kubernetes, Rust, and other projects, all the way to leadership participation in organizations such as the CNCF.
And having supported dozens of ISVs myself over the years, I've seen explosive growth when it comes to open source adoption. I mean, look at projects like Checkov: within 12 months of launching their open source project, they had about a million users. Another great example is Falco: in under a decade they've had about 37 million downloads, and that's about a 300% increase since it became an incubating project in the CNCF. So we're seeing very exciting things here at AWS.
>> So explosive growth, and a lot of content. What do you hope that our viewers and our guests are going to get out of today?
>> Yeah, great question, Lisa. I really hope that today's event will help customers understand why AWS is the best place for them to run open source, commercial or otherwise, and which partner solutions will help them along their journey. I think today's lineup, through the partner solutions and Deepak's closing keynote at the end, is going to present a very valuable narrative for customers and startups in selecting where and which projects to run on AWS.
>> That's great stuff, Serge. We'd love to have you on again, and I really want to congratulate your team; we enjoy working with them. We think this showcase does a great service for the community. It's kind of open source in its own way, if you will, co-creating and contributing out there, but you're really getting the voices out at scale. We've got companies like Armory, Kubecost, Sysdig, Tidelift, Codefresh; these are some of the companies that are changing the game. We even had Patreon, a customer, and one of the partners, Snyk, on security: all the big names in the startup scene. Plus, from AWS, Deepak Singh and Swami are going to be on, along with the AWS Heroes. It's really at scale, and this is really great. So thank you so much for participating and enabling all of this.
>> No, thank you to theCUBE. You've been a great partner in this whole process. Very excited for today.
>> Thanks, Serge, really appreciate it. Lisa, what a great segment that was to kick off the event, and we've got a great lineup coming up, including the final keynote fireside chat with Deepak Singh, a big name at AWS; Serge and the startup showcase are really innovative.
>> Very innovative, and in a short time period: he talked about the launch of this at re:Invent 2019, and they've helped hundreds of startups. We've had over 50, I think, on the showcase in the last year or so, John. So we've really gotten to cover a lot of great customers, a lot of great stories, a lot of great content coming out of theCUBE.
>> I love the openness of it. I love the scale, the storytelling. I love the collaboration; a great model, Lisa, great to work with you. We also have interviews from Dave Vellante and Dave Nicholson; they're not here, but let's kick off the show. Let's get started with our next guest, Swami, a leader at AWS. Swami was just promoted to VP of database, and he has also run machine learning and AI at AWS. He is a leader; he's an author of the original DynamoDB paper, which is celebrating its 10th anniversary and really impacted distributed computing and open source. Swami has introduced many open source aspects of products within AWS and has been a leader on the engineering side for many, many years at AWS, from an intern to now an executive. Swami, great to see you. Thanks for coming on our AWS Startup Showcase and spending the time with us.
>> My pleasure. Thanks again, John. Thanks for having me.
>> I wanted to ask, if you don't mind, about the database market over the past 10 to 20 years. Cloud and application development, as you've seen, have changed a lot, and you've been involved in so many product launches over the years. Cloud and machine learning are the biggest waves happening, to your point, in what you're doing now. Software is under the covers powering it all, infrastructure is code, and open source has been a big part of it and continues to grow and change. Deepak Singh from AWS talks about the business model transformation, how Netflix donates to open source and then a company starts around it and creates more growth. Machine learning and all the open source conversations around automation matter to developers and builders as software, cloud, and machine learning become the key pistons in the engine. This is a big wave. What's your view on this? How have cloud scale and data impacted the software market?
>> I mean, that's a broad question, so I'm going to break it down and give some of the background on how we are thinking about it. First, when it comes to open source, I'll start off by saying that the longevity and viability of open source are very important to our customers, and that is why we have been a significant contributor to and supporter of these communities. There are several efforts in open source, even internal ones where we actually open sourced some of our key Amazon technologies like Firecracker or BottleRocket or our CDK to help advance the industry; the CDK itself, for example, provides a really powerful way to build and configure cloud services. We also contribute to a lot of existing open source projects: OpenTelemetry, Linux, Java, Redis, Kubernetes, Grafana, Kafka, the Robot Operating System, Hadoop, Lucene, and so forth. I can go on and on, but even in the database and observability space, and in machine learning, we have always started by embracing open source in a big, material way. In deep learning frameworks, we championed MXNet and some of its core components, we open sourced our AutoML technology AutoGluon, we've open sourced and collaborated with partners like Facebook/Meta on PyTorch serving, contributing some major components there, and we've open sourced our edge compiler work as well. So I would say the number one thing is that we are very, very excited to partner with the broader community on problems that really matter to customers and to ensure they are able to get amazing benefit from all of this.
>> And I see machine learning as a huge thing. If you look at how cloud grew: when you wrote the DynamoDB paper, that was the beginning of what I call the cloud surge. It was the beginning of the cloud being not just a resource alternative to building a data center, certainly a great alternative, and every startup did it. That's history, phase one, the first half inning. Then it became large scale. Machine learning feels the same way now. You're seeing a lot of people using it, a lot of people playing around with it, and it's evolving. It's been around as a science, but combined with cloud scale, this is a big thing. How should people in the enterprise think about machine learning? How have some of your top customers thought about machine learning as they refactor their applications? What are some of the things you can share from your experience and journey here?
>> I mean, one of the key things I'd say, just to set some context on scale and numbers: more than one and a half million customers use our database, analytics, or ML services end to end, and the machine learning services and capabilities within that are easily used by more than a hundred thousand customers, which is really good scale. However, in Amazon we tend to use the phrase "it's day one of the internet" (even though it's an old phrase now, it's a golden one), and I would say in the world of machine learning, yes, it's day one, but I also think we just woke up and haven't even had a cup of coffee yet. It's really that early. It's interesting that you compared it to where cloud was 10 or 12 years ago. In those early days, when I talked to engineering leaders who were running their own data centers about cloud and various disruptive technologies, I still used to get questions like "why cloud" and the basics. Whereas now, with machine learning, almost every CIO and CEO, none of them ever ask me why machine learning. Instead, the number one question I get is: how do I get started with it, and what are the best use cases? Which is great, and this is where I always tell them about one of the learnings we had at Amazon. A few years ago, probably seven or eight years ago, Amazon itself realized as a company the impact of what machine learning could do in terms of changing how we run our business, what it means for providing a better customer experience, optimizing our supply chain, and so forth, and we realized we needed to help our builders learn machine learning and help even our business leaders understand its power. So we did two things. One, from the bottom up, we built what I call Machine Learning University, which is run in my team; it's literally staffed with professors and teachers who offer curriculum to builders so that they get educated on machine learning. And from the top down, in our yearly planning process, which we call the operational planning process, where we write Amazon-style six-page narratives and then answer FAQs, we ask everyone to answer one question: how do you plan to leverage machine learning in your business? And typically, when someone says it really doesn't apply to them, it usually doesn't go well, so we politely encourage them to do better and come back with a better answer. This dynamic of top down and bottom up changed the conversation, and we started seeing more and more measurable growth. These are some of the things you're starting to see more and more among our customers too: they see the business benefit. And to address the talent gap, we also made the Machine Learning University curriculum open source and freely available, and we launched SageMaker Studio Lab, which is a no-cost, no-setup SageMaker notebook service for learners and students as well. We're also excited to have announced an AI and ML scholarship program for underrepresented students. So there's so much more we can do.
>> Well, congratulations on the DynamoDB paper and its 10-year anniversary. It's a revolutionary piece of work that changed the game and changed the world, and it has had a huge impact. And now, as machine learning goes to the next level, the next intern out there is at school with machine learning; they're going to be writing that next paper. Your advice to them, real quick?
>> My biggest advice, always, is that I encourage all builders to dream big and not be hesitant to speak their minds, as long as you have the conviction that you're addressing a real customer problem. So when you feel like you have an amazing solution to a customer problem, take the time to articulate your thoughts well, and then feel free to speak up and communicate with the folks you're working with. I'm sure any company that nurtures good talent and knows how to hire and develop the best will be willing to listen, and then you will be able to have an amazing impact in the industry.
>> Swami, great to have you, a CUBE alumni. I love our conversations, from the intern days and the original DynamoDB paper to technical leader at AWS across databases, analytics, and machine learning. Congratulations on all your success, and keep innovating on behalf of the customers and the industry. Thanks for spending the time here on theCUBE and our program; appreciate it.
>> Thanks again, John. Really appreciate it.
>> Okay, now let's kick off our program. That ends the keynote track here on the AWS Startup Showcase, season two, episode one. Enjoy the program, and don't miss the closing keynote with Deepak Singh, who goes into great detail on the changing business models and all the exciting open source innovation. (gentle bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Serge | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Dave Vallante | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Deepak Singh | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Swami | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Codefresh | ORGANIZATION | 0.99+ |
Deepak | PERSON | 0.99+ |
Armory | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Sysdig | ORGANIZATION | 0.99+ |
Serge Shevchenko | PERSON | 0.99+ |
Kubecost | ORGANIZATION | 0.99+ |
Tidelift | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
14 companies | QUANTITY | 0.99+ |
six pages | QUANTITY | 0.99+ |
one question | QUANTITY | 0.99+ |
12 months | QUANTITY | 0.99+ |
more than a hundred thousand customers | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
last year | DATE | 0.98+ |
CNCF | ORGANIZATION | 0.98+ |
More than one and a half million customers | QUANTITY | 0.98+ |
two organizations | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
CDK | ORGANIZATION | 0.98+ |
Intuit | ORGANIZATION | 0.98+ |
DynamoDB | TITLE | 0.98+ |
first half inning | QUANTITY | 0.98+ |
Deepak Singh, AWS | DockerCon 2021
>> Mhm. Yes, everyone, welcome back to theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host of theCUBE. We've got a great segment here with one of the big supporters of open source, Amazon Web Services, returning for a second year of virtual DockerCon: Deepak Singh, vice president of compute services at AWS. Deepak, great to see you. Thanks for coming back on remotely again; soon we'll be in real life. Re:Invent is going to be in person, and we'll be there. Good to see you. >> Good to see you too, John. It's always good to do these. I don't know how many times I've been on theCUBE now, but it's great every single time. >> You're a legend for getting on here, and there are a lot of important things to discuss. You're in one of the most important areas of the technology industry right now: the confluence of cloud scale and modern application development as it shifts toward what Andy Jassy calls the new guard. It's been happening. You've been a big proponent of open source, enabling open source as a service and creating business models for companies, but more importantly you're making it easier for folks to use software, and Docker has been a big relationship for you. Could you take a minute to talk about the Docker-AWS relationship, your involvement, and what you're doing? >> Yeah, it actually goes back a long way. You know, John, we announced ECS at re:Invent 2014, and ECS at that time was very much a managed orchestration service on top of Docker. I think it was the first really big one out there from a cloud provider. Since then, of course, the world has evolved quite a bit, and the relationship with Docker has evolved a lot. The thing I'd like to talk about is something we announced with Docker last year; I don't remember if I talked about it on theCUBE at the time. Last year we started working with Docker on how we go from docker run, which customers love, or Docker Desktop, which customers love, and make it easy for people to run containers on ECS and Fargate. Most new customers running containers on AWS today start with ECS and Fargate, about half of them, and we wanted to make it very easy for them to start where they are on the laptop, which is often Docker Desktop, and end up with running services natively on AWS. So we started working with Docker, and that collaboration has been very successful. We look forward to continuing to evolve it, so you can use Docker Compose, Docker Desktop, and docker run, the tools Docker customers are used to, and launch production services on the AWS side. That's one area where we work really well together. The other area where the two companies continue to work well together is open source in general. AWS has a very strong commitment to containerd: EKS, our Kubernetes service, is moving towards containerd, and Fargate actually runs all on containerd today. We collaborate with Docker on the OCI specification, because the OCI image spec is becoming the de facto packaging format. At AWS, we just launched a service called App Runner, and the main expected input for App Runner is an OCI image. The same is true in Lambda, where OCI images are now a packaging format for Lambda functions. And the last one I'd like to call out, which has been an amazing partnership in an area most people don't pay attention to, is image signing.
There's a project called Notary. We're working on the second version of the Notary spec for image signing, and AWS, Docker, and a couple of other companies have been working very closely together on finalizing Notary v2, so that, at least in our case, we can start building services for our customers on top of it. It's a great relationship, and I expect to see it continue. >> Well, I think one of the themes this year is developer experience, so that's a good call-out, along with the new announcements on tools and software, because that seems to be a great developer integration with Docker. The question I have for you is: how should customers think about things like ECS versus EKS, App Runner, and Lambda for running their containers? How do they understand the differences? What's the thought process there? >> That's a good question, and it's one of the questions I started getting on Twitter after the announcements. Let's start at the very beginning. Anyone can pick up a Docker container and run it on EC2 today. You can run it on EC2, you can run it on Lightsail, and docker run works just fine; it's a Linux machine. Then people want to do more complex things: they want to run large-scale orchestrated services, they want to run their entire business in containers, and we have customers who do that today. You have people like Vanguard, who run a significant portion of their infrastructure on ECS and Fargate, and you have customers who are heavy users of EKS, our Kubernetes service. In general, if you're running large-scale systems and building your own platforms, you're most likely to use ECS or EKS. If you come from a Kubernetes background, you're running Kubernetes on premises, or you want the flexibility and control Kubernetes gives you, you're going to end up with EKS. That's what we see our customers doing. If you just want to run containers and use AWS to its fullest extent, where you want the container API to be part of the AWS API, then you pick ECS. And one of the reasons you see so many customers start with ECS and Fargate is that with Fargate you get significant ease of use from an operational standpoint; we see many startups and enterprises, especially security-focused enterprises, leaning towards Fargate. But there's a class of customers that doesn't want to think about orchestration at all. They just want to say: here's my code, here's my container image, run my service for me. That's where things like App Runner come in, and that's one of the reasons we launched it. Lambda is a little bit different. Lambda is a unique service: you buy into an event-driven architecture, and if you do that, you can factor your application into functions and Lambda does its magic. The container part there is what Lambda announced at re:Invent, where they now support container packaging: instead of zip files, you can package your functions as container images and Lambda will run them for you. The advantage is that all the tooling you built for your containers now works for Lambda as well. So I wouldn't call Lambda a container orchestration service in the same sense as ECS, EKS, or App Runner, but it definitely takes the container image format as a standard packaging format. I think that's the universal common theme you find across AWS at this point in time. >> You know, one of the things we're observing at this event is that KubeCon and the Linux Foundation events draw a lot of operators; Kubernetes hits that. But here it's developers, and the thing is, they want ease of use, simplicity, and a good experience, but they also want the innovation. They want all of it. So when I ask what Amazon brings to the table for that new equation, what would you say? >> For me it's always, and you've probably heard me say this a hundred times, maybe a thousand, Fargate. Fargate is unique to us, and it takes a lot of what we have learned about operating infrastructure at scale. We had talked about something like Fargate even before we launched ECS, but we had to learn what it really meant and what customers really wanted. The idea was this: when you're running clusters of machines to run containers on, you have to start thinking about a lot of things that containers were supposed to take away, such as capacity, what kind of infrastructure to run on, whether it's been patched, and where your container is running. It felt almost backwards. So the question was, how can we make your containerized bundle, an ECS task or a Kubernetes pod, the thing that you talk to, the main unit that you operate on, the unit that you get billed and metered on? That's where Fargate comes in, and it allows us to do many interesting things. We've effectively changed the engine of Fargate since we launched it: we run it on EC2 instances and we run it on Firecracker, we've changed the Fargate agent architecture, and we've made a lot of under-the-hood changes that take advantage of the broader innovation at AWS, all without customers having to think or worry about it. It happens underneath the hood, and the engine keeps improving as you go along. It takes away all the operational pain of managing clusters, of picking which instances to use, of trying to bin-pack and get efficiency. That becomes our problem. So that's an area where you should expect to see us do a lot more. It's becoming the fabric of so many things at AWS now, and in some ways we still have a lot more to do. >> Yeah, and it's a really good time, with a whole new wave of developers coming in. One of the things we've been reporting on SiliconANGLE and theCUBE is that more developers keep coming in and contributing to the open source community, even end users, not just the hyperscalers but what I call classic main street enterprises. So two things I want to ask you on the customer side, because you really have two customers: the open source community, and enterprise customers who want things made easier. What are you seeing and hearing from them? I know you work backwards from the customer, so work backwards from the community and from the enterprise customer. What's going on in their environments, what are the key trends they're riding, the big challenges, the big opportunities, and the same for the community? >> I'll start with the enterprise; that's almost the easier answer. We're seeing enterprises increasingly move to the cloud wholesale. In some ways you could argue the pandemic has accelerated it, but we were seeing it before: they want to move to the cloud and adopt modern best practices. If you've seen my talks at re:Invent over the last few years, I've talked about modernization in all its aspects, and that's 90% of our conversation with enterprises. I've walked into meetings that were supposedly about containers where half the conversation was spent on how an organization modernizes and what it needs to do to modernize, and containers and serverless play a pretty important part in that, because they give organizations a chance to step away from the shackles of fixed infrastructure and the methods and approaches built around it. But equally we're talking about CI/CD and fully automated deployments, what it means for developers to run their own services, how you monitor and instrument your services, and how you do observability in the modern world. Those are the challenges enterprises are working towards, and we're spending a ton of time helping them there. Many of them still run infrastructure on premises, so we have Outposts for them. Just last week I was talking to a bunch of our customers, and they have lots of interesting ideas for Outposts, but many of them also have their own infrastructure, and that's where something like ECS Anywhere came from. You like using ECS in the cloud, you like having the same API that orchestrates containers for you, and it does that in an AWS region, on Outposts, on Wavelength, on Local Zones. So how about we allow you to do it on whatever infrastructure you bring to us? You want to bring a Raspberry Pi, you can do that. You want to bring your on-premises data center infrastructure, we can do that. Or a point-of-sale device: as long as you can get the agent running and you can connect to an AWS region, and it's okay to lose connectivity every now and then, we can orchestrate a container for you over there. The same customers who like the ease of use and simplicity of ECS really resonate with that message. So where we are today with the enterprise is that we've got some really good solutions in AWS, and we're now allowing you to take those APIs and launch containers wherever you want to run them, whether that's the edge or your own data center. By and large, though, a lot of enterprises are still making the change from running infrastructure and applications the way they used to, to a modern, cloud native way, and we're helping them a lot. The community is interesting: they want to be more participatory. That's where things like Copilot come from. And honestly, the best thing we've ever done in my org is probably our open roadmaps, where the community can go into the roadmap and engage with us, whether it's about an open source project or just telling us what a feature should be and how they'd like to see it. It's great engagement, and it's helped us a lot. It's helped us prioritize correctly and think about what we want to do next. >> That must be very hard to do, opening up the kimono on the roadmap, because normally that's the crown jewels and it's secretive, and now it's all out in the open. I think that's a really interesting experiment. What's your reaction to that? What's been the feedback on the roadmap piece? >> We do it for pretty much every service in my organization, and we've been doing it for about three years now. We're very upfront about one thing: security and availability are job zero, and a hundred times out of a hundred, if we have to choose between a new feature and helping our customers be available and safe, we'll choose the latter. That's why we don't put dates in there; we just tell you directionally where we are and what we're prioritizing. Every now and then we'll choose not to put a feature in there because we want to keep it secret until it launches, but for the most part 99% of our roadmap is there and people engage with it. It hasn't proven to be a problem, because we've been very responsible in how we manage it and very transparent about whether we can commit to something or not. >> I've got to ask you, as the leader of this group: open source is super important, as you know, and you've continued to invest in it for years. How are you investing in the future? What are the plans for your team and the industry? You're very inclusive, which is very cool and will resonate well. Give us some details on what you're investing in, your priorities, your first principles. >> It goes many ways. I also have the luxury of running the Amazon open source program office, so my team, rather than me, gets to help Amazon engineers participate in open source. That's the team that creates the tools for them, makes it easy for them to contribute, manages all the licenses, etcetera. I'll give you a simple example: the ECR credential helper was written by one of our engineers, who started it because he felt it was something we needed, and we made it open source. In general, in many of our teams, the first question we ask is: why is this thing not open source? That's especially true if it's a utility or a piece of software that runs alongside our services. That's step one. But we've done some big things too. A couple of years ago we launched a Linux operating system called Bottlerocket, and right from the beginning it was very clear to us that Bottlerocket was two things: it was an AWS product, but first it was an open source project. We had already learned a bit from what we'd done with Firecracker, and making Bottlerocket an open source operating system was very important. Anyone can take Bottlerocket, the open source project, build tooling for it, and run it however they want. If you want to take Bottlerocket, build your own version, and manage it for another provider, go for it; there's nothing stopping you. So you'll see us do a lot there, and obviously there are multiple areas where you've seen AWS investing on the open source side. But to me, the wins come when engineers can participate in small things, release little helpers, or get contributions from outside. We're going to keep striving to make that better and easier, and, as I said, my team and I have an opportunity to help with that inside the company, and we continue to do so. That's what gets me excited. >> That's great stuff, and congratulations on investing in the community; it really appreciates it, and I know it moves the needle for the industry. Deepak, I've got to ask you, since I have you here at DockerCon: what's the most important story developers should be paying attention to, given everything that's going on, shift-left security, day-two operations, AIOps, GitOps, whatever you want to call it, serverless, Lambda? All kinds of great things are going on; you mentioned Fargate. What should they be paying attention to that will really help their life, both innovation-wise and in quality of life? >> I would say this: in the end, it's very easy for developers, who just want to build things, to get tempted into learning everything about everything. You have access to all the bells and whistles and knobs, but in reality, if you want to run things, you want to focus on what's important: the business application, your application. A lot of what I tell developers, and a lot of where the industry is going, is that there's now a really solid foundation, whether that's Kubernetes, or ECS and Fargate, or the other container services out there, that customers and developers can build upon. Increasingly, we're going to provide tools that wrap that foundation up into nicely packaged solutions. App Runner is a great example, and our collaboration with Docker around Docker Desktop is another, where you get to focus on the application, build on top of everything else, and get so much done. That's one trend you'll see more and more: these things are no longer toys, they're production-grade systems you can build real-world applications on, even though they're easy to use. The second thing I would add is GitOps. You can give it whatever name you want, and there are nuances there, but I think GitOps is the way people should be running their infrastructure. That's my bias, my personal belief, and it's something we believe in a lot ourselves: as you move towards immutable infrastructure and infrastructure automation, GitOps plays a significant role. Developers naturally gravitate towards it, and if you want to live in a world where development and operations are tightly linked, GitOps has a huge role to play. It's a big part of how we're planning to do things like EKS Anywhere, for example, and I think it will be significant in the future of Proton as well. So if you want a trend people should pay attention to, that's the one I believe in a lot. >> Well, you're an expert, so I want to get a quick definition from you. What is GitOps? How would you define it? Because that's a big trend. What does it mean? >> The people who coined the term will probably shoot me for getting this wrong, but I'll tell you how I think about it. In many cases, when you're doing deployments, you're pushing a deployment out. GitOps is more of a pull-based deployment: when you push code to a Git repository, a system knows that the event has happened, pulls from there, and triggers the deployment, as opposed to you telling it, "I have this new piece of code, now go deploy it everywhere." To me, the biggest change has two parts. One, it's a pull-based mechanism, where something changes and systems like container orchestrators keep everything in sync. And two, it's the natural evolution of infrastructure as code: everything is configuration as code, code is code, and everything gets stored in the software repo. The repo becomes your source of record and drives everything. For a large class of customers, that's going to be a pretty big deal. >> Yeah, when you're checking in code, it's like a compiler for the compiler, a container for the container; you've got automation building on automation. Automation is ultimately what we're talking about here, and that's where machine learning kicks in. So again, having this open source foundational fabric, and, as you said, Fargate taking out the muck, the undifferentiated heavy lifting, this is what we're talking about: automation, isn't it, Deepak? >> Yes. There's so much good stuff out there in the world that we like to contribute to, but the thing we hang our hat on is: how do you run it? How do you run it in ways that bring unique capabilities to customers, with things like Nitro, and with the operational muscle we've built over the last decade-plus, and in the container space over the last seven years, where we really know how to run these things at scale and have made the investments to make that easy? That's where we hang our hat: keeping people safe and helping them run highly available applications, whether you're an enterprise moving off the mainframe or the next hot startup that takes off over a weekend because you're suddenly the big thing on Twitter. Our goal is to support you either way, and we've done a lot to build systems that help both sides. And yeah, it's... >> It's interesting if you think about where open source has come from. I remember when open source wasn't really open; I'd be peddling software, and there was a free copy of Linux or UNIX in college, and now it's all free. But here's what's changed: it used to be just free software you downloaded, and now it's a service, and a service can be monetized quickly. What you're offering with AWS and cloud scale is that I get the benefits of the scale, I can bring my open source code to the table, make it a service, integrate it with other services, and be the next Snowflake, a company that can really scale. That's the innovation; it's a new phenomenon, and it also changes the business model. >> You're quite right, and I'd add one more thing to it. Look at how a lot of enterprises use containers today: most of them are using our container services to build an internal developer platform and an internal developer portal. The question then becomes, how do you scale these modern development practices to an entire organization? Take a big bank that's been around forever, with thousands and thousands of IT staff who may not all be experts at running Kubernetes or running containers; when you scale out, that's where systems like Proton come into play. That was actually the inspiration: how do you help organizations that are building these developer portals, developer infrastructure, and developer platforms? How do you make it easy for them to build them, and almost use them as a way to get modern practices into the hands of all the business units, which may not have the time to become experts in the modern ways of running infrastructure because they're busy doing other things? I think you'll see a lot more happening in that space. It's not all happening in the open source community; there's Proton, there are a bunch of other interesting things happening, and it will be interesting to see how that evolves. >> And there's also the communal aspect of not just writing code together, but succeeding together, building something. That's when you see the commercial world meet the open source ethos of communal activity, of working together and sharing. A big part of this year's DockerCon is sharing: not just running and shipping code, but sharing. >> If you think about it, Docker's original value was build, ship, run. You use the same tooling to build it, the same sort of infrastructure interface to ship it, and then you run it. The fact that the Docker image is such a wonderfully shareable entity that can run anywhere is a big deal. It's called the OCI image now, though I still call them Docker images because it's easier, and I think it's become an even bigger deal over the years. Before Amazon I worked in the sciences, in bioinformatics, and the ability to share code, share dependencies, and package all of that up in a container image is a big deal. It's one of the reasons I got fascinated with containers seven or eight years ago. So it will be interesting to see where all of this goes. >> Great stuff, great success, and congratulations. Deepak, it's always great to talk to you; you've got a great finger on the pulse. You lead really important organizations at AWS, and Docker has had such huge success with developers, even though the company has gone through a changeover and a pivot to what they're doing now. They're back to their open source roots, but they have millions and millions of developers using Docker, and new developers keep coming in: .NET developers, Windows developers. So it's no longer just about Linux anymore; it's about just coding. >> Yeah, and it's part of this big trend towards infrastructure automation and modern development and deployment practices that I think everyone is going to adopt faster than we expect. Companies like Docker, and the open source projects they're involved in, are critical in making that a lot easier for everyone. And then folks like us get to build on top of that, or around it, and make it even easier. >> Well, it's great testimony that you based ECS on Docker. Docker has a critical role in the developer community, with Compose and Hub integrated with Docker Desktop, and we'll be watching Amazon and the community to see what kind of experiences you can bring to the table and continue that momentum. Thank you, Deepak, for coming on theCUBE. >> Thank you, John. Always a pleasure. >> Okay, that's theCUBE's DockerCon 2021 virtual coverage. I'm John Furrier, your host of theCUBE. Thanks for watching.
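To make the GitOps model described in this conversation concrete, a controller that pulls desired state from a Git repository and keeps the cluster in sync with it, here is a minimal sketch using an Argo CD Application manifest. Argo CD is just one of several GitOps controllers, the repository URL, paths, and names below are placeholders, and this is not a statement about how AWS implements GitOps in its own services.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    # The Git repo is the source of record; the controller pulls from it
    repoURL: https://github.com/example/my-service-config.git
    targetRevision: main
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true     # remove resources that were deleted from the repo
      selfHeal: true  # revert out-of-band cluster changes to match the repo
```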
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
90% | QUANTITY | 0.99+ |
Justin | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
100 times | QUANTITY | 0.99+ |
Deepak Singh | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Deepak | PERSON | 0.99+ |
99% | QUANTITY | 0.99+ |
Coop con | ORGANIZATION | 0.99+ |
Atlanta | LOCATION | 0.99+ |
three years | QUANTITY | 0.99+ |
2014 | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
two companies | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
john | PERSON | 0.99+ |
100 | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
Rhonda | PERSON | 0.99+ |
Vanguard | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
second version | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Firecracker | ORGANIZATION | 0.99+ |
Linux | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
Symphony | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.98+ |
WS | ORGANIZATION | 0.98+ |
ORGANIZATION | 0.98+ | |
Two parts | QUANTITY | 0.98+ |
second part | QUANTITY | 0.98+ |
2021 | DATE | 0.98+ |
pandemic | EVENT | 0.98+ |
today | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
UNIX | TITLE | 0.97+ |
one area | QUANTITY | 0.97+ |
both sides | QUANTITY | 0.97+ |
Windows | TITLE | 0.97+ |
eight | QUANTITY | 0.97+ |
78 years ago | DATE | 0.96+ |
Dakar Con | ORGANIZATION | 0.96+ |
thousands | QUANTITY | 0.96+ |
E C. S | TITLE | 0.96+ |
This morning | DATE | 0.96+ |
Dr | PERSON | 0.95+ |
GS | ORGANIZATION | 0.95+ |
this year | DATE | 0.94+ |
first principles | QUANTITY | 0.94+ |
Notary | TITLE | 0.94+ |
second year | QUANTITY | 0.94+ |
khan | PERSON | 0.94+ |
Rocket | TITLE | 0.94+ |
lambda | TITLE | 0.94+ |
Another test of transitions
>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I'd like to point you to my GitHub repo: you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation. I've got the Keynote file there, YAMLs, Dockerfiles, Compose files, all that good stuff. If you want to follow along, great; if not, go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon; I've had the pleasure to speak at a few of them, one even in Europe, and I was a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry, and the question I constantly got was: where did this image come from? How did you get it? What's in it? How did it get here? One of the things we did to alleviate some of those questions was to establish a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance: when was the last time you pulled an image and had 100% confidence that you knew what was inside it, where it was built, how it was built, and when it was built? You probably didn't, right? The last thing we want is a container fire, like our image on the screen, and one interesting way we can help prevent that is through the use of labels. We can use labels to address security and to simplify how these images are run. Think of it as self-documenting; think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. So what is a label, specifically? What is the schema? It's just a key-value pair: any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store them in there? I've got a fun little demo to show you about that. Let's start off with some of the simple keys: author, date, description, version. That's some of the basic information around the image, which would be pretty useful, right? What about specific labels for CI? Where's the version control? Where's the source, whether it's Git, GitLab, GitHub, Gitosis, even SVN, who cares? Where are the source files, and where's the Dockerfile that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image back to a commit, and hopefully then to a person.
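A minimal sketch of what such a baseline could look like as static LABEL instructions in a Dockerfile; the keys follow the org.opencontainers.image convention, and the author, title, and repository values are placeholders rather than the demo's actual values:

```dockerfile
FROM alpine:3.12

# Static, declarative metadata labels; keys and values are illustrative examples
LABEL org.opencontainers.image.authors="andy@example.com" \
      org.opencontainers.image.title="demo-flask" \
      org.opencontainers.image.description="Example Flask service" \
      org.opencontainers.image.version="1.0.0" \
      org.opencontainers.image.source="https://github.com/example/demo-flask"
```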
How is it built? What if you wanted to play with it, do a git clone of the repo, and then build the Dockerfile on your own? Having a label specifically dedicated to how to build the image might be interesting for development work. Where it was built, and obviously what build number, right? These not only speak to continuous integration, CI, but also start to speak to security: specifically, what server built it, the version control number, the version number, the commit number, and again, how it was built. What's the specific build number? What was the job number in, say, Jenkins or GitLab? And what if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these labels? I've got a good example of policy enforcement in my demo. So let's look at some sample labels. Originally this idea came out of label-schema.org, and it was later adapted into the opencontainers form, org.opencontainers.image. There is a link on my GitHub page to the full reference, but these are some of the labels that I like to use, just as a kind of standardization. Authors is an email address, so now the image is attributable to a person, which is always good for security and reliability. Where's the source? Where's the version control that has the source, the Dockerfile, and all the assets? How it was built, the build number, the build server, the commit we talked about, when it was created, a simple description. A fun one I like adding is the healthz endpoint. Obviously the HEALTHCHECK directive should be in the Dockerfile, but if you've got other systems that want to ping your application, why not declare the endpoint and make it queryable? Image version, obviously, is a simple declarative, and then a title. And then I've got the two fun ones. Remember I talked about encoding some fun things? Hypothetically, what if we could encode the Compose file for how to build the stack into the image itself? And, conversely, the Kubernetes YAML? Well, actually, you can, and I have a demo to show you how to take advantage of that. So how do we create labels? Creating labels is really a function of build time, okay? You can't really add labels to an image after the fact. The way you add labels is either through the Dockerfile, which I'm a big fan of because it's declarative, it's in version control, and it's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from a static declaration to something more dynamic with build arguments, and I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then, obviously, if you did it by hand, you could do a docker build --label key=value. I'm not a big fan of that third one; I love the first one and obviously the second one. Being dynamic, we can take advantage of some of the variables coming out of version control, or, I should say, some of the variables coming out of our CI system, and that way it effectively self-documents at build time, which is kind of cool. How do we view labels? Well, there are two major ways to view labels. The first one is obviously docker pull and docker inspect: you pull the image locally, you inspect it, and it's going to output JSON, so you'll use something like jq to crack it open and look at the individual labels.
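A rough sketch of the dynamic, build-argument-driven pattern just described, together with the matching build and inspect commands; the argument names, the org.example label key, and the image name are assumptions for illustration, not the exact ones used in the demo repo:

```dockerfile
FROM alpine:3.12

# Build-time arguments, typically supplied by the CI system
ARG BUILD_DATE
ARG GIT_COMMIT
ARG BUILD_NUMBER

# Labels resolved at build time from the arguments above
LABEL org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.example.build-number="${BUILD_NUMBER}"
```

```sh
# Pass the values in at build time...
docker build \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --build-arg GIT_COMMIT="$(git rev-parse --short HEAD)" \
  --build-arg BUILD_NUMBER="${BUILD_NUMBER:-local}" \
  -t example/demo-flask:latest .

# ...then read the labels back with docker inspect and jq
docker inspect example/demo-flask:latest | jq '.[0].Config.Labels'
```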
Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps kind of in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode from the label itself from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all this extra levels of abstraction inherently, if you use it as a label with a kubectl apply, It's just built in. It's kind of like the kiss approach to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Docker file. I've got my compose and my Kubernetes YAML. So let's take a look at the Docker file, right? So it's a simple Alpine image. The org statements are the build time arguments that are passed in. Label, so again, I'm using the org.opencontainers.image.blank, for most of them. There's a typo there. Let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and get commit are derived from the Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual Docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline and I have four major stages, four stages, I have built. And here in build, what I do is I actually do the Git clone. And then I do my docker build. From there, I actually tell the Jenkins StackRox plugin. So that's what I'm using for my security scanning. So go ahead and scan, basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Where I can see the, basically I'm pushing the image up to Hub so such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal, in your environment, you probably want to use separate registries and non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. 
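One plausible way to wire up the base64-encoding step described above, shown as a Dockerfile fragment plus the build command; the file names, build-argument names, and label keys are assumptions for illustration, and the demo repo uses its own names:

```dockerfile
# Dockerfile fragment: accept the encoded manifests and store them as labels
ARG KUBE_YAML
ARG COMPOSE_YAML
LABEL org.example.kubernetes="${KUBE_YAML}" \
      org.example.compose="${COMPOSE_YAML}"
```

```sh
# Encode the manifests at build time (base64 -w0 is the GNU flag for "no line wrapping")
docker build \
  --build-arg KUBE_YAML="$(base64 -w0 kubernetes.yml)" \
  --build-arg COMPOSE_YAML="$(base64 -w0 docker-compose.yml)" \
  -t example/demo-flask:latest .
```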
And it says image required, required image label alert, right? Request that the maintainer, add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo, we didn't actually pass in. So if we go back to our repo, we didn't pass in the the build time argument, we just passed in the word. So let's fix that real quick. That's the Docker file. Let's go ahead and put our dollar sign in their. First day with the fingers you going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the Console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date and the date gets derived on the command line. With the build arguments, there's the base64 encoded of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom layer exists and successful. So here's where we can see no system policy violations profound marking stack regimes security plugin, build step as successful, okay? So we're actually able to do policy enforcement that that image exists, that that label sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there's it is in my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. What if, let's go ahead and take a look at all of the image, all the labels for a second, let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, author's build, commit number, look at the commit number. It was built today build number 12. We saw that right? Delete, build 12. So that's kind of cool dynamic labels. Name, healthz, right? But what we're looking for is we're going to look at the org.zdockerketers label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard end. So now we've actually used that label. From the image that we've queried with skopeo, from a remote registry to deploy locally to our Kubernetes cluster. So let's go ahead and look everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.doctor.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer 'cause the background image in the text. 
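The registry-to-cluster step the demo walks through can be condensed into a single pipeline along these lines; the image reference and label key are placeholders standing in for the demo's own values:

```sh
# Query the registry directly (no docker pull), decode the embedded manifest, and apply it
skopeo inspect docker://docker.io/example/demo-flask:prod \
  | jq -r '.Labels["org.example.kubernetes"]' \
  | base64 -d \
  | kubectl apply -f -
```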
We can go ahead and refresh it a couple times we've got Redis storing a hit counter. We can see that our server name is roundrobing. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 Vms. I'm using K3s instead of full Kubernetes either full Rancher, full Open Shift or Docker Enterprise. I think K3s has some really interesting advantages on the development side and it's kind of intended for IoT but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels is none of this demo stack is required. You can be in any cloud, you can be in CentOs, you can be in any Kubernetes. You can even be in swarm, if you wanted to, or Docker compose. Any Ingress, any CI system, Jenkins, circle, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable, with any comparative product in that category. So I'd like to, again, point you guys to the andyc.infodc20, that's take you right to the GitHub repo. You can reach out to me at any of the socials @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music) >> Narrator: Live from Las Vegas It's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with it's ecosystem partners. >> Okay, welcome back everyone theCUBE's live coverage of AWS re:Invent 2019. This is theCUBE's 7th year covering Amazon re:Invent. It's their 8th year of the conference. I want to just shout out to Intel for their sponsorship for these two amazing sets. Without their support we wouldn't be able to bring our mission of great content to you. I'm John Furrier. Stu Miniman. We're here with the chief of AWS, the chief executive officer Andy Jassy. Tech athlete in and of himself three hour Keynotes. Welcome to theCUBE again, great to see you. >> Great to be here, thanks for having me guys. >> Congratulations on a great show a lot of great buzz. >> Andy: Thank you. >> A lot of good stuff. Your Keynote was phenomenal. You get right into it, you giddy up right into it as you say, three hours, thirty announcements. You guys do a lot, but what I liked, the new addition, the last year and this year is the band; house band. They're pretty good. >> Andy: They're good right? >> They hit the queen notes, so that keeps it balanced. So we're going to work on getting a band for theCUBE. >> Awesome. >> So if I have to ask you, what's your walk up song, what would it be? >> There's so many choices, it depends on what kind of mood I'm in. But, uh, maybe Times Like These by the Foo Fighters. >> John: Alright. >> These are unusual times right now. >> Foo Fighters playing at the Amazon Intersect Show. >> Yes they are. >> Good plug Andy. >> Headlining. >> Very clever >> Always getting a good plug in there. >> My very favorite band. 
Well congratulations on the Intersect you got a lot going on. Intersect is a music festival, I'll get to that in a second But, I think the big news for me is two things, obviously we had a one-on-one exclusive interview and you laid out, essentially what looks like was going to be your Keynote, and it was. Transformation- >> Andy: Thank you for the practice. (Laughter) >> John: I'm glad to practice, use me anytime. >> Yeah. >> And I like to appreciate the comments on Jedi on the record, that was great. But I think the transformation story's a very real one, but the NFL news you guys just announced, to me, was so much fun and relevant. You had the Commissioner of NFL on stage with you talking about a strategic partnership. That is as top down, aggressive goal as you could get to have Rodger Goodell fly to a tech conference to sit with you and then bring his team talk about the deal. >> Well, ya know, we've been partners with the NFL for a while with the Next Gen Stats that they use on all their telecasts and one of the things I really like about Roger is that he's very curious and very interested in technology and the first couple times I spoke with him he asked me so many questions about ways the NFL might be able to use the Cloud and digital transformation to transform their various experiences and he's always said if you have a creative idea or something you think that could change the world for us, just call me he said or text me or email me and I'll call you back within 24 hours. And so, we've spent the better part of the last year talking about a lot of really interesting, strategic ways that they can evolve their experience both for fans, as well as their players and the Player Health and Safety Initiative, it's so important in sports and particularly important with the NFL given the nature of the sport and they've always had a focus on it, but what you can do with computer vision and machine learning algorithms and then building a digital athlete which is really like a digital twin of each athlete so you understand, what does it look like when they're healthy and compare that when it looks like they may not be healthy and be able to simulate all kinds of different combinations of player hits and angles and different plays so that you could try to predict injuries and predict the right equipment you need before there's a problem can be really transformational so we're super excited about it. >> Did you guys come up with the idea or was it a collaboration between them? >> It was really a collaboration. I mean they, look, they are very focused on players safety and health and it's a big deal for their- you know, they have two main constituents the players and fans and they care deeply about the players and it's a-it's a hard problem in a sport like Football, I mean, you watch it. >> Yeah, and I got to say it does point out the use cases of what you guys are promoting heavily at the show here of the SageMaker Studio, which was a big part of your Keynote, where they have all this data. >> Andy: Right. >> And they're data hoarders, they hoard data but the manual process of going through the data was a killer problem. This is consistent with a lot of the enterprises that are out there, they have more data than they even know. So this seems to be a big part of the strategy. How do you get the customers to actually wake up to the fact that they got all this data and how do you tie that together? >> I think in almost every company they know they have a lot of data. 
And there are always pockets of people who want to do something with it. But, when you're going to make these really big leaps forward; these transformations, the things like Volkswagen is doing where they're reinventing their factories and their manufacturing process or the NFL where they're going to radically transform how they do players uh, health and safety. It starts top down and if the senior leader isn't convicted about wanting to take that leap forward and trying something different and organizing the data differently and organizing the team differently and using machine learning and getting help from us and building algorithms and building some muscle inside the company it just doesn't happen because it's not in the normal machinery of what most companies do. And so it always, almost always, starts top down. Sometimes it can be the Commissioner or CEO sometimes it can be the CIO but it has to be senior level conviction or it doesn't get off the ground. >> And the business model impact has to be real. For NFL, they know concussions, hurting their youth pipe-lining, this is a huge issue for them. This is their business model. >> They lose even more players to lower extremity injuries. And so just the notion of trying to be able to predict injuries and, you know, the impact it can have on rules and the impact it can have on the equipment they use, it's a huge game changer when they look at the next 10 to 20 years. >> Alright, love geeking out on the NFL but Andy, you know- >> No more NFL talk? >> Off camera how about we talk? >> Nobody talks about the Giants being 2 and 10. >> Stu: We're both Patriots fans here. >> People bring up the undefeated season. >> So Andy- >> Everybody's a Patriot's fan now. (Laughter) >> It's fascinating to watch uh, you and your three hour uh, Keynote, uh Werner in his you know, architectural discussion, really showed how AWS is really extending its reach, you know, it's not just a place. For a few years people have been talking about you know, Cloud is an operational model its not a destination or a location but, I felt it really was laid out is you talked about Breadth and Depth and Werner really talked about you know, Architectural differentiation. People talk about Cloud, but there are very-there are a lot of differences between the vision for where things are going. Help us understand why, I mean, Amazon's vision is still a bit different from what other people talk about where this whole Cloud expansion, journey, put ever what tag or label you want on it but you know, the control plane and the technology that you're building and where you see that going. >> Well I think that, we've talked about this a couple times we have two macro types of customers. We have those that really want to get at the low level building blocks and stitch them together creatively however they see fit to create whatever's in their-in their heads. And then we have the second segment of customers that say look, I'm willing to give up some of that flexibility in exchange for getting 80% of the way there much faster. In an abstraction that's different from those low level building blocks. And both segments of builders we want to serve and serve well and so we've built very significant offerings in both areas. 
I think when you look at microservices um, you know, some of it has to do with the fact that we have this very strongly held belief born out of several years of Amazon where you know, the first 7 or 8 years of Amazon's consumer business we basically jumbled together all of the parts of our technology in moving really quickly and when we wanted to move quickly where you had to impact multiple internal development teams it was so long because it was this big ball, this big monolithic piece. And we got religion about that in trying to move faster in the consumer business and having to tease those pieces apart. And it really was a lot of impetus behind conceiving AWS where it was these low level, very flexible building blocks that6 don't try and make all the decisions for customers they get to make them themselves. And some of the microservices that you saw Werner talking about just, you know, for instance, what we-what we did with Nitro or even what we did with Firecracker those are very much about us relentlessly working to continue to uh, tease apart the different components. And even things that look like low level building blocks over time, you build more and more features and all of the sudden you realize they have a lot of things that are combined together that you wished weren't that slow you down and so, Nitro was a completely re imagining of our Hypervisor and Virtualization layer to allow us, both to let customers have better performance but also to let us move faster and have a better security story for our customers. >> I got to ask you the question around transformation because I think that all points, all the data points, you got all the references, Goldman Sachs on stage at the Keynote, Cerner, I mean healthcare just is an amazing example because I mean, that's demonstrating real value there there's no excuse. I talked to someone who wouldn't be named last night, in and around the area said, the CIA has a cost bar like this a cost-a budget like this but the demand for mission based apps is going up exponentially, so there's need for the Cloud. And so, you see more and more of that. What is your top down, aggressive goals to fill that solution base because you're also a very transformational thinker; what is your-what is your aggressive top down goals for your organization because you're serving a market with trillions of dollars of spend that's shifting, that's on the table. >> Yeah. >> A lot of competition now sees it too, they're going to go after it. But at the end of the day you have customers that have a demand for things, apps. >> Andy: Yeah. >> And not a lot of budget increase at the same time. This is a huge dynamic. >> Yeah. >> John: What's your goals? >> You know I think that at a high level our top down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And we want that outstanding customer experience in part is that their operational performance and their security are outstanding, but also that it allows them to build, uh, build projects and initiatives that change their customer experience and allow them to be a sustainable successful business over a long period of time. And then, we also really want to be the technology infrastructure platform under all the applications that people build. 
And we're realistic, we know that you know, the market segments we address with infrastructure, software, hardware, and data center services globally are trillions of dollars in the long term and it won't only be us, but we have that goal of wanting to serve every application and that requires not just the security operational premise but also a lot of functionality and a lot of capability. We have by far the most amount of capability out there and yet I would tell you, we have 3 to 5 years of items on our roadmap that customers want us to add. And that's just what we know today. >> And Andy, underneath the covers you've been going through some transformation. When we talked a couple of years ago, about how serverless is impacting things I've heard that that's actually, in many ways, glue behind the two pizza teams to work between organizations. Talk about how the internal transformations are happening. How that impacts your discussions with customers that are going through that transformation. >> Well, I mean, there's a lot of- a lot of the technology we build comes from things that we're doing ourselves you know? And that we're learning ourselves. It's kind of how we started thinking about microservices, serverless too, we saw the need, you know, we would have we would build all these functions that when some kind of object came into an object store we would spin up, compute, all those tasks would take like, 3 or 4 hundred milliseconds then we'd spin it back down and yet, we'd have to keep a cluster up in multiple availability zones because we needed that fault tolerance and it was- we just said this is wasteful and, that's part of how we came up with Lambda and you know, when we were thinking about Lambda people understandably said, well if we build Lambda and we build this serverless adventure in computing a lot of people were keeping clusters of instances aren't going to use them anymore it's going to lead to less absolute revenue for us. But we, we have learned this lesson over the last 20 years at Amazon which is, if it's something that's good for customers you're much better off cannibalizing yourself and doing the right thing for customers and being part of shaping something. And I think if you look at the history of technology you always build things and people say well, that's going to cannibalize this and people are going to spend less money, what really ends up happening is they spend less money per unit of compute but it allows them to do so much more that they ultimately, long term, end up being more significant customers. >> I mean, you are like beating the drum all the time. Customers, what they say, we encompass the roadmap, I got that you guys have that playbook down, that's been really successful for you. >> Andy: Yeah. >> Two years ago you told me machine learning was really important to you because your customers told you. What's the next traunch of importance for customers? What's on top of mind now, as you, look at- >> Andy: Yeah. >> This re:Invent kind of coming to a close, Replay's tonight, you had conversations, you're a tech athlete, you're running around, doing speeches, talking to customers. What's that next hill from if it's machine learning today- >> There's so much I mean, (weird background noise) >> It's not a soup question (Laughter) And I think we're still in the very early days of machine learning it's not like most companies have mastered it yet even though they're using it much more then they did in the past. 
But, you know, I think machine learning for sure I think the Edge for sure, I think that um, we're optimistic about Quantum Computing even though I think it'll be a few years before it's really broadly useful. We're very um, enthusiastic about robotics. I think the amount of functions that are going to be done by these- >> Yeah. >> robotic applications are much more expansive than people realize. It doesn't mean humans won't have jobs, they're just going to work on things that are more value added. We're believers in augmented virtual reality, we're big believers in what's going to happen with Voice. And I'm also uh, I think sometimes people get bored you know, I think you're even bored with machine learning already >> Not yet. >> People get bored with the things you've heard about but, I think just what we've done with the Chips you know, in terms of giving people 40% better price performance in the latest generation of X86 processors. It's pretty unbelievable in the difference in what people are going to be able to do. Or just look at big data I mean, big data, we haven't gotten through big data where people have totally solved it. The amount of data that companies want to store, process, analyze, is exponentially larger than it was a few years ago and it will, I think, exponentially increase again in the next few years. You need different tools and services. >> Well I think we're not bored with machine learning we're excited to get started because we have all this data from the video and you guys got SageMaker. >> Andy: Yeah. >> We call it the stairway to machine learning heaven. >> Andy: Yeah. >> You start with the data, move up, knock- >> You guys are very sophisticated with what you do with technology and machine learning and there's so much I mean, we're just kind of, again, in such early innings. And I think that, it was so- before SageMaker, it was so hard for everyday developers and data scientists to build models but the combination of SageMaker and what's happened with thousands of companies standardizing on it the last two years, plus now SageMaker studio, giant leap forward. >> Well, we hope to use the data to transform our experience with our audience. And we're on Amazon Cloud so we really appreciate that. >> Andy: Yeah. >> And appreciate your support- >> Andy: Yeah, of course. >> John: With Amazon and get that machine learning going a little faster for us, that would be better. >> If you have requests I'm interested, yeah. >> So Andy, you talked about that you've got the customers that are builders and the customers that need simplification. Traditionally when you get into the, you know, the heart of the majority of adoption of something you really need to simplify that environment. But when I think about the successful enterprise of the future, they need to be builders. how'l I normally would've said enterprise want to pay for solutions because they don't have the skill set but, if they're going to succeed in this new economy they need to go through that transformation >> Andy: Yeah. >> That you talk to, so, I mean, are we in just a total new era when we look back will this be different than some of these previous waves? >> It's a really good question Stu, and I don't think there's a simple answer to it. I think that a lot of enterprises in some ways, I think wish that they could just skip the low level building blocks and only operate at that higher level abstraction. 
That's why people were so excited by things like, SageMaker, or CodeGuru, or Kendra, or Contact Lens, these are all services that allow them to just send us data and then run it on our models and get back the answers. But I think one of the big trends that we see with enterprises is that they are taking more and more of their development in house and they are wanting to operate more and more like startups. I think that they admire what companies like AirBnB and Pintrest and Slack and Robinhood and a whole bunch of those companies, Stripe, have done and so when, you know, I think you go through these phases and eras where there are waves of success at different companies and then others want to follow that success and replicate it. And so, we see more and more enterprises saying we need to take back a lot of that development in house. And as they do that, and as they add more developers those developers in most cases like to deal with the building blocks. And they have a lot of ideas on how they can creatively stich them together. >> Yeah, on that point, I want to just quickly ask you on Amazon versus other Clouds because you made a comment to me in our interview about how hard it is to provide a service to other people. And it's hard to have a service that you're using yourself and turn that around and the most quoted line of my story was, the compression algorithm- there's no compression algorithm for experience. Which to me, is the diseconomies of scale for taking shortcuts. >> Andy: Yeah. And so I think this is a really interesting point, just add some color commentary because I think this is a fundamental difference between AWS and others because you guys have a trajectory over the years of serving, at scale, customers wherever they are, whatever they want to do, now you got microservices. >> Yeah. >> John: It's even more complex. That's hard. >> Yeah. >> John: Talk about that. >> I think there are a few elements to that notion of there's no compression algorithm for experience and I think the first thing to know about AWS which is different is, we just come from a different heritage and a different background. We ran a business for a long time that was our sole business that was a consumer retail business that was very low margin. And so, we had to operate at very large scale given how many people were using us but also, we had to run infrastructure services deep in the stack, compute storage and database, and reliable scalable data centers at very low cost and margins. And so, when you look at our business it actually, today, I mean its, its a higher margin business in our retail business, its a lower margin business in software companies but at real scale, it's a high volume, relatively low margin business. And the way that you have to operate to be successful with those businesses and the things you have to think about and that DNA come from the type of operators we have to be in our consumer retail business. And there's nobody else in our space that does that. So, you know, the way that we think about costs, the way we think about innovation in the data center, um, and I also think the way that we operate services and how long we've been operating services as a company its a very different mindset than operating package software. Then you look at when uh, you think about some of the uh, issues in very large scale Cloud, you can't learn some of those lessons until you get to different elbows of the curve and scale. 
And so what I was telling you is, its really different to run your own platform for your own users where you get to tell them exactly how its going to be done. But that's not the way the real world works. I mean, we have millions of external customers who use us from every imaginable country and location whenever they want, without any warning, for lots of different use cases, and they have lots of design patterns and we don't get to tell them what to do. And so operating a Cloud like that, at a scale that's several times larger than the next few providers combined is a very different endeavor and a very different operating rigor. >> Well you got to keep raising the bar you guys do a great job, really impressed again. Another tsunami of announcements. In fact, you had to spill the beans earlier with Quantum the day before the event. Tight schedule. I got to ask you about the musical festival because, I think this is a very cool innovation. It's the inaugural Intersect conference. >> Yes. >> John: Which is not part of Replay, >> Yes. >> John: Which is the concert tonight. Its a whole new thing, big music act, you're a big music buff, your daughter's an artist. Why did you do this? What's the purpose? What's your goal? >> Yeah, it's an experiment. I think that what's happened is that re:Invent has gotten so big, we have 65 thousand people here, that to do the party, which we do every year, its like a 35-40 thousand person concert now. Which means you have to have a location that has multiple stages and, you know, we thought about it last year and when we were watching it and we said, we're kind of throwing, like, a 4 hour music festival right now. There's multiple stages, and its quite expensive to set up that set for a party and we said well, maybe we don't have to spend all that money for 4 hours and then rip it apart because actually the rent to keep those locations for another two days is much smaller than the cost of actually building multiple stages and so we thought we would try it this year. We're very passionate about music as a business and I think we-I think our customers feel like we've thrown a pretty good music party the last few years and we thought we would try it at a larger scale as an experiment. And if you look at the economics- >> At the headliners real quick. >> The Foo Fighters are headlining on Saturday night, Anderson Paak and the Free Nationals, Brandi Carlile, Shawn Mullins, um, Willy Porter, its a good set. Friday night its Beck and Kacey Musgraves so it's a really great set of um, about thirty artists and we're hopeful that if we can build a great experience that people will want to attend that we can do it at scale and it might be something that both pays for itself and maybe, helps pay for re:Invent too overtime and you know, I think that we're also thinking about it as not just a music concert and festival the reason we named it Intersect is that we want an intersection of music genres and people and ethnicities and age groups and art and technology all there together and this will be the first year we try it, its an experiment and we're really excited about it. >> Well I'm gone, congratulations on all your success and I want to thank you we've been 7 years here at re:Invent we've been documenting the history. You got two sets now, one set upstairs. So appreciate you. >> theCUBE is part of re:Invent, you know, you guys really are apart of the event and we really appreciate your coming here and I know people appreciate the content you create as well. 
>> And we just launched CUBE365 on Amazon Marketplace built on AWS so thanks for letting us- >> Very cool >> John: Build on the platform. appreciate it. >> Thanks for having me guys, I appreciate it. >> Andy Jassy the CEO of AWS here inside theCUBE, it's our 7th year covering and documenting the thunderous innovation that Amazon's doing they're really doing amazing work building out the new technologies here in the Cloud computing world. I'm John Furrier, Stu Miniman, be right back with more after this short break. (Outro music)
SUMMARY :
at org the org to the andyc and it was. of time. That's hard. I think that
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Andy Clemenko | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
3 | QUANTITY | 0.99+ |
StackRox | ORGANIZATION | 0.99+ |
80% | QUANTITY | 0.99+ |
4 hours | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Volkswagen | ORGANIZATION | 0.99+ |
Rodger Goodell | PERSON | 0.99+ |
AirBnB | ORGANIZATION | 0.99+ |
Roger | PERSON | 0.99+ |
40% | QUANTITY | 0.99+ |
Brandi Carlile | PERSON | 0.99+ |
Pintrest | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
two days | QUANTITY | 0.99+ |
4 hour | QUANTITY | 0.99+ |
7th year | QUANTITY | 0.99+ |
Willy Porter | PERSON | 0.99+ |
Friday night | DATE | 0.99+ |
andy@stackrox.com | OTHER | 0.99+ |
7 years | QUANTITY | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
two tags | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.99+ |
Foo Fighters | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Giants | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
andyc.info/dc20 | OTHER | 0.99+ |
65 thousand people | QUANTITY | 0.99+ |
Saturday night | DATE | 0.99+ |
Slack | ORGANIZATION | 0.99+ |
two sets | QUANTITY | 0.99+ |
flask.docker.life | OTHER | 0.99+ |
Werner | PERSON | 0.99+ |
two things | QUANTITY | 0.99+ |
Shawn Mullins | PERSON | 0.99+ |
Robinhood | ORGANIZATION | 0.99+ |
Intersect | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Kacey Musgraves | PERSON | 0.99+ |
4 hundred milliseconds | QUANTITY | 0.99+ |
first image | QUANTITY | 0.99+ |
Deepak Singh, AWS | DockerCon 2020
>> Narrator: From around the globe, it's theCUBE with digital coverage of DockerCon LIVE 2020, brought to you by Docker and its ecosystem partners. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of DockerCon LIVE 2020. Happy to welcome back to the program one of our CUBE alumni, Deepak Singh. He's the vice president of compute services at Amazon Web Services. Deepak, great to see you. >> Likewise, hi, Stu. Nice to meet you again. >> All right, so for our audience that hasn't been in your previous times on theCUBE, give us a little bit about, you know, your role and your organization inside AWS? >> Yeah, so I'm, I've been part of the AWS compute services world from, for the last 12 years in various capacities. Today, I run a number of teams, all our container services, our Linux teams, I also happen to run a high performance computing organization, so it's a nice mix of all the computing that our customers do, especially some of the more new and large scale compute types that our customers are doing. >> All right, so Deepak, obviously, you know, the digital events, we understand what's happening with the global pandemic. DockerCon was actually always planned to be an online event but I want to understand, you know, your teams, how things are affecting, we know distributed is something that Amazon's done, but you have to cut up those two pizza and send them out to the additional groups or, you know, what advice are you giving the developers out there? >> Yeah, in many ways, obviously, how we operate has changed. We are at home, maybe I think with our families. DockerCon was always going to be virtual, but many other events like AWS Summits are now virtual so, you know, in some ways, the teams, the people that get most impacted are not necessarily the developers in our team but people who interact a lot with customers, who go to conferences and speak and they are finding new ways of being effective and being successful and they've been very creative at it. Our customers are getting very good at working with us virtually because we can always go to their site, they can always come to Seattle, or run of other sites for meeting. So we've all become very good at, and disciplined at how do you conduct really nice virtual meetings. But from a customer commitment side, from how we are operating, the things that we're doing, not that much has changed. We still run our projects the same way, the teams work together. My team tends to do a lot of happy things like Friday happy hours, they happen to be all virtual. I think last time we played, what word, bingo? I forget exactly what game we played. I know I got some point somewhere. But we do our best to maintain sort of our team chemistry or camaraderie but the mission doesn't change which is our customers expect us to keep operating their services, make sure that they're highly available, keep delivering new capabilities and I think in this environment, in some ways that's even more important than ever, as customer, as the consumer moves online and so much business is being done virtually so it keeps us on our toes but it's been an adjustment but I think we are all, not just us, I think the whole world is doing the best that they can under the circumstances. >> Yeah, absolutely, it definitely has humanized things quite a bit. From a technology standpoint, Deepak, you know, distributed systems has really been the challenge of you know, quite a long journey that people have been going on. 
Docker has played, you know, a really important role in a lot of these cloud native technologies. It's been just amazing to watch, you know, one of the things I point to in my career is, you know, watching from those very, very early days of Docker to the Cambrian explosion of what we've seen container based services, you know, you've been part of it for quite a number of years and AWS had many services out there. For people that are getting started, you know, what guidance do you give them? What do they understand about, you know, containerization in 2020? >> Yeah, containerization in 2020 is quite a bit different from when Docker started in 2013. I remember speaking at DockerCon, I forget, that's 2014, 2015, and it was a very different world. People are just trying to figure out what containers are that they could package code in deeper. Today, containers are mainstream, it is more customers or at least many customers and they are starting to build new applications, probably starting them either with containers or with some form of server technology. At least that's the default starting point but increasingly, we also seen customers with existing applications starting to think about how do they adapt? And containers are a means to an end. The end is how can we move faster? How can we deliver more quickly? How can our teams be more productive? And how can you do it more, less expensively, at lower cost? And containers are a big part, important and critical piece of that puzzle, both from how customers are operating their infrastructure, that there's a whole ecosystem of schedulers and orchestration and security tools and all the things that an enterprise need to deliver applications using containers that they have built up. Over the last few years, you know, we have multiple container services that meet those needs. And I think that's been the biggest change is that there's so much more. Which also means that when you're getting started, you're faced with many more options. When Docker started, it was this cute whale, Docker run, Docker build Docker push, it was pretty simple, you could get going really quickly. And today you have 500 different options. My guidance to customers really is, boils down to what are you trying to achieve? If you're an organization that's trying to corral infrastructure and trying to use an existing VM more effectively, for example, you probably do want to invest in becoming experts at schedulers and understanding orchestration technologies like ECS and EKS work but if you just want to run applications, you probably want to look at something like Fargate or more. I mean, you could go towards Lambda and just run code. But I think it all boils down to where you're starting your journey. And by the way, understanding Docker run, Docker build and Docker push is still a great idea. It helps you understand how things work. >> All right, so Deepak, you've already brought up a couple of AWS services of, you know, talk about the options out there, that you can either run on top of AWS, you have a lot of native services, you know, ECS, EKS, you mentioned, Fargate there, and very broad ecosystem in space. Could you just, you know, obviously, there are entire breakout sessions to talk about , the various AWS services, but you know, give us that one on one level as to what to understand for container service by AWS. 
>> Yeah, and these services evolved organically and we launched the Amazon Elastic Container Service or ECS in preview in November or whenever re:Invent was that year in 2014, which seems ages ago in the world of containers but in the end, our goal is to give our customers the most choice, so that they can solve problems the way they want to solve them. So Amazon ECS is our native container orchestration service, it's designed to work with and the rest of the AWS ecosystem. So it uses VPC for networking, it uses IAM identity, it uses ALB for load balancing, other than just good examples, some examples of how it works. But it became pretty clear over time that there was a lot of customers who were investing in communities, very often starting in their own data centers. And as they migrated onto the cloud, they wanted to continue using the same tool plane but they also wanted to not have to manage the complexity of communities control planes, upgrades. And they also wanted some of the same integrations that they were getting with ECS and so that's where the Amazon Elastic Kubernetes Service or EKS comes in, which is, okay, we will manage a control plane for you. We will manage upgrades and patches for you. You focus on building your applications in Kubernetes way, so it embraces Kubernetes. It has, invokes with all the Kubernetes tooling and gives you a Kubernetes native experience, but then also ties into the broad AWS ecosystem and allows us to take care of some of the muck that many customers quite frankly don't and shouldn't have to worry about. But then we took it one step further and actually launched the same time as EKS and that's, AWS Fargate, and Fargate was, came from the recognition that we had, actually, a long time ago, which is, one of the beauties of EC2 was that customers never had, had to stop, didn't have to worry about racking and stacking and where a server was running anymore. And the idea was, how can we apply that to the world of containers. And we also learned a little bit from what we had done with Lambda. And we took that and took the server layer and took it out of the way. Then from a customer standpoint, all you're launching is a pod or a task or a service and you're not worrying about which machines I need to get, what types of machines I need to get. And the operational simplicity that comes with it is quite remarkable and quite finding not that, surprisingly, our customers want us to keep pushing the boundary of the kind operational simplicity we can give them but Fargate serves a critical building block and part of that, and we're super excited because, you know, today by far when a new customer, when a customer comes and runs a container on AWS the first time they pick Fargate, we're usually using ECS because EKS and Fargate is much newer, but that is a default starting point for any new container customer on AWS which is great. >> All right, well, you know, Docker, the company really helped a lot with that democratization, container technologies, you know, all those services that you talked about from AWS. I'm curious now, the partnership with Docker here, you know, how do some of the AWS services, you know, fit in with Docker? I'm thinking Docker Desktop probably someplace that they're, you know, or some connection? >> Yeah, I think one of the things that Docker has always been really good at as a company, as a project, is understanding the developer and the fact that they start off on a laptop. 
That's where the original Docker experience that go well, and Docker Desktop since then and we see a ton of Docker Desktop customers have used AWS. We also learned very early on, because originally ECS CLI supported Docker Compose. That ecosystem is also very rich and people like building Docker files and post files and just being able to launch them. So we continue to learn from what Docker is doing with Docker Desktop. We continue working with them on making sure that customizing the Docker Compose and Docker Desktop can run all their services and application on AWS. And we'll continue working with Docker, the company, on how we make that a lot easier for our customers, they are our mutual customers, and how we can learn from their simplicity that Docker, the simplicity that Docker brings and the sort of ease of use the Docker bring for the developer and the developer experience. We learn from that for our own services and we love working with them to make sure that the customer that's starting with Docker Desktop or the Docker CLI has a great experience as they move towards a fully orchestrated experience in the cloud, for example. There's a couple of other areas where Docker has turned out to have had foresight and driven some of our thinking. So a few years ago, Docker released this thing called containerd, where they took out their container runtime from inside the bigger Docker engine. And containerd has become a very important project for us as well as, it's the underpinning of Fargate now and we see a lot of interest from customers that want to keep building on containerd as well. And it's going to be very interesting to see how we work with Docker going forward and how we can continue to give our customers a lot of value, starting from the laptop and then ending up with large scale services in the cloud. >> Very interesting stuff, you know, interesting. Anytime we have a conversation about Docker, there's Docker the technology and Docker the company and that leads us down the discussion of open-source technologies . You were just talking about, you know, containerd believe that connects us to Firecracker. What you and your team are involved in, what's your viewpoint is the, you know, what you're seeing from open-source, how does Amazon think of that? And what else can you share with the audience on this topic? >> Yeah, as you've probably seen over the last few years, both from our work in Kubernetes, with things like Firecracker and more recently Bottlerocket. AWS gets deeply involved with open-source in a number of ways. We are involved heavily with a number of CNCF projects, whether it be containerd, whether it be things like Kubernetes itself, projects in the Kubernetes ecosystem, the service mesh world with Envoy and with the containerd project. So where containerd fits in really well with AWS is in a project that we call firecracker-containerd. They're effectively for Fargate, firecracker-containerd as we move Fargate towards Firecracker becomes out of the container in which you run containerd. It's effectively the equivalent of runC in a traditional Docker engine world. And, you know, one of the first things we did when Firecracker got rolled out was open-source the firecracker-containerd project. It's a go project and the idea was it's a great way for people to build VM like isolation and then build sort of these serverless container architectures like we want to do with Fargate. And, you know, I think Firecracker itself has been a great success. 
You see customer, you know, companies like Libvirt integrating with Firecracker. I've seen a few other examples of, sometimes unbeknownst to us, of people picking a Firecracker and using it for very, very interesting use cases and not just on AWS in other places as well. And we learnt a lot from that that's kind of why Bottlerocket is, was released the way it was. It is both a product and a project. Bottlerocket, the operating system is an open-source project. It's on GitHub, it has all the building tooling, you can take it and do whatever you want with it. And then on the AWS side, we will build and publish Bottlerocket armies, Amazon machine images, we will support them on AWS and there it's a product. But then Bottlerocket the project is something that anybody in the world who wants to run a minimal operating system can choose to pick up. And I think we've learnt a lot from these experiences, how we deal with the community, how we work with other people who are interested in contributing. And you know, Docker is one of the, the Docker open-source pieces and Docker the company are both part of the growing open-source ecosystem that's coming from AWS, especially on the container world. So it's going to be very interesting. And I'll end with, containerization has started impacting other parts of AWS, as well as our other services are being built, very often through ECS and EKS, but they're also influencing how we think about what capabilities we need to build into the broader container ecosystem. >> Yeah, Deepak, you know, you mentioned that some of the learnings from Lambda has impacted the services you're doing on the containerization side. You know, we've been watching some of the blurring of the lines between another container world and the containerization world. You know, there's some open-source projects out there, the CNCS working on things, you know, what's the latest, as you see kind of containerization and serverless and you know, where do you see them going forward? >> This is that I say that crystal balls are not my strong suite. But we hear customers, customers often want the best of both world. What we see very often is that customers don't actually choose just Fargate or just Lambda, they'll choose both. Where for different pieces of their architecture, they may pick a different solution. And sometimes that's driven by what they know, sometimes driven by what fits into their need. Some of the lines blur but they're still quite different. Lambda, for example, as a very event driven architecture, it is one process at a time. It has all these event hooks into the rest of AWS that are hard to replicate. And if that's the world you want to live in or benefit from, you're going to use lambda. If you're running long running services or you want a particular size that you don't get in Lambda or you want to take a more traditional application and convert it into a more modern application, chances are you're starting on Fargate but it fits in really well you have an existing operational model that fits into it. So we see applications evolving very interestingly. It's one reason why when we build a service mesh, we thought forward instead. It is almost impossible that we will have a world that's 100% containers, 100% Lambda or 100% EC2. It's going to be some mix of all of these. We have to think about it that way. 
And it's something that we constantly think about is how can we do things in a way that companies aren't forced to pick one way to it and "Oh, I'm going to build on Fargate" and then months later, they're like, "Yeah, we should have probably done Lambda." And I think that is something we think a lot about, whether it's from a developer's experience side or if it's from service meshes, which allow you to move back and forth or make the mesh. And I think that is the area where you'll see us do a lot more going forward. >> Excellent, so last last question for you Deepak is just give us a little bit as to what, you know, industry watchers will be looking at the container services going forward, next kind of 12, 18 months? >> Yeah, so I think one of the great things of the last 18 months has been that type of application that we see customers running, I don't think there's any bound to it. We see everything from people running microservices, or whatever you want to call decoupled services these days, but are services in the end, people are running, most are doing a lot of batch processing, machine learning, artificial intelligence that work with containers. But I think where the biggest dangers are going to come is as companies mature, as companies make containers, not just things that they build greenfield applications but also start thinking about migrating legacy applications in much more volume. A few things are going to happen. I think we'll be, containers come with a lot of complexity right now. I think you've, if you've seen my last two talks at re:Invent along with David Richardson from the Lambda team. You'll hear that we talk a lot about the fact that we see, we've made customers think about more things than they used to in the pre container world. I think you'll see now that the early adopter techie part has done, cloud has adopted containers and the next wave of mainstream users is coming in, you'll see more attractions come on as well, you'll see more governance, I think service meshes have a huge role to play here. How identity works or this fits into things like control tower and more sort of enterprise focused tooling around how you put guardrails around your containerized applications. You'll see it two or three different directions, I think you'll see a lot more on the serverless side, just the fact that so many customers start with Fargate, they're going to make us do more. You'll see a lot more on the ease of use developer experience of production side because you started off with the folks who like to tinker and now you're getting more and more customers that just want to run. And then you'll see, and that's actually a place where Docker, the company and the project have a lot to offer, because that's always been different. And then on the other side, you have the governance guardrails, and how is going to be in a compliant environment, how am I going to migrate all these applications over so that work will keep going on and you'll more and more of that. So those are the three buckets I'll use, the world can surprise us and you might end up with something completely radically different but that seems like what we're hearing from our customers right now. >> Excellent, well, Deepak, always a pleasure to catch up with you. Thanks so much for joining us again on theCUBE. >> No, always a pleasure Stu and hopefully, we get to do this again someday in person. >> Absolutely, I'm Stu Miniman, thanks as always for watching theCUBE. >> Deepak: Yep, thank you. (gentle music)
SUMMARY :
brought to you by Docker He's the vice president Nice to meet you again. of the AWS compute services world from, but I want to understand, you know, and disciplined at how do you conduct It's been just amazing to watch, you know, Over the last few years, you know, a couple of AWS services of, you know, and actually launched the same time as EKS how do some of the AWS services, you know, and the fact that they and Docker the company the first things we did the CNCS working on things, you know, And if that's the world you and the next wave of to catch up with you. and hopefully, we get to do Absolutely, I'm Stu Miniman, Deepak: Yep, thank you.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon Web Services | ORGANIZATION | 0.99+ |
David Richardson | PERSON | 0.99+ |
Deepak Singh | PERSON | 0.99+ |
Deepak | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
2013 | DATE | 0.99+ |
November | DATE | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Lambda | TITLE | 0.99+ |
2014 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
DockerCon | EVENT | 0.99+ |
2015 | DATE | 0.99+ |
12 | QUANTITY | 0.99+ |
18 months | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Today | DATE | 0.99+ |
Stu | PERSON | 0.99+ |
Docker Desktop | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
Docker | TITLE | 0.98+ |
Firecracker | TITLE | 0.98+ |
Docker Desktop | TITLE | 0.98+ |
Kubernetes | TITLE | 0.98+ |
ECS | TITLE | 0.98+ |
Fargate | ORGANIZATION | 0.98+ |
one reason | QUANTITY | 0.98+ |
100% | QUANTITY | 0.98+ |
three buckets | QUANTITY | 0.98+ |
500 different options | QUANTITY | 0.97+ |
first time | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
two pizza | QUANTITY | 0.97+ |
Libvirt | ORGANIZATION | 0.97+ |
Joe Fernandes, Red Hat | Red Hat Summit 2020
>> From around the globe, it's the CUBE with digital coverage of Red Hat Summit 2020 brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is the CUBE's coverage of a Red Hat Summit 2020 happening digitally. We're connecting with Red Hat executives, thought leaders, practitioners, wherever they are around the globe, bringing them remotely into this online event. Happy to welcome back to the program, Joe Fernandez, who's the Vice President and General Manager, of Core Cloud Platforms with Red Hat. Joe, thanks so much for joining us. >> Yeah, thanks for having me. Glad to be here. >> All right, so, Joe, you know, Cloud, of course, has been a conversation we've been having for a lot of years. When I went to Red Hat Summit last year, when I went to IBM, I think last year, there was discussion of moving from kind of chapter one, if you will, to chapter two. Some of the labels that we put on things back in the early days, like Hybrid Cloud and Multicloud, they're coming into a little bit clearer picture. So, let's just give a high level, what you're seeing from your customers when they talk about Hybrid and Multicloud environment? What does that mean to your customers? And therefore, how is Red Hat meeting them where they are? >> Yeah, sure. So, Red Hat obviously, serves an enterprise customer base. And what we've seen in that customer base, really since the start and it's really informed our strategy, is the fact that all their applications aren't going to run in one place, right? So they're really employing a hybrid class strategy, a Hybrid and Multicloud strategy, that spans from their data centers out to a public cloud, typically then out to multiple public clouds as their cloud investments grow, as they move more applications. And now, even out to the edge for many of those customers. So that's the newest footprint that we're getting asked about. So really we think of that as the open hybrid cloud. And you know, our goal is really to provide a consistent platform for applications regardless of where they run across all those environments. >> Yeah. Let's get down a second on that because we've had consistency for quite a while. You look at the largest cloud provider out there, they said, hybrid environment, will give you the exact same hardware that we're running in the public cloud of your bet. You know, that in your environment. Of course, Red Hat's a software company. You've lived across lots of platforms. We're going to Red Hat's entire existence. So, you know, where is that consistency needed? How do you, well, think about how Red Hat does things? Maybe the same and a little different than some of the other players that are then, positioning and even repositioning their hybrid story over the last year or so. >> Yeah. So, we're really excited to see a lot of folks in the industry, including all the major public cloud providers are now talking about Hybrid and talking about these types of initiatives that we've been talking about for quite some time. But yeah, it's a little bit different when we talk about Hybrid Cloud, when we talk about Multicloud, we're talking about being able to run not just in one public cloud and then in a non-premise clients that mirrors that cloud. We're really talking about being able to run across multiple clouds. So having that consistency across, running in, say Amazon to Azure to Google, and then carrying that into your on-premise environments, whether that's on Bare Metal, on VMware, on OpenStack, and then, like I said, out out to the edge, right? 
So that consistency is important for people who are concerned about how their applications are going to operate in these different environments. Because otherwise, they'd have to manage those differences themselves. I'm speaking as part of Red Hat, right? This is what the company was built on, right? In 20 years ago, it was all about Linux bringing consistency for enterprise applications running across x86 hardware, right? So regardless of who your OEM vendor was, as long as you're building to the x86 standard and leveraging Linux as a base, Red Hat Enterprise Linux became that same consistent operating environment for applications, which is important for our software vendors, but also more importantly for customers themselves as they yep those apps into production. >> Yeah, I guess, you know, last question I have for kind of just the landscape out there. We've been talking for a number of years. When you talk to practitioners, they don't get caught up in the labels that we use in the industry. Do they have a cloud strategy? Yes, most companies have a cloud strategy, and if you ask them is their cloud strategy same today, as it was a quarter ago or a year ago, they say, of course not. Everything's changed. We know in today's day and age, what I was doing a month ago is probably very different from what I am doing today. So, I know you've got a survey that was done of enterprise users. I saw it when it came out a month ago. And, you know, some good data in there. So, you know, where are we? And what data do you have to share with us on kind of the customer adoption with (mumbles). >> Yeah, so I think, you know, we put out a survey not too long ago and we started as, I think, over 60% of customers were adopting a hybrid cloud strategy exactly as I described. Thinking about their applications in terms of, in an environment that spans multiple cloud infrastructures, as well as on-premise footprints. And then, you know, going beyond that, we think that number will grow based on what we saw in that survey. That just mirrors the conversations that I've had with customers, that many of us here at Red Hat have been having with those same customers over the years. Because everybody's in a different spot in terms of their transformation efforts, in terms of their adoption of cloud technologies and what it means for their business. So we need to meet customers where they're at, understand that everybody's at a different spot and then make sure that we can help them make that transition. And it's really an evolution, as opposed to , I think, some people in the past might've thought of as a revolution where all the data centers are going to shut down and everything's going to move all at once. And so helping customers evolve. And that transition is really what Red Hat is all about. >> Yeah. And, so often, Joe, when I talk to some of the vendors out there, when you talk about Hybrid, you talk about Multicloud, it's talking about something you mentioned, it's a box, it's a place, it's, you know, the infrastructure discussion. But when I've been having conversations with a lot of your peers of these interviews for Red Hat Summit. We know that, it's the organization and it's the applications that are hugely important as these changes go and happen. So talk a little bit about that. What's happening to the organization? How are you helping the infrastructure team keep up and the app dev team move forward? >> Yeah, so first, I'll start with, that on the technology side, right? 
One of the things that that has enabled this type of consistency and portability has been sort of the advent of Linux containers as a standard packaging format that can span across all these different (mumbles), right? So we know that Linux runs in all these different footprints and Linux containers, as a portable packaging format, enables that. And then Kubernetes enables customers to orchestrate containers at scale. So that's really what OpenShift is focused on, is delivering an enterprise Kubernetes platform. Again, spanning all these environments that leverages container-based packaging, provides enterprise Kubernetes orchestration and management, to manage in all those environments. What that then also does on the people front is bring infrastructure and operations teams together, right? Because Kubernetes containers represents the agility for both sides, right? Or application developers, it represents the ability to pay their application and all their dependencies. And know that when they run it in one environment, it will be consistent with how it runs in other environments. So eliminating that problem of, works on my machine, but it doesn't work, you know, in prod or what have you. So it brings consistency for developers. Infrastructure teams, it gives them the ability to basically make decisions around where the best places to run these applications without having to think about that from a technology perspective, but really from things that should matter more, like cost and convenience to customers and performance and so forth. So, I think we see those teams coming together. That being said, it is an evolution in people and process and culture. So we've done a lot of work. We launched a global transformation office. We had previously launched a Red Hat open innovation labs and have done a lot of work with our consulting services and our partners as well, to help with, sort of, people in process evolutions that need to occur to adopt these types of technologies as well as, to move towards a more cloud native approach. >> All right. So Joe, what one of the announcements that made it the show, it is talking about how OpenShift is working with virtualization. So, I think back to the earliest container days, there was a discussion of, "oh, you know, Docker and containers, "it kills VM." Or you know, Cloud of course. Some Cloud services run on VMs, other run on containers, they're serverless. So there's a lot of confusion out there as to. >> Yep. >> What happened, we know in IT, no technology ever dies, everything's always additive. It's figuring out the right solutions and the right bet. So, help us understand what Red Hat is doing when it comes to virtualization in OpenShift and Kubernetes and, how is your approach different than some of what we've already seen in the marketplace? >> Yeah, so definitely we've seen just explosive adoption of containers technology, right? Which has driven the OpenShift business and Red Hat's business overall. So, we expect that to continue, right? More applications moving towards that container-based, packaging and deployment model and leveraging Kubernetes and OpenShift to manage those environments. That being said, as you mentioned, virtualization has been around for a really long time, right? And, predominantly, most applications, today, are running virtualized. And so some of them have made the transition to containers or were built a container native from the start. 
But many more are still running in VM based environments and may never make that switch. So, what we were looking at is, how do we manage this sort of hybrid environment from the application perspective where you have some applications running in containers, other applications running in VMs? We have platforms like Red Hat, OpenStack, Red Hat Virtualization that leveraged the KVM hypervisor and Red Hat Enterprise Linux to serve apps running in a VM based environment. What we did with Kubernetes is, instead, how could we innovate to have convergence on the orchestration and management fund? And we leveraged the fact that, KVM, you know, a chosen hypervisor, is actually a Linux process that can itself be containerized. And so by running the hypervisor in a container, we can then span VMs that could be managed on that same platform as the containers run. So what you have in OpenShift Virtualization is the ability to use Kubernetes to manage containerized workloads, as well as, standard VM based workloads. And these are full VMs. These aren't micro VMs or, you know, things like Firecracker Kata Container. These are standard VMs that could be, well, Windows guests or Linux guests, running inside those VMs. And so it helps you basically, manage that type of environment where you may be moving to containers and more cloud native approach, but those containers need to interact or work with applications that are still in a VM based deployment environment. And we think it's really exciting, we've demoed it at the last Red Hat Summit. We're going to talk about it even more here, in terms of how we're going to bring those products to market and enable customers. >> Okay, yeah, Joe, let me make sure I understand this because as you said, it is a different approach. So, number one, if I'm moving towards a (mumbles) management solution, this is going to fit natively into what I'm doing. It's not taking some of my traditional management tools and saying, "oh, I also get some visibility containers." There's more, you know, here's my Kubernetes solution. And just some of those containers happen to be virtualized. Did I get that piece right? >> Yeah, I think it's more like... so we know that Kubernetes is going to be in in the environment because we know that, yeah, people are moving application workloads to standard Linux containers. But we also know that virtual machines are going to still exist in that environment. So you can think about it as, how would we enable Kubernetes to manage a virtual machine in the same way that it manages a Linux container? And, what we do there, is we actually, put the VM inside the container, right? So because the VM, specifically with (mumbles) is just a Linux process, and that's what a Linux container is. It's a Linux process, right? So you can run the hypervisor, span the virtual machines, inside of containers. But those virtual machines, are just like any other VM that would run in OpenStack or Red Hat Virtualization or what have you. And you could, vSphere for example. So those are traditional virtual machines, that are now being managed in a Kubernetes environment. And what we're seeing is sort of, this evolution of Kubernetes to take on these new types of workloads. VMs is just one example, of something that you can now manage with Kubernetes. >> Okay. And, help me understand what this means to really the app dev in my application portfolio. 
Because you know, the original promise of virtualization was, I can just stick my application in a VM and I never need to think about it ever again. And well, that was super helpful when windows NT was going end of life. In 2020, we do find that most companies do want to update their applications, and they are talking about, do I refactor them? Do I make them microservices architecture? I don't want to have that iceberg of an application that I'm just dragging along slowly into the new world. So. >> Yeah. >> What is this virtualization integration with Kubernetes? You mean for the AppDev and the applications? >> Yeah, sure, so what we see customers doing, what we see the application development team is doing is modernizing a lot of their existing applications, right? So they're taking traditional monolithic applications or end tier, like the applications that may run in a VM based environment and they're moving them towards more of a distributed architecture leveraging microservices based approach. But that doesn't happen all at once either, right? So, oftentimes what you see is your microservices, are still connected to VM based applications. Or maybe you're breaking down a monolithic application. The core is still running in a VM, but some of those business functions have now been carved out and containerized. So, you're going to end up in a hybrid environment from the application perspective in terms of how these applications are packaged, and deployed. The question is, what does that mean for your deployment architecture? Does it mean you always have to run a virtualization platform and a container platform together? That's how it's done today, right? OpenShift and Kubernetes run on top of vSphere, they run on top of Amazon and Azure and Google bands, and on top of OpenStack. But what if you could actually just run Kubernetes directly on Bare Metal and manage those types of workloads? That's really sort of the idea. A whole bunch of virtualization solution was based on is, let's just merge VMs natively with Kubernetes in the same way that we manage containers. And then, it can facilitate for the application developer. This evolution of apps that are running in one environment towards apps that are running essentially, in a hybrid environment from how they're packaged and deployed. >> Yeah, absolutely, something I've been hearing for the last year or so, that hybrid deployment, pulling apart application, sometimes it's even, the core piece as you said, is on premises and then I might have some of the more transactional pieces happening in the public cloud. So really interesting. So, how long has Red Hat been working on this? My (mumbles), something, you know, I'm familiar with in the CNCF. I believe it has been around for a couple of years. >> Yeah. >> So talk to us about just kind of how long it took to get here and, fully support stateful applications now. What's the overall roadmap look like? >> Yeah, so, so (mumbles) as a open source project was launched more than two years ago now. As you know, Red Hat really drives all of our development upstream in the open source community. So we launched (mumbles) project. We've been collaborating with other vendors and even customers on that. But then, you know, over time we then decided, how do we bring these technologies to market, which technologies make sense to bring the market? So, (mumbles) is the open source project. 
OpenShift and OpenShift Virtualization, which is what this feature is referred to commercially, is the product that then we would ship and support for running this in production environments. The capabilities, right. So, I think, those have been evolving as well. So, virtual machines have a specific requirements in terms of not only how they're deployed and managed, but how they connect to storage, how they connect networking, how do you do things like fencing and all sorts of live migration and that type of thing. We've been building out those types of capabilities. They're certainly still more to do there. But it's something that we're really excited about, not just from the perspective of running VMs, but just even more broadly from the perspective of how Kubernetes is expanding to take on new workloads, right? Because Kubernetes has moved far beyond just running, cloud native applications, today, you can run stateful services in containers. You can run things like AI and machine learning and analytics and IoT type services. But it hasn't come for free, right? This has come through a lot of hard work in the Kubernetes community, in the various associated communities, the container communities, communities like (mumbles). But it's all kind of trying to leverage that same automation, that same platform to just do more things. The cool thing is, it'll not just be Red Hat talking about it, but you'll see that from a lot of customers that are doing sessions at our summit this year and beyond. Talking about how, what it means to them. >> Yeah, that's great. Always love hearing the practitioner viewpoint. All right, Joe, I want to give you the final word when it comes to this whole space things kind of move pretty fast, but also we remember it when we first saw it. So, tell us what the customers who were kind of walking away from Red Hat Summit 2020 should be looking at and understanding that they might not have thought about if they were looking at Kubernetes, a year or two ago? >> Yeah, I think a couple of things. One is, yeah, Kubernetes and this whole container ecosystem is continuing to evolve, continuing to add capabilities and continue to expand the types of workloads, that it can run. Red Hat is right in the center of it. It's all happening in open source. Red Hat as a leading contributor to Kubernetes and open source in general, is driving a lot of this innovation. We're working with some great customers and partners, other vendors, who are working side by side with us as well. And I think the most important thing is we understand that it's an evolution for customers, right? So this evolution towards moving applications to the public cloud, adopting a hybrid cloud approach. This evolution in terms of expanding the types of workloads, and how you run and manage them. And that approach is something that we've always helped customers do and we're doing that today as they move out towards embracing a cloud native. >> All right, well, Joe Fernandez, thank you so much for the updates. Congratulations on the launch of OpenShift Virtualization. I definitely look forward to talking to some the customers in finding out that helping them along their hybrid cloud journey. All right. Lots more coverage from the CUBE at Red Hat Summit. I'm Stu Miniman ,and thank you for watching the CUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joe Fernandez | PERSON | 0.99+ |
Joe Fernandes | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Linux | TITLE | 0.99+ |
a month ago | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
a year ago | DATE | 0.99+ |
both sides | QUANTITY | 0.99+ |
one example | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
Kubernetes | TITLE | 0.99+ |
Red Hat Summit 2020 | EVENT | 0.99+ |
Red Hat Summit | EVENT | 0.98+ |
a year | DATE | 0.98+ |
today | DATE | 0.98+ |
OpenShift | TITLE | 0.98+ |
first | QUANTITY | 0.98+ |
a quarter ago | DATE | 0.98+ |
over 60% | QUANTITY | 0.97+ |
Multicloud | ORGANIZATION | 0.97+ |
Red Hat Virtualization | TITLE | 0.97+ |
Red Hat Enterprise | TITLE | 0.97+ |
one place | QUANTITY | 0.97+ |
Red Hat | TITLE | 0.97+ |
one | QUANTITY | 0.96+ |
Windows | TITLE | 0.96+ |
Firecracker | TITLE | 0.96+ |
this year | DATE | 0.96+ |
One | QUANTITY | 0.95+ |
Red Hat Enterprise Linux | TITLE | 0.95+ |
IBM | ORGANIZATION | 0.93+ |
one environment | QUANTITY | 0.93+ |
two ago | DATE | 0.92+ |
windows NT | TITLE | 0.92+ |
x86 | TITLE | 0.91+ |
Clayton Coleman, Red Hat | Red Hat Summit 2020
>> From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of Red Hat Summit 2020, of course. The event this year is digital. We're talking to Red Hat executives, partners and customers where they are around the globe, pulling them in remotely, and I'm happy to welcome back to the program one of our CUBE alumni on a very important topic, of course, Red Hat OpenShift. Joining me is Clayton Coleman, who's the OpenShift chief architect with Red Hat. Clayton, thanks so much for joining us. >> Thank you for having me. >> All right, so before we get into the product, it's probably worthwhile that we talk about what's happening in the community, specifically Kubernetes and the whole cloud native space. Normally we would have gotten together; I would have seen you at KubeCon at the end of March. But instead, here we are at the end of April, looking out at more CNCF events later this year. But first, Red Hat Summit is a great open source event with a broad community, so I would really love your viewpoint as to what's happening in that ecosystem. >> It's been a really interesting year, obviously, for an open source community. We react to this like we always react to all the things that go on in open source. People come to the community, and sometimes they have more time, and sometimes they have less time. Just from a community perspective, there's been a lot of people reaching out to their colleagues outside of their companies, to their friends and coworkers and all of the different participants in the community. And there's been a lot of people getting together for a little bit of extra time, trying to connect virtually where they can't connect physically. And it's been great to at least see where we've come this year. We haven't had KubeCon, and that'll be coming up later this year, but Kubernetes just had the 1.18 release, and I think Kubernetes is moving into that phase where it's a mature open source project. We've got a lot of the processes down. I'm really happy with the work that the steering committee has gone through. We handed off the last of the bootstrap steering committee members to the new, fully elected steering committee last year, and it's gone absolutely smoothly, which has been phenomenal. The core project is trying to be a little bit more stable and to focus on closing out those loose ends, being a little bit more conservative to change. And at the same time, the ecosystem has really exploded in a number of directions, as Kubernetes becomes more of a bedrock technology for enterprises, individuals, startups, and everything in between. We've really seen a huge amount of innovation in the space, and every year it just gets bigger and bigger. There are a lot of exciting projects whose authors have never even talked to somebody on the Kubernetes project, but they have built and solved problems for their environments without us ever having to be involved, which I think is success. >> Yeah, Clayton, you know, one of the challenges when you talk to practitioners out there is that just keeping up with the pace of change can really be challenging. Something we saw acutely was Docker rolling out updates every six weeks.
Most customers aren't going to be able to change fast enough to keep up with things you love your view point both is toe really what the CN CF says, as well as how Red Hat thinks of products. So you talked about you know, kubernetes 1.18. My understanding, even Google isn't yet packaging and offering that version there. So there's a lag between things. And as we start talking about managing across lots of clusters, how does Red Hat think of this? How should customers think about this? How do we make sure that we're, you know, staying secure and keeping updated on things without getting run over by the constant treadmill of >>change? That the interesting part about kubernetes Is it so much more than just that core project? You know, no matter what any of us in the in the core kubernetes project or in the products that red hat that build around open shift and layers on top, there's a There's a whole ecosystem of components that most people think of this fundamental to accomplishing building applications deploying them, running them, Whether it's their continuous integration pipelines or it's their monitoring stacks, we really as communities has become a little bit more conservative. >>Um, I >>think we really nail down our processes for taking that change from the community, testing it. You know, we run tens of thousands of automation tests a week on the latest and greatest kubernetes code, given time to soak, and we did it together with all those pieces of the ecosystem and then make sure that they work well together. And I've noticed over the last two years that the rate of oops we missed that in KUBERNETES 1 17 that by the time someone saw it, people are already using that that started to go down for us, it really hasn't been about the pace of keeping up with the upstream. But it's about making sure that we can responsibly pull together all the other ecosystem components that are still have much newer and a little bit. How do we say, Ah, they are then the exciting phase of their development while still giving ah predictable, reliable update stream. I would say that the challenges that most people are going to see is how they bring together all those pieces. And that's something that, on open shift, we think of as our goal is to help pull together all the pieces of this ecosystem, Um, and to make some choices for customers that makes sense and to give them flexibility where it's not clear yet what the right choice might be or where different people could reasonably disagree. And I'm really excited. I feel like we've got our We have a release cadence down and we're shipping the latest Cube after it's had time to quickly review, and I think we've gotten better and better at that. So I'm really proud of the team on Red Hat and how they've worked within the community so that everybody benefits from that in that testing of that stability. >>Great. I'd like to teach here, you dig in a little bit on the application side what's happening from the work loads that customers are using? Ah, what other innovations happening around that space? And how is Red Hat really helping? Really, The the infrastructure team and the developer team work even closer together, like Red Hat has done for a long time. >>This is This is a great question. I say There's two key, um, two key groups coming together. People are bringing substantial important critical production workloads, and they expect things both to just work, but also to be able to understand it. And they're making the transition. 
Ah, lot of folks I talked to were making the transition from previous systems they've got. They've been running open shift for a while, or they've been running kubernetes for a while, and they're getting ready to move, um, a significant portion of their applications over. And so, you know, in the early days of any project, you get the exciting Greenfield development and you get to go play with new technologies. But as you start moving your 1st 1 and then 10 and then 100 of your core business applications from the EMS or from bare metal into containers, you're taking advantage of that technology in a responsible way. And so the the expectations on us as engineers and community members is to really make sure that we're closing out the little stuff. You know, no bug is too small, but it can't trip up someone's production applications. So seeing a lot of that whether it's something new and exciting like, Um uh, model is a service or ai workloads or whether it's traditional big enterprise transaction processing. APS on the other side on that development, um, model I think we're starting to see phase to our community is 2.0, in the community, which is people are really leveraging the flexibility and the power of containers, things that aren't necessarily new to people who had. We got into containers early and had a chance to go through a couple of iterations. But now people are starting to find patterns that up level development teams, so being able to run applications the same way on a local machine as in a production environment. Well, most production environments are there now, and so people are really having toe. They're having to go through all of their tools and saying, Well, does this process that works for an individual developer also work when I want to move it there, my production or staging environments to production, and so on. New projects like K native and tectonic, which are kubernetes native, that's just one part of the ecosystem around development. On top of kubernetes, there's tons of exciting projects out there from companies that have adopted the full stack of kubernetes. They built it into their mindset, this idea of flexible infrastructure, and we're seeing this explosion of new ways where kubernetes is really just a detail, and containers are just the detail and the fact that it's running this little thing called Docker down at the heart of it. Nobody talks about anymore, and so that that transition has been really exciting. I think there's a lot that we're trying to do to help developers and administrators see eye to eye. And a lot of it's learning from the customers and users out there who really paved the way the which is the open source way. It's learning from others and helping others benefit from that. >>Yeah, I think you bring up a really important point we've been saying for a couple of years. Now that you know KUBERNETES should get to the point where it's boring and boring in a way also cause it's gonna be baked in everywhere we saw from basically customers just taking the code, really spending a lot of their own things by building the stack to, of course, lots of customers have used open shift over the year to If I'm adopting Public Cloud more and more, they're using those services from that standpoint. Can you talk a bit about how Red Hat is really integrating with public clouds? And you know your architectural technical philosophy on that? And how might that be? 
Different from some other companies that you might call a little bit more cloud adjacent, as opposed to being deeply integrated with the public cloud? >> The interesting thing about Kubernetes is that while it was developed on top of the clouds, it wasn't really built from day one assuming a cloud underneath it. And I think that was an opportunity we really missed. And to be fair, we had to make the thing work first before we depended on these unreliable clouds. When we started, the clouds were really hitting their stride on stability and reliability, and they were becoming the obvious choice. So some of what we've tried to do is take flexible infrastructure as a given, and assume that the things the cloud provides should be programmed for the benefit of the developer and the application. And I think that's a key trend: we're not using the cloud because our administration teams want us to, we're using the cloud because it makes us more powerful developers. It enables new scenarios. It shortens the time between idea and reality. What we have done in OpenShift is we've really built around the idea that OpenShift running on a cloud should take advantage of that cloud to an extreme degree, which means infrastructure should be flexible, and the machines in that cluster need to come and go according to the demands of the applications on top of it. So we're giving a little bit more power to the cluster and taking a little bit away from the cloud. But that also needs to benefit those who are running on premise, because I think, as you noted, our goal is this ubiquitous Kubernetes environment everywhere, and the operations teams, the development teams, and the DevOps teams in between need to have a consistent environment. If you can do this on the cloud but you don't have that flexibility on premise, you've lost something. And so what we've tried to do as well is to think about those ideas that we think of as, quote unquote, cloud native. That starts with immutable operating systems. It starts with everything being declarative and working backwards from "I want to have 15 machines," and then the controllers on the cluster say, oh, well, one of the machines has gone bad, let's replace it. On the cloud, you ask the cloud infrastructure provider, the cloud API, for a new machine, and then you replace it automatically, and no one knows any better. On premise, we'd love to do the same thing with both bare metal and virtualization on top of Kubernetes, so we have that flexibility to say, you may not have all of the options, but we should certainly be able to say, oh, well, this hardware is bad or the machine stopped, so let's reboot it. There's a lot of that same mindset that can be applied. We think that if you need virtualization, you can always use it, but virtualization is a layer on top that benefits from some of the same things that all the other extensions and applications on top of Kubernetes can take advantage of. So we're trying to make sure that you have flexible, reliable storage on premise through our Ceph and Red Hat storage products, which are built on top of the cluster, exactly like virtualization is, on top of the cluster. So you get cloud native storage mixed in, and we're working with those teams to take those operational best practices.
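The "ask for 15 machines and let controllers replace the bad ones" idea Clayton describes maps onto the Machine API (OpenShift's machine.openshift.io resources, Cluster API upstream). A rough client-side sketch follows; the group/version, namespace, and resource names vary by distribution and are illustrative here.

```python
# Rough sketch of declarative machine management from the client side.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# "I want 15 machines": declare the count and let controllers reconcile toward it.
api.patch_namespaced_custom_object(
    group="machine.openshift.io", version="v1beta1",
    namespace="openshift-machine-api", plural="machinesets",
    name="worker-us-east-1a",                 # hypothetical MachineSet
    body={"spec": {"replicas": 15}})

# "One of the machines has gone bad, let's replace it": deleting the Machine asks
# the provider (a cloud API, or a bare-metal/virtualization driver) for a new one.
api.delete_namespaced_custom_object(
    group="machine.openshift.io", version="v1beta1",
    namespace="openshift-machine-api", plural="machines",
    name="worker-us-east-1a-abc12")           # hypothetical Machine
```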
I think one of the things that interests me is that nobody who was running an early version of Ceph years ago had a ready-made approach for operating these very large systems at scale; organizations like CERN have been using Ceph for over a decade at extremely large scales. Part of our mindset is that we think it's time to bake some of that knowledge actually into our software. For a very long time we've been building out and adding more and more software, but we always left the automation, and the knowledge about how that software is supposed to be run, off to the side. And so, taking that idea, we talk about operators. Kubernetes really enshrines this principle of taking some of that operational knowledge into the software we ship. That software can rely on Kubernetes, OpenShift tries to hide the details of the infrastructure underneath, and our goal, I think in the long run, is that it will just make everybody's lives easier. I shouldn't have to ship you a Ceph admin for you to be successful. And we think there's a lot more room here that's really going to improve how operations teams work and the software that they use day to day. >> So, Clayton, you mentioned virtualization as one of the topics in there. Of course, virtualization is very prevalent in a customer's data center environment today. Red Hat OpenShift, oftentimes in data centers, is sitting on VMware environments. Recently, VMware announced that they have Kubernetes baked into their solution, and Red Hat has OpenShift with Red Hat Virtualization. Maybe, without going into too much depth, and you probably have breakouts and white papers on this, what kind of decision points should customers be thinking about when they're deciding, do I do this on bare metal, or do I do it on virtualization? What are some of the high-level trade-offs when they need to make those decisions? >> I think the first one is that virtualization is a mature technology. It's a known quantity for many organizations, and so for those who are comfortable with virtualization, I'd say, like any responsible architecture and engineering team, you don't want to stop using something that's working well just because you can. And a lot of what I see as the transition companies are on is that some organizations, without a big investment in virtualization, don't see the need for it anymore, except as maybe a technical detail of how they isolate insecure workloads. One of the great things about virtualization technology, as we've all become aware over the last couple of years, is that it creates a boundary between workloads and the underlying environment. That doesn't mean that the underlying environment and containers can't be as secure or benefit from those same techniques. And so we're starting to see in the community this kind of spectrum of virtualization, all the way from big traditional virtualization to very streamlined, stripped-down virtualization wrappers around containers, like some of the cloud providers use for their application environments. So I'm really excited that the open source community is touching each of these points on the spectrum. Some of our goals are, if you're happy with your infrastructure provider, we want to work well with it, and that's the pragmatic view: everyone's on a different step in that journey. The benefit of containers is that no matter how fast you make a VM, it's never going to be quite as fast as containers.
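Those "stripped-down virtualization wrappers around containers" surface in Kubernetes as a per-pod RuntimeClass choice. A minimal sketch, assuming the cluster operator has already installed a Kata-style runtime and registered a handler named "kata"; the namespace and image are illustrative.

```python
# Sketch: choosing VM-backed isolation for one pod via a RuntimeClass.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="untrusted-job", namespace="demo"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",   # run this pod's containers inside a lightweight VM boundary
        restart_policy="Never",
        containers=[client.V1Container(
            name="task",
            image="registry.example.com/jobs/render:latest",
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```

The rest of the pod spec is unchanged, which is the point: isolation strength becomes a per-workload knob rather than a platform-wide decision.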
It's also never going to be quite as easy for a developer to run on their laptop. And I think, working through this, there's still a lot of work that we as a community have to do around making it easier for developers to build containers and test them locally, in smaller environments. But all of that flexibility can still benefit from virtualization underneath, or virtualization used as an isolation technology. So projects like Kata, and some of the work being done in the open source community around projects like Firecracker, taking the same open source ideas and remixing them at different points, give us a lot of flexibility. So I would say I'm actually less interested in virtualization than in all of the other technologies that are application centric, because at the heart of it, a VM isn't really a developer centric idea. It's specifically an administrative concept that benefits the administrator, and developers can take advantage of it. But all of the capabilities you think about when you're building an application, like scaling out, making sure patches are applied, being able to roll back, separating your configuration, and then all of the hundreds of other layers of complexity we add around that, like service mesh and the ability to gracefully tolerate failures in your database, those are where I think virtualization needs to work with the platform, rather than being something that dominates how we think about the platform. It's application first, not VM first. >> Yeah, you're absolutely right. The critique I've always given, for a number of years now, is that with virtualization the promise was, let's take that old application that probably should have been updated, just shove it in a VM, and never think about it again. That's not doing good things for the user. So if that's one end of the spectrum, then at the other end of the spectrum, trying not to think about infrastructure, you mentioned Knative. One of the things I've been digging in and trying to learn more about at Red Hat Summit has really been OpenShift Serverless. So give us the update on that piece. That's obviously a very different discussion than what we were just having from a virtualization standpoint. How does OpenShift look at serverless, and how does that tie in? If I'm doing serverless on Amazon versus some of the other open source options for serverless, how should I be thinking about that? >> There's a lot of great choices on the spectrum out there, and I love the word spectrum here, because Knative kind of sits in a spot where it tries to be, as the name says, as Kubernetes native as possible, which lets you tap into some of those additional capabilities when you need them. And one of the things I've always appreciated is that the more restrictive framework is usually the better one, because it is doing that one thing and doing it really well. We learned this with Rails, we learned this with Node.js, as people built up, over the years, the idea of simple development platforms. The core function idea is a great, simple idea, but sometimes you need to break out of that: you need extra flexibility, or your application needs to run longer, or slow start is actually an issue.
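For context on what that Kubernetes-native middle ground looks like, here is a minimal sketch of a Knative Service, the resource OpenShift Serverless builds on: the developer supplies a container, and the platform handles routing, revisions, and scale-to-zero. The namespace, image, and annotation values are illustrative assumptions.

```python
# Minimal sketch of a Knative Service created through the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()

service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello-fn", "namespace": "demo"},
    "spec": {
        "template": {
            "metadata": {"annotations": {
                "autoscaling.knative.dev/maxScale": "10",  # cap bursts; idle revisions scale to zero
            }},
            "spec": {"containers": [{
                "image": "registry.example.com/fns/hello:latest",
                "env": [{"name": "TARGET", "value": "world"}],
            }]},
        }
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev", version="v1", namespace="demo",
    plural="services", body=service)
```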
One of the things I think is most interesting about Knative, and I see end users come at it this way too, is that it gives you some of the flexibility of Kubernetes and a lot of the simplicity of functions as a service. But I think there's going to be an inevitable set of simpler use cases where a very opinionated way of running applications fits, and I think that flexibility will really benefit Knative, whereas some of the more opinionated serverless frameworks lose a little bit of that. So that's one dimension where I still think Knative is well positioned to capture the broadest possible audience, which for Kubernetes and containers was kind of our mindset: we wanted to solve enough of the problems that you can run all your software, without solving all those problems to such a level that there's endless complexity, although we've been accused of having endless complexity in Kubernetes before. It's trying to think through what problems everyone is going to have, and give them a way out. At the same time, for us, when we think about prioritization, functions as a service is about integration. It's about taking applications and connecting them, connecting them through Kubernetes. And so it really depends on identity, and access to data, and tying that into your cloud environment if you're running on top of a cloud, or tying it into your back end databases if you're on premise. I think that is where the ecosystem is still working to bring together and standardize some of those pieces, in Kubernetes or on top of Kubernetes. What I'm really excited about is what the team has done. There's been this core community effort to get Knative to GA quality, and alongside that, the OpenShift Serverless team has been trying to make it dramatically simpler to adopt. If you have Kubernetes and OpenShift, it's a one-click action to get started with Knative, and just like any other technology, how accessible it is determines how easily users can get started and build the applications they need. So for us, it's not just about the core technology. It's about someone who's not familiar with serverless, or not familiar with Kubernetes, being able to bring up an editor, build a function, deploy it on top of OpenShift, and see it scale out like a normal Kubernetes application, without having to know about pods or persistent volumes or nodes. Those are some of the steps I've been really proud the team has done. I think there's a huge amount of innovation that will happen this year and next year. As the maturity of the Kubernetes ecosystem really grows up, we'll start to see standardized technologies for sharing identity across multiple clouds and multiple environments. It's no good if you've got applications on the cloud that need to tie into your corporate LDAP, but you can't connect your corporate LDAP to the cloud, so your applications need a third identity system. Nobody wants a third identity system. So working through some of these challenges that I think hybrid organizations are already facing, our job is just to work with them in the open source communities, and with the cloud providers, and partner with them in open source, so that the technologies in Kubernetes fit very well into whatever environment they run. >> All right, well, Clayton, really appreciate all the updates there. I know the community is definitely looking forward to digging through some of the breakout sessions and reading all the new announcements.
And, of course, we look forward to seeing you and the team participating in many of the Kubernetes-related events happening later this year. >> That's right. It's going to be a good year. >> All right. Thanks so much for joining us. I'm Stu Miniman, and as always, thank you for watching theCUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Clayton | PERSON | 0.99+ |
15 machines | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Clinton | PERSON | 0.99+ |
CERN | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
100 | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
red hat | ORGANIZATION | 0.99+ |
Clayton Coleman | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
two key groups | QUANTITY | 0.99+ |
VM Ware | ORGANIZATION | 0.99+ |
one click | QUANTITY | 0.99+ |
two key | QUANTITY | 0.99+ |
Cube | ORGANIZATION | 0.99+ |
Summit 2020 | EVENT | 0.99+ |
end of April | DATE | 0.98+ |
Red Hat Summit | EVENT | 0.98+ |
SEF | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
end of March | DATE | 0.97+ |
this year | DATE | 0.97+ |
one part | QUANTITY | 0.97+ |
Red Hat Summit 2020 | EVENT | 0.97+ |
one dimension | QUANTITY | 0.97+ |
later this year | DATE | 0.96+ |
today | DATE | 0.96+ |
each | QUANTITY | 0.93+ |
Kubernetes | TITLE | 0.93+ |
Day one | QUANTITY | 0.93+ |
hundreds | QUANTITY | 0.92+ |
Kay | PERSON | 0.92+ |
one end | QUANTITY | 0.91+ |
20 years ago | DATE | 0.91+ |
one thing | QUANTITY | 0.91+ |
Kata | TITLE | 0.91+ |
1st 1 | QUANTITY | 0.91+ |
red hat | TITLE | 0.89+ |
CN CF | ORGANIZATION | 0.87+ |
over a decade | QUANTITY | 0.86+ |
tens of thousands of automation tests | QUANTITY | 0.85+ |
last two years | DATE | 0.84+ |
Minuteman | PERSON | 0.82+ |
Kubernetes | ORGANIZATION | 0.82+ |
Cube | COMMERCIAL_ITEM | 0.82+ |
every six weeks | QUANTITY | 0.81+ |
1/3 | QUANTITY | 0.79+ |
cf | EVENT | 0.75+ |
Steering Committee | ORGANIZATION | 0.75+ |
last couple years | DATE | 0.74+ |
K native s | ORGANIZATION | 0.74+ |
a week | QUANTITY | 0.73+ |
BM | ORGANIZATION | 0.68+ |
kubernetes | TITLE | 0.66+ |
later this | DATE | 0.63+ |
Jeremy Daly, Serverless Chats | CUBEConversation January 2020
(upbeat music) >> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Stu Miniman. >> Hi, I'm Stu Miniman, and welcome to the first interview of theCUBE in our Boston area studio for 2020. And to help me kick it off, Jeremy Daly, who is the host of Serverless Chats as well as runs Serverless Days Boston. Jeremy, I saw you at re:Invent, way back in 2019, and we'd actually had some of the people in the community saying, hey, "I think you guys actually live and work right near each other." >> Right. >> And you're only about 20 minutes away from our office here, so thanks so much for making the long journey here, and not having to get on a plane to join us. >> Well, thank you for having me. >> All right, so as Calvin from Calvin and Hobbes says, "It's a new decade, but we don't have any base on the moon, we don't have flying cars that general people can use, but we do have serverless." >> And our robot vacuum cleaners. >> We do have robot vacuum cleaners. >> Which are run by serverless, as a matter of fact. >> A CUBE alum, so they'd be happy that we get to mention them there. So yeah, with serverless there are things like the iRobot vacuum, as well as Alexa. Usually when I'm explaining to people what this is and they don't understand it, it's like, oh, you've used Alexa? Well, those are the functions underneath, and you think about how these things turn on and off, a little bit like that. But maybe we don't need to get into the long ontological discussion. You're a serverless hero, so give us a little bit of what you're hearing from people. What are some of the exciting use cases out there, and where is serverless being used in its maturity today? >> Yeah, well, the funny thing about serverless and the term serverless itself, and I do not want to get into a long discussion about this, obviously: I actually wrote a post last year called "Stop calling everything serverless," because basically people are calling everything serverless. What I look at it as is something that just makes it really easy for developers to abstract away that back end infrastructure, not having to worry about setting up Kubernetes or going through the process of setting up virtual machines and installing software; a lot of that stuff is kind of handled for you. And I think that has enabled a lot of companies, especially start-ups, which are a huge market for serverless, but also enterprises. It's enabled them to give more power to their developers and to look at new products they want to build, new services they want to tackle, or even old services that may have stability issues, or things like long-running ETL tasks. They've found ways to work at the peripheral edges of these monolithic applications or mainframes they are using, and to run very small jobs using functions as a service. So I see a lot of that; I think that is a big use case, and you see a lot of large companies doing it. Obviously, people are building full-fledged applications too. So yes, the web-facing user application is certainly a thing. People are building APIs; you've got API Gateway, and they just released the new HTTP API, which makes it even faster.
To run those sort of things, this idea of cold starts, you know in AWS trying to get rid of all that stuff, with the new VPC networking, and some of the things they are doing there. So you have a lot of those type of applications that people are building as well. But it really runs the gambit, there are things all across the board that you can do, and pretty much anything you can do with the traditional computing environment, you can do with a serverless computing environment. And obviously that's focusing quite a bit on the functions as a service side of things, which is a very tiny part of serverless, if you want to look at it, you know sort of the broader picture, this service full or managed services, type approach. And so, that's another thing that you see, where you used to have companies setting up you know, mySQL databases and clusters trying to run these things, or even worse, Cassandra rings, right. Trying to do these things and manage this massive amount of infrastructure, just so that they could write a few records to a database and read them back for their application. And that would take months sometimes, for them to get it setup and even more time to try to keep running them. So this sort of revolution of managed services and all these things we get now, whether that the things like managed elastic search or elastic search cloud doing that stuff for you, or Big Table and Dynamo DB, and Manage Cassandra, whatever those things are. I'm just thinking a lot easier for developers to just say hey, I need a database, and okay, here it is, and I don't have to worry about the infrastructure at all. So, I think you see a lot of people, and a lot of companies that are utilizing all of these different services now, and essentially are no longer trying to re-invent the wheel. >> So, a couple of years ago, I was talking to Andy Jassy, at an interview with theCube, and he said, "If I was to build AWS today, "I would've built it on serverless." And from what I've seen over the last two or three years or so, Amazon is rebuilding a lot of there servers underneath. It's very interesting to watch that platform changing. I think it's had some ripple effect dynamics inside the company 'cause Amazon is very well known for their two pizza teams and for all of their products are there, but I think it was actually in a conversation with you, we're talking about in some ways this new way of building things is, you know a connecting fabric between the various groups inside of Amazon. So, I love your view point that we shouldn't just call everything serverless, but in many ways, this is a revolution and a new way of thinking about building things and therefore, you know there are some organizational and dynamical changes that happen, for an Amazon, but for other people that start using it. >> Yeah, well I mean I actually was having a conversation with a Jay Anear, whose one of the product owners for Lambda, and he was saying to me, well how do we sell serverless. How do we tell people you know this is what the next way to do things. I said, just, it's the way, right. And Amazon is realized this, and part of the great thing about dog fooding your own product is that you say, okay I don't like the taste of this bit, so we're going to change it to make it work. And that's what Amazon has continued to do, so they run into limitations with serverless, just like us early adopters, run into limitations, and they say, we'll how do we make it better, how do we fix it. 
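The managed-services pattern Jeremy describes above — write a few records to a database and read them back without running MySQL clusters or Cassandra rings — reduces, in AWS terms, to something like the following minimal sketch of a Lambda handler backed by DynamoDB. The table name, fields, and route are illustrative assumptions, not details from the interview.

```python
# Minimal sketch of the managed-services model: business logic in a Lambda
# handler, storage in DynamoDB, nothing to provision or patch by hand.
import json
import boto3

table = boto3.resource("dynamodb").Table("orders")  # hypothetical table name

def handler(event, context):
    # Invoked through an API Gateway HTTP API route such as POST /orders.
    order = json.loads(event.get("body") or "{}")
    table.put_item(Item={"orderId": order["id"], "total": order["total"]})
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```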
And they have always been really great to listening to customers. I complain all the time, there's other people that complain all the time, that say, "Hey, I can't do this." And they say, "Well what if we did it this way, and out of that you get things like Lambda Destinations and all different types of ways, you get Event Bridge, you get different ways that you can solve those problems and that comes out of them using their own services. So I think that's a huge piece of it, but that helps enable other teams to get past those barriers as well. >> Jeremy, I'm going to be really disappointed if in 2020, I don't see a T-shirt from one of the Serverless Days, with the Mandalorian on it, saying, "Serverless, this is the way." Great, great, great marketing opportunity, and I do love that, because some of the other spaces, you know we're not talking about a point product, or a simple thing we do, it is more the way of doing things, it's just like I think about Cybersecurity. Yes, there are lots of products involved here but, you know this is more of you know it's a methodology, it needs to be fully thought of across the board. You know, as to how you do things, so, let's dig in a little bit. At reInvent, there was, when I went to the serverless gathering, it was serverless for everyone. >> Serverless for everyone, yes. >> And there was you know, hey, serverless isn't getting talked, you know serverless isn't as front and center as some people might think. They're some people on the outside look at this and they say, "Oh, serverless, you know those people "they have a religion, and they go so deep on this." But I thought Tim Wagner had a really good blog post, that came out right after reInvent, and what we saw is not only Amazon changing underneath the way things are done, but it feel that there's a bridging between what's happening in Kubernetes, you see where Fargate is, Firecracker, and serverless and you know. Help us squint through that, and understand a little bit, what your seeing, what your take was at reInvent, what you like, what you were hoping to see and how does that whole containerization, and Kubernetes wave intersect with what we're doing with serverless? >> Yeah, well I mean for some reason people like Kubernetes. And I honestly, I don't think there is anything wrong with it, I think it's a great container orchestration system, I think containers are still a very important part of the workloads that we are putting into a cloud, I don't know if I would call them cloud native, exactly, but I think what we're seeing or at least what I'm seeing that I think Amazon is seeing, is they're saying people are embracing Kubernetes, and they are embracing containers. And whether or not containers are ephemeral or long running, which I read a statistic at some point, that was 63% of containers, so even running on Kubernetes, or whatever, run for less than 10 minutes. So basically, most computing that's happening now, is fairly ephemeral. And as you go up, I think it's 15 minutes or something like that, I think it's 70% or 90% or whatever that number is, I totally got that wrong. But I think what Amazon is doing is they're trying to basically say, look we were trying to sell serverless to everyone. 
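For reference, the Lambda Destinations and EventBridge features mentioned a moment earlier can be wired up roughly as follows with boto3. The function name, queue and topic ARNs, and the event shape are placeholders.

```python
# Sketch: Lambda Destinations and EventBridge configured with boto3.
import json
import boto3

# Lambda Destinations: route async invocation outcomes without writing glue code.
boto3.client("lambda").put_function_event_invoke_config(
    FunctionName="process-order",   # hypothetical function
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:ok-queue"},
        "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:alerts"},
    })

# EventBridge: publish a domain event that other teams' functions can subscribe to.
boto3.client("events").put_events(Entries=[{
    "Source": "shop.orders",
    "DetailType": "OrderPlaced",
    "EventBusName": "default",
    "Detail": json.dumps({"orderId": "1234"}),
}])
```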
We're trying to sell this idea of managed services, managed compute, the idea that we can run even containers as close to the metal as possible with something like Fargate, which is what Firecracker is all about: being able to run virtual machines basically right on the metal. It's so close that there's no level of abstraction to get in the way and slow things down, and even though we're talking about milliseconds or microseconds, it's still something, and there are efficiencies there. But I think what they looked at is, they said, look, we are not Apple. We can't kill Flash just because we say we're not going to support it anymore. And I think you mentioned this to me in the past, that the majority of Kubernetes clusters running in the public cloud were running in Amazon anyways. And so you had people using virtual machines, which are great technology, but are 15 years old at this point. Even with containerization, there are more problems to solve there, getting to the point where we say, look, you want to take this container, this little bit of code or this small service, and just run it somewhere. Why are we spinning up virtual machines? Why are we using 15 or 10 year old technology to do that? And Amazon is just getting smarter about it. So Amazon says, hey, if we can run a Lambda function on Firecracker, and we can run a Fargate container on Firecracker, why can't we create some pods and run some Kubernetes pods on it? They can do that. And so, for me, I was disappointed in the keynotes, because I don't think there was enough serverless talk. But I think what they're trying to do, and this is if I put my analyst hat on for a minute, is to say, the world is at Kubernetes right now, and we need to embrace that in a way that says we can run your Kubernetes for you a lot more efficiently, and without you having to worry about it, than if you use Google or some other cloud provider, or if you run on-prem. And I think the biggest competitor to Amazon is still on-prem, especially in the enterprise world. So I see them as saying, look, we're going to focus on Kubernetes, but as a way that we can run it our way. And I think that's why you get Fargate and Kubernetes, or Kubernetes for Fargate, or whatever that new product is; too many product names at AWS. But I think that's what they are trying to do, and I think that was the point of this, to say, "Listen, you can run your Kubernetes." And Clare Liguori, who showed that piece at the keynote, Werner's keynote, demonstrated basically how quickly Fargate can scale up Kubernetes, individual containers, as opposed to launching new VMs or EC2 instances. So I thought that was really interesting. But that was my overall take: they're embracing Kubernetes because they think that's where the market is right now, and they just haven't yet been able to sell this idea of serverless, even though you are probably using it with a bunch of things anyways, at least what they would consider serverless. >> Yeah, to pivot a little bit from serverless for a second: talk about multi-cloud. It was one of the biggest discussions we had in 2019. When I talk to customers that are using Kubernetes, one of the reasons they tell me they're doing it is, "Well, I love Amazon, I really like what I'm doing, but if I needed to move something, it makes it easier."
Yes, there are some underlying services I would have to re-write, and I'm looking at all those. I've talked to customers that started with Kubernetes, somewhere other than Amazon, and moved it to Amazon, and they said it did make my life easier to be able to do that fundamental, you know the container piece was easy move that piece of it, but you know the discussion of multi-cloud gets very convoluted, very easily. Most customers run it when I talk to them, it's I have an application that I run, in a cloud, sometimes, there's certain, you know large financials will choose two of everything, because that's the way they've always done things for regulation. And therefore they might be running the same application, mirrored in two different clouds. But it is not follow the sun, it is not I wake up and I look at the price of things, and deploy it to that. And that environment it is a little bit tougher, there's data gravity, there's all these other concerns. But multi-cloud is just lots of pieces today, more than a comprehensive strategy. The vision that I saw, is if multi-cloud is to be a successful strategy, it should be more valuable than the sum of its pieces. And I don't see many examples of that yet. What do you see when it comes to multi-cloud and how does that serverless discussion fit in there? >> I think your point about data gravity is the most important thing. I mean honestly compute is commoditized, so whether your running it in a container, and that container runs in Fargate or orchestrated by Kubernetes, or runs on its own somewhere, or something's happening there, or it's a fast product and it's running on top of K-native or it's running in a Lambda function or in an Azure function or something like that. Compute itself is fairly commoditized, and yes there's wiring that's required for each individual cloud, but even if you were going to move your Kubernetes cluster, like you said, there's re-writes, you have to change the way you do things underneath. So I look at multi-cloud and I think for a large enterprise that has a massive amount of compliance, regulations and things like that they have to deal with, yeah maybe that's a strategy they have to embrace, and hopefully they have the money and tech staff to do that. I think the vast majority of companies are going to find that multi-cloud is going to be a completely wasteful and useless exercise that is essentially going to waste time and money. It's so hard right now, keeping up with everything new that comes out of one cloud right, try keeping up with everything that comes out of three clouds, or more. And I think that's something that doesn't make a lot of sense, and I don't think you're going to see this price gauging like we would see with something. Probably the wrong term to use, but something that we would see, sort of lock-in that you would see with Oracle or with Microsoft SQL, some of those things where the licensing became an issue. I don't think you're going to see that with cloud. And so, what I'm interested in though in terms of the term multi-cloud, is the fact that for me, multi-cloud really where it would be beneficial, or is beneficial is we're talking about SaaS vendors. And I look at it and I say, look it you know Oracle has it's own cloud, and Google has it's own cloud, and all these other companies have their own cloud, but so does Salesforce, when you think about it. 
So does Twilio, even though Twilio runs inside AWS, really its I'm using that service and the AWS piece of it is abstracted, that to me is a third party service. Stripe is a third-party service. These are multi-cloud structure or SaaS products that I'm using, and I'm going to be integrating with all those different things via API's like we've done for quite some time now. So, to me, this idea of multi-cloud is simply going to be, you know it's about interacting with other products, using the right service for the right job. And if your duplicating your compute or you're trying to write database services or something like that that you can somehow share with multiple clouds, again, I don't see there being a huge value, except for a very specific group of customers. >> Yeah, you mentioned the term cloud-native earlier, and you need to understand are you truly being cloud-native or are you kind of cloud adjacent, are you leveraging a couple of things, but you're really, you haven't taken advantage of the services and the promise of what these cloud options can offer. All right, Jeremy, 2020 we've turned the calendar. What are you looking at, you know you're planning, you got serverless conference, Serverless Days-- >> Serverless Days Boston. >> Boston, coming up-- >> April 6th in Cambridge. >> So give us a little views to kind of your view point for the year, the event itself, you got your podcast, you got a lot going on. >> Yeah, so my podcast, Serverless Chats. You know I talk to people that are in the space, and we usually get really really technical. So if you're a serverless geek or you like that kind of stuff definitely listen to that. But yeah, but 2020 for me though, this is where I see what is happened to serverless, and this goes back to my "Stop calling everything serverless" post, was this idea that we keep making serverless harder. And so, as a someone whose a serverless purist, I think at this point. I recognize and it frustrates me that it is so difficult now to even though we're abstracting away running that infrastructure, we still have to be very aware of what pieces of the infrastructure we are using. Still have setup the SQS Queue, still have to setup Event Bridge. We still have to setup the Lambda function and API gateways and there's services that make it easier for us, right like we can use a serverless framework, or the SAM framework, or ARCH code or architect framework. There's a bunch of these different ones that we can use. But the problem is that it's still very very tough, to understand how to stitch all this stuff together. So for me, what I think we're going to see in 2020, and I know there is hints for this serverless framework just launched their components. There's other companies that are doing similar things in the space, and that's basically creating, I guess what I would call an abstraction as a service, where essentially it's another layer of abstraction, on top of the DSL's like Terraform or Cloud Formation, and essentially what it's doing is it's saying, "I want to launch an API that does X-Y-Z." And that's the outcome that I want. Understanding all the best practices, am I supposed to use Lambda Destinations, do I use DLQ's, what should I throttle it at? All these different settings and configurations and knobs, even though they say that there's not a lot of knobs, there's a lot of knobs that you can turn. Encapsulating that and being able to share that so that other people can use it. 
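A hypothetical sketch of that "abstraction as a service" idea: a vetted component that hides the knobs (memory, concurrency, dead-letter queues, tracing) behind one call and emits a SAM-flavored CloudFormation fragment, so another team gets the blessed configuration plus their business logic. This is not any real framework's interface, just the shape of the idea.

```python
# Hypothetical sketch only: a reusable "component" encapsulating serverless best practices.
def serverless_api(name, handler, memory=256, reserved_concurrency=50):
    """Return CloudFormation resources for 'an API backed by a function'."""
    return {
        f"{name}Function": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": handler,
                "Runtime": "python3.8",
                "MemorySize": memory,
                "ReservedConcurrentExecutions": reserved_concurrency,
                "Tracing": "Active",
                "DeadLetterQueue": {"Type": "SQS",
                                    "TargetArn": {"Fn::GetAtt": [f"{name}DLQ", "Arn"]}},
                "Events": {"Api": {"Type": "HttpApi"}},
            },
        },
        f"{name}DLQ": {"Type": "AWS::SQS::Queue"},
    }

# The consuming team's entire configuration: a couple of lines, then their business logic.
resources = serverless_api("Orders", handler="orders.handler")
```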
That in and of itself would be very powerful, but where it becomes even more important and I think definitely from an enterprise standpoint, is to say, listen we have a team that is working on these serverless components or abstractions or whatever they are, and I want Team X to be able to use, I want them to be able to launch an API. Well you've got security concerns, you've got all kinds of things around compliance, you have what are the vetting process for third-party libraries, all that kind of stuff. If you could say to Team X, hey listen we've got this component, or this piece of, this abstracted piece of code for you, that you can take and now you can just launch an API, serverless API, and you don't have to worry about any of the regulations, you don't have to go to the attorneys, you don't have to do any of that stuff. That is going to be an extremely powerful vehicle for companies to adopt things quickly. So, I think that you have teams now that are experimenting with all of these little knobs. That gets very confusing, it gets very frustrating, I read articles all the time, that come out and I read through it, and this is all out of date, because things have changed so quickly and so if you have a way that your teams, you know and somebody who stays on top of the learning this can keep these things up to date, follow the most, you know leading practices or the best practices, whatever you want to call them. I think that's going to be hugely important step from making it to the teams that can adopt serverless more quickly. And I don't think the major cloud vendors are doing anything in this space. And I think SAM is a good idea, but basically SAM is just a re-write of the serverless framework. Whereas, I think that there's a couple of companies who are looking at it now, how do we take this, you know whatever, this 1500 line Cloud Formation template, how do we boil that down into two or three lines of configuration, and then a little bit of business logic. Because that's where we really want to get to. It's just we're writing business logic, we're no where near there right now. There's still a lot of stuff that has to be done, around configuration and so even though it's nice to say, hey we can just write some business logic and all the infrastructure is handled for us. The infrastructure is handled for us, if we configure it correctly. >> Yeah, really remind me some of the general thread we've been talking about, Cloud for a number of years is, remember back in the early days, is cloud is supposed to be inexpensive and easy to use, and of course in today's world, it isn't either of those things. So serverless needs to follow those threads, you know love some of those view points Jeremy. I want to give you the final word, you've got your Serverless Day Boston, you got your podcast, best way to get in touch with you, and keep up with all you're doing in 2020. >> Yeah, so @Jeremy_daly on Twitter. I'm pretty active on Twitter, and I put all my stuff out there. Serverless Chats podcast, you can just find, serverlesschats.com or any of the Pod catchers that you use. I also publish a newsletter that basically talks about what I'm talking about now, every week called Off by None, which is, collects a bunch of serverless links and gives them some IoPine on some of them, so you can go to offbynone.io and find that. My website is jeremydaly.com and I blog and keep up to date on all the kind of stuff that I do with serverless there. 
>> Jeremy, great content, thanks so much for joining us on theCube. Really glad, and we always love to shine a spotlight here in the Boston area too. >> Appreciate it. >> I'm Stu Miniman. You can find me on the Twitters, I'm just @Stu. thecube.net is of course where all our videos will be, and we'll be at some of the events for 2020. Look for me, look for our co-hosts, reach out to us if there's an event that we should be at, and as always, thank you for watching theCube. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Claire Legore | PERSON | 0.99+ |
15 | QUANTITY | 0.99+ |
Tim Wagner | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Jeremy | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Jeremy Daly | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
70% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
90% | QUANTITY | 0.99+ |
63% | QUANTITY | 0.99+ |
Cambridge | LOCATION | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
10 year | QUANTITY | 0.99+ |
less than 10 minutes | QUANTITY | 0.99+ |
jeremydaly.com | OTHER | 0.99+ |
Jay Anear | PERSON | 0.99+ |
January 2020 | DATE | 0.99+ |
Calvin | PERSON | 0.99+ |
April 6th | DATE | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
offbynone.io | OTHER | 0.99+ |
three lines | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
serverlesschats.com | OTHER | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
Lambda | ORGANIZATION | 0.98+ |
two different clouds | QUANTITY | 0.98+ |
@Jeremy_daly | PERSON | 0.98+ |
Twilio | ORGANIZATION | 0.98+ |
three clouds | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
today | DATE | 0.97+ |
about 20 minutes | QUANTITY | 0.97+ |
1500 line | QUANTITY | 0.97+ |
first interview | QUANTITY | 0.96+ |
two pizza teams | QUANTITY | 0.96+ |
Lambda | TITLE | 0.96+ |
one cloud | QUANTITY | 0.96+ |
Alexa | TITLE | 0.96+ |
theCube | ORGANIZATION | 0.95+ |
Azure | TITLE | 0.94+ |
each individual cloud | QUANTITY | 0.94+ |
Serverless Days | EVENT | 0.93+ |
Big Table | ORGANIZATION | 0.93+ |
Deepak Singh, AWS & Abby Fuller, AWS | AWS re:Invent 2019
>> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back, about 65,000 here in attendance at AWS re:Invent 2019. You're watching theCUBE, and I am Stu Miniman, the host for this segment, and happy to welcome back to our program two of our CUBE alumni. Sitting to my right is Abby Fuller, who is the principal technologist for containers and Linux with Amazon Web Services. Sitting to her right is Deepak Singh, Vice President of Compute Services, also with AWS. Thank you so much for joining us on the program. >> Thanks for having us. >> Thank you for having us. >> Stu: All right, so as I said, both of you have been on the program, and boy, your team's been busy. I mean, one of the things I love, first of all, is there is a roadmap for many of the things that are going on. So, we do understand what's happening in the future, but, Deepak, maybe just tell us a little bit about your group and kind of the main focus, and let's start there. >> Deepak: So, my group goes beyond containers. It includes things like Linux systems and our high performance computing organization. But for the purposes of re:Invent, let's stick to the containers org. The containers org owns all of AWS's containerized products. So that includes ECS, EKS, Fargate. We also own our service mesh offering, which is App Mesh. So the way I like to think about it is, it's the right way to build applications in the modern era, and it's a team that stays quite busy, because this is such a hot space to be in. >> Stu: All right, so we're going to talk mostly about containers, but your shirt is talking about the Linux piece. Tell us what your shirt says. >> Deepak: Ahh, yes, this is the only right way to spell AMI. Unfortunately, my previous time, when I was in New York, Corey was at the table interviewing me, and I wore this just for him. >> Stu: So, so, so, if it is AMI, then we're going to spend some time talking about EKS. >> Yes. (Abby chuckling) >> And ECS. >> Yes, which one? (Deepak laughing) We will figure that out. AWS is AWS, I think, is how we will do it. So, absolutely, we're not going to talk about ontological arguments in there. But, Abby, a whole lot of new services in the container space. I want to put a pin in Fargate and set it aside for a second. >> Abby: Sure. >> 'Cause lots of things we want to dig into there. But a lot of other things have been announced in, like, the last month or so. Maybe give us a little bit of a view. >> Yeah, I think a couple big ones for us. So, Fargate Spot, so you run on spare Fargate capacity for up to a 70% discount off of standard Fargate pricing. (mumbling) things like vulnerability scanning for images on ECR. We launched, over the last few days at re:Invent, capacity providers for ECS, which let you split your traffic between on-demand and Spot instances in the same cluster. We also launched something called cluster auto scaling. So, some finer-grained control over how your cluster scales in on ECS. >> Stu: All right, I want to take a quick step back. So, Fargate, announced a couple of years ago. >> Deepak: Yep. >> Was at first only supported on ECS. Definitely, I've talked to lots of customers, very excited about it. >> Deepak: Yep. >> Maybe talk to us a little bit about how Fargate fits in the whole container discussion. >> Deepak: Yeah. >> And then we'll hit the news. >> Yeah, and, actually, a good way to think about it is from an AWS standpoint.
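As a hedged illustration of the capacity provider announcement mentioned above, this is roughly what splitting an ECS service between regular Fargate and Fargate Spot looks like with boto3. The cluster, service, task definition, and network identifiers are made up, and the weights are arbitrary; it is a sketch of the mechanism, not guidance.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="demo",                      # hypothetical cluster name
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=8,
    capacityProviderStrategy=[
        # keep a small on-demand base, put the rest on Spot capacity
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0def5678"],
        }
    },
)
```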
If you're a customer running containers, the way we think about our services is: you need a place to store those containers, so that's ECR. You could use your own registry, you could pick a third-party one, that's fine. But most of our customers just use ECR. Then you pick your container orchestrator. That's either ECS or EKS, depending on your preferences. And then you need to figure out where you want to run your containers. And, of course, when we launched ECS five years ago at re:Invent, there was only one way to do it: on EC2 instances. And two years ago, we added what in our mind is a cloud native, natural way to run containers, which is Fargate. So Fargate serves as a runtime compute engine for containers, and you can pick your scheduler on top of it, and go make hay with your applications. So that's kind of how we think the hierarchy works, and it works pretty well for most customers. They'll often start off with EC2 and move to Fargate over time, or mix and match, and it's kind of fascinating to see how many customers of ours have decided they want to be all-in on Fargate. Which is a great place to be for us. >> Stu: Okay, but the big news, which actually got a good cheer in the keynote yesterday, is Fargate for EKS. So what's the importance of this? >> Yeah, I think (mumbling) I think it's something we've been talking to customers about for a while, and it's the ability to run your Kubernetes pods on Fargate capacity. I think it's really speaking to folks who love Kubernetes as a tool and as a community, but it can be a pretty significant lift operationally. And with Fargate they can use the APIs that they want, or the open source tooling that they want, but they don't have to worry about provisioning and managing that EC2 capacity. >> Stu: All right, so Deepak, I actually was having a conversation with a good AWS customer yesterday, and he said he actually started out on Kubernetes before EKS existed, on AKS, and migrated over to AWS when EKS became available. And he said, Fargate really interests me, but one of the main reasons he does Kubernetes is he wants to have some portability; he has some concerns that, he knows what services he uses and how, if he needed to move something, there. What do you say to a customer that says, Fargate's interesting me, but I'm concerned I'm going to get locked in if I buy into this model? >> I would say that he shouldn't worry about it, because of two reasons, maybe more than two. One is: the unit in Fargate that you interact with and work on is the same unit that you interact and work on with Kubernetes in general, which is the Kubernetes pod. It's the pod spec, it's just a pod, no difference. You can take that same pod and run it on Timbuktu cloud and it will still run. So that's part one. The other one is that he's using the same tools, he's using kubectl. And in fact you can mix and match within your Kubernetes clusters. You can run 95% of the application on Fargate, and five percent of it on EC2. All they are doing is changing the pod annotation, and if you decide you want to run none of it on Fargate, you just flip that and suddenly everything is running on EC2 capacity. So I don't actually think there's that much to worry about, because it's just the same pod. It's still the same tooling, and the operational model is a lot simpler. >> So Abby, we've talked to you at DockerCon, and KubeCon; simplicity is not the word that we hear when we talk about this whole container space. >> Abby: Sure. >> Traditionally. How are we doing overall?
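As a quick aside illustrating the point just made about the unit being the same pod: a minimal pod spec, written here as a plain Python dict, carries nothing Fargate-specific. On EKS, what steers a pod like this onto Fargate is, as I understand it, a Fargate profile matching its namespace and labels; the registry, namespace, and names below are hypothetical.

```python
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "checkout",
        "namespace": "serverless",        # matched by a Fargate profile on EKS
        "labels": {"app": "checkout"},
    },
    "spec": {
        "containers": [{
            "name": "checkout",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/checkout:1.0",
            "ports": [{"containerPort": 8080}],
        }]
    },
}

# The same document can be handed to `kubectl apply -f` on any conformant
# cluster; nothing in it changes when the pod lands on Fargate instead of EC2.
print(json.dumps(pod, indent=2))
```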
I mean, I'm watching the community here, and it's like, wait, Fargate sounds cool, but where are my persistent volumes? You know, where are we, you know, give us a little bit of the roadmap as to where we are in making this, you know, simple and managing more of my environment. >> Yeah, I think the way that I like to look at it, right, is that we've spent, and it's not just us, but we've spent a lot of time looking at things like patterns and abstractions that help make these workflows easier for developers. And I think one of the launches that's interesting in that vein is the ECS CLI version two, which we launched a few days ago. And that will help you deploy, like, a production-ready containerized application. It'll help you with the CI/CD angle, it'll help you with the monitoring and the observability. So I think it's about abstracting away, and adding patterns on top, to make some of these common operations and workflows really modular and repeatable and extendable. And then it's about having the ability to customize where I need to. So being able to run on Fargate, but also to use workloads running on EC2 where I need to, and being able to mix and match, and to focus my energy where I really get any benefit from customizing, rather than having to do the whole thing from the ground up. >> Stu: You know, feedback I've gotten from my friends in the app dev community is that hybrid is more and more becoming a standard deployment model. Obviously things like Outposts and some of the other solutions from Amazon are extending the AWS model of doing things, but many of them also look at just Kubernetes, >> Deepak: Yep >> as a layer to do that. How should we be thinking of this from your solutions? >> Deepak: Yeah, so I'd add to both of those, though: if you noticed in Andy's announcement yesterday, among the list of services available on day one were ECS and EKS. And actually App Mesh as well, it wasn't on the list, but App Mesh is available on Outposts on day one as well. I think when we think about customers who want to run and stay in their own capacity and their own data centers, because EKS is built on (mumbling) Kubernetes with no modifications, the same application, as long as they're running on upstream Kubernetes on their side, will just run on EKS. And there's a number of models that work there. A great model is the kind that Cisco is running, where they will manage it for you in both places. They become the first person you call, and on AWS it's just EKS. And on premises (mumbling) it's what Cisco has decided to build. Our ProServe team will also help you, as an example. So I think there's a number of modes that work there, but the key part, and it's the reason why we have stayed with upstream Kubernetes, is we never want to make someone say, oh, we can't use EKS because they're (mumbling) somehow modified Kubernetes, and I think that is super important for us. >> Stu: Yeah, I mean, Abby, I know you're an active participant in the community. What do you say to people that look at Amazon? Deepak, you talked a little bit about Fargate, you don't need to be concerned, it's the same images. So speak a little bit, maybe if you could, to Amazon's community participation, and what you're generally hearing from your customers. >> Abby: Yeah, so I think the root of it, right, is that we're all building with the same building blocks. I think something that Amazon has been really strong at is open sourcing primitives. So, Firecracker last year, I think, was a good example.
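Since Firecracker comes up here as the example of an open sourced primitive, a rough sketch of what driving it looks like follows: the VMM exposes a small REST API over a Unix socket, and a microVM is assembled by PUT-ing a kernel, a root drive, and a start action. The paths and sizes are placeholders, and the exact field names should be checked against the Firecracker API documentation.

```python
import requests_unixsocket  # pip install requests-unixsocket

session = requests_unixsocket.Session()
api = "http+unix://%2Ftmp%2Ffirecracker.socket"   # /tmp/firecracker.socket

# Size the microVM and point it at a kernel and a root filesystem image.
session.put(f"{api}/machine-config", json={"vcpu_count": 1, "mem_size_mib": 128})
session.put(f"{api}/boot-source", json={
    "kernel_image_path": "/images/vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1",
})
session.put(f"{api}/drives/rootfs", json={
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
})

# Boot it; startup is typically in the low hundreds of milliseconds.
session.put(f"{api}/actions", json={"action_type": "InstanceStart"})
```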
And we, I think we do really well with saying, we built this to solve a problem for us, but we think you might want it too. And in terms of community support, we have been open sourcing more over the last year; we open sourced our roadmaps in November last year. We run developer previews off the GitHub roadmap, App Mesh has a public preview channel as well, so we've been trying to involve the community earlier and earlier in our product development life cycle, so that, especially with things like service mesh, where it's really pretty new, we can make sure that we have the voice of all our users and our customers in there as early as possible, but also to get their hands on keyboards to try it out as soon as they can. >> Deepak: And actually a great example of that is the work that Weaveworks has done. Talking about people who can run Kubernetes on AWS and on premises, they have this project called "Weave Ignite" where they're basically running Kubernetes on Firecracker on premises. And then on AWS a customer just runs on EKS, as an example. And that, I think that part has been, not everybody realizes that this is possible. But the fact that people are doing it excites us a lot. >> Stu: All right, I know you're both meeting with a lot of customers this week, so maybe, Deepak, start with you. Any surprises or any misconceptions, other than, I know there are a lot of people wearing teal shirts with a certain pronunciation. But bring us inside some of the mindset of your customers here. >> Deepak: So actually, our conversations are very consistent. I think the community as a whole, our customer base as a whole, they all want to get to the same place. How can we move really quickly? How can we give our developers the ability to be more productive, without putting our company at risk, having the right level of governance, having the right controls in place? And I think that's the main consistent theme across the board. I guess the one thing I would remind people of a little bit is that a lot of people often think Fargate sits on top of ECS and EKS; it sits below that, and actually the fact that now there is an EKS Fargate, people understand that more quickly. Before that it was a little trickier. But other than that, I think our customers almost all, they come from different places, have very similar problems: they want developers to move quickly and deliver business value, and the platform engineering teams that we speak to want to figure out how to get out of the way. And that's been great! >> It's interesting, Abby, I'd love your viewpoint from the developer community. Andy talked on stage about, very much, to do true transformation there needs to be the leadership driving things down. I'm curious what you're seeing, customers you've talked to, people you've heard from, 'cause many of these tools we're talking about, you know, started in the developer world. >> Yeah, I mean, there's been, like, an increasing amount of curiosity around the cultural side of it. So how can I get my team to work like that? How can I get my team to ship more safely, more quickly, while getting operations out of the way? And I think you see more and more interest in that. So how can we build the tools that work the way our developers do? So we get all the things that we want, so security and compliance and availability, and the developers get what they want, which is easy workflows that match the way they want to work. So you see a lot of curiosity around that.
So how do we get to the place where we can run everything on Fargate, and benefit from all the new serverless, serverless-style (mumbling)? >> Stu: All right, real quick, just to give you the final word. Any websites, or events, or things that people should know about when they want to learn more and get engaged? >> Yeah, I think I'd send people first and foremost to the GitHub public roadmaps. It is the easiest, fastest way to let us hear your voice, and what you want to see us build next. I think especially these next couple weeks coming out of re:Invent, as people start to get their hands on what we announced, I think I'm really curious for them to take that back, and then be like, this is great, but here's what I want to see next. And I'd love to see that happen on the roadmaps. >> Yeah, about a month or so ago, maybe a couple months, we started a dedicated blog for containers on the AWS site. One of the nice things about it is a lot of the contributors to that blog site are principal engineers and engineers in our organization. For example, one of the principal engineers in my org, Malcolm Featonby, has a whole blog post on how to think about scaling and best practices. I would encourage people who've now seen what we have, all the new services we're developing; that's where you'll get the details on how you can use them and how we built them, and I encourage everybody to go to that blog site and check out what we're doing. >> Stu: All right, Deepak, Abby, congratulations to you and your team, great progress, and really appreciate that (mumbling) are able to look at the roadmap, and definitely hope to catch up with you both soon. >> Abby: Thanks so much! >> Thank you so much. >> Stu: All right, I'm Stu Miniman, and we'll be back with much more in just a second; thanks for watching theCube. (Techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Deepak | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Abby Fuller | PERSON | 0.99+ |
Deepak Singh | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
New York | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Malcolm Featonby | PERSON | 0.99+ |
95% | QUANTITY | 0.99+ |
Andy | PERSON | 0.99+ |
Corey | PERSON | 0.99+ |
two reasons | QUANTITY | 0.99+ |
five percent | QUANTITY | 0.99+ |
Abby | PERSON | 0.99+ |
November last year | DATE | 0.99+ |
Stu | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
ECR | TITLE | 0.99+ |
five years ago | DATE | 0.98+ |
SisCo | ORGANIZATION | 0.98+ |
US | LOCATION | 0.98+ |
two | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
both places | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
ECS | TITLE | 0.98+ |
Linux | TITLE | 0.97+ |
DockerCon | ORGANIZATION | 0.97+ |
one way | QUANTITY | 0.97+ |
Fargate | ORGANIZATION | 0.96+ |
EKS | TITLE | 0.96+ |
more than two | QUANTITY | 0.96+ |
Kubernetes | TITLE | 0.96+ |
Fargate | TITLE | 0.95+ |
EC2 | TITLE | 0.95+ |
Breaking Analysis: VMworld 2019 Containers in Context
>> From the Silicon Angle Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi everybody, welcome to this breaking analysis, where we try to provide you some insights on theCUBE. My name is Dave Vellante. I'm here with Jim Kobielus, who was up today, and Jim, we're just off of VMworld 2019. Big show, lot of energy, lot of announcements. I specifically want to focus on containers and the impact that containers are having on VMware, specifically the broader ecosystem and the industry at large. So, first of all, what was your take on VMworld 2019? >> Well, my take was that VMware is growing fast, and they're investing in the future, which is fairly clearly cloud native computing on containers, with Kubernetes and all that. But really, that's the future, and so what VMware is doing is they're making significant bets that containers will rule the roost in cloud computing and application infrastructures going forward. But in fact virtual machines, VMs, hypervisors, are hotter than ever, and that was well established last week by the fact that the core, predominant announcement last week was VMware Tanzu, which is not yet a production solution, but is in a limited preview, which is the new platform for coexistence of containers and vSphere. A container runtime embedded in vSphere, so that customers can run container workloads in a highly isolated VM environment. In other words, VMware is saying to their customers, "You don't have to migrate away from VMs until you're good and ready. You can continue to run whatever containers you build on vSphere, but we more than encourage you to continue to run VMs until you're good and ready to migrate, if ever." >> All right. So, I want to come back and unpack that a little bit, but does your data, does your analysis, when you're talking to customers and the industry at large, is there any evidence from what you see that containers are hurting VMware's business? >> I don't get any sense that containers are hurting VMware's business. I get the strong sense that containers, they've just of course acquired Pivotal, which is very additive to the revenue mix at VMware. And VMware, most of their announcements last week were in fact all around Kubernetes, and containers, and products that are very much for those customers who are going deep down the container road. >> So that was a setup question. >> You've got lots of products for them. >> So that was a setup question. So I have some data on this. >> Go ahead. >> Right answer. So, I want to show you this. So, Alex, if you wouldn't mind bringing up that slide. And we shared this with you last week when we were prepping for VMworld. This is data from Enterprise Technology Research, ETR, and they have a panel of 4,500 end user customers that they go out and do spending surveys with. So, what this shows is, this is container customers' spending on VMware. So, you can see it goes back to early January. Now it's a little deceiving here. You see that big spike, but what it shows is that, A, that big spike is the number of shared customers. So, you really didn't have many customers back then that were doing both containers and VMware that ETR found. But as the N gets bigger, 186, 248, 257, 361, across those 461 customers, those are the shared customers in the green. And you can see that it's kind of a flat line. It's holding very well in the high 30s percent range, which is their sort of proprietary metric.
So, there's absolutely no evidence, Jim, that containers, thus far anyway, are hurting VMware's business. Which of course was the narrative, containers are going to kill VMware; no evidence of that. But then why would they acquire Pivotal? Are they concerned about the future, what's your-- >> Well, they're concerned about cross-selling their existing customer base, who are primarily on vSphere hypervisors, cross-selling them on the new world of Kubernetes-based products for cloud computing, and so forth and so on. In other words, it's all about how do they grow their revenue base? VMware's been around for more than 20 years now. They rule the roost on the hypervisors. Where do they go from here, in terms of their product mix? Well, Kubernetes, and beyond that, things like serverless, will clearly be in the range of the things that they could add on. Their customers could add on to their existing deploys. I mean, look at Pivotal. Pivotal has a really strong Kubernetes distribution, which of course VMware co-developed with them. Pivotal also has a strong functions-as-a-service backplane, the Pivotal Function Service, for serverless environments. So, this acquisition of Pivotal very much positions VMware to capitalize on those opportunities, to sell those products when that market actually develops. But I see some evidence that virtual machines are going like gangbusters in terms of customer deployments. Last week on theCUBE at VMworld, Mark Lohmeyer, who's an SVP at VMware for one of their cloud business units, said that in the last year, for example, for customers who are using VMware Cloud on AWS, VMware grew the customer base by 400% last year, and grew the number of VMs running in VMware Cloud on AWS by 900%, which would imply that on average each customer more than doubled the number of VMs they're running on that particular cloud service. That means VMs are very much relevant now, and probably will be going forward. And why is that? That's a good question, we can debate that. >> Well, so the naysayers at VMworld in the audience were tweeting that, "Oh, I thought we started Pivotal. We launched Pivotal so that we didn't have to run VMs, or run containers on VMs, so we could run them on bare metal." Are people running containers on virtual machines? >> Well, they are, yes. In fact, there's a broad range of industry initiatives, not just Tanzu at VMware, to do just that: to run containers on VMs. I mean, there is the KubeVirt open source project over at CNCF, that's been going for a couple years now. But also, Google has gVisor, Intel has the Kata Containers initiative, and I believe that there are a few others. Oh yeah, AWS with Firecracker, at last year's re:Invent. All this would imply, strongly indicate, that these large cloud and tech vendors wouldn't be investing heavily into convergence of containers and VMs and hypervisors if there weren't a strong demand from customers for hybrid environments where they're going to run both stacks, as it were, in parallel. Why? Well, one of the strong advantages of VMs is workload isolation at the hardware level, which is something that typically container runtimes don't offer. For example, the workload isolation seems to be one of the strong features that VMware's touting for Tanzu going forward. >> So, VMware is-- the centerpiece of VMware's strategy is obviously multicloud, Kubernetes as a linchpin to enable running applications on different platforms. Will, in your opinion, and of course VMware is hard-core enterprise, right?
Will VMware, two things: will they be able to attract the developers, number one. And number two, will those developers build on top of VMware's platform, or are they going to look to their cloud? >> That's a very important question. Last week at VMworld, I didn't get a sense that VMware has a strong developer story. I think that's a really open issue going forward for them. Why would a developer turn to VMware as their core solution provider when they don't offer a strong workbench for building these hybridized VM/container/serverless applications that seem to be springing up all over? AWS and Microsoft and Google are much stronger in that area with their respective portfolios. >> So, I guess the obvious answer there is Pivotal is their answer to the developer quandary. >> Yes. >> And so, let's talk about that. So, Pivotal was struggling. I talked last week in my analysis, you saw the IPO price and then it dipped down, it never made it back up. Essentially the price that VMware paid the public shareholders for Pivotal was about half of its initial IPO price, so, okay. So, the stock was struggling, the company didn't have the kind of momentum that, I think, it wanted, so VMware picks it up. Can VMware fold in Pivotal, and use its go-to-market and its largess to really prop up Pivotal and make it a leader? >> Well, possibly, because Cloud Foundry, Pivotal Cloud Foundry, could be the linchpin of VMware's emerging developer story, if they position it that way and really invest in the product in that regard. So yeah, in other words, this could very much make VMware a go-to vendor for the developers who are building the new generation of applications that present serverless, functional interfaces, but will have containers under the cover, but also have VMs under the cover providing strong workload isolation in a multi-tenant environment. That would be the promise. >> Now, a couple things. You mentioned Microsoft, of course Azure in the cloud, and Google. The ETR data that I dug into when I wanted to understand, better understand, multicloud: who's got the multicloud momentum? Well, guess who has the most multicloud momentum? It's the cloud guys. Now, AWS doesn't specifically say they participate in multicloud. Certainly their marketing suggests that multicloud is for somebody else, that really they want to have uni-cloud. Whereas Google and Azure are kind of embracing multicloud, and Kubernetes specifically; now of course AWS has a Kubernetes offering, but I suspect it's not something that they want to promote hard in the marketplace, because it makes it easier for people to get off of AWS. Your thoughts on multicloud generally, but specifically Kubernetes and containers as it relates to the big cloud providers? >> Yeah, well, my thoughts on multicloud generally is that multicloud is the strategy of the second-tier cloud vendors, obviously. If they can't dominate the entire space, at least they can maintain a strong, provide a strong, connective tissue for the clouds that actually are deployed in their customers' environments. So, in other words, the Ciscos of the world, the VMwares of the world, IBM. In other words, these are not among the top tier of the public cloud players, hence where do they go to remain relevant? Well, they provide the connective tissue, and they provide the virtualized networking backbones, and they provide the AIOps that enables end-to-end automated monitoring and management of the entire mesh.
The whole notion of a mesh architecture is something that grew up with IBM and Google for lots of reasons, especially due to the fact that they themselves, as vendors, didn't dominate the public cloud. >> Well, so I agree with you. The only issue I would take is I think Microsoft is a leader in public cloud, but because it has a big on-prem presence, it's in its best interest to push containers and Kubernetes, and so forth. But you're right about the others. Cisco doesn't have a public cloud, VMware doesn't have a public cloud, IBM has a public cloud but it's really small market share, and Google is behind, and so it's in those companies' best interest really to promote multicloud, to try to use it as a bulwark against AWS, who's got obviously awesome market momentum. The other thing that's interesting in the ETR data, when I poke in there, is it seems like there are more people looking at Google. Now maybe that's 'cause they have such strength in data and analytics, maybe it's 'cause they're looking for a hedge on AWS, but the spending data suggests that more and more people are kicking the tires, and more than kicking the tires, on Google. Who of course is obviously behind Kubernetes and that container movement, and open source. Your thoughts? >> Yeah, well, in many ways you have to think that Google has developed the key pieces of the new stack for application development in the multicloud. Clearly they developed Kubernetes, it's open source, and also they developed and open sourced TensorFlow, which is essentially the predominant AI workbench for the new generation of AI-driven applications, which is everything. But also, look at Google: they developed Node.js for web applications and so forth. So really, Google now is the go-to vendor for the new generation of open source application development, and increasingly DevOps, in a multicloud environment, running over Istio meshes and so forth. So, I think that's, so, look at one of the announcements last weekend at VMworld: VMware and NVIDIA, their announcement of their collaboration, their joint offering to enable AI workloads, training workloads, to run on GPUs in an optimal, high-performance fashion within a distributed VMware cloud, end-to-end. So really, I think VMware recognizes that the new workloads in the multicloud are predominately, increasingly, AI workloads. And as the market goes towards those kinds of workloads, VMware very much recognizes they need to have a strong developer play, and they do with NVIDIA, in a sense. Very much so, because NVIDIA, with the RAPIDS framework and so forth, and NVIDIA being the predominant GPU vendor, very much is a very strategic partner for VMware as they're going forward, as they hope to line up the AI developers. But Google still is the vendor to beat as regards the AI developers of the world, in that regard, so-- >> So we're entering a world we sometimes call the post-virtual machine world. John Furrier is kind of tongue-in-cheek with a play on web 2.0. He calls it cloud 2.0, which is a world of multiple clouds. As I've said many times, I'm not sure multicloud is necessarily a coherent strategy yet, as opposed to sort of a multi-vendor situation, Shadow IT, >> Yes. >> lines of business, et cetera. But Jim, thanks very much-- >> Sure. >> For coming on and breaking down the container market, and VMworld 2019. It was great to see you. >> Likewise. >> All right, thank you for watching everybody. This is Dave Vellante with Jim Kobielus.
We'll see you next time on theCUBE. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Kobielus | PERSON | 0.99+ |
Mark Lohmeyer | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Cisco | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
900% | QUANTITY | 0.99+ |
400% | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
last week | DATE | 0.99+ |
last year | DATE | 0.99+ |
Last week | DATE | 0.99+ |
461 customers | QUANTITY | 0.99+ |
Pivotal | ORGANIZATION | 0.99+ |
Alex | PERSON | 0.99+ |
Boston Massachusetts | LOCATION | 0.99+ |
early January | DATE | 0.99+ |
vSphere | TITLE | 0.99+ |
today | DATE | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
4500 end user customers | QUANTITY | 0.99+ |
Node JS | TITLE | 0.99+ |
more than 20 years | QUANTITY | 0.98+ |
two things | QUANTITY | 0.98+ |
Silicon Angle Media Office | ORGANIZATION | 0.98+ |
Kubernetes | TITLE | 0.98+ |
second tier | QUANTITY | 0.98+ |
Ciscos | ORGANIZATION | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
both stacks | QUANTITY | 0.97+ |
VMworld | ORGANIZATION | 0.97+ |
Kelsey Hightower, Google Cloud Platform | KubeCon 2018
>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hello everyone, welcome back to the live Cube coverage here, three days at Seattle's KubeCon and CloudNativeCon. It's a conference put on by the Linux Foundation. Cube's been there from the beginning, breaking down all the action. 8,000 people, doubling attendance from the last one, now global, on a global scale, seen great traction in China and other areas around the world. It's about the cloud global. I'm John Furrier with Stu Miniman, our next guest, Kelsey Hightower with Google. Former code program share, now out in the wild on his own, super dope, playing with all kinds of new technology, it's great to see you, thanks for coming on. >> Proper you said the word dope, by the way, so congratulations there. I'm an attendee, I still have a keynote on Thursday but I do get to enjoy the floor like everyone else. >> So what's new, so you're now, again, there's a lot of pressure now every year. It's more and more people here, so it's a lot of pressure to kind of get all the action packed, but the growth has been pretty phenomenal. You've been looking at serverless, we saw some tweets, again you mention it's super dope, serverless is. You've got serverless, you've got a lot of stuff going on within the CNC app, you've got Kubernetes at the core. A lot of people like calling it the Kubernetes stack or the CNCF stack. Is it really a stack, is it really more of an operating model because there's stacks involved but how do you describe it, because this is a point of clarification. I mean, Kubernetes isn't necessarily a stack. Is it, how do people use it, what's the current state? >> I think when people say stack, you think about the LAMP stack, right? Linux, Apache, MySQL, it's a way of pre-packaging these ideas. This is something that worked for me, it may work for you, you say that enough times and then you say things like the Kubernetes stack. It's a quick, shorthand for Kubernetes and building on top of it. I think from the engineering perspective, when you look at Kubernetes and all the gaps that the CNC app is trying to fill these days, it's all this stuff you're probably building yourself, someone else is building it, and now we kind of have an outlet now. If you're working on a service mesh like list was, you have an outlet to give it to the rest of the world, open governance, and get some contributors. I think what we're seeing now is that hey, CNCF is kind of the place people go to figure out is someone building the thing that I've already started building and can I stop and just download that and go off? >> It's been very successful open source community, obviously, it's been end user leverage, it's been great and it's been open source, community led. Not so much vendor led, but vendors have been participating, so it's been great, but now as Kubernetes is going mainstream, the rise of Kubernetes is undeniable. No one can really deny that. Other end users are now coming in either to participate or to consume Kubernetes. How is that going in your mind? What's going on in the landscape, because people want multicloud, they want hybrid, they want choice. How are end users coming into the ecosystem to consume Kubernetes and the variety of goodness around it and what's going on there? Can you give some color around that option? 
>> I think regardless of the industry buzzwords like multicloud and hybrid and all that, Kubernetes is good on its own. It solves a lot of problems that your previous tools didn't solve, so people are gravitating towards it regardless in that direction. When you start to talk about portability, yes, it's nice to have two different environments and have the same tools work in a similar way between those environments, that's working well. The people that started three years ago that were doing it themselves, they're finding value and treating that as a service. We saw this happen to DNS, e-mail, so people are saying maybe the value isn't running it myself, so now you kind of see the vendor ecosystem understand what the value is. For a lot of the cloud providers, it's running Kubernetes, patching it, updating it, upgrading it, so that you can go focus on the other parts on top. That's where I think we are as an industry, and then there's gaps to fill, so that's where you see things like native, people building CI-CD tools on top, that's just where the new opportunities are so I think we've kind of matured. People kind of know what Kubernetes is, they know where their value line is for Kubernetes, now they're looking for their partners or vendors or community to just layer the new stuff on top. >> Kelsey, you bring up a great point there because understanding that line of what I should do myself and what I have to do versus what I can buy, consume as a service, is really tough for people, you know. I always say, ask IT departments, what do you really suck at? Because there's somebody else that probably does it better. A year ago, when I talked to users at this show, they were really downloading stuff, putting their things together, and when you asked them why, it was well, the Azure stuff hasn't matured. It just released, Amazon, I'm not sure where they're going with it. It feels like a lot has changed in the last year. You did Amazon the hard way a little over a year ago. What has changed over the last year, you know. >> We saw this with Linux, right? >> Are we ready for that, yeah. >> In Linux everyone use to build their own Linux distro, you took pride in it, using Gentoo and Slackware, and then you're like, I'm tired of that so you go get Red Hat or Ubuntu and call it good, and then you go focus on the other things. Naturally, Kubernetes is early project, has lots of gaps, you can fill those gaps by gluing together open source yourself, but now most of the managed services fill in the gaps by default. You click a button in GKE and a thing comes up, it's secure, has most of the pieces you need, it's integrated, you're like alright, I'm done with that part. >> The other thing, we talked a year ago. There's lots of companies here that are involved in Kubernetes. We've got over 70 that are compliant, and then you've got the service providers. From what I hear, it's people aren't trying to differentiate with Kubernetes and that's probably a good thing. It's something that's going to be baked into the platform, it's something you're going to consume with the other services that I offer, what do you say? >> If you make it different, then it won't work. >> Right. >> It'll be a different thing, so if you make it too different then you lose most of the benefits that we're all talking about here. The ability to learn a set of abstractions once, kind of like we did on Linux, if you start changing the system calls on Linux, then it's not Linux anymore, it's a different thing. 
>> Just to clarify though, if I'm running in one cloud that has their Kubernetes and I want to go to another, is it similar enough? Can I make that move? Do I need a vendor-independent version? >> So I think up to this value line I've run this container, ship the log somewhere, give me a way to secure access, that's pretty standard. Give me a load balancer. What isn't standard is how do I do CI-DC on top of that, that's not standard. There's different opinions on how to do that. If I'm in Google Cloud, we have IEM one way, Azure has IEM a different way, and same thing for Amazon. There's things around networking, security, that are going to be different based on the environment you're in. Same for on-prem, and that's where you start to look for help. If I go to Google, I'm going to use GKE maybe instead of running it myself on just a bunch of VMs, so that's where you kind of see that little divide. >> Is that going to be custom work, that's a great point, security for instance, we'll just pull that out there. Is that going to automate and be seamless or is that going to be a work area that's always going to have to be differentiated or coded or? >> So for example, we have the big vulnerability recently in Kubernetes world, right? >> It's a big CVE, it affected everyone running Kubernetes. That's a thing, as a vendor, for us GKE people, we upgraded automatically for them and said hey, there's a CVE, it's going to be really scary when you read about it but hey, you're patched. We've taken care of you, so I think people will still look for that relationship. Will it always be custom? At the app level, that is a different story. When you run your container and you want to access the things in your environment, so if you're in Google Cloud you may want to talk to Spanner, you're going to need an IEM set of credentials. That's a little out of scope of Kubernetes, so that's going to be integration work that the provider will do. >> So the holy trinity of computing industry has always been storage, network, and compute, and it changes certainly with cloud and all the goodness that comes out from serverless and whatnot, so containers is interesting. We always love containers but I've heard conversations recently where it's like hey, I want to treat containers not as a first class citizen because it doesn't meet my security boundary. I'm going to put a VM around that and run that under the covers with say, Lambda. Is that feasible, is than an option? I've heard talk about it, is anyone doing that? Is that an alternative, is this going to introduce new elements? >> Let's put it right, in Kubernetes by defaults we chose to build on top of Docker. Industry momentum, great developer workflow, but you're right, it made a security trade off. We know VMs are a much tighter security boundary that people are comfortable with. In that world, at that time, they were too slow for what we needed to happen. Thanks to Intel and others who pulled the thread of let's make VMs faster. Recently you heard the announcement of Firecracker, right, it's part of a derivative from the Chrome VM and that thing is optimized for these kinds of workloads, containers and serverless workloads. Now we go from 10, 20 seconds to hundred milliseconds. Now it makes sense to probably have this become an underlying thing. Now that we have the speed, maybe people say hey, we can maybe take the security without sacrificing the performance. >> That's the trade off. >> Pulled on the thread, you mentioned Firecracker. 
There's still this tension between what's happening in Kubernetes and serverless. We saw Knative is a hot topic point. It's probably natural that there's some tension there because it's like oh wait, why do you need to learn any of this stuff because if serverless will just make it as a service and make it easy and you don't need to learn all that container stuff and everything, what do you say? >> If you're a Kubernetes user, if you really think about the very broad definition of serverless, meaning I'm not managing the database, I'm using a managed database, serverless database. Storage, I'm using S3 or Google Cloud storage, serverless. Your load balancer, also serverless. So most people in the Kubernetes ecosystem, networking, serverless, storage, serverless, their database, serverless. The only thing that you can say isn't serverless is this compute component, everything else is. Now people are looking at serverless as this spectrum. How serverless are you? If you're on-prem and you buy a server and you rack it and install Kubernetes, you're less serverless, you're probably not serverless at all, no matter what you do. Now, if you put a lot of work in, you can probably put a serverless interface on top. This is what native is designed to do for people. Maybe you have an organization that supports multiple businesses inside of your org. They may not know anything about Kubernetes. You just tell them hey, put your code here, it will run, oh, that feels serverless. You can provide a serverless experience. The delta then becomes what can we do between a container and a function, so the foundation of my keynote is exactly that. What does it mean to take a container and put it into Lambda? What do you have to change? In my presentation, I don't even read write the code. There's a small shim between the two worlds because you're already using managed services around it. We're not talking about throwing away Kubernetes and then starting over our entire architecture. We're swapping out the compute layer. One is a subset of the other. Lambda is about events and functions, Kubernetes is about container and run it however you want. You want to run it when an event comes in, that's native. You want to run it as a batch job, run it as a job. You want to run it as a long running service, run it as a deployment, so that's all we're really talking about here. When we break it down, you're just talking about compute. >> You talk a lot about automation in the CI-CD areas, that differentiation where the value is. In a world as automation goes faster, what does Kubernetes look like when it becomes automated away? Because I don't want to manage anything, why even have managed Kubernetes? It should just automatically, you mentioned the patching. In an automated world, is Kubernetes just running under the covers, how does Kubernetes look down the road in your mind, in terms of when automation comes in? >> I've been in this game maybe over 15 years and one thing holds true: most developers want to focus on the business logic. We hire them because that's their skillset. When they check in code, it would be really nice if you can take it from there and get it where it needs to be. That's been the holy grail. We see it in mobile, you build an app, you put it on the App Store, Apple gets it to every device on the planet, done. Now it's the server side turn to do this. 
Whether you're doing serverless functions, Kubernetes, VMWare, or Linux, if you have CI-CD in front of any of that, the developer can still have the same experience. I check in code and you're picking a different deploy target. If you did that five years ago, and you understood it, and you were using, let's say maybe Mesos or just VMs, you bring in Kubernetes, you don't even have to change this part of the equation. This is why I tell most people, just focus on this endgame. My keynote last year was about this is the endgame because this is your coacher, this is your change management process, this is your discipline, and this is just a target where that compute goes. >> Alright, we've got two minutes left. I want to get your thoughts and share with the audience who's not here, a big waiting list, I know there's some lobby con going on all around Seattle, people flew in. Great place too to actually have some good lobby con meetings around the lobby area. So what's happening here, in your mind's eye, now you're not in the throes of all the events, you're kind of in the wild here with us, everyone else. What's the top story, what's going on, what's the vibe, what are you extracting out of all this activity as a top story, top level stories here? >> I think everyone's finding their place. If you're a security vendor, you kind of know where your line is, right? I've got this Twistlock shirt on. They want to plan a world where they need to integrate closer to the developer workflow, not just on the infrastructure side. If you're selling load balancers, service mesh is a thing, where do you fit in? The lines are getting a lot clearer. Kubernetes is starting to say maybe we should stop here. Maybe service measures should take it from here and that's where Istio comes in. Traditional vendors can now play in this well-defined space. On the storage side, what are you integrating? Now we have the storage interface, like the container storage interface. Now, if you're a net app, you know where you fit into the puzzle. You don't need to have your own Kubernetes distro. Two years ago, everyone was trying to come out with their own Kubernetes distro so they can actually have an anchor. Now you're like, ah, now I know where to play and now we also know what's missing. After years of doing this, people look back and say there's a lot of stuff missing. It's OK now to go create something new. >> It's a clear visibility into the landscape. What about the impact to end users? What is notable in your mind in terms of highlights, impact to end user organizations really going through this quote digital transformation, which is very cloud-based of course, but they're certainly changing and impacting, what's your thoughts on the end user? >> We're using some of the same words now. Forget the technology piece, now we can all start to talk about the same things, so when we say container, we kind of now are talking about the same thing. When we start to talk about sidecars, whether that's a service mesh, Envoy sidecar, or something that adapts your existing code to the new world, now that we're using the same language, we can actually talk. Traditional enterprise can talk to the startups and have a meaningful conversation. >> That's awesome, any other observations here in terms of the size of the show? Got a lot more activity, feels a little bit like re:Invent, I'm bumping into people, swimming through the crowds, the swag's hot. 
>> It's 8,000 people here and it feels like there's more users that know nothing about Kubernetes so even though we're about five years in, it reminds me of when we were just getting started. >> Lot more work to do but great, congratulations on all the work you've done Kelsey. Really appreciate you taking the time every year to come on theCUBE. We love having you on, great commentary, great keynotes, very entertaining. Thanks for coming on, appreciate it. >> Awesome, thank you. >> I'm John Furrier, Cube here with Kelsey Hightower telling us about all the breakdown of KubeCon, CloudNativeCon, the beginning of the cloud tsunami is happening, certainly changing businesses, changing open source, it's changing, it's on a global scale. We're here with coverage for three days. We'll be right back with more after this short break.
Adrian Cockcroft, AWS | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Welcome back to Las Vegas, everybody. I'm Dave Vellante with my co-host David Floyer. You're watching theCUBE, the leader in live tech coverage. This is our third day of coverage at AWS re:Invent 2018, our sixth year covering this event that keeps getting bigger and bigger, Dave. At 53,000 people, this place is still jammed and we still barely have our voices. Adrian Cockcroft is here, he's Vice President of Cloud Architecture and Strategy at AWS, very well known in the industry, a CUBE alum. Thanks so much for coming back on. >> Thank you. >> We've been to all of the re:Invents but one, and we watched that one remotely and hung on every word, back when there wasn't a lot of information about AWS. Now it's too much information to process—it's going to take us months to sort through it all. But at any rate, it's a phenomenal opportunity for us to learn, to share, to inspire folks, and you do some great work. Talk a little bit about some of the fun stuff you're working on in your current role. >> Yeah, I have a few different things I do. One part of my role is I go around the world giving keynotes at AWS Summits—mostly I call it doing a Werner Vogels impression, it's his deck and I get to present it around the world. So we have to digest all of this stuff into a 90-minute deck that we can take around the world, and what do you leave out? It's harder and harder every year, so that's a lot of fun. But the team that I run for AWS—I've been recruiting and running it—is around open source. We sponsor various events, we're members of various foundations, we make contributions to projects, and we've been helping that by hiring people from the open source communities into AWS to help some of the AWS service teams with their launches of open source related projects. What's been happening this year is we've had like a hundred blog posts related to open source, lots of tweets, lots of activity, lots of events—OSCON, All Things Open, and KubeCon, so I'll be there in a couple of weeks, probably exciting to you guys as well. But this week there are a few of the launches where we got quite deeply involved. We did a post on the open source blog at almost the same time as Jeff fires off the "okay, here's the service" post—here's the open source part of it, this is how you contribute, and this is what's going on. So we've had some fun with that. >> It was two years ago when we first met; you'd just been on the job for about a month at that particular time, and you laid out what you wanted to do, from your previous experience, about how you wanted to turn AWS into an open source contributor. How would you rate yourself after two years? >> I think we've made some good progress. AWS was making contributions to open source but had nobody talking about it, and it was nobody's job to go out and explain what we were doing. That was part of the problem two years ago—there was actually more happening than most people knew about, but we were just not telling the story, so it wasn't coming across well. >> And the culture? >> The culture, I mean, it was spotty. Some parts of AWS were doing a lot of open source, other parts were kind of not really seeing it as a priority. So by talking a lot more about it we get a more uniform acceptance across AWS—it's a huge organization—and Amazon as a whole. We are actually telling that story now, a much broader story than just AWS, and being able to bring that and get everyone going, "Oh, I see everyone doing it, so I should be doing that." It helps create the leadership for more teams to follow. What we've seen is really the first year was building the team, and the last year has been getting the content flowing and getting the processes working for all of the different events and blog posts and the outbound part, and getting an increasing number of contributions and launches. Corretto launched a few weeks ago, and that was an example where a lot came from my team, from Arun Gupta on my team. He's a Java Champion, he used to be at Sun, he worked at Red Hat on JBoss, so he knows everybody in Java and has great credibility across the Java community. And he said, we should launch this product in Belgium at like midnight or so West Coast time, and let's fly in James Gosling and, like, secretly get him on stage without anyone knowing he's going to do it, and have him do the introduction. It was this totally crazy idea and it came off beautifully, and we even had the Oracle Java people saying nice things about it. The contributions to OpenJDK—it's just a really nice example of figuring all that out, getting everybody on board, getting everything done right, and then saying here's something that matters to the community that we can contribute. The star power helped, but convincing James to do it—Arun deserves a lot of credit for that particular launch. But this is the kind of people I have on my team, and we're pulling them in and pointing them at, okay, can you help this team figure out how to take this open source project to market? >> I mean, that was a major contribution to the open source community, and it was just in time, wasn't it? Although another slight view might be that you and Oracle should have been working this out, and not leaving it until the last minute. >> Well, we were doing this work anyway, right? We were effectively self-supporting our own version of Java internally—we were getting better performance and sooner bug fixes on OpenJDK—so we made a decision to just move to the OpenJDK stream, and we were unhooking our internal use of the other options we had. In a very large organization, over time you acquire lots of different versions and flavors of Java, and that's just this one language. So we said let's clean it up, let's get JDK 8 and 10, we're self-supporting it. Then we announced, okay, we will support it on Amazon Linux. And the final step was customers saying, please just support it on my laptop and anywhere else I need it. The thing we didn't announce then, and didn't make a big thing out of, was Arm support. It was in there by default, but we didn't talk about it because the Arm chips came out this week. And part of it was also having exactly the same version of Java on all of the Amazon Linuxes, even across Intel, AMD, and Arm, so that helps the compatibility for people going, well, it's a different processor architecture—it all ties together. >> So it was all part of the thinking—you didn't want to tip your hand on the announcement. >> That's right. >> Okay. So I think sometimes AWS is misunderstood, partly of its own doing. I mean, you just mentioned you contribute a lot to open source but you never talked about it. Generally, when AWS doesn't have something to say, they don't say a lot about it, so others are left to make the narrative. You come on, you've now got an open source agenda—can you just summarize what that motivation is and what the objectives are? >> Well, we have lots of different pieces of this. You have service teams saying, I'm going to launch this product and there's an open source component to it, can you help? Sometimes that means I hire someone on my team to specialize in that area, sometimes it's just consulting with the team, or connecting them to the open source community. So that's one piece of it. If you think about the CNCF in particular, the Cloud Native Computing Foundation, that's got lots of projects, and if you think about the AWS service teams, no one team really owns the scope of the CNCF. But my team has that ownership for the CNCF as a whole—we have the board seat—and we say, okay, we have the serverless people over here, we've got the container things over here, there are some Linux kernel virtualization bits here. We can reach out to lots of different teams across AWS but act as a central point: if you have something about open source you want to talk about with AWS, or Amazon as a whole, you can come to us and we'll find the right people and help you make those connections. So part of it is acting as an on-ramp, a buffer between the internal and the external concerns of the communities, so there's somewhere to go. And partly it's just getting contributions out there. We could get criticized for not making enough contributions—well, we've been making more, and we're making more, and we'll just keep making more contributions until people give credit for it. If you ask what the strategy is: contribute more, then tell people, point at it, hope people like what we did, and take the input. It's the customer-driven thing, right? We're going to do what our customers ask us to do, and the customer community focuses the things we want to do. We've been contributing to Spinnaker, the Netflix OSS project—we made some serious contributions to that this year—Firecracker, which we'll talk about in a bit, and RoboMaker. Those are all areas where we've been working. >> Firecracker is particularly interesting, isn't it? I mean, that's a major contribution, improving the performance and capability of those microVMs. Can you talk about that a little bit? >> Yeah, it's interesting because it's a piece of software pretty much no one will ever see or use. It's the thing you run on the bare metal that lets you run your containerd, that lets you run your container on top, right? It's deep down in the guts of the system, this piece of code. There are a few reasons we're using it—it's in production now, supporting some of our production use of Fargate and Lambda. It's not a hundred percent rolled out, but there's a good chunk of the capacity running on it, and that's where it turns out to be useful. I don't know how long we have to get into this, but if you think about a customer running a Lambda function, we would create a VM with that Lambda function in it. If they wanted a second Lambda function, we'd put it alongside that one. Another customer comes and we start a new VM for them and start a Lambda function in that. VMs take a while to start up, so you have to have some pre-made, sitting there waiting, but these are big VMs and we're putting lots of little functions in them. What Firecracker lets you do is start a separate microVM for every function and safely put all of the customers on one machine, so you start packing them in. It's a much more efficient way to run your capacity—our utilization of those machines supporting Lambda is vastly higher than having a machine with a bunch of empty space in it that we're keeping running for the customer. So that efficiency is the thing, and then the speed of starting a VM. It's a very cut-down VM, so it's 125 milliseconds just to start the VM, which is incredibly fast when you think, hey, give me a VM on EC2, you're in, kind of, 30 seconds to a few minutes—like, a 12-terabyte VM takes a little while to boot up, but you don't have to pay for it till it's finished booting, which is one of the good things about these huge machines. >> How about RoboMaker? Can you talk a little bit about that and why it's important? >> So RoboMaker is interesting. On the open source blog, which we posted late on Sunday night, early on Monday morning, I did an interview with Brian Gerkey, who's the founder of the Open Robotics foundation. What we've done there is kind of an extension of SageMaker. If you think about that being AI—I can deploy an AI model—what does the AI model want to do? It wants to read something from the real world and modify the real world, so it reads from a camera or some sensor and then controls motors and servos. That's what RoboMaker does: it wraps the intelligence you can build with SageMaker with the Robot Operating System, which has a library of actuators and a library of control algorithms. You've got a little brain in the middle and you've got a new robot that does something. And we had the robot racer, the little racing car, where all of these things come together to make a little toy race car that we can drive around tracks—which is a whole other topic we could get into. But I interviewed Brian on the history of ROS, the Robot Operating System: where did it come from, and what is the hard thing about running it? It turns out the hard thing with ROS wasn't building the robots, it was simulating the robots. The simulator is quite a CPU-intensive job, it's graphics intensive—you've got this virtual world you're running, and VR worlds are quite intensive—and getting that installed and running was the hard part. So what RoboMaker is, is that as a service. The simulator is called Gazebo, just a funny name, so Gazebo as a service is the actual service that effectively we're charging for, with a free tier so you can play with it, and then we charge you for simulation units, like how much computing time you're using. The rest of it is Cloud9 for the front end, and deployment out to fleets of robots, updating them and managing them. The interesting thing is who this is reaching. The FIRST robotics thing is high schools—high school robotics competitions are interested, universities are interested as well—so it's not just for commercial production robots, it's the whole training thing, we're getting into STEM education. Kids like playing with robots, and we're pulling all this in. So now you can go home and take the latest, most advanced AI algorithms that you used to have to be doing a PhD at Stanford to be playing with, and play with your kid over Christmas and see what you can come up with. >> Really simplifying the whole software development side of that. When you look at the Dean Kamen competitions, they're just awesome. All the kids gravitate to the hardware because they can touch it; the software was really hard, and this is going to, I think, take it to a new level. And it's all open source. >> Yeah—somebody was complaining that what we'd done was some proprietary robot thingy with the toy cars, and I pointed them at the GitHub URL: you can go build this thing, it's all open source, you can put anything else you want on it. But the robot car has ROS on it, the Robot Operating System, SageMaker, RoboMaker, all combined together, and they're off running races and having fun. >> Now, you guys are both Formula One fans, and you've been having some high-profile Formula One folks here, you've got the little mini vehicle—riff on that. >> It's not really open source, but it's another thing I'm doing on the side. Over the last year or so we started looking for opportunities to do sports sponsorship, with a particular focus on Europe and the rest of the world. We had a few US sports—I don't know, something with balls—I like sports with wheels. So about the middle of last year, like this June, we announced the deal with Formula One, which is a multi-part deal. Part of the deal was just taking them to the cloud. They had some data center stuff they were running, they were running out of space, and they wanted to do a technology refresh, so for all the reasons that everyone else is moving to cloud, we're moving the sport's core infrastructure to the cloud over some number of years. That's a process that's starting, and part of that is the archive of all Formula One races. It's a treasure trove, like 67 years of archive of everything. They've got all the videos, we're digitizing it, and we're going to figure out what to do with it—we've got to process it to label everything. So that's one thing. Then we all turned up at Silverstone in the UK at that race—it was the week after the announcement—and at that race we had AWS logos turning up on the screen, because another piece was sponsorship. We started sponsoring the core video feed that Formula One puts out to the world, and 500 million fans watch Formula One. So now, for the next few years, 500 million fans are going to see AWS logos on screen around the analytical insights of what is going on in the sport: the rear tires are overheating, you went around a corner this fast, here's the pit stop strategy. So it's brand advertising associated with a high-technology sport and analytical insights, and that's why we did that deal. And they get all of our technology, AI, a lot of help migrating. Then the third thing, that I got involved with: I'd already done a few CIO summits at Formula One races along the way, so I was kind of trying to poke my way into this thing that was happening—I wasn't involved in setting up the sponsorship—and we decided to do some executive events around Formula One. We'll pick a few races, we'll have some corporate hospitality-like things, but when you put a bunch of senior executives together for a few days, they share, they solve each other's problems, and you just get out of the way—the people that have solved one problem will share it with the others. It's like a tiny re:Invent: right here everyone is sharing—if you sit next to someone, what problem have you solved, you can find stuff out—so this is a concentrated version of that. We trialed it at Monza earlier this year and it went great. >> Amazing. >> I mean, it's fun, and it's next to the business. So finally it was like, can we get someone from Formula One on—okay, who's in Abu Dhabi on Saturday, can we get them on Sunday night for the launch, for the robot car? This is, like, a top guy in Formula One, got here from Abu Dhabi by Wednesday morning—I'm just happy that they got here. >> Yeah, that was huge. Adrian, theCUBE team has watched your career; you've been somebody who shares his knowledge and has done some great work, so thank you so much for coming back on theCUBE. Congratulations on all your great work. Andy Jassy's coming up next, we're excited about that. Keep it right there, everybody, we'll be back with our next guest, Andy Jassy, CEO of AWS, right after this short break. [Music]
**Summary and sentiment analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brian Gerkey | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Belgium | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Abu Dhabi | LOCATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
Abu Dhabi | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Brian | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
125 milliseconds | QUANTITY | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Sunday night | DATE | 0.99+ |
Saturday | DATE | 0.99+ |
67 years | QUANTITY | 0.99+ |
James Gosling | PERSON | 0.99+ |
90-minute | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
JDK 8 | TITLE | 0.99+ |
Wednesday morning | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Silverstone | LOCATION | 0.99+ |
James | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Monza | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Arun Gupta | PERSON | 0.99+ |
ROS | TITLE | 0.99+ |
53,000 people | QUANTITY | 0.99+ |
UK | LOCATION | 0.99+ |
two years ago | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
Werner Vogels | PERSON | 0.99+ |
10 | TITLE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Christmas | EVENT | 0.98+ |
eight | QUANTITY | 0.98+ |
two years | QUANTITY | 0.98+ |
third day | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
JDK | TITLE | 0.98+ |
500 million fans | QUANTITY | 0.98+ |
12 terabyte | QUANTITY | 0.98+ |
Cockcroft | PERSON | 0.98+ |
two years ago | DATE | 0.98+ |
one machine | QUANTITY | 0.98+ |
sixth year | QUANTITY | 0.97+ |
500 million fans | QUANTITY | 0.97+ |
Intel | ORGANIZATION | 0.97+ |
this week | DATE | 0.96+ |
US | LOCATION | 0.96+ |
this year | DATE | 0.96+ |
one piece | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
first year | QUANTITY | 0.96+ |
this week | DATE | 0.95+ |
AWC | ORGANIZATION | 0.95+ |
one part | QUANTITY | 0.95+ |
third thing | QUANTITY | 0.95+ |
earlier this year | DATE | 0.94+ |
Formula One | TITLE | 0.94+ |
one thing | QUANTITY | 0.94+ |
Netflix | ORGANIZATION | 0.93+ |
Formula one | ORGANIZATION | 0.93+ |
West Coast | LOCATION | 0.93+ |
CNCF | ORGANIZATION | 0.92+ |
Stanford | ORGANIZATION | 0.91+ |
second lambda | QUANTITY | 0.88+ |
Jeff | PERSON | 0.88+ |
next few years | DATE | 0.87+ |
Eron Kelly, AWS | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hey, welcome back everyone! It's theCUBE's live coverage, day two of three days of wall-to-wall coverage. Keynotes, amazing announcements, great vibe here. Again, 52,000 people here at Amazon re:Invent. I'm John Furrier with my co-host Dave Vellante, we're here again, 6th year, just an amazing amount of content, two sets. Eron Kelly is the General Manager of Enterprise Services Marketing, basically part of marketing for a lot of these areas. Great to have you on theCUBE. Thanks for spending the time! >> It's great to be here John and Dave, I really appreciate it. >> Alright so, how hard is your job? (all men laugh) I mean, we were just doing an analysis of Andy's keynote and there's so much to talk about—you have so many things under your purview. You have a broad portfolio of Amazon services. It's really crazy good for you guys, the business is going great, highly profitable. How do you do it all? How do you keep track of it? What's your favorite child? Tell us. >> It's a great point, there's so much going on and the speed and the pace of innovation, but it's exactly what builders are looking for, right? They want to be able to come to AWS and not have to compromise, because they want to see every tool that they need for every job. And so for me, I have a pretty broad portfolio. I've really been excited this week about a lot of our compute announcements, right? So our announcements around our new instance family with A1, based on some custom silicon, the AWS Graviton processor. Really excited about that. Bringing the Arm community to the Cloud for the first time, we're jazzed about that one. >> And the motivation there is lowering costs for Arm-based apps, right? >> It's really two-fold, right? It's bringing the Arm community to the cloud—it's the first time we've got an Arm processor in a large Cloud provider—so giving them that scale-up, elasticity, pay-for-what-you-use kind of model. But then the second thing is lowering costs for customers for scale-out workloads, so things like web tiers, containerized microservices, and in the general purpose area we think customers can save up to 45%, which is meaningful, so we're really excited about that. That's been a really neat announcement, a neat project we've been working on for a while. I would also say in the area of Compute, we've added 100 Gbps networking to a couple of our Instance types. C5n, based on Intel processors, and that's really been the workhorse in the HPC community, and now with 100 Gbps of networking we're going to be able to do even more processing, more power, more advanced scenarios there. And it's kind of an interesting dialogue, right? The more data you have, the more compute you need, and it creates this virtuous cycle, and one of the gaps was networking, so bringing 100 Gbps really allows those Intel chips to run. >> Well, Big Data guys are going to eat that up too, I would think. I mean, the links between Big Data and HPC become clearer. >> Exactly right. >> Talk about the latency thing. Andy talked to me last Monday about this, I had a long conversation with him about it. Latency matters, you guys listen to customers. Networking, you mentioned, is a key part of the flywheel—compute, storage, networking obviously morphing with the Cloud while you guys are optimizing and raising the bar. How are you guys handling the latency question?
Because this comes up a lot. You've got the on-premises piece, now you guys are doing things in the Cloud. How do you market that service? How do you handle the latency? Talk about the role of latency in the portfolio. >> There's a couple things in there. The first one, what I would highlight, is latency within an HPC environment or a machine learning environment, and that's where, again, this 100 Gbps networking has been really powerful. We've also announced a new networking feature, or protocol, Elastic Fabric Adapter, which actually allows you to go even faster now in some networking scenarios, which is particularly interesting, again, in HPC. So we've really worked hard in reducing the latency throughout the data centers for these higher-end compute scenarios. >> Well, you have the custom silicon. I wanted to just take you through this, because EC2 has always been great, but setting up an Instance could take 10 seconds; Lambda, now you can do it in hundreds of milliseconds. By having the custom silicon, how does that impact the network stack? Because I would imagine that the performance gains of having kind of a custom silicon around EC2 and the Compute would be a gamechanger for running things under the covers, for instance, like a VMware, or managing security boundary issues—what's the... >> So we made these investments a few years ago with the Nitro system, where we took a look at the current Instance environment and said, hey, if we can offload some of that networking to purpose-built chips on those cards, we can actually free up more capacity for those processors to run faster and give you more value, basically, for each Instance type. So that was part of the beginning of Annapurna Labs, and the Nitro system was offloading this networking into those custom-built chips. That's the start, and then the Nitro system has allowed us to innovate much faster. So we've added three times more Instances this year than last year because we're building on the Nitro system. The Graviton processor is just one more example. We've added new processors from AMD, and of course we've continued to advance with our Intel chipset as well.
Talk about—I just want to change gears for a bit, because you're in product marketing, Executive General Manager. Andy talks about the new way, the culture at Amazon, old guard, new guard... Traditional product marketing, you can take a product, you can bring it to market, waterfall it out, and then you do all the activities. How do you raise the bar in your job, where you've got to go out and take not just products—this is services now—so you have a series of, a lot of services. How does that change how you do product marketing? And how's that different from what people who might not know how you guys operate would expect? Talk a little about the culture of product marketing at Amazon. >> Sure, yeah. So I would say first and foremost it's all about education, right? We really want to make sure that whenever there's a new service that comes out, we're super clear on what does it do, what is the benefit, and how can customers take advantage of it. And we're trying to position it in a way—we like to say internally—that a non-technical CIO can understand. So whenever you look at a new service, you look at our detail pages, we put a tremendous amount of rigor and clarity in. We make it very simple, make the value pop clear. I think the second thing we're starting to do, and we're seeing it reflected in our products as well, is how can we tell more aggregated stories? So today during the Keynote, you saw Andy talk about abstractions, right? And one of the first ones he mentioned was Control Tower, which is one of my products, so I got a little passionate around it. But what's interesting about that is that customers are coming to us and saying, OK, I love AWS, I love all these tools, I love the granularity of managing things at a certain level and setting policies at a certain level. But you guys have thousands, millions of customers running AWS—what's the right way to set up my environment? Can you give me a blueprint to do it? To set it up and run it in a very secure and compliant way. So Control Tower is a great example of both a service that helps you do that, as well as a marketing message that says, hey, let's look at this now in totality. It lets you set up these environments faster based on best practices, and now you can control them in a much easier way. >> So you're basically trying to simplify the message so it's not speeds and feeds. >> Well, what I would say is we want to simplify the message so that everyone can understand, but we don't want to lose track of those builders, those tinkerers that get in there—they want the speeds and feeds, they want the knobs, and part of their differentiation as a developer is understanding all the details. So we want to have both. >> It's also trying to help people figure out what to use where. As your portfolio grows and grows and grows, the complexity becomes amazing for some people. And helping me figure out mapping to my workload—what to use where, what's the best cost solution—sometimes it's hard to figure that stuff out, isn't it? >> Yeah, well, again, it's this balance: we want to be able to provide the right tool for the right job. I think Andy had a nice analogy today in the Keynote, where he talked about fixing a house with just a hammer, right? Instead you're going to want to have that right tool for that right job. And so part of our job in Product Marketing is making it very clear: when do I use this particular Instance type versus this versus this? What are the trade-offs? And that's a key part of our job. >> And that resonates with people, because there's a lot of redundancy in tools, too, in the marketplace—a lot of people have the same tool, the same hammer—and you guys have a variety of services. So the question I've got to ask you, though: as you look at the services and Amazon's role here at the event, how would you summarize what's going on here? Because there's so much. Andy had a slide up there that said "Signal from the noise," and that's our phrase, extracting the signal from the noise, which is kind of fun, but you have so much signal—how would you encapsulate, for someone watching, what is happening this year? Where is Amazon for customers? What's the positioning? How should they think about it—you mention builders? How would you summarize the action going on here? >> Right, so I would talk about it like this. First and foremost, I would say we're adding more capability and building the broadest and deepest platform so that builders never have to compromise—they can always find the right tool they need for the right job. So first and foremost, that speed of innovation, that pace of innovation has continued, and if there was one message that people should take away, it's: wow, they are still innovating like crazy, they have an amazing amount of technology, and so I don't have to compromise when it comes to data lakes.
So that's kind of the first main message. Then I think the second thing I would say, which kind of follows on that, is: OK, but we also recognize that we've got a lot of services, and now we need to start to build some services that bring these together. And again, Control Tower is a great example of that; Lake Formation is a great example of that for data lakes. And so that's the second thing: we are starting to create services that are abstraction layers, that bring together a lot of the details to solve very specific problems. The third thrust that I would highlight is just the amazing focus around machine learning and AI, and just how that has been such a key investment area for us and such a key ask from our customers, and our mission there is to democratize it. We want every builder to be able to bring machine learning and add it to their solutions. And the number of services you saw announced today in the keynote, as well as some earlier this week and last week, just shows our commitment and focus on that. >> And extending EC2 to support some of that stuff. >> Exactly right. >> And the training and the like. >> I'm a Star Trek fan, so I always go to Scotty—Scotty, more power! You guys are bringing more power to the table with each Compute and these abstractions. I want to get your thoughts on something that I talked with Andy about in depth last week before the show, and we were riffing on this notion of a new kind of developer emerging—he talked about it in the keynote, a new persona of developer. A new kind of developer is emerging. And he also kind of talked about net new workloads—I wrote about it in my stories on SiliconANGLE and in the forums—which is all this goodness going on at the abstraction layer, with a lot of horsepower, enabling things that were hard to do. AI is a great example: AI has been around since I was in college in the 80's and 90's, and now it's risen up with power. What do some of those developer personas look like? How do you look at those net new workloads? What's your reaction to that? Because this seems to be a big trend—that's not your old-school developer banging out code; now there are open source communities, we've got that, but in the working day and life of companies, people are building apps. What does this new persona of developer look like? >> Well, there are different personalities, right? There's the core tinkerer, like you talked about; there's now the data scientist that's coming in and taking advantage of these machine learning tools. You have kind of a cloud administrator that's trying to look across everything, and they want to build as well, right? They don't just want to sit there and manage the dashboards, they want to build as well, and so we're seeing that in some of those personas. Of course, app developers is another big part of it. >> Now you could talk to Firecracker too, right? >> We could talk a little bit, yeah. >> So I met with Adrian yesterday, and of course people used to poke at Amazon a lot: hey, what about open source, are you guys giving back to open source? And so Firecracker—explain that, and sort of what you guys are doing there, specifically in open source? >> So that's a great example of where we had some technology—and what Firecracker is, it's a container for microservices that can run in a non-virtualized environment, and we've used it as the underpinning for Lambda and Fargate. And we looked at it and we said, you know what?
We want more people to be able to take advantage of this because it's about saving money it's about improving security. And so we decided to open-source it. And so that was one of our announcements on Monday was open-sourcing Firecracker and making it available to the community and so we're really excited about it. >> One of the things I want to ask you as we wrap up here, first of all great job on all the work you've done at Amazon impressive to see the level of services >> Thank you! That you guys are announcing and it's become a competitive advantage and you've got a great trajectory a lot of learnings and as Andy says there's time compression for experience and time which is good for a competitive strategy but as you look forward to 2019 What's your plan? What are your goals? How are you going to raise the bar? The term you guys use a lot. What's your goals? What are you trying to accomplish? >> You know the number one thing that we spend our time on is listening to customers and saying what's next? What do you need next, right? 90% of our innovation comes from customer input and so now we got a new wave of services we're introducing we're going to spend time with customers they're going to give us feedback we're going to make those services better and then we're going to find new places where they want us to go so next year is going to be just as exciting as this year and next year when I see you guys here we're going to be talking about a whole new wave of things coming out, it's been fun. >> You're certainly running hard and the other thing I've noticed in learning how Amazon works and getting deeper under the covers there you got a growing field, great professional services and a sales force that is not trying to grab the wallet from the customer you guys have a long game perspective >> That's right >> Carla, had a great conversation with her about this you have to service that How are you going to enable these guys You mentioned education earlier this is a big part of your plan, right? The integration with the field how does that work? You going to provide the messaging all the tools...how..cuz that's grow you've got to service that What's your perspective on the field >> Oh for sure, you're highlighting my Q1 goals right now It's really important to dial up that connection because as we get more and more services our field sellers, what's great about our field teams is that they're so aligned to customer needs So they don't carry specific quotas on individual products and things like that they're really focused on hey, what do ya need? And how can use the full portfolio to help you out? And so part of our challenge as a product marketer is not only educating customers on our products but educating the field on our products and which ones are most viable for which scenario and so that's a big part of our focus as well within the product marketing function is hey how do we really nail these scenarios very crisp, very compelling both for the customer and field seller to say ooh I saw that pattern in a customer! 
Let me go bring this technology forward and talk to them about it, so really excited about next year and you hit on something else that I think is really important which is this long term view, is our sales teams have always taken a long term view with customers they're not sitting there at the end of the Quarter trying to, you know, close the deal it's all about that long term view And it's allowed us to make some of these investments we've had, we've made >> You guys also use your own technology too I noticed a lot of the different groups You've got all the goodness of Cloud. You got to use some of that tech You've probably got some machine learning waiting around the AI bots and all kinds of cool tools you're integrating in dog-fooding or what do they call it? >> I mean, the reason why we're able to add so many services every year is that we build on our own platform, right? So you can have a very small team we talk about the two pizza team A very small team is able to build these services because they use AWS and in its entirety. And so it's very very exciting as these things get connected Like last week we talked about a predictive auto-scaling, so one of the features Auto-scaling's been a pretty popular feature over the years where people can scale up and scale down a large fleet of EC2 Instances but now we've added machine learning to that where it will now predict when the scaling should happen. So it allows you to scale up your EC2 Instances ahead of time based on historical patterns. So there's ML coming into everything we do. >> And server-less is booming too you know, that's going to be a big part of your focus by the way, you mention the Fleets I love this, we haven't talked about it much on theCUBE, but this notion of fleets is pretty powerful Having just a bunch of fleets of servers ready to go >> Ready to go and being able to manage across the different pricing models is also very, very powerful. I want to really ramp up very very quickly take advantage of those spot instances in those moments with really big cost savings as well as ramp back down >> Hey, keep adding functionality keep removing all the barriers lowering the price, making it high performance and I love that business model a lot of companies don't have that so Congratulations! >> You know, we just always want to add more we want to give customers more tools so they can have the right tool for the right job we want to give them the most powerful platform so they can do the highest end things as well as give them scenarios where hey, it's a little bit lower cost and it's for smaller workloads we don't want to overpay and be over-provisioned that's a key part of our strategy. >> I want it all and I want it now >> That's right! >> Well, you did a good job on the messaging Andy wants to mention builders and right tool for the job I think there's a drinking game going on on that he mentioned multiple times Congratulations Eron Kelly, Thank you! >> Thanks John and Dave, Really appreciate your time on theCUBE. >> General Manager Amazon Product Marketing here inside theCUBE, breaking down what's going on, what's his goals how Amazon keeps up with the pace Good insight I'm John Furrier, Dave Vellante Stay with us for more coverage after this short break. (upbeat music)
SUMMARY :
John Furrier and Dave Vellante talk with Eron Kelly, General Manager of Enterprise Services Marketing at AWS, on day two of re:Invent 2018. Kelly highlights the Arm-based A1 instances built on the AWS Graviton processor, 100 Gbps networking on C5n instances and the Elastic Fabric Adapter for HPC, and the Nitro system that lets AWS ship new instance types faster. He describes product marketing's job as educating both customers and the field on which service fits which workload, points to abstractions such as Control Tower and Lake Formation, the open-sourcing of Firecracker, and machine-learning-driven features like predictive auto scaling, and says customer feedback will drive the 2019 roadmap.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Eron Kelly | PERSON | 0.99+ |
Carla | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
100GB | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
10 seconds | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
100 GB | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
90% | QUANTITY | 0.99+ |
Lake Formation | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Adrian | PERSON | 0.99+ |
last year | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
Annapurna Labs | ORGANIZATION | 0.99+ |
Monday | DATE | 0.99+ |
three times | QUANTITY | 0.99+ |
52,000 people | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
one message | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
three days | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Star Trek | TITLE | 0.99+ |
this year | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
First | QUANTITY | 0.98+ |
AMD | ORGANIZATION | 0.98+ |
6th year | QUANTITY | 0.98+ |
last Monday | DATE | 0.98+ |
hundreds of milliseconds | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
this week | DATE | 0.97+ |
90's | DATE | 0.96+ |
earlier this week | DATE | 0.96+ |
each | QUANTITY | 0.96+ |
Lambda | TITLE | 0.96+ |
day two | QUANTITY | 0.95+ |
One | QUANTITY | 0.95+ |