Keynote Enabling Business and Developer Success | Open Cloud Innovations
(upbeat music) >> Hello, and welcome to this startup showcase. It's great to be here and talk about some of the innovations we are doing at AWS, how we work with our partner community, especially our open source partners. My name is Deepak Singh. I run our compute services organization, which is a very vague way of saying that I run a number of things that are connected together through compute. Very specifically, I run a container services organization. So for those of you who are into containers — ECS, EKS, Fargate, ECR, App Runner — those are all teams that are within my org. I also run the Amazon Linux and Bottlerocket teams, so anything AWS does with Linux, both externally and internally, as well as our high-performance computing team. And perhaps very relevant to this discussion, I run the Amazon open source program office. I've been at AWS for over 13 years, almost 14, involved with compute in various ways, including EC2. What that has done is given me a vantage point of seeing how our customers use the services that we build for them, how they leverage various partner solutions, and along the way, how AWS itself has gotten involved with open source. And I'll try and talk to you about some of those factors and how they impact how you consume our services. So why don't we get started? For many of you, there are two ways to look at AWS and open source, and Amazon in general. One is the number of contributors we may have, and the number of repositories we contribute to. Those are just a couple of measures. There are people that I work with on a regular basis who will remind you that those are not perfect measures. Sometimes you could just contribute to one thing and have outsized impact because of the nature of that thing. But that being what it is, increasingly we look at different ways in which we can help contribute to and enhance open source, 'cause we consume a lot of it as well. I'll talk about it very specifically from the space that I work in, the container space in particular, where we've worked a lot with people in the Kubernetes community. We've worked a lot with people in the broader CNCF community, as well as, you know, small projects that our customers might have gotten started with. One example I like talking about is Argo CD from Intuit. We were very actively involved with helping them figure out what to do with it. And it was great to see how Intuit and we and others came together to think about GitOps at the Kubernetes level. And while those are their projects, we've always been involved with them. So we try and figure out what's important to our customers, how we can help, and then take part because of that. Well, let's talk about it a little bit more. Here are some examples of the kinds of open source projects that Amazon and AWS contribute to. They range from OpenJDK — I think we even now have our own distribution of Java, the Corretto open source project. We contribute to projects like Rust, where we are very active in the Rust Foundation in a leadership role as well, and the Robot Operating System, just to pick a few. We collaborate with Facebook and are actively involved with the PyTorch project. And there are many others. You can see all the logos in here where we participate, either because they're important to us as AWS in the services that we run, or they're important to our customers and the services that they consume or the open source projects they care about, and that's how we get to those. 
How we make those decisions often depends on the importance of that particular project at that point in time, how much impact it's having for AWS customers, or sometimes we feel that us contributing to that project is super critical because it helps us build more robust services. I'll talk about it on a, you know, somewhat different basis. You may have heard us talk about our next generation of Amazon Linux, Amazon Linux 2022, which is based on Fedora as its upstream. One of the reasons we made this decision was that it allows us to go and participate in the Fedora project and make sure that the upstream project is robust and stays robust. And what that ends up meaning is that Amazon Linux 2022 will be a robust operating system with the kinds of capabilities that our customers are asking for. That's just one example of how we think about it. So for example, the Python Software Foundation is something that we work with very closely because so many of our customers use Python. So we help run something like PyPI — if you're a Python developer (I happen to be a Ruby one, but lots of our customers use Python), helping the Python project be robust by making sure PyPI is available to everybody is something that we help provide credits for and help support in other ways. So it's not just code. It can mean many different ways of contributing as well, but in the end, code and operations is where we hang our hat. Good examples of this are projects that we create and open source because it makes sense to open source some of the core primitives or foundations that are part of our own services — whether these are things that we open source or things that we contribute to. And I'll talk about both, and I'll talk about things near and dear to my heart. There are many examples; I've picked the two that I like talking about. The first of these is Firecracker. Many of you have heard about it. Firecracker, for those of you who don't know, is a very lightweight virtual machine monitor, which allows you to run these micro VMs. Why was this important? Many years ago, when we started Lambda and, quite honestly, Fargate — and Fargate still runs quite a bit in that mode — we used to have to run on VMs like everything else, and finding the right VM for the size of task that somebody asks for, or the size of function that somebody asks for, requires us to provision capacity ahead of time. It also wastes a lot of capacity, because a Lambda function is small; even if you find the smallest VM possible, that can be challenging, and there are a lot of resources being wasted. VMs start at a particular speed because they have to do a whole bunch of things before the operating system spins up and the virtual machine spins up. We asked ourselves, can we do better? Can we come up with something that allows us to create right-sized, very lightweight, very fast-booting virtual machines — micro virtual machines, as we ended up calling them? That's what led to Firecracker, and we open sourced the project. And today Firecracker is used not just by AWS Lambda or Fargate, but by a number of other folks. There are companies like Fly.io that are using it. We know of people using Firecracker to run Kubernetes on prem on bare metal, as an example. So we've seen a lot of other folks embrace it and use it as the foundation for building their own serverless services, their own container services. 
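To make the micro VM idea concrete, here is a minimal, illustrative sketch — not an official AWS example — of how a program might drive Firecracker's REST API over its Unix socket to define and boot one right-sized micro VM. It assumes a firecracker process is already running with --api-sock pointed at the socket path below, and the kernel image and rootfs paths are placeholders.

```python
import json
import socket
from http.client import HTTPConnection

class UnixHTTPConnection(HTTPConnection):
    """HTTPConnection variant that talks to a Unix domain socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def api_put(conn, path, body):
    # Firecracker exposes a small REST API; each PUT configures one resource.
    conn.request("PUT", path, body=json.dumps(body),
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()  # drain the response so the connection can be reused
    return resp.status

conn = UnixHTTPConnection("/tmp/firecracker.socket")

# Right-size the micro VM: a tiny footprint that boots fast.
api_put(conn, "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
api_put(conn, "/boot-source", {
    "kernel_image_path": "/images/vmlinux.bin",           # placeholder path
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
})
api_put(conn, "/drives/rootfs", {
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",                # placeholder path
    "is_root_device": True,
    "is_read_only": False,
})
# Boot the instance.
api_put(conn, "/actions", {"action_type": "InstanceStart"})
```

The same handful of calls is what a higher-level service would make many times over when packing lots of small functions or tasks onto a single host.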
And we think there's a lot of value and learnings that we can bring to the table, because we get the experience of operating at scale, but other people can bring things to the table too, because they may have specific requirements that we may not find as important from an AWS perspective. So that's Firecracker. An example of a project where we contribute because we feel it's fundamentally important to us is containerd. We've been involved with containerd from the beginning. Today, we have a whole team that does nothing else but contribute to containerd, because containerd underlies Fargate, it underlies our Kubernetes offerings, and it's increasingly being used by customers directly, where they're running containerd by itself instead of running a full-on Docker or similar container engine. What it has allowed us to do is focus on what's important so that we can operate containerd at scale, keep it robust and secure, and add capabilities to it that AWS customers need — manifested often through Fargate and Kubernetes — but in the end, it's a win-win for everybody. It makes containerd better. If you want to use containerd for yourself on AWS, that's a great way to go; you still benefit from all the work that we're doing. The decision we took was that since it's so important to us and our customers, we wanted a team that lived and breathed containerd and made sure it stays super robust. And there are many, many examples like that, that we end up participating in, either by taking a project that exists or by open sourcing our own. Here are some examples of the open source projects that we have created from an AWS and Amazon perspective. And there are quite a few — when I was looking at this list, I was quite surprised; not quite surprised, I've seen the reports before, but every time I do, I have to recount and say that's a lot more than one would have thought, even though I've been looking at it for such a long time. Examples of this in my world alone are things like the work we've done with Amazon Linux and Bottlerocket, which is a container host operating system that's been open sourced from day one. Firecracker is something we talked about. We have a project called AWS ParallelCluster, which allows you to spin up high performance computing clusters on AWS using the kind of schedulers you may be used to using, like Slurm, and that's an open source project. We have plenty of open source projects in the web development space and in the security space, and more recently things like the Open 3D Engine, which is something that we are very excited about and that we open sourced a few months ago. And so there are a number of these projects that cover everything from tooling to developer and application frameworks, all the way to database, analytics, and machine learning. And you'll notice that in a few areas — containers as an example, machine learning as an example — our default is to go with the open source option where we can open source and it makes sense for us to do so, where we feel the broader community might benefit from it. That's our default stance. The CNCF, the Cloud Native Computing Foundation, is something that we've been involved with quite a bit. We contribute to Kubernetes, we contribute to Envoy. I talked about containerd a bit. We've also contributed projects like cdk8s, which marries the AWS Cloud Development Kit with Kubernetes; it's now a sandbox project in the CNCF. And those are some of the areas. CNCF is such a wide surface area. 
We don't contribute to everything, but we definitely participate actively in CNCF, with projects like etcd that are critical to EKS for us. We are very, very active in how the project evolves, but we also try and see which projects are important to our customers who are running Kubernetes — maybe by themselves or through some other project — on AWS. Envoy is a good example. Kubernetes itself is a good example, because in the end, we want to make sure that people running Kubernetes on AWS, even if they are not using our services, are successful, and we can help them or work on the projects that are important to them. That's kind of how we think about the world, and it's worked pretty well for us. We've done a bunch of work on the Kubernetes side to make sure that we can integrate and solve a customer problem — everything from the work that we have done with Graviton, our Arm processor, to a virtual GPU device plugin that allows you to share NVIDIA GPU resources, to the Elastic Fabric Adapter, which is the network device for high performance computing that you can use with Kubernetes on AWS — along with things that directly impact Kubernetes customers, like the cdk8s project I talked about, the work that we do with the container networking interface, to the AWS Controllers for Kubernetes, which is an open source project that allows you to use other AWS services directly from Kubernetes clusters. Again, you'll notice I say Kubernetes, not EKS, which is our managed Kubernetes service, because we want you to be successful with Kubernetes on AWS whether you're using our managed service, running your own, or using some third party service. Similarly, we've worked with Prometheus; we now have a managed Prometheus service. And at re:Invent last year, we announced the general availability of this thing called Karpenter, which is a provisioning and auto-scaling engine for Kubernetes, and which is also an open source project. But here's the beauty of Karpenter: you don't have to be using EKS to use it. Anyone running Kubernetes on AWS can leverage it. We focus on the AWS provider, but we've built it in such a way that if you wanted to take Karpenter and implement it on prem or on another cloud provider, that'd be completely okay. That's how it's designed, and what we anticipated people may want to do. I talked a little bit about Bottlerocket. It's our Linux-based, open source operating system, and the thing that we have done with Bottlerocket is make sure that we focus on security and the needs of customers who want to run orchestrated containers — we're very focused on that problem. So for example, Bottlerocket only has the essential software needed to run containers. SELinux — I just noticed the slide spells it "se Linux," but I'm sure Linus Torvalds will be pretty happy seeing that — SELinux is enabled by default. We use things like dm-verity, and it has a read-only root file system and no shell; you can't access one, and you can't install one even if you wanted to. We've built it so you can create different build types, variants as we call them, and you can create a variant for a non-AWS environment as well. If you have your own homegrown container orchestrator, you can create a variant for that. It's designed to be used in many different contexts, and all of that is open sourced. And then we use The Update Framework to publish to a secure repository, and there's this kind of transactional way of updating the software. It's something that we didn't invent, but we have embraced wholeheartedly. 
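As a concrete illustration of the settings-driven model just described, here is a minimal, hypothetical sketch of launching a Bottlerocket node for an existing Kubernetes cluster with boto3. The AMI ID, networking IDs, and cluster details are placeholders you would look up for your own account, and the TOML keys shown are the kind Bottlerocket's Kubernetes variants use; treat the exact values as assumptions rather than a recipe.

```python
import boto3

ec2 = boto3.client("ec2")

# Bottlerocket is configured through TOML settings in user data rather than
# cloud-init shell scripts -- there is no shell on the host to run a script in.
user_data = """
[settings.kubernetes]
cluster-name = "demo-cluster"
api-server = "https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com"
cluster-certificate = "BASE64-ENCODED-CA-BUNDLE"
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # a Bottlerocket AMI for your region (placeholder)
    InstanceType="m6g.large",                    # a Graviton-based instance type
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    SubnetId="subnet-0123456789abcdef0",         # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder
)
```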
Bottlerocket is completely open source, and we have partners like Aqua, who develop security tools for containers. For them, something like Bottlerocket is a natural partnership, because people are running a container host operating system and can use Aqua's tooling to make sure they have a secure end-to-end environment. And we see many more examples like that. You may think serverless on AWS is all about AWS proprietary technology, because Lambda is a proprietary service. But if you peek under the covers, that's not necessarily true. Lambda runs on top of Firecracker, and as we've talked about, Firecracker is an open source project. So the foundation of Lambda in many ways is open source. What it also allows people to do — because Lambda runs at such extreme scale, one of the things Firecracker is really good for is running at scale. So if you want to build your own Firecracker-based at-scale service, you can have a lot of confidence that, as long as your workload fits the design parameters of Firecracker, the battle hardening and the robustness are being proven out day to day by services running at scale, like Lambda and Fargate. For those of you who don't know our serverless services: in the end, our goal with serverless is to make sure that you don't think about all the infrastructure that your applications run on, and that you focus on business logic as much as you can. That's how we think about it. And serverless has become its own, quote unquote, sort of environment. The number of partners and open source frameworks and tools that have spun up around serverless — by which I mostly mean Lambda, API Gateway, services like that — is pretty high. A number of open source projects like Zappa and the Serverless Framework — there are so many that have come up that make it easier for our customers to consume AWS services like Lambda and API Gateway. We've also done some of our own tooling and frameworks: the AWS Serverless Application Model, AWS Chalice if you're a Python developer, and we have open source runtimes for Lambda — Rust and other options. We have a number of tools that we've open sourced. So in general, you'll find that the tooling and runtimes that we do will tend to always be open sourced. We will often take some of the guts of the things that we use to build our systems, like Firecracker, and open source them, while the control plane and the AWS services themselves may end up staying proprietary, which is the case with Lambda. Increasingly, our customers build their applications and leverage the broader AWS Partner Network. The AWS Partner Network is a network of partnerships that we've built of trusted partners. When you go to the APN website and find a partner, you know that that partner meets a certain set of criteria that AWS has developed, and you can rely on those partners for your own business. So whether you're a little tiny business that wants some function fulfilled that you don't have the resources for, or a large enterprise that has all these applications you've been using on prem for a long time and wants to keep leveraging them in the cloud, you can go to the APN, find that partner, and bring their solution on as part of your cloud infrastructure. It could even be a systems integrator, for example, to help you solve a specific development problem that you may have a need for. Increasingly, one of the things we like to do is work with a partner community that is full of open source providers. 
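Picking up the Chalice mention above before turning to those partners: here is a minimal, illustrative sketch — with a hypothetical app name and route — of the kind of thing that open source serverless tooling makes easy, a Python handler that Chalice packages as a Lambda function behind API Gateway.

```python
from chalice import Chalice

app = Chalice(app_name="hello-serverless")   # hypothetical app name

@app.route("/hello/{name}")
def hello(name):
    # On `chalice deploy`, this handler becomes a Lambda function and the
    # route becomes an API Gateway endpoint -- no servers to manage.
    return {"message": f"hello, {name}"}
```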
There are so many — and we have a panel discussion with many of these partners as well — who make it easier for you to build applications on AWS, all open source and built on open source. But I'd like to call out a couple of them. The first one is Tidelift. Tidelift, for those of you who don't know, is a company that provides SaaS-based tools to curate, track, and manage open source catalogs. They have a whole network of maintainers and providers that they help. If you're an independent open source developer or a small team, you should probably get to know Tidelift. They provide you benefits and capabilities as a developer and maintainer that are pretty unique and really help, and I've seen a number of our open source community embrace Tidelift, quite honestly, even before they were part of the APN. But as part of the partner network, they get to participate in things like ISV Accelerate, and they're officially an Advanced Tier partner because they migrated their SaaS offering onto AWS. In the end, if you're part of the open source supply chain — you're a maintainer, you're a developer — I would recommend working with Tidelift, because their goal is making all of you who are developing open source solutions, especially on AWS, more successful. And that's why I enjoy this partnership with them, and I'm looking to do a lot more, because I think as a company, we want to make sure that open source developers don't feel like they are not supported — all you have to do is read various forums; it's often challenging to be a maintainer, especially of a small project. So helping with licensing and license management, security issue identification and remediation — helping these maintainers is a big part of what Tidelift does, and it was great to see them as part of the partner network. Another partner that I'd like to call out is Sysdig. I actually got introduced to them many years ago when they first launched, and one of the things that happened was that they were super interested in some of our serverless stuff, and we'd been trying to figure out how we could work together, because all of our customers are interested in the capabilities that Sysdig provides. Over the last few years, we found a number of areas where we can collaborate. So Sysdig — I know them primarily as a security company. People use Sysdig to secure their builds, do threat response and threat detection, continuously validate their posture, get a continuous analytics signal on how they're doing, and monitor performance. At the end of it, it's a SaaS platform. They have a very nice open source security stack. The one I'm most familiar with, and I think most of you are probably familiar with, is Falco — a CNCF project that has been super popular. It's at, what, 37, 40 million downloads by now? So that's pretty, pretty cool. And they have been a great partner, because we had to make sure that their solution works on Fargate, which is not a natural place for their software to run, but there was enough demand and interest from our customers that both companies leaned in to make sure they could be successful. So last year Sysdig got the security competency — we have a number of specific competencies for our partners — and their integration with Security Hub is great. Partners that lean in the way Sysdig has, onto making our customers successful and working with us, are the best partners that we have. 
And there are a number of open source companies out there, built on open source, where their entire portfolio is built on open source software, or they're active participants like we are, that we love working with on a day-to-day basis. So the thing I would like to leave you with as we wind down this presentation is that AWS is constantly looking for partnerships, because our partners enable our customers. They could be with companies like Redis, with Mongo, with Confluent, with Databricks. Your default reaction might be, "Hey, these are companies that maybe compete with AWS," but no, I think we are partners as well. From where I sit, lower in the stack, where people run on top of the services that I own — Linux, containers, EC2 — these partners are just as important customers as any AWS service or any third party external customer. And so it's not a zero-sum game. We look forward to working with all these companies and open source projects. From an AWS perspective, a big part of where my open source program office spends its time is making it easy for our developers to contribute to open source, and making it easy for AWS teams to decide when to open source software or participate in open source projects. Over the last few years, we've made significant changes in how we reduce that friction, and I think you can see it in the results that I showed you earlier in this talk. And the last one is one of the most important things — I say this and I'll keep saying it — that we do as AWS: we carry the pager. There are a lot of open source projects out there, and operationalizing them, running them at scale, is not easy. It's not easy for whatever reason; it may not have anything to do with the software itself. But our core competency is taking that and being really good at operating it, becoming experts at operating it, and then ideally taking that expertise and experience in operating that project, that software, and contributing back upstream — 'cause that makes it better for everybody. And I think you'll see us do a lot more of that going forward. We've been doing it for the last few years; in the container space, we do it every day. And I'm excited about the possibilities. With that, thank you very much, and I hope you enjoy the rest of the showcase. >> Okay, welcome back. We have Deepak Singh here, vice president of compute services — we just had his closing keynote. Deepak, a great keynote: great wisdom and insight from that session, very notable highlights and cutting edge trends and product information. Thanks for sharing. >> No, anytime. It's always good to be here. It's too bad that we're still doing this virtually, but always good to talk to you, John. >> We'll hopefully get through this pretty quickly — I want to jump right in, 'cause we don't have a lot of time, and I want to get some quick questions in. You've brought up good things: open source innovation, going next level. You've seen the rise of superclouds and super apps developing in open source. You're seeing big companies contributing — you mentioned Argo and Intuit. You're seeing that dynamic where companies are forming around this. This is a rising tide; this is actually real. It's not the old school of, okay, here's a project, and then someone manages support and commercialization of it. It's actually platform at cloud scale. This is next gen. >> Yeah, and actually I think it started a few years ago. 
We can talk about a company that you're very familiar with as part of this event, which is Armory. Many years ago, Netflix spun off this project called Spinnaker. Spinnaker is a CI/CD system that was developed at Netflix for their own purposes, but they chose to open source it. And since then, it's become very popular with customers who want to use it, even on prem, and you have a company that spun up on it. I think what's making this world very unique is that you have very large companies like Facebook or Netflix — Netflix with Spinnaker — that will build things for themselves and open source them. And you can have a lot of discussion about why they chose to do so, but increasingly that's becoming the default: when Amazon or Netflix or Facebook — or Meta, I guess we call them these days — build something for themselves, for their own needs, the first question we ask ourselves is, should it be open sourced? And increasingly we are all saying yes. And here's what happens because of that. It gives an opportunity, depending on how you open source it, for innovation through commercial deployments, so that you get SaaS companies that are going to take that product and make it relevant and useful to a very broad number of customers. You build partnerships with cloud providers like AWS, because our customers love this open source project and they need help, and they may choose an AWS managed service, or they may end up working with this partner on a day-to-day basis. And we want to work with that partner because they're making our customers successful, which is one reason all of us are here. So you're having this set of innovation from large companies — whether they are consumer companies like Meta, infrastructure companies like us, or just innovation that's happening in an open source project — which ends up in companies being spun up, and that fosters that innovation and that flywheel that's happening right now. And I think, as you said, this is unique. I mean, you never saw this happen before from so many different directions. >> It really is a nice progression on the business model side as well. You mentioned Argo, which is a great organic thing that Intuit developed. We just interviewed Codefresh — they just presented here in the showcase as well. You're seeing the formation around these projects develop now in the community at a different scale. I mean, look at Codefresh: Intuit did Argo, and they're not just supporting it, they're building a platform. So you're seeing the dynamics of tools and now emerging platforms. You mentioned Lambda, okay, which is proprietary for AWS and, in your talk, powered by open source. So again, open source combined with cloud scale allows for new potential super applications or superclouds that are developing. This is a new phenomenon. This isn't just lift and shift and host on the cloud. This is actually a production developer workflow. >> Yeah, and you are seeing consumers, large companies, enterprises, startups — it used to be that startups would be comfortable adopting some of these solutions, but now you see companies of all sizes doing so. And as I said, it's not just software; it's software and services, with services increasingly becoming the way these are delivered to customers. I actually think the innovation is just getting going, which is why we have this. 
We have so many partners here who are all inventing and innovating on top of open source, whether it's developed by them or by a broader community. >> Yeah. I liked the containerd example you gave — you've driven that and seen a lot of changes. And again, with cloud scale and open source, you're seeing the dynamics change, whether you're enabling that, and then you see really big change. So let's take Snowflake, a big customer of AWS. They started out as a startup too, but they weren't just a data warehouse. They were bringing data warehouse-like functionality, doing everything differently, and making it consumable for the cloud. And hence they're huge. So that's a disruption of an incumbent leader or sector. Then you've got new capabilities emerging. What are your thoughts, Deepak? Can you share your vision on the disruption to existing leaders — the old guard, if you will, as you guys call them — and then the new capabilities as these new platforms emerge with net new functionality? How do you see that emerging? >> Yeah. So I'll speak from the side of the world I've lived in over the last few years, which is containers and serverless, right? If you go to any enterprise and ask them, do you want to modernize the infrastructure, do you want to take advantage of automated software delivery, continuous delivery, infrastructure as code, modern observability — all of them will say yes. But they also are still a large enterprise, which has these enterprise-level requirements. I'm using the word enterprise a lot, and usually it's a trigger word for me because so many customers have similar requirements, but I'm using it here to mean a large company with a lot of existing software and existing practices. I think the innovation that's coming — and I see a lot of companies doing this — is saying, "Hey, we understand the problems you want to solve. We understand the world you live in, which could be regulated. You want to use all these new modalities. How do we allow you to use all of them — keep the advantages of switching to Lambda, or switching to a service running on Fargate — but give you the same capabilities?" And I'll bring up Sysdig here, because we work so closely with them on Falco, as an example; I just talked about them in my keynote. They could have just said, "Oh no, we'll just support EC2 and be done with it." They said, "No, we're going to make sure that serverless containers in particular are something we're going to be really good at, because our customers want to use them, but it requires us to think differently." And then they ended up developing new things like Falco that are born in this new world but understand the requirements of the old world, if you get what I'm saying. And I think that's a real example. >> Yeah. Well, first of all, they're smart, so that was pretty obvious for most people that know them — that you can connect the dots on serverless, which is a great point — but not everyone can see that. Again, this is what's new, and Sysdig was founded just in his backyard, as I found out in my interview — a great, great founder — and they would do a new thing. So it was very easy to connect the dots there. Again, that's the trend. 
Well, I've got to ask: if they're doing that for serverless — you mentioned Graviton in your speech, and what came out of re:Invent this past year was all the innovation going on at the compute level with Graviton, at many levels in the silicon. How should companies and open source developers think about how to innovate with Graviton? >> Yeah, I mean, you've seen examples from people blogging and tweeting about how fast their applications run on Graviton and the price performance benefits that they get, whether it's in observability or other places. Graviton is something that AWS is going to embrace across the compute portfolio. Obviously you can go find EC2 instances — the Graviton2 instances — and run on them, and that'll be great. But we know that many of our customers are building new applications on containers and serverless, increasingly with things like Fargate, where they don't want to operate the underlying infrastructure. A big part of what we're doing is making sure that Graviton is available to you on every compute modality. You can run it on EC2, of course. You've been able to use ECS and EKS and run on Graviton almost since launch. But we wanted to take it a step further: Elastic Beanstalk customers — Elastic Beanstalk has been around for a decade — can now use it with Graviton. People running ECS on Fargate can now use Graviton. Lambda customers can pick Graviton as well. So we're taking the price performance benefits that you get from Graviton and basically putting them across the entire compute portfolio. What it means is that every high-level service that gets built on that compute infrastructure gets the price performance benefits, and the lower power consumption, of Arm processors. So I'm personally excited like crazy. And you know, this is Graviton2 — Graviton3 is coming. >> That's incredible. It's an opportunity like serverless was — it's pretty obvious, and I think hopefully everyone will jump on that. Final question, as the time's ticking here; I want to get your thoughts quickly. If you look at what's happened with containers over the past, say, eight years — since the original founding of the first Docker instance, if you will — to how that's evolved, and then the introduction of Kubernetes and the cloud native wave we're seeing now: how would you describe the relationship between the success of Docker and what we're seeing now with Kubernetes and the cloud native construct? What's different, and why is this combination so successful? >> Yeah. I often say that containers would have — let me rephrase that. What I say is that people would have adopted the modern way of running applications whether containers came around or not. But the fact that containers came around made that migration and that journey so much more efficient for people. So right from — I still remember the first talk Solomon gave when he announced Docker, and customers starting to use it, starting to get interested — all the way to the more advanced orchestration that we have now for containers across the board. And there are so many examples of the way you can do that, Kubernetes being the most well-known one. Here's the thing that I think has changed. 
I think what Kubernetes, or Docker, or the whole modern way of building applications has done is it's taken people who would have taken years adopting these practices and brought it right to their fingertips, built it into the APIs — and in the case of Kubernetes, built an entire software world around it. The number of decisions people have to take has, in many ways, gotten smaller. There are so many options that the number of decisions can become higher, but the speed at which they can get to a result — a production version of an application that works for them — is way faster. I have not seen anything like what I've seen in the last 6, 7, 8 years: how quickly a company that you would think would never adopt modern technology has been able to go from "this is interesting" to getting into production really quickly. And I think it's because the tooling makes it so — the adoption you see, right from the fact that you could do docker run and docker build so easily back in the day, all the way to all the advanced orchestration you can do with container orchestrators today, sort of takes all of that difficulty away. There's never been a better time to be a developer, independent of whatever you're trying to build, and I think containers are a big, central part of why that's happened. >> I like the recipe: the combination of cloud scale, the timing of Kubernetes, and the containerization concepts just exploded as a beautiful thing. And it creates more opportunities — and challenges, which are opportunities — that are net new, but it solves the automation piece we're seeing. Again, it only makes things go faster. >> Yes. >> And that's the key trend. Deepak, thank you so much for coming on. We're seeing tons of open cloud innovations, thanks to the success of your team at AWS and being great participants in the community. We're seeing innovations from startups; you guys are helping enable that. Of course, they want to live on their own and be successful and build their superclouds and super apps. So thank you for spending the time with us. Appreciate it. >> Yeah, anytime, and thank you. And you know, this is a great event, so I look forward to people running software and building applications using AWS services and all these wonderful partners that we have. >> Awesome, great stuff. Great startups, great next generation leaders emerging. When startups get successful, they become the modern software application platforms out there, powering business and changing the world. This is theCUBE. You're watching the AWS Startup Showcase, season two, episode one: Open Cloud Innovations. I'm John Furrier, your host. See you next time.
Java's Relevance for Modern Enterprises: theCUBE Power Panel
(upbeat music) >> Facilitator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Java is the world's most popular programming language, and it remains the leading application development platform. But what's the status of Java? What are customers doing? And very importantly, what is Oracle's and the community's strategy with respect to Java? Welcome everybody to this Java power panel on theCUBE. I'm your host, Dave Vellante. Manish Gupta is here; he's the Vice President of Global Marketing for Java at Oracle. Donald Smith is also on the panel; he's the Senior Director of Product Management at Oracle. And we're joined by David Floyer, who is the CTO of Wikibon Research and has done a number of research activities on this very topic. Gentlemen, welcome to theCUBE, great to see you. >> Thank you. >> Thank you. >> Manish, I want to start with you. Can you help us understand — really dig into — Oracle's strategy with respect to Java: the technology, the licensing, the support. How has that evolved over time? Take us through that. >> Dave, with 51 billion JVMs deployed worldwide, Java has truly cemented its position as the language of innovation in the technology world. There's no question about that. In fact, I like to say it's really the language of empowerment, given the impact it has had on numerous applications, ranging from the Mars Rover to genomics and everything in between. Since Oracle acquired Sun over 10 years ago, it has really kept front of mind two aspects of what we want to do with the technology and the platform. The first one was to ensure there was broad accessibility to the technology and the platform for anybody that wanted to benefit from it. And the second one was to ensure that the ecosystem remained vibrant and thriving throughout. We've managed to do both. And underpinning these two objectives were really three pillars of our strategy. The first one was around trust: ensuring that the openness and transparency of the technology, as was the case before, continued to be the case going forward. The second element within the trust pillar was to ensure that as enterprises invested in the technology, that investment was protected — it was not a case of you invest and you lose over a period of time. Backward compatibility, interoperability, and certifications were all foundational to the platform itself, to the features, to the innovation moving forward. And more recently, as we have rethought the support, the licensing, and the overall structure of the pricing, we have ensured that ultimately the trust comes along in those dimensions as well. So the launch of the Java subscription came along with a pay-as-you-go model; it's a transparent pricing structure and discount structure published on the website, so you can go and see what it would cost for desktop, server, or cloud deployment. So those were the things that made the first pillar happen. The second one was around innovation. Over the last 25 years, Java has stood the test of time. It has delivered the needs of today while preparing for the future, and that remains the case. It is not something that has focused on the fad of the day and the hot thing of the moment; more importantly, it is prepared to deal with the mission-critical, massive-scale deployments that can run for years — for decades, in some cases. 
And keeping that in mind, Oracle has continued to put more and more technology into the open source world. With every release that comes out, you can see 80-plus percent of the contributions come from Oracle. So that's the second pillar, around innovation. And the third piece of the strategy has been around predictability: ensuring that Java, the technology and platform, performs as advertised. That goes into the feature releases, it goes into the release process, and it goes into the fact that we work broadly within the OpenJDK environment for developing and executing the roadmap. From a CIO standpoint, it's important to know that the technology used to develop your applications has talent around it — if you're going to develop something in Java, you'll find the right Java engineers to do the job. That is not a question, right? And so that's part of predictability. And finally, with the change to the six-month release cadence that came about three years ago with the release of Java 10, we've really made sure it's not the case that a bunch of things come about and you don't know when they're going to be released; you know, like clockwork, you'll have a new Java release every six months. And that's been the case every March and September since Java 10 — you've had a new release of Java with certain features that come out, and we just launched Java 15. So trust, innovation, and predictability have really been the three pillars on which we've executed the strategy for Java. >> Excellent, thank you for that intro, and we're going to get into it now. I'm glad you mentioned the Sun acquisition. I said at the time that Java was the linchpin of that acquisition. Many people, of course, looked at the integration piece with the hardware, but it was really Java and the capabilities that it brings — and of course, a lot of Oracle software is written in Java, not the least of which is Fusion. But now let's get into the components of this, and I want to talk a little bit about the methodology, and I'm going to call on you, David Floyer. Essentially my understanding is that Wikibon went through — and David, you led this — a technical deep dive, which you always do, did a number of in-depth interviews with Java customers, and then of course you also did a web survey, and then you built from that data an economic model, so you could try to understand the dimensions of the financials, if you will. So what were your key findings there? >> So the key findings were that Java was in a good state — that people were happy with Java. The second key finding was that the business case itself for using the Oracle services, the subscription services, was good. That didn't mean it was the right way to do it for every company, but there was a very good return on that. And the third area was that there was a degree of confidence that the new way of doing things — the six-month cycle, as opposed to the three-year cycle — was overall a benefit to the rate of change, the ability for them to introduce new features quickly. >> Okay, well, I read that research, and to me my takeaways were: I saw the continued relevance of Java, which kind of goes without saying, but a lot of times it gets lost in the headlines. That subscription piece is key — we're going to get into some of the economics as to how that affects customers and saves you money. And the other piece was the roadmap becoming more transparent. 
And I want to dig into that a little bit, but before we do, let's get into that innovation component. Manish mentioned that several times, but Don, I want to go to you. We have a slide on the various components of the innovation — if you would bring this up. Don, I wonder if you could talk to this and give us some examples, if you would. >> Yeah, sure. So we've been the number one development platform for the last 25 years. We want to be the number one development platform for the next 25 years, and in order to do that, we have to be constantly innovating — not only on the business side, in terms of the subscription and the support offerings and commercial features like Manish was talking about, but also the platform in general. And so the way we like to talk about innovation is to break it down by the pillars that you can see on the slide. The first pillar is continuous improvements to the language. So this is watching developers trying to write the same piece of code over and over again, and us asking, can we make you more efficient? Can we give you more language features that reduce the amount of boilerplate you have to write? The second pillar is a project that we just announced a few months ago called Leyden. The idea with Leyden is addressing the long-term pain points of Java: slow startup time and time to peak performance. If you go back 10 years, everybody knew Java as an enterprise platform — Java EE application servers. They all had the notion of being very long-lived, and so Java at that time would be optimized towards long-lived applications, where if startup took a little while, it didn't matter, as long as when it got there it was super fast. And so we're trying to get to that peak performance faster in the world of microservices. In a similar vein, with Project Loom we're looking at making concurrency simple again — looking at how developers are doing more reactive-style programming and realizing that the threading model needs to be rethought from the ground up. That project is looking really, really good. Then we have Project Panama. Project Panama is all about making it easier to connect Java with native libraries. Valhalla — there are a couple of benefits, but it's all about improving memory density and being able to access, iterate, and operate over primitive data types at super fast speeds by better optimizing how that information is stored in memory. And then the final pillar that we have been working on from an innovation perspective is ZGC. We introduced new garbage collector technology a few years ago — ZGC — with an eye towards making garbage collection in Java pauseless. If you go back in time and look at the history of Java, memory management is awesome, but there's always that cost and risk of a garbage collection cycle taking a bit of time away from a critical application, and ZGC is all about getting rid of that. So lots of innovation, lots of different pillars going on right now. >> Awesome, I'm impressed. There's something after Valhalla? I thought that was Nirvana. (laughing) But now, these are all open source projects, right? And you guys obviously provide committers; there are other people in the open source world who provide that as well — is that correct, Don? >> Yeah, that's correct. We have about 80% of the contributions in OpenJDK. 
We are the stewards of OpenJDK and lead the project. Most of the pillars I talked about here are, you know, Oracle folks working on that. >> Awesome. Okay, let's get into some of the data. David, I want to come back to you and talk about some of the survey results — guys, if you bring up that next slide. David, why do people upgrade? What are the drivers? This really talks to the large companies; what's different for small or mid-size companies? What are the takeaways here? >> David: Well, this is interesting, and as you might expect, large enterprises are very concerned about application stability, whereas mid-size enterprises are much more concerned about performance, making sure that the performance is good. They are both concerned about reliable performance and security, but it's interesting that from a regulation point of view, mid-size companies really want to make sure that they are obeying the regulations, that they are meeting those, whereas larger organizations usually have their own security and regulation functions looking very hard at these things — so they're looking less to the platform to provide those than to their own people. >> Yeah, I think you're right. I think the mid-size organizations don't have as many people running around taking care of security, and it's harder for them to keep up with the edicts of the organization, so they want to stay more current. Don, I wonder if you can add anything to this data from an innovation standpoint. >> Yeah, well, from a product management standpoint, what we see here is that when you look at just going from the Fortune 500 to the Global 2000, you see things that are important to one, or less so, than the other. You can extrapolate that all the way down to a small company or a startup. And that's why providing the most flexibility in terms of an offering — allowing people to decide what, when, where, and how they're going to upgrade their software, so they can do it when they want and on their own terms — you can see that that becomes really important. And also making sure that we're providing innovation in a broad way, so that it'll appeal both to the enterprise and, again, extrapolating that forward, down to even very small startups. >> You know, David, the other thing that struck me in the data — if we bring up that other piece — is the upgrade strategy, and there was a stark difference between large enterprises and mid-size organizations. Talk to this data, if you would. >> Yes, this is again a pretty stark difference between them. When you're looking at large enterprises, they really want stability and they don't want to upgrade so often, whereas mid-size enterprises are much more willing both to upgrade on a regular cadence and to always have the latest software. They're driving smaller applications, but they're much more agile about their approach to it. Again, it emphasizes what Don was saying about the smaller enterprises wanting a different strategy and a different way of doing things than large enterprises. >> So Manish, this says to me that you got it right from a strategy standpoint. I mean, any color you can add here? >> Yeah, it's very intuitive that whether you're a large organization, a mid-size enterprise, or a small business, you face competitive pressures and your dynamics are unique. 
What you're able to do with the resources, and what you desire to do at the pace that is appropriate for your environment, are really unique to you, and to try to force one model across any one size, or across any set of dynamics, is just not appropriate. So we've always felt that giving the enterprises and the organizations the ability to move at the pace of their business is the right approach. And so when we designed the Oracle Java SE subscription, we truly had that front and center in our thought process. And that structure seems to be working well. >> David, what I like about the way you do research is that you actually build an economic model. A lot of these business value projects — I know this well, having been in the business a long time — they'll go out and ask the customer what they got, and the customer says, "Well, I got a 111% ROI," and boom, that's what it is. You actually construct an economic model, you bring in rules of thumb, it allows you to do what-ifs, and you can test that model and calibrate it against the real world. So I commend you on that — you've done a lot of hard work there. But bottom-line it for us; let's bring up the economics. I mean, that's what people ultimately want to know: does this save me money? What's the bottom line here? >> Yes, that's a very important question. And the way we go about it is to ask the questions so that we can extract from them how much effort it took, for example, to upgrade things — how much effort it took for important applications and for not-so-important applications. So we have a very detailed model driven by the survey itself, and it's in the back of the research; I'm a great believer that you should be able to follow exactly what the survey said and how it was applied to the model. What we focused on was: what was the return of using the Java subscription service versus taking an upgrade every six months? Those were the two ways that we looked at it. And for large enterprises, the four-year cost for the enterprise was $11 million, but with the subscription service the payback is within a year — well covered by the lower costs of managing the systems and the environment. And we found a very similar result for mid-size enterprises. There, it was $3 million, and again, they got that back within the year in terms of payback. So that's one alternative. There is another alternative that may be worth the extra money if you really want to be up to date, or if you want to drive a much more aggressive strategy for your organization. >> So these are huge numbers. I mean, we're talking about 30% savings on average for large and mid-size enterprises in percentage terms, but the absolute dollars are actually enormous. You know, large companies here, we're talking about $20 billion enterprises with 500 or more Java applications, and mid-size, you're talking about a couple of two, three billion dollar companies. Manish, what are you seeing in the customer base in terms of the economics? >> Yeah, you know, anytime an organization is looking at an offering and a solution, they want to make sure it's giving them the value. And we all know the priorities that businesses have — they want to focus on those. Managing the Java estate is important, but is it the thing where they want to invest the dollars? And if they are investing the dollars, are they getting the return?
We find that if you can give the enterprises the ability to see the return, the cost is right for them. And if you can pair that with reduced risk, then you've got the right formula. And with the subscription, they're able not only to see the cost savings that the model indicates clearly, but they're also able to reduce the risk in terms of security protection and other things. So it's a really, really good combination for the enterprises. >> Well, thank you. I wonder, Manish, if you could bring us home here and just kind of summarize your thoughts on everything you've heard today. What are the key takeaways? >> You know, Java has been around for 25 years, and we certainly believe it's positioned well for what's required today, and perhaps more importantly, for what is needed for the next decade and for the next 25 years. Having now served thousands of customers with the Java subscription, it's clear that it is meeting the needs of Fortune 10 organizations all the way down to a five-person development house, for example. What we're hearing across the board is that Java has been the go-to platform, and it continues to be the go-to platform, for mission-critical development and deployment. However, the complexity as the Java estate becomes large — when you've got tens to hundreds, in some cases over a thousand, applications running across the enterprise — that complexity can be daunting. And the Java subscription is really serving the needs in three ways. One, it's getting them best-in-class support from Oracle, which is the steward of Java, the company that is generating over 80% of the innovation with every single release. Second, they're getting the business flexibility, so they can move at the pace that works for them. And the third piece, as the business model has indicated, is that they're getting it at a lower cost while reducing risk. So the combination of these things is the reason why we're seeing very high renewal rates, and why we're seeing thousands of organizations take it up. And I want to wrap it up by saying one final thing: you can count on Oracle to be transparent, to be the right steward of both technology innovation as well as ensuring support for the vast ecosystem — whether it's libraries, frameworks, user groups, educational services, and so on. So Java is here, has been here for the enterprise, large and small, and it's ready for the next generation as well. >> Great, thank you for that. Well, one more question: what's the call to action? If I'm a mid-size company or a large company and I've made investments in Java, what should I do next? >> I would say, take a look at the Oracle Java subscription. It will reduce your costs and it'll give you a lower risk profile for your organization. >> Great, nice and crisp, I like it. If you guys don't object, I'm going to give you my summary — I've been taking notes this whole time. So, we've explored two options: customers can do it themselves or go with the subscription on a regular cadence. It's very clear to me that Java remains relevant, as we said up top. It's the world's most popular programming language — we know about all that. The ecosystem is really moving fast, of course, with the stewardship of Oracle: cloud, microservices, the development of modern applications. I think the directional changes that you guys — Manish, Don, and Oracle — have made were really the right call. 
The research that David you did, shows that it's serving customers better. It lowers costs, it's cutting down risk particularly for the mid-sized companies that maybe are, or don't have the security infrastructure and the talent to go chase those problems. And I love the roadmap piece. The more transparent roadmap really is going to give the industry and the community much more confidence to invest and move forward. So guys, thanks very much for coming on this CUBE Java power panel. It was great to have you. >> Thank you. >> Thank you. >> Thank you. >> All right, I thank you for watching everybody. This is Dave Vellante, for theCUBE, and we'll see you next time. (soft music)
SUMMARY :
leaders all around the world. And it remains the leading The technology, the and that goes into the feature releases, of the financials if you will. And the third area was that And the other piece and realizing that the threading in the open source world JDK and lead the project. What are the drivers? making sure that the performance is good. and it's harder for them to keep up You can extrapolate that all the way down in the data, if we bring or have always have the latest software. me that you got it right the ability to move at and calibrate it against the real world. and in the back of the research, in terms of the economics? but is it the thing where they and for the next 25 years. What's the call to action? at the Oracle subscription. and the talent to go chase those problems. and we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Floyd | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
3 million | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
four-year | QUANTITY | 0.99+ |
Java 15 | TITLE | 0.99+ |
$11 million | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
six-month | QUANTITY | 0.99+ |
5% | QUANTITY | 0.99+ |
Donald Smith | PERSON | 0.99+ |
three-year | QUANTITY | 0.99+ |
tens | QUANTITY | 0.99+ |
Java 10 | TITLE | 0.99+ |
Manish Gupta | PERSON | 0.99+ |
111% | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
Wikibon Research | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Manish | ORGANIZATION | 0.99+ |
second element | QUANTITY | 0.99+ |
third piece | QUANTITY | 0.99+ |
500 | QUANTITY | 0.99+ |
25 years | QUANTITY | 0.99+ |
second pillar | QUANTITY | 0.99+ |
two ways | QUANTITY | 0.99+ |
first pillar | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Don | PERSON | 0.99+ |
Manish | PERSON | 0.99+ |
second pillar | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
one alternative | QUANTITY | 0.99+ |
third area | QUANTITY | 0.99+ |
two options | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
two aspects | QUANTITY | 0.98+ |
over 80% | QUANTITY | 0.98+ |
$3 billion | QUANTITY | 0.98+ |
September | DATE | 0.98+ |
Java EE | TITLE | 0.98+ |
Dave | PERSON | 0.98+ |
10 years ago | DATE | 0.98+ |
Boston | LOCATION | 0.98+ |
second one | QUANTITY | 0.98+ |
six months | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Java Power Panel
(upbeat music) >> Facilitator: From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world. This is a CUBE conversation. >> Java is the world's most popular programming language, and it remains the leading application development platform. But what's the status of Java? What are customers doing? And very importantly, what is Oracle's and the community's strategy with respect to Java? Welcome everybody to this Java power panel on theCUBE. I'm your host, Dave Vellante. Manish Gupta is here, he's the Vice President of Global Marketing for Java at Oracle; Donald Smith is also on the panel, he's the Senior Director of Product Management at Oracle; and we're joined by David Floyer, who is the CTO of Wikibon Research and has done a number of research activities on this very topic. Gentlemen, welcome to theCUBE, great to see you. >> Thank you. >> Thank you. >> Manish, I want to start with you. Can you help us understand, really dig into, Oracle's strategy with respect to Java: the technology, the licensing, the support. How has that evolved over time? Take us through that. >> Dave, with 51 billion JVMs deployed worldwide, Java has truly cemented its position as the language of innovation in the technology world. There's no question about that. In fact, I like to say it's really the language of empowerment, given the impact it has had on numerous applications ranging from the Mars Rover to genomics and everything in between. When Oracle acquired Sun over 10 years ago, it really kept front of mind two aspects of what we want to do with the technology and the platform. The first one was to ensure there was broad accessibility to the technology and the platform for anybody that wanted to benefit from it. And the second one was to ensure that the ecosystem remained vibrant and thriving throughout. It has managed to do both. And underpinning these two objectives were really three pillars of our strategy. The first one was around trust: ensuring that the openness and transparency of the technology, as was the case before, continued to be the case going forward. The second element within the trust pillar was to ensure that as enterprises invested in the technology, that investment was protected; it was not a case of you invest and then lose it over a period of time. Backward compatibility, interoperability and certifications were all foundational to the platform itself, to the features, to the innovation moving forward. And more recently, as we have rethought the support, the licensing and the overall structure of the pricing, we have ensured that ultimately the trust comes along in those dimensions as well. So the launch of the Java subscription came along with a pay-as-you-go model, a transparent pricing structure and a discount structure published on the website, so you can go and see what it would cost for desktop, server or cloud deployment. Those were the things that made the first pillar happen. The second one was around innovation. Over the last 25 years, Java has stood the test of time. It has delivered the needs of today while preparing for the future, and that remains the case. It is not something that has focused on the fad of the day and the hot thing of the day; more important is that it is prepared to deal with the mission-critical, massive-scale deployments that can run for years, for decades in some cases.
And keeping that in mind, Oracle has continued to put more and more technology into the open source world. With every release that comes out, you can see 80-plus percent of the contributions come from Oracle. So that's the second pillar, around innovation. And the third piece of the strategy has been around predictability: ensuring that Java, the technology and platform, performs as advertised. That goes into the feature releases, it goes into the release process, it goes into the fact that you work broadly within the OpenJDK environment for developing and executing the roadmap. From a CIO standpoint, it's important to know that the technology used to develop your applications has talent around it. If you're going to develop something in Java, you'll find the right Java engineers to do the job; that is not a question, right? And so that's part of predictability. And finally, with the change to the six-month release cadence that came about three years ago with the release of Java 10, we've really made sure that it's not a case of a bunch of things coming about and you not knowing when they're going to be released. Like clockwork, you'll have a new Java release every six months. That's been the case every March and September since Java 10; you've had a new release of Java with certain features that come up, and we just launched Java 15. So trust, innovation, predictability have really been the three pillars on which we've executed the strategy for Java. >> Excellent, thank you for that intro, and we're going to get into it now. I'm glad you mentioned the Sun acquisition. I said at the time that Java was the linchpin of that acquisition. Many people, of course, looked at the integration piece with the hardware, but it was really Java and the capabilities that it brings. And of course, a lot of Oracle software is written in Java, not the least of which is Fusion. But now let's get into the components of this. And I want to talk a little bit about the methodology, and I'm going to call on you, David Floyer. Essentially my understanding is that Wikibon went through, and David, you led this, you did a technical deep dive, which you always do, did a number of in-depth interviews with Java customers, and then of course you also did a web survey, and then you built from that data an economic model, so you can try to understand the dimensions of the financials, if you will. So what were your key findings there? >> So the key findings were that Java was in a good state, that people were happy with Java. The second key finding is that the business case itself for using the Oracle services, the subscription services, was good. That didn't mean to say it was the right way to do it for every company, but there was a very good return on that. And the third area was that there was a degree of confidence that the new way of doing things, the six-month cycle as opposed to the three-year cycle, was overall a benefit to the rate of change, the ability for them to introduce new features quickly.
And I do want to dig into that a little bit, but before we do, let's get into that innovation component; Manish mentioned that several times, but Don, I want to go to you. We have a slide on the various components of the innovation. If you would bring this up, Don, I wonder if you could talk to this and give us some examples if you would. >> Yeah, sure. So we were the number one development platform for the last 25 years. We want to be the number one development platform for the next 25 years. And in order to do that, we have to be constantly innovating, not only on the business side in terms of the subscription and the support offerings and commercial features like Manish was talking about, but also the platform in general. And the way we like to talk about innovation is we break it down by these pillars that you can see on the slide. The first pillar is continuous improvements to the language. This is watching developers trying to write the same piece of code over and over again, and us asking: can we make you more efficient? Can we give you more language features that reduce the amount of boilerplate that you have to write? The second pillar is a project that we just announced a few months ago called Leyden. The idea with Leyden is addressing the long-term pain points of Java: slow startup time and time to peak performance. If you go back 10 years, everybody knows about Java as an enterprise platform, Java EE application servers. They all had the notion of being very long-lived, and so Java at that time was optimized towards long-lived applications, where if startup took a little while, it didn't matter, as long as when it got there, it was super fast. So we're trying to get to that peak performance faster in the world of microservices. In a similar vein, with Project Loom we're looking at making concurrency simple again, looking at how developers are doing more reactive-style programming and realizing that the threading model needs to be rethought from the ground up. That project is looking really, really good. Then we have Project Panama, which is all about making it easier to connect Java with native libraries. Valhalla is all about improving, well, there's a couple of benefits, but it's all about improving memory density and being able to access, iterate and operate over primitive data types at super fast speeds by better optimizing how that information is stored in memory. And then the final pillar that we have been working on from an innovation perspective is ZGC. We introduced a new garbage collector technology a few years ago, after G1 GC, the generational garbage collector, with an eye towards making garbage collection in Java pauseless. Again, if you go back in time and look at the history of Java, memory management is awesome, but there's always that cost and risk of a garbage collection cycle taking a bit of time away from a critical application. ZGC is all about getting rid of that. So lots of innovation, lots of different pillars going on right now. >> Awesome, I'm impressed. There's something after Valhalla. I thought that was Nirvana. (laughing) But now, these are all open source projects, right? And you guys obviously provide committers, and there are other people in the open source world who provide them too, is that correct, Don? >> Yeah, that's correct. We have about 80% of the contributions in OpenJDK.
We are the stewards of OpenJDK and lead the project. Most of the pillars I talked about here are, you know, Oracle folks working on them. >> Awesome. Okay, let's get into some of the data. David, I want to come back to you and talk about some of the survey results; guys, if you bring up that next slide. David, why do people upgrade? What are the drivers? It really talks to the large companies, and what's different for the small or mid-size companies? What are the takeaways here? >> David: Well, so this is interesting, and as you might expect, large enterprises are very concerned about application stability, whereas mid-size enterprises are much more concerned about performance, making sure that the performance is good. They are both concerned about reliable performance and security, but it's interesting that from a regulation point of view, mid-size companies really want to make sure that they are obeying the regulations, that they are meeting those, whereas larger organizations usually have their own security and regulation functions looking very hard at these things. So they're looking less to the platform to provide those than to their own people. >> Yeah, I think you're right. I think the mid-size organizations don't have as many people running around taking care of security, and it's harder for them to keep up with the edicts of the organization, so they want to stay more current. Don, I wonder if you can add anything to this data from an innovation standpoint. >> Yeah, well, from a product management standpoint, what we see here is that when you look at just going from the Fortune 500 to the Global 2000, you see things that are important to one and less so to the other. You can extrapolate that all the way down to a small company or a startup. And that's why providing the most flexibility in terms of an offering, to allow people to decide what, when, where and how they're going to upgrade their software, so they can do it when they want and on their own terms, becomes really important. And also making sure that we're providing innovation in a broad way so that it'll appeal both to the enterprise and, again extrapolating that forward, down to even very small startups.
What you're able to do with the resources, what you desire to do at the pace that is appropriate for your environment, those are really unique to you, and to try to force one model across any one size or any set of dynamics is just not appropriate. So we've always felt that giving enterprises and organizations the ability to move at the pace of their business is the right approach. And when we designed the Oracle Java SE subscription, we truly had that front and center in our thought process. And that structure seems to be working well. >> David, what I like about the way you do research is you actually build an economic model. A lot of these business value projects, and I know this well, having been in the business a long time, they'll go out and ask the customer what they got, and the customer says, "Well, I got a 111% ROI," and boom, that's what it is. You actually construct an economic model, you bring in rules of thumb, it allows you to do what-ifs, you can test that model and calibrate it against the real world. So I commend you on that, you've done a lot of hard work there. But bottom-line it for us, I mean, let's bring up the economics. That's what people ultimately want to know: does this save me money? What's the bottom line here? >> Yes, that's a very important question. And the way we go about it is to ask the questions so that we can extract from them how much effort it took, for example, to upgrade things, how much effort it took for important applications and for not-so-important applications. So we have a very detailed model driven by the survey itself, and it's in the back of the research. I'm a great believer that you should be able to follow exactly what the research said, what the survey said, and how it was applied to the model. And what we focused on was: what was the return of using the Java subscription service versus taking an upgrade every six months? Those were the two ways that we looked at it. For large enterprises, the four-year cost for the enterprise was $11 million, but for taking the additional subscription service, and this was well covered, the payback is within a year, well covered by the lower costs of managing the systems and environment. And we found a very similar result for those mid-size enterprises. There, it was 3 million, and again, they got that back within the year in terms of payback. So that's one alternative. There is another alternative that may be worth the extra money if you really want to be up to date, or if you want to drive a much more aggressive strategy for your organization. >> So these are huge numbers. I mean, he's talking about 30% savings on average for large and mid-sized enterprises in percentage terms, but the absolute dollars are actually enormous. So, you know, large companies here, we're talking about $20 billion enterprises with 500 or more Java applications. And mid-size, you're talking about a couple, two, $3 billion companies. Manish, what are you seeing in the customer base in terms of the economics? >> Yeah, you know, anytime an organization is looking at an offering and a solution, they want to make sure it's giving them the value. And we all know the priorities that businesses have; they want to focus on those. Managing the Java estate is important, but is it the thing where they want to invest the dollars? And if they are investing the dollars, are they getting the return?
We find that if you can give the enterprises an ability where they can see the return, the cost is right for them. And if you can mirror that and also map it to reduced risk, then you've got the right formula. And with the subscription, they're able to not only see the cost savings that the model indicates clearly, but they're also able to reduce the risk in terms of security protection and other things. So it's a really, really good combination for the enterprises. >> Well, thank you. I wonder, Manish, if you could bring us home here and just kind of summarize your thoughts on everything you've heard today. What are the key takeaways? >> You know, Java has been around for 25 years, and we certainly believe it's really positioned well for what's required today, and perhaps more importantly, what is needed for the next decade and for the next 25 years. Having now served thousands of customers with the Java subscription, it's clear that it is meeting the needs of Fortune 10 organizations all the way down to a five-person development house, for example. What we're hearing from across the board is that Java has been the go-to platform, and it continues to be the go-to platform, for mission-critical development and deployment. However, the complexity as the Java estate becomes large, when you've got tens to hundreds, in some cases over a thousand, applications running across the enterprise, that complexity can be daunting. And the Java subscription is really serving the needs in three ways. One, it's getting them best-in-class support from Oracle, which is the steward of Java, the company that is generating over 80% of the innovation with every single release. The second thing is they're getting the business flexibility, so they can move at the pace that works for them. And the third piece, as the business model has indicated, is that they're getting it at a lower cost while lowering risk. So the combination of these things is the reason why we're seeing very high renewal rates, why we're seeing thousands of organizations take it up. And I want to wrap it up by saying one final thing: you can count on Oracle to be transparent, to be the right steward of both technology innovation as well as ensuring the support for the vast ecosystem, whether it's libraries, frameworks, user groups, educational services and so on. So Java is here, has been here for the enterprise, large and small, and it's ready for the next generation as well. >> Great, thank you for that. Well, one more question. What's the call to action? If I'm a mid-sized company or a large company and I've made investments in Java, what should I do next? >> I would say, take a look at the Oracle subscription. It will reduce your risk, it'll save you cost, and it'll give you a lower risk profile for your organization. >> Great, nice and crisp, I like it. If you guys don't object, I'm going to give you my summary. I've been taking notes this whole time, and so: we've explored two options. Customers can do it themselves or go with the subscription on a regular cadence. It's very clear to me that Java remains relevant, as we said up top. It's the world's most popular programming language, we know about all that. The ecosystem is really moving fast, of course, with the stewardship of Oracle: cloud, microservices, the development of modern applications. I think that the directional changes that Manish, you guys, and Don, and Oracle have made were really the right call.
The research that you did, David, shows that it's serving customers better. It lowers costs, it cuts down risk, particularly for the mid-sized companies that maybe don't have the security infrastructure and the talent to go chase those problems. And I love the roadmap piece. The more transparent roadmap really is going to give the industry and the community much more confidence to invest and move forward. So guys, thanks very much for coming on this CUBE Java power panel. It was great to have you. >> Thank you. >> Thank you. >> Thank you. >> All right, thank you for watching everybody. This is Dave Vellante for theCUBE, and we'll see you next time. (soft music)
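To ground Don's point about language features that cut boilerplate, here is a minimal, illustrative sketch, not code from the panel, assuming a JDK where records are available (they were in preview around Java 14/15 and became standard in Java 16). The class and field names are made up.

```java
// RecordSketch.java: illustrative only; compile with a JDK that supports records
// (16+, or 15 with preview features enabled).
public class RecordSketch {

    // The "classic" immutable data carrier: constructor, accessors, equals, hashCode
    // and toString all written by hand, the boilerplate Don is talking about.
    static final class PointClassic {
        private final int x;
        private final int y;
        PointClassic(int x, int y) { this.x = x; this.y = y; }
        int x() { return x; }
        int y() { return y; }
        @Override public boolean equals(Object o) {
            return o instanceof PointClassic
                    && ((PointClassic) o).x == x && ((PointClassic) o).y == y;
        }
        @Override public int hashCode() { return 31 * x + y; }
        @Override public String toString() { return "PointClassic[x=" + x + ", y=" + y + "]"; }
    }

    // The same intent as a record: the compiler generates all of the above.
    record Point(int x, int y) { }

    public static void main(String[] args) {
        System.out.println(new PointClassic(1, 2));
        System.out.println(new Point(1, 2)); // prints Point[x=1, y=2]
    }
}
```

The specific feature matters less than the cadence: improvements of this kind now arrive on the six-month schedule the panel describes.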
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Floyd | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
3 million | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
four-year | QUANTITY | 0.99+ |
Java 15 | TITLE | 0.99+ |
$11 million | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
six-month | QUANTITY | 0.99+ |
5% | QUANTITY | 0.99+ |
Donald Smith | PERSON | 0.99+ |
David Floria | PERSON | 0.99+ |
three-year | QUANTITY | 0.99+ |
tens | QUANTITY | 0.99+ |
Java 10 | TITLE | 0.99+ |
Manish Gupta | PERSON | 0.99+ |
111% | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
Wikibon Research | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Manish | ORGANIZATION | 0.99+ |
second element | QUANTITY | 0.99+ |
third piece | QUANTITY | 0.99+ |
500 | QUANTITY | 0.99+ |
25 years | QUANTITY | 0.99+ |
second pillar | QUANTITY | 0.99+ |
two ways | QUANTITY | 0.99+ |
first pillar | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Don | PERSON | 0.99+ |
Manish | PERSON | 0.99+ |
second pillar | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
one alternative | QUANTITY | 0.99+ |
third area | QUANTITY | 0.99+ |
two options | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
two aspects | QUANTITY | 0.98+ |
over 80% | QUANTITY | 0.98+ |
$3 billion | QUANTITY | 0.98+ |
September | DATE | 0.98+ |
Java EE | TITLE | 0.98+ |
Dave | PERSON | 0.98+ |
10 years ago | DATE | 0.98+ |
Boston | LOCATION | 0.98+ |
second one | QUANTITY | 0.98+ |
six months | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Rich Sharples, Red Hat | Red Hat Summit 2020
>> From around the globe, it's The Cube, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Hi, and welcome back, I'm Stu Miniman, this is The Cube's coverage of the Red Hat 2020, bringing you guests from Red Hat and their partner ecosystem, practitioners, where they are around the globe, bringing to them this digital event, and while we wish we could all be together in person, we'll just be together apart for 2020. Happy to welcome to the program, a longtime Red Hatter, but first time, on The Cube, Rich Sharples, who's the senior director of product management inside Red Hat, Rich, thank you so much for joining us. >> Yeah, thanks for the invitation, great to be here. >> All right, so the topic we're going to talk about today is something you've got a long background of the middleware space. But in, Quarkus so, I personally was not familiar with Quarkus. Obviously we know, god, I believe someone told me once that there's like, 2 million open source projects out there, so I believe I can be forgiven for not having every one of them memorized there, but of course anybody in our community is going to know Java. What a huge impact that has had on the industry. Linux and Java are two of the, you know, major movers of how we, you know, build an, you know, deal with application today, so give us a little bit of a framework as to what Quarkus is, you know, why it was created. >> Yeah, so it's no secret that as organizations and developers move to this kind of new styled cloud native development, developing applications running in containers or in a kind of serverless environment that Java is not necessarily the best fit. Java does many incredible things, it's an amazing field of engineering. But many of the coolest things it does, assumes that it's going to be a long running application, it can do this cool dynamic class loading and dynamic optimization as the application runs. Those things are pretty impressive, but they're also fairly, very heavyweight. And in our kind of ephemeral environments, whether containers or functions of service, you don't have long running applications. And you can't make use of those things, so in a Java environment you pay for those radical features that you don't necessarily get any benefit from them. So, you know, where we're really trying to lay focus is ensure developers to continue to use Quarkus, it's still the, you know, the dominant language for enterprise development. You still get the benefits of these new architectures, so ensuring that Java continues to be you know, performant and efficient in these new you know, constrained environments. >> Okay, excellent, so we're not calling it cloud native Java though, right Rich? But we are bringing, if I heard right, Java for things like containers Kubernetes, I even heard functions as a service so, we're talking to server lists of you know, open shift server lists something that's being talked about this week. So help us understand you know, if Java was long in the tooth. You know, what stays the same, what's different, how have people been managing and you know building applications in this environment, because obviously you know, we've been dealing with containers for a number of years now, so what have they been doing so far and, you know, why is Quarkus different from some of the alternatives that are out there. >> Really, the goal is to introduce those that stayed the same. It's not a different language, it's not a fork. 
It is Java, you're writing Java applications, essentially in the same way you used to write them. And you may be using Microsoft still functions so slight difference in terms of design, but it's, you know, we want to ensure that you can bring your favorite frameworks and wipers with you as well. When you're accessing databases or message brokers. We want to ensure you can still use those technologies so we're trying to bring the whole ecosystem with us, with Quarkus, so those things can run well, in a you know, container or service environment as well. And that's super important because the real benefit here is any organizations face the choice of I want to develop cloud native, I want to develop functions, but I've got this huge investment in Java in terms of skills and you know, tools and tool trains and I don't want to go learn a new language, just because I need to you know, take advantage of things new environments so we're essentially giving developers their cake and allowing them to eat it. We are trying to provide the best of both worlds. Stick with the language you already know and you know, have lots of experience with, and still be able to get the benefits of running in our containerized environment. >> Okay. what are some of the challenges here, so you know from an infrastructure standpoint. My background is, you know, virtualization broke a lot of pieces and containerization does the same thing. As you mentioned, things you know, spin up really fast and they don't stay on nearly as long. You know, god, you mentioned functions as a service, often we're measuring things in milliseconds, so everything genomes, understand what's up how do I manage it, how do I monitor it all of those pieces so, you know, I understand you're saying we take the skill set and what we know. But, you know, there's got to be some on ramp here and some considerations >> Yes, so, yeah, absolutely so, Red has taken on the ramp and ensuring that this ecosystem moves with us. We do a lot of hard work within Quarkus, so developers don't have to. We do some very, very clever stuff that very few organizations, would be able to do because they don't have the depth of knowledge of the Java virtual machine that we do. We're able to take a lot of things that you'd normally start off once only, like loading classes and you know, building kind of memory data around, all the kind of reading configurations all of the things applications do once and only once. Why do it another time? Why not build that into the component time, you're going to do it once but take it out of your runtime environment completely, so there are many ways where we're having to kind of rethink the way you know, applications run. We have to do a reset on what job was built for this environment of long running applications where, if the application took 10 minutes to load up all the stage area and classes and config, it didn't really matter, because it's not going to run for 36 months. You got to do a resale on those design decisions and think very very differently and given with our deep experience with containers and you know, working on things like native, serverless and on deep, deep roots in Java, we were able to do that and really think differently. So, Quarkus takes a lot of that kind of work away from developers they don't have to think too much about it. And by and large, what they can do is focus on their applications and their micro services and read all of that wiring and optimization for them. 
And hopefully deliver some you know, real significant improvements both in development productivity, but also the kind of runtime resource utilization as well to really lower costs. >> Okay, and Rich, what's is great that's been really the nirvana when you talk about developers is they don't want to have to think about some of that underlying you know, gobbledygook. That was why you know, the term serverless is so polarizing is because from a developer standpoint I don't think about this but everybody screams, but there are servers and there is networking and there's you know, things underneath that I need to think about. So, what is the underlying assumption here. We talked about you know, containers, Kubernetes, functions as a service, what integration is done there? Does this live across? Is it kind of like, you know, does it sit just just on RHEL and therefore everywhere the RHEL lives it's there? Or, help me understand kind of what that underlying you know, substrate is. >> Yeah, right now our focus is RHEL x86, 'cause that's kind of the dominant platform in a cloud. It is just Java, some have that natural kind of portability and you know, as other architectures become important, we can certainly look at those as well. The reason why the underlying machine architecture is important, is because one of the options you have with Quarkus is actually the ability to compile everything down to a binary executable, right? That may give you some additional footprint reduction and performance enhancements. And also if we compile down to native, we do need to think about the underlying operating system and the architecture. But by and large, as a developer you really don't have to care. Just like to you don't have to care with Java today. You also have the option with Quarkus, to run on conventional JVM, open JDK is our preference and if you can run on open JDK, then you can pretty much run anywhere. Under you know, different reasons for compiling down a native, this is running on a traditional JDK, different optimizations, different trade-offs that you'd like to make. >> All right, so Rich, an open source project here, can you tell us a little bit about you know, who's contributed to this, you know, what general adoption is this, and, you know, where are we with the solution today. Is it today ready for production environments? >> Yeah, it's getting close to production ready, yeah, we'll be making this Germany available and during Summit and many of the components we use are tried and tested, again we're not reinventing everything from the ground up. We leverage things like REHL VM, we leverage open JDK, we leverage all our frameworks and library, the developer that are familiar with, we just have to optimize them for Quarkus, so, yeah, much of this is not brand new technology. The existing technology that has that kind of maturity and tolling support. So yeah, we're confident it's production ready. One of the early stages of the development of Quarkus, was to use some of Red Hats own products as goody picks. Actually, you know, optimize those products for containerized environments by rebuilding them on top of Quarkus and that gave us obviously a lot of insight into the general readiness, yeah, the whole kind of eating around and dog food principle. In terms of the organizations in investing Quarkus, you know, we have this kind of have old addedge, we often use at Red Hat, which is you know, if you want to, if you want to move quickly, go alone. If you want to go far, then go with others. 
We're at a stage, where we've been developing Quarkus very, very rapidly and that's mostly been a Red Hat effort. We've certainly got some help from the mothership IBM and I expect that to be an increase overtime and we're now in a point where we have a Germany available product coming up and we're ready to really kind of expand the ecosystem. So, we're looking for you, whether you're a framework provider, you've written a framework for Java and you want to have that Quarkus provider, ensure that runs really well and partly the kind of growing ecosystem around Quarkus, we're looking for that, we're for, you know, cloud providers to you know, take this technology and see how it runs in other environments and give us feedback. So, yeah, definitely looking to expand that ecosystem of contributors, so we can really turn this into kind of the facto technology for the cloud. >> So, Richard, stop back for us for a second, you've got a long history with Java. You know, why in 2020 is you know, Java still, I believe it's like number two on the language list there. Why is it so important today and why is moving forward to all of these cloud solutions so important for that ecosystem. >> Yeah, I think it comes down to you know, organizations are faced with a tough choice. That they stick with the language that they know and love, which is Java, the language, the relevant applications for the last decade and not be able to take the best advantage of cloud and native or serverless environment. Whereas if they go and learn a new language, Datalog or No.js and you know, kind of hunt around and trying to see if that has the same kind of ecosystem and support. So, we want give organizations a better choice, which is you can stick with a language you already know and love and you have skills and the resources, yeah, you can still take advantage of these new environments and that's you know, I'm mean, fundaments the problem we're trying to solve for your customers. That twice open source projects are, they live or die, depending on, they really do scratch an itch, you know, fulfill a need with real developments. I'm going to think we've certainly from the adoption and interest we've seen with Quarkus, we really do think we've found a very real problem to solve. >> Yeah, Rich, before we wrap up, I just want to give you the opportunity, you know, how is your teams doing, I think you know, Red Hat's making a real concerted effort to make you know, an appropriate tone for the event this week. Trying to make sure it's not you know, some of the usual glam that we normally expect to see, full on the community all together, but, you know, the community is so important and you know, the network of people that, you know, built not only you know, technologies but also careers and you know, relationships, so, give us a insight as to how your teams doing, everybody in these challenging times. >> I think this is another good example of where open source really does show it's resilience. Open source projects are simply very, very distributed. No open source projects rely on an office being open, so your word distributed team all used to work using distributed tools across the world, different time zones. It's kind of natural for us, so we're kind of plugging on, you know, just as we have them in the task, you have a few more dogs in the background and crying babies and you know, we're all humans, we all tolerate that. We have great support from our leaderships, that's Red Hat and IMB. 
They're very clear that they've got people and families before revenue and that's good to know. Everybody's you know continuing as they can to you know, ensure that we have you know, great technology out there 'cause like I said there's real demand here that needs to filled and we're going to continue doing that. So, yeah, everybody's kind of holding up pretty well, so, let's just see how long this thing goes but again, I do think it is a valuable kind of lesson on the resilience of distributed teams and open source in particular. So, yeah. >> All right, well thank you for that Rich. Just to bring it on home, as you said, the general availability of Quarkus you know, is in front of us here, really expecting the ecosystem in costumers move. Give us a little bit of what we should be looking at going forward, what are some of the kind of maturity steps and what should we expect to see, through the remainder of 2020. >> Yeah, it's going to be a pretty exciting year, I mean, given the changes we were all going through we are going to try and come meet developers, where they are, which is you know, on their laptops and in front of their computers, so, we're going to do, we're playing through a bunch of you know, kind of very quick webinars, you know, quick bye what it takes, you know, interesting features, we're going to do some virtual hackathons as well, so you can actually get people with time and talk with some experts. We have platform for doing that. So, we're pretty excited, we, you know, again with the incident, we can reach a lot of developers very easily. Actually far more than we could at a live even like Summit, so, we're going to make the best of it and try to get at to as many developers as we can with Quarkus and you know, hopefully they'll repay us by investing a little bit of time into it and giving us some feedback and you know, trying some applications and you know, see how it goes. >> All right and you know, final, final question for your Rich, you know, Quarkus, I have to imagine that the Quark, the subatomic particle, you know, came into the naming there. Is there some connection with that? I guess why the name to the project? >> Yeah, I mean that's pretty much it, you know, the Quarkus you know, kind of. (mumbles) Arguably the smallest fundamental particle. >> And can we find something smaller? >> Well, there potentially is something smaller but that's kind of in the realm of quantum mechanics and physics, which I'm not an expert on, so, but yeah, it's meant to mean small and the us bit, the US bit. I'd like to think there was a really good big meaning around that. The meaning is that we understand, that trying to do any kind of brand leadership or trademark protection on a well know server like Quark, is it possible? So, we had to add something to Quark and Quarkus kind of sounded cool. >> All right, Rich Sharples, pleasure to catch up with you, congrats on the progress for Quarkus, definitely looking forward to watching it's progression in the future. >> Thanks, great talking to you. >> All right, I'm Stu Minneman. Lot's more coverage here at Red Hat Summit 2020. Thank you as always for watching The Cube. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stu Minneman | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
Rich Sharples | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
36 months | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Richard | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
RHEL | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
Rich | PERSON | 0.99+ |
RHEL x86 | TITLE | 0.99+ |
Germany | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
this week | DATE | 0.99+ |
Red Hatter | ORGANIZATION | 0.98+ |
JDK | TITLE | 0.98+ |
Red Hat Summit 2020 | EVENT | 0.98+ |
Linux | TITLE | 0.98+ |
twice | QUANTITY | 0.98+ |
IMB | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Quark | ORGANIZATION | 0.98+ |
both worlds | QUANTITY | 0.97+ |
Quarkus | TITLE | 0.97+ |
Quarkus | ORGANIZATION | 0.96+ |
The Cube | ORGANIZATION | 0.95+ |
Red Hat 2020 | EVENT | 0.93+ |
one | QUANTITY | 0.93+ |
both | QUANTITY | 0.92+ |
Red | ORGANIZATION | 0.91+ |
first time | QUANTITY | 0.9+ |
REHL VM | TITLE | 0.9+ |
The Cube | TITLE | 0.88+ |
once | QUANTITY | 0.85+ |
JVM | TITLE | 0.85+ |
Red Hats | ORGANIZATION | 0.85+ |
Quarkus | PERSON | 0.84+ |
US | LOCATION | 0.84+ |
2 million open source projects | QUANTITY | 0.83+ |
One | QUANTITY | 0.77+ |
Quarkus | OTHER | 0.73+ |
Quark | OTHER | 0.7+ |
Seth Juarez, Microsoft | Microsoft Ignite 2019
>> Live from Orlando, Florida, it's theCUBE, covering Microsoft Ignite, brought to you by Cohesity.
I tell them about how we can help them and then they tell me where the gaps are or where they're very excited and I take both of those pieces of feedback to the, to the product group and they, they just love being able to have someone on the ground to talk to people because sometimes you know, when work on stuff you get a little siloed and it's good to have an ombudsman so to speak, to make sure that we're doing the right thing for our customers. >>As somebody that works on AI. You must've been geeking out working, working with the Turing Institute though. Oh yeah. Those people are absolutely wonderful and it was like as I was walking in, a little giddy, but the problems that they're facing in AI are very similar. The problems that people at the other people doing and that are in big organizations, other organizations are trying to onboard to AI and try to figure out, everyone says I need to be using this hammer and they're trying to hammer some screws in with the hammer. So it's good to figure out when it's appropriate to use AI and when it isn't. And I also have customers with that >>and I'm sure the answer is it depends in terms of when it's appropriate, but do you have any sort of broad brush advice for helping an organization determine is is this a job for AI? Absolutely. >>That's uh, it's a question I get often and developers, we have this thing called the smell that tells us if a code smell, we have a code smell tells us, maybe we should refactor, maybe we should. For me, there's this AI smell where if you can't precisely figure out the series of steps to execute an algorithm and you're having a hard time writing code, or for example, if every week you need to change your if L statements or if you're changing numbers from 0.5 to 0.7 and now it works, that's the smell that you should think about using AI or machine learning, right? There's also a set of a class of algorithms that, for example, AI, it's not that we've solved, solved them, but they're pretty much solved. Like for example, detecting what's in an image, understanding sentiment and text, right? Those kinds of problems we have solutions for that are just done. >>But if you have a code smell where you have a lot of data and you don't want to write an algorithm to solve that problem, machine learning and AI might be the solution. Alright, a lot of announcements this week. Uh, any of the highlights for from your area. We last year, AI was mentioned specifically many times now with you know, autonomous systems and you know it feels like AI is in there not necessarily just you know, rubbing AI on everything. >> I think it's because we have such a good solution for people building custom machine learning that now it's time to talk about the things you can do with it. So we're talking about autonomous systems. It's because it's based upon the foundation of the AI that we've already built. We released something called Azure machine learning, a set of tools called in a studio where you can do end and machine learning. >>Because what what's happening is most data scientists nowadays, and I'm guilty of this myself, we put stuff in things called Jupiter notebooks. We release models, we email them to each other, we're emailing Python files and that's kinda like how programming was in 1995 and now we're doing is we're building a set of tools to allow machine learning developers to go end to end, be able to see how data scientists are working and et cetera. For example, let's just say you're a data scientist. Bill. 
Did an awesome job, but then he goes somewhere else and Sally who was absolutely amazing, comes in and now she's the data scientist. Usually Sally starts from zero and all of the stuff that bill did is lost with Azure machine learning. You're able to see all of your experiments, see what bill tried, see what he learned and Sally can pick right up and go on. And that's just doing the experiments. Now if you want to get machine learning models into production, we also have the ability to take these models, version them, put them into a CIC, D similar process with Azure dev ops and machine learning. So you can go from data all the way to machine learning in production very easily, very quickly and in a team environment, you know? And that's what I'm excited about mostly. >>So at a time when AI and big and technology companies in general are under fire and not, Oh considered to not always have their users best interests at heart. I'd like you to talk about the Microsoft approach to ethical AI and responsible AI. >>Yeah, I was a part of the keynote. Scott Hanselman is a very famous dab and he did a keynote and I got to form part of it and one of the things that we're very careful even on a dumb demo or where he was like doing rock paper, scissors. I said, and Scott, we were watching you with your permission to see like what sequence of throws you were doing. We believe that through and through all the way we will never use our customers' data to enhance any of our models. In fact, there was a time when we were doing like a machine learning model for NLP and I saw the email thread and it's like we don't have language food. I don't remember what it was. We don't have enough language food. Let's pay some people to ethically source this particular language data. We will never use any of our customer's data and I've had this question asked a lot. >>Like for example, our cognitive services which have built in AI, we will never use any of our customer's data to build that neither. For example, if we have, for example, we have a custom vision where you upload your own pictures, those are your pictures. We're never going to use them for anything. And anything that we do, there's always consent and we want to make sure that everyone understands that AI is a powerful tool, but it also needs to be used ethically. And that's just on how we use data for people that are our customers. We also have tools inside of Azure machine learning to get them to use AI. Ethically. We have tools to explain models. So for example, if you very gender does the model changes prediction or if you've very class or race, is your model being a little iffy? We allow, we have those tools and Azure machine learning, so our customers can also be ethical with the AI they build on our platform. So we have ethics built into how we build our models and we have ethics build into how our customers can build their models too, which is to me very. >>And is that a selling point? Are customers gravitating? I mean we've talked a lot about it on the show. About the, the trust that customers have in Microsoft and the image that Microsoft has in the industry right now. But the idea that it is also trying to perpetuate this idea of making everyone else more ethical. Do you think that that is one of the reasons customers are gravitate? >>I hope so. 
And as far as a selling point, I absolutely think it's a selling point, but we've just released it and so I'm going to go out there and evangelize the fact that not only are we as tickle with what we do in AI, but we want our customers to be ethical as well. Because you know, trust pays, as Satya said in his keynote, tra trust the enhancer in the exponent that allows tech intensity to actually be tech intensity. And we believe that through and through not only do believe it for ourselves, but we want our customers to also believe it and see the benefits of having trust with our customers. One of the things we, we talked to Scott Hanselman a little bit yesterday about that demo is the Microsoft of today isn't just use all the Microsoft products, right? To allow you to use, you know, any tool, any platform, you know, your own environment, uh, to tell us how that, that, that plays into your world. >>It's, you know, like in my opinion, and I don't know if it's the official opinion, but we are in the business of renting computer cycles. We don't care how you use them, just come into our house and use them. You wanna use Java. We've recently announced a tons of things with spraying. We're become an open JDK contributor. You know, one of my colleagues, we're very hard on that. I work primarily in Python because it's machine learning. I have a friend might call a friend and colleague, David Smith who works in our, I have other colleagues that work in a number of different languages. We don't care. What we are doing is we're trying to empower every organization and every person on the planet to achieve more where they are, how they are, and hopefully bring a little bit of of it to our cloud. >>What are you doing that, that's really exciting to you right now? I know you're doing a new.net library. Any other projects that are sparking your end? >>Yeah, so next week I'm going to France and this is before anyone's going to see this and there is a, there is a company, I think it's called surf, I'll have to look it up and we'll put it in the notes, but they are basically trying to use AI to be more environmentally conscious and they're taking pictures of trash and rivers and they're using AI to figure out where it's coming from so they can clean up environment. I get to go over there and see what they're doing, see how I can help them improvement and promote this kind of ethical way of doing AI. We also do stuff with snow leopards. I was watching some Netflix thing with my kids and we were watching snow leopards and there was like two of them. Like this is impressive because as I'm watching this with my kids, I'm like, Hey we are at Microsoft, we're helping this population, you know, perpetuate with AI. >>And so those are the things it's actually a had had I've seen on TV is, you know, rather than spending thousands of hours of people out there, the AI can identify the shape, um, you know, through the cameras. So they're on a, I love that powerful story to explain some of those pieces as opposed to it. It's tough to get the nuance of what's happening here. Absolutely. With this technology, these models are incredibly easy to build on our platform. And, and I and I st fairly easy to build with what you have. We love people use TensorFlow, use TensorFlow, people use pie torch. That's great cafe on it. Whatever you want to use. We are happy to let you use a rent out our computer cycles because we want you to be successful. 
Maybe speak a little bit to that. When you talk about the cloud, one of the things is to democratize the availability of this; there are usually free tiers out there, especially in the emerging areas. How is Microsoft helping to get that compute and that world of technology to people that might not have had it in the past? >> I was in Peru a number of years ago, and I had a discussion with someone on the Channel 9 show, and I suddenly understood the value of this. He said, "Seth, if I wanted to do a startup here in Peru," and it was the capital of Peru, a very industrialized city, "I would have to buy a server. It would come from California on a boat. It would take a couple of months to get here, and then it would be in a warehouse for another month as it goes through customs. And then I would have to put it into a building that has A/C, and then I could start. Now, with a click of a button, I can provision an entire cluster of machines on Azure and start right now." That's what the cloud is doing in places like Peru, places that maybe don't have a lot of infrastructure. Now infrastructure is for everyone, and maybe someone even in the United States, in a rural area, can start up their own business right now, anywhere. And it's not just because it's Peru, it's not just because it's some other place that's becoming industrialized. It's everywhere, because any kid with a dream can spin up an App Service and have a website done in like five minutes. >> So what does this mean? As you said, any kid, any person, a rural area, a developing country: what does this mean five or ten years from now, in terms of the future of commerce and work and business? >> Honestly, some people feel like computers are stealing human ingenuity. I think they are really augmenting it. For example, back when I was a kid, if I wanted to know something, sometimes I had to go without knowing. Like, I guess we'll never know, right? And then five years later we're like, okay, we found out it was that character on that show. And now we just look at our phone, and it's like, oh, you were wrong. And I liked not knowing I was wrong for a lot longer, you know what I'm saying? But nowadays, with our phones and with other devices, we have information readily available so that we can give appropriate responses, appropriate answers, to the questions that we have. AI is going to help us with that by augmenting human ingenuity, by looking at the underlying structure that we can't. For example, if you look at an Excel spreadsheet with five rows and maybe five columns, you and I as humans can look at it and see a trend. But what if it's 10 million rows and 5,000 columns? Our ingenuity has been stretched too far. But with computers, now we can aggregate, we can run some machine learning models, and then we can see the patterns that the computer found, aggregated, and we can make the decisions we could make with five columns and five rows. It's not taking our jobs; it's augmenting our capacity to do the right thing. >> Excellent. Well, Seth, thank you so much for coming on theCUBE. A really fun conversation. >> Glad to be here. Thanks for having me. >> All right, I'm Rebecca Knight, for Stu Miniman. Stay tuned for more of theCUBE's live coverage of Microsoft Ignite.
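The 10-million-row example above is essentially about letting the machine do the aggregation and pattern-finding a person can do by eye on five rows. Here is a small, generic sketch of that idea, pandas plus k-means on synthetic data rather than any particular Microsoft product, ending with per-group summaries a human can actually read.

```python
# Hedged sketch: aggregate a table that is too big to eyeball, let a simple
# clustering model surface groups, then print per-group summaries, i.e. the
# "see the patterns the computer found, aggregated" step from the conversation.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_rows = 100_000                      # stand-in for "10 million rows"

df = pd.DataFrame({
    "spend": rng.gamma(2.0, 50.0, n_rows),
    "visits": rng.poisson(4, n_rows),
    "tenure_months": rng.integers(1, 120, n_rows),
})

X = StandardScaler().fit_transform(df)
df["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Per-segment aggregates that fit back into a five-row view.
print(df.groupby("segment").agg(
    rows=("spend", "size"),
    avg_spend=("spend", "mean"),
    avg_visits=("visits", "mean"),
    avg_tenure=("tenure_months", "mean"),
).round(1))
```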
SUMMARY :
Microsoft Ignite, brought to you by Cohesity. Rebecca Knight and Stu Miniman talk with Seth Juarez of Microsoft about Azure Machine Learning: keeping every experiment visible so work isn't lost when one data scientist replaces another, versioning models and moving them into production through a CI/CD-style process with Azure DevOps, and Microsoft's approach to ethical and responsible AI, which includes never using customers' data to enhance its own models and giving customers tools to explain and stress-test the models they build. The conversation closes on how the cloud democratizes access to compute, from a startup in Peru to a rural business in the United States, and on AI augmenting rather than replacing human ingenuity.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Sally | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Scott | PERSON | 0.99+ |
David Smith | PERSON | 0.99+ |
Peru | LOCATION | 0.99+ |
Seth Juarez | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
France | LOCATION | 0.99+ |
1995 | DATE | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Turing Institute | ORGANIZATION | 0.99+ |
10 million rows | QUANTITY | 0.99+ |
Scott Hanselman | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
United States | LOCATION | 0.99+ |
five minutes | QUANTITY | 0.99+ |
five rows | QUANTITY | 0.99+ |
5,000 columns | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
five columns | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
Satya | PERSON | 0.99+ |
Java | TITLE | 0.99+ |
next week | DATE | 0.99+ |
Excel | TITLE | 0.99+ |
Python | TITLE | 0.99+ |
Seth | PERSON | 0.99+ |
Cuba | LOCATION | 0.99+ |
Bill | PERSON | 0.99+ |
today | DATE | 0.99+ |
26,000 people | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
five years later | DATE | 0.98+ |
this week | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
15 minutes | QUANTITY | 0.98+ |
One | QUANTITY | 0.97+ |
0.7 | QUANTITY | 0.97+ |
Azure | TITLE | 0.96+ |
JDK | TITLE | 0.96+ |
thousands of hours | QUANTITY | 0.95+ |
10 years | QUANTITY | 0.94+ |
five | QUANTITY | 0.93+ |
Netflix | ORGANIZATION | 0.92+ |
0.5 | QUANTITY | 0.91+ |
zero | QUANTITY | 0.91+ |
TensorFlow | TITLE | 0.9+ |
orange County convention center | LOCATION | 0.84+ |
snow leopards | TITLE | 0.84+ |
nine show | QUANTITY | 0.76+ |
number of years ago | DATE | 0.73+ |
NLP | ORGANIZATION | 0.72+ |
two of them | QUANTITY | 0.7+ |
bill | PERSON | 0.67+ |
months | QUANTITY | 0.66+ |
Stu | ORGANIZATION | 0.65+ |
things | QUANTITY | 0.61+ |
ignite | TITLE | 0.6+ |
Cohesity | ORGANIZATION | 0.59+ |
couple | QUANTITY | 0.54+ |
Mark Little & Mike Piech, Red Hat | Red Hat Summit 2019
>> Voiceover: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> And welcome back to our coverage here on theCUBE of Red Hat Summit 2019. We're at the BCEC in Beantown, Boston, Massachusetts, playing host this week to some 9,000 attendees, packed keynotes, and just a great three days of programming and educational sessions. I'm John Walls, along with Stu Miniman, and we're joined by Mike Piech, who's the VP and general manager of Middleware at Red Hat. Mike, good to see you today. >> Great to be back. >> And Mark Little, VP of engineering for Middleware at Red Hat. Mark, good to see you as well, sir. >> You too. >> Yeah. First off, let's just talk about your take on the show here. Been here for a few days; as we've seen on the keynote stage, a wide variety of announcements, great case studies, great educational sessions. What are your impressions of what's going on and some of the announcements we've heard about this week? >> Well, sure. I mean, definitely some very big announcements with RHEL 8 and OpenShift 4. So as Middleware we're a little bit more in sort of guerrilla mode here, while some of the bigger announcements take a lot of the limelight. But nevertheless, those announcements and the advances that they represent are very important for us as Middleware, particularly OpenShift 4. As sort of the next layer up from OpenShift, which the developers touch and feel and live and breathe on a daily basis, we are the immediate beneficiaries of much of the advances in OpenShift, and that's something that we as the Middleware guys make real for the enterprise application developer. >> I'd say, probably for me, building on that in a way, one of the biggest announcements, one of the biggest surprises has gotta be the first keynote, where we had Satya from Microsoft on stage with Jim announcing the collaboration that we're doing. I never believed that would ever happen, and that's fantastic. It has a benefit for Middleware as well, but also for Red Hat as a whole. Who would've thought it? >> John: Who would have thought it, right? Yeah, we actually just had Marco Bill-Peter on, and he was talking about, he's like, "Look, we've actually had some of our support people up in Redmond now for a couple of years." And we had Chris Wright on earlier, and he says, "You know, sometimes we get to these shows and you get the big bang announcement. It's like, well, really we're working incrementally along the way, and in open source you can watch it. Sure, sometimes you get the new chipset or there's a new this or that. But you know, it's very, very small things." So in the spirit of that, maybe give us the updates since the last time we got together. What's happening in the Middleware space? As you said, if we build up the stack, we've got RHEL 8, we've got OpenShift 4, and you're sitting on top. >> Yeah. Well, one aspect that an event like this makes clear, in almost a reverse sort of way: we put a lot of effort, particularly in Mark's team, into getting to a much more frequent and more incremental release cycle and style, right? So getting away from big bang releases every year or couple of years, to a much more agile, incremental regime of rolling out functionality. Now, one of the downsides of that is that you don't have these big, grand product announcements to make a big deal about in the same way as RHEL just did with 8, for example.
So we need to rethink how we sort of (laughs), absent the big .0 releases, batch up interesting news and roll it out at a large event like this. Now, one of the things that we have been working on is our application environment narrative. The whole idea of the story here is that many people talk about cloud-native, and about having lots of different capabilities and services in a cloud environment. And as we've gone through the last year or so in particular, it's really become apparent, from what our customers tell us and from what we see as the opportunities in the cloud-native world, that the value we bring is engineering all these pieces together, right? So it's not simply a list of disparate, disconnected, independent services, but rather Middleware re-imagined for the world of cloud-native: capabilities that, when engineered together in the right way, make for a comprehensive, unified, cohesive environment within which our customers can develop applications and run those applications. For the developer you get developer productivity, and then at runtime you're getting operational reliability. So there really is a dual-sided value proposition there. And this notion of Middleware engineered together for the cloud is what the application environment idea is all about. >> Yeah, I'd add that one of the things that ties into that, which has been big for us at least at Summit this year, is an effort that we kicked off, announced two months ago, called Quarkus. As you know, a lot of what we do within Middleware, within Red Hat, is based on Java, and Java is still the dominant language in the enterprise, but it's been around for 20 years. It developed in a pre-cloud era, and that meant lots of assumptions were made about the way the Java language, and the JVM on which it runs, would develop, assumptions which aren't necessarily that conducive to running in a cloud environment, a hybrid cloud environment, and certainly a public cloud environment based on Linux containers and Kubernetes. So we've been working for a number of years in the upstream OpenJDK community to try and make Java itself much more cloud-native. And Quarkus builds on that. It's essentially what we call a Kube-native approach, where we optimize the entire Middleware stack up front to work really, really well on Kubernetes, and specifically on OpenShift, and it's all Java; that's the important thing. If people look into this, they'll find that we're showing performance figures and memory utilization on a par with some of the newer languages like Go, for instance: very, very fast, with boot times that have typically gone from seconds to tens of milliseconds. People who have seen it demonstrated have literally been blown away, because it allows them to leverage the skills they've invested in their employees learning Java and move to the cloud without telling them, "You guys are gonna have to learn a completely new language and start from scratch." >> All right, so Mark, if I get it right, 'cause we've been at the Kubernetes show for a bunch of years, this is you looking at the application side of what's happening in those Kubernetes environments. >> Mark: Yeah. >> So many times we've talked about the platforms and the infrastructure on down, but it's the app piece on top that's super important. I know down at the DevZone people were buzzing around all the Quarkus stuff.
What else should people that are looking at that kind of cloud-native, containerization space be looking at when it comes to your area? >> Well, again, tying into the app environment thing: hopefully you'll have heard of Knative and Istio. Knative, to put it in a quick sentence, is essentially an enabler for serverless, if you like. It's where we're spinning up containers really, really quickly based on events. But really, any serverless platform lives and dies based on the services your business logic can rely upon. Do I have a messaging service there? Do I have a transaction service, or a database service? So we've been working with Google on Knative, and with Microsoft on Knative, to ensure that we have a really good story in OpenShift, but tying it into our Middleware suite as well. So many of our Middleware products are now Knative-enabled, if you like. The second thing is, as I mentioned, Istio, which is a sidecar approach. I won't go into details on that, but the aim behind Istio is to remove from the application developer some of the non-functional logic they had to put in there, like "How do I use a messaging service? How do I secure this endpoint?", and push it down into the infrastructure. So the security services, the messaging services, the caching services, et cetera, move out of the business logic and into Istio. But from our point of view, it's our security services that we've been working on for years, it's our transaction services that we've been working on for years. These are bullet-proof implementations that we have just made more cloud-native by embedding them, in a way, in Istio and, like I said, enabling them with Knative.
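Knative itself is configured through Kubernetes resources, but the unit a developer actually ships is just a container that answers HTTP and can scale to zero when idle. As a rough illustration only (Flask is an arbitrary choice here, not a Red Hat example), the workload behind a Knative service can be as small as this; Knative injects the `PORT` environment variable the server listens on.

```python
# Minimal HTTP "function" of the sort you would package in a container image
# and register as a Knative Service; the platform handles scale-up on events
# and scale-to-zero when traffic stops. Flask is used purely for illustration.
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_event():
    event = request.get_json(silent=True) or {}
    # Business logic only; messaging, security and caching concerns are left
    # to the platform/sidecar layer rather than coded in here.
    return jsonify({"received": event, "status": "processed"})

if __name__ == "__main__":
    # Knative sets PORT for the container; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```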
It has driven a lot of increase demand for an event-driven approach where you're streaming in realtime and distributing events to many receivers and dealing with things asynchronously and not depending on round-trip times for everything to be consistent and so on. So, there's just a myriad of implications there that are very detailed technical-level drive some of the things that we're doing now. >> Yeah, I'll just add that in terms of data itself, you've probably heard this a number of times, data is king. Everything we do is based on data in one way or another, So we as Red Hat as a whole and Middleware specifically, we've had a very strong data strategy for a long time. Just as you've got myriad types of data, you can't assume that one way of storing that data is gonna be right for every type of data that you've got. So, we've worked through the integration efforts on ensuring that no sequel data stores, relational data stores^, in-memory data caching and even the messaging services as a whole is a way of sto^ring data in transit, that allows you to, in some ways it allows you to actually look at it in an event-driven way and make intelligent decisions. So that's a key part of what anybody should do if they are in the enterprise space. That's certainly what we're doing because at the end of the day people are building these apps to use that data. >> Well, gentlemen, I know you have another engagement. We're gonna cut you loose but I do wanna say you're the first guests to get applause. (guests laugh) >> From across all the way there. People at home can't hear but, so congratulations. You've been well received already. >> I think they're clearly tuned in to the renaissance of the job in here. >> Yes. >> Thank you both. >> Thanks for the time. >> Mark: Thanks so much. >> We appreciate that. Back with more, we are watching a Red Hat summer 2019 coverage live on the CUBE. (Upbeat music)
SUMMARY :
Live from Red Hat Summit 2019 at the BCEC in Boston, John Walls and Stu Miniman talk with Mike Piech and Mark Little of Red Hat Middleware. They discuss the RHEL 8 and OpenShift 4 announcements, the surprise Microsoft collaboration, the shift to frequent, incremental Middleware releases, and the "application environment" idea of engineering cloud-native capabilities together. Mark covers Quarkus and making Java Kube-native on OpenShift, plus Knative and Istio support across the Middleware portfolio, and the conversation closes on how a data-centric, event-driven world is reshaping integration, API management, and data storage choices.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mike Piech | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Mark | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Chris Wright | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Mark Little | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Middleware | ORGANIZATION | 0.99+ |
Redmond | LOCATION | 0.99+ |
Java | TITLE | 0.99+ |
Mike | PERSON | 0.99+ |
RHEL 8 | TITLE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
OpenShift 4 | TITLE | 0.99+ |
each | QUANTITY | 0.99+ |
two months ago | DATE | 0.99+ |
Beantown, Boston, Massachusetts | LOCATION | 0.98+ |
Red Hat Summit 2019 | EVENT | 0.98+ |
tens of milliseconds | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
OpenShift | TITLE | 0.98+ |
First | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Red Hat | TITLE | 0.98+ |
Linux | TITLE | 0.98+ |
today | DATE | 0.97+ |
this week | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
first guests | QUANTITY | 0.97+ |
last year | DATE | 0.97+ |
DevZone | TITLE | 0.97+ |
this year | DATE | 0.96+ |
CUBE Red Hat Summit 2019 | EVENT | 0.96+ |
second thing | QUANTITY | 0.96+ |
first keynote | QUANTITY | 0.95+ |
Istio | ORGANIZATION | 0.95+ |
first | QUANTITY | 0.95+ |
Satya | PERSON | 0.93+ |
summer 2019 | DATE | 0.93+ |
RHEL | TITLE | 0.93+ |
one aspect | QUANTITY | 0.92+ |
Middleware | LOCATION | 0.91+ |
three days | QUANTITY | 0.9+ |
9000 strong attendees | QUANTITY | 0.89+ |
JDK | TITLE | 0.89+ |
20 years | QUANTITY | 0.87+ |
knative | ORGANIZATION | 0.86+ |
couple of years | QUANTITY | 0.84+ |
JVM | TITLE | 0.82+ |
CUBE | ORGANIZATION | 0.79+ |
Adrian Cockcroft, AWS | AWS re:Invent 2018
>> Voiceover: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel and their ecosystem partners. >> Welcome back to Las Vegas, everybody. I'm Dave Vellante with my co-host David Floyer, and you're watching theCUBE, the leader in live tech coverage. This is our third day of coverage at AWS re:Invent 2018, our sixth year covering this event that keeps getting bigger and bigger. Dave, at 53,000 people, amazingly, the place is still jammed and we still barely have our voices. Adrian Cockcroft is here; he's the Vice President of cloud architecture and strategy at AWS, very well known in the industry and a CUBE alum. Thanks so much for coming back on. >> Thank you. >> Yeah, we've been to all of the re:Invents, we were here first as a customer, and we missed one but we watched remotely and hung on every word, back when there wasn't a lot of information about AWS. Now it's like too much information to process; it's gonna take us months to sort through it all. But at any rate, it's a phenomenal opportunity for us to learn, to share, to inspire folks, and you do some great work. Talk a little bit about some of the fun stuff you're working on in your current role. >> Yeah, I have a few different things I do. One part of my role is that I go around the world giving keynotes at AWS Summits. Mostly I call it doing my Werner Vogels impression: it's his deck, and I get to present it around the world. We have to digest all of this stuff into a 90-minute deck that we can take around the world, so it's a question of what do you leave out, and it gets harder and harder every year. That's a lot of fun. But the team that I run for AWS, the one I've been recruiting and running, is around open source. We sponsor various events, we're members of various foundations, we make contributions to projects, and we've been helping by hiring people from the open source communities into AWS, to help some of the AWS service teams with their launches of open-source-related projects. So what's been happening this year: we've had like a hundred blog posts related to open source, lots of tweets, lots of activity, lots of events, OSCON, All Things Open, KubeCon, which I'll be at in a couple of weeks, as will you guys, probably, again. And this week there were a few of the launches where we got quite deeply involved. We did a blog post on the open source blog, mostly at the same time as Jeff Barr fires off the "okay, here's the service" post: here's the open source part of it, this is how you contribute, and this is what's going on. So we've had some fun with that. >> So it was two years ago when we first met; you'd just been on the job for about a month at that particular time, and you laid out what you wanted to do, from your previous experience, in terms of how you wanted to turn AWS into an open source contributor. How would you rate yourself two years in? >> I think we've made some good progress. Really, AWS was making contributions to open source, but had nobody talking about it, and it was nobody's job to go out and explain what we were doing. So that was part of the problem two years ago: there was actually more happening than most people knew about, but we were just not telling the story, and it wasn't coming across well. >> And the culture? >> And the culture, I mean, it was spotty. Some parts of AWS were doing a lot of open source; other parts were kind of not really seeing it as a priority. So by talking a lot more about it, we get a more uniform acceptance across AWS, a huge organization, but
Amazon as a whole, we are actually telling that story, and it's a much broader story than just AWS. Being able to bring that out and have everyone go, "Oh, I see everyone doing it, so I should be doing that" helps create the leadership for more teams to follow. And what we've seen, with really the first year building the team and the last year getting the content flowing and the processes working for all of the different events, blog posts and the outbound part of the group, is an increasing number of contributions and launches. So, Corretto, which was a few weeks ago, a bit ahead of this launch week, was a good example. A lot of that came from Arun Gupta on my team. He's a Java champion; he used to be at Sun, he worked at Red Hat on JBoss, so he knows everybody in Java and has great credibility across the Java community. And he said, we should launch this product in Belgium at like midnight West Coast time, and let's fly in James Gosling and, in secret, get him on stage without anyone knowing he's gonna do it, and have him do the introduction. It was this totally crazy idea, and it came off beautifully, and we even had the Oracle Java people saying nice things about the contributions to OpenJDK. It's just a really nice example of figuring it all out, getting everybody on board, getting everything done right, and then saying: here's something that matters to the community that we can contribute. The star power helps, but convincing James to do it, Arun gets a lot of credit for that particular launch. And, you know, this is the kind of people I have on my team, and we're pulling them in and pointing them at, okay, can you help this team figure out how to take this open source project to market? >> Now, I mean, that was a major contribution to the open source community, and it was just in time, wasn't it? But another, slightly different view might be that you and Oracle should have been working this out, not leaving it until the last minute. >> Well, we were doing this work anyway, right? We were effectively self-supporting our own version of Java internally; we were getting better performance and sooner bug fixes on OpenJDK, so we made a decision to just move to the OpenJDK stream and unhook our internal use of the other options we had. You know, in a very large organization, along the way you acquire lots of different versions and flavors of Java, and you'd notice this with any language, so it was: let's clean it up, let's get on JDK 8 and 10 and self-support it. Then we announced that we'll support it on Amazon Linux, right? And the final step was customers saying, please just support it on my laptop and anywhere else I need it. And the thing we didn't announce then, we didn't make a big thing out of, was Arm support. It was in there by default; we didn't talk about it because the Arm chips came out this week. And part of it was also to have exactly the same version of Java on all of the Amazon Linux builds, Intel, AMD and Arm, which helps the compatibility for people going, "Well, it's a different processor architecture." It all ties together; it was all part of the thinking. >> So you didn't want to tip your hand on the announcement; the timing is right. Okay, so I think sometimes AWS is misunderstood, partly from its own doing. I mean, you just mentioned you
contribute a lot to open source, but you never talked about it. Generally, when AWS doesn't have something to say, they don't say a lot about it, so others are left to make the narrative. You come on, you've now got an open source agenda; can you just summarize what the motivation is and what the objectives are? >> Well, we have lots of different pieces of this. You have service teams saying, "I'm gonna launch this product, and there's an open source component to it, can you help?" Sometimes that means I hire someone onto my team to specialize in that area; sometimes it's just us consulting with the team, or connecting them to the open source community. So that's one piece of it. If you think about the CNCF in particular, the Cloud Native Computing Foundation, it's got lots of projects, and if you think about the AWS service teams, no one team really owns the scope of the CNCF. But my team has that ownership for the CNCF as a whole; we have the board seat. And we say, okay, we have the serverless people over here, we've got the container things over here, there are some Linux kernel and virtualization bits here; we can reach out to lots of different teams across AWS, but act as a central point. If you have something about open source you want to talk about with AWS, or Amazon as a whole, you can come to us and we'll find the right people and help you make those connections. So part of it is acting as an on-ramp, a sort of buffer between the internal and the external concerns of the communities, so there's somewhere to go. And part of it is just getting contributions out there. We do get criticized for not making enough contributions. Well, we've been making more, we're making more, and we'll just keep making more contributions until people give us credit for it. If you ask what the strategy is: contribute more, then tell people, point at it, hope people like what we did, and take the input. It's a customer-driven thing, right? We're gonna do what our customers ask us to do, with a customer and community focus on the things we want to do. We've been contributing to Spinnaker, the Netflix OSS project; we made some serious contributions to that this year. Firecracker, which we'll talk about in a bit, and RoboMaker are other areas where we've been working. >> Firecracker is particularly interesting, isn't it? I mean, that's a major contribution, improving the performance and capability of those micro VMs. Can you talk about that a little bit? >> Yeah, it's interesting because it's a piece of software pretty much no one will ever see or use. It's the thing you run on the bare metal that lets you run your containerd, that lets you run your container on top, right? It's deep down in the guts of the system, this piece of code. There are a few reasons we're using it. It's in production now, supporting some of our production use of Fargate and Lambda. It's not a hundred percent rolled out, but there's a good chunk of the capacity running on it, and that's where it turns out to be useful. And, who knows how long we have to get into this, but if you think about a customer running a Lambda function, we would create a VM with that Lambda function in it. If they wanted a second Lambda function, we'd put it alongside that one. When a new customer comes, we start a new VM for them and start their Lambda function in it. VMs take a while to start up, so you have to keep some pre-made ones sitting there waiting.
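For context on what is being packed into those VMs: a Lambda function is just a small, stateless handler along the lines of the hedged Python sketch below (the event fields are illustrative, not a fixed schema). The virtualization layer Adrian describes sits entirely underneath it, invisible to the function code.

```python
# A generic AWS Lambda handler in Python, i.e. the unit of work being discussed.
# The function is stateless; the platform decides where it runs. Event fields
# shown here are illustrative rather than a fixed schema.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

if __name__ == "__main__":
    # Local smoke test; in AWS the runtime calls handler(event, context) itself.
    print(handler({"name": "re:Invent"}, None))
```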
But these are big VMs, and we're putting lots of little functions in them. What Firecracker lets you do is start a separate micro VM for every function and still safely put all of the customers on one machine, so you start packing them in. It's a much more efficient way to run your capacity: our utilization of the machines supporting Lambda is vastly higher than having a machine with a bunch of empty space in it that we're keeping running for the customer. So the efficiency is one thing, and then there's the speed of starting a VM. It's a very cut-down VM, so it's about 125 milliseconds just to start the VM, which is incredibly fast when you consider that "hey, give me a VM on EC2" takes somewhere between 30 seconds and a few minutes; a 12-terabyte VM takes a little while to boot up, but you don't have to pay for it till it's finished, which is one of the good things about these huge machines. >> Right. How about RoboMaker? Can you talk a little bit about that, and why it's important? >> So RoboMaker is interesting. On the open source blog, which we posted to late on Sunday night, early Monday morning, I did an interview with Brian Gerkey, who's the founder of the Open Source Robotics Foundation. What we've done there is kind of an extension of SageMaker. If you think about that being AI: okay, I can deploy an AI model, but what does the AI model want to do? It wants to read something from the real world and modify the real world, so it reads from a camera or some other sensor and then controls motors and servos. That's what RoboMaker does. It wraps the intelligence you can build with SageMaker with the Robot Operating System, ROS, which has a library of actuators and a library of control algorithms. You've got a little brain in the middle, and you've got a new robot that does something. And we had the DeepRacer racing car, which is where all of these things come together to make a little toy race car that we can drive around tracks, which is a whole other topic we could get into. But I interviewed Brian on the history of ROS: where did it come from, and what is the hard thing about running it? It turns out the hard thing with ROS wasn't building the robots, it was simulating the robots. The simulator is quite a CPU-intensive job, and it's graphics-intensive; you've got this virtual world you're running, and VR worlds are quite intensive, and getting that installed and running was the hard part. So what RoboMaker is, is that as a service. The simulator is called Gazebo, just a funny name, so Gazebo as a service is the actual thing we're effectively charging for, with a free tier so you can play with it, and then we charge you for simulation units, like how much computing time you're using. The rest of it is all Cloud9 for the front end, and deployment to fleets of robots, updating them and managing them. But the interesting thing is who this is getting into: the FIRST Robotics crowd, high schools. The high school robotics competitions are interested, and universities are interested as well. So it's not just for commercial production robots; it's the whole training thing. We're getting into STEM education, since kids like playing with robots it's a good way in, and we're pulling all this in. So now you can go home and take the latest, most advanced AI algorithms, the kind you used to have to be doing a PhD at Stanford to play with, and play with your kid over Christmas and see what you can come up with.
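To ground the ROS part of that: the "little brain in the middle" ultimately drives a robot by publishing messages on ROS topics. Below is a minimal, hedged ROS 1 (rospy) sketch of a velocity publisher; it assumes a working ROS installation with something subscribed to `cmd_vel` (a real robot or a Gazebo simulation), and the topic name is the common convention rather than anything RoboMaker-specific.

```python
#!/usr/bin/env python
# Minimal ROS 1 node: publish velocity commands the way a simple "brain" would.
# Assumes a ROS environment (roscore running) and a subscriber on cmd_vel,
# e.g. a simulated robot in Gazebo. Illustration only.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("tiny_brain")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)               # 10 Hz control loop

    cmd = Twist()
    cmd.linear.x = 0.2                  # move forward slowly (m/s)
    cmd.angular.z = 0.0                 # no turning

    while not rospy.is_shutdown():
        pub.publish(cmd)                # the actuator layer takes it from here
        rate.sleep()

if __name__ == "__main__":
    try:
        drive_forward()
    except rospy.ROSInterruptException:
        pass
```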
>> Really simplifying the whole software development side of that. When you look at the Dean Kamen competitions, they're just awesome; all the kids gravitate to the hardware because they can touch it, while the software was really hard. And this is going to, I think, take it to a new level, particularly now that it's all open source. >> Yeah, it's all open source. Somebody was complaining that what we'd done was some proprietary robot thing with the toy cars, and I pointed them at the GitHub URL: you can go build this thing, it's all open source, and you can put anything else you want on it. But the robot car has ROS on it, the Robot Operating System, plus SageMaker and RoboMaker, all combined together, and they're off running races and having fun. >> Now, you guys are both Formula One fans, and you've been having some high-profile Formula One folks here, you've got the little mini vehicle; riff on that. >> That's really open source, but I have another thing I'm doing on the side. It turns out that over the last year or so we started looking for opportunities to do sports sponsorship, with a particular focus on Europe and the rest of the world. We had a few US sports deals, I don't know, something with balls; I like sports with wheels. So about the middle of last year, around June, we announced the deal with Formula One, which is a multi-part deal. Part of the deal was just to take them to the cloud. They have some data centers and stuff they were running, they were out of space in their data center, and they wanted to do a technology refresh. So, for all the reasons that everyone else is moving to cloud, we're moving the sport's core infrastructure to cloud over some number of years; that's a process that's starting. And part of that is the archive of all Formula One races. It's a treasure trove: 67 years of archive of everything they've got, all the video. We're digitizing it and we're gonna figure out what to do with it; we've got to process it to label everything anyway. So that's one thing. Then we all turned up at Silverstone in the UK at that race, the week after the announcement, and at that race we had AWS logos turning up on the screen, because another piece of the deal was sponsorship. We started sponsoring the core video feed that Formula One sends to the world, and that's 500 million fans watching Formula One. So now 500 million fans, for the next few years, are going to see AWS logos on screen around the analytical insights of what is going on in the sport: the rear tires are overheating, you went round a corner this fast, here's the pit stop strategy. So it's brand advertising associated with a high-technology sport and analytical insights, and that's why we did that deal. And they get all of our technology, AI, and a lot of help migrating. And then the third thing, the one I got involved with: I'd already done a few CIO summits at Formula One races along the way, kind of poking my way into this thing that was happening, since I'm not involved in the sponsorship side. So we decided to do some executive events around Formula One. We'll pick a few races, we'll have some corporate-hospitality-like things, but when you put a bunch of senior executives together for a few
days, they share, they solve each other's problems, and you just get out of the way. The people that have solved one problem will share it with the others, so it's really like a tiny re:Invent: everyone is sharing, and if you sit next to someone and ask what problem they've solved, you can find stuff out. This is a concentrated version of that, and we trialed it in Monza earlier this year. It went great, amazing. I mean, it's fun, and it's right next to the business. So it finally was like, can we get someone, and the car, here for re:Invent? Okay, who's in Abu Dhabi on Saturday? Can we get them here on Sunday night for the launch, for the robot stuff? This is, like, a top guy in Formula One who got here from Abu Dhabi by Wednesday morning. I'm just happy that they got here. >> Yeah, that was huge. Adrian, the whole CUBE team has watched your career; you've been somebody who shares his knowledge and has done some great work, so thank you so much for coming back on theCUBE, and congratulations on all your great work. Andy Jassy's coming up next; we're excited about that. Keep it right there, everybody, we'll be back with our next guest, Andy Jassy, CEO of AWS, right after this short break. [Music]
**Summary and Sentiment Analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brian Goerke | PERSON | 0.99+ |
David Flair | PERSON | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
Dave Villeneuve | PERSON | 0.99+ |
Belgium | LOCATION | 0.99+ |
Andy Jesse | PERSON | 0.99+ |
Abu Dhabi | LOCATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
Abu Dhabi | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Brian | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
125 milliseconds | QUANTITY | 0.99+ |
Andy Jesse | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Sunday night | DATE | 0.99+ |
Saturday | DATE | 0.99+ |
67 years | QUANTITY | 0.99+ |
James Gosling | PERSON | 0.99+ |
90-minute | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
JDK 8 | TITLE | 0.99+ |
Wednesday morning | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Silverstone | LOCATION | 0.99+ |
James | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Monza | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Aaron Gupta | PERSON | 0.99+ |
Rose | PERSON | 0.99+ |
53,000 people | QUANTITY | 0.99+ |
UK | LOCATION | 0.99+ |
two years ago | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
Ogle | PERSON | 0.99+ |
10 | TITLE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Christmas | EVENT | 0.98+ |
eight | QUANTITY | 0.98+ |
two years | QUANTITY | 0.98+ |
third day | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
JDK | TITLE | 0.98+ |
500 million fans | QUANTITY | 0.98+ |
12 terabyte | QUANTITY | 0.98+ |
Cockroft | PERSON | 0.98+ |
two years ago | DATE | 0.98+ |
one machine | QUANTITY | 0.98+ |
sixth year | QUANTITY | 0.97+ |
500 million fans | QUANTITY | 0.97+ |
Intel | ORGANIZATION | 0.97+ |
this week | DATE | 0.96+ |
US | LOCATION | 0.96+ |
this year | DATE | 0.96+ |
one piece | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
first year | QUANTITY | 0.96+ |
this week | DATE | 0.95+ |
AWC | ORGANIZATION | 0.95+ |
one part | QUANTITY | 0.95+ |
third thing | QUANTITY | 0.95+ |
earlier this year | DATE | 0.94+ |
Formula One | TITLE | 0.94+ |
one thing | QUANTITY | 0.94+ |
Netflix | ORGANIZATION | 0.93+ |
Formula one | ORGANIZATION | 0.93+ |
West Coast | LOCATION | 0.93+ |
CN CF | ORGANIZATION | 0.92+ |
Stanford | ORGANIZATION | 0.91+ |
second lambda | QUANTITY | 0.88+ |
Jeff fires | PERSON | 0.88+ |
next few years | DATE | 0.87+ |