Ruchir Puri, IBM and Tom Anderson, Red Hat | AnsibleFest 2022
>> Good morning live from Chicago. It's theCUBE on the floor at AnsibleFest 2022. This is day two of our wall-to-wall coverage. Lisa Martin here with John Furrier. John, we're gonna be talking next in this segment with two alumni about what Red Hat and IBM are doing to give Ansible users AI superpowers. As one of our alumni guests said just off the keynote stage, we're nearing an inflection point in AI. >> The power of AI with Ansible is really gonna be an inflection point for a long time, because Ansible does such great things. This segment's gonna explore that innovation, bringing in AI and making people more productive, and more importantly, you know, this whole low-code, no-code angle, right in the sweet spot of the skills gap. So it should be a great segment. >> Great segment. Please welcome back two of our alumni. Ruchir Puri is here, Chief Scientist at IBM Research and IBM Fellow. And Tom Anderson joins us once again, VP and General Manager at Red Hat. Gentlemen, great to have you on the program, and great to have you back. >> Thank you for having us. >> And thanks for joining us fresh off the keynote stage. Really enjoyed your keynote this morning, very exciting news. You have a project called Project Wisdom. We're talking about this inflection point in AI. Tell the audience, the viewers, what Project Wisdom is, and how wisdom differs from intelligence. >> I think Project Wisdom is really about, as I said, combining two major forces that are in many ways disrupting and really reconstructing many aspects of our society, which are software and AI together. And I truly believe it's gonna result in a sea change in how not just enterprises but society carries forward. And as I said, intelligence, at least artificial intelligence, is in some ways more mechanical, if I may say it: it's about algorithms, it's about data, it's about compute. Wisdom is all about what is truly important to bring out. It's not just about bringing out an insight or a decision; it's being able to explain that decision as well. It's almost like humans have wisdom and machines have intelligence, and that's why we called it Project Wisdom. >> Because it is about being an assistant, augmenting humans. Be there with the humans and, almost, behave and interact with them as another colleague would, versus intelligence, which, as I said, is more mechanical: data, compute, and algorithms crunched together. And we wanna bring the power of Project Wisdom and artificial intelligence to developers to, as you said, close the skills gap, to really make them more productive and have Wisdom for Ansible be their assistant. To be able to get things for them that they would find in many ways mundane, in many ways hard to find, and again, be an assistant and augment them. >> You know what's interesting, I want to get into the origin, how it all happened. IBM Research is well known for the deep tech, the big engineering, and you guys have been doing this for a long time, so congratulations. But it's interesting here at this event, even on the stage, you're starting to see the automation come in. So the question comes up: scale. So what happens? IBM buys Red Hat, you get to raid that treasure trove of AI IP. I mean, this is kind of like bringing two killer apps together.
The Ansible configuration automation layer with AI, it's just kind of a... >> Yeah, it's an amazing relationship. I was gonna say marriage, but I don't wanna say marriage. >> And I didn't mean to say raid the treasure trove, but that kind of thing. >> It's an amazing relationship where we bring all this expertise around automation, obviously around IT and application and infrastructure automation, and IBM Research, Ruchir and his team, bring this amazing capacity and experience around AI. Bringing those two things together and applying AI to automation for our teams is so incredibly fantastic, I just can't contain my enthusiasm about it. And you could feel it in the keynote this morning that Ruchir was doing, the energy in the room when folks saw that. It's just amazing. >> The geeks are gonna love it for sure. But here I wanna get into the whole evolution: computers programming computers. Remember the old days, Thinking Machines was a company generations ago that I think was sold or went out of business, but self-learning machines, computers programming computers, was actually on your slide. You kind of pieced out this next wave of AI and machine learning, starting with expert systems, which were, I'd almost say static, but okay, programs. And then machine learning, and that big debate between unsupervised and supervised, which is not really perfect. Then deep learning, which now explores some things. But now we're at another wave. Take us through the thought there, explaining what this transition looks like and why. >> I think we are really at an inflection point in the journey of AI. And I think it's fair to say data is the lifeblood of AI; without data, AI doesn't exist. But if I were to train AI with what is known as supervised learning, on data that is labeled, you are almost limited, because there are only so many people who have that expertise, and interestingly, they all have day jobs. So they're not just gonna sit around and label this for you. Some people may be available, but as Tom said, we are really trying to apply it to some very key domains which require subject matter expertise. This is not like labeling cats and dogs, which everybody across the board can do; the community is very large, but still the skills to go around are not that many. >> And I truly believe, to apply AI to the world of enterprise information technology automation, you have to have unsupervised learning, and that's the only way to scale. And these two trends, information technology percolating across every enterprise, and unsupervised learning, which is learning on very large amounts of data with, of course, very large compute and some very powerful algorithms like transformer architectures and others, which have been disrupting the domain of natural language as well, are coming together in what I described as foundation models. And anybody who plays with them will be blown away. Literally blown away. >> And you call that self-supervision at scale, which is kind of the foundation. So I have to ask you, because this comes up a lot with cloud and cloud scale: everyone touts the horizontally scalable cloud, but vertically specialized applications are where domain expertise and data play. So the better the data, the better the self-supervision, the better the learning. But if it's horizontally scalable, there's a lot to learn.
So how do you create that data ops where the models are primed to maximize what's addressable, but also what's in the domain? You've gotta have that kind of diversity. Can you share your thoughts on that? >> Absolutely. So in the domain of foundation models, there are two main stages, I would say. One is what I'll describe as pre-training, where the machine in this particular case becomes knowledgeable about the domain of code in general. It knows the syntax of Python, JavaScript, Go, C, Java and so on, and also YAML, which one would argue is the language of information technology. And once you get to that level, it's almost like having a developer who knows all of this but may not be an expert at Ansible just yet. He or she can become an expert at Ansible but is not there yet. That's what I'll call background knowledge. And also, in the case of foundation models, they are very adept at natural language as well, so they can connect natural language to code, but they are not yet expert in the domain of Ansible. >> The second stage of learning is called fine-tuning, which is about this data ops, where I take data, which is the SME data in this particular case, and it's curated. So this is not just generic data you pick off GitHub, where you don't know what exists out there. This is data which is governed, which we know is of high quality as well. Think of it as specializing the generic, pre-trained AI with that data. And those two stages, including the governance of the data that goes into it, result in this really breakthrough technology that we've been calling Project Wisdom. Our first application is Ansible, but just watch that area, there are many more to come. And I'm really excited about this partnership with Red Hat, because across IBM and IBM Research, if there is one place where we can find an excited, open source, open developer community, it is Red Hat. >> Yeah. >> Tom, talk about the role of open source in Project Wisdom and the involvement of the community, and maybe, Ruchir, any feedback that you've gotten since coming off stage? I'm sure you were mobbed. >> Yeah, so for us, it's called Project Wisdom, not Product Wisdom, right? And I wanna just emphasize that it is a project, and for us that is a key word in the upstream community: this is where we're inviting the community to jump on board with us and bring their expertise. All these people that are here will start to participate. They're excited about it. They'll bring their expertise and experience, and that fine-tuning of the model will just get better and better. So we're really excited about introducing this now and involving the community. Everything that Red Hat does is around the community, and this is no different. And so we're really excited about Project Wisdom. >> That's interesting, the project piece, because if you look at the innovation strategy before where we are now, go back to say 15 years ago, it was all about standards; it had to have standards bodies. You could still innovate and differentiate, but with open source and community it's a blending of research and practitioners. I think that to me is the big story here: what you guys are demonstrating is the combination of research and practitioners in the project.
Yes. So how does this play out? Because this is kind of how things are gonna get done in the cloud. Amazon's not gonna just standardize their stack at the higher-level services, nor is Azure; they might get some plumbing commonalities below. But for Project Wisdom to be successful, it doesn't need to have standards, if I get this right, if I'm on point here. What do you guys think about that? React to that. >> So I definitely think there is standardization in terms of what we would call the MLOps pipeline, for models to be deployed, managed and operated. Models are like any other code: there's standardization on the DevOps pipeline, and there's standardization on the machine learning pipeline. And these models will be deployed in the cloud because they need to scale; the only way to scale to thousands of users is through the cloud. And there are standard pipelines that we are working on and architecting together with the Red Hat community, leveraging open source packages, really to help scale out the AI models of Wisdom together. And another point I wanted to pick up on, just to what Tom said: I've been in the area of productizing AI for long now, having experience with Watson as well. The only scenario where I've seen AI being successful is where it meets the criteria of what I describe as the flywheel of AI. >> What do I mean by the flywheel of AI? It cannot be that some research people build a model; it may be wowing, but you roll it out and there's no feedback. The more people use these models, the more they give you feedback, the better it gets, because it learns what is right and what is not right. It will never be right the first time. The data it is trained on is a depiction of reality; it is not reality itself. Reality is a constantly moving target, and the only way to make AI successful is to close that loop with the community. And that's why I wanted to re-emphasize the point on why community is that important. >> And what's interesting, Tom, is this is the difference between old-school standards bodies and communities, because developers are very efficient in their feedback. They jump to patterns that serve their needs, whether it's self-service or whatever. You can kind of see what's going on. It's either working or not. >> Yeah, we get immediate feedback from the community, and we know real fast when something isn't working and when something is working. There are no problems with the flow of data between the members of the community and the developers themselves. So it's going to be fantastic. The energy around Project Wisdom already... we're gonna go down to the Project Wisdom breakout session, and I bet you the room will be overflowing. >> How do people get involved, real quick? Take a minute to explain how I would get involved. I'm a community member, I'm watching this video, I'm intrigued, this has got me enthusiastic. How do I get engaged with this opportunity? >> So first of all, you go to redhat.com/project wisdom and you register your interest to participate. We're gonna start growing this process, bringing people in, getting ready to make the service available for people to start using and experimenting with, and start getting their feedback. So this is the beginning of a journey.
This isn't the midpoint of a journey, this is the beginning. Even though the work has been going on for a year, this is the beginning of the community journey now. And so we're gonna start working together through channels like Discord and whatnot to be able to exchange information and bring people in. >> What are some of the key use cases, maybe starting with you, Ruchir, maybe dream use cases, that you think the community will help to really uncover as we're looking at Project Wisdom helping in this transformation of AI? >> So if I focus on, let's say, Ansible itself, there are much wider use cases, but on Ansible itself: I've been working on AI for long, but I had not realized the excitement and the power of the Ansible community itself. It's very large and very bottoms-up, which I love, actually. But as I went to a lot of the CTOs and CIOs of our customers as well, the use cases became clear: I've got a thousand Ansible developers, or IT or automation experts; they write code all the time; I don't know what all of this code is about. So the system administrators and managers are trying to figure out how to organize all of this together. Think of it as Google for finding all of this automation code and automation content. >> And I'm very excited about not just the use cases that we demonstrated today, that is the beginning of the journey, but being able to help enterprises find the right code through natural language interfaces, generate the code, and help developers debug their code as well. Giving them predictive insights: this may happen, just watch out for it when you deploy this; something like that happened before, so watch out for it. So I'm excited about the entire life cycle of IT automation, not just at build time, but also at deployment time and at management time. This is just the start of a journey, but many exciting use cases abound for Ansible and beyond. >> It's gonna be great to watch this as it unfolds, having obviously just announced this today. We thank you both so much for joining us on the program, talking about Project Wisdom and sharing how the community can get involved. So you're gonna have to come back next year; we're gonna have to talk about what's going on, because I imagine with the excitement and the volume of the community, this is just the tip of the iceberg. >> Absolutely. This is exactly what we're excited about. >> Excellent. And you should be. Congratulations. Thanks again for joining us, we really appreciate your insights. >> Thank you. >> Thank you for having us. >> For our guests and John Furrier, I'm Lisa Martin and you're watching theCUBE live from Chicago at AnsibleFest 2022. This is day two of wall-to-wall coverage on theCUBE. Stick around, our next guest joins us in just a minute.
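As a technical aside on the two-stage approach Ruchir described above (pre-training on a broad code corpus, then fine-tuning on curated, governed Ansible content) and on the natural-language-to-code use case, here are two minimal sketches using the Hugging Face transformers and datasets libraries. They are not Project Wisdom's actual pipeline or interface; the base model name, the ansible_corpus.jsonl file, the output directory, the prompt format, and all hyperparameters are placeholder assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Stage 1 is assumed already done: a causal language model pre-trained on many
# programming languages (Python, JavaScript, Go, C, Java, YAML, ...).
BASE_MODEL = "my-org/code-base-model"  # placeholder name, not a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Stage 2: fine-tune on a small, curated, governed corpus of Ansible content,
# e.g. JSON lines like {"text": "# install and start nginx\n- name: Install nginx\n  ..."}
dataset = load_dataset("json", data_files="ansible_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="wisdom-sketch",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

And a sketch of the generation side, continuing a natural-language request into an Ansible task using the checkpoint produced above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "wisdom-sketch" is the placeholder output directory from the training sketch above.
tokenizer = AutoTokenizer.from_pretrained("wisdom-sketch")
model = AutoModelForCausalLM.from_pretrained("wisdom-sketch")

# A natural-language request expressed as a comment, which the model continues as a task.
prompt = "# install nginx and make sure the service is running\n- name:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```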
SUMMARY :
Lisa Martin and John Furrier host theCUBE live from AnsibleFest 2022 in Chicago, where Ruchir Puri, Chief Scientist of IBM Research, and Tom Anderson, VP and General Manager at Red Hat, discuss Project Wisdom, a joint effort to bring AI assistance to Ansible users. Puri explains why foundation models and unsupervised learning mark an inflection point for AI, describes the two stages of pre-training and fine-tuning on curated, governed data, and argues that AI only succeeds when a feedback flywheel with the community closes the loop. Anderson emphasizes that Wisdom is a project, not a product, and invites the community to register interest and participate. Use cases span finding, generating, and debugging Ansible content through natural language, plus predictive insights across the automation life cycle.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tom | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Richard | PERSON | 0.99+ |
Tom Anderson | PERSON | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Chicago | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Perry | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Richie | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Ruchir Puri | PERSON | 0.99+ |
two alumni | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two stages | QUANTITY | 0.99+ |
second stage | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
two things | QUANTITY | 0.99+ |
GitHub | ORGANIZATION | 0.99+ |
first application | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
ORGANIZATION | 0.98+ | |
both | QUANTITY | 0.98+ |
Discord | ORGANIZATION | 0.97+ |
15 years ago | DATE | 0.97+ |
AnsibleFest | EVENT | 0.97+ |
thousand | QUANTITY | 0.97+ |
red hat.com/project | OTHER | 0.96+ |
One | QUANTITY | 0.95+ |
theCUBE Live | TITLE | 0.93+ |
Ansible Fest 22 | EVENT | 0.93+ |
first time | QUANTITY | 0.93+ |
Project Wisdom | ORGANIZATION | 0.92+ |
two killer apps | QUANTITY | 0.92+ |
two major forces | QUANTITY | 0.92+ |
users | QUANTITY | 0.9+ |
IBM Research | ORGANIZATION | 0.9+ |
DevOps | TITLE | 0.89+ |
Azure | TITLE | 0.85+ |
Project Wisdom | TITLE | 0.85+ |
this morning | DATE | 0.85+ |
YAML | TITLE | 0.82+ |
Project Wisdom | ORGANIZATION | 0.81+ |
a year | QUANTITY | 0.78+ |
AnsibleFest | EVENT | 0.75+ |
two main stages | QUANTITY | 0.74+ |
wave | EVENT | 0.72+ |
day | QUANTITY | 0.69+ |
first | QUANTITY | 0.67+ |
Project | ORGANIZATION | 0.66+ |
Wisdom | TITLE | 0.61+ |
AWS Heroes Panel | Open Cloud Innovations
(upbeat music) >> Hello, and welcome back to AWS Startup Showcase, I'm John Furrier, your host. This is the Hero panel, the AWS Heroes. These are folks that have a lot of experience in Open Source, having fun building great projects and commercializing the value and best practices of Open Source innovation. We've got some great guests here. Liz Rice, Chief Open Source Officer, Isovalent. CUBE alumni, great to see you. Brian LeRoux, who is the Co-founder and CTO of begin.com. Erica Windisch who's an Architect for Developer Experience. AWS Hero, also CUBE alumni. Casey Lee, CTO Gaggle. Doing some great stuff in ed tech. Great collection of experts and experienced folks doing some fun stuff, welcome to this conversation this CUBE panel. >> Hi. >> Thanks for having us. >> Hello. >> Let's go down the line. >> I don't normally do this, but since we're remote and we have such great guests, go down the line and talk about why Open Source is important to you guys. What projects are you currently working on? And what's the coolest thing going on there? Liz we'll start with you. >> Okay, so I am very involved in the world of Cloud Native. I'm the chair of the technical oversight committee for the Cloud Native Computing Foundation. So that means I get to see a lot of what's going on across a very broad range of Cloud Native projects. More specifically, Isovalent. I focus on Cilium, which is it's based on a technology called EBPF. That is to me, probably the most exciting technology right now. And then finally, I'm also involved in an organization called OpenUK, which is really pushing for more use of open technologies here in the United Kingdom. So spread around lots of different projects. And I'm in a really fortunate position, I think, to see what's happening with lots of projects and also the commercialization of lots of projects. >> Awesome, Brian what project are you working on? >> Working project these days called Architect. It's a Open Source project built on top of AWSM. It adds a lot of sugar and terseness to the SM experience and just makes it a lot easier to work with and get started. AWS can be a little bit intimidating to people at times. And the Open Source community is stepping up to make some of that bond ramp a little bit easier. And I'm also an Apache member. And so I keep a hairy eyeball on what's going on in that reality all the time. And I've been doing this open-source thing for quite a while, and yeah, I love it. It's a great thing. It's real science. We get to verify each other's work and we get to expand and build on human knowledge. So that's a huge honor to just even be able to do that and I feel stoked to be here so thanks for having me. >> Awesome, yeah, and totally great. Erica, what's your current situation going on here? What's happening? >> Sure, so I am currently working on developer experience of a number of Open Source STKS and CLI components from my current employer. And previously, recently I left New Relic where I was working on integrating with OpenTelemetry, as well as a number of other things. Before that I was a maintainer of Docker and of OpenStack. So I've been in this game for a while as well. And I tend to just put my fingers in a lot of little pies anywhere from DVD players 20 years ago to a lot of this open telemetry and monitoring and various STKs and developer tools is where like Docker and OpenStack and the STKs that I work on now, all very much focusing on developer as the user. >> Yeah, you're always on the wave, Erica great stuff. 
Casey, what's going on? Do you got some great ed techs happening? What's happening with you? >> Yeah, sure. The primary Open Source project that I'm contributing to right now is ACT. This is a tool I created a couple of years back when GitHub Actions first came out, and my motivation there was I'm just impatient. And that whole commit, push, wait time where you're testing out your pipelines is painful. And so I wanted to build a tool that allowed developers to test out their GitHub Actions workflows locally. And so this tool uses Docker containers to emulate, to get up action environment and gives you fast feedback on those workflows that you're building. Lot of innovation happening at GitHub. And so we're just trying to keep up and continue to replicate those new features functionalities in the local runner. And the biggest challenge I've had with this project is just keeping up with the community. We just passed 20,000 stars, and it'd be it's a normal week to get like 10 PRs. So super excited to announce just yesterday, actually I invited four of the most active contributors to help me with maintaining the project. And so this is like a big deal for me, letting the project go and bringing other people in to help lead it. So, yeah, huge shout out to those folks that have been helping with driving that project. So looking forward to what's next for it. >> Great, we'll make sure the SiliconANGLE riders catch that quote there. Great call out. Let's start, Brian, you made me realize when you mentioned Apache and then you've been watching all the stuff going on, it brings up the question of the evolution of Open Source, and the commercialization trends have been very interesting these days. You're seeing CloudScale really impact also with the growth of code. And Liz, if you remember, the Linux Foundation keeps making projections and they keep blowing past them every year on more and more code and more and more entrance coming in, not just individuals, corporations. So you starting to see Netflix donates something, you got Lyft donate some stuff, becomes a project company forms around it. There's a lot of entrepreneurial activity that's creating this new abstraction layers, new platforms, not just tools. So you start to see a new kickup trajectory with Open Source. You guys want to comment on this because this is going to impact how fast the enterprise will see value here. >> I think a really great example of that is a project called Backstage that's just come out of Spotify. And it's going through the incubation process at the CNCF. And that's why it's front of mind for me right now, 'cause I've been working on the due diligence for that. And the reason why I thought it was interesting in relation to your question is it's spun out of Spotify. It's fully Open Source. They have a ton of different enterprises using it as this developer portal, but they're starting to see some startups emerging offering like a hosted managed version of Backstage or offering services around Backstage or offering commercial plugins into Backstage. And I think it's really fascinating to see those ecosystems building up around a project and different ways that people can. I'm a big believer. You cannot sell the Open Source code, but you can sell other things that create value around Open Source projects. So that's really exciting to see. >> Great point. Anyone else want to weigh in and react to that? Because it's the new model. It's not the old way. I mean, I remember when I was in college, we had the Pirate software. 
Open Source wasn't around, so you had to deal under the table. Now it's free. But the old way was you had to convince the enterprise that it was hardened, you built the community, and the community managed the quality of the code. And then you had to build the company to make sure they could support it. Now the companies are actually involved in it, right? And then new startups are forming faster, and the proof points are shorter and highly accelerated. I mean, it's a whole new- >> It's a Cambrian explosion, and it's great. It's one of those things that's challenging for new developers, because they come in and they're like, "Whoa, what is all this stuff that I'm supposed to figure out?" And there's no right answer and there's no wrong answer, there's just tons of it. And I think there's a desire for us to have one well-known, well-trodden happy path, but honestly we're a lot better with a more diverse community, with lots of options, with lots of ways to approach these problems. And I think it's just great. A challenge that we have with all these options, this Cambrian explosion of projects and all these competing ideas, is sustainability; right now it's a bit of a tricky question to answer. We know that there's a commercialization aspect that helps us fund these projects, but how we compose the open versus the commercial source is still a bit of a tricky question and a tough one for a lot of folks. >> Erica, would you chime in on that for a second? I want to get your angle on that: all this experience and all this code, and I'm a new person, I'm an existing person, do I get like a blue check mark and get verified? I mean, these are questions, like, well, how do you navigate? >> Yeah, I think this has been something happening for a while. I mean, back in the early OpenStack days, 2010 for instance, Rackspace open-sourcing OpenStack with Anso Labs and so forth, and then having all these companies forming and creating startups around this. I started at a company called Cloudscaling back in late 2010, and we had some competitors such as Piston and so forth, where a lot of the Anso Labs people went. But then the real winners from OpenStack ended up being the enterprises that jumped in. We had Red Hat in particular, as well as HP and IBM, jumping in and investing in OpenStack, and really proving out, not that it was the first time, but this is when we started seeing billions of dollars pouring into Open Source projects and Open Source foundations, such as the OpenStack Foundation, which preceded a lot of the things that we now see with the Linux Foundation, which was created a little bit later. And at the same time, I'm also reflecting a little bit on what Brian said, because there are projects that don't get funded, that don't get the same attention, but they're also getting used quite significantly. Things like Log4j really bring this to the spotlight: projects that are used everywhere by everything, with significant outsized impacts on the industry, that are not getting funded, that aren't flashy enough, that aren't exciting enough because it's just logging, but a vulnerability in one brings everything and everybody down and has possibly billions of dollars of impact on our industry, because nobody wanted to fund this project.
>> I think that brings up the commercialization point, about maybe bringing a venture capital model in, saying, "Hey, that boring little logging thing could be a key ingredient for, say, solving some observability problems, so let's put some cash in." We'd never seen that before. Now you're starting to see that kind of really smart investment thesis going into Open Source projects. I mean, Prometheus, Grafana, these are projects that turned into companies. This is turning up companies. >> A decade ago, there was no money in dev tools. I think that's been fully debunked now. It used to be a concept that the venture community believed, but there's just too much evidence to the contrary: companies like HashiCorp, Datadog, the list goes on and on. I think the challenge for the Open Source (indistinct) comes back to foundations and working (indistinct) these developers make this code safe and secure. >> Casey, what's your reaction to all of this? So a project has gained some traction, got some momentum, there's a lot of mission-critical, I won't say white spaces, but opportunities in the big cloud game happening. And there's a lot of, I won't say too much entrepreneurial, but there's a lot of community action happening that's pre-commercialization and that's getting traction. How does this all develop naturally and then vector in quickly when it hits? >> Yeah, I want to go back to the Log4j topic real quick. I think that it's a great example of an area that we need to do better at. And there was a cool article that Rob Pike wrote describing how to quantify criticality; it was an article on how to use metrics to determine how valuable, how important a piece of Open Source is to the community. And we really need to highlight that more. We need a way to make it more clear how important this software is, how many people depend on it, and how many people are contributing to it. Because right now we all do that ourselves: if I'm going to evaluate an Open Source project, sure, I'll look at how many stars it has and how many contributors it has, but I've got to go through and do all that work myself and come up with a conclusion. It would be really great if we had an agreed-upon method for ranking the criticality of software, but then also the risk: hey, this is used by a ton of people, but nobody's contributing to it anymore, that's a concern. And that would be a great signal to potential users on whether or not it makes sense. The Open Source Security Foundation, just getting off the ground, is doing some work in this space, and I'm really excited to see where they go with that, looking at ways to score criticality. >> Well, this brings up a good point. While we've got everyone here, let's take a minute and plug a project you think that's not getting the visibility it needs. Let's go through each of you, point out a project that you think people should be looking at and talking about, that might get some free visibility here. Anyone want to highlight projects they think should be focused on more, or that need a little bit of love?
>> I think, I mean, particularly if we're talking about these sort of vulnerability issues, there's a ton of work going on, like in the Secure Software Foundation, other foundations, I think there's work going on in Apache somewhere as well around the bill of material, the software bill of materials, the Secure Software supply chain security, even enumerating your dependencies is not trivial today. So I think there's going to be a ton of people doing really good work on that, as well as the criticality aspect. It's all like that. There's a really great xkcd cartoon with your software project and some really big monolithic lumps. And then, this tiny little piece in a very important point that's maintained by somebody in his bedroom in Montana or something and if you called it out. >> Yeah, you just opened where the next lightening and a bottle comes from. And this is I think the beauty of Open Source is that you get a little collaboration, you get three feet in a cloud of dust going and you get some momentum, and if it's relevant, it rises to the top. I think that's the collective intelligence of Open Source. The question I want to ask that the panel here is when you go into an enterprise, and now that the game is changing with a much more collaborative and involved, what's the story if they say, hey, what's in it for me, how do I manage the Open Source? What's the current best practice? Because there's no doubt I can't ignore it. It's in everything we do. How do I organize around it? How do I build around it to be more efficient and more productive and reduce the risk on vulnerabilities to managing staff, making sure the right teams in place, the right agility and all those things? >> You called it, they got to get skin in the game. They need to be active and involved and donating to a sustainable Open Source project is a great way to start. But if you really want to be active, then you should be committing. You should have a goal for your organization to be contributing back to that project. Maybe not committing code, it could be committing resources into the darks or in the tests, or even tweeting about an Open Source project is contributing to it. And I think a lot of these enterprises could benefit a lot from getting more active with the Open Source Foundations that are out there. >> Liz, you've been actively involved. I know we've talked personally when the CNCF started, which had a great commercial uptake from companies. What do you think the current state-of-the-art kind of equation is has it changed a little bit? Or is it the game still the same? >> Yeah, and in the early days of the CNCF, it was very much dominated by vendors behind the project. And now we're seeing more and more membership from end-user companies, the kind of enterprises that are building their businesses on Cloud Native, but their business is not in itself. That's not there. The infrastructure is not their business. And I think seeing those companies, putting money in, putting time in, as Brian says contributing resources quite often, there's enough money, but finding the talent to do the work and finding people who are prepared to actually chop the wood and carry the water, >> Exactly. >> that it's hard. >> And if enterprises can find peoples to spend time on Open Source projects, help with those chores, it's hugely valuable. And it's one of those the rising tide floats all the boats. We can raise security, we can reduce the amount of dependency on maintain projects collectively. 
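Liz's point that "even enumerating your dependencies is not trivial" can be illustrated with a minimal sketch: listing every installed distribution in a Python environment with its version and declared license. Real SBOM tooling (SPDX or CycloneDX generators) goes much further; this only shows the basic inventory step the panel is describing.

```python
from importlib import metadata

def list_dependencies():
    """Return (name, version, license) for every distribution in this environment."""
    rows = []
    for dist in metadata.distributions():
        meta = dist.metadata
        rows.append((meta["Name"], dist.version, meta.get("License", "UNKNOWN") or "UNKNOWN"))
    # Sort case-insensitively, guarding against distributions with missing metadata.
    return sorted(rows, key=lambda row: (row[0] or "").lower())

if __name__ == "__main__":
    for name, version, license_name in list_dependencies():
        print(f"{name}=={version}  ({license_name})")
```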
>> I think the business models there, I think one of the things I'll react to and then get your guys' comments is remember which CubeCon it was, it was one of the early ones. And I remember seeing Apple having a booth, but nobody was manning. It was just an Apple booth. They weren't doing anything, but they were recruiting. And I think you saw the transition of a business model where the worry about a big vendor taking over a project and having undue influence over it goes away because I think this idea of participation is also talent, but also committing that talent back into the communities as a model, as a business model, like, okay, hire some great people, but listen, don't screw up the Open Source piece of it 'cause that's a critical. >> Also hire a channel, right? They can use those contributions to source that talent and build the reputation in the communities that they depend on. And so there's really a lot of benefit to the larger organizations that can do this. They'll have a huge pipeline of really qualified engineers right out the gate without having to resort to cheesy whiteboard interviews, which is pretty great. >> Yeah, I agree with a lot of this. One of my concerns is that a lot of these corporations tend to focus very narrowly on certain projects, which they feel that they depend greatly, they'll invest in OpenStack, they'll invest in Docker, they'll invest in some of the CNCF projects. And then these other projects get ignored. Something that I've been a proponent of for a little bit for a while is observability of your dependencies. And I don't think there's quite enough projects and solutions to this. And it sounds maybe from lists, there are some projects that I don't know about, but I also know that there's some startups like Snyk and so forth that help with a little bit of this problem, but I think we need more focus on some of these edges. And I think companies need to do better, both in providing, having some sort of solution for observability of the dependencies, as well as understanding those dependencies and managing them. I've seen companies for instance, depending on software that they actively don't want to use based on a certain criteria that they already set projects, like they'll set a requirement that any project that they use has a code of conduct, but they'll then use projects that don't have codes of conduct. And if they don't have a code of conduct, then employees are prohibited from working on those projects. So you've locked yourself into a place where you're depending on software that you have instructed, your employees are not allowed to contribute to, for certain legal and other reasons. So you need to draw a line in the sand and then recognize that those projects are ones that you don't want to consume, and then not use them, and have observability around these things. >> That's a great point. I think we have 10 minutes left. I want to just shift to a topic that I think is relevant. And that is as Open Source software, software, people develop software, you see under the hood kind of software, SREs developing very quickly in the CloudScale, but also you've got your classic software developers who were writing code. So you have supply chain, software supply chain challenges. You mentioned developer experience around how to code. You have now automation in place. So you've got the development of all these things that are happening. Like I just want to write software. Some people want to get and do infrastructure as code so DevSecOps is here. 
So what does that look like going forward? How is the future of Open Source going to let the developers who just want to code move quickly, and the folks who want to tweak the infrastructure be a bit more efficient? Any views on that? >> At Gaggle, we're using AWS's CDK exclusively for our infrastructure as code. And it's a great transition for developers: instead of writing YAML or JSON, or even HCL, for their infrastructure code, now they're writing code in the language that they're used to, Python or JavaScript. What that's providing is an easier transition for developers into that infrastructure as code at Gaggle, but it's also providing an opportunity to offer reusable constructs that some devs can build on. So if we've got a very opinionated way to deploy a serverless app with a database and auto-scaling behind it and all that stuff, we can present that to a developer as a library, and they can just consume it as it is. Maybe that's as deep as they want to go, and they're happy with that. But if they want to go deeper into it, they can either use some of the lower-level constructs or create PRs to the platform team to have those constructs changed to fit their needs. So it provides a nice on-ramp for developers to use the tools and languages they're used to, and then also go deeper as they need; a sketch of that kind of construct follows at the end of this segment. >> That's awesome. Does that mean they're not full stack developers anymore, that they're half stack developers, and the rest is taken care of for them? >> I don't know either. >> We'll see. >> No, only kidding. Anyway, any other reactions to this whole "I just want to code, make it easy for me," while some people want to get down and dirty under the hood? >> So I think that for me, Docker was always a key part of this. I don't know when DevSecOps was coined exactly, but I was talking with people about it back in 2012. And when I joined Docker, it was a part of that vision for me, that Docker was applying these security principles by default for your application. Yes, everybody adopted it because of the portability and the acceleration of development, but for me it was the fact that it was limiting what you could do from a security angle by default, and then giving you these tunables so you can control it further. You asked about a project that may not get enough recognition: it's something called DockerSlim, which is designed to optimize your containers and make them smaller, but it also constrains the security footprint and will remove capabilities from the container. It will help you build security profiles for AppArmor and the Red Hat one, SELinux. >> SELinux. >> Yeah, and this is something that I think for a lot of developers is kind of outside the realm of things that they're really thinking about. So the more that we can automate those processes and make it easier out of the box for users, and when I say users, I mean developers, so that it's straightforward and automatic, while also giving them the capability of refining it and tuning it as needed, or simply choosing platforms like serverless offerings, which have these security constraints built in out of the box, sometimes maybe less tunable, but very strong by default. And I think that's a good place for us to be, where we just enforce these things and make you do things in a secure way. >> Yeah, I'm a huge fan of Kubernetes, but it's not the right hammer for every nail.
And there are absolutely tons of applications that are better served by something like Lambda, where a lot more of that security surface is taken care of for the developer. And I think we will see better tooling around security profiling and making it easier to shrink-wrap your applications; there are plenty of products out there that can help you with this in a cloud native environment. But I think for the smaller developer, let's say, or an earlier-stage company, yeah, it needs to be so much more straightforward. It really does. >> Really an interesting time. 10 years ago, when I was working at Adobe, we used to commission all these analysts to tell us how many developers there were in the market, and we thought there were about 20 million developers. If GitHub's to be believed, we think there are now around 80 million developers. So both these numbers are probably wrong, but the takeaway for me is that we've got a lot of new developers, and a lot of these new developers are really struck by a paradox of choice. And they're typically starting on the front end, and so there's a lot of movement in the stack towards the front end. We saw that at re:Invent when Amazon was really pushing Amplify, 'cause they're seeing this too. It's interesting because this is where folks start, and so a lot of the abstractions are moving in that direction, but maybe not always totally appropriately. And so finding the right balance for folks is still a work in progress. Lambda is a great example: it lets me focus totally on just business logic, I don't have to think about infrastructure pretty much at all, and if I'm newer to the industry, that makes a lot of sense to me. As use cases expand, all of a sudden reality intervenes, and it might not be appropriate for everything. And so figuring out what those edges are is still the challenge, I think. >> All right, thank you very much for coming on this CUBE Heroes panel. AWS Heroes, thanks everyone for coming, I really appreciate it, thank you. >> Thank you. >> Thank you. >> Okay. >> Thanks for having me. >> Okay, that's a wrap here. Back to the program and the awesome startups. Thanks for watching. (upbeat music)
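Here is the sketch referenced in Casey's CDK comments: a minimal, hypothetical AWS CDK (v2, Python) construct of the "opinionated serverless app plus database" variety a platform team might hand to developers as a library. Construct names, the Lambda runtime, and the asset path are assumptions for illustration, not Gaggle's actual platform code.

```python
from aws_cdk import Stack, aws_apigateway as apigw, aws_dynamodb as dynamodb, aws_lambda as _lambda
from constructs import Construct

class ServerlessApi(Construct):
    """Opinionated bundle: a Lambda handler, a DynamoDB table, and a REST API in front."""

    def __init__(self, scope: Construct, construct_id: str, *, handler_asset: str) -> None:
        super().__init__(scope, construct_id)

        table = dynamodb.Table(
            self, "Table",
            partition_key=dynamodb.Attribute(name="pk", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,  # scales on demand
        )
        handler = _lambda.Function(
            self, "Handler",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="app.handler",
            code=_lambda.Code.from_asset(handler_asset),
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(handler)
        apigw.LambdaRestApi(self, "Api", handler=handler)

class OrdersStack(Stack):
    """A developer consumes the construct as a library with a single line of intent."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        ServerlessApi(self, "Orders", handler_asset="lambda/orders")
```

A developer who needs more can drop down to the lower-level aws_lambda or aws_dynamodb constructs, or send a PR against ServerlessApi itself, which is the on-ramp-then-go-deeper pattern Casey describes.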
SUMMARY :
John Furrier hosts an AWS Heroes panel on open cloud innovation with Liz Rice (Isovalent), Brian LeRoux (Begin), Erica Windisch, and Casey Lee (Gaggle). The panelists describe the projects they maintain, from Cilium and eBPF to Architect and act, and discuss how Open Source commercialization has evolved: ecosystems forming around projects like Backstage, the funding gap exposed by Log4j, efforts to quantify project criticality, and why enterprises need to contribute talent, not just money, back to the communities they depend on. The conversation closes with developer experience: CDK-style infrastructure as code, secure-by-default tooling such as DockerSlim, and the trade-offs of serverless platforms like Lambda for a fast-growing population of new developers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Erica Windisch | PERSON | 0.99+ |
Brian LeRoux | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Liz Rice | PERSON | 0.99+ |
Brian | PERSON | 0.99+ |
Casey Lee | PERSON | 0.99+ |
Rob Pike | PERSON | 0.99+ |
Erica | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Anso Labs | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Datadog | ORGANIZATION | 0.99+ |
Montana | LOCATION | 0.99+ |
2012 | DATE | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Liz | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
Secure Software Foundation | ORGANIZATION | 0.99+ |
Casey | PERSON | 0.99+ |
GitHub | ORGANIZATION | 0.99+ |
OpenUK | ORGANIZATION | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
United Kingdom | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
Open Source Security Foundation | ORGANIZATION | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
three feet | QUANTITY | 0.99+ |
HashiCorp | ORGANIZATION | 0.99+ |
Snyk | ORGANIZATION | 0.99+ |
20,000 stars | QUANTITY | 0.99+ |
JavaScript | TITLE | 0.99+ |
Apache | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cloudscaling | ORGANIZATION | 0.99+ |
Piston | ORGANIZATION | 0.99+ |
20 years ago | DATE | 0.99+ |
Lyft | ORGANIZATION | 0.98+ |
late 2010 | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
OpenStack Foundation | ORGANIZATION | 0.98+ |
Lambda | TITLE | 0.98+ |
Gaggle | ORGANIZATION | 0.98+ |
Secure Software | ORGANIZATION | 0.98+ |
around 80 million developers | QUANTITY | 0.98+ |
CNCF | ORGANIZATION | 0.98+ |
10 years ago | DATE | 0.97+ |
four | QUANTITY | 0.97+ |
Open Source Foundations | ORGANIZATION | 0.97+ |
billions of dollars | QUANTITY | 0.97+ |
New Relic | ORGANIZATION | 0.97+ |
OpenStack | ORGANIZATION | 0.97+ |
OpenStack | TITLE | 0.96+ |
DevSecOps | TITLE | 0.96+ |
first time | QUANTITY | 0.96+ |
EBPF | ORGANIZATION | 0.96+ |
about 20 million developers | QUANTITY | 0.96+ |
Open Source Foundations | ORGANIZATION | 0.95+ |
Docker | ORGANIZATION | 0.95+ |
10 PRs | QUANTITY | 0.95+ |
today | DATE | 0.94+ |
CloudScale | TITLE | 0.94+ |
AWS Hero | ORGANIZATION | 0.94+ |
Docker | TITLE | 0.92+ |
GitHub Actions | TITLE | 0.92+ |
A decade ago | DATE | 0.92+ |
first | QUANTITY | 0.91+ |
DockerCon2021 Keynote
>>Individuals create developers, translate ideas to code, to create great applications and great applications. Touch everyone. A Docker. We know that collaboration is key to your innovation sharing ideas, working together. Launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, we're doing research to save lives during a pandemic, revolutionizing, how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to Docker con 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year at DockerCon is 100% virtual and 100% free. So as to enable as many community members as possible to join us now, 100%. Virtual is also an acknowledgement of the continuing global pandemic in particular, the ongoing tragedies in India and Brazil, the Docker community is a global one. And on behalf of all Dr. Khan attendees, we are donating $10,000 to UNICEF support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right. Then no further delay. I'd like to introduce Youi Cal and Ben, gosh, over to you and Ben >>Morning, Ben, thanks for jumping on real quick. >>Have you seen the email from Scott? The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. 
So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what you use doing what changes she had made. Compare those to mine just jumped back into my dev container UI, see that I've got both of those running side by side with my changes and news changes. Okay. So she's put Molly up there rather than mobi or somebody had the same idea. So I think in a way I can make us both happy. So if we just jumped back into what we'll do, just add Molly and Moby and here I'll save that. And what we can see is, cause I'm just working within the container rather than having to do sort of rebuild of everything or serve, or just reload my content. No, that's straight the page. So what I can then do is I can come up with my browser here. Once that's all refreshed, refresh the page once hopefully, maybe twice, we should then be able to see your refresh it or should be able to see that we get Malia mobi come up. So there we go, got Molly mobi. So what we'll do now is we'll describe that state. It sends us our image and then we'll just create one of those to share with URI or share. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing and I'm actually going to change. I think that might work for both of us. I wondered if you could take a look at it. If I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the doc dashboard, it should be able to open up the code that I've changed and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on because there'll be able to run side by side with your other brunch. You already got, >>Got it. Got it. Loading here. Well, that's great. It's Molly and movie together. I love it. I think we should ship it. >>Awesome. I guess it's chip it and get on with the rest of.com. Wasn't that cool. Thank you Joey. Thanks Ben. Everyone we'll have more of this later in the keynote. So stay tuned. Let's say earlier, we've all been challenged by this past year, whether the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies, we all been touched in some way. And yet, even to miss these tragedies last year, we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, if all in teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker hub. As another example, we all witnessed the historic landing and exploration of Mars by the perseverance Rover and its ingenuity drone. >>Now what's common in these examples, these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. 
The power of teams is why we've made development teams central to Docker's mission to build tools and content development teams love to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down to them in teams is that the path from code to cloud can be a confusing one, riddle with multiple point products, tools, and images that need to be integrated and maintained an automated pipeline in order for teams to be productive. That's why a year and a half ago we refocused Docker on helping development teams make sense of all this specifically, our goal is to provide development teams with the trusted content, the sharing capabilities and the pipeline integrations with best of breed third-party tools to help teams ship faster in short, to provide a collaborative application development platform. >>Everything a team needs to build. Sharon run create applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet and has been similar for us here at Docker. Our team had to adapt to working from home local lockdowns caused by the pandemic and other challenges. And despite all this together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the composed plugins building on these open source projects. We had powerful new capabilities to the Docker product, both free and subscription. For example, support for WSL two and apple, Silicon and Docker, desktop and vulnerability scanning audit logs and image management and Docker hub. >>And finally delivering an easy to use well-integrated development experience with best of breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercialized fees, join our Docker verified publisher program and over 200 open source projects, join our Docker sponsored open source program. As a result of these efforts, we've seen some exciting growth in the Docker community in the 12 months since last year's Docker con for example, the number of registered developers grew 80% to over 8 million. These developers created many new images increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses totaling 13 billion pulls a month. Now while the growth is exciting by Docker, we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses. For example, cancer researchers and their bioinformatics development team at the Washington university school of medicine needed a way to quickly analyze their clinical trial results and then share the models, the data and the analysis with other researchers they use Docker because it gives them the ease of use choice of pipeline tools and speed of sharing so critical to their research. And most importantly to the lives of their patients stay tuned for another powerful customer story later in the keynote from Matt fall, VP of engineering at Oracle insights. >>So with this last year behind us, what's next for Docker, but challenge you this last year of force changes in how development teams work, but we felt for years to come. 
And what we've learned in our discussions with you will have long lasting impact on our product roadmap. One of the biggest takeaways from those discussions that you and your development team want to be quicker to adapt, to changes in your environment so you can ship faster. So what is DACA doing to help with this first trusted content to own the teams that can focus their energies on what is unique to their businesses and spend as little time as possible on undifferentiated work are able to adapt more quickly and ship faster in order to do so. They need to be able to trust other components that make up their app together with our partners. >>Docker is doubling down and providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration on a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So as you even been hinted in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team. And we're enabling development team to quickly adapt to any team configuration all on prem hybrid, all work from home, helping them remain productive and focused on shipping third ecosystem integrations, those development teams that can quickly take advantage of innovations throughout the ecosystem. Instead of getting locked into a single monolithic pipeline, there'll be the ones able to deliver amps, which impact their businesses faster. >>So together with our ecosystem partners, we are investing in more integrations with best of breed tools, right? Integrated automated app pipelines. Furthermore, we'll be writing more public API APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations. Later in the keynote, I'd like to take a moment to share with Docker and our partners are doing for trusted content, providing development teams, access to content. They can trust, allows them to focus their coding efforts on what's unique and differentiated to that end Docker and our partners are bringing more and more trusted content to Docker hub Docker official images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming, languages, databases, and more. Furthermore, these are updated patch scan and certified frequently. So I said, no image is older than 30 days. >>Docker verified publisher images are published by more than 100 commercialized feeds. The image Rebos are explicitly designated verify. So the developers searching for components for their app know that the ISV is actively maintaining the image. Docker sponsored open source projects announced late last year features images for more than 200 open source communities. Docker sponsors these communities through providing free storage and networking resources and offering their community members unrestricted access repos for businesses allow businesses to update and share their apps privately within their organizations using role-based access control and user authentication. No, and finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. 
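As a quick illustration of the freshness guarantee behind Docker Official Images, you can check when any image you pull was last rebuilt; the image name and tag below are just examples, not ones from the keynote:

```bash
# Pull a Docker Official Image (example tag; use whatever your app needs)
docker pull python:3.9-slim

# Official Images are rebuilt, patched, and re-certified regularly,
# so the creation date should always be recent
docker image inspect --format '{{.Created}}' python:3.9-slim

# Official Images are the ones published under a top-level name with no
# user or org prefix, e.g. python, postgres, nginx
docker search --filter is-official=true python
```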
>>And for all these different types of content, we provide services for both development teams and ISVs, for example, vulnerability scanning and digital signing for enhanced security, search and filtering for discoverability, packaging and updating services, and analytics about how these products are being used. All this trusted content we make available to development teams to directly discover, pull, and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with JFrog late last year, and today we're very pleased to announce our partnerships with Amazon and Mirantis for providing an integrated, seamless experience for our joint customers. Lastly, the container images themselves and this end-to-end flow are built on open industry standards, which provide all teams with flexibility and choice. Trusted content enables development teams to rapidly build, as it lets them focus on their unique differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content as well as remote collaboration and ecosystem integrations later in the keynote. Now, ecosystem partners are not only integral to the Docker experience for development teams. They're also integral to a great DockerCon experience, so please join me in thanking our DockerCon sponsors and checking out their talks throughout the day. I also want to thank some others. First up, the Docker team. Like all of you, this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product. The Docker community of captains, community leaders, and contributors: with your welcoming of newcomers, enthusiasm for Docker, and open exchanges of best practices and ideas, Docker wouldn't be Docker without you. And finally, our development team customers.
>>You trust us to help you build the apps your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the 10X developer, capable of great individual feats that can transform a project. But I wonder if we as an industry have perhaps gotten this wrong by putting so much emphasis, so much weight, on the individual. As discussed at the beginning, great accomplishments, like innovative responses to COVID-19, like landing on Mars, are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is to deliver tools and content developers love to help their team succeed and become 10X teams. Thanks again for joining us. We look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks and be well.
>>Hi, I'm Dana Lawson, VP of engineering here at GitHub. And my job is to enable this rich, interconnected community of builders and makers to build even more, and hopefully have a great time doing it. In order to enable the best platform for developers, which I know is something we are all passionate about, we need to partner across the ecosystem to ensure that developers can have a great experience across GitHub and all the tools that they want to use, no matter what they are. My team works to build the tools and relationships to make that possible. I am so excited to join Scott on this virtual stage to talk about increasing developer velocity.
So let's dive in now, I know this may be hard for some of you to believe, but as a former CIS admin, some 21 years ago, working on sense spark workstations, we've come such a long way for random scripts and desperate systems that we've stitched together to this whole inclusive developer workflow experience being a CIS admin. >>Then you were just one piece of the siloed experience, but I didn't want to just push code to production. So I created scripts that did it for me. I taught myself how to code. I was the model lazy CIS admin that got dangerous and having pushed a little too far. I realized that working in production and building features is really a team sport that we had the opportunity, all of us to be customer obsessed today. As developers, we can go beyond the traditional dev ops mindset. We can really focus on adding value to the customer experience by ensuring that we have work that contributes to increasing uptime via and SLS all while being agile and productive. We get there. When we move from a pass the Baton system to now having an interconnected developer workflow that increases velocity in every part of the cycle, we get to work better and smarter. >>And honestly, in a way that is so much more enjoyable because we automate away all the mundane and manual and boring tasks. So we get to focus on what really matters shipping, the things that humans get to use and love. Docker has been a big part of enabling this transformation. 10, 20 years ago, we had Tomcat containers, which are not Docker containers. And for y'all hearing this the first time go Google it. But that was the way we built our applications. We had to segment them on the server and give them resources. Today. We have Docker containers, these little mini Oasys and Docker images. You can do it multiple times in an orchestrated manner with the power of actions enabled and Docker. It's just so incredible what you can do. And by the way, I'm showing you actions in Docker, which I hope you use because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do anybody out there can, and in this demo, I'll show you about the basic components needed to create and use a package, Docker container actions. And like I said, you won't believe how awesome the combination of Docker and actions is because you can enable your workflow to do no matter what you're trying to do in this super baby example. We're so small. You could take like 10 seconds. Like I am here creating an action due to a simple task, like pushing a message to your logs. And the cool thing is you can use it on any the bit on this one. Like I said, we're going to use push. >>You can do, uh, even to order a pizza every time you roll into production, if you wanted, but at get hub, that'd be a lot of pizzas. And the funny thing is somebody out there is actually tried this and written that action. If you haven't used Docker and actions together, check out the docs on either get hub or Docker to get you started. And a huge shout out to all those doc writers out there. I built this demo today using those instructions. And if I can do it, I know you can too, but enough yapping let's get started to save some time. And since a lot of us are Docker and get hub nerds, I've already created a repo with a Docker file. So we're going to skip that step. Next. 
I'm going to create an action's YAML file. And if you know YAML, you know Actions. The metadata defines my important log stuff to capture, the input, and my timeout parameter to pass as inputs to the Docker container. GitHub builds an image from your Dockerfile and runs the commands in a new container, using that same image. The cool thing is you can use any Docker image in any language for your actions. It doesn't matter if it's Go or whatever. In today's demo I'm going to use a shell script and an input variable to print my important log stuff to a file. And like I said, you know me, I love me some shell. So let's see this action in a workflow. When an action is in a private repo, like the one I'm demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to my super awesome action, but don't worry: in the GitHub Marketplace there are over 8,000 actions available, including the most important one, that pizza action. So go try it out. Now you can do this in a couple of ways, whether you're doing it in your preferred IDE or, for today's demo, just using the GUI. I'm going to navigate to my Actions tab as I've done here, and I'm going to select a new workflow. It will probably load some starter workflows to get you going, but I'm using the one I've copied, like I said, lazy developer that I am, and I'm going to replace it with my action.
>>That's it. So now we're going to go and commit the new file. Now, if we go over to our Actions tab, we can see the workflow in progress in my repository. I just click the Actions tab, and because we wrote the action to run on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging: the input, the timestamp, and the printed log. And we'll just wait for this to run. Hello, Mona, and boom, just like that, it runs automatically within our action. We told it to go run as soon as the file is updated, because we're doing it on push and merge. That's right, folks: in just a few minutes, I built an action that writes an entry to a log file every time I push, so I don't have to do it manually. In essence, with automation, you can be kind to your future self and save time and effort to focus on what really matters.
>>Imagine what I could do with even a little more time. Probably order all y'all pizzas. That is the power of the interconnected workflow, and it's amazing, and I hope you all go try it out. But why do we care about all of that? Just like in the demo, I took a manual task, which both takes time and is easy to forget, and automated it, so I don't have to think about it, and it's executed every time, consistently. That means less time for me to worry about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously automation drives developer productivity, but what is even more important to me is developer happiness. Tools like VS Code, Actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome, and to get into that wonderful state that we call flow. According to research by UC Irvine and Humboldt University in Germany, it takes an average of 23 minutes to enter an optimal creative state, what we call flow, or to re-enter it after a distraction, like your dog at your office door.
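For reference, the kind of Docker container action and push-triggered workflow built in the demo above would look roughly like this on disk. The file contents, input name, and log message are illustrative, not the exact files used on stage:

```bash
# action.yml: metadata for a Docker container action (illustrative names)
cat > action.yml <<'EOF'
name: 'Log important stuff'
description: 'Append a message to a log file'
inputs:
  log-message:
    description: 'Message to write to the log'
    required: true
    default: 'Hello, Mona'
runs:
  using: 'docker'
  image: 'Dockerfile'       # GitHub builds this image and runs the container
  args:
    - ${{ inputs.log-message }}
EOF

# Dockerfile: any image in any language works; a tiny shell image is enough here
cat > Dockerfile <<'EOF'
FROM alpine:3.13
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EOF

# entrypoint.sh: write the input with a timestamp to the log file
cat > entrypoint.sh <<'EOF'
#!/bin/sh -l
echo "$(date): $1" >> important.log
cat important.log
EOF

# .github/workflows/log.yml: run the action in this repo on every push
mkdir -p .github/workflows
cat > .github/workflows/log.yml <<'EOF'
name: log-on-push
on: push
jobs:
  log:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ./                 # the container action defined above
        with:
          log-message: 'Hello, Mona'
EOF
```

Because the action lives in the same repository, the workflow can reference it with a plain `uses: ./`; publishing it publicly would let any repository reference it by name instead.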
So staying in flow is so critical to developer productivity and as a developer, it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling intuitive collaboration and automation features we built in to get hub help developer, Sam flow, allowing you and your team to do so much more, to bring the benefits of automation into perspective in our annual October's report by Dr. Nicole, Forsgren. One of my buddies here at get hub, took a look at the developer productivity in the stork year. You know what we found? >>We found that public GitHub repositories that use the Automational pull requests, merge those pull requests. 1.2 times faster. And the number of pooled merged pull requests increased by 1.3 times, that is 34% more poor requests merged. And other words, automation can con can dramatically increase, but the speed and quantity of work completed in any role, just like an open source development, you'll work more efficiently with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest because you don't need to do it, make the machines by elaborate by leveraging automation in their workflows teams, minimize manual work and reclaim that time for innovation and maintain that state of flow with development and collaboration. More importantly, their work is more enjoyable because they're not wasting the time doing the things that the machines or robots can do for them. >>And I remember what I said at the beginning. Many of us want to be efficient, heck even lazy. So why would I spend my time doing something I can automate? Now you can read more about this research behind the art behind this at October set, get hub.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem we at get hub are so honored to be the home of more than 65 million developers who build software together for everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it. Check out this globe. This globe shows real data. Every speck of light you see here represents a contribution to an open source project, somewhere on earth. >>These arts reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of dev ops, SecOps and lots, or the new ops that are going to be happening. But today's development and ops teams are connected like ever before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers, automation helps us focus on what's important and to greatly accelerate innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements I'll say ever, including critical COVID-19 vaccine trials, as well as the first power flight on Mars. This past month, these breakthroughs were only possible because of the interconnected collaborative open source communities on get hub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating. So we collectively can give developers the experience. 
They deserve all of the automation and beautiful UIs that we can muster so they can continue to build the things that truly do change the world. Thank you again for having me today, DockerCon. It has been a pleasure to be here with all you nerds.
>>Hello, I'm Justin Cormack. Lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything: security, ops, on-call, data analysis, all of it being put on their plate. Software's eating the world, of course, and this all makes sense in that view, but they need help. One team I talked to shifted all their .NET apps to run on Linux from Windows, but their developers found the complexity of Dockerfiles based on Linux shell scripts really difficult. We've helped make these things easier for your teams. You want to collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers, have asked for a paved road experience. You want things to just work, with simple options to be there. But it's not just the paved road. You also want to be able to go off-road and do interesting and different things.
>>Use different components, experiment, innovate as well. We'll always offer you both those choices. At different times, different developers want different things, and it may shift from one to the other, paved road or off-road. Sometimes you want reliability and dependability, in the zone for day-to-day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you need those off-road abilities too, so you can really get under the hood and go and build something weird and wonderful and amazing that gives you new options. Docker is an independent choice. We don't own the roads. We're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as OCI, working open source with the CNCF. We want to help you get your applications from your laptops to the clouds and beyond, even into space.
>>Let's talk about the key focus areas that frame what Docker is doing going forward. These are simplicity, sharing, flexibility, trusted content, and secure supply chain. Compared to building with the underlying kernel primitives like namespaces and cgroups, the original Docker CLI and Docker Engine were just a magical experience for everyone. They really brought those innovations and put them in a world where anyone could use them. But that's not enough. We need to continue to innovate. Everyone is trying to get more done, faster, all the time, and there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the areas where we haven't done enough to make things magical, and that we're really planning around now, is that Docker images are the key parts of your application, but how do I do something with an image? Where do I attach volumes with this image? What's the API? Where's the SDK for this image? How do I find an example or docs? In an API-driven world, every bit of software should have an API and an API description. And our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from your code to the cloud, local and remote; you can use containers in this amazing and exciting way.
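There is no standard "API description" baked into images yet, but OCI image labels are one building block that exists today for attaching that kind of metadata. The label values and image name below are placeholders:

```bash
# Attach standard OCI annotations at build time (values are placeholders)
docker build \
  --label org.opencontainers.image.source="https://github.com/example/myapp" \
  --label org.opencontainers.image.documentation="https://example.com/myapp/docs" \
  --label org.opencontainers.image.description="HTTP API, listens on port 8080" \
  -t example/myapp:1.0 .

# Anyone who pulls the image can read that metadata back
docker image inspect --format '{{json .Config.Labels}}' example/myapp:1.0
```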
>>One thing I really noticed in the last year is that companies that started off remote fast have constant collaboration. They have zoom calls, apron all day terminals, shattering that always working together. Other teams are really trying to learn how to do this style because they didn't start like that. We used to walk around to other people's desks or share services on the local office network. And it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal. Let me try your container or just maybe let's collaborate on this together. Um, you know, fast collaboration on the analysts, fast iteration, fast working together, and he wants to share more. You want to share how to develop environments, not just an image. And we all work by seeing something someone else in our team is doing saying, how can I do that too? I can, I want to make that sharing really, really easy. Ben's going to talk about this more in the interest of one minute. >>We know how you're excited by apple. Silicon and gravis are not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. The M one support was the most asked for thing on our public roadmap, EFA, and we listened and share that we see really exciting possibilities, usership arm applications, all the way from desktop to production. We know that you all use different clouds and different bases have deployed to, um, you know, we work with AWS and Azure and Google and more, um, and we want to help you ship on prime as well. And we know that you use huge number of languages and the containers help build applications that use different languages for different parts of the application or for different applications, right? You can choose the best tool. You have JavaScript hat or everywhere go. And re-ask Python for data and ML, perhaps getting excited about WebAssembly after hearing about a cube con, you know, there's all sorts of things. >>So we need to make that as easier. We've been running the whole month of Python on the blog, and we're doing a month of JavaScript because we had one specific support about how do I best put this language into production of that language into production. That detail is important for you. GPS have been difficult to use. We've added GPS suppose in desktop for windows, but we know there's a lot more to do to make the, how multi architecture, multi hardware, multi accelerator world work better and also securely. Um, so there's a lot more work to do to support you in all these things you want to do. >>How do we start building a tenor has applications, but it turns out we're using existing images as components. I couldn't assist survey earlier this year, almost half of container image usage was public images rather than private images. And this is growing rapidly. Almost all software has open source components and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. And this was always the model with Docker compose. And it's a model that you're already et cetera, writing you trust Docker, official images. We know that they might go to 25% of poles on Docker hub and Docker hub provides you the widest choice and the best support that trusted content. We're talking to people about how to make this more helpful. 
We know, for example, that Ubuntu 16.04 is just showing as supported, but the image doesn't yet tell you that. We're working with Canonical to improve messaging from specific images about lifecycle and support.
>>We know that you need more images, regularly updated, free of vulnerabilities, easy to use and discover, and Donnie and Marina are going to talk about that more. This last year, the SolarWinds attack has been in the news a lot. The software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open source components. We're seeing more software supply chain attacks, with the supply chain being targeted because it's often an easier place to attack than production software. We need to be able to use this external code safely. Everyone needs to start from trusted sources, like official images. They need to scan for known vulnerabilities using Docker scan, which we built in partnership with Snyk and launched at DockerCon last year. We need to just keep updating base images and dependencies, and we're going to help you have the control and understanding about your images that you need to do this.
>>And there's more. We're also working on the Notary v2 project in the CNCF to revamp container signing, so you can tell where your software comes from. We're working on tooling to make updates easier, and to help you understand and manage all the components you're using. Security is a growing concern for all of us. It's really important, and we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products, without deep partnerships with our community. The cloud providers are where most of you ship your applications to production, and simple routes that take your work and deploy it easily, reliably, and securely are really important: just get into production simply, easily, and securely. We've done a bunch of work on that, but we know there's more to do.
>>The CNCF and the open source cloud native community are an amazing ecosystem of creators and lovely people, creating an amazingly strong community and supporting a huge amount of innovation. It has its roots in the container ecosystem, and its dreams go beyond that. Much of the innovation has been focused around the operator experience so far, but developer experience is a growing concern in that community as well, and we're really excited to work on that. We also use Kubernetes, as we know you do, and we know that you want it to be easier to use in your environment. We just shifted Docker Hub to run fully on Kubernetes, and we're also using many of the other projects, like Argo. We're spending a lot of time working with Microsoft and Amazon right now on getting Notary v2 ready to ship; that's a really detailed piece of collaboration we've been working on for a long time. It's really important for our community, as are the security of containers and getting content to you. Working together makes us stronger. Our community is made up of all of you, and it's always amazing to be reminded of that. It's a huge open source community that we are proud to work with, and an amazing amount of innovation that you're all creating and that we get to work on and share with you as well. Thank you very much. And thank you for being here.
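A minimal sketch of the scan-and-update loop described here, assuming the Snyk-powered `docker scan` plugin that shipped with Docker Desktop at the time; the image names are placeholders:

```bash
# Scan a locally built image for known vulnerabilities
docker scan example/myapp:1.0

# Passing the Dockerfile lets the scanner also suggest better base images
docker scan --file Dockerfile example/myapp:1.0

# Keep the base image fresh: pull the latest patched tag and rebuild
docker pull node:16-alpine
docker build -t example/myapp:1.1 .
```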
>>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster and turn your application delivery into something that makes you a 10 X team. What we're hearing from you, the developers using Docker everyday fits across three common themes that we hear consistently over and over. We hear that your time is super important. It's critical, and you want to move faster. You want your tools to get out of your way, and instead to enable you to accelerate and focus on the things you want to be doing. And part of that is that finding great content, great application components that you can incorporate into your apps to move faster is really hard. It's hard to discover. It's hard to find high quality content that you can trust that, you know, passes your test and your configuration needs. >>And it's hard to create good content as well. And you're looking for more safety, more guardrails to help guide you along that way so that you can focus on creating value for your company. Secondly, you're telling us that it's a really far to collaborate effectively with your team and you want to do more, to work more effectively together to help your tools become more and more seamless to help you stay in sync, both with yourself across all of your development environments, as well as with your teammates so that you can more effectively collaborate together. Review each other's work, maintain things and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CGI pipeline, or the cloud for production, and you want that micro service to provide that consistent experience everywhere you go so that you have similar tools, similar environments, and you don't need to worry about things getting in your way, but instead things make it easy for you to focus on what you wanna do and what Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. >>And this collaborative application development platform consists of multiple different pieces. I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need from the development environment, to the container images, to the collaboration services, to the pipelines and integrations that enable you to focus on making your applications amazing and changing the world. If we start zooming on a one of those aspects, collaboration we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates. Look, then they can easily get up and running with the same applications, the same tooling, the same version of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially published something before they can effectively share it with others to work on it, to solve this. We're thrilled today to announce Docker, dev environments, Docker, dev environments, transform how your team collaborates. They make creating, sharing standardized development environments. 
As simple as a Docker poll, they make it easy to review your colleagues work without affecting your own work. And they increase the reproducibility of your own work and decreased production issues in doing so because you've got consistent environments all the way through. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more detail on Docker dev environments. >>Hi, I'm Ben. I work as a principal program manager at DACA. One of the areas that doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner loop where the inner loop is a better development, where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try and actually ship them out to production. Most amount of us build this flow and get there still leaves a lot of challenges. People need to jump between branches to look at each other's work. Independence. Dependencies can be different when you're doing that and doing this in this new hybrid wall of work. Isn't any easier either the ability to just save someone, Hey, come and check this out. It's become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason that development is hard when you're remote, is that looking at changes and what's going on requires more than just code requires all the dependencies and everything you've got set up and that complete context of your development environment, to understand what you're doing and solving this in a remote first world is hard. We wanted to look at how we could make this better. Let's do that in a way that let you keep working the way you do today. Didn't want you to have to use a browser. We didn't want you to have to use a new idea. And we wanted to do this in a way that was application centric. We wanted to let you work with all the rest of the application already using C for all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about docket developer environments, dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, working inside a container, then able to share and collaborate more than just the code. >>We want it to enable you to share your whole modern development environment, your whole setup from DACA, with your team on any operating system, we'll be launching a limited beta of dev environments in the coming month. And a GA dev environments will be ID agnostic and supporting composts. This means you'll be able to use an extend your existing composed files to create your own development environment in whatever idea, working in dev environments designed to be local. First, they work with Docker desktop and say your existing ID, and let you share that whole inner loop, that whole development context, all of your teammates in just one collect. This means if you want to get feedback on the working progress change or the PR it's as simple as opening another idea instance, and looking at what your team is working on because we're using compose. You can just extend your existing oppose file when you're already working with, to actually create this whole application and have it all working in the context of the rest of the services. 
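Conceptually, because dev environments build on Compose, the setup looks something like layering a development flavor of a service on top of an existing Compose file. The exact schema Docker Desktop ships may differ, and the service names here are made up:

```bash
# docker-compose.dev.yml: an illustrative override that swaps the frontend
# for a dev container with the source mounted for live editing
cat > docker-compose.dev.yml <<'EOF'
services:
  frontend:
    build:
      context: ./frontend
      target: dev          # assumes the Dockerfile has a dev stage
    volumes:
      - ./frontend:/app    # edit the code from your IDE, run it in the container
    command: npm run dev
EOF

# Bring up the whole application with the dev override layered on top,
# so the backend and database services come up unchanged alongside it
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
```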
>>So it's actually the whole environment you're working with module one service that doesn't really understand what it's doing alone. And with that, let's jump into a quick demo. So you can see here, two dev environments up and running. First one here is the same container dev environment. So if I want to go into that, let's see what's going on in the various code button here. If that one open, I can get straight into my application to start making changes inside that dev container. And I've got all my dependencies in here, so I can just run that straight in that second application I have here is one that's opened up in compose, and I can see that I've also got my backend, my front end and my database. So I've got all my services running here. So if I want, I can open one or more of these in a dev environment, meaning that that container has the context that dev environment has the context of the whole application. >>So I can get back into and connect to all the other services that I need to test this application properly, all of them, one unit. And then when I've made my changes and I'm ready to share, I can hit my share button type in the refund them on to share that too. And then give that image to someone to get going, pick that up and just start working with that code and all my dependencies, simple as putting an image, looking ahead, we're going to be expanding development environments, more of your dependencies for the whole developer worst space. We want to look at backing up and letting you share your volumes to make data science and database setups more repeatable and going. I'm still all of this under a single workspace for your team containing images, your dev environments, your volumes, and more we've really want to allow you to create a fully portable Linux development environment. >>So everyone you're working with on any operating system, as I said, our MVP we're coming next month. And that was for vs code using their dev container primitive and more support for other ideas. We'll follow to find out more about what's happening and what's coming up next in the future of this. And to actually get a bit of a deeper dive in the experience. Can we check out the talk I'm doing with Georgie and girl later on today? Thank you, Ben, amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications while the dev environment is like the workbench around what you're building. The application itself has all the different components, libraries, and frameworks, and other code that make up the application itself. And we hear developers saying all the time things like, how do they know if their images are good? >>How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Docker files and how do they keep their images secure? And up-to-date on every one of those ties into how do I create more trust? How do I know that I'm building high quality applications to enable you to do this even more effectively than today? We are pleased to announce the DACA verified polisher program. This broadens trusted content by extending beyond Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. 
And finally it simplifies your discovery of the best building blocks by making it easy for you to find things that you know, you can trust so that you can incorporate them into your applications and move on and on the right. You can see some examples of the publishers that are involved in Docker, official images and our Docker verified publisher program. Now I'm pleased to introduce you to marina. Kubicki our senior product manager who will walk you through more about what we're doing to create a better experience for you around trust. >>Thank you, Dani, >>Mario Andretti, who is a famous Italian sports car driver. One said that if everything feels under control, you're just not driving. You're not driving fast enough. Maya Andretti is not a software developer and a software developers. We know that no matter how fast we need to go in order to drive the innovation that we're working on, we can never allow our applications to spin out of control and a Docker. As we continue talking to our, to the developers, what we're realizing is that in order to reach that speed, the developers are the, the, the development community is looking for the building blocks and the tools that will, they will enable them to drive at the speed that they need to go and have the trust in those building blocks. And in those tools that they will be able to maintain control over their applications. So as we think about some of the things that we can do to, to address those concerns, uh, we're realizing that we can pursue them in a number of different venues, including creating reliable content, including creating partnerships that expands the options for the reliable content. >>Um, in order to, in a we're looking at creating integrations, no link security tools, talk about the reliable content. The first thing that comes to mind are the Docker official images, which is a program that we launched several years ago. And this is a set of curated, actively maintained, open source images that, uh, include, uh, operating systems and databases and programming languages. And it would become immensely popular for, for, for creating the base layers of, of the images of, of the different images, images, and applications. And would we realizing that, uh, many developers are, instead of creating something from scratch, basically start with one of the official images for their basis, and then build on top of that. And this program has become so popular that it now makes up a quarter of all of the, uh, Docker poles, which essentially ends up being several billion pulse every single month. >>As we look beyond what we can do for the open source. Uh, we're very ability on the open source, uh, spectrum. We are very excited to announce that we're launching the Docker verified publishers program, which is continuing providing the trust around the content, but now working with, uh, some of the industry leaders, uh, in multiple, in multiple verticals across the entire technology technical spec, it costs entire, uh, high tech in order to provide you with more options of the images that you can use for building your applications. And it still comes back to trust that when you are searching for content in Docker hub, and you see the verified publisher badge, you know, that this is, this is the content that, that is part of the, that comes from one of our partners. And you're not running the risk of pulling the malicious image from an employee master source. 
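One way to put that trust into practice from the command line is to pull from a publisher whose verified badge you have checked on Docker Hub and then pin the exact digest you vetted; the publisher, image, and tag below are only examples:

```bash
# Pull an image from a publisher you trust (bitnami is one example of a
# Docker Verified Publisher; check the badge on its Docker Hub page)
docker pull bitnami/postgresql:13

# Record the content digest so your builds always use exactly what you vetted
docker image inspect --format '{{index .RepoDigests 0}}' bitnami/postgresql:13

# Reference that digest in a Dockerfile or Compose file instead of a floating tag,
# e.g. bitnami/postgresql@sha256:<digest>
```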
>>As we look beyond what we can do for, for providing the reliable content, we're also looking at some of the tools and the infrastructure that we can do, uh, to create a security around the content that you're creating. So last year at the last ad, the last year's DockerCon, we announced partnership with sneak. And later on last year, we launched our DACA, desktop and Docker hub vulnerability scans that allow you the options of writing scans in them along multiple points in your dev cycle. And in addition to providing you with information on the vulnerability on, on the vulnerabilities, in, in your code, uh, it also provides you with a guidance on how to re remediate those vulnerabilities. But as we look beyond the vulnerability scans, we're also looking at some of the other things that we can do, you know, to, to, to, uh, further ensure that the integrity and the security around your images, your images, and with that, uh, later on this year, we're looking to, uh, launch the scope, personal access tokens, and instead of talking about them, I will simply show you what they look like. >>So if you can see here, this is my page in Docker hub, where I've created a four, uh, tokens, uh, read-write delete, read, write, read only in public read in public creeper read only. So, uh, earlier today I went in and I, I logged in, uh, with my read only token. And when you see, when I'm going to pull an image, it's going to allow me to pull an image, not a problem success. And then when I do the next step, I'm going to ask to push an image into the same repo. Uh, would you see is that it's going to give me an error message saying that they access is denied, uh, because there is an additional authentication required. So these are the things that we're looking to add to our roadmap. As we continue thinking about the things that we can do to provide, um, to provide additional building blocks, content, building blocks, uh, and, and, and tools to build the trust so that our DACA developer and skinned code faster than Mario Andretti could ever imagine. Uh, thank you to >>Thank you, marina. It's amazing what you can do to improve the trusted content so that you can accelerate your development more and move more quickly, move more collaboratively and build upon the great work of others. Finally, we hear over and over as that developers are working on their applications that they're looking for, environments that are consistent, that are the same as production, and that they want their applications to really run anywhere, any environment, any architecture, any cloud one great example is the recent announcement of apple Silicon. We heard from developers on uproar that they needed Docker to be available for that architecture before they could add those to it and be successful. And we listened. And based on that, we are pleased to share with you Docker, desktop on apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying an ARM-based cloud environments and having a consistent architecture across your development and production or using multi-year architecture support, which enables your whole team to collaborate on its application, using private repositories on Docker hub, and thrilled to introduce you to Hughie cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience. >>Senior director of product management at Docker. 
And I'd like to jump straight into a demo. This is the Mac mini with the Apple Silicon processor, and I want to show you how you can now do an end-to-end Arm workflow from my M1 Mac mini to a Raspberry Pi. As you can see, we have VS Code and Docker Desktop installed on the Mac mini. I have a small example here, and I have a Raspberry Pi 3 with an LED strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here builds the application. We build the image with the docker buildx command to make the image compatible with all Raspberry Pis on Arm64. Part of this build runs with the native power of the M1 chip. I also add the push option to easily share the image with my team so they can give it a try too. Docker now creates the local image with the application and uploads it to Docker Hub. After we've built and pushed the image, we can go to Docker Hub and see the new image there. You can also explore a variety of images that are compatible with Arm processors. Now let's go to the Raspberry Pi. I have Docker already installed, and it's running 64-bit Ubuntu. With the docker run command, I can run the application, and let's see what happens. You can see Docker is downloading the image automatically from Docker Hub, and when it's running, if it works right, there are some nice colors. And with that, we have an end-to-end workflow for Arm. We're continuing to invest in providing you a great developer experience that's easy to install and easy to get started with, as you saw in the demo. If you're interested in the new Mac mini, or interested in developing for Arm platforms in general, we've got you covered with the same experience you've come to expect from Docker, with over 95,000 Arm images on Hub, including many Docker Official Images.
>>We think you'll find what you're looking for. Thank you again to the community that helped us test the tech previews. We're so delighted to hear when folks say that the new Docker Desktop for Apple Silicon just works for them. But that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is so important. We're introducing Compose V2, which makes Compose a first-class citizen in the Docker CLI; you no longer need to install a separate Compose binary in order to use Compose. Deploying to production is simpler than ever with the new Compose integrations that enable you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. And if you're interested in running slightly different services when you're debugging versus testing, or just in general development, you can manage that all in one place with the new Compose service profiles. To hear more about what's new in Docker Desktop, please join me in the 3:15 breakout session this afternoon.
>>And now I'd love to tell you a bit more about Buildx and convince you to try it if you haven't already. It's our next-gen build command, and it's no longer experimental. As shown in the demo, with Buildx you'll be able to do multi-architecture builds and share those builds with your team and the community on Docker Hub. With Buildx, you can speed up your build processes with remote caches, or build all the targets in your Compose file in parallel with Buildx bake. And there's so much more. If you're using Docker Desktop or Docker CE, you can use Buildx. Check out Tonis's talk this afternoon at 3:45 to learn more about Buildx.
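Roughly, the end-to-end Arm workflow and the new Compose pieces look like this from the command line; the image name, builder name, and context name are placeholders, not the ones used in the demo:

```bash
# One-time: create and select a Buildx builder
docker buildx create --name mybuilder --use

# Build for Arm64 (and amd64) in one shot on the M1 and push to Docker Hub
docker buildx build \
  --platform linux/arm64,linux/amd64 \
  -t example/led-rainbow:1.0 --push .

# On the Raspberry Pi, the matching architecture is pulled automatically
docker run --rm example/led-rainbow:1.0

# Compose V2 is part of the Docker CLI ("docker compose", no hyphen)
docker compose up -d

# The same Compose file can be deployed to Amazon ECS through a context
docker context create ecs myecs
docker context use myecs
docker compose up
```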
And with that, I hope everyone has a great Dr. Khan and back over to you, Donnie. >>Thank you UA. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work. Finally, I'd like to wrap up by showing you everything that we've announced today and everything that we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker verified publisher program we've announced scoped personal access tokens to make it easier for you to have a secure CCI pipeline. We've announced Docker dev environments to improve your collaboration with your team. Uh, we shared with you Docker, desktop and apple Silicon, to make sure that, you know, Docker runs everywhere. You need it to run. And we've announced Docker compose version two, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well from audit logs to advanced image management, to compose service profiles, to improve where you can run Docker more easily. >>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain with which only grows more and more important as time goes on. We're going to be optimizing your update experience to make sure that you can easily understand the current state of your application, all its components and keep them all current without worrying about breaking everything as you're doing. So we're going to make it easier for you to synchronize your work. Using cloud sync features. We're going to improve collaboration through dev environments and beyond, and we're going to do make it easy for you to run your microservice in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better. And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker >>I'm Matt Falk, I'm the head of engineering and orbital insight. And today I want to talk to you a little bit about data from space. So who am I like many of you, I'm a software developer and a software developer about seven companies so far, and now I'm a head of engineering. So I spend most of my time doing meetings, but occasionally I'll still spend time doing design discussions, doing code reviews. And in my free time, I still like to dabble on things like project oiler. So who's Oberlin site. What do we do? Portal insight is a large data supplier and analytics provider where we take data geospatial data anywhere on the planet, any overhead sensor, and translate that into insights for the end customer. So specifically we have a suite of high performance, artificial intelligence and machine learning analytics that run on this geospatial data. >>And we build them to specifically determine natural and human service level activity anywhere on the planet. What that really means is we take any type of data associated with a latitude and longitude and we identify patterns so that we can, so we can detect anomalies. And that's everything that we do is all about identifying those patterns to detect anomalies. So more specifically, what type of problems do we solve? 
So supply chain intelligence, this is one of the use cases that we we'd like to talk about a lot. It's one of our main primary verticals that we go after right now. And as Scott mentioned earlier, this had a huge impact last year when COVID hit. So specifically supply chain intelligence is all about identifying movement patterns to and from operating facilities to identify changes in those supply chains. How do we do this? So for us, we can do things where we track the movement of trucks. >>So identifying trucks, moving from one location to another in aggregate, same thing we can do with foot traffic. We can do the same thing for looking at aggregate groups of people moving from one location to another and analyzing their patterns of life. We can look at two different locations to determine how people are moving from one location to another, or going back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said last year with COVID, everything changed in particular supply chains changed incredibly, and it was hugely important for customers to know where their goods or their products are coming from and where they were going, where there were disruptions in their supply chain and how that's affecting their overall supply and demand. So to use our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are going from coming from or going to. >>So what's our team look like? So my team is currently about 50 engineers. Um, we're spread into four different teams and the teams are structured like this. So the first team that we have is infrastructure engineering and this team largely deals with deploying our Dockers using Kubernetes. So this team is all about taking Dockers, built by other teams, sometimes building the Dockers themselves and putting them into our production system, our platform engineering team, they produce these microservices. So they produce microservice, Docker images. They develop and test with them locally. Their entire environments are dockerized. They produce these doctors, hand them over to him for infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing. They develop and test with Dr. Locally. They also produce a suite of Docker images that the infrastructure team can then deploy. And lastly, we have our R and D team, and this team specifically produces machine learning algorithms using Nvidia Docker collectively, we've actually built 381 Docker repositories and 14 million. >>We've had 14 million Docker pools over the lifetime of the company, just a few stats about us. Um, but what I'm really getting to here is you can see actually doctors becoming almost a form of communication between these teams. So one of the paradigms in software engineering that you're probably familiar with encapsulation, it's really helpful for a lot of software engineering problems to break the problem down, isolate the different pieces of it and start building interfaces between the code. This allows you to scale different pieces of the platform or different pieces of your code in different ways that allows you to scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things that we can largely do now is use Dockers as that interface. 
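As a rough sketch of that hand-off between teams (the registry, image names, ports, and variables here are invented, not Orbital Insight's actual setup):

```bash
# Team A (R&D) publishes a versioned model-serving image to a shared registry
docker build -t registry.example.com/rnd/detector:2.3.0 .
docker push registry.example.com/rnd/detector:2.3.0

# Team B (infrastructure) deploys it knowing only the image name and its
# runtime contract (ports, environment variables), not the code behind it
docker pull registry.example.com/rnd/detector:2.3.0
docker run -d -p 9000:9000 -e MODEL=trucks registry.example.com/rnd/detector:2.3.0
```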
So instead of having an entire platform where all the teams are talking to each other and everything is mishmashed into a monolithic application, we can now say this team only talks to that team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to the next team, and that really allows us to scale our development and be much more efficient. >>I'd also like to say we are hiring. We have a number of open roles, about 30 in our engineering team, that we're looking to fill by the end of this year, so if any of this sounds interesting to you, please reach out after the presentation. >>So what does our platform actually do? Our platform allows you to answer any geospatial question, and we do this with three different inputs. First, where do you want to look? We define this as what we call an AOI, an area of interest; you can think of it as a polygon drawn on the map. We have a curated data set of almost 4 million AOIs that you can search and use for your analysis, but you're also free to build your own. The second question is what you want to look for, and this is the more interesting part of our platform: our machine learning and AI capabilities. We have a suite of algorithms that automatically let you identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and which locations people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button. >>Lastly, you determine when you want to find what you're looking for. Do you want to look at the next three hours? The last week? Every month for the past two? Whatever the time cadence is, you decide, you hit go, and out pops a time series. That time series tells you, for the place you looked and the thing you looked for, how many, or what percentage, of that thing appears in the area. Again, we do all of this to work toward patterns: we use all this data to produce a time series, and from there we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chains, this is extremely valuable for identifying where things change. We can answer these questions by looking at a particular operating facility: what the level of activity is at that facility, where people are coming from and going to after visiting it, and when and where that changes. Here you can see a picture of our platform; it's showing all the devices in Manhattan over a period of time, in more of a heat-map view, so you can see the hotspots in the area.
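That heat-map view boils down to binning points into grid cells and counting. A toy version, with made-up coordinates and a simplistic fixed cell size, looks roughly like this:

```python
import math
from collections import Counter

# Made-up device pings (lat, lon); in practice these come from the sensor feed.
pings = [
    (40.75804, -73.98551), (40.75832, -73.98529), (40.75811, -73.98574),
    (40.74844, -73.98566), (40.71280, -74.00601),
]

CELL = 0.001  # grid size in degrees, very roughly a city block

def cell_of(lat, lon):
    """Map a point to the grid cell that contains it."""
    return (math.floor(lat / CELL), math.floor(lon / CELL))

heatmap = Counter(cell_of(lat, lon) for lat, lon in pings)
print(heatmap.most_common(1))  # the densest cell (3 pings here) is the hotspot
```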
>>So really, and this is the heart of the talk, what happened in 2020? For me, like many of you, 2020 was a difficult year: COVID hit, and that changed a lot of what we were doing, not just from an engineering perspective but from an entire company perspective. For us, the motivation became making sure we were lowering our costs and increasing innovation simultaneously. Those two things often compete with each other: a lot of times, increasing innovation increases your costs, and the challenge last year was how to do both at once. So here are a few stats from our team. In Q1 of last year we were spending almost $600,000 per month on compute. Prior to COVID that wasn't a huge concern for us; it was a lot of money, but it wasn't as critical as it became last year, when we really needed to be much more efficient. >>The second theme is flexibility. We were deployed in a single cloud environment, and while that was fine, we wanted to be more flexible: to be on more cloud environments so we could reach more customers, and eventually to get onto classified networks, extending our customer base as well. From a custom analytics perspective, this is where we get into our traction: over the entire last year we computed 54,000 custom analytics for different users, and we wanted that number to keep steadily increasing even while we were trying to lower our costs; we didn't want lowering costs to come at the sacrifice of our user base. Lastly, a percentage that I'll say definitely needed to improve: 75% of our projects never fail. This is where we get into the stability of our platform. >>Now, I'm not saying that 25% of our projects fail outright. The way we measure this is that if you have a project or computation that runs every day and any one of those runs fails, that counts as a failure, because from an end-user perspective it's an issue. So this is something we knew we needed to improve to make our platform more stable, and it's something we really focused on last year. So where are we now? Coming out of the COVID valley, we're starting to soar again. Back in April of last year we actually paused all development for about four weeks and had the entire engineering team focused on reducing our compute costs in the cloud. We got it down to $200K per month over a few months, >>and for the next 12 months we hit that number every month. This is huge for us; it's extremely important. Like I said, in the COVID period, cost and operating efficiency were everything, so that was a huge accomplishment, and something we'll keep going forward. One thing I'd really like to highlight is what allowed us to do that. First off, being in the cloud and being able to migrate was one thing, and we were able to use different cloud services in a more efficient way. We had very detailed tracking of how we were spending, we increased our data retention policies, and we optimized our processing. One additional piece, though, was switching to new technologies: in particular, we migrated to GitLab CI/CD. >>And because we use Docker, this was extremely, extremely easy. We didn't have to build new containers or repositories or change our code in order to do it; we were simply able to move the containers over and start using the new CI system. In fact, we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds: over the last nine months we've been able to stand up and operate in a second cloud environment.
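A hedged sketch of what that portability buys: with the Docker SDK for Python, the same image and the same command can be pointed at two different Docker endpoints, say one per cloud, with nothing provider-specific in the code. The host URLs below are placeholders, not real infrastructure.

```python
# Same image, same command, two Docker endpoints; only the connection string changes.
import docker

endpoints = {
    "cloud-a": "ssh://ops@builder.cloud-a.example.com",
    "cloud-b": "ssh://ops@builder.cloud-b.example.com",
}

for name, url in endpoints.items():
    client = docker.DockerClient(base_url=url)
    output = client.containers.run(
        "python:3.10-slim",
        ["python", "-c", "print(42)"],
        remove=True,
    )
    print(name, output.decode().strip())
```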
And again, this is something Docker helped with incredibly. We didn't have to go and build all new interfaces to all the different services or tools in the next cloud provider; all we had to do was build a base cloud infrastructure that abstracts away the provider-specific details, >>and then our Docker containers just worked. We can move them to another environment, bring them up and running, and our platform is ready to go. From a traction perspective, we're about a third of the way through the year and we've already exceeded the number of customer analytics we produced all of last year. That's thanks to a ton more algorithms, a whole suite of new analytics we've been able to build over the past 12 months, and we'll continue to build going forward. It's a really great outcome for us, because we were able to show that our costs stayed down while our analytics and customer traction went up. From a stability perspective, we improved from 75% to 86%; not quite three nines or four nines yet, but we're getting there. And that's thanks to really containerizing and modularizing the different pieces of our platform so we can scale up different areas independently. This piece of the code works over here and talks through an interface to the rest of the system; we can scale it up separately from the rest of the system, which lets us much more easily identify issues, fix them, and correct the system overall. So that's a summary of where we were last year, where we are now, and how much more successful we are because of the issues we went through last year, largely brought on by COVID. >>This is just a screenshot of our solution actually working on supply chain. In particular, it shows traceability for a distribution warehouse in Salt Lake City; it's right in the center of the screen, the orange-red center. That's the distribution warehouse, and all the lines and dots around it show where people and trucks are moving from that location. This is really helpful for supply chain companies, because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.
A Day in the Life of an IT Admin | HPE Ezmeral Day 2021
>>Hi, everyone. Welcome to Ezmeral Day. My name is Yasmin Joffey, and I'm the director of systems engineering for Ezmeral at HPE. Today we're joined by my colleague Don Wake, a technical marketing engineer, who will talk to us about the day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any questions, please feel free to put them in the chat, and we should have some time at the end for live Q&A. Don, want to go ahead and kick us off? >>All right, thanks a lot, Yasir. My name is Don Wake, I'm the tech marketing guy, and welcome to Ezmeral Day: a day in the life of an IT admin. And happy St. Patrick's Day at the same time; I hope you're wearing green, virtual pinch if you're not. So we're going to go through some quick things, talk about the needs of the modern business to set the stage, and then go right into a demo. So what is the need we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data. They're also application-centric, and the separation of applications and data, and the relationship between the two, has never been more important: applications are very data hungry these days and consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision and every application has become a huge problem to tackle for every enterprise. So we look at data science, for example, as one big use case here, and it's really a team sport. Today I'm wearing the hat of, say, the operations team, or a software engineer working on continuous integration and continuous delivery with source control, supporting these data scientists and data analysts. I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so they can do their work. So this is the solution I've been given as an IT admin, and that is the Ezmeral Container Platform. >>Just walking through this real quick: at the top, I'm trying, wherever possible, not to get involved in these folks' lives. The data engineers, data scientists, app developers, and DevOps folks all have particular needs, and they can access their resources and spin up clusters, or just work with a Jupyter notebook, or run Spark or Kafka or any of the popular analytics platforms, simply through endpoints, web URLs we provide to them, all self-service. In the backend, as the IT guy, I make sure the Kubernetes clusters are up and running, assign particular access to particular roles, make sure the data is well protected, and connect it all. I can import clusters from public clouds, I can put my clusters on premises if I want to, >>and I can do all of this through one centralized control plane. So today I'm just going to show you how I support some data scientists. One of our own people is actually doing a demo right now as well, called A Day in the Life of a Data Scientist.
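A rough idea of what that kind of multi-cluster visibility looks like outside the UI, using the official Kubernetes Python client: assuming each imported cluster shows up as a kubeconfig context, you can sweep across them from one place. The context names below are placeholders for whatever your kubeconfig calls the clusters.

```python
# One-pane-of-glass sketch: count pods per cluster across kubeconfig contexts.
from kubernetes import client, config

for context in ["prod-cluster", "dev-cluster", "eks-imported"]:
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context))
    pods = api.list_pod_for_all_namespaces(watch=False)
    print(f"{context}: {len(pods.items)} pods")
```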
That data scientist is on the opposite side, not caring about any of the stuff I'm doing in the backend: he's training models, registering models, and working with data inside his Jupyter notebook, running inferences and running Postman scripts. So I'm in the background making sure he has access to his cluster, that his storage is protected, that his training models are up, and that he has service endpoints connecting him to his source control and to everything else he needs. He has a taxi-ride prediction model he's working on, with a Jupyter notebook and models. So why don't we get hands on, and I'll jump right over to it. >>Here's the Ezmeral Container Platform. This is the web UI, the interface into the container platform, our centralized control plane, and I'm using my Active Directory credentials to log in. >>When I log in, I've also been assigned a particular role with regard to how much of the resources I can access. In my case I'm a site admin; you can see that in the upper right, and I have access to lots and lots of resources. The one I'm going to focus on today is a Kubernetes cluster. So I have a cluster I can go into, and let's say we have a new data scientist coming on board: I can give him his own resources so he can do whatever he wants, use some GPUs, and not affect other clusters. We have all these other clusters already created here; you can see this is a very busy production system, with some dev clusters over here. >>I see we have a production cluster here. It needs to produce something for data scientists to use, so it has to be well protected and not treated like a development resource. Under this production cluster I decided to create a new Kubernetes cluster, and literally I just push a button: Create Kubernetes Cluster. Once I've done that, and I'll just show you some of the screens, since this is a live environment and all my hosts are used up right now, I go in, give it a name, select some hosts to use as the primary master controller and as workers, answer a few more questions, and once that's done I've created a whole other Kubernetes cluster that I can also create tenants from. >>Tenants are really Kubernetes namespaces. So in addition to taking hosts and creating Kubernetes clusters, I can also go to existing clusters and carve out a namespace. Looking at some of the clusters that were already created, here's an example of a tenant I could have created from that production cluster. To do that, in the namespace view I just hit Create, and similar to creating a cluster, I carve down from a given cluster, say the production cluster, give it a name and a description, and I can even tell it I want this one to be an AI/ML project, which really is our MLOps license. So at the end of the day I can say, okay, I'm going to create an MLOps tenant from that cluster I created. >>I've already created it for this demo, so I'm going to go into that Kubernetes namespace, which we also call a tenant.
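Since a tenant maps to a Kubernetes namespace, carving one out by hand, outside the platform UI, looks roughly like this with the Kubernetes Python client. The namespace name and labels are illustrative, not the platform's own naming.

```python
# Minimal sketch: create a "tenant" namespace directly against the cluster.
from kubernetes import client, config

config.load_kube_config()           # use the downloaded kubeconfig
v1 = client.CoreV1Api()

tenant = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="mlops-tenant",
        labels={"team": "data-science", "purpose": "mlops"},
    )
)
v1.create_namespace(body=tenant)
print([ns.metadata.name for ns in v1.list_namespace().items])
```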
It's multi-tenancy: the name essentially means we're carving out resources so that somebody can be isolated from another environment. At this point I could also give access to this tenant, and only this tenant, to my data scientist, so the first thing I typically do is go in here and assign users. Right now it's just me, but if I wanted to give this to Terry, for example, I could go in and assign him from this list, as long as he has the proper credentials. You can see that all these other users have Active Directory credentials; when we created the cluster itself, we made sure it integrated with our Active Directory so that only authorized users can get in. >>Let's say the first thing I want to do, when I or Terry do Jupyter notebook work, is connect straight up to the GitHub repository. He gives me a link to GitHub and says, hey, this is all my cluster work: my source control is there, my scripts, my Python notebooks, my Jupyter notebooks. So I create a configuration: here's a Git repo, here's the link to it, here's his username, and here's a token, since this is a private repo; it's the standard Git interface. And the cool thing is that after that, you can go in and copy the authorization secret. >>This gets into the Kubernetes world: if you want secure integration with things like your source control, or your Active Directory, that's all maintained in secrets. So I can take that secret, and when I create his notebook I can put it right into the launch YAML and say, connect this Jupyter notebook up with this secret so he can log in. Once I've launched this Jupyter notebook, it's running within my Kubernetes tenant; it's really a pod. If I want to, I can go right into a terminal for that Kubernetes tenant and use kubectl; this is standard, CNCF-certified Kubernetes. When I run kubectl get pods, it tells me all the active pods, and within those pods the containers I'm running. >>So I'm running quite a few pods and containers here in this artificial intelligence and machine learning tenant, which is kind of cool. Also, if I wanted to, I could download the kubeconfig and then do something like this on my own system, where I'm more comfortable: kubectl get pods, running on my laptop. I just had to refresh my kubeconfig and give it the IP address and authorization information in order to connect from my laptop to that endpoint. From a CI/CD perspective, an IT admin usually wants to use tools right on the desktop. So here I am back in my web browser, on the dashboard of this Kubernetes tenant, and I can see how it's doing. >>It looks like it's kind of busy here. I can focus specifically on a pod if I want to, and I happen to know this pod is my Jupyter notebook pod.
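The kubectl steps described here have direct Python equivalents. A hedged sketch, with an assumed tenant namespace name, a placeholder token, and an assumed pod label: store the Git token as a secret, then list the tenant's pods just like kubectl get pods would.

```python
import base64
from kubernetes import client, config

config.load_kube_config()      # the kubeconfig downloaded from the tenant page

v1 = client.CoreV1Api()
NAMESPACE = "mlops-tenant"     # assumed tenant namespace name

# Store the GitHub token the same way the UI does: as a Kubernetes secret.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="github-token", namespace=NAMESPACE),
    type="Opaque",
    data={"token": base64.b64encode(b"ghp_example_token").decode()},
)
v1.create_namespaced_secret(namespace=NAMESPACE, body=secret)

# Equivalent of `kubectl get pods` inside the tenant.
for pod in v1.list_namespaced_pod(namespace=NAMESPACE).items:
    print(pod.metadata.name, pod.status.phase)
```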
So I showed how I can enable my data scientist just by giving him a URL, what we call a notebook service endpoint. By clicking on that URL, or copying the link and emailing it to him, I can say, here's your Jupyter notebook, just log in with your credentials. I've already logged in, so here's his Jupyter notebook, and you can see he's connected to his GitHub repo directly. He has all the files he needs to run his data science project, and from here we're really in the data scientist's realm. >>He can see that he has access to centralized storage, and he can copy the files from his GitHub repo to that centralized storage. These commands are kind of cool: they're little Jupyter magic commands, and we have some of our own that show the attachment to the cluster. If you run these commands, they're actually looking at the shared project repository managed by the container platform. Just to show you that again, I'll go back to the container platform; the data scientist could do the same thing from his notebook. So here's the project repository, and this is the other big point: putting on my storage admin hat, I have a shared storage volume that is managed for me by the Ezmeral Data Fabric. >>In here, you can see that the data scientist, from his Git repo, was able to copy his code directly through the Jupyter notebook. He ran his Jupyter notebook and created this XGBoost model, and that file can then be registered in this AI/ML tenant: he can go in and register his model. This is really where the data scientist can self-service: kick off his notebooks, and even get a deployment endpoint so he can then run inference against his model. Here again is another URL that you could take and put into something like a Postman REST call and get answers back. But let's say he's been doing all this work and I want to make sure his data is protected. How about creating a mirror?
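As an aside on that deployment endpoint: it's just an HTTP service, so the Postman call can equally be made from Python. The URL, auth header, and payload shape below are illustrative only; they are not the platform's actual inference API.

```python
# Hypothetical call against a model deployment endpoint like the one above.
import requests

ENDPOINT = "https://gateway.example.com:10001/taxi-ride-model/predict"   # placeholder URL
payload = {"pickup": [40.7580, -73.9855], "dropoff": [40.6413, -73.7781], "hour": 17}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer <session-token>"},   # placeholder token
    timeout=30,
)
resp.raise_for_status()
print(resp.json())   # e.g. a predicted fare or duration from the registered model
```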
>>If I want to create a mirror of that data, I go back to this other piece: the data fabric, embedded in a very special cluster called the Picasso cluster. It's a version of the Ezmeral Data Fabric that lets you launch what was formerly called MapR as a Kubernetes cluster. When you create this special cluster, every other cluster you create automatically gets things like the tenant storage I showed you, to create a shared workspace, and it's automatically managed by the data fabric. You're even given an endpoint to go into the data fabric and use all of its features. So I can just log in, and now I'm at the data fabric web UI to do some data protection and mirroring. >>Let's say I want to create a mirror of that tenant. I forgot to note the name of the volume I'm working with, so I go back to my tenant: in my AI/ML tenant, under the project repository I want to protect, I see that the data fabric has created tenant30 as a volume. So I go back to my data fabric, look for tenant30, and if I want to, I can go into that volume. >>Down here I can look at the usage; I've used very little of the allocated storage. But let's go ahead and create a volume to mirror this one. It's a very simple web UI: I hit Create Volume, say I want a tenant30 mirror, set it as a mirror volume, and choose my Picasso cluster and the tenant30 source volume. That looks it up in the data fabric's database, so it knows exactly which volume I want to use. I can give the mirror whatever name I want, something like ext-hcp-tenant30-mirror, and set this path here. >>And that's a whole other demo: the target could be in Tokyo, it could be mirrored to all kinds of places all over the world, because this is truly a global namespace, which is a huge differentiator for us. In this case I'm creating a local mirror, and down here I can add auditing and encryption, do access control, and change permissions: full-service interactivity. And of course this is the web UI, but there are REST API interfaces as well. So that's pretty much the brunt of what I wanted to show you in the demo. We got hands on, so let me throw this slide back up real quick and come back to Yasir to see if he's got any questions from anybody watching. >>Yeah, we've got a few questions; we can take some time to answer a few. So it does look like you can integrate or incorporate your existing GitHub to be able to pull in shared code or repositories, correct? >>Yeah, we have that built in, and it can be either GitHub or Bitbucket; it's a pretty standard interface. Just like you can go into any GitHub repo, clone it, and pull it into your local environment, we've integrated that directly into the UI, so you can point your AI/ML tenant, your Jupyter notebook, at your GitHub repo, and when you open the notebook it connects you straight up. That saves you some steps, because Jupyter notebooks are designed to be integrated with GitHub. So we have GitHub integrated in, or Bitbucket. >>Another question around the file system: has the MapR file system that was carried over been modified in any way to run on top of Kubernetes? >>I would say that what I showed here is the Kubernetes version of the MapR file system, the data fabric. It gives you a lot of the same features, but if you have, say, performance concerns and need bare metal, you can also deploy it as a separate bare-metal instance of the data fabric. This is just one way to use it, integrated directly into Kubernetes; it depends on the needs of the user. The data fabric has a lot of different capabilities, and this version has the core file system capabilities, where you can do snapshots and mirrors, and of course it's striped across multiple disks and nodes. The MapR data fabric has been around for years and is designed for integration with these analytic-type workloads.
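On the point that the mirroring workflow is also exposed over REST: a call along these lines captures the general shape of it, but the endpoint path and field names below are purely illustrative; the real ones live in the Data Fabric REST documentation.

```python
# Illustrative (not actual) REST call to create a mirror volume.
import requests

FABRIC = "https://datafabric.example.com:8443"      # placeholder endpoint
params = {
    "name": "tenant30-mirror",
    "type": "mirror",
    "source": "tenant30@picasso",                   # source volume @ source cluster
    "path": "/mirrors/tenant30-mirror",
}
resp = requests.post(
    f"{FABRIC}/rest/volume/create",                 # illustrative path
    params=params,
    auth=("admin", "<password>"),
    verify=False,        # demo environments often use self-signed certs
    timeout=30,
)
print(resp.status_code, resp.text)
```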
>>Great. You showed us how you can manage Kubernetes clusters through the Ezmeral Container Platform UI. But the question is: can you control who accesses which tenant, which namespace, that you created? And can you restrict or inject resource limitations for each individual namespace through the UI? >>That's a great question, and the answer is yes to both. As a site admin I have lots of authority to create clusters and go into any cluster I want, but typically, for the data scientist example I used, I would create a user for him, and there are a couple of ways to create users; it's all role-based access control. I can create a local user and have the container platform authenticate him, or I can integrate directly with Active Directory or LDAP, including which groups he has access to. And then in the user interface, as the site admin, I can say he gets access to this tenant and only this tenant. The other thing you asked about is limitations: when you create the tenant, to prevent the noisy-neighbor problem, you can create quotas. >>I didn't show the process of actually creating a tenant, but integral to that flow is: I've defined which cluster I want to use, and I define how much memory I want, so there's a quota right there. You can say how many CPUs you're taking from the pool, and that's one of the cool things about the platform: it abstracts all that away. You don't have to know exactly which host; you create the cluster by selecting specific hosts, but once it's created, it's just a big pool of resources. So you can say Bob over here only gets 50 of the hundred CPUs available, only gets X gigabytes of memory, and only gets this much storage to consume. You can safely hand something off and know they're not going to take all the resources, especially the GPUs, which are expensive; you want to make sure one person doesn't hog them all. So absolutely, quotas are built in. >>Fantastic. Well, I think we're out of time. We have a list of other questions, and we will absolutely reach out and get all your questions answered for those of you who asked in the chat. Don, thank you very much, and thanks everyone else for joining. Don, will this recording be made available for those who couldn't make it today? >>I believe so. Honestly, I'm not sure what the process is, but it's being recorded, so they must have done that for a reason. >>Fantastic. Well, Don, thank you very much for your time, and thank everyone else for joining. Thank you.
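To put that quota discussion in plain Kubernetes terms, a per-tenant ResourceQuota along the lines Don describes might look like this; the values and the GPU resource name are illustrative, not the platform's defaults.

```python
# Sketch: cap a tenant namespace's CPU, memory, GPU, and storage claims.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="mlops-tenant-quota", namespace="mlops-tenant"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "50",              # 50 of the 100 CPUs in the pool
            "requests.memory": "256Gi",
            "requests.nvidia.com/gpu": "2",    # keep one user from hogging the GPUs
            "persistentvolumeclaims": "10",
        }
    ),
)
v1.create_namespaced_resource_quota(namespace="mlops-tenant", body=quota)
```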
DockerCon 2020 Kickoff
>>From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >>Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE, in our Palo Alto studios with our quarantine crew. We have a great lineup here for the DockerCon 2020 virtual event; normally it's in person, face to face. I'll be with you throughout the day with an amazing lineup of content: over 50 different sessions, CUBE tracks, keynotes, and two great co-hosts here with Docker, Jenny Marcio and Brett Fisher, who will be with us all day, taking you through the program and helping you navigate the sessions. I'm so excited. Jenny, this is a virtual event; can you believe it? May the internet gods be with us today, and I hope everyone's having an easy time getting in. Jenny, Brett, thank you for being here. >>Yeah, hi everyone. So great to see everyone chatting and telling us where they're from. Welcome to the Docker community; we have a great day planned for you. >>You did a great job getting this all together. I know how hard it is; these virtual events are hard to pull off. I'm blown away by the community at Docker: the number of sessions coming in, the sponsor support has been amazing, and just the overall excitement around the brand and the opportunities, given the tough times we're in. It's super exciting. Again, may the internet gods be with us throughout the day, but there's plenty of content, and Brett's got an amazing all-day marathon of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity: tell us about the virtual event. Why DockerCon virtual? Obviously everyone's cancelling their events, but this is special to you. Talk about DockerCon virtual this year. >>Yeah. The Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in-person event this year, we didn't want to lose the time we all come together at DockerCon: the conversations, the amazing content, and the learning opportunities. So we decided back in December to make DockerCon a virtual event, and of course when we did that there was no quarantine. I certainly didn't expect to be delivering it from my living room, but we were just completely blown away: nearly 70,000 people across the globe have registered for DockerCon today. And when you look at DockerCons of the past, the live event is really just the tip of the iceberg, so we're thrilled to be able to deliver a more inclusive, global event today. We have so much planned. Brett, do you want to tell us some of the things you have planned? >>Well, I'm sure I'm going to forget something because there's a lot going on, but we've obviously got interviews all day today on this channel with John and the crew. Jenny has put together an amazing set of speakers all day long in the sessions, and then you have Captains on Deck, which is essentially the YouTube Live hangout where we basically talk shop. It's all engineers, all day long: captains and special guests. And we're going to be in chat, talking to you and answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have, and maybe there'll be some random demos, but it's basically not scripted.
It's an all-day-long unscripted event, so I'm sure it's going to be a lot of fun hanging out in there. >>Well guys, I just want to say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal and laid back in the Captains channel or in the sessions, where the speakers will be there with their presentations. But Jenny, I want to get your thoughts, because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero, and then there are tracks, and Brett's running the Captains track; you can click on that link and jump into his session all day long. He's got an amazing lineup: leaning back, having a good time. And then in each of the tracks you can jump into those sessions. It's on a clock, and it will be available on demand; all that content is available. If you're on your mobile, it's the same thing: look at the calendar, find the session you're interested in, and you can watch it live and chat with the participants in real time, or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen, so you're in the moment, which is a great time to chat in real time. But there's more. Jenny, you're trying to bring together the stimulation of community: how do participants get more out of the event besides just consuming the content all day? >>Yeah. So first, set up your profile: put your picture next to your chat handle, and then chat. Like John said, we have various setups today to help you get the most out of your experience. For the breakout sessions, the content is prerecorded, so you get quality content and the speakers are in chat, so you can ask questions the whole time. If you're looking for the hallway track, definitely check out the Captains on Deck channel, and then we have some great interviews all day on theCUBE. So set up your profile, join the conversation, and be kind, right? This is a community event; the code of conduct is linked at the top of every page. Just have a great day. >>And Brett, you have an amazing lineup on the Captains channel, and a great YouTube channel that you stream on, so folks who are familiar with that can get it either on YouTube or on the site, with the chat integrated in. So what do you have going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the Captains channel; that's going to be pretty dynamic in the chat too. >>Yeah, so I'm sure we're going to have lots and lots of stuff going on in chat, no concerns about crickets there. We're basically starting the day with two of my good Docker captain friends, Nirmal Mehta and Laura Tacho, and at the end of this keynote, at the end of this hour, we're going to get you going. Then you can jump out and take some sessions; maybe there's some cool stuff you want to check out in other sessions where you can chat and talk with the instructors and speakers, and then you come back to us, or go over and check out the interviews.
So the idea is that you're hopping back and forth, and throughout the day we're basically changing out every hour: not just the guests, but also the topics we can cover, because different guests have different expertise. We're going to have some special guests in from Microsoft to talk about some of the cool stuff going on there, and basically it's captains all day long. If you've been on my YouTube live show, you've seen a lot of the guests we have on; I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >>Awesome. And the content has been preserved. You had a great call-for-papers process for sessions. Jenny, this is good stuff. What can people do to make it interesting? Obviously we're looking for suggestions, so feel free to chirp on Twitter about new ideas. But you have some surprises, there are the selfies; what else is going on? Any secret surprises throughout the day? >>There are secret surprises throughout the day. You'll need to pay attention to the keynotes; Brett will have giveaways, and I know our wonderful sponsors have giveaways planned in their sessions as well. Hopefully you'll feel conflicted about what to attend; do know that everything is recorded and will be available on demand afterwards, so you can catch anything you miss. Most sessions will be available right after they stream the first time. >>All right, great stuff. So there's the Docker selfie, and the hashtag is just #DockerCon; if you feel like adding to the hashtag, no problem. Check out the sessions, and you can pop in and out of the Captains channel, where the cool kids will be hanging out with Brett, along with all the knowledge and learning. Don't miss the keynote; it should be solid. We've got James Governor from RedMonk delivering a keynote, and I'll be interviewing him live afterwards, so stay with us. And again, check out the interactive calendar: all you have to do is look at the calendar, click on the session you want, and you'll jump right in. Hop around and give us feedback; we're doing our best. Brett, any final thoughts on what you want to share with the community about what you've got going on at the virtual event? Just random thoughts. >>Yeah. Sorry we can't all be together in the same physical place, but the coolest thing about being online is that we actually get to involve everyone: as long as you have a computer and internet, you can attend DockerCon, even if you've never been to one before. So we're trying to recreate that experience online. Like Jenny said, the code of conduct is important; we're all in this together in the chat, so try to be nice in there. These are all real humans who have feelings, just like me, so let's try to keep it cool. And over in the Captains channel we'll be taking your questions, maybe playing some music, playing some games, and giving away some free stuff while you're in between sessions learning. >>Oh yeah, and I've got to say, props to your rig; you've got an amazing setup there, Brett. I love the show you do; it's really kick-ass stuff. Jenny, the sponsor and ecosystem response to this event has been phenomenal. The attendance is 67,000.
We're seeing a surge of people hitting the site now, so if you're not getting in, just wait; we're going to crank through the queue. But the sponsors and the ecosystem really delivered on the content side and also on the support. Do you want to share a few shout-outs to the sponsors who really helped make this happen? >>Yeah, definitely make sure you check out the sponsor pages; each page has the actual content that sponsor will be delivering, so they're delivering great content you can learn from. And a huge thank you to our platinum and gold sponsors. >>Awesome. Well, I have to say I'm super impressed. I'm looking forward to the Microsoft and Amazon sessions, and there are a couple of great customer sessions there. You know, I tweeted this out last night and I want to get your reaction to it, because there's been a lot of talk about the COVID crisis we're in, but there's also a positive upshot: a Cambrian explosion of developers who are going to be building new apps. And I said, apps aren't just going to change the world, they're going to save the world. So a lot of the theme this year is the impact developers are having right now, in the current situation. With the goodness of Compose and everything going on in Docker and its relationships, there's real impact happening in the developer community, and it's pretty evident in the program: some of the talks and examples show how containers and microservices are changing the world and helping save the world. Your thoughts? >>Yeah. Like you said, I think we have a number of sessions and interviews in the program today that really dive into that, particularly around COVID. Clemente is sharing his company's experience of being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have, on theCUBE channel, several interviews with the National Institutes of Health and precision cancer medicine. At the end of the day, you can really see how containerization and developers are moving industry, and really humanity, forward because of what they're able to build and create with advances in technology. >>And the first responders these days are developers. Brett, Compose is getting a lot of traction on Twitter; I can see some buzz already building up. There's huge traction with Compose, just the ease of use, and almost a call to arms for integrating it into all the system language libraries. What's going on with Compose? What do the captains say about it? It seems to be really tracking in terms of demand and interest. >>Yeah, I think we're over 700,000 Compose files on GitHub, so it's definitely beyond just standard docker run commands; it's the next tool people use to run containers. And that's not even counting everything; that's just counting the files that are actually named docker-compose.yaml. So I'm sure a lot of you out there have created a YAML file to manage your local containers, or even on a server, with Docker Compose. And the nice thing is that Docker is doubling down on that: we've gotten some news recently from them about what they want to do with opening the spec up and getting more companies involved, because Compose has already gathered so much interest from the community.
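For anyone who hasn't written one of those 700,000-plus files, here is a minimal taste: a tiny Compose file brought up with the v2 CLI (docker compose). The two services are generic placeholders, not anything specific that Docker announced, and this assumes Docker with the Compose plugin is installed.

```python
# Write a minimal docker-compose.yml and bring it up with the Compose v2 CLI.
import pathlib
import subprocess

compose = """\
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
"""

pathlib.Path("docker-compose.yml").write_text(compose)
subprocess.run(["docker", "compose", "up", "-d"], check=True)
subprocess.run(["docker", "compose", "ps"], check=True)
```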
You know, AWS has importers for it, there are Kubernetes importers for it, so there's more stuff coming, and we might just see something here in a few minutes. >>Well, let's get into the keynote. Guys, jump into the keynote, and if you miss anything, come back to the stream, check out the sessions, check out the calendar. Let's go, let's have a great time, have some fun, and enjoy the rest of the day. We'll see you soon.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jenny | PERSON | 0.99+ |
Clemente | PERSON | 0.99+ |
Brett | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Brett Fisher | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
December | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Jenny Marcio | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
DockerCon | EVENT | 0.99+ |
Laura | PERSON | 0.99+ |
each | QUANTITY | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
67,000 | QUANTITY | 0.99+ |
YouTube | ORGANIZATION | 0.99+ |
each page | QUANTITY | 0.99+ |
DockerCon con 2020 | EVENT | 0.99+ |
Docker con | EVENT | 0.98+ |
today | DATE | 0.98+ |
Nirmal Mehta | PERSON | 0.98+ |
Catherine | PERSON | 0.98+ |
Docker con 2020 | EVENT | 0.97+ |
first | QUANTITY | 0.97+ |
Brett compose | PERSON | 0.97+ |
over 50 different sessions | QUANTITY | 0.96+ |
this year | DATE | 0.96+ |
last night | DATE | 0.96+ |
Docker | TITLE | 0.96+ |
over 700,000 composed files | QUANTITY | 0.96+ |
Amazon | ORGANIZATION | 0.95+ |
ORGANIZATION | 0.95+ | |
nearly 70,000 people | QUANTITY | 0.95+ |
GitHub | ORGANIZATION | 0.94+ |
DockerCon live 2020 | EVENT | 0.94+ |
Institute of health and precision cancer medicine | ORGANIZATION | 0.91+ |
DockerCon 2020 Kickoff | EVENT | 0.89+ |
John furrier | PERSON | 0.89+ |
Cambridge | LOCATION | 0.88+ |
Kubernetes | TITLE | 0.87+ |
two great co-hosts | QUANTITY | 0.84+ |
first responders | QUANTITY | 0.79+ |
this year | DATE | 0.78+ |
one | QUANTITY | 0.75+ |
them | QUANTITY | 0.7+ |
national | ORGANIZATION | 0.7+ |
beginning of March | DATE | 0.68+ |
every year | QUANTITY | 0.5+ |
Docker con. | EVENT | 0.49+ |
red monk | PERSON | 0.43+ |
Yammel | PERSON | 0.34+ |
VMworld Day 1 General Session | VMworld 2018
For Las Vegas, it's the cube covering vm world 2018, brought to you by vm ware and its ecosystem partners. Ladies and gentlemen, Vm ware would like to thank it's global diamond sponsors and it's platinum sponsors for vm world 2018 with over 125,000 members globally. The vm ware User Group connects via vmware customers, partners and employees to vm ware, information resources, knowledge sharing, and networking. To learn more, visit the [inaudible] booth in the solutions exchange or the hemoglobin gene vm village become a part of the community today. This presentation includes forward looking statements that are subject to risks and uncertainties. Actual results may differ materially as a result of various risk factors including those described in the 10 k's 10 q's and k's vm ware. Files with the SEC. Ladies and Gentlemen, please welcome Pat Gelsinger. Welcome to vm world. Good morning. Let's try that again. Good morning and I'll just say it is great to be here with you today. I'm excited about the sixth year of being CEO. When it was on this stage six years ago were Paul Maritz handed me the clicker and that's the last he was seen. We have 20,000 plus here on site in Vegas and uh, you know, on behalf of everyone at Vm ware, you know, we're just thrilled that you would be with us and it's a joy and a thrill to be able to lead such a community. We have a lot to share with you today and we really think about it as a community. You know, it's my 23,000 plus employees, the souls that I'm responsible for, but it's our partners, the thousands and we kicked off our partner day yesterday, but most importantly, the vm ware community is centered on you. You know, we're very aware of this event would be nothing without you and our community and the role that we play at vm wares to build these cool breakthrough innovations that enable you to do incredible things. You're the ones who take our stuff and do amazing things. You altogether. We have truly changed the world over the last two decades and it is two decades. You know, it's our anniversary in 1998, the five people that started a vm ware, right. You know, it was, it was exactly 20 years ago and we're just thrilled and I was thinking about this over the weekend and it struck me, you know, anniversary, that's like old people, you know, we're here, we're having our birthday and it's a party, right? We can't have a drink yet, but next year. Yeah. We're 20 years old. Right. We can do that now. And I'll just say the culture of this community is something that truly is amazing and in my 38 years, 38 years in tech, that sort of sounds like I'm getting old or something, but the passion, the loyalty, almost a cult like behavior that we see in this team of people to us is simply thrilling. And you know, we put together a little video to sort of summarize the 20 years and some of that history and some of the unique and quirky aspects of our culture. Let's watch that now. We knew we had something unique and then we demonstrated that what was unique was also some reasons that we love vm ware, you know, like the community out there. So great. The technology I love it. Ware is solid and much needed. Literally. I do love Vmr. It's awesome. Super Awesome. Pardon? There's always someone that wants to listen and learn from us and we've learned so much from them as well. And we reached out to vm ware to help us start building. What's that future world look like? 
Since we're doing really cutting edge stuff, there's really no better people to call and Bmr has been known for continuous innovation. There's no better way to learn how to do new things in it than being with a company that's at the forefront of technology. What do you think? Don't you love that commitment? Hey Ashley, you know, but in the prep sessions for this, I thought, boy, what can I do to take my commitment to the next level? And uh, so, uh, you know, coming in a couple days early, I went to down the street to bad ass tattoo. So it's time for all of us to take our commitment up level and sometimes what happens in Vegas, you take home. Thank you. Vm Ware has had this unique role in the industry over these 20 years, you know, and for that we've seen just incredible things that have happened over this period of time and it's truly extraordinary what we've accomplished together. And you know, as we think back, you know, what vm ware has uniquely been able to do is I'll say bridge across know and we've seen time and again that we see these areas of innovation emerging and rapidly move forward. But then as they become utilized by our customers, they create this natural tension of what business wants us flexibility to use across these silos of innovation. And from the start of our history, we have collectively had this uncanny ability to bridge across these cycles of innovation. You know, an act one was clearly the server generation. You know, it may seem a little bit, uh, ancient memory now, but you remember you used to walk into your data center and it looked like the loove the museum of it passed right? You know, and you had your old p series and your z series in your sparks and your pas and your x86 cluster and Yo, it had to decide, well, which architecture or am I going to deploy and run this on? And we bridged across and that was the magic of Esx. You don't want to just changed the industry when that occurred. And I sort of called the early days of Esx and vsphere. It was like the intelligence test. If you weren't using it, you fail because Yup. Servers, 10 servers become one months, become minutes. I still have people today who come up to me and they reflect on their first experience of vsphere or be motion and it was like a holy moment in their life and in their careers. Amazing and act to the Byo d, You know, can we bridge across these devices and users wanted to be able to come in and say, I have my device and I'm productive on it. I don't want to be forced to use the corporate standard. And maybe more than anything was the power of the iphone that was introduced, the two, seven, and suddenly every employee said this is exciting and compelling. I want to use it so I can be more productive when I'm here. Bye. Jody was the rage and again it was a tough challenge and once again vm ware helped to bridge across the surmountable challenge. And clearly our workspace one community today is clearly bridging across these silos and not just about managing devices but truly enabling employee engagement and productivity. Maybe act three was the network and you know, we think about the network, you know, for 30 years we were bound to this physical view of what the network would be an in that network. We are bound to specific protocols. We had to wait months for network upgrades and firewall rules. Once every two weeks we'd upgrade them. 
If you had a new application that needed a firewall rule, sorry, you know, come back next month we'll put, you know, deep frustration among developers and ceos. Everyone was ready to break the chains. And that's exactly what we did. An NSX and Nice Sierra. The day we acquired it, Cisco stock drops and the industry realizes the networking has changed in a fundamental way. It will never be the same again. Maybe act for was this idea of cloud migration. And if we were here three years ago, it was student body, right to the public cloud. Everything is going there. And I remember I was meeting with a cio of federal cio and he comes up to me and he says, I tried for the last two years to replatform my 200 applications I got to done, you know, and all of a sudden that was this. How do I do cloud migration and the effective and powerful way. Once again, we bridged across, we brought these two worlds together and eliminated this, uh, you know, this gap between private and public cloud. And we'll talk a lot more about that today. You know, maybe our next act is what we'll call the multicloud era. You know, because today in a recent survey by Deloitte said that the average business today is using eight public clouds and expected to become 10 plus public clouds. And you know, as you're managing different tools, different teams, different architectures, those solution, how do you, again bridge across, and this is what we will do in the multicloud era, we will help our community to bridge across and take advantage of these powerful cycles of innovation that are going on, but be able to use them across a consistent infrastructure and operational environment. And we'll have a lot more to talk about on this topic today. You know, and maybe the last item to bridge across maybe the most important, you know, people who are profit. You know, too often we think about this as an either or question. And as a business leader, I'm are worried about the people or the And Milton Friedman probably set us up for this issue decades ago when he said, planet, right? the sole purpose of a business is to make profits. You want to create a multi-decade dilemma, right? For business leaders, could I have both people and profits? Could I do well and do good? And particularly for technology, I think we don't have a choice to think about these separately. We are permeating every aspect of business. And Society, we have the responsibility to do both and have all the things that vm ware has accomplished. I think this might be the one that I'm most proud of over, you know, w we have demonstrated by vsphere and the hypervisor alone that we have saved over 540 million tons of co two emissions. That is what you have done. Can you believe that? Five hundred 40 million tons is enough to have 68 percent of all households for a year. Wow. Thank you for what you have done. Thank you. Or another translation of that. Is that safe enough to drive a trillion miles and the average car or you could go to and from Jupiter just in case that was in your itinerary a thousand times. Right? He was just incredible. What we have done and as a result of that, and I'll say we were thrilled to accept this recognition on behalf of you and what you have done. You know, vm were recognized as number 17 in the fortune. Change the world list last week. And we really view it as accepting this honor on behalf of what you have done with our products and technology tech as a force for good. 
We believe that fundamentally that is our opportunity, if not our obligation, you know, fundamentally tech is neutral, you know, we together must shape it for good. You know, the printing press by Gutenberg in 1440, right? It was used to create mass education and learning materials also can be used for extremist propaganda. The technology itself is neutral. Our ecosystem has a critical role to play in shaping technology as a force for good. You know, and as we think about that tomorrow, we'll have a opportunity to have a very special guest and I really encourage you to be here, be on time tomorrow morning on the stage and you know, Sanjay's a session, we'll have Malala, Nobel Peace Prize winner and fourth will be a bit of extra security as you come in and you understand that. And I just encourage you not to be late because we see this tech being a force for good in everything that we do at vm ware. And I hope you'll enjoy, I'm quite looking forward to the session tomorrow. Now as we think about the future. I like to put it in this context, the superpowers of tech know and you know, 38 years in the industry, you know, I am so excited because I think everything that we've done over the last four decades is creating a foundation that allows us to do more and go faster together. We're unlocking game, changing opportunities that have not been available to any people in the history of humanity. And we have these opportunities now and I, and I think about these four cloud, you have unimaginable scale. You'll literally with your Amex card, you can go rent, you know, 10,000 cores for $100 per hour. Or if you have Michael's am ex card, we can rent a million cores for $10,000 an hour. Thanks Michael. But we also know that we're in many ways just getting started and we have tremendous issues to bridge across and compatible clouds, mobile unprecedented scale. Literally, your application can reach half the humans on the planet today. But we also know that five percent, the lowest five percent of humanity or the other half of humanity, they're still in the lower income brackets, less than five percent penetrated. And we know that we have customer examples that are using mobile phones to raise impoverished farmers in Africa, out of poverty just by having a smart phone with proper crop, the information field and whether a guidance that one tool alone lifting them out of poverty. Ai knows, you know, I really love the topic of ai in 1986. I'm the chief architect of the 80 46. Some of you remember what that was. Yeah, I, you know, you're, you're my folk, right? Right. And for those of you who don't, it was a real important chip at the time. And my marketing manager comes running into my office and he says, Pat, pat, we must make the 46 a great ai chip. This is 1986. What happened? Nothing an AI is today, a 30 year overnight success because the algorithms, the data have gotten so much bigger that we can produce results, that we can bring intelligence to everything. And we're seeing dramatic breakthroughs in areas like healthcare, radiology, you know, new drugs, diagnosis tools, and designer treatments. We're just scratching the surface, but ai has so many gaps, yet we don't even in many cases know why it works. Right? And we'll call that explainable ai and edge and Iot. We're connecting the physical and the digital worlds was never before possible. We're bridging technology into every dimension of human progress. And today we're largely hooking up things, right? 
We have so much to do yet to make them intelligent. Network secured, automated, the patch, bringing world class it to Iot, but it's not just that these are super powers. We really see that each and each one of them is a super power in and have their own right, but they're making each other more powerful as well. Cloud enables mobile conductivity. Mobile creates more data, more data makes the AI better. Ai Enables more edge use cases and more edge requires more cloud to store the data and do the computing right? They're reinforcing each other. And with that, we know that we are speeding up and these superpowers are reshaping every aspect of society from healthcare to education, the transportation, financial institutions. This is how it all comes together. Now, just a simple example, how many of you have ever worn a hardhat? Yeah, Yo. Pretty boring thing. And it has one purpose, right? You know, keep things from smacking me in the here's the modern hardhat. It's a complete heads up display with ar head. Well, vr capabilities that give the worker safety or workers or factory workers or supply people the ability to see through walls to understand what's going on inside of the equipment. I always wondered when I was a kid to have x Ray Vision, you know, some of my thoughts weren't good about why I wanted it, but you know, I wanted to. Well now you can have it, you know, but imagine in this environment, the complex application that sits behind it. You know, you're accessing maybe 50 year old building plants, right? You're accessing HVAC systems, but modern ar and vr capabilities and new containerized displays. You'll think about that application. You know, John Gage famously said the network is the computer pat today says the application is now a network and pretty typically a complicated one, you know, and this is the vm ware vision is to make that kind of environment realizable in every aspect of our business and community and we simply have been on this journey, any device, any application, any cloud with intrinsic security. And this vision has been consistent for those of you who have been joining us for a number of years. You've seen this picture, but it's been slowly evolving as we've worked in piece by piece to refine and extend this vision, you know, and for it, we're going to walk through and use this as the compass for our discussion today as we walk through our conversation. And you know, we're going to start by a focus on any cloud. And as we think about this cloud topic, you know, we see it as a multicloud world hybrid cloud, public cloud, but increasingly seeing edge and telco becoming clouds in and have their own right. And we're not gonna spend time on it today, but this area of Telco to the is an enormous opportunity for us in our community. You know, data centers and cloud today are over 80 percent virtualized. The Telco network is less than 10 percent virtualized. Wow. An industry that's almost as big as our industry entirely unvirtualized, although the technologies we've created here can be applied over here and Telco and we have an enormous buildout coming with five g and environments emerging. What an opportunity for us, a virgin market right next to us and we're getting some early mega winds in this area using the technologies that you have helped us cure rate than the So we're quite excited about this topic area as well. market. So let's look at this full view of the multicloud. Any cloud journey. 
And we see that businesses are on a multicloud journey, you know, and today we see this fundamentally in these two paths, a hybrid cloud and a public cloud. And these paths are complimentary and coexisting, but today, each is being driven by unique requirements and unique teams. Largely the hybrid cloud is being driven by it. And operations, the public cloud being driven more by developers and line of business requirements and as some multicloud environment. So how do we deliver upon that and for that, let's start by digging in on the hybrid cloud aspect of this and as we think about the hybrid cloud, we've been talking about this subject for a number of years and I want to give a very specific and crisp definition. You're the hybrid cloud is the public cloud and the private cloud cooperating with consistent infrastructure and consistent operations simply put seamless path to and from the cloud that my workloads don't care if it's here or there. I'm able to run them in a agile, scalable, flexible, efficient manner across those two environments, whether it's my data center or someone else's, I can bring them together to make that work is the magic of the Vm ware Cloud Foundation. The vm ware Cloud Foundation brings together computer vsphere and the core of why we are here, but combines with that networking storage delivered through a layer of management and automation. The rule of the cloud is ruthlessly automate everything. We laid out this vision of the software defined data center seven years ago and we've been steadfastly working on this vision and vm ware. Cloud Foundation provides this consistent infrastructure and operations with integrated lifecycle management automation. Patching the m ware cloud foundation is the simplest path to the hybrid cloud and the fastest way to get vm ware cloud foundation is hyperconverged infrastructure, you know, and with this we've combined integrated then validated hardware and as a building block inside of this we have validated hardware, the v Sand ready environments. We have integrated appliances and cloud delivered infrastructure, three ways that we deliver that integrate integrated hyperconverged infrastructure solution. And we have by far the broadest ecosystem of partners to do it. A broad set of the sand ready nodes from essentially everybody in the industry. Secondly, we have integrated appliances, the extract of vxrail that we have co engineered with our partners at Dell technology and today in fact Dell is releasing the power edge servers, a major step in blade servers that again are going to be powering vxrail and vxrack systems and we deliver hyperconverged infrastructure through a broader set of Vm ware cloud partners as well. At the heart of the hyperconverged infrastructure is v San and simply put, you know, be San has been the engine that's just been moving rapidly to take over the entire integration of compute and storage and expand to more and more areas. We have incredible momentum over 15,000 customers for v San Today and for those of you who joined us, we say thank you for what you have done with this product today. Really amazing you with 50 percent of the global 2000 using it know vm ware. V San Vxrail are clearly becoming the standard for how hyperconverge is done in the industry. Our cloud partner programs over 500 cloud partners are using ulv sand in their solution, you know, and finally the largest in Hci software revenue. 
Simply put the sand is the software defined storage technology of choice for the industry and we're seeing that customers are putting this to work in amazing ways. Vm Ware and Dell technologies believe in tech as a force for good and that it can have a major impact on the quality of life for every human on the planet and particularly for the most underdeveloped parts of the world. Those that live on less than $2 per day. In fact that this moment 5 billion people worldwide do not have access to modern affordable surgery. Mercy ships is working hard to change the global surgery crisis with greater than 400 volunteers. Mercy ships operates the largest NGO hospital ship delivering free medical care to the poorest of the poor in Africa. Let's see from them now. When the ship shows up to port, literally people line up for days to receive state of the art life, sane changing life saving surgeries, tumor site limbs, disease blindness, birth defects, but not only that, the personnel are educating and training the local healthcare providers with new skills and infrastructure so they can care for their own. After the ship has left, mercy ships runs on Vm ware, a dell technology with VX rail, Dell Isilon data protection. We are the it platform for mercy ships. Mercy ships is now building their next generation ship called global mercy, which were more than double. It's lifesaving capacity. It's the largest charity hospital ever. It will go live in 20 slash 20 serving Africa and I personally plan on being there for its launch. It is truly amazing what they are doing with our technology. Thanks. So we see this picture of the hybrid cloud. We've talked about how we do that for the private cloud. So let's look over at the public cloud and let's dig into this a little bit more deeply. You know, we're taking this incredible power of the Vm ware Cloud Foundation and making it available for the leading cloud providers in the world and with that, the partnership that we announced almost two years ago with Amazon and on the stage last year, we announced their first generation of products, no better example of the hybrid cloud. And for that it's my pleasure to bring to stage my friend, my partner, the CEO of aws. Please welcome Andy Jassy. Thank you andy. You know, you honor us with your presence, you know, and it really is a pleasure to be able to come in front of this audience and talk about what our teams have accomplished together over the last, uh, year. Yo, can you give us some perspective on that, Andy and what customers are doing with it? Well, first of all, thanks for having me. I really appreciate it. It's great to be here with all of you. Uh, you know, the offering that we have together customers because it allows them to use the same software they've been using to again, where cloud and aws is very appealing to manage their infrastructure for years to be able to deploy it an aws and we see a lot of customer momentum and a lot of customers using it. You see it in every imaginable vertical business segment in transportation. You see it with stagecoach and media and entertainment. You see it with discovery communications in education, Mit and Caltech and consulting and accenture and cognizant and dxc you see in every imaginable vertical business segment and the number of customers using the offering is doubling every quarter. 
So people were really excited about it and I think that probably the number one use case we see so far, although there are a lot of them, is customers who are looking to migrate on premises applications to the cloud. And a good example of that is mit. We're there right now in the process of migrating. In fact, they just did migrate 3000 vms from their data centers to Vm ware cloud native us. And this would have taken years before to do in the past, but they did it in just three months. It was really spectacular and they're just a fun company to work with and the team there. But we're also seeing other use cases as well. And you're probably the second most common example is we'll say on demand capabilities for things like disaster recovery. We have great examples of customers you that one in particular, his brakes, right? Urban in those. The brings security trucks and they all armored trucks coming by and they had a critical need to retire a secondary data center that they were using, you know, for Dr. so we quickly built to Dr Protection Environment for $600. Bdms know they migrated their mission critical workloads and Wallah stable and consistent Dr and now they're eliminating that site and looking for other migrations as well. The rate of 10 to 15 percent. It was just a great deal. One of the things I believe Andy, he'll customers should never spend capital, uh, Dr ever again with this kind of capability in place. That is just that game changing, you know, and you know, obviously we've been working on expanding our reach, you know, we promised to make the service available a year ago with the global footprint of Amazon and now we've delivered on that promise and in fact today or yesterday if you're an ozzie right down under, we announced in Sydney, uh, as well. And uh, now we're in US Europe and in APJ. Yeah. It's really, I mean it's very exciting. Of course Australia is one of the most virtualized places in the world and, and it's pretty remarkable how fast European customers have started using the offering to and just the quarter that's been out there and probably have the many requests customers has had. And you've had a, probably the number one request has been that we make the offering available in all the regions. The aws has regions and I can tell you by the end of 2019 will largely be there including with golf clubs and golf clap. You guys have been, that's been huge for you guys. Yeah. It's a government only region that we have that a lot of federal government workloads live in and we are pretty close together having the offering a fedramp authority to operate, which is a big deal on a game changer for governments because then there'll be able to use the familiar tools they use and vm ware not just to run their workloads on premises but also in the cloud as well with the data privacy requirements, security requirements they need. So it's a real game changer for government too. Yeah. And this you can see by the picture here basically before the end of next year, everywhere that you are and have an availability zone. We're going to be there running on data. Yup. Yeah. Let's get with it. Okay. We're a team go faster. Okay. You'll and you know, it's not just making it available, but this pace of innovation and you know, you guys have really taught us a few things in this respect and since we went live in the Oregon region, you know, we've been on a quarterly cadence of major releases and two was really about mission critical at scale and we added our second region. 
We added our hybrid cloud extension with M3, we moved to the global rollout and launched in Europe with M4, and we added a lot of the mission-critical governance aspects and started to attack all of the industry certifications. Today we're announcing M5, right? And with that, I think we have a couple of cool things. You know, two of the most important priorities for customers are cost and performance, and we have a couple of things to talk about today that I think hit both of those. On the storage side, we've combined the elasticity of Amazon Elastic Block Store, or EBS, with VMware's vSAN, and we now provide a storage option that is very high capacity and much more cost effective. You'll start to see this initially on the VMware Cloud on AWS R5 instances, which are memory-optimized compute instances, and this will change the cost equation: you'll be able to use EBS by default, and it'll be much more cost effective for storage- or memory-intensive workloads. It's something you've asked for, it's been very frequently requested, and it hits preview today. The other thing is that we've worked really hard together to integrate VMware's NSX along with AWS Direct Connect to give you private, even higher-performance connectivity between on premises and the cloud. So very, very exciting new capabilities that show deep integration between the companies. Yeah, and on that aspect of deep integration, it really has been the thing we committed to. We have large engineering teams working literally every day on how we fuse these platforms together in a deep and intimate way so that we can deliver new services, just like elastic DRS and the EBS-backed storage, really powerful capabilities, and that pace of innovation continues. So next, maybe M6? I don't know, we'll see. But we're continuing this torrid pace of innovation: completing all of the capabilities of NSX, full integration for all of the Direct Connect capabilities and really expanding that, continually improving capabilities on the platform, and we'll be adding PKS on top for expanded developer capabilities. Thank you. And we're continuing this pace of innovation going forward, but I think we also have a few other things to talk about today, Andy. Yeah, I think we have some news that hopefully people here will be pretty excited about. We have a pretty big database business at AWS, both on the relational and on the non-relational side, and the business is billions of dollars in revenue for us. On the relational side we have a service called Amazon Relational Database Service, or Amazon RDS, that hundreds of thousands of customers use because it makes it much easier for them to set up, operate and scale their databases. So many companies now are operating in hybrid mode, and will be for a while, and a lot of those customers have asked us: can you give us the ease of manageability of those databases, but on premises?
And so we talked about it, we thought about it, and we worked with our partners at VMware, and I'm excited to announce today, right now, Amazon RDS on VMware, which will bring all the capabilities of Amazon RDS to VMware's customers for their on-premises environments. So what you'll be able to do is provision databases, scale the compute or the memory or the storage for those database instances, and patch the operating system or the database engines. You'll be able to create read replicas to scale your database reads, and you can deploy those replicas either on premises or in AWS. You'll be able to deploy a high-availability configuration by replicating the data to different VMware clusters, create online backups that live either on premises or in AWS, and then, if you eventually want to move those databases to AWS, you'll be able to do so rather easily; you have a pretty smooth path. This is going to be available in a few months, and it will be available for Oracle, SQL Server, MySQL, PostgreSQL and MariaDB. I think it's very exciting for our customers, and I think it's also a good example of how we're continuing to deepen the partnership, listen to what customers want, and then innovate on their behalf. Absolutely. Thank you, Andy. It is thrilling to see this, and as we said when we began the partnership, it was a deep integration of our offerings and our go-to-market, but also building this bi-directional hybrid highway to give customers the capabilities they wanted, cloud to on premise and on premise to the cloud. It really is a unique partnership that we've built, the momentum we're feeling with our customer base, and the cool innovations that we're doing. Andy, thank you so much for joining us. We really have just seen incredible momentum, and as you might have heard from the earnings call we just finished for last quarter, we really saw customer momentum here accelerating. It's really exciting to see how customers are starting to do hybrid cloud at scale, and with this we're seeing VMware Cloud Foundation available on Amazon and available on premise. Very powerful. But it's not just the partnership with Amazon. We are thrilled to see the momentum of our VMware Cloud Provider Program, and this idea of the VMware cloud providers has continued to gain momentum in the industry for over five years. This program has now accumulated more than 4,200 cloud partners in over 120 countries around the globe. It gives you choice: your local providers, specialty offerings, some of the local trusted partners you already have, giving you the greatest flexibility to choose cloud providers that meet your unique business requirements. Last year we launched a program called VMware Cloud Verified, recognizing the most complete embodiment of the VMware Cloud Foundation offering by our cloud partners, and this logo lets you know that a provider has achieved the highest standard for cloud infrastructure and that you can scale and deliver your hybrid cloud partnering with them.
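Going back to the Amazon RDS on VMware announcement a moment ago: the pitch is that provisioning an on-premises instance feels like provisioning any other RDS instance. Purely as a hedged illustration of what a standard RDS provisioning call looks like through boto3 (the identifier, credentials, and sizes below are made up, and targeting the on-premises custom availability zone shown later in the demo would involve additional configuration not covered here):

```python
import boto3

# Standard RDS provisioning call; every name and size below is a placeholder.
rds = boto3.client("rds", region_name="us-west-2")

response = rds.create_db_instance(
    DBInstanceIdentifier="warehouse-safety-db",   # hypothetical identifier
    Engine="mysql",                               # the announcement also listed Oracle, SQL Server, PostgreSQL, MariaDB
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",        # use a secrets manager in real life
    AllocatedStorage=100,                         # GiB
    MultiAZ=False,
)
print(response["DBInstance"]["DBInstanceStatus"])

# The read replicas mentioned in the keynote use a separate call, e.g.:
# rds.create_db_instance_read_replica(
#     DBInstanceIdentifier="warehouse-safety-db-replica",
#     SourceDBInstanceIdentifier="warehouse-safety-db",
# )
```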
We've been thrilled to see the momentum that we've had with IBM as a huge partner and our business with them has grown extraordinarily rapidly and triple digits, but not just the customer count, which is now over 1700, but also in the depth of customers moving large portions of the workload. And as you see by the picture, we're very proud of the scope of our partnerships in a global basis. The highest standard of hybrid cloud for you, the Vm ware cloud verified partners. Now when we come back to this picture, you know we, you know, we're, we're growing in our definition of what the hybrid cloud means and through Vm Ware Cloud Foundation, we've been able to unify the private and the public cloud together as never before, but we're also seeing that many of you are interested in how do I extend that infrastructure further and farther and will simply call that the edge right? And how do we move data closer to where? How do we move data center resources and capacity closer to where the data's being generated at the operations need to be performed? Simply the edge and we'll dig into that a little bit more, but as we do that, what are the things that we offer today with what we just talked about with Amazon and our VCP p partners is that they can consume as a service this full vm ware Cloud Foundation, but today we're only offering that in the public cloud until project dimension of project dimension allows us to extend delivered as a service, private, public, and to the edge. Today we're announcing the tech preview, a project dimension Vm ware cloud foundation in a hyperconverged appliance. We're partnered deeply with Dell EMC, Lenovo for the first partners to bring this to the marketplace, built on that same proven infrastructure, a hybrid cloud control plane, so literally just like we're managing the Vm ware cloud today, we're able to do that for your on premise. You're small or remote office or your edge infrastructure through that exact same as a service management and control plane, a complete vm ware operated end to end environment. This is project dimension. Taking the vcf stack, the full vm ware cloud foundation stack, making an available in the cloud to the edge and on premise as well, a powerful solution operated by BM ware. This project dimension and project dimension allows us to have a fundamental building block in our approach to making customers even more agile, flexible, scalable, and a key component of our strategy as well. So let's click into that edge a little bit more and we think about the edge in the following layers, the compute edge, how do we get the data and operations and applications closer to where they need to be. If you remember last year I talked about this pendulum swinging of centralization and decentralization edge is a decentralization force. We're also excited that we're moving the edge of the devices as well and we're doing that in two ways. One with workspace, one for human optimized devices and the second is project pulse or Vm ware pulse. And today we're announcing pulse two point zero where you can consume it now as a service as well as with integrated security. And we've now scaled pulse to support 500 million devices. Isn't that incredible, right? I mean this is getting a scale. Billions and billions and finally networking is a key component. You all that. We're stretching the networking platform, right? 
And evolving how that edge operates in a more cloud and that's a service white and this is where Nsx St with Velo cloud is such a key component of delivering the edge of network services as well. Taken together the device side, the compute edge and rethinking and evolving the networking layer together is the vm ware edge strategy summary. We see businesses are on this multicloud journey, right? How do we then do that for their private of public coming together, the hybrid cloud, but they're also on a journey for how they work and operate it across the public cloud and the public cloud we have this torrid innovation, you'll want Andy's here, challenges. You know, he's announcing 1500 new services or were extraordinary innovation and you'll same for azure or Google Ibm cloud, but it also creates the same complexity as we said. Businesses are using multiple public clouds and how do I operate them? How do I make them work? You know, how do I keep track of my accounts and users that creates a set of cloud operations problems as well in the complexity of doing that. How do you make it work? Right? And your for that. We'll just see that there's this idea cloud cost compliance, analytics as these common themes that of, you know, keep coming up and we're seeing in our customers that are new role is emerging. The cloud operations role. You're the person who's figuring out how to make these multicloud environments work and keep track of who's using what and which data is landing where today I'm thrilled to tell you that the, um, where is acquiring the leader in this space? Cloudhealth technologies. Thank you. Cloudhealth technologies supports today, Amazon, azure and Google. They have some 3,500 customers, some of the largest and most respected brands in the, as a service industry. And Sasa business today rapidly span expanding feature sets. We will take cloudhealth and we're going to make it a fundamental platform and branded offering from the um, where we will add many of the other vm ware components into this platform, such as our wavefront analytics, our cloud, choreo compliance, and many of the other vm ware products will become part of the cloudhealth suite of services. We will be enabling that through our enterprise channels as well as through our MSP and BCPP partners as well know. Simply put, we will make cloudhealth the cloud operations platform of choice for the industry. I'm thrilled today to have Joe Consella, the CTO and founder. Joe, please stand up. Thank you joe to your team of a couple hundred, you know, mostly in Boston. Welcome to the Vm ware family, the Vm ware community. It is a thrill to have you part of our team. Thank you joe. Thank you. We're also announcing today, and you can think of this, much like we had v realize operations and v realize automation, the compliment to the cloudhealth operations, vm ware, cloud automation, and some of you might've heard of this in the past, this project tango. Well, today we're announcing the initial availability of Vm ware, cloud automation, assemble, manage complex applications, automate their provisioning and cloud services, and manage them through a brokerage the initial availability of cloud automation services, service. 
Your today, the acquisition of cloudhealth as a platform, the aware of the most complete set of multicloud management tools in the industry, and we're going to do so much more so we've seen this picture of this multicloud journey that our customers are on and you know, we're working hard to say we are going to bridge across these worlds of innovation, the multicloud world. We're doing many other things. You're gonna hear a lot at the show today about this year. We're also giving the tech preview of the Vm ware cloud marketplace for our partners and customers. Also today, Dell technologies is announcing their cloud marketplace to provide a self service, a portfolio of a Dell emc technologies. We're fundamentally in a unique position to accelerate your multicloud journey. So we've built out this any cloud piece, but right in the middle of that any cloud is the network. And when we think about the network, we're just so excited about what we have done and what we're seeing in the industry. So let's click into this a little bit further. We've gotten a lot done over the last five years. Networking. Look at these numbers. 80 million switch ports have been shipped. We are now 10 x larger than number two and software defined networking. We have over 7,500 customers running on Nsx and maybe the stat that I'm most proud of is 82 percent of the fortune 100 has now adopted nsx. You have made nsx these standard and software defined networking. Thank you very much. Thank you. When we think about this journey that we're on, we started. You're saying, Hey, we've got to break the chains inside of the data center as we said. And then Nsx became the software defined networking platform. We started to do it through our cloud provider partners. Ibm made a huge commitment to partner with us and deliver this to their customers. We then said, boy, we're going to make a fundamental to all of our cloud services including aws. We built this bridge called the hybrid cloud extension. We said we're going to build it natively into what we're doing with Telcos, with Azure and Amazon as a service. We acquired the St Wagon, right, and a Velo cloud at the hottest product of Vm ware's portfolio today. The opportunity to fundamentally transform branch and wide area networking and we're extending it to the edge. You're literally, the world has become this complex network. We have seen the world go from the old defined by rigid boundaries, simply put in a distributed world. Hardware cannot possibly work. We're empowering customers to secure their applications and the data regardless of where they sit and when we think of the virtual cloud network, we say it's these three fundamental things, a cloud centric networking fabric with intrinsic security and all of it delivered in software. The world is moving from data centers to centers of data and they need to be connected and Nsx is the way that we will do that. So you'll be aware of is well known for this idea of talking but also showing. So no vm world keynote is okay without great demonstrations of it because you shouldn't believe me only what we can actually show and to do that know I'm going to have our CTL come onstage and CTL y'all. I used to be a cto and the CTO is the certified smart guy. He's also known as the chief talking officer and today he's my demo partner. Please walk, um, Vm ware, cto ray to the stage. Right morning pat. How you doing? Oh, it's great ray, and thanks so much for joining us. Know I promised that we're going to show off some pretty cool stuff here. 
We've covered a lot already, but are you up to the task? We're going to try to run through a lot of demos, we're going to do it fast, and you're going to have to keep me on time. If I ask an awkward question, slow me down. Okay, it's my fault if you run long. Okay, I got it, I got it. Let's jump right in here. So I'm a CTO, I get to meet lots of customers. A few weeks ago I met the CIO of a large distribution company, and she described her IT infrastructure as consisting of a number of central data centers, and she also spoke of a large number of warehouses globally, each of which had local hyperconverged compute and storage, primarily running surveillance and warehouse management applications. And she posed me four questions. The first question she asked me was: how do I migrate one of these data centers to VMware Cloud on AWS? I want to get out of one of these data centers. Okay, that sounds like exactly what Andy and I were just talking about, exactly what you spoke to a few moments ago. She also wanted to simplify the management of the infrastructure in the warehouses themselves. Okay, so these are edge and smaller data centers that you have out there. Her applications at the warehouses needed to run locally, but her developers wanted to develop using cloud infrastructure and cloud APIs, a little bit like the RDS capability we just spoke about. And her final question was looking to the future: make all this complicated management go away. I want to be able to focus on my applications, because that's what my business is about, so give me some new ways to automate all of this infrastructure from the edge to the cloud. Sounds pretty clear. Can we do it? Yes, we can. So we're going to dive right into one of these demos, and the first demo we're going to look at is VMware Cloud on AWS. This is the best solution for accelerating the public cloud journey. So can we start the demo, please? What you're looking at here is one of those data centers, and you should be familiar with this product: it's the familiar vSphere client. You see it's got a bunch of virtual machines running in there. These are the virtual machines that we now want to migrate and move to VMC on AWS, so we're going to go through that migration right now. To do that we use a product that you've seen already, HCX; however, HCX has got some new cool features since the last time we demoed it, probably on this stage last year. One in particular is how we do bulk migration, and there's a new cool thing here, because we want to move the data center en masse, and the concept is Cloud Motion with vSphere Replication. What this does is replicate the underlying storage of the virtual machines using vSphere Replication, so if and when you want to do the final migration, it actually becomes a vMotion. That's what you see going on right here: the replication is in place, and when you want to actually move those virtual machines, what you do is a vMotion. The key thing to think about here is that this is an actual vMotion: the VMs, as they're moving, as they're migrating, remain live, just as they would in a vMotion across one piece of infrastructure. You get a complete application or data center migration with no downtime, with a standard vMotion kind of experience. Wow, that is really impressive. That's correct.
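HCX drives the replication and cutover shown in this demo, so none of it needs to be scripted by hand; but since the point being made is that the final cutover really is an ordinary relocation at the vSphere API level, here is a rough pyVmomi sketch of what a plain relocation call looks like. The vCenter address, credentials, and VM name are placeholders, and a real cross-cloud migration would go through HCX rather than a hand-built RelocateSpec.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Walk the vCenter inventory and return the first VM with a matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.DestroyView()

# All connection details below are placeholders; skipping certificate
# verification is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_vm(content, "warehouse-app-01")

    # A minimal relocate spec; in the keynote demo, HCX fills in the target
    # SDDC's host, resource pool, and datastore on VMware Cloud on AWS.
    spec = vim.vm.RelocateSpec()
    task = vm.RelocateVM_Task(spec)
    print("Relocation task started:", task.info.key)
finally:
    Disconnect(si)
```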
So one of the other things we want to talk about here is that as we move these virtual machines from the on-prem infrastructure to the VMC on AWS infrastructure, when we set up the cloud on VMC on AWS we only set up four hosts, and that might not be enough, because she is going to move the whole infrastructure of that data center. Now, this was something you and Andy referred to briefly earlier, this concept of elastic DRS. What elastic DRS does is allow VMC on AWS to react to the workloads as they're being created and pulled onto that infrastructure, and automatically pull new hosts into the VMC infrastructure along the way. So what you're seeing here is essentially VMC growing the infrastructure to meet the needs of the workloads themselves. Very cool. Alongside that elastic DRS, we also see the EBS capabilities; again, you guys spoke about this too. This is the ability to take the huge amount of storage that Amazon has in EBS and front it with vSAN, so you get the same vSAN experience, but with this enormous amount of storage capacity behind it. Wow, that's incredible. I'm excited about this; it's going to enable customers to migrate faster and larger than ever before. Correct. Now, she had a series of other questions. Okay, the second question was: what about all those data centers and those edge applications that I did not move? And this is where we introduce the project you've heard of already today, called Project Dimension. What this does is give you the simplicity of VMware Cloud but bring that out to the edge. What's basically going on here is that VMC on AWS is a service which manages your infrastructure in AWS; we now stretch that service out into your infrastructure, in your data center and at the edge, allowing us to manage that infrastructure in the same way. Once again, let's dive into a demo and take a look at what this looks like. What you've got here is a familiar series of services available to you, one of which is Project Dimension. When you enter Project Dimension, you first get a view of all of the different infrastructure you have available to you: your data centers, your edge locations. You can then dive deeply into one of these to get a closer look at what's going on. Here we're diving into one of these warehouses, and we see there's a problem here: there's a networking problem going on in this warehouse. How do we know? We know because VMware is running this as a managed service. We are directly monitoring your infrastructure, and when we discover something going wrong, we automatically create the service request, so somebody is already dealing with it. You have visibility into what's going on, but the VMware managed service is already chasing the problem for you. Oh, very good. So now we're seeing this dispersed infrastructure with Project Dimension, but what's running on it? Well, before we get to what's running on it, you've got another problem, and the problem is, of course, that if you're managing a lot of infrastructure like this, you need to keep it up to date. Once again, this is where the VMware managed service kicks in: we manage that infrastructure in terms of patching and updating it for you.
As an example, when we release a security patch, and here's one for the recent L1 Terminal Fault, the VMware managed service is already on it, making sure that your on-prem and edge infrastructure is up to date. Very good. Now, what's running? Okay, so what's running: we mentioned this case of software running at the edge infrastructure itself, and these are workloads running locally in those edge locations. This is a surveillance application; you can see it here at the bottom, it says warehouse safety monitor. This is an application which gathers images and stores them in a database, a MySQL database, on top there. Now this is where we leverage the technology you just learned about when Andy and Pat spoke about the ability to take RDS and run it on your on-prem infrastructure. The block of virtual machines at the bottom are the RDS components from Amazon running in your infrastructure or in your edge location, and this gives your developers the ability to leverage and operate against those APIs, but now the actual database infrastructure is running on prem. You might be doing this for performance reasons, because of latency, or you might be doing it simply because this data center is not always connected to the cloud. When you take a look under the hood and see what's going on here, what you actually see is vSphere, a modified version of vSphere, and you see this new concept of a custom availability zone. That is the availability zone running on your infrastructure which supports RDS. What's more interesting is when you flip back to the Amazon portal, which is typically what your developers are going to do: once again you see an availability zone in your Amazon portal, and this is the availability zone running on your equipment in your data center. So we've truly taken that RDS infrastructure and moved it to the edge, so the developer sees what they're comfortable with and the infrastructure team sees what they're comfortable with, bridging those two worlds. Fabulous. Right, so the final question we got here was: what's next? How do I begin to look to the future and say I want all of my infrastructure to be handled in an automated fashion? When you think about that, one of the questions is how we leverage new technologies such as AI and ML to do it. So what you've got here, and sorry, we're running a little bit late, what you've got here is how do I blend AI and ML with the power of what's in the data center itself. Okay, and we can do that. We're bringing AI and ML, right, and fusing them together as never before to truly change how the data center operates. Correct, and it is this merging of these things together which is extremely powerful in my mind. This is a little bit like a self-driving vehicle: think about a car driving down the street. A self-driving vehicle is consuming information from the environment around it, other vehicles, what's happening, everything including the weather, but it also has a lot of built-in knowledge that has been built up through self-learning and training along the way, and we've been collecting lots of that data for decades. Exactly, and we've got all of that from all the infrastructure we have; we can now bring it to bear. So what we're focusing on here is a project called Project Magna, and Project
Magna leverage is all of this infrastructure. What it does here is it helps connect the dots across huge datasets and again a deep insight across the stack, all the way from the application hardware, the infrastructure to the public cloud, and even the age and what it does, it leverages hundreds of control points to optimize your infrastructure on Kpis of cost performance, even user specified policies. This is the use of machine language in order to fundamentally transform. I'm sorry, machine learning. I'm going back to some. Very early was here, right? This is the use of machine learning and ai, which will automatically transform. How do you actually automate these data centers? The goal is true automation of your infrastructure, so you get to focus on the applications which really served needs of your business. Yeah, and you know, maybe you could think about that as in the past we would have described the software defined data center, but in the future we're calling it the self driving data center. Here we are taking that same acronym and redefining it, right? Because the self driving data center, the steep infusion of ai and machine learning into the management and automation into the storage, into the networking, into vsphere, redefining the self driving data center and with that we believe fundamentally is to be an enormous advance and how they can take advantage of new capabilities from bm ware. Correct. And you're already seeing some of this in pieces of projects such as some of the stuff we do in wavefront and so already this is how do we take this to a new level and that's what project magnet will do. So let's summarize what we've seen in a few demos here as we work in true each of these very quickly going through these demos. First of all, you saw the n word cloud on aws. How do I migrate an entire data center to the cloud with no downtime? Check, we saw project dementia, get the simplicity of Vm ware cloud in the data center and manage it at the age as a managed service check. Amazon rds and Vm ware. Cool Demo, seamlessly deploy a cloud service to an on premises environment. In this case already. Yes, we got that one coming in are in m five. And then finally project magna. What happens when you're looking to the future? How do we leverage ai and ml to self optimize to virtual infrastructure? Well, how did ray do as our demo guy? Thank you. Thanks. Thanks. Right. Thank you. So coming back to this picture, our gps for the day, we've covered any cloud, let's click into now any application, and as we think about any application, we really view it as this breadth of the traditional cloud native and Sas Coobernetti is quickly maybe spectacularly becoming seen as the consensus way that containers will be managed and automate as the framework for how modern APP teams are looking at their next generation environment, quickly emerging as a key to how enterprises build and deploy their applications today. And containers are efficient, lightweight, portable. They have lots of values for developers, but they need to also be run and operate and have many infrastructure challenges as well. Managing automation while patch lifecycle updates, efficient move of new application services, know can be accelerated with containers. We also have these infrastructure problems and you know, one thing we want to make clear is that the best way to run a container environment is on a virtual machine. You know, in fact, every leader in public cloud runs their containers and virtual machines. 
So coming back to this picture, our GPS for the day: we've covered any cloud, so let's click now into any application. And as we think about any application, we really view it as this breadth of the traditional, the cloud native, and SaaS. Kubernetes is quickly, maybe spectacularly, becoming seen as the consensus way that containers will be managed and automated, as the framework for how modern app teams are looking at their next-generation environment, and it is quickly emerging as key to how enterprises build and deploy their applications today. Containers are efficient, lightweight, and portable; they have lots of value for developers, but they also need to be run and operated, and they bring many infrastructure challenges as well. Managing automation, patch and lifecycle updates, and the efficient rollout of new application services can be accelerated with containers, but we also have these infrastructure problems, and one thing we want to make clear is that the best way to run a container environment is on a virtual machine. In fact, every leader in public cloud runs their containers in virtual machines. Google, the creator and arguably the world leader in containers, runs them all in VMs, both for their internal IT and for what they run as GKE for external users, and they just announced GKE On-Prem on VMware for their container environments. Google and all major clouds run their containers in VMs, and simply put, it's the best way to run containers. And through what we have done collectively, we have solved the infrastructure problems. As we saw earlier, cool new container apps are also typically some combination of cool new and legacy and existing environments, so how do we bridge those two worlds? Today, as people move rapidly forward with containers and Kubernetes, we're seeing a certain set of problems emerge, and Dan Kohn, the director of the CNCF, the Cloud Native Computing Foundation, the body for Kubernetes collaboration and the group that stewards the standardization of this capability, points out these four challenges: how do you secure them, how do you network them, how do you monitor them, and what do you do for the storage underneath them? Simply put, VMware is out to be, is working to be, is on our way to be the dial tone for Kubernetes. Now, some of you who are in your twenties might not know what that means, so walk over to a gray hair or come and see me afterward and we'll explain what dial tone means. Or, stated differently, the enterprise-grade standard for Kubernetes. And for that we are working together with our partners at Google as well as Pivotal to deliver VMware PKS, Kubernetes as an enterprise capability. It builds on BOSH, the lifecycle engine that's foundational to the Pivotal offerings today; it builds on, and is committed to stay current with, the latest Kubernetes releases; it builds on NSX, the SDN for container networking; and it includes additional contributions we're making, like Harbor, the VMware open-source contribution for the container registry. It packages those together and makes them available in hybrid cloud as well as public cloud environments. With PKS, operators can efficiently deploy, run, and upgrade their Kubernetes environments on the SDDC or on all major public clouds, while developers have the freedom to embrace and run their applications rapidly and efficiently. Simply put, PKS is the standard for Kubernetes in the enterprise, and underneath that, NSX is emerging as the standard for software-defined networking. When we think about that quote on the challenges of Kubernetes today, we see that networking is one of the huge challenges underneath, and in a containerized world things are changing even more rapidly; my network environment is moving more quickly. NSX makes it easy to automate networking and security for the rapid deployment of containerized environments. It fully supports VMware PKS, fully supports Pivotal Application Service, and we're also committed to fully supporting all of the major Kubernetes distributions such as Red Hat, Heptio, and Docker. NSX is the only platform on the planet that can address the complexity and scale of container deployments. Taken together, VMware PKS is production-grade Kubernetes for the enterprise, available on hybrid cloud and on major public clouds. Now, let's not just talk about it again; let's see it in action. Please welcome to the stage Wendy Cartee, the senior director of cloud native marketing for VMware, joining Ray. Thank you.
Hi, everybody. So we're going to talk about PKS, because more and more new applications are built using Kubernetes and containers, and with VMware PKS we get to simplify the deployment and the operation of Kubernetes at scale. Ray, you're the expert on all of this, right? So can you take us through the scenario of how VMware PKS can really help a developer operating in the Kubernetes environment build great applications, but also, from an administrator point of view, how I can really handle things like networking, security, and those configurations? Sounds great. I'd love to dive into the demo here. Okay. Our demo is VMware PKS running Kubernetes on vSphere. Now, PKS has a lot of cool functions built in, one of which is NSX, and today what I'm going to show you is how NSX will automatically bring up network objects as Kubernetes namespaces are spun up. So we're going to start with the vSphere Client, which has been extended to show PKS-deployed Kubernetes clusters. We're going to go into PKS instance one, and we see that there are five clusters running. We're going to select one of the clusters, called application production, and we see that it is running NSX. Now, a cluster typically has multiple users, and users are assigned namespaces; these namespaces are essentially a way to provide isolation and dedicated resources to the users in that cluster. So we're going to check how many namespaces are running in this cluster, and we've brought up the Kubernetes UI. We're going to click on namespaces, and we see that this cluster currently has four namespaces running. What we're going to do next is bring up a new namespace and show that NSX will automatically bring up the network objects required for that namespace. To do that, we're going to upload a YAML file, and your developer may actually use the kubectl command to do this as well. We're going to check the namespaces, and there it is: we have a new namespace called pks-rocks. Yeah. Okay. Now, it's great that we have a new namespace, but we want to make sure it has the network elements assigned to it, so we're going to go to the NSX manager and hit refresh, and there it is: pks-rocks has a logical router and a logical switch automatically assigned to it, and it's up and running. So I want to interrupt here, because you made this look so easy, and I'm not sure people realize the power of what happened here. The developer, working with the Kubernetes API they're familiar with, added a new namespace, and behind the scenes PKS and NSX took care of the networking. It's the combination of NSX and what we do in PKS that truly automates this function. Absolutely. So this means that if you are on the infrastructure operations side, you don't need to worry about your developers spinning up namespaces, because NSX will take care of bringing the networking up and then bringing it back down when the namespace is no longer used.
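As a rough equivalent of what the demo does with a YAML file, here is a minimal sketch that creates a namespace through the official Kubernetes Python client; in a PKS-plus-NSX setup like the one shown, the corresponding logical router and switch would then be created automatically by the platform. The namespace name comes from the demo, while the kubeconfig location is an assumption.

```python
# Minimal sketch: create a namespace via the Kubernetes API, the same action
# the demo performs with a YAML upload / kubectl. Requires `pip install kubernetes`
# and a kubeconfig pointing at the PKS-provisioned cluster (an assumption here).
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config by default
v1 = client.CoreV1Api()

namespace = client.V1Namespace(
    metadata=client.V1ObjectMeta(name="pks-rocks")   # name from the demo
)
v1.create_namespace(namespace)

# List namespaces to confirm the new one exists; with NSX integration,
# the logical router and switch for it are provisioned behind the scenes.
for ns in v1.list_namespace().items:
    print(ns.metadata.name)
```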
Great, but that's not it. Now, I was in operations before, and I know how hard it is for enterprises to roll out a new product without visibility. Right, so PKS takes care of those day-two operational needs as well. While it's running your clusters, it's also exporting metadata so that your developers and operators can use Wavefront to gain deep visibility into the health of the cluster as well as the resources consumed by the cluster. So here you see the Wavefront UI, and it's showing you the number of nodes running, active pods, inactive pods, et cetera. You can also dive deeper into the analytics and look at information such as the namespace, so you see pks-rocks there, and you see the number of active nodes running as well as the CPU utilization and memory consumption of that namespace. So now pks-rocks is ready to run containerized applications and microservices. So that was just a quick highlight of a demo; to see a little bit more of what PKS can do, where can we learn more? We'd love to show you more; please come by the booth, where we have more cool functions running on PKS, and we'd love to have you stop by. Excellent. Thank you, Wendy. Thank you. Yeah, so when we look at these types of workloads now running on vSphere, containers and Kubernetes, we also see a new type of workload beginning to appear, and these are workloads which are basically machine learning and AI, and in many cases they leverage a new type of infrastructure: hardware accelerators, typically GPUs. What we're going to talk about here is how NVIDIA and VMware have worked together to give you the flexibility to run sophisticated VDI workloads, but also to leverage those same GPUs for deep learning inference workloads, also on vSphere. So let's dive right into a demo here. What you're looking at here is the standard vRealize Operations product, and you see we've got two sets of applications: a VDI desktop workload and machine learning. The graph is showing what's happening with the VDI desktops. These are office workers leveraging these desktops every day, so of course the infrastructure is super busy during the daytime when they're in the office, but the green area shows that it is not being used very heavily outside of those times. So let's take a look at what happens with the machine learning application. In this case, this organization leverages those available GPUs to run the machine learning operations outside the normal working hours.
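The scheduling idea here, VDI by day and machine learning by night on the same GPUs, can be illustrated with a trivial sketch. This is not how vSphere or vRealize implement it; it is just a hypothetical off-hours gate showing the policy the demo describes, with the office-hours window and the job runner as assumptions.

```python
# Illustrative sketch of the "GPUs for VDI by day, ML by night" policy from the demo.
# The office-hours window and the job runner are assumptions, not product behavior.
from datetime import datetime, time

OFFICE_START = time(8, 0)    # assumed start of the VDI-heavy window
OFFICE_END = time(18, 0)     # assumed end of the VDI-heavy window

def gpus_reserved_for_vdi(now=None):
    """True while office workers need the GPUs for their virtual desktops."""
    now = now or datetime.now()
    return OFFICE_START <= now.time() <= OFFICE_END

def maybe_run_ml_batch(run_job):
    """Dispatch the image-classification batch only when the GPUs are free."""
    if gpus_reserved_for_vdi():
        print("GPUs busy with VDI; deferring ML batch.")
        return False
    run_job()
    return True

# Example usage with a placeholder job.
maybe_run_ml_batch(lambda: print("Running image-classification batch on freed GPUs"))
```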
Let's take a little bit of a deeper dive into what the application is before we see what we can do from an infrastructure and configuration point of view. This machine learning application processes a vast number of images and categorizes them, and as it does so it puts each of them in a database. You can see it's operating here relatively fast, and it's leveraging some GPUs to do that: a typical image-processing type of machine learning problem. Now let's dive in and look at the infrastructure which is making this happen. First of all, we're going to look only at the VDI infrastructure here. I've got a bunch of these VDI applications running, and what I want to do is move them so that I can make this image-processing application run a lot faster. Now, normally you wouldn't do this, but Pat insisted that we do this demo at 10:30 in the morning when the office workers are in there, so we're going to move all the VDI workloads over to the other cluster, and that's what you're seeing going on right now. As they move over to this other cluster, we are freeing up all of the infrastructure, the GPUs, that the VDI workload was using here. We see them moving across, and now you've freed up that infrastructure. So now we want to take a look at the application itself, the machine learning application, and see how we can make use of that newly freed-up infrastructure. The application is running using one GPU in a vSphere cluster, but I've got three more GPUs available now because I've moved the VDI workloads. We simply modify the application, let it know that these are available, and you suddenly see an increase in processing capability because of the flexibility we've gained in accessing those GPUs. So what you see here is that the same GPUs that you use for VDI, which you probably have in your infrastructure today, can also be used to run sophisticated machine learning and AI applications on your vSphere infrastructure.
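The "tell the application more GPUs are available" step is glossed over in the demo, so here is a minimal, generic sketch of one common way an application can pick up newly freed GPUs: enumerate the visible devices and fan the image batches out across them. The use of PyTorch for device discovery is my assumption; the demo does not say how its application does this.

```python
# Hypothetical sketch: fan image batches out across whatever GPUs are visible.
# PyTorch is used here only as a convenient way to enumerate devices.
import torch

def available_devices():
    """Use every visible GPU, or fall back to CPU if none are present."""
    n = torch.cuda.device_count()
    return [torch.device(f"cuda:{i}") for i in range(n)] or [torch.device("cpu")]

def classify_batches(batches, model_factory):
    """Round-robin batches across devices, with one model replica per device."""
    devices = available_devices()
    models = [model_factory().to(d).eval() for d in devices]
    results = []
    with torch.no_grad():
        for i, batch in enumerate(batches):
            d = devices[i % len(devices)]
            results.append(models[i % len(devices)](batch.to(d)).cpu())
    return results

# After the VDI workloads vMotion away, device_count() goes from 1 to 4 and the
# same code automatically spreads the image-classification work across all four GPUs.
```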
So let's summarize what we've seen in the various demos in this section. First of all, we saw how VMware PKS simplifies the deployment and operation of Kubernetes at scale. We've also seen that, leveraging NVIDIA GPUs, we can now run the most demanding workloads on vSphere. When we think about all of these applications and these new types of workloads that people are running, I want to take one second to speak to another workload that we're seeing beginning to appear in the data center, and this is of course blockchain. We're seeing an increasing number of organizations evaluating blockchains for smart contract and digital consensus solutions, so this technology is really becoming, or potentially becoming, a critical part of how businesses will interact with each other and how they will work together. With Project Concord, an open-source project that we're releasing today, you get the choice, performance, and scale of verifiable trust, which you can bring to bear and run in the enterprise. But this is not just another blockchain implementation. We have focused very squarely on making sure that this is good for enterprises: it focuses on performance, it focuses on scalability. We have seen examples where running consensus algorithms has taken over 80 days on some of the most common and widely used infrastructure in blockchain, and with Project Concord you can do that in two and a half hours. So I encourage you to check out this project on GitHub today; you'll also see lots of activity around the whole conference speaking about this. Now we're going to dive into another section, which is the any-device section, and for that I need to welcome Pat back up. Thank you, Pat. Thanks, Ray. So, diving into the any-device piece of the puzzle: as we think about the superpowers that we have, maybe there is no area where they are more visible than in the any-device aspect of our picture. As we think about this, think about mobility and how it's enabling new things like desktop as a service in the mobile area, this breadth of smartphones and devices; AI and machine learning allow us to manage and secure them; and there is this expanding envelope of devices at the edge that need to be connected, wearables, 3D printers, and so on. We've also seen increasing research that says engaged employees are at the center of business success; engaged employees are the critical ingredient for digital transformation. And frankly, this is how I run VMware: I have my device, my work, all my applications, and every one of my 23,000 employees is running on our transformed Workspace ONE environment. Research shows that companies that give employees ready, anytime access are nearly three times more likely to be leaders in digital transformation, that employees spend 20 percent of their time today on manual processes that can be automated, and that team collaboration and the speed of decision-making increase by 16 percent with engaged employees on modern devices. Simply put, this is a critical aspect of enabling your business. But you remember this picture of the silos that we started with; each of these environments has its own tribal communities of management, security, and automation associated with it, and the complexity associated with these is mind-boggling. As we think about them, remember "I'm a PC, and I'm a Mac"? Well, now you have "I'm an iOS," "I'm a Droid," and other VDI, and "I'm a connected printer," and "I'm a connected watch." You remember Citrix management, and Good is now bad, and SCCM, a failed model, and VPNs and XenApp. The chaos is now over. At the center of that is VMware Workspace ONE: get out of the business of managing devices, automate them from the cloud, but still have enterprise-grade, secure, cloud-based analytics that bring new capabilities to this critical topic. You can focus your energy on creating employee and customer experiences, with new capabilities like AirLift, the new capability to help customers migrate from their SCCM environment to modern management, expanding the use of Workspace ONE Intelligence. Last year we announced the Chromebook and a partnership with HP, and today I'm happy to announce the next step in our partnerships, with Dell. Today we're announcing Dell Provisioning for VMware Workspace ONE as part of Dell's Ready to Work solutions. Dell is taking the next leap and bringing Workspace ONE into the core of their client offerings. The way you can think about this is literally a Dell drop-shipped laptop showing up for a new employee: day-one productivity. You give them their credential, and everything else is delivered by Workspace ONE: your image, your software, everything patched and upgraded. It transforms your business, beginning with that device experience that you give to your customer. And again, we don't want to just talk about it; we want to show you how this works. Please welcome to the stage Renu, the head of our desktop products marketing. Thank you. So we just heard from Pat about how Workspace ONE, integrated with Dell laptops, is really set up to manage Windows devices. What we're broadly focused on here is how we get a truly modern management system for these devices, but one that has intelligence behind it to make sure that we keep a good understanding of how to keep these devices always up to date and secure. Can we start the demo, please? So what we're seeing here is the front screen of Workspace ONE, and you see you've got multiple devices, a little bit like the demo that Pat showed: I've got iOS, Android, and of course Windows. Renu, can you please take us through how Workspace ONE really changes the ability of an IT administrator to update and manage Windows in their environment? Absolutely. With Windows 10, Microsoft has finally joined the modern management movement, and we are really excited about that.
Now, the good news about modern management is the frequency of OS updates and how quickly they come out, because you can address all of those security issues that are hitting our radar on a daily basis. But the bad news about modern management is also the frequency of those updates, because all of us IT admins have to test each and every one of our applications with that latest version, since we don't want to roll out an update that causes problems. With Workspace ONE, we simply automate this and provide you with the app compatibility information right out of the box, so you can now automate that update process. Let's take a quick look. Let's drill down further into the Windows devices. What we'll see is that only a small percentage of those devices are on the latest version of the operating system. Now, that's not a good thing, because that version might have an important security fix. Let's scroll down further and see what the issue is. We find that it's related to app compatibility; in fact, 38 percent of our devices are blocked from being upgraded, and the issue is app compatibility. Now, we were able to find that not by asking the admins to test each and every one of those apps; we combined Windows analytics data with app intelligence out of the box, and we provide that information right here inside the console. Let's dig down further and see what those devices and apps look like. So, Renu, this is the part that I find most interesting. If I am a system administrator, at this point Workspace ONE is giving me a key piece of information: it says that if you proceed with this update, it's going to fail 84, 85 percent of the time. So that's an important piece of information, but it's not only telling me that; it is also telling me, roughly speaking, why it thinks it's going to fail. We've got a number of apps which are not ready to work with this new version, particularly the Mondo Card sales lead tracker app. So what we need to do is get engineering to tackle the problems with this app and make sure that it's updated. So let's get to fixing it. In order to fix it, what we'll do is create an automation, and we can do this right out of the box. This automation will open up a Jira ticket right from within the console to inform the engineers about the problem, and not just that: we can also flag and send a notification to the engineering manager so that it's top of mind and they can get working on the fix right away. Let's go ahead and save that automation, and right here you see the automation that we just saved. So what's happening here is essentially that this update is now scheduled, meaning we can go and update all those Windows devices, but Workspace ONE is holding off on proceeding with that update, waiting for the engineers to update the app which is going to cause the problem. That's going to take them some time, right? So the engineers have been working on this, they have a fix, and let's go back and see what's happened to our devices. Going back into the OS updates, what we'll find is that we've now unblocked those devices from being upgraded. The 38 percent has dropped dramatically, and you can rest easy knowing that all of the devices are compliant and on the latest version of the operating system. And again, this is just a snapshot of the power of Workspace ONE. To learn more and see more, I invite you all to join our EUC showcase keynote later this evening. Okay.
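The automation in the demo files a Jira ticket from inside the Workspace ONE console; as a rough, outside-the-product illustration of that kind of integration, here is a minimal sketch that creates an issue through Jira's standard REST API. The Jira URL, project key, credentials, and failure numbers are placeholders, and the Workspace ONE side of the automation is not shown.

```python
# Illustrative only: file a Jira issue the way the demo's automation does from
# within the console. URL, project key, and credentials are placeholders.
import requests

JIRA_BASE = "https://jira.example.com"        # placeholder Jira instance
AUTH = ("automation-bot", "api-token-here")   # placeholder credentials

def open_app_compat_ticket(app_name: str, failure_rate: float) -> str:
    payload = {
        "fields": {
            "project": {"key": "ENG"},            # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": f"{app_name} blocks Windows 10 feature update",
            "description": (
                f"App-compatibility data predicts the update will fail "
                f"~{failure_rate:.0%} of the time because of {app_name}. "
                "Please update the app so devices can be unblocked."
            ),
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]   # e.g. "ENG-1234"

# Example, using the app called out in the demo:
# print(open_app_compat_ticket("Mondo Card Sales Lead Tracker", 0.85))
```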
So we've spoken about the presence of these new devices that IT needs to be able to manage and operate across everything that they do. But we're also seeing the emergence of a whole new class of computing device, devices which we commonly speak of as being at the edge: embedded devices, or IoT. In many cases these will be in factories, they'll be in your automobiles, they'll be in buildings, controlling the building itself, air conditioning, et cetera, and quite often they sit in some form of industrial environment. It's something like this, where you've got a wind farm with compute embedded in each of these turbines. This is a new class of computing which needs to be managed and secured, and we think virtualization can do a pretty good job of that: a new virtualization frontier, right at the edge, for IoT and IoT gateways, and that's going to open up a whole new realm of innovation in that space. Let's dive down and take in a demo of this space. Well, let's do that. What we're seeing here is a wind turbine farm, very different from the data centers we're used to, and all the compute infrastructure is being managed by vCenter. We see two edge gateway hosts, and they're running a very mission-critical safety watchdog VM right on there. Now, the safety watchdog VM is in FT mode because it's collecting a lot of the important sensor data and running the mission-critical operations for the turbine. FT mode, or fault tolerance mode, is a pretty sophisticated virtualization feature that allows two copies of an application to essentially run in lockstep, so if there's a failure, the other one takes over immediately. So this sophisticated virtualization feature can be brought out all the way to the edge. Exactly. So, just like in the data center, we want to perform an update. As we perform that update, the first thing we'll do is suspend FT on that safety watchdog. Next, we'll put host two-oh-five into maintenance mode. Once that's done, we'll see the power of the vMotion that we're all familiar with: we'll start to see all the virtual machines vMotion over to the second, backup host. Again, all the maintenance, all the updates, without skipping a heartbeat, without taking down any daily operations. So what we're seeing here is the basic power of virtualization being brought out to the edge: vMotion, maintenance mode, et cetera. Great, but what's the big deal? We've been doing that for years. Come on, what's the big deal? Well, when you get to the edge, Pat, you're dealing with a whole new class of infrastructure. You're dealing with embedded systems and new types of CPU architectures and processors. This whole demo has been done on ARM64: virtualization brought to ARM64 for embedded devices. So we're doing this on ARM at the edge? Correct, specifically focused on embedded and edge OEMs. Okay. Now, that's good. Thank you, Ray. Actually, we've got a summary here. Pat, just a second before you disappear, there's a lot to rattle off from what we've just seen, right? We've seen Workspace ONE cross-platform management, and we've also seen, of course, ESXi for ARM, bringing the power of ESXi to the edge on ARM64 platforms. Okay. Okay. Thank you. Thanks.
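The maintenance-mode step in the edge demo maps to a standard vSphere API call. Here is a minimal sketch using the pyVmomi SDK; the vCenter address, credentials, and the host name are placeholders, and suspending FT plus the subsequent vMotions are handled by vSphere and DRS rather than by this snippet.

```python
# Hypothetical sketch using pyVmomi: put an edge host into maintenance mode,
# the step that triggers the vMotions shown in the demo.
# vCenter address, credentials, and the host name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def enter_maintenance(host_name: str) -> None:
    ctx = ssl._create_unverified_context()  # lab convenience only; don't do this in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password-here", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == host_name)
        # DRS/vMotion evacuates running VMs while the task completes.
        task = host.EnterMaintenanceMode_Task(timeout=0)
        print(f"Maintenance mode requested for {host_name}: {task.info.state}")
    finally:
        Disconnect(si)

# enter_maintenance("edge-host-205")   # "two-oh-five" from the demo; name assumed
```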
Now let's look at a customer who is taking advantage of everything that we just saw, and again, it's a story of a customer that is changing lives in a fundamental way. Let's see Make-A-Wish. So, when a family gets the news that a child is sick, and it's a critical illness, it could be a life-threatening illness, the whole family is turned upside down. Imagine somebody comes to you and they say: what's the one thing you want, the thing that's in your heart? You tell us, and then we make that happen. So I was just calling to give you the good news that we're going to be able to grant Jackson a wish. Make-A-Wish is the largest wish-granting organization in the United States. Make-A-Wish was featured in a CBS 60 Minutes episode; interestingly, it got a lot of hits, but unfortunately for the IT team, the whole website crashed. Make-A-Wish is going through a program right now where we're centralizing technology and putting certain security standards in place at our chapters. So what you're seeing here is that we're configuring certain cloud services to make sure that they are always able to deliver on the mission, whether they have a local problem or not. As we continue to grow the partnership and work with VMware, it's enabling us to become more efficient in our processes and allows us to grant more wishes. There was a little girl; she had a two-year-old brother. She just wanted a puppy, and she was forthright: I want to name the puppy after my name so my brother will always have me. That, from a five-year-old. We can't change their medical outcome, but we can change their spiritual outcome and we can transform their lives. Thank you. Working together with you, we are truly making wishes come true. The last topic I want to touch on today, and maybe the most important to me personally, is security. Fundamentally, when we think about this topic of security, I'll say it's broken today, and we would just say that the industry got it wrong: we're trying to bolt security on and chase bad. When we think about our security spend, we're spending more and we're losing more; every day we're investing more in this aspect of our infrastructure and we're falling further behind. We believe that we have to have far fewer security products and much more security. Fundamentally, if you think about the problem: we build infrastructure, generic infrastructure, we then deploy applications, all kinds of applications, and we're seeing all sorts of threats launched at them daily, tens of millions of them. Your simple virus scanner has tens of millions of rules running and changing many times a day. We simply believe the security model needs to change. We need to move from bolted-on and chasing bad to an environment that has intrinsic security and is built to ensure good. This is the idea of built-in security. We are taking every one of the core VMware products and building security directly into it. We believe that with this we can eliminate much of the complexity, many of the sensors and agents and boxes; instead, they'll directly leverage the mechanisms in the infrastructure, and we're using that infrastructure to lock things down so they behave as we intended, to ensure good: on the user side with Workspace ONE, on the network side with NSX and microsegmentation, in storage with native encryption, and on the compute side with App Defense. We are building in security; we're not chasing threats or adding on, we're radically reducing the attack surface. When we look at our applications in the data center, you see this collection of machines running inside of it, typically running on vSphere, and those machines are increasingly connected through NSX.
Last year we introduced a breakthrough security solution called App Defense. App Defense leverages the unique insight we get into the application so that we can understand the application and map it into the infrastructure, and then you can take that understanding, that manifest of its behavior, and lock those VMs down to the intended behavior, and we do that without the operational and performance burden of agents and other rear-view approaches to attack detection. We're shrinking the attack surface, not chasing the latest attack vector. And this idea of bolt-on security and chasing bad, you see it right in the network: machines have lots of connectivity and lots of applications running, and when something bad happens, it basically has unfettered access to move horizontally through the data center. Most of our security is north-south, while most of the attacks are east-west. We introduced this idea of microsegmentation five years ago, and with it we're enabling organizations to segment networks and separate sensitive applications and services as never before. This idea isn't new; it just was never practical before NSX. But we're not standing still. Our teams are innovating to leap beyond microsegmentation, and we see what's next in three simple words: learn, lock, and adapt. Imagine a system that can look into the applications and understand their behavior and how they should operate. We're using machine learning and AI, instead of chasing bad, to ensure good: the system can then lock down that behavior so it consistently operates that way. But finally, we know we have a world of increasingly dynamic applications, and as we move to more containerized microservices, we know this world is changing, so we need to adapt; we need more automation to adapt to the current behavior. Today I'm very excited to have two major announcements that deliver on this vision. The first of those is vSphere Platinum: our flagship VMware vSphere product now has App Defense built right in. Platinum will enable virtualization teams, yes, go ahead, Platinum will enable virtualization teams to make an enormous contribution to the security profile of your enterprise. You can tell the system, for every VM, its purpose and its behavior, and that's what it's allowed to do, dramatically reducing the attack surface without impact on operations or performance. The capability is so powerful, so profound, that we want you to be able to leverage it everywhere, and that's why we're building it directly into vSphere: vSphere Platinum. I call it the burger and fries; nobody leaves the restaurant without the fries, and who would possibly run a VM in the future without turning security on? That's how we want this to work going forward: vSphere Platinum. And as powerful as microsegmentation has been as an idea, we're taking the next step with what we call adaptive microsegmentation. We are fusing together App Defense and vSphere with NSX to allow us to align the policies of the application across vSphere and the network. We can then lock down the network and the compute and automate the formation of the microsegment. Taken together, that is adaptive microsegmentation.
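"Learn, lock, and adapt" can be illustrated outside the product with a tiny sketch: during a learning window you record each process's observed behaviors, for example which ports it talks on, then in lock mode anything outside that recorded manifest is flagged. This is a conceptual illustration under my own assumptions, not how App Defense is actually implemented.

```python
# Conceptual sketch of "learn" and "lock": build a per-process behavior manifest
# during a learning window, then flag deviations once locked. Not App Defense itself.
from collections import defaultdict

class BehaviorManifest:
    def __init__(self):
        self.allowed = defaultdict(set)   # process name -> set of (protocol, port)
        self.locked = False

    def observe(self, process: str, protocol: str, port: int) -> bool:
        """Return True if the event is allowed; record it while still learning."""
        event = (protocol, port)
        if not self.locked:
            self.allowed[process].add(event)    # learn phase: observed behavior is "intended"
            return True
        return event in self.allowed[process]   # lock phase: only the manifest is allowed

manifest = BehaviorManifest()
manifest.observe("ecommerce-web", "tcp", 443)    # learned during the baseline window
manifest.locked = True                           # the "verify and protect" moment
print(manifest.observe("ecommerce-web", "tcp", 443))    # True: intended behavior
print(manifest.observe("ecommerce-web", "tcp", 4444))   # False: deviation to alert on
```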
But again, we don't want to just tell you about it; we want to show you. Please welcome to the stage VJ, who heads our machine learning team for App Defense. Very good, VJ, thanks for joining us. So, you know, I talked about this idea of being able to learn, lock, and adapt. Can you show it to us? Great. Yeah. Thank you. With vSphere Platinum, what we have done is put everything you need to learn, lock, and adapt right into the infrastructure. The next time you bring up your vSphere client, you'll actually see a difference right in there. Let's go to the demo. There you go. When you look at App Defense there, what you see is all your guest virtual machines and all your hosts, hundreds of them, and thousands of virtual machines, enabled for App Defense. What that does is immediately give you visibility into the processes running on those virtual machines and the risk, and for the first time, think about it, for the first time you're looking at the infrastructure through the lens of an application. Here, for example, is the e-commerce application: you can see the components that make up that application, how they interact with each other, the specific process, a specific IP address on a specific port. That's what you get. So we're learning the behavior. Yes. That's very good, but how do you make sure you only learn good behavior? Exactly, how do we make sure that it's not bad? We actually verify it; we ensure it's all good. We ensure that every binary's reputation is verified, and we ensure that the hash is verified. Let's go to svchost, for example. This process can exhibit hundreds of behaviors across numerous hosts. What we do here is actually verify those behaviors: machine learning models that have been trained on millions of instances of good and bad behavior automatically verify them for you. Okay, so we learn, and we learn only good. Now, lock: how does that work? Well, once you've learned the application, locking it is as simple as clicking on that verify-and-protect button, and then you can lock both the compute and the network, and it's done. So we've pushed those policies into NSX, microsegmentation has been established, and we've actually locked down the compute. What about the operating system? Exactly. Let's first look at compute: the processes and the behaviors are locked down to exactly what is allowed for that application, and we have baked in the policies and programmed your firewall. This is NSX being configured automatically for you, with one single click. Very good. So we said learn, lock. Now, how does this adapt thing work? Well, change is the only constant; modern applications change on a continuous basis. What we do is actually pretty simple: we look at every change as it comes in and determine whether it is good or bad. If it's good, we allow it and update the policies; if it's bad, we deny it. Let's look at an example: here is a process exhibiting a behavior that we've not seen during the learning period. Okay, so this machine has never behaved this way. But again, our machine learning models have seen thousands of instances of this process, and they know this is normal; it talks on port 389 all the time. So the system has done a few things: it has lowered the criticality of the alarm, okay, the false positive, the bane of security operations, and it has gone and updated the locks on compute and network to allow for that behavior, and the application continues to work. Okay, so we can learn and adapt and take action right through the compute and the network.
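The adapt step can be sketched as an extension of the manifest idea above: when a locked process shows a new behavior, score it with a model trained on known-good instances, and either extend the manifest (avoiding the false positive, like the port-389 example) or raise an alarm. Again, this is a hypothetical illustration; the scoring function here is a stand-in for the real trained models.

```python
# Hypothetical "adapt" step: score a never-before-seen behavior and either
# extend the allowed manifest or alert. The scorer is a stand-in for trained models.
from collections import defaultdict

allowed = defaultdict(set)             # process -> set of (protocol, port) learned earlier
allowed["svchost"].add(("tcp", 135))   # behavior captured during the learning window

def model_confidence_benign(process, protocol, port) -> float:
    """Stand-in for models trained on millions of good/bad behavior instances."""
    fleet_benign = {("svchost", "tcp", 389): 0.98}   # e.g. LDAP seen as normal fleet-wide
    return fleet_benign.get((process, protocol, port), 0.05)

def adapt(process, protocol, port, threshold=0.9) -> str:
    event = (protocol, port)
    if event in allowed[process]:
        return "allowed"                                    # already in the locked manifest
    if model_confidence_benign(process, protocol, port) >= threshold:
        allowed[process].add(event)                         # extend compute/network locks
        return "adapted: policy updated, alarm downgraded"
    return "blocked: high-criticality alarm raised"

print(adapt("svchost", "tcp", 389))    # adapted (known-normal behavior, false positive avoided)
print(adapt("svchost", "tcp", 6667))   # blocked (unknown behavior, low confidence)
```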
What about the client? Well, that we do with Workspace ONE Intelligence, which protects and manages the end-user endpoint, and Workspace ONE Intelligence and NSX actually work together to protect your entire data center infrastructure. But don't take my word for it; you can watch it for yourself tomorrow in Tom Corn's keynote at 1:00 PM. Be there or be nowhere. Thank you, VJ, great job. Thank you so much. So this idea of intrinsic security and ensuring good, we believe, fundamentally changes how security will be delivered in the enterprise in the future, and it changes the entire security industry. We've covered a lot today. I'm thrilled, as I stand on this stage, to stand before this community that truly has been at the center of changing the world of technology over the last couple of decades. We've talked about this idea of the superpowers of technology, and as they accelerate, so does the demand for what you do. In the same way that we together created the idea of the virtual infrastructure admin, think about all the jobs that we are spawning from the discussion that we had today, the new skills, the new opportunities for each one of us in this room: quantum programmer, machine learning engineer, IoT and edge expert. We're on the cusp of so many new capabilities, and we need you and your skills to get there: the skills that you possess, the ability that you have to work across these silos of technology and enable tomorrow. I'll tell you, I am now 38 years in the industry, and I've never been more excited, because together we have the opportunity to build on the things that collectively we have done over the last four decades and truly have a positive global impact. These are hard problems, but I believe that together we can successfully extend the lifespan of every human being. I believe that together we can eradicate the chronic diseases that have plagued mankind for centuries. I believe we can lift the remaining 10 percent of humanity out of extreme poverty. I believe that we can reskill every worker in the age of the superpowers. I believe that we can give a modern education to every child on the planet, even in the poorest of slums. I believe that together we can reverse the impact of climate change. I believe that together we have the opportunity to make these things a reality, and I believe this is only possible together with you. I ask you, please have a wonderful VMworld. Thanks for listening. Happy 20th birthday. Have a great week.