

Full Keynote Hour - DockerCon 2020


 

(water running) (upbeat music) (electric buzzing) >> Fuel up! (upbeat music) (audience clapping) (upbeat music) >> Announcer: From around the globe, it's theCUBE with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >> Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE. I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for DockerCon 2020. Virtual event, normally it was in person, face to face. I'll be with you throughout the day with an amazing lineup of content, over 50 different sessions, cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Burcio and Bret Fisher. We'll be with you all day today, taking you through the program, helping you navigate the sessions. I'm so excited. Jenny, this is a virtual event. We talk about this. Can you believe it? May the internet gods be with us today and hope everyone's having-- >> Yes. >> Easy time getting in. Jenny, Bret, thank you for-- >> Hello. >> Being here. >> Hey. >> Hi everyone, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you. >> Guys, great job getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in, the sponsor support, has been amazing. Just the overall excitement around the brand and the opportunities, given these tough times we're in. It's super exciting. Again, may the internet gods be with us throughout the day, but there's plenty of content. Bret's got an amazing all day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual? Obviously everyone's canceling their events, but this is special to you guys. Talk about DockerCon virtual this year. >> The Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make DockerCon a virtual event. And of course when we did that, there was no quarantine. We didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean we were completely blown away. There's nearly 70,000 people across the globe that have registered for DockerCon today. And when you look at DockerCons of the past, those live events, we're learning, are really just the tip of the iceberg, and so we're thrilled to be able to deliver a more inclusive global event today. And we have so much planned. Bret, do you want to tell us some of the things that you have planned? >> Well, I'm sure I'm going to forget something 'cause there's a lot going on. But we've obviously got interviews all day today on this channel with John and the crew. Jenny has put together an amazing set of all these speakers, and then you have the captains on deck, which is essentially the YouTube live hangout where we just basically talk shop. It's all engineers, all day long. Captains and special guests. And we're going to be in chat talking to you, answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have.
Maybe there'll be some random demos, but it's basically not scripted, it's an all day long unscripted event. So I'm sure it's going to be a lot of fun hanging out in there. >> Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal and laid back in the captains channel or in the sessions, where the speakers will be there with their presentations. But Jenny, I want to get your thoughts, because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero. There are then tracks, and Bret's running the captains track. You can click on that link and jump into his session all day long. He's got an amazing lineup, leaning back, having a good time. And then in each of the tracks, you can jump into those sessions. It's on a clock, it'll be available on demand. All that content is available if you're on your desktop. If you're on your mobile, it's the same thing. Look at the calendar, find the session that you want. If you're interested in it, you could watch it live and chat with the participants in real time, or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment, and that's a great time to chat in real time. But there's more, Jenny, getting more out of this event. You guys try to bring together the stimulation of community. How do the participants get more out of the event besides just consuming some of the content all day today? >> Yes, so first set up your profile, put your picture next to your chat handle, and then chat. As John said, we have various setups today to help you get the most out of your experience. Our breakout sessions: the content is prerecorded, so you get quality content, and the speakers are in chat so you can ask questions the whole time. If you're looking for the hallway track, then definitely check out the captains on deck channel. And then we have some great interviews all day on theCUBE. So set up your profile, join the conversation and be kind, right? This is a community event. The code of conduct is linked on every page at the top, and just have a great day. >> And Bret, you guys have an amazing lineup on the captains channel, so you have a great YouTube channel that you have your stream on. So the folks who are familiar with that can get that either on YouTube or on the site. The chat is integrated in. So you're set up, what have you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >> Yeah, so I'm sure we're going to have lots of stuff going on in chat. So no concerns there about having crickets in the chat. But we're going to be basically starting the day with two of my good Docker captain friends, (murmurs) and Laura Tacho. And we're going to basically start you out at the end of this keynote, at the end of this hour, and we're going to get you going, and then you can maybe jump out and go take some sessions. Maybe there's some stuff you want to check out in other sessions where you want to chat and talk with the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interviews. So the idea is you're hopping back and forth, and throughout the day we're basically changing out every hour.
We're not just changing out the guests basically, but we're also changing out the topics that we can cover, because different guests will have different expertise. We're going to have some special guests in from Microsoft, talk about some of the cool stuff going on there, and basically it's captains all day long. And if you've been on my YouTube live show, you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >> Awesome, and the content again has been preserved. You guys had a great call for papers process for the sessions. Jenny, this is good stuff. What other things can people do to make it interesting? Obviously we're looking for suggestions. Feel free to chirp on Twitter about ideas that can be new. But you guys have got some surprises. There's some selfies, what else? What's going on? Any secret surprises throughout the day? >> There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Bret will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Hopefully, right, you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards, so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >> All right, great stuff, so they've got the Docker selfie. So the Docker selfies, the hashtag is just DockerCon, hashtag DockerCon. If you feel like you want to add something to the hashtag, no problem. Check out the sessions. You can pop in and out of the captains channel; it's kind of where the cool kids are going to be hanging out with Bret, and then all the knowledge and learning. Don't miss the keynote, the keynote should be solid. We've got James Governor from RedMonk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us. And again, check out the interactive calendar. All you've got to do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. Bret, any final thoughts on what you want to share with the community around what you've got going on at the virtual event, just random thoughts? >> Yeah, so sorry we can't all be together in the same physical place. But the coolest thing about us being online is that we actually get to involve everyone, so as long as you have a computer and internet, you can actually attend DockerCon if you've never been to one before. So we're trying to recreate that experience online. Like Jenny said, the code of conduct is important. So, we're all in this together with the chat, so try to be nice in there. These are all real humans that have feelings just like me. So let's try to keep it cool. And over in the captains channel we'll be taking your questions and maybe playing some music, playing some games, giving away some free stuff while you're in between sessions learning, oh yeah. >> And I've got to say props to your rig. You've got an amazing setup there, Bret. I love your show, what you do. It's really bad ass and kick ass. So great stuff. Jenny, the sponsor and ecosystem response to this event has been phenomenal. The attendance, 67,000. We're seeing a surge of people hitting the site now. So if you're not getting in, just wait, we're going to crank through the queue, but the sponsors and the ecosystem really delivered on the content side and also the support.
Do you want to share a few shout outs on the sponsors who really kind of helped make this happen? >> Yeah, so definitely make sure you check out the sponsor pages, and when you go, each page has the actual content that they will be delivering. So they are delivering great content to you, so you can learn, and a huge thank you to our platinum and gold sponsors. >> Awesome, well I've got to say, I'm super impressed. I'm looking forward to the Microsoft and Amazon sessions, which are going to be good. And there's a couple of great customer sessions there. I tweeted this out last night and let me get you guys' reaction to this, because there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this: a Cambrian explosion of developers that are going to be building new apps. And I said, you know, apps aren't going to just change the world, they're going to save the world. So a lot of the theme here is the impact that developers are having right now in the current situation. With the goodness of compose and all the things going on in Docker and the relationships, there's real impact happening with the developer community. And it's pretty evident in the program and some of the talks and some of the examples of how containers and microservices are certainly changing the world and helping save the world. Your thoughts? >> Like you said, there are a number of sessions and interviews in the program today that really dive into that. And even particularly around COVID, Clemente Biondo is sharing his company's experience, from being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have in theCUBE channel several interviews from the National Institutes of Health and precision cancer medicine at the end of the day. And you can really see how containerization and developers are moving industry and really humanity forward because of what they're able to build and create with advances in technology. >> Yeah, and the first responders these days are developers. Bret, compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with compose, just the ease of use, and almost a call to arms for integrating into all the system language libraries. I mean, what's going on with compose? I mean, what do the captains say about this? I mean, it seems to be really tracking in terms of demand and interest. >> I think we're over 700,000 compose files on GitHub. So it's definitely beyond just the standard Docker run commands. It's definitely the next tool that people use to run containers. Just by that number alone, and that's not even counting everything. I mean that's just counting the files that are named docker-compose.yaml. So I'm sure a lot of you out there have created a YAML file to manage your local containers, or even on a server with Docker compose. And the nice thing is, Docker is doubling down on that. So we've gotten some news recently from them about what they want to do with opening the spec up, getting more companies involved, because compose has already gathered so much interest from the community. You know, AWS has importers, there's Kubernetes importers for it. So there's more stuff coming, and we might just see something here in a few minutes. >> All right, well let's get into the keynote guys, jump into the keynote. If you're missing anything, come back to the stream, check out the sessions, check out the calendar. Let's go, let's have a great time.
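For anyone following along at home, here is a rough picture of the kind of file Bret is describing. This is a minimal, hypothetical docker-compose.yml, not the demo app's actual file; the service names and images are illustrative only, and on older installs the command is docker-compose rather than docker compose.

```bash
# A minimal, hypothetical compose file of the kind Bret describes
# (service names and images are illustrative, not from the DockerCon demo).
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine            # front end
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:12             # official database image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example
EOF

docker compose up -d    # one command brings up both containers locally
```

Even a file this small shows what Bret is pointing at: the whole multi-container app is described declaratively in one place, which is why so many of these files have accumulated on GitHub.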
Have some fun, thanks, and enjoy the rest of the day, we'll see you soon. (upbeat music) (upbeat music) >> Okay, what is the name of that whale? >> Molly. >> And what is the name of this whale? >> Moby. >> That's right, dad's got to go, thanks bud. >> Bye. >> Bye. Hi, I'm Scott Johnson, CEO of Docker, and welcome to DockerCon 2020. This year DockerCon is an all virtual event with more than 60,000 members of the Docker community joining from around the world. And with the global shelter in place policies, we're excited to offer a unifying, inclusive virtual community event in which anyone and everyone can participate from their home. As a company, Docker has been through a lot of changes since our last DockerCon last year. The most important, starting last November, is our refocusing 100% on developers and development teams. As part of that refocusing, one of the big challenges we've been working on is how to help development teams quickly and efficiently get their app from code to cloud. And wouldn't it be cool if developers could quickly deploy to the cloud right from their local environment with the commands and workflow they already know? We're excited to give you a sneak preview of what we've been working on. And rather than slides, we thought we'd jump right into the product. And joining me to demonstrate some of these cool new features is Lanca, one of our engineers here at Docker working on Docker compose. Hello Lanca. >> Hello. >> We're going to show how an application development team collaborates using Docker desktop and Docker hub, and then deploys the app directly from the Docker command line to the cloud in just two commands. A development team would use this to quickly share functional changes of their app with the product management team, with beta testers or other development teams. Let's go ahead and take a look at our app. Now, this is a web app that randomly pulls words from the database and assembles them into sentences. You can see it's a pretty typical three tier application, with each tier implemented in its own container. We have a front end web service, a middle tier, which implements the logic to randomly pull the words from the database and assemble them, and a backend database. And here you can see the database uses the Postgres official image from Docker hub. Now let's first run the app locally using the Docker command line and the Docker engine in Docker desktop. We'll do a docker compose up, and you can see that it's pulling the containers from our Docker organization account, Wordsmith Inc. Now that it's up, let's go ahead and look at localhost, and we'll confirm that the application is functioning as desired. So there's one sentence; let's pull again, and you can indeed see that we are pulling random words and assembling them into sentences. Now you can also see, though, that the look and feel is a bit dated. And so Lanca is going to show us how easy it is to make changes and share them with the rest of the team. Lanca, over to you. >> Thank you, so I have the source code of our application on my machine, and I have updated it with the latest theme from DockerCon 2020. So before committing the code, I'm going to build the application locally and run it, to verify that indeed the changes are good. So I'm going to build with Docker compose the image for the web service. Now that the image has been built, I'm going to deploy it locally with docker compose up.
We can now check the dashboard in Docker desktop to see that indeed our containers are up and running, and we can access, we can open in the web browser, the endpoint for the web service. So as we can see, we have the latest changes in our application. So as you can see, the application has been updated successfully. So now I'm going to push the image that I have just built to my organization's shared repository on Docker hub. I can do this with docker compose push web. Now that the image has been updated in the Docker hub repository, my teammates can access it and check the changes. >> Excellent, well, thank you Lanca. Now of course, in these times, video conferencing is the new normal, and as great as it is, video conferencing does not allow users to actually test the application. And so, to allow our app to be accessible by others outside our organization, such as beta testers or others, let's go ahead and deploy to the cloud. >> Sure, we can do this by employing a context. A Docker context is a mechanism that we can use to target different platforms for deploying containers. The context will hold information such as the endpoint for the platform, and also how to authenticate to it. So I'm going to list the contexts that I have set locally. As you can see, I'm currently using the default context, which is pointing to my local Docker engine. So all the commands that I have issued so far were targeting my local engine. Now, in order to deploy the application on a cloud: I have an account in the Azure cloud, where I have no resources running currently, and I have created for this account a dedicated context that will hold the information on how to connect to it. So now all I need to do is to switch to this context, with docker context use and the name of my cloud context. So all the commands that I'm going to run from now on are going to target the cloud platform. We can also check, in a simpler way, the running containers with docker ps. So as we see, no container is running in my cloud account. Now to deploy the application, all I need to do is to run a docker compose up. And this will trigger the deployment of my application. >> Thanks Lanca. Now notice that Lanca did not have to move the compose file from Docker desktop to Azure. Nor did she have to make any changes to the Docker compose file, and nor did she change any of the containers that she and I were using in our local environments. So the same compose file, same images, run locally and on Azure without changes. While the app is deploying to Azure, let's highlight some of the features in Docker hub that help teams with remote-first collaboration. So first, here's our team's account where it (murmurs), and you can see the updated container, sentences web, that Lanca just pushed a couple of minutes ago. As far as collaboration, we can add members using their Docker ID or their email, and then we can organize them into different teams depending on their role in the application development process. And then once they're organized into different teams, we can assign them permissions, so that teams can work in parallel without stepping on each other's changes accidentally. For example, we'll give the engineering team full read-write access, whereas the product management team we'll go ahead and just give read-only access. So this role-based access control is just one of the many features in Docker hub that allows teams to collaboratively and quickly develop applications.
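To make the flow of the demo easier to follow on the page, here is the command sequence Scott and Lanca walk through, gathered in one place. The context name is a placeholder, and the login and context-creation steps are assumptions about the Azure integration of that era rather than commands shown on screen; only the compose up, build, push, context switch, and docker ps steps come directly from the demo, and exact flags vary by Docker Desktop version.

```bash
# 1. Run the app locally against the default (local Docker engine) context
docker compose up                  # pulls and starts the web, words, and db containers
#    ...browse http://localhost to verify the app...

# 2. Rebuild the web image after a code change and share it via Docker Hub
docker compose build web
docker compose push web            # pushes to the organization's shared repository

# 3. Point the CLI at a cloud backend (assumed setup steps, not shown in the demo)
docker login azure                 # authenticate the Azure integration
docker context create aci mycloud  # "mycloud" is a hypothetical context name
docker context ls                  # list contexts; default targets the local engine

# 4. Deploy the same compose file, unchanged, to the cloud
docker context use mycloud
docker compose up                  # deploys the application to the cloud context
docker ps                          # prints the public endpoint of the running app
```

The point Scott keeps returning to is that nothing between steps 1 and 4 touches the compose file or the images; only the active context changes.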
Okay Lanca, how's our app doing? >> Our app has been successfully deployed to the cloud. So we can easily check either the Azure portal to verify the containers running for it, or, simpler, we can run a docker ps again to get the list of the containers that have been deployed for it. In the output from docker ps, we can see an endpoint that we can use to access our application in the web browser. So we can see the application running in the cloud. It's really up to date, and now we can take this particular endpoint and share it within our organization such that anybody can have a look at it. >> That's cool, Lanca. We showed how we can deploy an app to the cloud in minutes and in just two commands, and using commands that Docker users already know, thanks so much. In that sneak preview, you saw a team developing an app collaboratively with a tool chain that includes Docker desktop and Docker hub. And simply by switching Docker context from their local environment to the cloud, they deploy that app to the cloud, to Azure, without leaving the command line, using Docker commands they already know. And in doing so, really simplifying for development teams getting their app from code to cloud. And just as important, what you did not see was a lot of complexity. You did not see cloud specific interfaces, user management or security. You did not see us having to provision and configure compute, networking and storage resources in the cloud. And you did not see infrastructure specific application changes to either the compose file or the Docker images. And by simplifying away that complexity, these new features help application DevOps teams quickly iterate and get their ideas, their apps, from code to cloud. And helping development teams build, share and run great applications is what Docker is all about. Docker is able to simplify for development teams getting their app from code to cloud quickly as a result of standards, products and ecosystem partners. It starts with open standards for applications and application artifacts, and active open source communities around those standards to ensure portability and choice. Then, as you saw in the demo, the Docker experience delivered by Docker desktop and Docker hub simplifies a team's collaborative development of applications, and together with ecosystem partners provides every stage of an application development tool chain. For example, deploying applications to the cloud in two commands. What you saw in the demo, well, that's an extension of our strategic partnership with Microsoft, which we announced yesterday. And you can learn more about our partnership from Amanda Silver from Microsoft later today, right here at DockerCon. Another tool chain stage, the capability to scan applications for security and vulnerabilities, is a result of our partnership with Snyk, which we announced last week. You can learn more about that partnership from Peter McKay, CEO of Snyk, again later today, right here at DockerCon. A third example, development teams can automate the build of container images upon a simple git push, as a result of Docker hub integrations with GitHub and Atlassian Bitbucket. As a final example of Docker and the ecosystem helping teams quickly build applications, together with our ISV partners we offer in Docker hub over 500 official and verified publisher images of ready to run Dockerized application components, such as databases, load balancers, programming languages, and much more. Of course, none of this happens without people.
And I would like to take a moment to thank four groups of people in particular. First, the Docker team, past and present. We've had a challenging 12 months, including a restructuring and then a global pandemic, and yet their support for each other, and their passion for the product, this community and our customers, has never been stronger. We thank our community. Docker wouldn't be Docker without you, whether you're one of the 50 Docker captains, the almost 400 meetup organizers, or the thousands of contributors and maintainers. Every day you show up, you give back, you teach, you support. We thank our users, more than six and a half million developers who have built more than 7 million applications and are then sharing those applications through Docker hub at a rate of more than one and a half billion pulls per week. Those apps are then run on more than 44 million Docker engines. And finally, we thank our customers, the over 18,000 Docker subscribers, both individual developers and development teams, from startups to large organizations, 60% of which are outside the United States. And they span every industry vertical, from media to entertainment to manufacturing, healthcare and much more. Thank you. Now looking forward, given these unprecedented times, we would like to offer a challenge. While it would be easy to feel helpless amidst this global pandemic, the challenge is for us as individuals and as a community to instead see and grasp the tremendous opportunities before us to be forces for good. For starters, look no further than the pandemic itself. In the fight against this global disaster, applications and data are playing a critical role, and the Docker community quickly recognized this and rose to the challenge. There are over 600 COVID-19 related publicly available projects on Docker hub today, from data processing to genome analytics to data visualization. Folding@home, the distributed computing project for simulating protein dynamics, is also available on Docker hub, and it uses spare compute capacity to analyze COVID-19 proteins to aid in the design of new therapies. And right here at DockerCon, you can hear how Clemente Biondo and his company, Engineering Ingegneria Informatica, are using Docker in the fight with COVID-19 in Italy every day. Now, in addition to fighting the pandemic directly, as a community we also have an opportunity to bridge the disruption the pandemic is wreaking. It's impacting us at work and at home, in every country around the world and every aspect of our lives. For example, many of you have a student at home whose world is going to be very different when they return to school. As employees, all of us have experienced the stresses of working from home, as well as many of the benefits, and in fact 75% of us say that going forward, we're going to continue to work from home at least occasionally. And of course one of the biggest disruptions has been job losses, over 35 million in the United States alone. And we know that's affected many of you. And yet your skills are in such demand and so important now, more than ever. And that's why here at DockerCon we want to try to do our part to help, and we're promoting this hashtag on Twitter, hashtag DockerCon jobs, where job seekers and those offering jobs can reach out to one another and connect. Now, the pandemic's disruption is accelerating the shift of more and more of our time, our priorities, our dollars from offline to online, to hybrid and even online only ways of living.
We need to find new ways to collaborate, new approaches to engage customers, new modes for education and much more. And what is going to fill the needs created by this acceleration from offline to online? New applications. And it's this need, this demand for all these new applications, that represents a great opportunity for the Docker community of developers. The world needs us, needs you developers, now more than ever. So let's seize this moment. Let us and our teams go build, share and run great new applications. Thank you for joining today. And let's have a great DockerCon. >> Okay, welcome back to the DockerCon studio headquarters with your hosts, Jenny Burcio and myself, John Furrier, @furrier on Twitter. If you want to tweet me anything, @DockerCon as well, share what you're thinking. Great keynote there from Scott, the CEO. Jenny, the demo, DockerCon jobs, some highlights there from Scott. Yeah, I love the intro. It's like, okay, I'm about to do the keynote, the little green room comes on, makes it human. We're all trying to survive-- >> That is the reality of what we are all dealing with right now. I had to ask my kids to leave though, or they would crash the whole stream, but yes, we have a great community, a large community gathered here today, and we do want to take the opportunity for those that are looking for jobs, or are hiring, to share with the hashtag DockerCon jobs. In addition, we want to support health care workers directly, and Bret Fisher and the captains will be running an all day charity stream on the captains channel. Go there and you'll get the link to donate to directrelief.org, which is a California based nonprofit delivering aid and supporting health care workers globally in response to the COVID-19 crisis. >> Okay, if you're jumping into the stream, I'm John Furrier with Jenny Burcio, your hosts all day today throughout DockerCon. It's a packed house of great content. You have a main stream, theCUBE, which is the main stream where we'll be promoting a lot of cube interviews. But check out the 40 plus sessions underneath in the interactive calendar on the dockercon.com site. Check it out, they're going to be live on a clock. So if you want to participate in real time in the chat, jump into your session on the track of your choice and participate with the folks in there chatting. If you miss it, it's going to go right on demand right after; all content will immediately be available. So make sure you check it out. Docker selfie is a hashtag, take a selfie, share it. And the hashtag DockerCon jobs: if you're looking for a job or have openings, please share with the community, and of course give us feedback on what we can do. We've got James Governor, the keynote coming up next. He's with RedMonk. Not afraid to share his opinion on open source, on what companies should be doing, and also the evolution of this Cambrian explosion of apps that are going to be coming as we come out of this post pandemic world. A lot of people are thinking about this, the crisis and following through. So stay with us for more and more coverage. Jenny, favorite sessions on your mind for people to pay attention to that they should (murmurs)? >> I just want to address a few things that continue to come up in the chat. The sessions, especially breakout sessions, after they play live with the speakers in chat with you, those go on demand, they are recorded, you will be able to access them.
Also, if the screen is too small, there is a button to expand to full screen, and different quality levels for the video that you can choose on your end. All the breakout sessions also have closed captioning, so please, if you would like to read along, turn that on so you can stay with the sessions. We have some great sessions kicking off right at 10:00 a.m., getting started with Docker. We have a full how-to track; you should check out devs in action, hear what other people are doing, and then of course our sponsors are delivering great content to you all day long. >> Tons of content. It's all available. It'll always be up, always on, at large scale. Thanks for watching. Now we've got James Governor, the keynote. He's with RedMonk, the analyst firm, and has been tracking open source for many generations. He's been doing amazing work. Watch his great keynote. I'm going to be interviewing him live right after. So stay with us and enjoy the rest of the day. We'll see you back shortly. (upbeat music) >> Hi, I'm James Governor, one of the co-founders of a company called RedMonk. We're an industry research firm focusing on developer led technology adoption. So that's, I guess, why Docker invited me to DockerCon 2020 to talk about some trends that we're seeing in the world of work and software development. So MonkChips, that's who I am. I spend a lot of time on Twitter. It's a great research tool. It's a great way to find out what's going on and keep track of, as I say, the people that we value so highly: software developers, engineers and practitioners. So when I started talking to Docker about this event, and it was pre-'rona, should we say, the idea of a crowd wasn't a scary thing. But today you see something like this, it makes you feel uncomfortable. This is not a place that I want to be. I'm pretty sure it's a place you don't want to be. And you know, to that end, I think there's an interesting quote by Ellen Powell. She says, "Work from home is now just work." And we're going to see more and more of that. Organizations aren't feeling the same way they did about work before. Who are all these people? Who is my concern? So GitHub says it has 50 million developers right on its network. Now, one of the things I think is most interesting, it's not that it has 50 million developers. Perhaps that's a proxy for the number of developers worldwide. But quite frankly, a lot of those accounts, there's all kinds of people there. They're not all developers. There are data engineers, there are data scientists, there are product managers, there are tech marketers. It's a big, big community and it goes way beyond just software developers itself. Frankly, for me, I'd probably be saying there's more like 20 to 25 million developers worldwide, but GitHub knows a lot about the world of code. So what else do they know? One of the things they know is that the world of code, software and open source, is becoming increasingly global. I get so excited about this stuff. The idea that there are these different software communities around the planet where we're seeing massive expansions in terms of things like open source. A great example is Nigeria. So Nigeria, more than 200 million people, right? The energy there in terms of events, in terms of learning, in terms of teaching, in terms of the desire to code, the desire to launch businesses, the desire to be part of a global software community, is just so exciting.
And you know, this sort of energy is not just in Nigeria, it's in other countries in Africa, it's happening in Egypt. It's happening around the world. This energy is something that's super interesting to me. We need to think about that. We've got global problems that we need to solve, and software is going to be a big part of that. At the moment, we can talk about other countries, but what about, frankly, the gender gap, the gender issue that, you know, from 1984 onwards, the number of women taking computer science degrees began to, not track, but to crater in comparison to what men were doing. The tech industry is way too male focused, it's male dominated, it's not welcoming, we haven't found ways to have those pathways and frankly to drive inclusion. And the women I know in tech have to deal with a massively disproportionate amount of stress on things like online networks. But talking about online networks and talking about a better way of living, I was really excited by GitHub Satellite recently; there was a fantastic demo by Alison McMillan, and she did a demo of Codespaces. So Codespaces is Microsoft's online IDE, a new platform that they've built. And online IDEs, we're never quite sure about, you know, plenty of people still out there just using Emacs. But Visual Studio Code has been a big success. And so this idea of moving to an online IDE, it's been around for a while. What they did was just make really tight integration. So you're in your GitHub repo and you're just able to create a development environment with effectively one click, getting rid of all of the yak shaving, making it super easy. And what I loved was, in the demo, Ali's like, yeah, because this is great: one of my kids is having a nap, I can just start (murmurs) and I don't have to sort out all the rest of it. And to me that was amazing. It was like productivity as inclusion. Here was a senior director at GitHub, they're doing this amazing work, and then making this clear statement about being a parent. And I think that was fantastic. Because that's what, to me, importantly, just working from home, which has been so challenging for so many of us, began to open up new possibilities, and frankly exciting possibilities. So Ali's also got a podcast, Parent Driven Development, which I think is super important. Because this is about men and women all in this together; parenting is a team sport, same as software development. And the idea that we should be thinking about how to be more productive is super important to me. So I want to talk a bit about developer culture and how it led to social media. Because you know, with social media, we're in this ad bomb stage now. It's TikTok, it's like exercise, people doing incredible back flips and stuff like that, doing a bunch of dancing. We've had the world of sharing cat GIFs, Facebook; we sort of see social media as, I think, a phenomenon in its own right. Whereas to me, I think it's interesting because of its progenitors: where did it come from? So here's (murmurs). So 1971: one of the features in the emergency management information system that he built, which is topical, it was for tracking medical information as well, medical emergencies, included a bulletin board system, so that it could keep track of what people were doing on a team and make sure that they were collaborating effectively. Boom! That was the start of something big, obviously. Another date I think is worth looking at is 1983: Radia Perlman, spanning tree protocol.
So at DEC, they were very good at distributed systems. And the idea was that you can have a distributed system, and so much of the internetworking that we do today was based on Radia's work. And then it showed that basically you could span out a huge network so that everyone could collaborate. That is incredibly exciting in terms of the trends that I'm talking about. So then let's look at 1988, you've got IRC. IRC, what developer has not used IRC, right? Well, I guess maybe some of the younger ones might not have. But I don't know if we're post IRC yet, but (murmurs) at a Finnish university really nailed it with IRC as a platform that people could communicate effectively with. And then we go into, like, 1991. So we've had IRC, we've had Finnish universities doing a lot of really fantastic work about collaboration. And I don't think it was necessarily an accident that this is when Linus Torvalds announced Linux. So Linux was a wonderfully packaged idea, in terms of, we're going to take this Unix thing. And when I say packaged, what it packaged was the idea that we could collaborate on software. So, it may have just been the work of one person, but clearly what made it important, made it interesting, was finding a social networking pattern for software development so that everybody could work on something at scale. That was really, I think, fundamental and foundational. Now I think it's important, if we're going to talk about Linus, to talk about some things that are not good about software culture, not good about open source culture, not good about hacker culture. And that's where I'm going to talk about codes of conduct. We have not been welcoming to new people. We've got the acronyms, JFTI, we call people noobs, that's super unhelpful. We've got to find ways to be more welcoming and more self-sustaining in our communities, because otherwise communities will fail. And I'd like to thank everyone that has a code of conduct and has encouraged others to have codes of conduct. We need to have codes of conduct that are enforced to ensure that we have better diversity at our events, and that's so that women, underrepresented minorities, all different kinds of people are well looked after and are in safe and inclusive spaces. And that's at online events, but of course it's also for all of our activities offline. So Linus, as I say, is not the most charming of characters at all times, but he has done some amazing technology. So we get to, like, 2005, the creation of Git. Not necessarily the distributed version control system that would win, but there were some interesting principles there, and they'd come out of the work that he had done in terms of trying to build and sustain the Linux code base. So it was very much based on experience. He had an itch that he needed to scratch, and there was a community that was building this thing. So what was going to be the option? He came up with Git, foundational to another huge wave of social change; frankly, Git was technologically awesome. April 2008: GitHub, right? GitHub comes up, they've looked at Git, they've packaged it up, they found a way to make it consumable so that teams could use it and really begin to take advantage of the power of that distributed version control model. Now, ironically enough, of course, they centralized the service in doing so. So we have a single point of failure on GitHub.
But on the other hand, the notion of the pull request, the primitives that they established and made usable by people, that changed everything in terms of software development. I think another one that I'd really like to look at is Slack. So Slack is a huge success, used by all different kinds of businesses. But it began specifically as a pivot from a company called Glitch. It was a game company, and they wanted a tool internally that was better than IRC. So they built out something that later became Slack. So Slack, in 2014, is established as a company, and basically Slack fit software engineering. The focus on automation, the conversational aspects, the asynchronous aspects. It really pulled things together in a way that was interesting to software developers. And I think we've seen this pattern in the world, frankly, over the last few years: software developers are influencers. So Slack was first used by the engineering teams, later used by everybody. And arguably you could say the same thing actually happened with Apple. Apple was mainstreamed by developers adopting that platform. We get to 2013, boom again, Solomon Hykes, Docker, right? So Docker was, I mean, containers were not new, they were just super hard to use. People found it difficult technology, it was esoteric. It wasn't something that they could fully understand. Solomon did an incredible job of understanding how containers could fit into modern developer workflows. So if we think about immutable images, if we think about the ability to have everything required in the package where you are, it really tied into what people were trying to do with CI/CD, tied into microservices. And certainly the notion of sort of developer usability, Docker nailed that, and I guess from this conference, at least, the rest is history. So I want to talk a little bit about scratching the itch, and particularly what has become, I call it, the developer aesthetic. So let's go into dark mode now. I've talked about developers laying out these foundations and frameworks that then go mainstream; frankly, now my son, he's 14, he (murmurs) at me if I don't have dark mode on in an application. And it's this notion that developers, they have an aesthetic, and it does get adopted, I mean, it's quite often jokey. One of the things we've seen in the really successful platforms like GitHub, Docker, NPM; let's look at GitHub, let's look at the Octocat. That playfulness, I think, was really interesting. And that changes the world of work, right? So we've got the world of work, which can be buttoned up, which can be somewhat tight. I think both of those companies were really influential in thinking that software development, which is a profession, is also something that can be and is fun. And I think about, how can we make it more fun? How can we develop better applications together? That takes me to, if we think about Docker talking about build, share and run, for me the key word is share, because development has to be a team sport. It needs to be sharing, it needs to be kind, and it needs to bring together people to do more effective work. Because that's what it's all about, doing effective work. If you think about Zoom, it's a proxy for collaboration in terms of its value. So we've got all of these airlines, and frankly, add up their share prices, add up their total value: it's currently less than Zoom. So video conferencing has become so much of how we live now, on a consumer basis, but certainly from a business to business perspective. I want to talk about how we live now.
I want to think about, like, what will come out of all of this traumatic, and it is an incredibly traumatic, time? I'd like to say I'm very privileged, I can work from home. So thank you to all the frontline workers that are out there that are not in that position. But overall, what I'm really thinking about is that there are some things that will come out of this that will benefit us as a culture. Look at cities like Paris, Milan, London, New York putting in new cycling infrastructure, so that people can social distance and travel outside, because they don't feel comfortable on public transport. I think it's sort of amazing: widening pavements, things we said we couldn't do, all these cities have done literally overnight. This sort of change is exciting. And what does come out of that? Like, oh, there are some positive aspects of the current issues that we face. So there's a conference, or a community, that some of us have been working on. So Katie from HashiCorp and Carla from Container Solutions are basically asking, look, what will the world look like in developer relations? Can we have developer relations without the air miles? 'Cause developer advocates, they do too much travel; it ends up, you know, burning them out of developer relations. People don't like to say no. They may have bosses that say, you know, oh, that corporate event went great, now we're going to roll it out worldwide to 47 cities. That stuff is terrible. It's terrible from a personal perspective, it's really terrible from an environmental perspective. We need to travel less. Virtual events are crushing it. Microsoft just did Build, right? Normally that'd be just over 10,000 people; they had 245,000 plus registrations, 40,000 of them in the last day, right? Red Hat Summit, 80,000 people; IBM Think, 90,000 people; GitHub crushed it as well. Like, this is a more inclusive way people can dip in. They can be from all around the world. I mentioned Nigeria and how fantastic it is. Very often Nigerian developers and advocates find it hard to get visas. Why should they be shut out of events? Events are going to start to become remote first, because frankly, look at it, if you're turning in those kinds of numbers, and Microsoft was already doing great online events, but they absolutely nailed it, they're going to have to ask some serious questions about why everybody should get back on a plane again. So if you're going to do remote, you've got to be intentional about it. That's one thing I've learned that's exciting about GitLab. GitLab's culture is amazing. Everything is documented, everything is public, everything is transparent. I think that's really clear, and if you look at their principles: you can't have implicit collaboration models. Everything needs to be documented and explicit, so that anyone can work anywhere and they can still be part of the team. Remote first is where we're at now. Coinbase, Shopify, even Barclays say they're not going to go back to having everybody in offices the way they used to. This is a fundamental shift, and I think it's got significant implications for all industries, but definitely for software development. Here's the thing: the last 20 years were about distributed computing, microservices, the cloud, and we've got pretty good at that. The next 20 years will be about distributed work. We can't have everybody living in San Francisco and London and Berlin. The talent is distributed, the talent is elsewhere. So how are we going to build tools?
Who is going to scratch that itch to build tools to make them more effective? Who's building the next generation of apps, you are, thanks.

Published Date: May 29, 2020


Autonomous Log Monitoring


 

>> Sue: Hi everybody, thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "Autonomous Monitoring Using Machine Learning". My name is Sue LeClaire, director of marketing at Vertica, and I'll be your host for this session. Joining me is Larry Lancaster, founder and CTO at Zebrium. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slide and click submit. There will be a Q&A session at the end of the presentation and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer them offline. Alternatively, you can also go and visit Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, just a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available for you to view on demand later this week. We'll send you a notification as soon as it's ready. So, let's get started. Larry, over to you. >> Larry: Hey, thanks so much. So hi, my name's Larry Lancaster and I'm here to talk to you today about something that I think who's time has come and that's autonomous monitoring. So, with that, let's get into it. So, machine data is my life. I know that's a sad life, but it's true. So I've spent most of my career kind of taking telemetry data from products, either in the field, we used to call it in the field or nowadays, that's been deployed, and bringing that data back, like log file stats, and then building stuff on top of it. So, tools to run the business or services to sell back to users and customers. And so, after doing that a few times, it kind of got to the point where I was really sort of sick of building the same kind of thing from scratch every time, so I figured, why not go start a company and do it so that we don't have to do it manually ever again. So, it's interesting to note, I've put a little sentence here saying, "companies where I got to use Vertica" So I've been actually kind of working with Vertica for a long time now, pretty much since they came out of alpha. And I've really been enjoying their technology ever since. So, our vision is basically that I want a system that will characterize incidents before I notice. So an incident is, you know, we used to call it a support case or a ticket in IT, or a support case in support. Nowadays, you may have a DevOps team, or a set of SREs who are monitoring a production sort of deployment. And so they'll call it an incident. So I'm looking for something that will notice and characterize an incident before I notice and have to go digging into log files and stats to figure out what happened. And so that's a pretty heady goal. And so I'm going to talk a little bit today about how we do that. So, if we look at logs in particular. Logs today, if you look at log monitoring. So monitoring is kind of that whole umbrella term that we use to talk about how we monitor systems in the field that we've shipped, or how we monitor production deployments in a more modern stack. And so basically there are log monitoring tools. But they have a number of drawbacks. 
For one thing, they're kind of slow in the sense that if something breaks and I need to go to a log file, actually chances are really good that if you have a new issue, if it's an unknown unknown problem, you're going to end up in a log file. So the problem then becomes basically you're searching around looking for what's the root cause of the incident, right? And so that's kind of time-consuming. So, they're also fragile and this is largely because log data is completely unstructured, right? So there's no formal grammar for a log file. So you have this situation where, if I write a parser today, and that parser is going to do something, it's going to execute some automation, it's going to open or update a ticket, it's going to maybe restart a service, or whatever it is that I want to happen. What'll happen is later upstream, someone who's writing the code that produces that log message, they might do something really useful for me, or for users. And they might go fix a spelling mistake in that log message. And then the next thing you know, all the automation breaks. So it's a very fragile source for automation. And finally, because of that, people will set alerts on, "Oh, well tell me how many thousands of errors are happening every hour." Or some horrible metric like that. And then that becomes the only visibility you have in the data. So because of all this, it's a very human-driven, slow, fragile process. So basically, we've set out to kind of up-level that a bit. So I touched on this already, right? The truth is if you do have an incident, you're going to end up in log files to do root cause. It's almost always the case. And so you have to wonder, if that's the case, why do most people use metrics only for monitoring? And the reason is related to the problems I just described. They're already structured, right? So for logs, you've got this mess of stuff, so you only want to dig in there when you absolutely have to. But ironically, it's where a lot of the information that you need actually is. So we have a model today, and this model used to work pretty well. And that model is called "index and search". And it basically means you treat log files like they're text documents. And so you index them and when there's some issue you have to drill into, then you go searching, right? So let's look at that model. So 20 years ago, we had sort of a shrink-wrap software delivery model. You had an incident. With that incident, maybe you had one customer and you had a monolithic application and a handful of log files. So it's perfectly natural, in fact, usually you could just v-item the log file, and search that way. Or if there's a lot of them, you could index them and search them that way. And that all worked very well because the developer or the support engineer had to be an expert in those few things, in those few log files, and understand what they meant. But today, everything has changed completely. So we live in a software as a service world. What that means is, for a given incident, first of all you're going to be affecting thousands of users. You're going to have, potentially, 100 services that are deployed in your environment. You're going to have 1,000 log streams to sift through. And yet, you're still kind of stuck in the situation where to go find out what's the matter, you're going to have to search through the log files. So this is kind of the unacceptable sort of position we're in today. So for us, the future will not be index and search. And that's simply because it cannot scale. 
And the reason I say that it can't scale is because it all kind of is bottlenecked by a person and their eyeball. So, you continue to drive up the amount of data that has to be sifted through, the complexity of the stack that has to be understood, and you still, at the end of the day, for MTTR purposes, you still have the same bottleneck, which is the eyeball. So this model, I believe, is fundamentally broken. And that's why, I believe in five years you're going to be in a situation where most monitoring of unknown unknown problems is going to be done autonomously. And those issues will be characterized autonomously because there's no other way it can happen. So now I'm going to talk a little bit about autonomous monitoring itself. So, autonomous monitoring basically means, if you can imagine in a monitoring platform and you watch the monitoring platform, maybe you watch the alerts coming from it or more importantly, you kind of watch the dashboards and try to see if something looks weird. So autonomous monitoring is the notion that the platform should do the watching for you and only let you know when something is going wrong and should kind of give you a window into what happened. So if you look at this example I have on screen, just to take it really slow and absorb the concept of autonomous monitoring. So here in this example, we've stopped the database. And as a result, down below you can see there were a bunch of fallout. This is an Atlassian Stack, so you can imagine you've got a Postgres database. And then you've got sort of Bitbucket, and Confluence, and Jira, and these various other components that need the database operating in order to function. So what this is doing is it's calling out, "Hey, the root cause is the database stopped and here's the symptoms." Now, you might be wondering, so what. I mean I could go write a script to do this sort of thing. Here's what's interesting about this very particular example, and I'll show a couple more examples that are a little more involved. But here's the interesting thing. So, in the software that came up with this incident and opened this incident and put this root cause and symptoms in there, there's no code that knows anything about timestamp formats, severities, Atlassian, Postgres, databases, Bitbucket, Confluence, there's no regexes that talk about starting, stopped, RDBMS, swallowed exception, and so on and so forth. So you might wonder how it's possible then, that something which is completely ignorant of the stack, could come up with this description, which is exactly what a human would have had to do, to figure out what happened. And I'm going to get into how we do that. But that's what autonomous monitoring is about. It's about getting into a set of telemetry from a stack with no prior information, and understanding when something breaks. And I could give you the punchline right now, which is there are fundamental ways that software behaves when it's breaking. And by looking at hundreds of data sets that people have generously allowed us to use containing incidents, we've been able to characterize that and now generalize it to apply it to any new data set and stack. So here's an interesting one right here. So there's a fella, David Gill, he's just a genius in the monitoring space. He's been working with us for the last couple of months. So he said, "You know what I'm going to do, is I'm going to run some chaos experiments." So for those of you who don't know what chaos engineering is, here's the idea. 
So basically, let's say I'm running a Kubernetes cluster and what I'll do is I'll use sort of a chaos injection test, something like litmus. And basically it will inject issues, it'll break things in my application randomly to see if my monitoring picks it up. And so this is what chaos engineering is built around. It's built around sort of generating lots of random problems and seeing how the stack responds. So in this particular case, David went in and he deleted, basically one of the tests that was presented through litmus did a delete of a pod delete. And so that's going to basically take out some containers that are part of the service layer. And so then you'll see all kinds of things break. And so what you're seeing here, which is interesting, this is why I like to use this example. Because it's actually kind of eye-opening. So the chaos tool itself generates logs. And of course, through Kubernetes, all the log files locations that are on the host, and the container logs are known. And those are all pulled back to us automatically. So one of the log files we have is actually the chaos tool that's doing the breaking, right? And so what the tool said here, when it went to determine what the root cause was, was it noticed that there was this process that had these messages happen, initializing deletion lists, selection a pod to kill, blah blah blah. It's saying that the root cause is the chaos test. And it's absolutely right, that is the root cause. But usually chaos tests don't get picked up themselves. You're supposed to be just kind of picking up the symptoms. But this is what happens when you're able to kind of tease out root cause from symptoms autonomously, is you end up getting a much more meaningful answer, right? So here's another example. So essentially, we collect the log files, but we also have a Prometheus scraper. So if you export Prometheus metrics, we'll scrape those and we'll collect those as well. And so we'll use those for our autonomous monitoring as well. So what you're seeing here is an issue where, I believe this is where we ran something out of disk space. So it opened an incident, but what's also interesting here is, you see that it pulled that metric to say that the spike in this metric was a symptom of this running out of space. So again, there's nothing that knows anything about file system usage, memory, CPU, any of that stuff. There's no actual hard-coded logic anywhere to explain any of this. And so the concept of autonomous monitoring is looking at a stack the way a human being would. If you can imagine how you would walk in and monitor something, how you would think about it. You'd go looking around for rare things. Things that are not normal. And you would look for indicators of breakage, and you would see, do those seem to be correlated in some dimension? That is how the system works. So as I mentioned a moment ago, metrics really do kind of complete the picture for us. We end up in a situation where we have a one-stop shop for incident root cause. So, how does that work? Well, we ingest and we structure the log files. So if we're getting the logs, we'll ingest them and we'll structure them, and I'm going to show a little bit what that structure looks like and how that goes into the database in a moment. And then of course we ingest and structure the Prometheus metrics. But here, structure really should have an asterisk next to it, because metrics are mostly structured already. They have names. 
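A minimal sketch of such a scraper: one that pulls a Prometheus text-format /metrics endpoint directly and keeps the exporter-side # HELP and # TYPE metadata alongside each sample. The endpoint URL is a placeholder, and this is only meant to show the shape of the data, not the product's actual collector.

```python
import re
import urllib.request

SAMPLE_RE = re.compile(r'([a-zA-Z_:][a-zA-Z0-9_:]*)(\{.*\})?\s+(\S+)')

def scrape_metrics(url: str):
    """Pull a Prometheus text-format /metrics endpoint and keep the
    exporter-side metadata (# HELP / # TYPE lines) alongside each sample."""
    body = urllib.request.urlopen(url, timeout=5).read().decode("utf-8")
    meta, samples = {}, []
    for line in body.splitlines():
        if line.startswith("# HELP ") or line.startswith("# TYPE "):
            parts = line.split(" ", 3)        # "#", "HELP"/"TYPE", name, text
            if len(parts) == 4:
                key = "help" if parts[1] == "HELP" else "type"
                meta.setdefault(parts[2], {})[key] = parts[3]
        elif line and not line.startswith("#"):
            m = SAMPLE_RE.match(line)
            if m:
                name, labels, value = m.groups()
                samples.append({"name": name,
                                "labels": labels or "",
                                "value": float(value),
                                "meta": meta.get(name, {})})
    return samples

# Hypothetical node-exporter endpoint:
# for s in scrape_metrics("http://localhost:9100/metrics"):
#     print(s["name"], s["value"], s["meta"].get("type"))
```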
If you have your own scraper, as opposed to going into the time series Prometheus database and pulling metrics from there, you can keep a lot more information about metadata about those metrics from the exporter's perspective. So we keep all of that too. Then we do our anomaly detection on both of those sets of data. And then we cross-correlate metrics and log anomalies. And then we create incidents. So this is at a high level, kind of what's happening without any sort of stack-specific logic built in. So we had some exciting recent validation. So MayaData's a pretty big player in the Kubernetes space. Essentially, they do Kubernetes as a managed service. They have tens of thousands of customers that they manage their Kubernetes clusters for them. And then they're also involved, both in the OpenEBS project, as well as in the Litmus project I mentioned a moment ago. That's their tool for chaos engineering. So they're a pretty big player in the Kubernetes space. So essentially, they said, "Oh okay, let's see if this is real." So what they did was they set up our collectors, which took three minutes in Kubernetes. And then they went and they, using Litmus, they reproduced eight incidents that their actual, real-world customers had hit. And they were trying to remember the ones that were the hardest to figure out the root cause at the time. And we picked up and put a root cause indicator that was correct in 100% of these incidents with no training configuration or metadata required. So this is kind of what autonomous monitoring is all about. So now I'm going to talk a little bit about how it works. So, like I said, there's no information included or required about, so if you imagine a log file for example. Now, commonly, over to the left-hand side of every line, there will be some sort of a prefix. And what I mean by that is you'll see like a timestamp, or a severity, and maybe there's a PID, and maybe there's function name, and maybe there's some other stuff there. So basically that's kind of, it's common data elements for a large portion of the lines in a given log file. But you know, of course, the contents change. So basically today, like if you look at a typical log manager, they'll talk about connectors. And what connectors means is, for an application it'll generate a certain prefix format in a log. And that means what's the format of the timestamp, and what else is in the prefix. And this lets the tool pick it up. And so if you have an app that doesn't have a connector, you're out of luck. Well, what we do is we learn those prefixes dynamically with machine learning. You do not have to have a connector, right? And what that means is that if you come in with your own application, the system will just work for it from day one. You don't have to have connectors, you don't have to describe the prefix format. That's so yesterday, right? So really what we want to be doing is up-leveling what the system is doing to the point where it's kind of working like a human would. You look at a log line, you know what's a timestamp. You know what's a PID. You know what's a function name. You know where the prefix ends and where the variable parts begin. You know what's a parameter over there in the variable parts. And sometimes you may need to see a couple examples to know what was a variable, but you'll figure it out as quickly as possible, and that's exactly how the system goes about it. As a result, we kind of embrace free-text logs, right?
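A toy illustration of that prefix idea, a fixed heuristic rather than the machine-learned approach described in the talk: look at the leading tokens of a line and guess which ones are a timestamp, a severity, or a PID, so no per-application connector is needed. The sample line is made up.

```python
import re

# Toy heuristic, not the actual machine-learning approach described above.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")
SEVERITIES = {"TRACE", "DEBUG", "INFO", "WARN", "WARNING", "ERROR", "FATAL", "CRIT"}
PID = re.compile(r"^\[?\d{2,6}\]?$")

def split_prefix(line: str):
    """Split a raw log line into (prefix_fields, free_text_message)."""
    fields, rest = {}, line
    m = TIMESTAMP.match(rest)
    if m:
        fields["timestamp"] = m.group(0)
        rest = rest[m.end():].lstrip()
    tokens = rest.split(" ")
    consumed = 0
    for tok in tokens[:3]:                      # only inspect the first few tokens
        if tok.upper().strip(":[]") in SEVERITIES:
            fields["severity"] = tok.strip(":[]")
            consumed += 1
        elif PID.match(tok):
            fields["pid"] = tok.strip("[]")
            consumed += 1
        else:
            break
    return fields, " ".join(tokens[consumed:])

prefix, message = split_prefix("2020-03-30 12:00:01 ERROR 4242 checkpoint for memory scrubber took 12 ms")
print(prefix)    # {'timestamp': '2020-03-30 12:00:01', 'severity': 'ERROR', 'pid': '4242'}
print(message)   # 'checkpoint for memory scrubber took 12 ms'
```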
So if you look at a typical stack, most of the logs generated in a typical stack are usually free-text. Even structured logging typically will have a message attribute, which then inside of it has the free-text message. For us, that's not a bad thing. That's okay. The purpose of a log is to inform people. And so there's no need to go rewrite the whole logging stack just because you want a machine to handle it. They'll figure it out for themselves, right? So, you give us the logs and we'll figure out the grammar, not only for the prefix but also for the variable message part. So I already went into this, but there's more that's usually required for configuring a log manager with alerts. You have to give it keywords. You have to give it application behaviors. You have to tell it some prior knowledge. And of course the problem with all of that is that the most important events that you'll ever see in a log file are the rarest. Those are the ones that are one out of a billion. And so you may not know what's going to be the right keyword in advance to pick up the next breakage, right? So we don't want that information from you. We'll figure that out for ourselves. As the data comes in, essentially we parse it and we categorize it, as I've mentioned. And when I say categorize, what I mean is, if you look at a certain given log file, you'll notice that some of the lines are kind of the same thing. So this one will say "X happened five times" and then maybe a few lines below it'll say "X happened six times" but that's basically the same event type. It's just a different instance of that event type. And it has a different value for one of the parameters, right? So when I say categorization, what I mean is figuring out those unique types and I'll show an example of that next. Anomaly detection, we do on top of that. So anomaly detection on metrics in a very sort of time series by time series manner with lots of tunables is a well-understood problem. So we also do this on the event type occurrences. So you can think of each event type occurring in time as sort of a point process. And then you can develop statistics and distributions on that, and you can do anomaly detection on those. Once we have all of that, we have extracted features, essentially, from metrics and from logs. We do pattern recognition on the correlations across different channels of information, so different event types, different log types, different hosts, different containers, and then of course across to the metrics. Based on all of this cross-correlation, we end up with a root cause identification. So that's essentially, at a high level, how it works. What's interesting, from the perspective of this call particularly, is that incident detection needs relationally structured data. It really does. You need to have all the instances of a certain event type that you've ever seen easily accessible. You need to have the values for a given sort of parameter easily, quickly available so you can figure out what's the distribution of this over time, how often does this event type happen. You can run analytical queries against that information so that you can quickly, in real-time, do anomaly detection against new data. So here's an example of what this looks like. And this is kind of part of the work that we've done. At the top you see some examples of log lines, right? So that's kind of a snippet, it's three lines out of a log file. And you see one in the middle there that's kind of highlighted with colors, right?
I mean, it's a little messy, but it's not atypical of the log file that you'll see pretty much anywhere. So there, you've got a timestamp, and a severity, and a function name. And then you've got some other information. And then finally, you have the variable part. And that's going to have sort of this checkpoint for memory scrubbers, probably something that's written in English, just so that the person who's reading the log file can understand. And then there's some parameters that are put in, right? So now, if you look at how we structure that, the way it looks is there's going to be three tables that correspond to the three event types that we see above. And so we're going to look at the one that corresponds to the one in the middle. So if we look at that table, there you'll see a table with columns, one for severity, for function name, for time zone, and so on. And date, and PID. And then you see over to the right with the colored columns there's the parameters that were pulled out from the variable part of that message. And so they're put in, they're typed and they're in integer columns. So this is the way structuring needs to work with logs to be able to do efficient and effective anomaly detection. And as far as I know, we're the first people to do this inline. All right, so let's talk now about Vertica and why we take those tables and put them in Vertica. So Vertica really is an MPP column store, but it's more than that, because nowadays when you say "column store", people sort of think, like, for example Cassandra's a column store, whatever, but it's not. Cassandra's not a column store in the sense that Vertica is. So Vertica was kind of built from the ground up to be... So it's the original column store. So back in the cStor project at Berkeley that Stonebraker was involved in, he said let's explore what kind of efficiencies we can get out of a real columnar database. And what he found was that, he and his grad students that started Vertica. What they found was that what they can do is they could build a database that gives orders of magnitude better query performance for the kinds of analytics I'm talking about here today. With orders of magnitude less data storage underneath. So building on top of machine data, as I mentioned, is hard, because it doesn't have any defined schemas. But we can use an RDBMS like Vertica once we've structured the data to do the analytics that we need to do. So I talked a little bit about this, but if you think about machine data in general, it's perfectly suited for a columnar store. Because, if you imagine laying out sort of all the attributes of an event type, right? So you can imagine that each occurrence is going to have- So there may be, say, three or four function names that are going to occur for all the instances of a given event type. And so if you were to sort all of those event instances by function name, what you would find is that you have sort of long, million long runs of the same function name over and over. So what you have, in general, in machine data, is lots and lots of slowly varying attributes, lots of low-cardinality data that it's almost completely compressed out when you use a real column store. So you end up with a massive footprint reduction on disk. And it also, that propagates through the analytical pipeline. Because Vertica does late materialization, which means it tries to carry that data through memory with that same efficiency, right? 
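A minimal sketch of that categorization-and-structuring step: collapse lines that differ only in their parameters into one event type, and pull the varying values out as typed columns, one table per event type. The log messages are made up, and the numeric-token rule is a stand-in for the learned grammar described above.

```python
import re
from collections import defaultdict

NUMBER = re.compile(r"^-?\d+(\.\d+)?$")

def event_type(message: str) -> tuple:
    """Replace numeric tokens with a placeholder so 'X happened 5 times'
    and 'X happened 6 times' collapse to the same event type."""
    return tuple("<num>" if NUMBER.match(t) else t for t in message.split())

def structure(messages):
    """Group messages by event type and emit one 'table' per type,
    with the numeric parameters pulled out into typed columns."""
    tables = defaultdict(list)
    for msg in messages:
        etype = event_type(msg)
        params = [float(t) for t in msg.split() if NUMBER.match(t)]
        tables[etype].append(params)
    return tables

logs = [
    "checkpoint for memory scrubber took 12 ms",
    "checkpoint for memory scrubber took 97 ms",
    "connection pool exhausted after 250 retries",
]
for etype, rows in structure(logs).items():
    print(" ".join(etype), "->", rows)

# Counting rows per event type per time bucket is then the input to the
# occurrence ("point process") anomaly detection described above.
```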
So the scale-out architecture, of course, is really suitable for petascale workloads. Also, I should point out, I was going to mention it in another slide or two, but we use the Vertica Eon architecture, and we have had no problems scaling that in the cloud. It's a beautiful sort of rewrite of the entire data layer of Vertica. The performance and flexibility of Eon is just unbelievable. And so I've really been enjoying using it. I was skeptical, you could get a real column store to run in the cloud effectively, but I was completely wrong. So finally, I should mention that if you look at column stores, to me, Vertica is the one that has the full SQL support, it has the ODBC drivers, it has the ACID compliance. Which means I don't need to worry about these things as an application developer. So I'm laying out the reasons that I like to use Vertica. So I touched on this already, but essentially what's amazing is that Vertica Eon is basically using S3 as an object store. And of course, there are other offerings, like the one that Vertica does with pure storage that doesn't use S3. But what I find amazing is how well the system performs using S3 as an object store, and how they manage to keep an actual consistent database. And they do. We've had issues where we've gone and shut down hosts, or hosts have been shut down on us, and we have to restart the database and we don't have any consistency issues. It's unbelievable, the work that they've done. Essentially, another thing that's great about the way it works is you can use the S3 as a shared object store. You can have query nodes kind of querying from that set of files largely independently of the nodes that are writing to them. So you avoid this sort of bottleneck issue where you've got contention over who's writing what, and who's reading what, and so on. So I've found the performance using separate subclusters for our UI and for the ingest has been amazing. Another couple of things that they have is they have a lot of in-database machine learning libraries. There's actually some cool stuff on their GitHub that we've used. One thing that we make a lot of use of is the sequence and time series analytics. For example, in our product, even though we do all of this stuff autonomously, you can also go create alerts for yourself. And one of the kinds of alerts you can do, you can say, "Okay, if this kind of event happens within so much time, and then this kind of an event happens, but not this one," Then you can be alerted. So you can have these kind of sequences that you define of events that would indicate a problem. And we use their sequence analytics for that. So it kind of gives you really good performance on some of these queries where you're wanting to pull out sequences of events from a fact table. And timeseries analytics is really useful if you want to do analytics on the metrics and you want to do gap filling interpolation on that. It's actually really fast in performance. And it's easy to use through SQL. So those are a couple of Vertica extensions that we use. So finally, I would like to encourage everybody, hey, come try us out. Should be up and running in a few minutes if you're using Kubernetes. If not, it's however long it takes you to run an installer. So you can just come to our website, pick it up and try out autonomous monitoring. And I want to thank everybody for your time. And we can open it up for Q and A.
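To make the sequence-alert idea concrete, here is the same kind of rule rendered in plain Python rather than against Vertica's sequence analytics: fire when event A is followed by event B within a window, with no event C in between. The event names and the five-minute window are hypothetical.

```python
from datetime import datetime, timedelta

def sequence_alert(events, first, then, but_not, window=timedelta(minutes=5)):
    """Fire when `first` is followed by `then` within `window`,
    with no `but_not` event in between. `events` is a time-sorted
    list of (timestamp, event_type) pairs."""
    alerts = []
    for i, (t0, e0) in enumerate(events):
        if e0 != first:
            continue
        for t1, e1 in events[i + 1:]:
            if t1 - t0 > window:
                break
            if e1 == but_not:
                break
            if e1 == then:
                alerts.append((t0, t1))
                break
    return alerts

ts = datetime.fromisoformat
events = [
    (ts("2020-03-30T12:00:00"), "db_stopped"),
    (ts("2020-03-30T12:01:30"), "bitbucket_500s"),
    (ts("2020-03-30T12:10:00"), "db_stopped"),
    (ts("2020-03-30T12:10:30"), "db_started"),
    (ts("2020-03-30T12:12:00"), "bitbucket_500s"),
]
print(sequence_alert(events, "db_stopped", "bitbucket_500s", "db_started"))
# Only the first pair alerts; the second occurrence is suppressed by db_started.
```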

Published Date : Mar 30 2020


Michael Lauricella, Atlassian & Brooke Gravitt, Forty8Fifty | Splunk .conf2017


 

>> Announcer: Live, from Washington DC, it's the CUBE. Covering .conf2017. Brought to you by Splunk. >> And welcome back here on theCUBE. John Walls and Dave Vellante, we're in Washington DC for .conf2017, Splunk's annual get together coming up to the nation's capital for the first time. This is the eighth year for the show, and 7,000 plus attendees, 65 countries, quite a wide menu of activities going on here. We'll get into that a little bit later on. We're joined now by a couple of gentlemen, Michael Arahuleta who is the Vice President of Engineering at Atlassian, Michael, thank you for being with us. >> Thank you, actually it's Director of Business Development. >> John: Oh, Director of Business Development, my apologies >> He's doin' a great job >> My apologies. >> I don't need that. >> Oh very good. And Brooke Gravitt, who I believe is the VP of Engineering, >> There ya go. >> And the Chief Software Architect at Forty8Fifty. >> Yep, how ya doin'? >> No promotions or job assignments, I've gotcha on the right path there? >> Yeah, yeah. >> Good deal, alright. Thank you for joining us, both of you. First off, let's just set the stage a little bit for the folks watching at home, tell us a little bit about your company, descriptions, core competencies, and your responsibilities, and then we'll get into the intersection, of why the two of you are here. So Michael, why don't you lead off. >> So Atlassian, we, in our simplest form, right, we make team collaboration software. So our goal as a company is to really help make the tools that companies use to collaborate and communicate internally. Our primary focus, and kind of our bread and butter has always been making the tools that software companies use to turn around and make their software. Which is a great position to be in, and an increasingly we're seeing ourselves expand into providing that team collaboration software products like Jira, Confluence, BitBucket, and now, the new introduction of a product called Stride, which is a real time team collaboration product, not just for technical teams, but we're really seeing a great opportunity to empower all teams 'cause every team in every organization needs a better way to communicate and get things done. That's really what Atlassian core focus is all about. >> John: Gotcha. Brooke, if you would. >> Yeah, so Forty8Fifty Labs, we're the software development and DevOps focused subsidiary of Veristor Systems based out of Atlanta. We focus primarily on four key partners, which would be Atlassian, Splunk, QA Symphony, and Red Hat, and primarily, we do integrations and extensibility around products that these guys provide as well as hosting, training, and consulting on DevOps and Atlassian products. >> So the ideal state in your worlds is you've got -- true DevOps, Agile, infrastructure as code, I'll throw all the buzzwords out at ya, but essentially you're not tossing code from the development team into the operations team who them hacks the code, messes it up, points fingers, all that stuff is in part anyway what you're about eliminating, >> Right. >> And getting to value sooner. Okay, so that's the sort of end state Nirvana. Many companies struggle with that obviously, You got, what, Gartner has this term, bimodal IT, which everybody, you know, everybody criticizes but it's sort of true. You've got hybrid clouds, you've got, you know, different skillsets, what is the state of, Agile development, DevOps, where are we in terms of organizational maturity? Wonder if you guys could comment. 
>> I'll start with that right, I think -- Even though we've been talking about DevOps for a while and companies like Atlassian and Splunk, we live and breathe it. I still think when you look at the vast majority of enterprises, we're still at the early stages of effectively implementing this. I think we're still really bringing the right definition to what DevOps is, we're kind of go through those cycles where either a buzzword gets hot, everybody glams onto it, but no one really knows what it means. I think we're really getting into that truly understanding what DevOps means. I know we've been working hard at Atlassian to really define that strong ecosystem of partners. We really see ourselves as kind of in the middle of that DevOps lifecycle, and we integrate with so many great solutions around monitoring and logging, testing, other operational softwares, and things of that nature to really complete that DevOps lifecycle. I think we're really just now finally seeing it come together and finally starting to see even larger organizations, very large Fortune 100 companies talk about how they know they've got to get away from Waterfall, they've got to embrace Agile, and they've got to get to a true DevOps culture, and I think that's where Atlassian is very strong, devs have loved us for a long time. Operations teams are really learning to embrace Atlassian as well. I think we're really going to great position to be at that mesh of what truly is DevOps as it really emerges in the next couple years. >> Brooke, people come to Forty8Fifty, and they say, alright, teach me how to fish in the DevOps world, is that right? >> Yeah, absolutely. I mean, one of the challenges that you have in large enterprises is bringing these two groups of people together, and one of the easy ways is to go out and buy a tool, I think the harder and more difficult challenge that they face is the culture change that's required to really have a successful DevOps transformation. So we do a little bit of consulting in that area with workshops with folks like Gene Kim, Gary Gruver, Jez Humble that we bring in who are sort of industry icons for that sort of DevOps transformation. To assist, based on our experiences ourselves in previous companies or engagements with customers where we've been successful. >> So the cloud native guys, people who are doing predominantly cloud, or smaller companies, tech companies presumably, have glommed onto this, what about the sort of the Fortune 1000, the Global 2000, what are we seeing in terms of their adoption, I mean, you mentioned Waterfall before, you talk to some application development heads will say, well listen, we got to protect some of our Waterfall, because it's appropriate. What are you seeing in the sort of traditional enterprise? >> We see the traditional enterprise really embracing Agile in a very aggressive way. Obviously they wouldn't be working with Atlassian if they weren't, so our view is probably a little bit tilted. Companies that engage with us are the more open to that. 
But we're definitely seeing that far and away the vast majority in the reports that we get from our partners like Forty8Fifty Labs is that increasingly larger and larger companies are really aggressively looking to embrace Agile, bring these methodologies in, and the other simple truth is with the way Atlassian sells -- the way we sell our products online, we have always sort of grown kind of bottoms up inside a lot of these large organizations, so where officially IT may still be doing something else, there are always countless smaller teams within the organization that have embraced Atlassian, are using Atlassian products, and then, a year down the road, or two years down the road, we tend to then emerge as the de facto solution for the organization after we kind of spread through all these different groups within the company. It's a great growth strategy, a lot are trying to replicate it. >> Okay, what's the Splunk angle? What do you guys do with Splunk, and how does it affect your business? >> Mike: Do you want to start? >> Sure, so, we're both a partner of Splunk, a customer of Splunk, and we use it in our own products in terms of our hosting, and support methodologies that we leverage at Forty8Fifty. We use the product day in and day out, and so with Atlassian, we have pulled together a connector that is -- one half of it is a Splunk app, it's available on Splunkbase, and the other part is in the Atlassian marketplace, which allows us to send events from Jira Service Desk, ticketing events, over to Splunk to be indexed. You have a data model that ties in and allows you to get some metrics out of those events, and then the return trip is to -- based on real time searches, or alerts, or things that you have -- you're very interested in reports, you can trigger issues to be created inside of Jira. >> I think the only thing to add to that, so definitely, that's been a great relationship and partnership, and we're seeing an increasing number of our partners also become partners with Splunk and vice versa, which is great. The other strong side to this as well, is our own internal use of Splunk. So, we as a company, we always like to empower our different teams to pick whatever solution they want to use, and embrace that, and really give that authority to the individual teams. However, with logging, we were having a huge problem where all of our different teams were using over a whole host variety of different logging solutions, and frankly not to go into all the details, it was a mess. Our security team decided to embrace Splunk and start using Splunk, and really got a lot of value out of the solution and fell in love with the solution. Which says a lot, because our security team doesn't normally like much of anything, especially if it's not homegrown. That was a huge statement there, and then quickly Splunk now has spread to our cloud team which is growing rapidly as our cloud scales dramatically. Our developers are using it for troubleshooting, our SREs and our support team for incident management, and it's even spread to our marketplace, which is one of the larger marketplaces out there today for third party apps. Then the new product, Stride, for team collaboration is going to be very dependent on Splunk for logging as well. It's become that uniform fabric. I even heard a dev use a term which I've never heard a dev talk about logs and talk about log love, which is no PR, that is the direct statement from a developer, which I thought was amazing to hear.
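A rough sketch of the round trip described here: a Jira Service Desk ticket event pushed into Splunk over the HTTP Event Collector, and a Splunk alert opening a Jira issue on the way back. The endpoints, tokens, project key, and sourcetype are placeholders, and this only shows the shape of the integration, not the packaged Splunk app or marketplace connector itself.

```python
import requests

# Placeholders -- real deployments get these from the packaged connector's config.
SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"
JIRA_URL = "https://jira.example.com"
JIRA_AUTH = ("svc-splunk", "api-token")

def send_ticket_event_to_splunk(issue_key: str, status: str, summary: str):
    """Forward a Jira Service Desk ticket event to Splunk's HTTP Event Collector."""
    payload = {
        "sourcetype": "jira:servicedesk",
        "event": {"issue": issue_key, "status": status, "summary": summary},
    }
    requests.post(SPLUNK_HEC,
                  headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                  json=payload, timeout=10)

def create_jira_issue_from_alert(alert_name: str, result: dict) -> str:
    """Return trip: a Splunk saved-search alert opens a Jira issue."""
    fields = {
        "project": {"key": "OPS"},
        "issuetype": {"name": "Task"},
        "summary": f"Splunk alert: {alert_name}",
        "description": str(result),
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json={"fields": fields}, auth=JIRA_AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["key"]
```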
'cause you know, they just want to code and make stuff, they don't want to deal when it actually breaks and have to fix it. But with Splunk they've actually -- They're telling me they actually enjoy that. So that's a great -- >> That's more than the answer is in the logs, that's there's value in our logs, right? >> Yeah, a ton of value, right? Because at the end of the day, these alerts are coming in and then we use tools like the Forty8Fifty Labs tool to get those tickets into Jira. Those logs and things are coming in, that means there's an issue and there's something to be resolved and there's customer pain. So the quicker we can resolve that, that log is that first indicator of what's going on in the cloud and in our platforms to help us figure out how do we keep that customer happy? This isn't just work, and just a task, this is about delivering customer value and that log can be that first indicator. The sooner you can get something resolved, the sooner the customer's back to getting stuff done and that's really our focus as a company, right? How do we enable people to get things done? >> Excuse me, when you are talking about your customers, what are their pain points? Today? I mean, big data's getting bigger and more capabilities, you've got all kinds of transport problems and storage problems, and security problems, so what are the pain points for the people who are just trying to get up to speed, trying to get into the game, and that the kind of services you're trying to bring to them to open their eyes. >> I think if you look at the value stream mapping and time to market for most businesses, where Splunk and Atlassian play in is getting that fast feedback. The closer in to the development side, the left hand side of value stream that you can pull in, key metrics, and get an understanding of where issues are, that actually -- it's much less expensive to fix problems in development than when they're in production, obviously. Rolling things like Splunk that can be used as a SIM to do some security analysis on, whether it be product code or business process early, rather than end up with a data breach or finding something after it's already in production. That kind of stuff, those are the challenges that a lot of the companies are facing is -- especially when the news, if you look at all the things that are goin on from a security perspective, taking these two products and being able to detect things that are going on, trends, any sort of unusual activity, and immediately having that come back for somebody in a service desk to work on either as a security incident or if it's a developer finding a bug early in the lifecycle, and augmenting your sort of infrastructure as code, the build out of the infrastructure itself. Being able to log all that data, and look at the metrics around that to help you build more robust enterprise class platforms for your teams. >> We've been sort of joking earlier about how the big data, nobody really talks about big data anymore, interestingly, Splunk who used to never talk about big data is now talking about big data, cause they're kind of living it. It's almost like same wine, new bottle with machine learning and AI and deep learning are all kind of the new big data buzzwords, but my question is, as practitioners, you were describing a situation where you can sort of identify a problem, maybe get an alert, and then manually I guess remediate that problem, how far away are we from -- so the machines automating that remediation? Thoughts on that? 
>> Am I first up? >> You guys kind of -- >> We've done a lot of automated remediation. Closed-loop remediation is what you call it. The big challenge is, it's a multi-disciplinary effort, so you might have folks that need to have expertise between network and systems and the application stack, maybe load balancing. There's a lot of different pieces there, so step one is you got to have folks that have the capacity to actually create the automation for their domain of expertise, and then you need to have sort of that cross platform DevOps mindset of being able to pull that together and the coordinator role of let's orchestrate all of the automations, and then hopefully out of that, combined with machine learning, some of the stuff that you can do in AWS, or with what IBM's got out. You can take some of that analysis and be a little bit smarter about running the automation. In terms of whether that's scaling things up, or when -- For example, if you're in a financial industry and you've got a webpage that people are doing bill pay for, if you have a single website down, a web server down, out of a farm of 1000, in a traditional NOC, that would be kind of red on a dashboard. It's high, it's low priority, but it's high visibility and it's just noise, and so leveraging machine learning, people do that in Splunk to really refine what actually shows up in the NOC, that's something I think is compelling to customers. >> How are devs dealing with complexity, obviously, collaboration tools help, but I mean, the level of complexity today, versus when you think back to client server, is orders of magnitude greater for admins and developers, now you got to throw in containers and microservices, and the amount of data, is the industry keeping pace with the pace of escalation of complexity, and if so, how?
>> As ecosystem partners, how do you view the evolution of Splunk, is it becoming a application platform for you? Are you concerned about swim lanes? I wonder if you could talk about that? >> I personally, I don't see any real concerns of overlap between Splunk and Atlassian. In our view at Atlassian is, we tend to work very closely with people kind of fit into that frenemy category, and they're definitely a partner that we overlap with I think in very very few ways. If and when we ever do, I mean in a way, that's kind of something we always embrace as a company. I mean one thing we'll say a lot is overlap is better than a gap. Because if there's a gap between us and a partner, then that's going to result in customer pain. That means there's nothing that's filling that void. I'd rather have some overlap, and then give the customer the power to choose how do they want to do it. I mean, Splunk says you can probably do it this way, Atlassian says you could do it this way, as long as they can get stuff done, and that's always -- it's not a cliche from us, I mean that's a core message from Atlassian, then we're happy. Regardless if they completely embrace it our way, a little bit, a little deviation, that's not what really matters. >> Too much better than too little. >> Exactly. >> Is what it comes down to. Gentlemen, thanks for being with us. >> Thank you. >> We appreciate the time today and look forward to seeing you down the road and looking as your relationship continues. Not only between the two companies, but with Splunk as well. Thanks for being here. >> Mike: Thank you guys. >> We continue theCUBE does, live from Washington DC here at .conf2017, back with more in just a bit.

Published Date : Sep 26 2017


Mike Kail, Cybric | CUBE Conversation with John Furrier


 

(uplifting music) >> Welcome everyone to CUBEConversation here in Palo Alto, California, theCUBE Studios, I'm John Furrier, the co-host of theCUBE and co-founder of SiliconANGLE Media. Our next guest is Mike Kail, the CTO of Cybric, a security company industry veteran, welcome, good to see you. Glad we got you, get some time, your time today. >> No, absolutely John, thanks for having me. >> Yeah, so you've been through -- seen a lot of growth in the waves. The big web scale, and now as we go full cloud and hybrid cloud, private cloud and public cloud, whole new paradigm shift on security. Many have Dave Velante ask Pat Gelsinger many times, do we need a security do over? The general consensus from everyone is, yes. (laughing) We need a do over. What's the state of the market with security right now as people scratch their head, they've been throwing the kitchen sink at everything, but yet, the attacks are still up. That's not good, so what's the solution? What's going on? >> I mean I think a level set like we've talked about the definition of insanity is doing the same thing over and over, and in security for sure, we've been doing the same thing. We have firewalls, nextgen firewalls, endpoint, you know, product X, product Y, this has got a better algorithm. Has anything really helped? I think in this post Equifax world, and now post SEC world, things are not getting better. We need to step back, and I think we need to really think about how do we bring security assurance into the assembly and delivery of applications, and move it back into the code as well, which is our thesis on shifting left and embedding security into the SDLC. I think there needs to be some design thinking around security as well. Today it's like this fear, uncertainty, and doubt -- it's sold on fear, and that bad things are happening. Let's bring the conversation into visibility. >> I mean, there's so many different lifecycles you've mentioned is really key, and I think I want to just drill down on that because the observation, I'll get your reaction on this, is security shouldn't be a cost center, security should be tied to core objectives of a company, should be reporting to the board, C-level type access should be invested in. At the same time, the architecture of security, not just organizationally funded, cloud and datacenter need to be looked at holistically. There's no one product. So that means okay, one, that's the customer viewpoint, but then you got to actually put the software out there. So, what's your reaction to that trend of security being not actually part of the IT department whether it is or not is irrelevant, it's more of, how it's viewed. Are you staffing properly? How are you staffing? Is it a cost center, or is it tied to an objective? Does it have free reign to set up policies, standards, et cetera? What's your thoughts on this? >> I think, and I've talked about this recently, the technology is there. The culture is lagging behind. Security's always been -- >> Culture is lagging or not? >> Is lagging. Security is traditionally been kind of this -- Like IT was in the past, pre-DevOps culture, security is the Department of No. Coming in and not thinking about driving business revenue and outcome, but pointing fingers and accusing people and yelling at people. It creates this contentious environment, and there needs to be collaboration around, like, how do we drive the business forward with security assurance not insurance? The latter is not helping. 
>> So, that's a good point, I want to drill down on DevOps, you mentioned DevOps, that's -- you and I have talked about this before at events. DevOps movement has happened. It's happening, and continuing to happen at scale. DevOps is pretty much on the agenda, make it happen. But, it's hard to get DevOps going when there's so much push on application development, so, you have old school traditional application development now with DevOps, and then you got pressure for security. It seems to be a lot on the plate of executives and staffs to balance all of that. So how do you roll up the best security into a DevOps culture, in your opinion? >> I think you have to start embedding security into the DevOps culture and the software development lifecycle, and create this collaborative culture of DevSecOps. >> What is DevSecOps? >> It's making -- you think about the core tenets of DevOps being collaboration, automation, measurement, and sharing. Security needs to take that same approach. So instead of adding or bolting on security at the end of your development and delivery cycle, let's bring it in and find defects early on from what we talk about, from code commit, to build, to delivery, and correlate across all of those instead of these disparate tasks and manual tasks that are done today. >> Where are we on this? First, by the way, I agree with you, I love that idea, because you're bringing agility concepts to security. How far -- what's the progress on this relative to the industry adoption? Is it kind of pioneering right now stage, is it a small group of people, remember, go back to 2008, you remember, the cloud was a clouderati, was a handful of people. I would go to San Francisco, there'd be six of us. Then NGR would come on, then there's Heroku, then there's like Rackspace, and then Amazon was still kind of rising up. It feels like DevOps, DevSecOps, is beyond that, I mean, where is the progress? >> I think out in the real world, especially outside of Silicon Valley, it's still really early days. People are trying to understand, but as we were chatting about before the show, I feel like in the past few months there's definitely momentum gaining rapidly. I think with conferences like DevSecCon, Security Boulevard coming out from Alan Shimel and his team, like there's building more and more awareness, and we've been trying to drive it as well. So I think it's like the early days of cloud. You'll see that, "Okay, there's a bunch of -- okay I don't think this is a real thing", and now people are like, "Okay, now I need to do it, I don't want to be the next Equifax, or large breach. So how do I bring security in without being heavy handed." >> Interesting you mentioned Equifax, I mean our reporting soon to be showing, will demonstrate that a lot of what's been reported is actually not what really happened at this. They've been sucked dry 10 times over, and that the state actors involved as a franchise in all of this, it's beyond -- Amazing how complicated this -- these hacks are, so how does a company, prepare against the coordination at that level - I mean, it's massive, I mean, someone dropped the ball on the VPN side, but I mean, clearly, they were out-maneuvered, outfoxed if you will. >> Well, I mean I think it has to come from the top, like security has to stop being quote unquote important, and become a priority. Not the number one priority, but you have to think about it with respect to business risk. And Equifax aside, a lot of companies just have poor hygiene.
They don't practice good security hygiene across all of the attack vectors. If you look at now, the rise of the developer, Docker containers, moving to cloud, mobile, there's all of these ways in, and the hackers only have to be right once. We on the defensive side, have to be right all the time. >> Hygiene is a great term, but if it's also maybe even more than that. It's like they just need an IQ as well, so you got to have, you've got this growth in Kubernetes, you got containers, you got a lot going on at layer four and above in the stack, that are opportunities as you said, the tech's out there. So, again, back to the organizational mindset, because this is where DevOps really kind of kicked ass, you had an organizational mindset, then you had showcases, people built their own stuff. You go back to the early pioneers, you were involved with a few of them, Facebook built their own stuff, because they had to. >> Mike: Yeah, there was nothing else. >> There was nothing else, so they had to build it. Now a lot of the successes in the web scale days were examples of that. So is that a similar paradigm, are people building their own, are you guys working with one, is that right? How should people think about how to look for use cases, how should they look for successes, who's doing anything? Can you point to any examples of that's kickass DevSecOps? >> I mean, obviously I'm biased, but I think -- >> (laughing) >> the Cybric platform is really trying to take all of the different disparate tools and hyperconverge them onto an automation orchestration platform. Now you can be at all parts of the SDLC, and give the CIO and CSO visibility. I think the visibility aspect with the move to cloud and containers, and Kubernetes, and you name your favorite technology, there's a lack of visibility. You can't secure what you don't know about. >> Take a minute to talk about Cybric for a minute, 'cause you brought up the product, I want to just double down on that. What do you guys do, what's the product, just give a quick one minute, two minutes, update on for the folks on what you guys do. >> Sure, so we're a cloud security as a service platform, so it's delivered SAS, that has a policy driven framework to automate code and application security testing and scanning from code commit, to build assembly, to application delivery, and correlate that testing and the results and provide you, your business resiliency. So we talk about internal rate of detection, internal rate of remediation, and if you can narrow that window, you become much more resilient. >> Alright, so, let me give you an example. Just throw this out since we're here. A little test here -- Test your security mojo here. I go to China. I happen to bring my phone and my Mac, I connect to the -- oh, free Wifi! Boom, I get a certificate, my phone updates from Apple, I think I'm on a free WiFi network, it's a certificate from China, I get the certificate here, they read all my mail while I'm over there, but I'm not done, I come home. And I go back to the enterprise. How do you guys help me, the company identify that I'm now infected at maybe the firmware level or you know, I mean, that's -- what people are talking about all the time right now. You're smiling, he's like, yeah that happens. >> First of all, I would never let you leave to go to a country like that without a burner phone and a burner laptop, but not take -- and don't log into anything, don't connect to anything. 
>> Is that -- >> It's about building awareness, so I -- >> Hold on in all seriousness that's essentially best practice in your opinion? Not to have your laptop in China, is that the thing? >> Yeah, I don't think, you're not going to be safe. Like, there's so many ways to subvert you, whether you accidentally connect to public WiFi, you join the wrong network, somebody steals your laptop, I mean, there's just all the -- there's a lot of bad things that can happen, and not much upside for you. >> Okay, so now back to the enterprise, so I get back in, what kind of security -- how do you guys look at that, so if you're doing agile or DevSecOps, Is there software that does that, is it the methodology, are there mechanisms, how do companies think through some basic things like that, that entry point? Because then that becomes an insider threat from a backdoor. >> Right, so I think you have to have this continuous scanning approach. The days of doing a pen test on your application once a quarter, meanwhile hackers are doing it continuously behind the scenes, you have to close that chasm. But I think we need to start early on and build awareness for developers. One reporter used the analogy, it's like spell check from Microsoft Word. Now as I'm committing code, I can run a scan and say okay you have this vulnerability, here's how to go remediate it, and you do that, and we don't impact velocity. >> So you have to be on top of a lot of things. But that also is into the team's approach. What is the product that you guys have? Is it software, is it -- a box, how do you guys -- what's the business model for Cybric? >> We're software that overlays into the SDLC, and we plug in at these key points of the SDLC. So committing into your code repo, such as GitHub or BitBucket, at the artifact build stage, so Jenkins, Travis, Circle -- >> So you're at the binary level? >> Yeah. >> Okay. >> So there we look for open source and third-party libraries and do source code composition of the artifact. Now you make sure that you're not vulnerable to Apache Struts, you have updated and patched to the latest version. Then pre-delivery, we replicate your application environment, and aggressively scan for the OWASP Top 10. So SQL injection, cross site scripting -- >> Yeah. >> And alert you, and allow you to play offense. So we now remediate the vulnerability before it's ever exposed to production. >> Where are you guys winning, give some examples of when someone needs to get you guys in, is it a full on transformational thing, can I come in and engage with Cybric immediately in little kind of POCs, what's the normal use case that you guys are engaging with companies on? >> It really depends where you are in your company with this whole DevOps, DevSecOps migration, but we're agnostic to the methodology in your environments, so we can start at the far right, and just do AppSec scanning, we can start at the middle of the build, at the left, code, or all of that. There's this notion of I have to be ready for security, you don't have to be ready. We help you -- The hardest part is getting started, and we help you get started. You'll see a blog post or an article from us say, "Stop the fudge, just get started." That's how you have to approach this. This paralysis that exists has to end.
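As an illustration of those plug-in points, here is a hypothetical policy-driven wrapper -- not Cybric's actual API -- that runs a scan at a couple of the SDLC stages described above and reports the result to Slack. The repo, webhook URL, and tool commands are stand-ins; any SAST or dynamic scanner you already use could slot in.

```python
import subprocess
import requests

# Hypothetical policies, shaped like the model described in the conversation:
# a target, a tool to run against it, and a notification channel.
POLICIES = [
    {
        "stage": "commit",        # code repo stage (e.g. GitHub / Bitbucket webhook)
        "target": "git@github.com:example/app.git",
        "command": ["bandit", "-r", "."],                      # illustrative SAST tool
        "notify": "https://hooks.slack.com/services/T000/B000/XXXX",
    },
    {
        "stage": "pre-delivery",  # replica of the application environment
        "target": "https://staging.example.com",
        "command": ["zap-baseline.py", "-t", "https://staging.example.com"],
        "notify": "https://hooks.slack.com/services/T000/B000/XXXX",
    },
]

def run_policy(policy: dict) -> int:
    """Run one scan and push the outcome to Slack; return the tool's exit code."""
    proc = subprocess.run(policy["command"], capture_output=True, text=True)
    text = f"[{policy['stage']}] scan of {policy['target']} exited {proc.returncode}"
    requests.post(policy["notify"], json={"text": text}, timeout=10)
    return proc.returncode

# Wired into CI (for example a Jenkins stage or a repo webhook), a non-zero
# exit code can fail the build before the vulnerability ever reaches production.
```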
>> On that paralysis thing -- pretend I'm a customer for a second, Mike. I'm burnt out, I got a gun to my head every day, I come in, I got every single security vendor lining up begging for my attention, why should I pay attention to you? What's in it for me? How do you answer that? >> So first of all, you know, what do you want to achieve? What is your current state? Where are your code repos, where are your application deployments, what are you doing today? How do we make that a continuous process? It's understanding the environment, having some situational awareness and a bit of EQ, instead of going in and pounding on them with a product. >> Do you guys then go in and train my staff? I'm trying to think what's the commitment from me, what do I need to do? >> It's -- our policies are very simple. You define a target, which is your source repo, your build system, or your application; you define the tool or integration you want to run, so I want to run Metasploit against my application, I want to do it every hour, and I want to be notified via a Slack channel notification. >> That sounds really easy to implement. It sounds -- >> It's four steps. Literally a POC takes 15 minutes to onboard. >> So what's the outcome, what are some of the successes you've had after a POC? It sounds complicated, but really the methodology is more of a mindset for the organization, so I love the DevOps angle on that. But okay, I can get in, I kick the tires, I do the four steps, I go, "Oh, this is awesome." What happens next, what normally goes on? >> What often happens in the past is you run a test and you're inundated with results -- you know, there's critical warnings, some informational, and some like blood-red ones. But you don't know where to start on prioritizing them. We've normalized the output of all these tools, so now you know exactly where to start: what are the important vulnerabilities to start with, and go down from there, versus throwing this over the fence to dev, upsetting them, and having a contentious conversation. So we implicitly foster the collaborative nature of DevSecOps. >> Cool. So competition. Who do you guys compete with, how do you guys -- who do you run into in the field, what are customers looking at that would compare to you guys, that people could think about? >> I think our biggest competition, to be honest, is the companies that want to -- that tried to do that themselves. The DIY, not-invented-here crowd. I mean, we've talked to a couple companies, they've tried to do this for two years, and they failed, and, you know, outside of us trying to sell something, like, is it really in your company's best interest to have a team dedicated to building this platform? I think there's a couple other big companies out there that do part of it, but we architected this from the ground up to be unique and somewhat differentiated in a very crowded security market. >> What's your general advice? You know, a friend comes to you, a CIO friend, hey Mike, you know, dude, bottom line, what's going on with security? How do you -- what's your view of the landscape right now? Because it certainly is noisy, again like I said, the number of software tools, and hundreds of billions of dollars being spent according to Gartner, yet the exploits are still up, so it's not like it's having any effect. (laughing) Someone's winning.
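A policy of that shape -- a target, a tool, an interval, and a Slack notification -- could be modeled along these lines. This is a hypothetical sketch: the field names are invented, the scan runner is a stub, and the Slack call assumes a standard incoming-webhook URL rather than any Cybric API.

# Hypothetical policy object: target + tool + schedule + notification channel.
# Not Cybric's schema; just an illustration of the four pieces described above.
import json
import time
import urllib.request
from dataclasses import dataclass

@dataclass
class ScanPolicy:
    target: str            # source repo, build system, or running application
    tool: str              # e.g. "metasploit"
    interval_seconds: int  # how often to run the scan
    slack_webhook: str     # Slack incoming-webhook URL

def notify_slack(webhook: str, message: str) -> None:
    """Post a message to a Slack channel via an incoming webhook."""
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(webhook, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run_scan(policy: ScanPolicy):
    """Placeholder scan: a real runner would invoke the configured tool here."""
    return [f"{policy.tool} finding against {policy.target}"]

def enforce(policy: ScanPolicy) -> None:
    # Run the configured tool on a fixed interval and report findings to Slack.
    while True:
        findings = run_scan(policy)
        if findings:
            notify_slack(policy.slack_webhook,
                         f"{len(findings)} findings from {policy.tool} on {policy.target}")
        time.sleep(policy.interval_seconds)

# Example: scan the app every hour, report to a Slack channel.
policy = ScanPolicy(target="https://app.example.com", tool="metasploit",
                    interval_seconds=3600,
                    slack_webhook="https://hooks.slack.com/services/XXX")
# enforce(policy)  # left commented out in this sketch: it runs forever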
So if there's more tools, either the tools are ineffective or there's just more volume of attacks, probably both, but -- you go, oh my God, there's nothing really going on here. There's no innovation. What's the landscape look like, how do you describe it in kind of simple terms, the security landscape? Crazy, out of control, chaotic, I mean, what's -- >> I mean, if you go to RSA and walk the floor, it's like all of the same buzzwords got exploded, and there's no real solutions that address the need -- like we talked about, I said earlier, the definition of insanity is doing the same thing over and over; we keep deploying the same products and having the same results, and not being more secure. I think there needs to be a rationalization process. You can't just go buy tools and expect them to solve all of your problems. You have to have a strategic framework instead of a tactical approach. >> Alright, so I'll say to you, as another example, I got IoT on my agenda, I got a lot of industrial equipment that's now going to connect to the IP network -- it used to go over some of its own proprietary backhaul, but now I'm on the IP network. Mike, how does this play into that? Obviously it's going to open up some more surface area for attacks, how do you guys work with that? >> I think it goes back to having this continuous security scanning. If you have all of these IoT devices, you have to know how they're operating. You can't just send a bunch of log data to your SIEM and try to extract that signal from the noise and overwhelm your security operations center. How do you run that through kind of a, let's call it map reduce for lack of a better term, to extract that signal from the noise and find out, is this device talking to this one, is that correct, or is this anomalous? But it has to be continuous, that cannot be periodic. >> Obviously data is important. My final question to end the segment is, the role of data and the role of DevOps are impacting the security practice. What's the reality, where are we? First inning, second inning? Data obviously important, comment on that, and then DevOps' impact on security. Obviously you see momentum. What are your thoughts? >> I don't think we've gotten out of the dugout yet to start the first inning -- >> (laughing) >> Which is exciting in some ways if you're a start-up, or depressing if you're an enterprise. But we have to take a different approach, going back to how we started this conversation. The current approaches aren't working. We have to think differently about this. >> Okay, so we're in the early innings. I'm a pioneer, an early adopter, because I'm desperate or I really want to be progressive -- why am I calling Cybric? >> I think because you want -- you understand that security needs to be more of a priority, you want to shift that left, and find defects and vulnerabilities early on in your product lifecycle. If you're a head of product, wouldn't you want to have some security assurance, rather than have the security team come in and find a bunch of vulnerabilities the day before your launch and delay your delivery date? >> So, security as a service, as you said. Mike Kail, CTO of Cybric, bringing his expert opinion here into theCUBE Conversation here at Palo Alto. I'm John Furrier, thanks for watching. (upbeat music)
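For the IoT point, a map-reduce-style pass over flow logs might look roughly like this sketch: group device-to-device conversations and flag any pair that is not in a known-good baseline. The log format and the baseline here are invented for illustration, not any particular product's data model.

# Hypothetical sketch: extract signal from IoT flow logs by aggregating
# device-to-device pairs (the "map reduce" idea) and flagging pairs not in a baseline.
from collections import Counter

# Baseline of device pairs that are expected to talk to each other (invented).
ALLOWED_PAIRS = {("sensor-12", "gateway-1"), ("gateway-1", "historian")}

# Flow log records as (source device, destination device) tuples, also invented.
flow_log = [
    ("sensor-12", "gateway-1"),
    ("sensor-12", "gateway-1"),
    ("gateway-1", "historian"),
    ("sensor-12", "unknown-host-203"),   # anomalous: not in the baseline
]

# "Map": emit each pair; "reduce": count conversations per pair.
pair_counts = Counter(flow_log)

for pair, count in pair_counts.items():
    status = "ok" if pair in ALLOWED_PAIRS else "ANOMALOUS"
    print(f"{pair[0]} -> {pair[1]}: {count} flows [{status}]")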

Published Date : Sep 21 2017

SUMMARY :

John Furrier of theCUBE talks with Mike Kail, CTO of Cybric, about the state of the security market and the rise of DevSecOps. Kail argues that current approaches aren't working: security has to be embedded throughout the SDLC and reported up to the board rather than bolted on at the end, and defenders have to be right all the time while attackers only have to be right once. He describes Cybric's security-as-a-service platform, which uses a policy-driven framework to automate code and application security scanning from code commit through build to delivery, normalizes the output of disparate tools so teams know which vulnerabilities to fix first, and remediates issues before they reach production.
