Webb Brown & Alex Thilen, Kubecost | AWS Startup Showcase S2 E1 | Open Cloud Innovations


 

>>Hi, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: Open Cloud Innovations. This is season two, episode one of our ongoing series covering the exciting startups from the AWS ecosystem. Today's episode one theme is the open source community and open cloud innovations. I'm John Furrier, your host. I've got two great guests: Webb Brown, CEO of Kubecost, and Alex Thilen, head of business development at Kubecost. Gentlemen, thanks for coming on theCUBE for the showcase of AWS startups. >>Thanks for having us, John. Great to be back, really excited for the discussion we have here. >>You're CUBE alumni from many, many KubeCons ago. You guys are in a hot area right now, monitoring and reducing Kubernetes spend. Okay, so first of all, we know one thing for sure: Kubernetes is the hottest thing going on because of all the benefits. So take us through your macro view of this market. Kubernetes is growing. What's going on with the company? What is your company's role? >>Yeah, so we've definitely seen this growth firsthand with our customers in addition to the broader market. And I think we believe that's really indicative of the value that Kubernetes provides, right? A lot of that is faster time to market, more scalability, improved agility for developer teams, and there's even more there, but it's a really exciting time for our company and also for the broader cloud native community. So what that means for our company is we're scaling up quickly to meet and support our users. Every metric of our company has grown about 4x over the last year, including our team. And the reason that one is the most important is just because the more folks and the larger our company is, the better we can support our users and help them monitor and reduce those costs, which ultimately makes Kubernetes easier to use for customers and users out there in the market. >>Okay. So I want to get into why Kubernetes is costing so much. Obviously the growth is there, but before we get there, what is the background? What's the origination story? Where did Kubecost come from? Obviously you guys have a great name, Kubecost: you guys probably reduce costs in Kubernetes. Great name, but what's the origination story? How'd you guys get here? What itch are you scratching? What problem are you solving? >>So yeah, John, you guessed it. Oftentimes the name is a dead giveaway: we're cost monitoring and cost management solutions for Kubernetes and cloud native. The backstory here is our founding team was at Google before starting the company. We were working on infrastructure monitoring, both on internal infrastructure as well as Google Cloud. We had a handful of our teammates join the Kubernetes effort in the early days, and we saw a lot of teams struggling with the problems we were solving internally at Google and that we're solving today. And to speak to those problems a little bit, you touched on how just scale alone is bringing this to the forefront, right? There are now many billions of dollars being spent on Kubernetes, and that is making this a business-critical question being asked in lots of organizations.
That, combined with the dynamic nature and complexity of Kubernetes, makes it really hard to manage costs when you scale across a very large organization. So teams turn to Kubecost today, thousands of them do, to get monitoring in place, including alerts, recurring reports, and dynamic management insights or automation. >>Yeah. I know we talked at KubeCon before, Webb, and I want to come back to the problem statement, because when you have these emerging growth areas that are really relevant and enabling technologies, you move to the next point of failure. And so you're scaling these abstraction layers, services are being turned on more and more, Kubernetes clusters are out there. So I have to ask you, what is the main cost driver problem happening in the Kubernetes space that you guys are addressing? Is it just sheer volume? Is it different classes of services? Is it different things kind of working together, different monitoring tools? Is it not a platform? Take us through the problem area. How do you guys see this? >>Yeah, the number one problem area is still actually what the CNCF FinOps survey highlighted earlier this year, which is that approximately two thirds of companies still don't have baseline visibility into spend when they move to Kubernetes. So even if you had a really complex chargeback program in place when you were building all your applications on VMs, you move to Kubernetes and most teams, again, can't answer these really simple questions. We're able to give them that visibility in real time so they can start breaking these problems down. They can start to see that, okay, it's these deployments or StatefulSets that are driving our costs, or no, it's actually these workloads talking to S3 buckets that are really driving egress costs. So it's really about, first and foremost, just getting the visibility, getting the eyes and ears. We're able to give that to teams in real time on the largest-scale Kubernetes clusters in the world. And again, most teams, when they first start working with us, don't have that visibility, and not having that visibility can have a whole bunch of downstream impacts, including not getting costs right, performance right, et cetera. >>Well, let's get into those downstream benefits, problems, and situations. But the first question I have, just to throw a naysayer comment at you, would be: oh wait, I have all this cost monitoring stuff already. What's different about Kubernetes? What's the problem? Isn't my other tool going to work for me? How do you answer that one? >>Yeah. So I think first and foremost, containers are very dynamic, right? They're often complex, often transient, and consume variable cluster resources. And so as much as this enables teams to construct powerful solutions, the associated costs, and actually tracking those different variables, can be really difficult. And that's why we see why a solution like Kubecost that's purpose-built for developers using Kubernetes is really necessary, because some of those older, traditional cloud cost optimization tools are just not as fit for this space specifically. >>Yeah, I think that's exactly right, Alex.
And I would add to that, just the way that software is being architected, deployed, and managed is fundamentally changing with Kubernetes, right? It is deeply impacting every part of the software delivery process. And through that, decisions are getting made and engineers are ultimately being empowered to make more cost-impacting decisions. And so we've seen organizations that get real-time visibility, built for Kubernetes and built for cloud native, benefit from that massively throughout their culture, cost, performance, et cetera. >>Well, can you just give a quick example? Because I think that's a great point. The architectures are shifting, they're changing, there are new things coming in, so it's not like you can use an old tool and just retrofit it. Sometimes that's awkward. What specific things do you see changing with Kubernetes that environments are leveraging? >>Yeah. One would be all these Kubernetes primitives and concepts that didn't exist before. I'm not managing just a generic workload, I'm managing a StatefulSet or, you know, three ReplicaSets. So having a language that is very much tailored towards all of these Kubernetes concepts and abstractions matters. But then secondly, we're seeing this very obvious push towards microservices, where typically, again, you're shipping faster, teams are making more distributed or decentralized decisions, and there's not one single point where you can gate-check everything. And that's a great thing for innovation, right? We can move much faster. But for some teams, not using a tool like Kubecost means sacrificing having a safety net in place, right? Or guardrails in place to really help manage and monitor this. And I would just say, lastly, a solution like Kubecost, because it's built for Kubernetes, sits in your infrastructure. It can be deployed with a single Helm install. You don't have to share any data remotely. But because it's listening to your infrastructure, it can give you data in real time. And so we're moving to this world where you can make real-time automated decisions, or manual decisions, as opposed to waiting for a bill a day, two days, or a week later, when it may already be too late to avoid... >>Or you got the extra costs, and you know what, nobody wants that. And you've got to fight for a refund. Oh yeah, I threw a switch, or wasn't paying attention, or human error, or code, because a lot of automation is going on. So I could see that as a benefit. I've got to ask the question on developer uptake, because you mentioned a good point there. That's another key modern dynamic: developers are in the moment, making decisions on security, on policy, on things to do in the CI/CD pipeline. So if I'm a developer, how do I engage with Kubecost? Can I just download something? Is it easy? How's the onboarding process for your customers? >>Yeah, great question. So first and foremost, I think this gets to the roots of our company and the roots of Kubecost, which is born in open source. Everything we do is built on top of open source. So the answer is you can go out and install it in minutes.
Like thousands of other teams have. The recommended or preferred route on our side is a Helm install. Again, you don't have to share any data remotely; you can truly lock down, for example, namespace egress on the Kubecost namespace. And in minutes you'll have this visibility and can start to see really interesting metrics that, again, most teams, when we started working with them, either didn't have in place at all, or they had a really rough estimate based on maybe a Kubecost Grafana dashboard that they installed. >>How does Kubecost provide the visibility across the environment? How do you guys actually make it work? >>Yeah, so we sit in your infrastructure. We have integrations with, for on-prem, custom pricing sheets, and with cloud providers we'll integrate with your actual billing data, so that we can listen for events in your infrastructure, say a new node coming up or a new pod being scheduled, et cetera. We take that information, join it with your billing data, whether it's on-prem or in one of the big three cloud providers, and then, again, in real time we can tell you the cost of any dimension of your infrastructure, whether it's one of the backing virtual assets you're using, or one of the application dimensions like a label, annotation, namespace, pod, or container, you name it.
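For readers who want to make Webb's description concrete, the following is a minimal sketch of pulling that visibility out of a Kubecost install. It assumes Kubecost was deployed with its public Helm chart and that the cost-analyzer service has been port-forwarded locally; the endpoint path, query parameters, and response field names follow Kubecost's documented allocation API as I understand it, and may differ between versions, so treat them as assumptions rather than a definitive reference.

```python
# Minimal sketch: query a Kubecost install for cost broken down by namespace.
# Assumed setup (chart name and service name may vary by version):
#   helm repo add kubecost https://kubecost.github.io/cost-analyzer/
#   helm install kubecost kubecost/cost-analyzer -n kubecost --create-namespace
#   kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090
import requests

KUBECOST = "http://localhost:9090"  # port-forwarded cost-analyzer service


def namespace_costs(window: str = "7d") -> dict:
    """Return total cost per namespace for the given window (e.g. '24h', '7d')."""
    resp = requests.get(
        f"{KUBECOST}/model/allocation",
        params={"window": window, "aggregate": "namespace"},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    # The allocation API returns a list of allocation sets; field names such as
    # "totalCost" are taken from the Kubecost docs and may differ across versions.
    costs = {}
    for allocation_set in payload.get("data", []):
        for name, alloc in (allocation_set or {}).items():
            costs[name] = costs.get(name, 0.0) + alloc.get("totalCost", 0.0)
    return costs


if __name__ == "__main__":
    for ns, cost in sorted(namespace_costs().items(), key=lambda kv: -kv[1]):
        print(f"{ns:30s} ${cost:,.2f}")
```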
>>Awesome. Alex, what's your take on the landscape with customers as they look at cost reductions? I mean, everyone loves cost reductions, and certainly I love the safety net comment that Webb made, but at the end of the day, Kubernetes is not so much a cost driver. It's more of an 'I want modern apps faster,' right? So people who are buying Kubernetes usually aren't price sensitive, but they also don't want to get gouged on mistakes either. Where is the customer path here around Kubernetes cost management and reduction at scale? >>Yeah. So one thing that we're looking forward to this upcoming year, just like last year, is continuing to work with the various tools that customers are already using and meeting those customers where they are. Some examples of that are working with CI/CD tools out there. We have a great integration with Armory Spinnaker to help customers actually take the insights from Kubecost and deploy them in a more efficient manner. We're also working with a lot of partners, like Grafana to help customers visualize our data, and we integrate with Rancher, which is a management platform for Kubernetes. And all of that, I think, is to make cost come more to the forefront of the conversation when folks are using Kubernetes, and to provide that data to customers in all the various tools they're using across the ecosystem. So we really want to surface this and make cost more of a first-class citizen across the ecosystem and the community partners. >>What's your strategy on the biz dev side? As you guys look at a growing ecosystem with KubeCon and CNCF, you mentioned that earlier, the community is growing. It's always been growing fast. The number of people entering is amazing, but now that the S curve is kicking in, integration, interoperability, and openness are always a key part of company success. What's Kubecost's vision on how you're going to do biz dev going forward? >>Absolutely. So our product is open source, and that is deeply important to our company. We're always going to continue to drive innovation on our open source product. As Webb mentioned, we have thousands of teams using our product, and most of that is actually on the free tier, but that's something we want to make sure continues to be available for the community, and we'll continue to bring that development to the community. And so I think a part of that is making sure we're working with folks not just on the commercial side, but also those open source types of products, right? Grafana is open source, Spinnaker is open source. I think a lot of the biz dev strategy is just sticking to our roots and making sure we continue to drive a strong open source presence and product for our community of users, and keep that going. >>Keep it open source and commercial, and keep it stable. Well, I've got to ask you, obviously the wave is here. I always joke, going back, I remember when the word Kubernetes was just kicked around in the OpenStack days, many, many years ago. It's the luxury of being an old CUBE guy, which I am, 11 years doing theCUBE, all fun. But if we remember talking in the early days, the thing with Kubernetes was, if it worked, the phrase was "a rising tide floats all boats." I would say right now the tide is rising pretty well, and you guys are in a good spot with Kubecost. Are there areas you see coming where cost monitoring is going to expand more? Where do you see Kubernetes going? What's the aperture, if you will, of the cost monitoring space on your end that you think you can address? >>Yeah, John, I think you're exactly right. This tide has risen and it just keeps rising, right? The sheer number of organizations we see using Kubernetes at massive scale is just mind-blowing at this point. What we see is this really natural pattern for teams to start using a solution like Kubecost: start with, again, either limited or no visibility, get that visibility in place, and then really develop an action plan from there. And that could again be different governance solutions like alerts, or management reports, or engineering team reports, et cetera. But it's really about phase two, taking that information and really starting to do something with it. We are seeing, and expect to see, more teams turn to an increasing amount of automation to do that. But ultimately that comes very much after you get this baseline, highly accurate visibility that you feel comfortable using to make potentially very critical decisions related to reliability and performance within your infrastructure. >>Yeah, I think getting it right is key. You mentioned baseline; let me ask you a quick follow-up on that. How fast can companies get there? When you say baseline, there are probably levels of baseline. Obviously all environments are different, not all are the same, but what do you see, just anecdotally, as that baseline? How fast do they get there? Is there a certain minimum viable configuration or architecture? Just take us through your thoughts on that. >>Yeah, great question. It definitely depends on organizational complexity, and it can depend on application complexity as well.
But I would say, most importantly, it's the array of cost centers, departments, and complexity across the org, as opposed to the technological complexity. So I would say for less complex organizations, we've seen it happen in hours, or a day or less, et cetera, because that's one or two or smaller engineering teams, and they can share that visibility really quickly. They may be familiar with Kubernetes and just get it right away. For larger organizations, we've seen it take up to 90 days, where it's really about infusing this into their DNA, when again, there may not have been visibility or transparency here before. Again, I think the bulk of the time there is really about the cultural element, awareness building, and buy-in throughout the organization. >>Awesome. Well, guys, you've got a great product. Congratulations. Final question for both of you: it's early days in Kubernetes, even though the tide is rising, keeps rising, more boats are coming in, the harbor is getting bigger, whatever metaphor you want to use. It's really going great. You guys are seeing customer adoption, we're seeing cloud native, I was told by my friends at Docker that the container side is going crazy as well. Everything's going great in cloud native. What's the vision on the innovation? How do you guys continue to push the envelope on value in open source and in the commercial area? What's the vision? >>Yeah, I think there are many areas here, and I know Alex will have more to add. But one area that I know is relevant to his world is just more really interesting integrations, right? He mentioned Kubecost insights powering decisions in, say, Spinnaker. I think more and more of this toolchain is really coming together, and we're really seeing the benefits of all this interoperability. So that, I think, combined with just more and more intelligence and automation being deployed, and again, that's only after teams are really comfortable with the information and the decisions being made. But I think increasingly we see the community being ready to leverage this information in really powerful ways, just because, as teams scale, there's a lot to manage. And so a team leveraging automation can supercharge themselves in really impactful ways. >>Awesome. Great integrations, Alex, expand on that. A whole different kind of set of business development integrations, when you have lots of toolchains, lots of platforms and tools coming together, sharing data, working together, automating together. >>Well, yeah. So I think it's going to be super important to keep a pulse on the new tools, make sure that we're on the forefront of what customers are using, and just continue to meet them where they are. And a lot of that, honestly, is working with AWS too, right? They have great services in EKS and managed Prometheus. So we want to make sure we continue to work with that team and support their services as they launch as well. >>Great stuff. I've got a couple of minutes left. I figured I'll throw one more question in there, since I've got two great experts here. Just a little change of pace, more of an industry question.
There's really no wrong answer, but I'd love to get your reaction to the SaaS conversation. Cloud has changed what used to be SaaS. SaaS was, oh yeah, software as a service. Now you have all these new kinds of things: automation, horizontally scalable cloud and edge, vertical machine learning, data-driven insights. A lot of things in the stack are changing. So the question is, what does the new SaaS look like? Is it the same as the old SaaS? Or is it a new kind of refactoring of what SaaS is? What's your take on this? >>Yeah. Webb, please jump in here wherever. But in my view, it's a spectrum, right? There are customers on both ends of this. Some customers just want a fully hosted, fully managed product, and they benefit from the luxury of not having to do any sort of infrastructure management or patching or anything like that. They just want to consume a great product. On the other hand, there are other customers in more highly regulated industries or with security requirements, and they're going to need things they can deploy in their environment. Right now Kubecost is self-hosted, but I think in the future we want to make sure we have versions of our product available for customers across that entire spectrum. So if somebody wants the benefit of just not having to manage anything, they can use a fully hosted, multitenant managed SaaS, or other customers can use a self-hosted product. And then there are going to be customers in the middle, right, where certain components are okay to be SaaS or hosted elsewhere, but other components are really important to keep in their own environment. So I think it's really across the board, and it's going to depend on the customer, but it's important to make sure we have options for all of them. >>Great, guys. So is the new SaaS the same as the old SaaS? What's the SaaS playbook now? >>I think it is such a deep and interesting question, and one that is going to touch so many aspects of software and of our lives. I predict that we'll continue to see this tension, or real trade-off, between convenience on the one hand, and security, privacy, and control on the other. And I think, like Alex mentioned, different organizations are going to make different decisions here based on their relative trade-offs. I think it's going to be of epic proportions. I think we'll look back on this period and say that this was one of the foundational questions of how to get this right. We ultimately view it as, again, we want to offer choice, and make every choice great, but let our users pick the right one given their profile on those trade-offs. >>I think it's a great comment: choice. And also you've now got dimensions of implementations, right? Multitenant, custom, regulated, secure. I want to have all these controls. It's great. No one SaaS rules the world, so to speak. So again, it's a great dynamic. But ultimately, if you want to leverage the data, is it horizontally addressable? Multitenant? And again, this is a whole other ball game we're watching closely, and you guys are in the middle of it with Kubecost, as you're creating that baseline for customers. Congratulations. Great to see you, Webb. Thanks for coming on.
Appreciate it. >>Thank you so much for having us again. >>Okay, great conversation here at the AWS Startup Showcase: Open Cloud Innovations. Open source is driving a lot of value as it goes commercial, going to the next generation. This is season two, episode one of the AWS Startup Showcase series with theCUBE. Thanks for watching.

Published Date : Jan 26 2022



Liran Tal, Snyk | CUBE Conversation


 

(upbeat music) >> Hello, everyone. Welcome to theCUBE's coverage of the "AWS Startup Showcase", season two, episode one. I'm Lisa Martin, and I'm excited to be joined by Snyk next in this episode. Liran Tal joins me, the director of developer advocacy. Liran, welcome to the program. >> Lisa, thank you for having me. This is so cool. >> Isn't it cool? (Liran chuckles) All the things that we can do remotely. So I had the opportunity to speak with your CEO, Peter McKay, just about a month or so ago at AWS re:Invent. So much growth and momentum going on with Snyk, it's incredible. But I wanted to talk to you specifically, let's start with your role from a developer advocate perspective, because Snyk is saying modern development is changing, so traditional AppSec gatekeeping doesn't apply anymore. Talk to me about your role as a developer advocate. >> It definitely is. The landscape is changing, both development and security; it's just not what it was before, and what we're seeing is that developers need to be empowered. They need some help just working through all of those security issues, security incidents happening, using open source, building cloud native applications. So my role is basically about making them successful, helping them any way we can: getting that security awareness out, making sure people have those best practices, making sure we understand what frustrations developers have and what we can help them with to be successful day to day, and how they can be a really good part of the organization in terms of fixing security issues, not just knowing about them, but actually being proactive about them. >> And one of the things I was reading is that Shift Left is not a new concept. We've been talking about it for a long time. But Snyk is saying it was missing some things, and proactivity is one of those things that was missing. What else was it missing, and how does Snyk help fix that gap? >> So I think Shift Left is a good idea. In general, the idea is we want to find and fix security issues as soon as we can, which I think is a small nuance of what's been missing in the industry. Usually what we've seen with traditional security before was that the security department sat as a silo in organizations: once they find some findings, they push them over to the development team, the R&D leader, or things like that, but until it actually trickles down, it takes a lot of time. What we needed to do is put those developer security tools, which is what Snyk is building with this whole security platform, into the hands of developers, at the scale and speed of modern development. So, for example, instead of just finding security issues in your open source dependencies, what we actually do at Snyk is not just tell you about them; we actually open a pull request in your source code version management system. And through that we are able to tell you, now you can actually merge it, you can actually review it, you can actually have it as part of your day-to-day workflows. And we're doing that in so many other ways that are really helpful in actually remediating the problem. Another example would be the IDE. We are actually embedding an extension within your IDEs.
So, once you actually type in your own code, that is when we find the vulnerabilities that could exist within your own code, if it's insecure code, and we can tell you about it as you hit Command + S and save the file. Which is totally different from what static application security testing (SAST) tools were before, because when things started, you usually had those tools running in the background, in CI jobs at the weekend and on deltas of code bases, because they were so slow to run. But developers really need to be at speed. They're developing really fast, they need to deploy; development is deployed to production several times a day. So we need to really enable developers to find and fix those security issues as fast as we can. >> Yeah, that speed you mentioned is absolutely critical to their workflow and what they're expecting. And one of the unique things about Snyk, you mentioned, is the integration into how this works within the development workflow, with the IDE, the CI/CD environment, enabling them to work at speed and not have to be security experts. I imagine those are two important elements of the culture of the developer environment, right? >> Correct, yes. A large part is we don't expect developers to be security experts. We want to help them; we want to, again, give them the tools, give them the knowledge. So we do it in several ways. For example, that IDE extension has a really cool thing, kind of unique to it, that I really like, and that is, when we find, for example, that you're writing code and maybe there's a path traversal vulnerability in the function you just wrote, what we'll actually do when we tell you about it is also tell you: hey, look, these are some other commits made by other open source projects where we found the same vulnerability, and those commits actually fixed it. So we're actually giving you example cases of what potentially good code looks like. Because if you think about it, who knows what path traversal is, or prototype pollution, or many other types of vulnerabilities? At the same time, we don't expect developers to actually know the deep aspects of security. So they're left with having some findings; they want to fix them, but they don't really have the expertise to do it. What we're doing is bridging that gap and being helpful. I think that is what really proactive security is for developers: helping them remediate it. And I can give more examples, like the security database. It's a wonderful place where we also provide examples and references of where the vulnerability comes from and what the fix is in an open source package. And we highlight that with a lot of references that provide you with the pull requests that fixed it, or the issue where this was discussed. You have an entire context of what made this vulnerability happen. So you have a little bit more context than just merging some stuff and updating. And there's a ton more, I'm happy to dive more into this.
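As a concrete illustration of the kind of finding Liran describes, here is a small, generic Python sketch of a path traversal bug and one common way to fix it. This is not Snyk's code or output, just an example of the vulnerable-versus-fixed pattern an IDE security plugin would typically flag and suggest.

```python
from pathlib import Path

BASE_DIR = Path("/var/app/uploads")


# Vulnerable: user input is joined directly into the filesystem path, so a
# request for "../../etc/passwd" escapes BASE_DIR (path traversal).
def read_file_vulnerable(filename: str) -> bytes:
    return (BASE_DIR / filename).read_bytes()


# Fixed: resolve the final path and verify it still lives under BASE_DIR
# before touching the filesystem.
def read_file_fixed(filename: str) -> bytes:
    candidate = (BASE_DIR / filename).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):  # Python 3.9+
        raise ValueError("path traversal attempt blocked")
    return candidate.read_bytes()
```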
>> Well, I can hear your enthusiasm for it; a developer advocate, it seems like you are. But talking about the burdens and the gaps that you guys are filling, it also seems like this is a bridge for the developers and the security folks, for those teams to work better together. >> Correct. I think it's not siloed anymore. I think the idea of having security champions or having threat modeling activities is really, really good and insightful for both developers and security, but more than just being insightful, these are useful practices that organizations should actually do, actually bringing a discussion together and creating a more cohesive environment for both of those kinds of expertise, development and security, to work together towards mitigating security issues. And one of the things Snyk is doing in bringing security into the developer mindset is also providing the ability to prioritize and understand what policies to put in place. A lot of the time, what the security org wants to do is put guardrails in place to make sure that developers have good leeway to work within, but aren't doing things they definitely shouldn't do, things that bring a big risk into today's organizations. And that's something I think we're also doing really well: we're enabling the security folks to put the policies in place, and then developers actually work really well within them. Understanding how to prioritize vulnerabilities is an important part of that. We quantify it: we put an urgency score on it that says, hey, you should fix this vulnerability first. Why? Because, first of all, you can upgrade really quickly, it has a fix right there. Secondly, there's an exploit in the wild, meaning an attacker can potentially weaponize this vulnerability and attack your organization in an automated fashion. So you definitely want to put a lid on that broken window, so to say. We have a number of other metrics that we can quantify and roll up into an urgency score, which we call a priority score, that again helps developers really know what to fix first, because they could get a scan with hundreds of vulnerabilities, but what do I start with first? So I find that very useful for both the security folks and the developers working together.
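To make the prioritization idea tangible, here is a small, generic sketch of ranking findings by combining the factors Liran mentions: severity, whether a fix is available, and whether an exploit is known in the wild. The weights and field names are illustrative assumptions, not Snyk's actual priority score formula.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    package: str
    severity: str        # "low" | "medium" | "high" | "critical"
    fix_available: bool  # can we simply upgrade?
    exploit_in_wild: bool


SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 6, "critical": 9}


def priority(f: Finding) -> int:
    """Toy urgency score: higher means fix sooner. Weights are illustrative."""
    score = SEVERITY_WEIGHT.get(f.severity, 0) * 10
    if f.exploit_in_wild:
        score += 40   # weaponizable issues jump the queue
    if f.fix_available:
        score += 20   # cheap to remediate, so do it early
    return score


findings = [
    Finding("lodash", "high", True, False),
    Finding("log4j-core", "critical", True, True),
    Finding("minimist", "medium", False, False),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):3d}  {f.package} ({f.severity})")
```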
>> Right, and especially now, as we've seen such changes to the threat landscape in the last couple of years, the vulnerabilities and security issues impacting every industry. The ability to empower developers to not only work at the speed they're accustomed to and need to work at, but also to find those vulnerabilities faster and prioritize which ones need to be fixed. I think of Log4Shell, for example, and the challenges going on with the supply chain. This is really a critical capability from a developer empowerment perspective, but also from an overall business health and growth perspective. >> Definitely. First of all, if you want to step just a step back in terms of what has changed, what is the landscape? I think we're seeing several things happening. First of all, there's this tremendous... I would call it a trend, but now it's the default: the growth of open source software. Developers are using more and more open source, and that's a growing trend, you can see the graphs of it, and it's always increasing across, by the way, every ecosystem: Go, Rust, .NET, Java, JavaScript, whatever you're building, it's probably on a growing trend of more open source. We'll talk in a second about what the risks are there, but that is one trend we're seeing. The other one is cloud native applications, which is also worth diving deep into, in terms of how the way we're building applications today has completely shifted. And I think what AWS is doing in that sense is also creating a tremendous shift in the mindset of things. For example, cloud infrastructure has basically democratized infrastructure. I do not need to own my servers and own my monitoring and configure everything. I can actually write code that, when I deploy it and something parses and runs it, actually creates servers, monitoring, logging, different kinds of things for me. So it has democratized the whole sense of building applications from what it was decades ago. And this whole thing is important and really, really fast; it makes things scalable. It also introduces some risks, for example around some of this configuration. So there's a lot that has changed in that landscape of what a modern developer is, and I think in that sense we need to be a little bit more helpful to developers and help them avoid all those cases. And I'm happy to dive more into the open source and the cloud native sides as follow-ups on this one.
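A brief sketch of the infrastructure-as-code point above: with something like the AWS CDK, a few lines of application-language code provision real cloud resources, which is exactly why scanning that code for insecure defaults has moved into the development loop. The construct and property names below follow the CDK v2 Python API as I understand it; treat them as assumptions and check the current CDK documentation before relying on them.

```python
# Sketch only: a CDK v2 app that provisions an S3 bucket with safer defaults
# (encryption on, public access blocked). Run with `cdk synth` / `cdk deploy`
# in a bootstrapped AWS account; names are illustrative.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "UploadsBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
StorageStack(app, "storage-dev")
app.synth()
```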
>> I want to get into a little bit more about your relationship with AWS. When I spoke with Peter McKay for re:Invent, he talked about the partnership being a couple of years old, but there are some really interesting things that AWS is doing in terms of leveraging Snyk. Talk to me about that. >> Indeed. So Snyk integrates with, I think, almost all of the services that are unique and related to developers building on top of the AWS platform. For example, if you are building your code, it connects with the source code editor. If you are pushing that code over, it integrates with CodeCommit. As your builds and CI runs happen, maybe CodeBuild is something you're using within CodePipeline; that has native integrations. At the end of the day, you have your container registry, or Lambda if you're using functions as a service for your applications, and what we're doing is integrating with all of that. So, depending on where you're integrating, at all of those points of integration you have Snyk there to help you out and make sure that if we find any potential issues on any of them, anything from licenses to vulnerabilities in your containers, or just your code, or your open source code, we actually find it at that point and mitigate the issue. So if you're using Snyk on your development machine, it accompanies you through this journey across the whole CI/CD landscape, the architectural landscape for development, all the way through. And I think what you might be more interested in putting an emphasis on would be this recent integration with Amazon Inspector, which is a very pivotal part of the AWS platform that integrates a lot of services and provides you with those insights on security. And the idea that it is now able to leverage vulnerability data from Snyk's security intelligence database, that's tremendous. And we can talk about that with Log4Shell and recent issues. >> Yeah, let's dig into that. We have a few minutes left, but that was obviously a huge issue in November of 2021, when obviously we were in a very dynamic global situation, period. But it's now not a matter of if an organization is going to be hit by vulnerabilities and security threats; it's a matter of when. Talk to me about how impactful Snyk was with the Log4Shell vulnerability and how you helped customers evade some probably serious threats that could have really impacted revenue growth, customer satisfaction, and brand reputation. >> Definitely. Log4Shell is, well, was a vulnerability that was disclosed, but it's probably still a major issue, and going to be for the foreseeable future, for organizations as they deal with it. We'll dive into why in a second, but as a summary, Log4Shell was the vulnerability found in a Java library called Log4J, a logging library that is so popular and widely used today. And the thing is, having the ability to react fast to new vulnerabilities being disclosed is really a vital part for organizations, because when something is as impactful as we've seen Log4Shell be, that is when you find out whether the security tool you're using is actually helping you, or is just an added checkbox to tick. And that is what I think makes Snyk so unique in this sense. We have a team of folks who are manually curating the ecosystem of CVEs and finding issues ourselves, but there's also an entire intelligence platform behind it. We get a lot of notifications on chatter that happens, so when someone opens an issue on an open source repository and says, hey, I found an issue here, maybe that's an XSS or code injection or something like that, we find it really fast. At that point, before it even goes through the CVE process with MITRE and NVD, we find it and can add it to the database. That is something we did with Log4Shell, where we found it as it was disclosed, not just within the open source ecosystem, but as it was generally disclosed to everyone at that point. But not only that, because Log4J, as a library, needed several iterations of fixes. They fixed one version, then that was the recommended upgrade, then that was actually found to be vulnerable as well, so they needed to fix it another time, and then another time, and so on. Being able to react fast is what I think helped a ton of customers and users of Snyk. And what I really liked, in the way this has been received very well, is that we were very fast in creating command line tools that allow developers to actually find cases of the Log4J library embedded into (indistinct) but not through a package manifest. Sometimes you have those legacy applications deployed somewhere, probably not even legacy, just the Log4J library bundled into an app or a Java code base, so you may not even know that you're using it, in a sense. And what we've done is expose, with the Snyk CLI tool, a command line argument that allows you to search for all of those cases. We can find them and help you try to mitigate those issues. So that has been amazing.
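For teams who want to reproduce the workflow Liran describes, here is a rough sketch of gating a build on the Snyk CLI from a script. It assumes the snyk CLI is installed and authenticated; `snyk test --json` is a documented invocation, but the JSON field names used below, and the dedicated Log4Shell scanning command Snyk shipped at the time, may have changed, so verify against current Snyk documentation.

```python
# Rough sketch: run `snyk test` on a project and fail the pipeline if Log4J-
# related vulnerabilities (or anything high/critical) are reported.
# Assumes: snyk CLI installed and authenticated; JSON field names may vary.
import json
import subprocess
import sys


def snyk_findings(project_dir: str) -> list:
    proc = subprocess.run(
        ["snyk", "test", "--json"],
        cwd=project_dir,
        capture_output=True,
        text=True,
    )
    # snyk exits non-zero when vulnerabilities are found; stdout is still JSON.
    report = json.loads(proc.stdout or "{}")
    return report.get("vulnerabilities", [])


def main(project_dir: str) -> int:
    bad = [
        v for v in snyk_findings(project_dir)
        if "log4j" in str(v.get("packageName", "")).lower()
        or v.get("severity") in ("high", "critical")
    ]
    for v in bad:
        print(f"{v.get('severity', '?'):8s} {v.get('packageName')} {v.get('id')}")
    return 1 if bad else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))
```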
>> So you've talked at great length, Liran, and in detail, about how Snyk is really enabling and empowering developers. One last question for you: when I spoke with Peter last month at re:Invent, he talked about the goal of reaching 28 million developers. Your passion as a director of developer advocacy is palpable; I can feel it through the screen here. Talk to me about where you guys are on that journey of reaching those 28 million developers, and what personally excites you about what you're doing here. >> Oh, yeah. So many things. (laughs) I don't know where to start. We are constantly talking to developers on community days and things like that. So, a couple of examples. We have this developer security community, which is a growing and kicking community of developers and security people coming together to work, understand, and just learn from each other, and we have those events coming up. We actually have "The Big Fix", a big security event we're launching on February 25th, and the idea is we want to help the ecosystem secure their applications, open source or even closed source; we'll help you fix that. We've launched the Snyk Ambassadors program, which is developers and security people, CISOs are even in there, and the idea is: how can we help them also be helpful to the community? Because they are as passionate as we are about application security and helping developers code securely, build securely. So we're launching all of those programs. We have social impact related programs in the way we work with organizations, maybe non-profits that just need help getting the security part of things figured out, students, and things like that. There's a ton of those initiatives all over the board, helping the world basically be a little bit more secure. >> Well, we could absolutely use Snyk's help in making the world more secure. Liran, it's been great talking to you. Like I said, your passion for what you do and what Snyk is able to facilitate and enable is palpable, and it was a great conversation. I appreciate that. And we look forward to hearing what transpires during 2022 for Snyk, so you've got to come back. >> I will. Thank you. Thank you, Lisa. This has been fun. >> All right. Excellent. Liran Tal, I'm Lisa Martin. You're watching theCUBE's second season, season two of the "AWS Startup Showcase". This has been episode one. Stay tuned for more great episodes, full of fantastic content. We'll see you soon. (upbeat music)

Published Date : Jan 17 2022



John Grosshans, Palo Alto Networks & Sabina Joseph, AWS | AWS re:Invent 2021


 

>>Hello and welcome back to theCUBE, in person at an event, AWS re:Invent 2021. We're here live with two sets, and also virtual; you can watch theCUBE on the site, virtual sets, it's a hybrid event. I'm John Furrier, your host of theCUBE. We're here for three days, wall-to-wall coverage, kicking off day one, all about software, ISVs, and also the value of the cloud. We've got two great guests: John Grosshans, senior vice president and chief revenue officer of Prisma Cloud at Palo Alto Networks. Welcome to theCUBE. >>Thank you for having me, excited to be here. >>And Sabina Joseph, general manager of technology partners at AWS. Thanks for coming on again, good to see you. So obviously the story here at re:Invent is Adam Selipsky, the new CEO, taking over from Andy Jassy; tomorrow's a big keynote. We're expecting to hear that the cloud is kind of going next gen. The next gen cloud is here. It's about applications, modern applications, and true infrastructure as code, security as code, data as code. Essentially, applications are now the number one priority. This is a big thing, this is part of the movement of the cloud. So I've got to get your guys' perspectives. Where are we in that movement? What are customers doing as they migrate to the cloud? It's not just lift and shift. They're like, okay, I've got to rearchitect my business. Big things are happening. What do you guys see? >>Well, I think there are a couple of big drivers at the highest level, right? Some customers are thinking about migrating their IT estate to the cloud. They want to take cost out, they want to drive agility, they want to drive a better user experience. And you have other customers that want to innovate, right? They want to leverage the cloud for innovation and increase their speed of execution. And as they look at that opportunity, they're having to rethink DevOps, which is making them also think more about DevSecOps and how they're going to accelerate that cloud application lifecycle so they can take advantage of microservices. In addition to that, as we look back on the last two years, as we were talking about before we came on the air, in this unfortunate pandemic era, as we'll maybe refer to it, many customers have been thinking about their supply chains. What am I going to do with my supply chain? How do I really take problems out of that supply chain so I can continue to serve my customers and my markets? And it's also made them think about different ways to approach their customers: how do they reach their customers, and then how do they fulfill, bill, and continue to nurture those customer relationships? So I think those are the big drivers. >>And the security aspect is so huge. You guys are Palo Alto Networks, so give us a perspective and reaction to that. As people digitize their business, you've got to get security built in from day one. This is the number one thing we talk about on theCUBE: baking it in from day one, whether they say shifting left, whatever. It's your business, you're now digital. Yeah. >>What we think we bring to CEOs and CISOs and to boards is really three different ways to get started with cloud native security. With Prisma Cloud, you can start in the simplest of terms with posture management: I just want to inventory my assets, know what I have out there, and make sure those are secure. I want to be compliant.
I want to deliver on compliance and governance for my board, my leadership team. Others are thinking about workload protection: Kubernetes, serverless, containers. What am I going to do with those critical workloads that I'm now moving to the cloud? And then, to your point, the big push area is shifting security left. I've got to build security in right from the start of that application development lifecycle, change the way I think about CI/CD, and deliver those applications securely in the cloud. Fast time to market on applications is critical for customers, and they've got to think about building security in so they don't have to rework those apps and build security in later. >>So let's talk about what you guys have been doing with customers during the pandemic and how they're going to come out of it with a growth strategy. We had some great talks on our CUBE program around how the software development lifecycle is changing and how modern applications are being built. And of course, Amazon, you guys enable people to make money on top of Amazon, because you make money too. But how are you guys helping customers? What's the big thing that's come out of the pandemic? >>Yeah, so, well, the pandemic has been unfortunate for all of humanity, but through this we have really seen customers accelerating their journey into AWS, and security is top of mind for them. As customers continue to digitize their software, they are really looking for solutions from Palo Alto Networks on AWS. And what they're looking for is something very simple and cost-effective, which Palo Alto has provided because of our long-term partnership. And as John mentioned, due to the pandemic and many other factors around it, there have been many constraints placed on the supply chain, but the economies of scale with AWS have really helped partners and customers address many of these constraints. So we have seen a tremendous movement into AWS over the last 20 months.
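To ground the shift-left point made at the top of this exchange, here is a small, generic sketch of the kind of guardrail a team might run in CI before a deploy: a script that rejects Kubernetes manifests with obviously risky settings. It is a toy policy check for illustration only, not Prisma Cloud's engine or rule set.

```python
# Toy shift-left guardrail: fail CI if a Kubernetes manifest asks for
# privileged containers or skips runAsNonRoot. Illustrative only; real
# policy engines cover far more than these two checks.
import sys

import yaml  # pip install pyyaml


def container_specs(doc: dict):
    spec = doc.get("spec", {})
    # Deployments nest pod specs under spec.template.spec; bare Pods do not.
    template = spec.get("template", {}).get("spec", spec)
    for c in template.get("containers", []) or []:
        yield c


def violations(path: str) -> list:
    problems = []
    with open(path) as fh:
        for doc in yaml.safe_load_all(fh):
            if not isinstance(doc, dict):
                continue
            name = doc.get("metadata", {}).get("name", "<unnamed>")
            for c in container_specs(doc):
                sc = c.get("securityContext", {}) or {}
                if sc.get("privileged"):
                    problems.append(f"{path}:{name}: privileged container {c.get('name')}")
                if not sc.get("runAsNonRoot"):
                    problems.append(f"{path}:{name}: {c.get('name')} missing runAsNonRoot")
    return problems


if __name__ == "__main__":
    found = [p for f in sys.argv[1:] for p in violations(f)]
    print("\n".join(found) or "no violations")
    sys.exit(1 if found else 0)
```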
>>Follow up on that, if you don't mind. You guys have been very successful at Palo Alto Networks. As your customer base gets more sophisticated and smarter around cloud, you've got to add more value and be responsive. What is the big trend you see in your customer base with cloud? Are they keeping stuff on premises for certain things, obviously for security reasons? But also, data's got to open up, so now you have a bigger data aperture. >>Absolutely, absolutely. And what's happening is what should happen, which is customers are asking us to do more and innovate faster. So we're really excited about our recent launch of Prisma Cloud 3.0, where we really expanded the platform. We're now bringing in an adoption advisor, which is going to simplify the experience for our mutual customers so that they can more readily adopt CSPM and CWPP and extend their utilization of the platform. At the same time, we've made a number of announcements about adding more value into our infrastructure-as-code approach, shifting security left, so we're very excited about that. And so I think what we're finding is that we need to listen to customers and quickly build and deliver innovation in the cloud, as they're all trying, to your point, new use cases and stretching their needs for cloud security. >>I've got to say, one of my observations of the past two and a half years, even coming into the pandemic, was security clearly being baked in from the beginning, but the pandemic really exposed those who were ready for it. Yeah. And that's a big point. And now it's like DevSecOps, no one argues about it anymore, right? It is what it is. That's a huge difference from just five years ago. >>Absolutely true, absolutely true. And now, as you're seeing, partnering with AWS, customers are actually delivering their end product in the cloud, and the most critical relationship is their customer's customer. They've got to make sure that it absolutely is a secure user experience, because now we're talking about customer identity, payment information, and critical customer relationship management, all in the cloud, and it has to be secured end to end. So very exciting opportunities. >>I mean, you're under a lot of pressure now. You have a lot of these big partners doing big business, and they have big customers; I know they do, Palo Alto has a lot of great customers. How do you support them? What are you guys doing to continue to nurture and support your customers? >>Yeah, customers is the key word there, John. So we provide value to Palo Alto and other partners in a number of different ways, but one approach that we take is called a Well-Architected Review. It's a process which looks at software solutions through the pillars of security, reliability, performance, cost optimization, and operational excellence. And the reason for that is we want to make sure that the foundation for customers is laid in the best way possible, because once you have that foundation laid, you can really build and scale your business. So that is one of the ways we continue to provide value, and with Palo Alto we've taken the Well-Architected Review through all of their solutions, both the ones existing and the ones in the future. >>I've got to say, I've noticed you guys have been using the word primitives a lot; now it's foundational services. Because what we're talking about here is foundation.
And a lot of the trends we're seeing from your customers, both of you, is that they want to refactor their business value in the cloud. The modern application trend isn't just apps; it's about business model innovation in the software itself. So it's asking the infrastructure to be code, asking it to be programmable, security with automation, all the AI. This is a trend. Do you guys agree with that? >>Yeah, I absolutely do. And I think what you're seeing now from the customer's point of view is that they need to build security into that application life cycle mental model. They have to have an end-to-end vision of how they're going to deliver those applications at speed and do it utilizing cloud native architecture, so that they can have microservices that deliver value and are more flexible. And that's part of the power, I think, of AWS and Palo Alto Networks: with Prisma Cloud we're enabling customers to innovate at speed, shift left with security, build security into those apps, take rework out, and deliver applications faster, which obviously drives more value to them. >>Yeah. I'd love to get your thoughts on something, John, if you don't mind, while you're here. We were talking before re:Invent about major inflection points, and at every major inflection point in the history of the tech industry, whenever there's a change in how people develop applications, speed and performance were super important, critical. How do you guys see that? Because you guys are on the front lines with security, and performance matters now, whether it's in the cloud or in transit. What's your take? >>Absolutely, absolutely. It's been really interesting in customer conversations, even some of the customer conversations I've had today: every customer now starts a conversation with some element of cloud security, whether that's posture management, workload protection, identity, or data, but they all come back now to shifting left with security. It's part of every single conversation. It used to be primarily leaning into posture management; now it's, oh, by the way, we've absolutely got to dive into how we're going to shift left and build security in. And so that speed of development, I think, is going to be a key competitive differentiator for customers. They're going to have to become experts at delivering on that entire application pipeline. >>And your reaction to that, speeds and feeds? >>Well, I believe it's really important, and we're trying to do everything we can to help partners like Palo Alto Networks with our processes and, most importantly, with scaling the business, which I'm sure we'll talk about shortly, how we work together to really get those 800 customers. >>Talk about that, because you have the advanced technology partnership program. Talk about what you guys do there. >>Yeah. So first of all, I want to thank John and the entire Palo Alto team for building such an excellent partnership across build, co-sell, and co-market. As an advanced technology partner, Palo Alto is part of four different competencies: security, containers, DevOps, and networking. And the reason why these competencies are so crucial is because you're able to list your validated solutions with public customer references by use case in each of these competencies, which, I think John would agree, enables them to do focused demand generation activities through dev days, blog posts, webinars, and account mapping, which of course generates those opportunities together. And Palo Alto is also part of our ISV Accelerate program.
So our sales team is incentivized to work with Palo Alto and help them close opportunities. And then you are also on AWS Marketplace, which enables you to do free trials and enables you to really scale across the globe. And then we are also helping Palo Alto across the globe with resources, including public sector, to help them scale their business. >>The whole co-selling thing is interesting. As the chief revenue officer, it's like, oh yeah, I love that. This is a big deal. Talk about that further. I know the Marketplace is where people are buying, but it's a joint sale; Amazon salespeople sell for you, right? >>Co-sell, we call it co-sell, whereby we can share opportunities with each other. And when we do share those opportunities, the sales teams engage together to understand, hey, what's going on at the customer? What are the pain points? What are the use cases and the value proposition? And then they go in together to the customer to win the deal, and then continue that relationship beyond it to grow net new revenue. >>Not too shabby, is it? Oh yeah, get more feet on the street, so to speak, and virtual. >>There you go. It works on both dimensions, and to all the points you made, we have some terrific mechanisms we use together, like immersion days and dev days, where we're able to work with customers and deliver well-architected visions for our customers together. And when we're both designed in, it's obviously a great win for the customer and it enables us to scale. >>I think it's worth noting that not everyone gets these services; you have to be at a certain level to get the joint selling. >>That is correct. That's an advanced technology partner, and also part of ISV Accelerate, which is our very focused co-sell program. Awesome. >>Well, thanks so much for coming on the cube. Really appreciate it. Congratulations on a great partnership, two great brands. Congratulations. Final minute: just what's your expectation as we come out of this pandemic? What do you see customers doing? What's the one thing that all customers are preparing for coming out of the pandemic? What do you guys see? >>Well, I think now customers are preparing for acceleration in all of their routes to market. Right now they're having to anticipate their return to some of the normal routes to market that, for some time now, they have been trying to reinvent around, trying to drive a primarily digital go-to-market. Now I think we're going to see growth on every dimension with our customers, because they're going to need to return to some kind of normal with their supply chains, delivering through brick and mortar and their traditional delivery models, on top of driving the hyper growth that they're already enjoying through their digital go-to-market. >>That's great insight. So your thoughts on companies coming out of the pandemic looking for a growth strategy? >>Well, I think they're going to prepare to address a pandemic, or some calamity of some kind, in the future. But I do think, and this is what I'm observing personally, especially in segments that have been slower to adopt because they wanted evidence, the pandemic has really accelerated adoption; whether that's vaccine research or treatment research, it has really accelerated that. So I agree with John. >>We're going to see it all across the board. I mean, one thing I'd say, just to support those two awesome insights, is that the pandemic exposed what works and what doesn't work. Right.
You can't hide the ball anymore. If software is being used, it's successful; if not, you're aware of it, right? You can't hide the ball in the cloud. If it's not working, you know it right away. Yeah. Thanks so much for coming on the cube, really appreciate it. Thank you very much. Okay, cube coverage here at re:Invent, live, 2021. I'm John Furrier, your host of the cube. Stay with us for wall-to-wall coverage for the next four days here on the cube.

Published Date : Nov 30 2021


Vince Hwang | KubeCon + CloudNativeCon NA 2021


 

>>Good morning from Los Angeles, Lisa Martin here at KubeCon + CloudNativeCon North America 2021. This is the cube's third day of wall-to-wall coverage. So great to be back at an event in person. I'm excited to be joined by Vince Hwang, senior director of products at Fortinet. We're going to talk security and Kubernetes. Vince, welcome to the program. >>Thank you for having me. >>So I always love talking to Fortinet; cybersecurity is something that is such a personal interest of mine. Fortinet talks about the importance of integrating security and compliance into the DevSecOps workflow across the container life cycle. Why is this important, and how do you help companies achieve it? >>Well, as companies are making digital innovations, they're trying to move faster, and to move faster many companies are shifting toward a cloud native approach: rapid integration, rapid development, and rapid deployment. But with speed, there's a benefit to that, and there's also a downside, where you can lose track of issues and you can introduce human error into the problem. So as part of the means to deliver fast while maintaining a secure approach, for both the company and organization delivering it and their end customers, it's important to integrate security throughout the entire life cycle: from the moment you start planning and development, and the people and process involved, to when you're developing it and then deploying and running it in production. The entire process needs to be secured, monitored, and vetted regularly with good quality processes, deep visibility, and an integrated approach to the problem. And I think the other thing to consider is that in this day and age, with the current situation with COVID, there's a lot of development and deployment in terms of what I call accidental multi-cloud, where you're deploying applications in random places, in places that are unplanned, because you need speed. And that diversity of infrastructure, that diversity of clouds and development and things to consider, produces a lot of opportunities for security, and challenges come about. >>And we've seen so much change from a security perspective in the threat landscape over the last 18 months, so it's absolutely critical that the integration happens shifting left. Now let's switch topics: application teams are adopting CI/CD workflows. Why does security need to be at the center of that adoption? >>Well, it goes back to my earlier point: when you're moving fast, your organizations are building, deploying, running continuously, monitoring, and then improving, right? The idea is that you're creating smaller, incremental changes, throwing them to the cloud, running them, adjusting them. So you're rapidly integrating and you're rapidly developing and delivering. And again, it comes down to that rapid nature: things can happen. There are more points of touch and more points of interaction, and when you're moving that fast, it's really easy to miss things along the way.
So if you have security as a core, fundamental element of that DNA as you're building, in parallel with everything you're doing, you make sure that when you do deliver something, it is the most secure application possible, and you're not exposing your customers or your organization to unforeseen risks that just sit there. >>And I think part of that is, if you think about cloud infrastructure, misconfiguration is still the number one, biggest problem with security in the cloud space. There are threats and vulnerabilities, those we all know, and there are means to control those. But the configurations, how you're storing the data, the registries, all these different considerations that go into a cloud environment: those are the things that organizations need visibility on, and the ability to adapt their processes, to be proactive on those things and know what they're doing. They just need to know where they're operating in order to make these informed decisions. >>That visibility is key. When you're talking with customers in any industry, what are the top three, let's say, recommendations for how they can reduce their exposure to security vulnerabilities in the CI/CD pipeline? What are some of the things that you recommend to reduce the risk? >>There are a couple. Obviously security as a fundamental practice, we've been talking about that, so that's key number one. The second thing I would say is, when you're adopting solutions, you need to consider the fact that there is a very heterogeneous environment in today's ecosystem: lots of different clouds, lots of different tools. So integration is key, the ability to have choices of deployment in terms of where you want to deploy. You don't want to deploy based on technology limitations; you want to deploy and operate your business to meet your business needs, and having the right integrations and tooling to have that flexibility and optionality is key. And I think the third thing is, once you have security and those choices, you can end up with a lot of process overhead and operational overhead, and you need a platform, a singular cybersecurity platform, to bring it all in, one that can work across multiple technologies and environments and still be able to maintain the visibility and consolidate policies consistently across all clouds. >>So, for the DevOps folks, what are some of the key considerations that they need to take into account to ensure that their container strategy isn't compromising security? >>Well, I think it comes down to thinking outside of just DevOps, right? We talk about CI/CD; you have to think beyond just the build process, beyond just where things live. You have to think continuous life cycles, and use a cybersecurity platform that brings it together, such as the Fortinet Security Fabric, which does that by tying together a lot of different integrated solutions. We work well within our own core, but we also have the ability to integrate well into various environments and provide consistent policies. And I think that's the other thing: it's not just about integration, it's about creating that consistency across clouds.
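Hwang's point about registries and configuration being where visibility breaks down can be sketched with a small, hypothetical admission-style check. This is not Fortinet's product or API; the approved registry names are made up, and the pod spec is a plain dictionary used only to show the shape of the idea: verify that every container image comes from an approved registry and is pinned to a specific tag before it is allowed to run.

```python
# Hypothetical admission-style check over a pod spec (as a dict).
APPROVED_REGISTRIES = {"registry.internal.example.com", "public.ecr.aws"}

def image_violations(pod_spec):
    problems = []
    for container in pod_spec.get("spec", {}).get("containers", []):
        image = container["image"]
        registry = image.split("/")[0] if "/" in image else "docker.io"
        name_and_tag = image.rsplit(":", 1)
        tag = name_and_tag[1] if len(name_and_tag) == 2 else "latest"
        if registry not in APPROVED_REGISTRIES:
            problems.append(f"{image}: registry '{registry}' is not approved")
        if tag == "latest":
            problems.append(f"{image}: images must be pinned, ':latest' not allowed")
    return problems

if __name__ == "__main__":
    pod = {
        "spec": {
            "containers": [
                {"name": "app", "image": "registry.internal.example.com/team/app:1.4.2"},
                {"name": "sidecar", "image": "docker.io/library/nginx:latest"},
            ]
        }
    }
    for problem in image_violations(pod):
        print("DENY:", problem)
```

The same rules applied in the registry, in CI, and at deploy time is one concrete way a single policy gives the consistency across clouds described above.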
And the reality also is that for today's DevOps, many organizations are in transition. As much as we all think about and want to get to that cloud native point in time, the reality is there's a lot of legacy. >>And so DevOps setups, DevSecOps, all these different operational functions need to consider the fact that everything is in transition. There are legacy applications, and there is new cloud native, cloud-first application delivery using containers and various technologies. And there needs to be, again, that singular tool, the ability to tie this all together as a single pane of glass, to be able to navigate and merge between legacy deployments and applications and the new way of doing things, the future of doing things with cloud native. And it comes down again to something like the Fortinet Security Fabric, where we're tying things together, having solutions that can deploy on any cloud, securing any application on any cloud, while bringing together that consistency, that visibility, and single-point management, to lower that operational overhead and introduce security as part of the entire life cycle. >>Do you have an example, Vince, of a customer that Fortinet has worked with that has done this, that you think really shows the value of what you're able to enable them to achieve? >>We do, we do. We have lots of customers, though I can't name any one specific customer, for various reasons; it's security, after all. But the most common use case, when we talk to a CIO, CISO, or CTO, the one thing they ask us is: how do we manage, in this day and age, making these cloud migrations? I think the biggest challenge is that everyone is at a different point in time in their cloud journey. If you talk to a handful of customers, or a roomful of customers, you're not going to find one single organization that's at the same point in time as another organization in terms of how they're going about their cloud strategies, where they're deploying, and what stage of evolution they're at in their organizational transformations. >>And so what they're looking for is the ability to deploy and secure any application on any cloud throughout their entire application life cycle. The most common things our customers are looking to do: they're looking to secure things on the network and then interconnect to the cloud to deliver that superior application experience, so they're deploying something like the Security Fabric; again, Fortinet has a cybersecurity approach to that point, securing the native environments. They're looking at DevOps, deploying tooling to provide security posture management and cloud posture management, to look at the things they're doing: the registries, their environment, the dev environment. And then they're securing their cloud networks, like what we do with our FortiGate solutions, where we're deploying things from the DevOps side.
They secure the cloud environment with our FortiGate deployments across the multitude of cloud providers, like AWS, Azure, and Google Cloud, tie that together with secure interconnections over SD-WAN, and then tie that into delivery and production on the web application side. So it's very much a continuous life cycle, and we're looking at various things. And again, because customers are at different places in their cloud journeys, the number one key is the ability to have that flexibility of deployment, to integrate well into existing infrastructure and build a roadmap out for cloud as they evolve. Because when you talk to customers today, they don't know where they're going to be tomorrow. They know they need to get there; they're just not sure how they're going to get there. So what they're doing now is getting to cloud as quickly as they can, and then they're looking for flexibility to adjust, and they need a partner like Fortinet to bring that partnership and advisorship to those organizations as they make their strategies clearer and adjust to new business demands. >>Yeah, that partnership is key there. So Fortinet advocates the importance of taking a platform approach to the application life cycle. Talk to me about what that means, and then give me the top three considerations that customers need to be thinking about for this approach. >>Sure. Number one is how flexible that deployment is: do customers have the option to secure and deploy any application on any cloud, do they have the flexibility of integrating security into their existing tooling and then changing that out as they need, and do they have a partner and a solution that grows with that? I think that's number one. Number two is how well those integrations and flexible options are tied together, like what we do with the Security Fabric, where everything starts with the idea of a central management console and consistent policies and security from the get-go. And I think the third is making sure that the security integrations, the security intelligence, are done in real time, with a quality source of information and points of responsiveness, which is what we do with FortiGuard Labs. >>For example, we have a very large machine learning infrastructure, supported by all the various customer inputs and threat intelligence organizations, so there's real-time intelligence and protection as part of that deployment life cycle. Again, this really brings it all together: organizations looking for application security and trying to develop in a CI/CD fashion have the ability to have security from the get-go, tied into their existing tooling for flexibility and visibility, and then benefit from security all along the way with real-time, leading-edge security that brings that sense of confidence and reassurance as they're developing. They don't need to worry about security; security should just be part of it. They just need to worry about solving the customer problems and delivering business outcomes and results. >>That's it, right?
It's all about those business outcomes, and delivering that confidence is key. Vince, thank you for joining me on the program today, talking through what Fortinet is doing and how you're helping customers integrate security and compliance into the DevSecOps workflow. We appreciate your insights. >>Thank you so much for your time. I really appreciate it. >>My pleasure. For Vince Hwang, I'm Lisa Martin. You're watching the cube live from Los Angeles at KubeCon + CloudNativeCon 2021. Stick around; Dave Nicholson will join me next with my next guest.

Published Date : Oct 22 2021


Kingdon Barrett, Weaveworks | KubeCon + CloudNativeCon NA 2021


 

>>Good morning, welcome to the cube's coverage of KubeCon + CloudNativeCon 2021, live from Los Angeles. Lisa Martin here with Dave Nicholson. David, it's great to be in person with other humans at this conference, finally. I can't believe >>You're arm's length away. It's unreal. >>I know, and they checked vax cards, so everybody here is nice and safe. We're excited to welcome Kingdon Barrett to the program, Flux maintainer and open source support engineer at Weaveworks. Kingdon, welcome to the program. >>Oh, thank you for having me on today. >>So let's talk about Flux. This is a CNCF incubating project, and I've seen it's catalyzing adoption. Talk to us about Flux and its evolution. >>So Flux just got to its second version a while ago. We're an incubating project and we're going towards graduation at this point. Flux has seen a great deal of adoption from cloud infrastructure vendors in particular, like Microsoft and Amazon and VMware, all building products on the latest version of Flux, and we've heard from companies like Alibaba and State Farm. We had a co-located event earlier on Tuesday called GitOpsCon, where we presented all about GitOps, which is the guiding set of principles that underlies Flux. And there are new adopters every day, including the Department of Defense, which has a hundred thousand developers. It's a very successful project at this point. >>Who are the key users of Flux? >>The key users of Flux are probably application developers, infrastructure engineers, and platform support folks. So a pretty broad spectrum of people. >>And you've got some news at the event. >>Yeah, we have an ecosystem event that's coming up on October 20th; it's a free virtual event. Folks can join us to hear from these companies. We have people at a high level, CTOs and GMs, from companies like Microsoft, Amazon, VMware, Weaveworks, and D2iQ that are going to be speaking about their products, products you can buy from your cloud vendor that are based on Flux. So that's a milestone for us, a major milestone: these are large vendors, major cloud vendors, that have decided that they trust Flux with their customers' workloads, and it's the way that they want to push GitOps. >>Great validation. Yeah. >>So give us an example, just digging in a little bit on Flux and GitOps. What are some of the things that Flux either enforces or enables or validates? How would you describe the Flux-GitOps relationship? >>So the first of the GitOps principles is declarative infrastructure, and that's something that people who are using Kubernetes are already very familiar with. Flux based itself on, or I guess spawned, maybe, is a better way to say it, this whole GitOps working group that's just defined the principles. There are four of them in the formal definition that's just been promoted to a 1.0, and the GitOps working group published this at opengitops.dev, where you can read all four. It's great copy if you're not really familiar with GitOps; you can read all four there. But the second one I would have mentioned is versioned storage: it's called GitOps, and Git is a version store.
So it's good for disaster recovery. >>And if you have an issue with a new release, and if you're pushing changes frequently it's more than likely you will have issues from time to time, you can roll back with GitOps, because everything is versioned. And you can do those releases rapidly, because the deployment is automated and it's continuously reconciling. So those are the four principles of GitOps. They're not exactly prescriptive; you don't have to adopt them all at once, and you can pick and choose where you want to get started. But that's what's underneath Flux. >>How do you help customers pick and choose? What are some of the key criteria that you would advise them on? >>We would advise them to try to follow all of those principles, because that's what you get out of the box with Flux: a solution that does those things. But if there is one of those things that gets in the way, there's also the concept of a closed loop, which is sometimes debated as to whether it should be part of the GitOps principles or not. That just means that when you use GitOps, the only changes that go to your infrastructure are coming through GitOps, so you don't have someone coming in and using the back door. It all goes through Git: when you want to make a change to your cluster or your application, you push it to Git, the automation takes over from there, and that makes developers' and platform engineers' jobs a lot easier. And it makes it easier for them to collaborate with each other. >>Of course, productivity. You mentioned AWS, Microsoft, and VMware all working with you to deliver GitOps to enterprise customers. Talk to me about some of the benefits for these big guys. That's great validation, but what's in it for AWS and VMware and Microsoft, for example, business-outcome-wise? >>Well, one of the things we've been promoting since June is that there's an API underneath Flux that's called the GitOps Toolkit. This is for if you're building a platform for platforms, like these cloud vendors are. We announced that Flux's APIs are officially stable, so that means it's safe for them to build on top of, and they can go ahead and build things and not worry that we're going to pull the rug out from under them. So that's one of the major vendor benefits. And we've also added a recent improvement called server-side apply that will improve performance. We reduced the number of API calls, but also, for users, it makes things a lot easier because they don't have to write explicit health checks on everything. It's possible for them to say, we'd like to see that everything is healthy, and it's a one-line addition; that's it. >>So there's been a lot of discussion, from a lot of different angles, on the subject of security in this space. How does this dovetail with that? There's a lot of discussion specifically about software supply chain security; now, this is more in the operations space. How do those come together? Do you have any thoughts on security? >>Well, Flux is built for security first. There are a lot of products out there that will shell out to other tools, and that's a potential vulnerability, and Flux does not do that.
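The principles Barrett outlines above (declarative desired state, versioned in Git, applied automatically, continuously reconciled) can be pictured with a toy loop. This is not Flux's implementation or API; the functions that read desired state from a Git checkout and query the cluster are hypothetical stand-ins, and the sketch only shows the shape of a reconciler: diff desired against actual, apply the difference, prune what is not declared, and repeat.

```python
import time

def desired_state_from_git(checkout_dir):
    # Hypothetical: parse manifests from a local Git checkout into
    # {resource_name: spec} form. A real tool would pull and parse YAML.
    return {
        "deployment/web": {"image": "example/web:1.2.0", "replicas": 3},
        "service/web": {"port": 80},
    }

def actual_state_from_cluster():
    # Hypothetical: query the cluster API for the same resources.
    return {
        "deployment/web": {"image": "example/web:1.1.9", "replicas": 3},
    }

def reconcile_once(checkout_dir):
    desired = desired_state_from_git(checkout_dir)
    actual = actual_state_from_cluster()
    for name, spec in desired.items():
        if actual.get(name) != spec:
            # In a real controller this would be an apply/patch call.
            print(f"applying {name}: {actual.get(name)} -> {spec}")
    for name in actual.keys() - desired.keys():
        # Anything in the cluster but absent from Git gets pruned, which is
        # what keeps "back door" changes from persisting.
        print(f"pruning {name}: not declared in Git")

if __name__ == "__main__":
    while True:
        reconcile_once("/tmp/clone-of-config-repo")
        time.sleep(60)  # reconcile continuously, on an interval
```

Rollback falls out of the same loop: reverting the Git commit changes the desired state, and the next reconciliation converges the cluster back to it.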
We've recently undergone a security audit, and we're waiting for the results and the report, but this is part of our progress towards CNCF graduated status. We've liked what we've seen in the preliminary results. We prepared for the security audit knowing that it was coming, and Flux is designed for security first. You're able to verify that the commits you're applying to your cluster are signed and actually come from a valid author who is permitted to make changes to the cluster, and GitOps itself is this model of operations by pull request, so you have an opportunity to make sure that your changes are appropriately reviewed before they get applied. >>Got it. So you had a session at KubeCon this week. Talk to me a little bit about that. What were the top three takeaways, and maybe even share with us some of the feedback that you got from the audience? >>So the session was about Jenkins and GitOps, or Jenkins and Flux. The main idea is that when you use Flux, Flux is a tool for delivery. So you've maybe heard of CI/CD; CI and CD are separate in Flux, and we consider these two separate jobs that should not cross over. Jenkins is a very popular CI solution, and the message is that if you've made a large infrastructure investment in a CI solution, you don't have to abandon your Jenkins or your GitHub Actions or whatever other CI solution you're using to build and test images. You can take it with you and adopt GitOps. >>So there's compatibility there, and usability and familiarity for the audience, the users. What was some of the feedback that they provided to you? Were they surprised by that? Happy about that? >>Well, the talk was a little bit fast paced; we'll put it in the advanced CI/CD track. I covered a lot of ground in that talk, and I hope to go back and cover things in a little bit smaller steps. I tried to show as many of the features of Flux as I could, and so one piece of feedback I got was that it was actually a little bit difficult to follow, as I'm a new presenter. This is my first year at Weaveworks, and I've never presented at KubeCon before. I'm really glad I got the opportunity to be here. This is a great opportunity to collaborate with other open source teams, and that's the takeaway for me. >>So you've got to give a shout-out to Weaveworks. Absolutely. Any organization that realizes the benefit of having its folks participating in the community, realizing that it helps the community, it helps you, and it helps them, that's what we love about all of this. >>Yeah, we're really excited to grow adoption for Kubernetes and GitOps together. >>So I've asked a few people this over the last couple of days: where do you think we are in the peak Kubernetes curve? Are we still just at the very beginning stages of this as a movement?
>>Certainly, for people who are here at KubeCon, I think we see that a lot of companies are very successful with Kubernetes. But I come from a university IT background, and I haven't seen a lot of adoption in large, more conservative enterprises, at least in my personal experience. And I think there is a lot for those places to gain through adopting Kubernetes and GitOps together. I think GitOps will provide them with the opportunity to experience Kubernetes in the best way possible. >>We've seen such acceleration in the last 18, 19 months of digital transformation, for companies to survive, to pivot during COVID to survive and now to thrive. Do you see that influencing the adoption of Kubernetes, and maybe different industries getting more comfortable with leveraging it as a platform? >>Sure. A lot of companies see it as a cost center, and so if you can make it easier, or possible, to do operations with fewer people in the loop, that makes it a cost benefit for a lot of people. But you also need to keep people in the loop: you need to keep the people that you have included, and be transparent about what infrastructure choices and changes you're making. So that's one of the things that GitOps really helps with. >>That transparency is key. One more question for you: can you share a little bit, before we wrap here, about the project roadmap and some of the things that are coming down the pike? >>So I mentioned graduation; that's the immediate goal we're working towards most directly. We have grown our number of integrations pretty significantly; we have an operator entry in Red Hat OpenShift's OperatorHub, where you can go and click to install Flux, and that's great. And we look forward to making Flux more compatible with more of the tools that you find under the CNCF umbrella. That's what our roadmap is for.
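Barrett's earlier point about only applying commits that are signed by a permitted author can be sketched in a few lines. This is not Flux's code; it only shows the general idea of gating an apply step on `git verify-commit`, which checks a commit's GPG signature, with the repository path and the apply step left as hypothetical placeholders.

```python
import subprocess

def commit_is_signed(repo_path: str, ref: str = "HEAD") -> bool:
    # `git verify-commit` exits non-zero when the commit lacks a valid GPG signature
    # from a key in the local keyring.
    result = subprocess.run(
        ["git", "-C", repo_path, "verify-commit", ref],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

def apply_if_verified(repo_path: str) -> None:
    if commit_is_signed(repo_path):
        print("signature OK, applying manifests")  # hypothetical apply step
    else:
        print("refusing to apply: HEAD is not signed by a trusted key")

if __name__ == "__main__":
    apply_if_verified("/tmp/clone-of-config-repo")
```

Combined with pull-request review, this is the transparency point above in practice: every change that reaches the cluster is attributable to a known author and visible in the repository history.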

Published Date : Oct 14 2021

SUMMARY :

David's great to be in person with other humans You're arms length away. We're excited to welcome kingdom Barrett to the program, to us about flux and its evolution. Uh, so flex is, uh, uh, just got into its second version a while So a pretty broad spectrum of people. uh, products that you can buy from their cloud vendor, uh, that, uh, are based on flux. Yeah. What, how would you describe the flux get ops and, uh, the get ups working group, publish, publish this at, uh, open get-ups dot dev where you can Uh, and, uh, if you have an issue with a new release, if you're, uh, w when you want to make a change to your cluster or your application, you push it to get the automation uh, all working with you to deliver, get ups to enterprise customers. So that means that it's safe for them to build on top of, and they can, uh, of security, uh, in this space. Um, and, uh, we've, we've liked what we've seen and preliminary results. and maybe even share with us some of the feedback that you got from the audience? And, uh, when, when, uh, you do that. Um, so there's compatibility there and, and usability familiarity for the audience, uh, opportunity to collaborate with other open source teams. it helps the community, it helps you, it helps them, you know, that's, So, I think get ops is, uh, we'll provide them with the opportunity to, Do you see that influencing the adoption of Kubernetes and maybe different So, uh, that's one of the things that get ups really helps with Can you share a little bit before we wrap here about the project roadmap Um, and, uh, we looked forward to, uh, And one more time mentioned the event, October 20th, I believe he said, uh, trying to use get-ups and you have one of these vendors as your cloud vendor, You're joining Dave and me on the program, talking to us about flux. con 21 stick around Dave and I, and we'll be right back with our next guest.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Lisa MartinPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Dave NicholsonPERSON

0.99+

AlibabaORGANIZATION

0.99+

AWSORGANIZATION

0.99+

October 20thDATE

0.99+

AmazonORGANIZATION

0.99+

Los AngelesLOCATION

0.99+

DavidPERSON

0.99+

JuneDATE

0.99+

one-lineQUANTITY

0.99+

oneQUANTITY

0.99+

JenkinsTITLE

0.99+

second versionQUANTITY

0.98+

CNCFORGANIZATION

0.98+

flexORGANIZATION

0.98+

VMwareORGANIZATION

0.98+

this weekDATE

0.98+

first yearQUANTITY

0.98+

firstQUANTITY

0.97+

GitHubORGANIZATION

0.97+

todayDATE

0.97+

CubeConEVENT

0.96+

two separate jobsQUANTITY

0.95+

Kingdon BarrettORGANIZATION

0.95+

udNativeCon NA 2021EVENT

0.95+

One more questionQUANTITY

0.93+

WeaveworksORGANIZATION

0.92+

second oneQUANTITY

0.92+

CICTITLE

0.92+

UpstateORGANIZATION

0.92+

KubernetesTITLE

0.92+

CubeConORGANIZATION

0.89+

FluxusTITLE

0.87+

Qube conEVENT

0.86+

KubernetesORGANIZATION

0.85+

KubeConEVENT

0.84+

ups days.comOTHER

0.83+

upstairs.comOTHER

0.83+

fourQUANTITY

0.82+

BarrettPERSON

0.81+

three takeawaysQUANTITY

0.8+

hundred thousand developersQUANTITY

0.79+

18QUANTITY

0.75+

VMware detour IQORGANIZATION

0.75+

KubernetesPERSON

0.75+

one more timeQUANTITY

0.74+

four of themQUANTITY

0.71+

getOTHER

0.7+

1.0OTHER

0.7+

19 monthsQUANTITY

0.69+

CIITITLE

0.68+

CloORGANIZATION

0.66+

fluxTITLE

0.64+

TuesdayDATE

0.64+

COVIDTITLE

0.63+

D two IQORGANIZATION

0.62+

conEVENT

0.59+

conORGANIZATION

0.59+

lastDATE

0.56+

21QUANTITY

0.5+

cloudORGANIZATION

0.5+

VMwareTITLE

0.37+

Ruvi Kitov, Tufin | Fortinet Security Summit 2021


 

>>From around the globe, it's the cube, covering the Fortinet Security Summit, brought to you by Fortinet. >>Okay, welcome back everyone to the cube's coverage of Fortinet's championship golf tournament. We're here for the cybersecurity summit, and we've got a great guest: Ruvi Kitov, CEO and co-founder of Tufin. Great to have you on; thank you for coming on the cube. We were chatting before we came on camera about the big talk you just gave. Thanks for coming on. >>Thanks for having me. >>Not a bad place here. Golf tournament, golf and cybersecurity kind of go together: keep the ball in the middle of the fairway, don't let it get out of bounds. >>And it's a beautiful place, so we're very happy to be here and be a premier sponsor of the event. >>Congratulations, and good to have you on. Let's get into the cybersecurity. We were talking before we came on camera about how transformation is really hard: moving to the cloud is really hard, refactoring is really hard, but security is really, really hard. That's true. So how do you look at how security is perceived in companies? Are there dynamics being amplified by the rapid movement to the cloud? You're seeing apps being developed really fast, changes happening fast. What's the barometer of the industry right now? >>Sure. It's interesting, and this hasn't really changed in the past few years, but we've seen it exacerbated, getting worse and worse. I think in a lot of companies, security is actually seen as a blocker, and frankly security is probably the most hated department in the organization, because a lot of times, first of all, security says no, but also they just take their time. If you think about organizations, enterprises run on top of their enterprise applications. They have applications that their own in-house developers are writing, and those developers are changing their apps all the time; they're driving change in IT as well. So you end up having dozens of change requests from developers who want to open connectivity: they want to go from point A to point B on the network, they open a ticket, it reaches the network security team, and that ticket might take several days until it's implemented in production. So the level of service that security provides the application teams today is really not very high. So you can really understand why security is not looked upon favorably by the rest of the organization. >>And in some organizations, my perception is that the hardcore security teams that have been around for a while have got standards and they're hardcore: a new app comes in, it's got to be approved, something's got to get done, and it's slower, right? It slows people down, at least that's the perception. How is it changing? >>So it's changing because when you're moving to the cloud, and a lot of organizations are adopting the cloud in many ways, private cloud, public cloud, hybrid cloud, they're working in cloud native environments, and in those environments the developers own the keys to the kingdom, right? They're managing AWS, Azure, Google Cloud; they're managing GitHub. They've got the place to themselves. So they're pushing changes in their apps without asking IT for permission, and they're suddenly exposed to how fast it can really be, while anything they do in the on-prem or sort of traditional applications is still moving very slowly, unless they're using an automated approach to policy.
So one of the things that I spoke about today is the need for organizations to adopt a policy-centric approach. They need to define a policy of who can talk to whom and what can connect to what across the entire organizational network, whether it's firewalls, routers, switches, or cloud platforms. >>And then, once you have that policy, you can start automating based on the policy. So the concept is: somebody opens a ticket, a developer wants to make a change, they open a ticket in ServiceNow or Remedy, and that ticket reaches a system that's going to check it for compliance against the policy. If you're able to immediately tell whether that change is compliant or not, then you're able to make in a split second a decision that might take an analyst a couple of days, and then you can design the perfect minimal change to implement on the network. That is really agile, right? That's what developers want to see, and a lot of security departments are really struggling with that today. >>Why are they? That seems like a no-brainer, because policy-based innovation has been around in the network layer for many years, decades, right? It makes things go better, faster. Why would they be against it? Where are they? >>Yeah, so they're not really against it. I think it's just that the sheer complexity and size of today's networks is nothing compared to where it was 10 years ago. You have tens to hundreds of firewalls in large enterprises, thousands of routers and switches, load balancers, private cloud SDN like NSX and ACI, public cloud, Kubernetes. It's just a plethora of networking, and this proliferation of networking is getting worse and worse, especially with IoT and now the move to the cloud. So it is just so complex that if you don't have specialized tools, there's absolutely no way you'll be able to keep up. >>So your talk must have gone over well, because I do a lot of interviews and I hear developers talking about shift left, right? Which is basically vernacular for doing security in the dev CI/CD pipeline, so while you're there, rather than having to go fix the bugs later. This seems to be a hot trend; people like it, they want it, they want to check it off, get it done, and move on. Does this policy-based automation help them here? >>It does in some ways. I mean, you need a policy for the cloud as well, but there's a different challenge altogether that I see in the cloud, and one of the challenges that we're seeing is that there's actually a political divide. You have network security folks who are managing firewalls, routers, switches, and maybe the hub to the cloud, and then inside the spokes, inside the cloud itself, you have a different team: cloud operators, cloud security folks. And those two teams don't really talk to each other. Some companies have set up centers of excellence where they're trying to bring all the experts together, but in most companies, network security folks who want to understand what's happening inside the cloud are sort of given the Heisman. They're not invited to meetings, and there's a lack of collaboration, which I think is tragic, because it's not going to end well. So there are huge challenges in security in the cloud, and unless these two departments talk to each other and work together, we're not going to get anywhere near the level of security that we need. >>The cloud team, the cloud guys, if you will, quote guys or gals, and the security guys and gals, they're not getting along.
What is it, is it historical? Just legacy structures? Is it more of a "my department, I own the keys to the kingdom, so go through me" kind of vibe? Or is it more just evolution, with developers saying, I'm going to go around you, the way shadow IT created the cloud? Is there like a shadow security trend around this? >>Yeah, there is. And I think it stems from what we covered in the beginning, which is that app developers are now used to, and trained to, fear security. Every change they want on the on-prem network takes a week, right? They move to the cloud and suddenly they're able to roam freely and do things quickly. If network security folks come by and say, oh, we want to take a look at those changes, what they hear in the music is: we're going to slow you down. And the last thing cloud guys want to hear is that we're going to slow you down. So they're rightly afraid of what's going to happen if they enable a very cumbersome and slow process. We've got to work differently, right? So there are new paradigms with DevSecOps, where security is built into the CI/CD pipeline, where it doesn't slow down app developers but enables compliance and visibility into the cloud environments at the same time. Great stuff. >>Great insight. I want to ask you about one of the things in your talk that I found interesting, and I'd like to have you explain it in more detail: you think security can be an enabler for digital transformation. Digital transformation gets kicked around a lot; everyone's transforming, okay, everyone knows that. But security, how does security become that enabler? >>So today, security is seen as a blocker to digital transformation. I think anybody that claims, hey, we're on a path to digital transformation, we're automated, we're digitally transformed, and yet, if you ask the right people, you find out every change takes a week on the network, you're not digitally transformed, right? So if you adopt a framework where you're able to make changes in a compliant, secure manner, and make changes in minutes instead of days, suddenly you'll be able to provide a level of service to app developers like they're getting in the cloud; that's digital transformation. So I see the network change process as pretty much the last piece of it that has not been digitally transformed yet. >>And this is where a lot of the opportunity is. Exactly. All right, so talk about what you guys are doing to solve that problem, because this is a big discussion. Obviously security is on everyone's mind; they're going from reactive to proactive, they're buying every tool they can, platforms are coming out, you're starting to see a control plane, you're starting to see things like collective intelligence networks forming. What's the solution to all this? >>Right. So what we've developed is a security policy layer that sits on top of all the infrastructure. We've got four products in the Tufin Orchestration Suite, where we can connect to all the major firewalls, routers, switches, cloud platforms, and private cloud SDN. So we see the configuration in all those different platforms, we know what's happening on the ground, and we build a topology model, one of the industry's best topology models, that enables us to query and say, okay, from point A to point B, which firewalls, routers, switches, and cloud platforms will you traverse?
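Kitov's description of checking a requested change against a policy and then asking which devices a flow from point A to point B would traverse can be illustrated with a small sketch. This is not Tufin's product or API; the policy format, the device graph, and the zone names are all made-up stand-ins, and the path search is an ordinary breadth-first traversal over a toy topology.

```python
from collections import deque

# Hypothetical security policy: which source zones may talk to which
# destination zones, and on which ports.
POLICY = [
    {"src": "app-tier", "dst": "db-tier", "ports": {5432}},
    {"src": "web-tier", "dst": "app-tier", "ports": {8080, 8443}},
]

# Hypothetical topology: which devices and zones are directly connected.
TOPOLOGY = {
    "web-tier": ["fw-edge"],
    "fw-edge": ["web-tier", "router-core"],
    "router-core": ["fw-edge", "fw-dc", "cloud-gw"],
    "fw-dc": ["router-core", "app-tier", "db-tier"],
    "cloud-gw": ["router-core", "vpc-prod"],
    "app-tier": ["fw-dc"],
    "db-tier": ["fw-dc"],
    "vpc-prod": ["cloud-gw"],
}

def is_compliant(src, dst, port):
    """Return True if the requested connectivity is allowed by the policy."""
    return any(r["src"] == src and r["dst"] == dst and port in r["ports"]
               for r in POLICY)

def path(src, dst):
    """Breadth-first search: which nodes a flow would traverse from src to dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dst:
            return route
        for neighbor in TOPOLOGY.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(route + [neighbor])
    return None

if __name__ == "__main__":
    # A developer's ticket: open app-tier -> db-tier on port 5432.
    src, dst, port = "app-tier", "db-tier", 5432
    if not is_compliant(src, dst, port):
        print("ticket rejected: change is not compliant with policy")
    else:
        hops = path(src, dst)
        # Devices along the path are where the change would be designed and provisioned.
        print("compliant; flow traverses:", " -> ".join(hops))
```

In the workflow described here, a compliant path would then drive the design of the minimal rule change on each device along the route before anything is provisioned.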
And then we integrate with a ticketing system like Remedy or ServiceNow, so that the user experience is: a developer opens a ticket for a change, that ticket gets into Tufin, and we check it against the policy that was defined by the security managers, the policy of who can talk to whom and what can connect to what across the physical network and the cloud. >>So we can tell within a split second, is this compliant or not? If it's not compliant, we don't waste an engineer's time; we kick it back to the original user. But if it is compliant, we use that topology model to perform network change design. We design the perfect minimal change to implement on every firewall, router, switch, and cloud platform, and then the last mile is that we provision that change automatically. So we're able to make a change in minutes instead of days, with dramatically better security and accuracy. So the ROI on Tufin is not just security, but agility balanced with security at the same time. >>So you like the rules of the road, but the roads are changing all the time. How do you keep track of what's going on? You must have some sort of visualization technology when you lay out the topology, and as things become compliant you might see opportunities to innovate: hey, I love this policy, but I'm going to keep working on my policy, because you've got to up your game on policy and continue to iterate. Is that how your customers deal with it? >>So listen, we're not a tiny company anymore. We've grown. We went public in April of 2019, we've raised capital, we have over 500 employees, and we've sold to over 2,000 customers worldwide. So when customers ask us for advice, we come in and help them with consulting or professional services in terms of deployment. And the other piece is we have to keep up all the time with what's happening with Fortinet, for example, as one of our strategic partners. Every time Fortinet makes a change, we're on the beta program, so we know about a code change, we're able to test it in the lab, and we know about their latest features. We've got to keep up with all that. That takes a lot of engineering effort, we've hired a lot of engineers and we're hiring more, so it takes a lot of investment to do this at scale. And we're able to deliver that for our customers. >>I want to get into the relationship with Fortinet. I see you're here at the golf tournament, you're part of the pavilion, and you're part of the tournament, by the way. Congratulations, great event. Thank you. What's the relationship with Fortinet from a product and a customer technology standpoint? >>We're working closely with Fortinet; they're a strategic partner of ours. We're integrated into their FortiManager APIs, and we're a Fabric-Ready solution for them, so obviously we work closely. Some of our biggest customers are Fortinet's biggest customers. We got the opportunity to sponsor this event, which is great, tons of customers here and very interesting conversations. So we're very happy with that relationship. >>This is good. Yeah. So let me ask you, what have you learned? I think you've got great business success. Looking back now to where we are today, and the speed of the market, what's your big takeaway in terms of how security has changed, how it continues to be challenging, and these opportunities? What was the big takeaway for you?
>>Well, I guess if you're spanning my career, the big takeaway is, first of all, just in the startup world, patience: good things come to those who wait. But also, you've got to have the basics right. What we do is foundational. And there were times when people didn't believe in what we do, or thought, you know, this is minor, this is not important, as people move to the cloud this won't matter. Oh, it matters. It matters not just on-prem, it matters in the cloud as well. You've got to have a baseline of a policy, and you've got to base everything around that. And so we've sort of had that mantra from day one, and we were right. We're very happy to be where we are today. Yeah. >>And, you know, as a founder, a co-founder of the company, most of the most successful companies I've observed were usually misunderstood for a long time. That's true. Jassy's favorite quote on the cube, and he's now the CEO of Amazon, is: we were misunderstood for a long time, and I'm surprised it took people this long to figure out what we were doing. And that was a good thing. So, you know, just having that north star vision, staying true to the problem even when there were probably opportunities, or pressure, to do something else. Sure. Yeah. I mean, you stayed the course. What was the key thing? Grit? Focus? Yes. >>Look, in startup life it's sort of like being in sales. We got told no a thousand times before we got told yes, or maybe a hundred times. So you've got to persevere, you've got to be really confident in what you're doing, and just stay the course. And we felt pretty strongly about what we're building, that the technology was right, that the need in the market was right, and we just stuck to our guns. >>So focus on the future. What do the next five years look like? What's your focus? What's the strategic imperative for you guys? What are you working on? >>So there are several things. On the business side, we're transitioning to a subscription-based model and we're moving into SaaS; one of our products is now a SaaS-based product, so that's very important to us. We're also undergoing a shift: we have a new version called Tufin Aurora, and Tufin Aurora is a transformation. It's our next-generation product. We've rearchitected the entire underlying infrastructure to be based on microservices so we can be cloud ready. That's a major focus in terms of engineering. And in terms of customers, we're selling to larger and larger enterprises, and we think this policy topic is critical, not just on-prem but in the cloud. So in the next three years, as people move more and more to the cloud, we believe that what we do will become even more relevant as organizations straddle on-premise networks and the cloud. >>So safe to say that you believe policy-based architecture is the key to automation? >>Absolutely. You can't automate what you don't know. Like I mentioned in my talk, people say, oh, I can do this, I can cook up an Ansible script and automate. All right, you'll push a change, but what is the logic? Why did you make that decision? What is it based on? You've got to have a core foundation, and that foundation is the policy. >>Really great insight. Great to have you on the cube. You've got great success and working knowledge, and you're in the right place.
And you're skating to where the puck is and will be, as they say. Congratulations on your success. >>Thank you very much. Thanks for having me. >>Okay, more cube coverage here at the Fortinet Championship Summit, the cybersecurity summit at the Fortinet Championship golf tournament here in Napa Valley. I'm John Furrier. Thanks for watching.

Published Date : Sep 14 2021


DockerCon2021 Keynote


 

>>Individuals create developers, translate ideas to code, to create great applications and great applications. Touch everyone. A Docker. We know that collaboration is key to your innovation sharing ideas, working together. Launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, we're doing research to save lives during a pandemic, revolutionizing, how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to Docker con 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year at DockerCon is 100% virtual and 100% free. So as to enable as many community members as possible to join us now, 100%. Virtual is also an acknowledgement of the continuing global pandemic in particular, the ongoing tragedies in India and Brazil, the Docker community is a global one. And on behalf of all Dr. Khan attendees, we are donating $10,000 to UNICEF support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right. Then no further delay. I'd like to introduce Youi Cal and Ben, gosh, over to you and Ben >>Morning, Ben, thanks for jumping on real quick. >>Have you seen the email from Scott? The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. 
So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what you use doing what changes she had made. Compare those to mine just jumped back into my dev container UI, see that I've got both of those running side by side with my changes and news changes. Okay. So she's put Molly up there rather than mobi or somebody had the same idea. So I think in a way I can make us both happy. So if we just jumped back into what we'll do, just add Molly and Moby and here I'll save that. And what we can see is, cause I'm just working within the container rather than having to do sort of rebuild of everything or serve, or just reload my content. No, that's straight the page. So what I can then do is I can come up with my browser here. Once that's all refreshed, refresh the page once hopefully, maybe twice, we should then be able to see your refresh it or should be able to see that we get Malia mobi come up. So there we go, got Molly mobi. So what we'll do now is we'll describe that state. It sends us our image and then we'll just create one of those to share with URI or share. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing and I'm actually going to change. I think that might work for both of us. I wondered if you could take a look at it. If I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the doc dashboard, it should be able to open up the code that I've changed and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on because there'll be able to run side by side with your other brunch. You already got, >>Got it. Got it. Loading here. Well, that's great. It's Molly and movie together. I love it. I think we should ship it. >>Awesome. I guess it's chip it and get on with the rest of.com. Wasn't that cool. Thank you Joey. Thanks Ben. Everyone we'll have more of this later in the keynote. So stay tuned. Let's say earlier, we've all been challenged by this past year, whether the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies, we all been touched in some way. And yet, even to miss these tragedies last year, we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, if all in teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker hub. As another example, we all witnessed the historic landing and exploration of Mars by the perseverance Rover and its ingenuity drone. >>Now what's common in these examples, these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. 
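The dev environment being passed around in that demo is, under the hood, an ordinary Compose-described application: sharing it hands a teammate the code, the dependencies, and the base image in one pull, and they open it in their own editor without touching their local setup. The file below is a minimal sketch of that shape; the service names, images, and ports are illustrative assumptions, not the actual demo project.

    # compose.yaml for a hypothetical three-service app used as a shared dev environment
    services:
      frontend:
        build: ./frontend            # the landing page being edited live in the demo
        ports:
          - "8080:80"
        volumes:
          - ./frontend:/app          # code changes show up on refresh, no image rebuild
      backend:
        build: ./backend
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:14           # a Docker Official Image as the trusted base
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app

Opening a project like this through Docker Desktop's Dev Environments feature, or following a shared link as in the demo, brings up the same containers side by side with your own branch of the work.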
The power of teams is why we've made development teams central to Docker's mission to build tools and content development teams love to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down to them in teams is that the path from code to cloud can be a confusing one, riddle with multiple point products, tools, and images that need to be integrated and maintained an automated pipeline in order for teams to be productive. That's why a year and a half ago we refocused Docker on helping development teams make sense of all this specifically, our goal is to provide development teams with the trusted content, the sharing capabilities and the pipeline integrations with best of breed third-party tools to help teams ship faster in short, to provide a collaborative application development platform. >>Everything a team needs to build. Sharon run create applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet and has been similar for us here at Docker. Our team had to adapt to working from home local lockdowns caused by the pandemic and other challenges. And despite all this together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the composed plugins building on these open source projects. We had powerful new capabilities to the Docker product, both free and subscription. For example, support for WSL two and apple, Silicon and Docker, desktop and vulnerability scanning audit logs and image management and Docker hub. >>And finally delivering an easy to use well-integrated development experience with best of breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercialized fees, join our Docker verified publisher program and over 200 open source projects, join our Docker sponsored open source program. As a result of these efforts, we've seen some exciting growth in the Docker community in the 12 months since last year's Docker con for example, the number of registered developers grew 80% to over 8 million. These developers created many new images increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses totaling 13 billion pulls a month. Now while the growth is exciting by Docker, we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses. For example, cancer researchers and their bioinformatics development team at the Washington university school of medicine needed a way to quickly analyze their clinical trial results and then share the models, the data and the analysis with other researchers they use Docker because it gives them the ease of use choice of pipeline tools and speed of sharing so critical to their research. And most importantly to the lives of their patients stay tuned for another powerful customer story later in the keynote from Matt fall, VP of engineering at Oracle insights. >>So with this last year behind us, what's next for Docker, but challenge you this last year of force changes in how development teams work, but we felt for years to come. 
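One everyday way that trusted content shows up is simply in what a Dockerfile starts FROM: teams build on an actively maintained Docker Official Image, or a verified publisher image, rather than an arbitrary base, and inherit its patching cadence. A minimal sketch, with the application details and the choice of python:3.10-slim made up purely for illustration:

    # Dockerfile: start from a curated, regularly patched Docker Official Image
    FROM python:3.10-slim

    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .

    # small hardening step on top of the trusted base: don't run as root
    RUN useradd --create-home appuser
    USER appuser

    CMD ["python", "app.py"]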
And what we've learned in our discussions with you will have long lasting impact on our product roadmap. One of the biggest takeaways from those discussions that you and your development team want to be quicker to adapt, to changes in your environment so you can ship faster. So what is DACA doing to help with this first trusted content to own the teams that can focus their energies on what is unique to their businesses and spend as little time as possible on undifferentiated work are able to adapt more quickly and ship faster in order to do so. They need to be able to trust other components that make up their app together with our partners. >>Docker is doubling down and providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration on a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So as you even been hinted in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team. And we're enabling development team to quickly adapt to any team configuration all on prem hybrid, all work from home, helping them remain productive and focused on shipping third ecosystem integrations, those development teams that can quickly take advantage of innovations throughout the ecosystem. Instead of getting locked into a single monolithic pipeline, there'll be the ones able to deliver amps, which impact their businesses faster. >>So together with our ecosystem partners, we are investing in more integrations with best of breed tools, right? Integrated automated app pipelines. Furthermore, we'll be writing more public API APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations. Later in the keynote, I'd like to take a moment to share with Docker and our partners are doing for trusted content, providing development teams, access to content. They can trust, allows them to focus their coding efforts on what's unique and differentiated to that end Docker and our partners are bringing more and more trusted content to Docker hub Docker official images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming, languages, databases, and more. Furthermore, these are updated patch scan and certified frequently. So I said, no image is older than 30 days. >>Docker verified publisher images are published by more than 100 commercialized feeds. The image Rebos are explicitly designated verify. So the developers searching for components for their app know that the ISV is actively maintaining the image. Docker sponsored open source projects announced late last year features images for more than 200 open source communities. Docker sponsors these communities through providing free storage and networking resources and offering their community members unrestricted access repos for businesses allow businesses to update and share their apps privately within their organizations using role-based access control and user authentication. No, and finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. 
>>And for all these different types of content, we provide services for both development teams and ISP, for example, vulnerability scanning and digital signing for enhanced security search and filtering for discoverability packaging and updating services and analytics about how these products are being used. All this trusted content, we make available to develop teams for them directly to discover poll and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with J frog late last year. And today we're very pleased to announce our partnerships with Amazon and Miranda's for providing an integrated seamless experience for joint for our joint customers. Lastly, the container images themselves and this end to end flow are built on open industry standards, which provided all the teams with flexibility and choice trusted content enables development teams to rapidly build. >>As I let them focus on their unique differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content as well as remote collaboration and ecosystem integrations later in the keynote. Now ecosystem partners are not only integral to the Docker experience for development teams. They're also integral to a great DockerCon experience, but please join me in thanking our Dr. Kent on sponsors and checking out their talks throughout the day. I also want to thank some others first up Docker team. Like all of you this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product, the Docker community of captains, community leaders, and contributors with your welcoming newcomers, enthusiasm for Docker and open exchanges of best practices and ideas talker, wouldn't be Docker without you. And finally, our development team customers. >>You trust us to help you build apps. Your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the tenant's developer capable of great individual feeds that can transform project. But I wonder if we, as an industry have perhaps gotten this wrong by putting so much emphasis on weight, on the individual as discussed at the beginning, great accomplishments like innovative responses to COVID-19 like landing on Mars are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is delivered tools and content developers love to help their team succeed and become 10 X teams. Thanks again for joining us, we look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks and be well. >>Hi, I'm Dana Lawson, VP of engineering here at get hub. And my job is to enable this rich interconnected community of builders and makers to build even more and hopefully have a great time doing it in order to enable the best platform for developers, which I know is something we are all passionate about. We need to partner across the ecosystem to ensure that developers can have a great experience across get hub and all the tools that they want to use. No matter what they are. My team works to build the tools and relationships to make that possible. I am so excited to join Scott on this virtual stage to talk about increasing developer velocity. 
So let's dive in now, I know this may be hard for some of you to believe, but as a former CIS admin, some 21 years ago, working on sense spark workstations, we've come such a long way for random scripts and desperate systems that we've stitched together to this whole inclusive developer workflow experience being a CIS admin. >>Then you were just one piece of the siloed experience, but I didn't want to just push code to production. So I created scripts that did it for me. I taught myself how to code. I was the model lazy CIS admin that got dangerous and having pushed a little too far. I realized that working in production and building features is really a team sport that we had the opportunity, all of us to be customer obsessed today. As developers, we can go beyond the traditional dev ops mindset. We can really focus on adding value to the customer experience by ensuring that we have work that contributes to increasing uptime via and SLS all while being agile and productive. We get there. When we move from a pass the Baton system to now having an interconnected developer workflow that increases velocity in every part of the cycle, we get to work better and smarter. >>And honestly, in a way that is so much more enjoyable because we automate away all the mundane and manual and boring tasks. So we get to focus on what really matters shipping, the things that humans get to use and love. Docker has been a big part of enabling this transformation. 10, 20 years ago, we had Tomcat containers, which are not Docker containers. And for y'all hearing this the first time go Google it. But that was the way we built our applications. We had to segment them on the server and give them resources. Today. We have Docker containers, these little mini Oasys and Docker images. You can do it multiple times in an orchestrated manner with the power of actions enabled and Docker. It's just so incredible what you can do. And by the way, I'm showing you actions in Docker, which I hope you use because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do anybody out there can, and in this demo, I'll show you about the basic components needed to create and use a package, Docker container actions. And like I said, you won't believe how awesome the combination of Docker and actions is because you can enable your workflow to do no matter what you're trying to do in this super baby example. We're so small. You could take like 10 seconds. Like I am here creating an action due to a simple task, like pushing a message to your logs. And the cool thing is you can use it on any the bit on this one. Like I said, we're going to use push. >>You can do, uh, even to order a pizza every time you roll into production, if you wanted, but at get hub, that'd be a lot of pizzas. And the funny thing is somebody out there is actually tried this and written that action. If you haven't used Docker and actions together, check out the docs on either get hub or Docker to get you started. And a huge shout out to all those doc writers out there. I built this demo today using those instructions. And if I can do it, I know you can too, but enough yapping let's get started to save some time. And since a lot of us are Docker and get hub nerds, I've already created a repo with a Docker file. So we're going to skip that step. Next. 
I'm going to create an action's Yammel file. And if you don't Yammer, you know, actions, the metadata defines my important log stuff to capture and the input and my time out per parameter to pass and puts to the Docker container, get up a build image from your Docker file and run the commands in a new container. >>Using the Sigma image. The cool thing is, is you can use any Docker image in any language for your actions. It doesn't matter if it's go or whatever in today's I'm going to use a shell script and an input variable to print my important log stuff to file. And like I said, you know me, I love me some. So let's see this action in a workflow. When an action is in a private repo, like the one I demonstrating today, the action can only be used in workflows in the same repository, but public actions can be used by workflows in any repository. So unfortunately you won't get access to the super awesome action, but don't worry in the Guild marketplace, there are over 8,000 actions available, especially the most important one, that pizza action. So go try it out. Now you can do this in a couple of ways, whether you're doing it in your preferred ID or for today's demo, I'm just going to use the gooey. I'm going to navigate to my actions tab as I've done here. And I'm going to in my workflow, select new work, hello, probably load some workflows to Claire to get you started, but I'm using the one I've copied. Like I said, the lazy developer I am in. I'm going to replace it with my action. >>That's it. So now we're going to go and we're going to start our commitment new file. Now, if we go over to our actions tab, we can see the workflow in progress in my repository. I just click the actions tab. And because they wrote the actions on push, we can watch the visualization under jobs and click the job to see the important stuff we're logging in the input stamp in the printed log. And we'll just wait for this to run. Hello, Mona and boom. Just like that. It runs automatically within our action. We told it to go run as soon as the files updated because we're doing it on push merge. That's right. Folks in just a few minutes, I built an action that writes an entry to a log file every time I push. So I don't have to do it manually. In essence, with automation, you can be kind to your future self and save time and effort to focus on what really matters. >>Imagine what I could do with even a little more time, probably order all y'all pieces. That is the power of the interconnected workflow. And it's amazing. And I hope you all go try it out, but why do we care about all of that? Just like in the demo, I took a manual task with both tape, which both takes time and it's easy to forget and automated it. So I don't have to think about it. And it's executed every time consistently. That means less time for me to worry about my human errors and mistakes, and more time to focus on actually building the cool stuff that people want. Obviously, automation, developer productivity, but what is even more important to me is the developer happiness tools like BS, code actions, Docker, Heroku, and many others reduce manual work, which allows us to focus on building things that are awesome. >>And to get into that wonderful state that we call flow. According to research by UC Irvine in Humboldt university in Germany, it takes an average of 23 minutes to enter optimal creative state. What we call the flow or to reenter it after distraction like your dog on your office store. 
So staying in flow is so critical to developer productivity and as a developer, it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling intuitive collaboration and automation features we built in to get hub help developer, Sam flow, allowing you and your team to do so much more, to bring the benefits of automation into perspective in our annual October's report by Dr. Nicole, Forsgren. One of my buddies here at get hub, took a look at the developer productivity in the stork year. You know what we found? >>We found that public GitHub repositories that use the Automational pull requests, merge those pull requests. 1.2 times faster. And the number of pooled merged pull requests increased by 1.3 times, that is 34% more poor requests merged. And other words, automation can con can dramatically increase, but the speed and quantity of work completed in any role, just like an open source development, you'll work more efficiently with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest because you don't need to do it, make the machines by elaborate by leveraging automation in their workflows teams, minimize manual work and reclaim that time for innovation and maintain that state of flow with development and collaboration. More importantly, their work is more enjoyable because they're not wasting the time doing the things that the machines or robots can do for them. >>And I remember what I said at the beginning. Many of us want to be efficient, heck even lazy. So why would I spend my time doing something I can automate? Now you can read more about this research behind the art behind this at October set, get hub.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem we at get hub are so honored to be the home of more than 65 million developers who build software together for everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it. Check out this globe. This globe shows real data. Every speck of light you see here represents a contribution to an open source project, somewhere on earth. >>These arts reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of dev ops, SecOps and lots, or the new ops that are going to be happening. But today's development and ops teams are connected like ever before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers, automation helps us focus on what's important and to greatly accelerate innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements I'll say ever, including critical COVID-19 vaccine trials, as well as the first power flight on Mars. This past month, these breakthroughs were only possible because of the interconnected collaborative open source communities on get hub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating. So we collectively can give developers the experience. 
They deserve all of the automation and beautiful eye UIs that we can muster so they can continue to build the things that truly do change the world. Thank you again for having me today, Dr. Khan, it has been a pleasure to be here with all you nerds. >>Hello. I'm Justin. Komack lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything security ops on goal data analysis, all being put on the rockers. Software's eating the world. Of course, and this all make sense in that view, but they need help. One team. I told you it's shifted all our.net apps to run on Linux from windows, but their developers found the complexity of Docker files based on the Linux shell scripts really difficult has helped make these things easier for your teams. Your ones collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers have asked for a paved road experience. You want things to just work with a simple options to be there, but it's not just the paved road. You also want to be able to go off-road and do interesting and different things. >>Use different components, experiments, innovate as well. We'll always offer you both those choices at different times. Different developers want different things. It may shift for ones the other paved road or off road. Sometimes you want reliability, dependability in the zone for day to day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you knew those off-road abilities too. So you can really get under the hood and go and build something weird and wonderful and amazing. That gives you new options. Talk as an independent choice. We don't own the roads. We're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as ISEI working opensource with the CNCF. We want to help you get your applications from your laptops, the clouds, and beyond, even into space. >>Let's talk about the key focus areas, that frame, what DACA is doing going forward. These are simplicity, sharing, flexibility, trusted content and care supply chain compared to building where the underlying kernel primitives like namespaces and Seagraves the original Docker CLI was just amazing Docker engine. It's a magical experience for everyone. It really brought those innovations and put them in a world where anyone would use that, but that's not enough. We need to continue to innovate. And it was trying to get more done faster all the time. And there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the area we haven't done enough and make things magical enough that we're really planning around now is that, you know, Docker images, uh, they're the key parts of your application, but you know, how do I do something with an image? How do I, where do I attach volumes with this image? What's the API. Whereas the SDK for this image, how do I find an example or docs in an API driven world? Every bit of software should have an API and an API description. And our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from, you know, from your code to the cloud local and remote, you can, you can use containers in this amazing and exciting way. 
>>One thing I really noticed in the last year is that companies that started off remote fast have constant collaboration. They have zoom calls, apron all day terminals, shattering that always working together. Other teams are really trying to learn how to do this style because they didn't start like that. We used to walk around to other people's desks or share services on the local office network. And it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal. Let me try your container or just maybe let's collaborate on this together. Um, you know, fast collaboration on the analysts, fast iteration, fast working together, and he wants to share more. You want to share how to develop environments, not just an image. And we all work by seeing something someone else in our team is doing saying, how can I do that too? I can, I want to make that sharing really, really easy. Ben's going to talk about this more in the interest of one minute. >>We know how you're excited by apple. Silicon and gravis are not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. The M one support was the most asked for thing on our public roadmap, EFA, and we listened and share that we see really exciting possibilities, usership arm applications, all the way from desktop to production. We know that you all use different clouds and different bases have deployed to, um, you know, we work with AWS and Azure and Google and more, um, and we want to help you ship on prime as well. And we know that you use huge number of languages and the containers help build applications that use different languages for different parts of the application or for different applications, right? You can choose the best tool. You have JavaScript hat or everywhere go. And re-ask Python for data and ML, perhaps getting excited about WebAssembly after hearing about a cube con, you know, there's all sorts of things. >>So we need to make that as easier. We've been running the whole month of Python on the blog, and we're doing a month of JavaScript because we had one specific support about how do I best put this language into production of that language into production. That detail is important for you. GPS have been difficult to use. We've added GPS suppose in desktop for windows, but we know there's a lot more to do to make the, how multi architecture, multi hardware, multi accelerator world work better and also securely. Um, so there's a lot more work to do to support you in all these things you want to do. >>How do we start building a tenor has applications, but it turns out we're using existing images as components. I couldn't assist survey earlier this year, almost half of container image usage was public images rather than private images. And this is growing rapidly. Almost all software has open source components and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. And this was always the model with Docker compose. And it's a model that you're already et cetera, writing you trust Docker, official images. We know that they might go to 25% of poles on Docker hub and Docker hub provides you the widest choice and the best support that trusted content. We're talking to people about how to make this more helpful. 
We know, for example, that winter 69 four is just showing us as support, but the image doesn't yet tell you that we're working with canonical to improve messaging from specific images about left lifecycle and support. >>We know that you need more images, regularly updated free of vulnerabilities, easy to use and discover, and Donnie and Marie neuro, going to talk about that more this last year, the solar winds attack has been in the, in the news. A lot, the software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open-source components. We're seeing more software supply chain attacks being targeted as the supply chain, because it's often an easier place to attack and production software. We need to be able to use this external code safely. We need to, everyone needs to start from trusted sources like photography images. They need to scan for known vulnerabilities using Docker scan that we built in partnership with sneak and lost DockerCon last year, we need just keep updating base images and dependencies, and we'll, we're going to help you have the control and understanding about your images that you need to do this. >>And there's more, we're also working on the nursery V2 project in the CNCF to revamp container signings, or you can tell way or software comes from we're working on tooling to make updates easier, and to help you understand and manage all the principals carrier you're using security is a growing concern for all of us. It's really important. And we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products ever see without deep partnerships with our community to cloud is RA and the cloud providers aware most of you ship your occasion production and simple routes that take your work and deploy it easily. Reliably and securely are really important. Just get into production simply and easily and securely. And we've done a bunch of work on that. And, um, but we know there's more to do. >>The CNCF on the open source cloud native community are an amazing ecosystem of creators and lovely people creating an amazing strong community and supporting a huge amount of innovation has its roots in the container ecosystem and his dreams beyond that much of the innovation is focused around operate experience so far, but developer experience is really a growing concern in that community as well. And we're really excited to work on that. We also uses appraiser tool. Then we know you do, and we know that you want it to be easier to use in your environment. We just shifted Docker hub to work on, um, Kubernetes fully. And, um, we're also using many of the other projects are Argo from atheists. We're spending a lot of time working with Microsoft, Amazon right now on getting natural UV to ready to ship in the next few. That's a really detailed piece of collaboration we've been working on for a long term. Long time is really important for our community as the scarcity of the container containers and, um, getting content for you, working together makes us stronger. Our community is made up of all of you have. Um, it's always amazing to be reminded of that as a huge open source community that we already proud to work with. It's an amazing amount of innovation that you're all creating and where perhaps it, what with you and share with you as well. Thank you very much. And thank you for being here. 
>>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster and turn your application delivery into something that makes you a 10 X team. What we're hearing from you, the developers using Docker everyday fits across three common themes that we hear consistently over and over. We hear that your time is super important. It's critical, and you want to move faster. You want your tools to get out of your way, and instead to enable you to accelerate and focus on the things you want to be doing. And part of that is that finding great content, great application components that you can incorporate into your apps to move faster is really hard. It's hard to discover. It's hard to find high quality content that you can trust that, you know, passes your test and your configuration needs. >>And it's hard to create good content as well. And you're looking for more safety, more guardrails to help guide you along that way so that you can focus on creating value for your company. Secondly, you're telling us that it's a really far to collaborate effectively with your team and you want to do more, to work more effectively together to help your tools become more and more seamless to help you stay in sync, both with yourself across all of your development environments, as well as with your teammates so that you can more effectively collaborate together. Review each other's work, maintain things and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CGI pipeline, or the cloud for production, and you want that micro service to provide that consistent experience everywhere you go so that you have similar tools, similar environments, and you don't need to worry about things getting in your way, but instead things make it easy for you to focus on what you wanna do and what Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. >>And this collaborative application development platform consists of multiple different pieces. I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need from the development environment, to the container images, to the collaboration services, to the pipelines and integrations that enable you to focus on making your applications amazing and changing the world. If we start zooming on a one of those aspects, collaboration we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates. Look, then they can easily get up and running with the same applications, the same tooling, the same version of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially published something before they can effectively share it with others to work on it, to solve this. We're thrilled today to announce Docker, dev environments, Docker, dev environments, transform how your team collaborates. They make creating, sharing standardized development environments. 
As simple as a Docker poll, they make it easy to review your colleagues work without affecting your own work. And they increase the reproducibility of your own work and decreased production issues in doing so because you've got consistent environments all the way through. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more detail on Docker dev environments. >>Hi, I'm Ben. I work as a principal program manager at DACA. One of the areas that doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner loop where the inner loop is a better development, where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try and actually ship them out to production. Most amount of us build this flow and get there still leaves a lot of challenges. People need to jump between branches to look at each other's work. Independence. Dependencies can be different when you're doing that and doing this in this new hybrid wall of work. Isn't any easier either the ability to just save someone, Hey, come and check this out. It's become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason that development is hard when you're remote, is that looking at changes and what's going on requires more than just code requires all the dependencies and everything you've got set up and that complete context of your development environment, to understand what you're doing and solving this in a remote first world is hard. We wanted to look at how we could make this better. Let's do that in a way that let you keep working the way you do today. Didn't want you to have to use a browser. We didn't want you to have to use a new idea. And we wanted to do this in a way that was application centric. We wanted to let you work with all the rest of the application already using C for all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about docket developer environments, dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, working inside a container, then able to share and collaborate more than just the code. >>We want it to enable you to share your whole modern development environment, your whole setup from DACA, with your team on any operating system, we'll be launching a limited beta of dev environments in the coming month. And a GA dev environments will be ID agnostic and supporting composts. This means you'll be able to use an extend your existing composed files to create your own development environment in whatever idea, working in dev environments designed to be local. First, they work with Docker desktop and say your existing ID, and let you share that whole inner loop, that whole development context, all of your teammates in just one collect. This means if you want to get feedback on the working progress change or the PR it's as simple as opening another idea instance, and looking at what your team is working on because we're using compose. You can just extend your existing oppose file when you're already working with, to actually create this whole application and have it all working in the context of the rest of the services. 
>>So it's actually the whole environment you're working with module one service that doesn't really understand what it's doing alone. And with that, let's jump into a quick demo. So you can see here, two dev environments up and running. First one here is the same container dev environment. So if I want to go into that, let's see what's going on in the various code button here. If that one open, I can get straight into my application to start making changes inside that dev container. And I've got all my dependencies in here, so I can just run that straight in that second application I have here is one that's opened up in compose, and I can see that I've also got my backend, my front end and my database. So I've got all my services running here. So if I want, I can open one or more of these in a dev environment, meaning that that container has the context that dev environment has the context of the whole application. >>So I can get back into and connect to all the other services that I need to test this application properly, all of them, one unit. And then when I've made my changes and I'm ready to share, I can hit my share button type in the refund them on to share that too. And then give that image to someone to get going, pick that up and just start working with that code and all my dependencies, simple as putting an image, looking ahead, we're going to be expanding development environments, more of your dependencies for the whole developer worst space. We want to look at backing up and letting you share your volumes to make data science and database setups more repeatable and going. I'm still all of this under a single workspace for your team containing images, your dev environments, your volumes, and more we've really want to allow you to create a fully portable Linux development environment. >>So everyone you're working with on any operating system, as I said, our MVP we're coming next month. And that was for vs code using their dev container primitive and more support for other ideas. We'll follow to find out more about what's happening and what's coming up next in the future of this. And to actually get a bit of a deeper dive in the experience. Can we check out the talk I'm doing with Georgie and girl later on today? Thank you, Ben, amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications while the dev environment is like the workbench around what you're building. The application itself has all the different components, libraries, and frameworks, and other code that make up the application itself. And we hear developers saying all the time things like, how do they know if their images are good? >>How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Docker files and how do they keep their images secure? And up-to-date on every one of those ties into how do I create more trust? How do I know that I'm building high quality applications to enable you to do this even more effectively than today? We are pleased to announce the DACA verified polisher program. This broadens trusted content by extending beyond Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. 
>>Thank you, Ben. It's an amazing story how Docker is helping make developer teams more collaborative. Now I'd like to talk more about applications. While the dev environment is like the workbench around what you're building, the application itself has all the different components, libraries, frameworks, and other code that make up the application. And we hear developers asking all the time: how do they know if their images are good? How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Dockerfiles, and how do they keep their images secure and up to date? Every one of those ties into how do I create more trust, how do I know that I'm building high-quality applications? To enable you to do this even more effectively than today, we are pleased to announce the Docker Verified Publisher program. This broadens trusted content by extending beyond Docker Official Images to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect, because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. And finally, it simplifies your discovery of the best building blocks by making it easy for you to find things that you know you can trust, so that you can incorporate them into your applications and move on. On the right, you can see some examples of the publishers involved in Docker Official Images and our Docker Verified Publisher program. Now I'm pleased to introduce you to Marina Kubicki, our senior product manager, who will walk you through more about what we're doing to create a better experience for you around trust.

>>Thank you, Dani.

>>Mario Andretti, a famous Italian sports car driver, once said that if everything feels under control, you're just not driving fast enough. Now, Andretti is not a software developer, and as software developers we know that no matter how fast we need to go to drive the innovation we're working on, we can never allow our applications to spin out of control. At Docker, as we continue talking to developers, what we're realizing is that in order to reach that speed, the development community is looking for building blocks and tools that will enable them to drive at the speed they need to go, and for the trust in those building blocks and tools that they will be able to maintain control over their applications. As we think about the things we can do to address those concerns, we're realizing that we can pursue them in a number of different venues, including creating reliable content, creating partnerships that expand the options for that reliable content, and creating integrations with security tools.

>>Talking about reliable content, the first thing that comes to mind is Docker Official Images, a program that we launched several years ago. This is a set of curated, actively maintained, open source images that include operating systems, databases and programming languages, and it has become immensely popular for creating the base layers of different images and applications. What we're seeing is that many developers, instead of creating something from scratch, basically start with one of the official images as their base and then build on top of that. This program has become so popular that it now makes up a quarter of all Docker pulls, which ends up being several billion pulls every single month.

>>As we look beyond what we can do on the open source side of the spectrum, we are very excited to announce that we're launching the Docker Verified Publisher program, which continues to provide trust around content but now works with some of the industry leaders in multiple verticals across the entire technology spectrum, in order to provide you with more options for the images you can use to build your applications. And it still comes back to trust: when you are searching for content in Docker Hub and you see the verified publisher badge, you know that this content comes from one of our partners, and you're not running the risk of pulling a malicious image from an impostor source.
>>As we look beyond what we can do for providing reliable content, we're also looking at the tools and infrastructure we can provide to create security around the content that you're creating. Last year at DockerCon we announced our partnership with Snyk, and later in the year we launched Docker Desktop and Docker Hub vulnerability scans, which give you the option of running scans at multiple points in your dev cycle. In addition to providing you with information on the vulnerabilities in your code, they also provide guidance on how to remediate those vulnerabilities. But as we look beyond vulnerability scans, we're also looking at other things we can do to further ensure the integrity and security of your images. With that, later this year we're looking to launch scoped personal access tokens, and instead of talking about them, I will simply show you what they look like.

>>As you can see here, this is my page in Docker Hub, where I've created four tokens: read-write-delete, read-write, read-only, and public-repo read-only. Earlier today I logged in with my read-only token. When I go to pull an image, it allows me to pull the image, no problem, success. Then, in the next step, when I try to push an image into the same repo, what you see is that it gives me an error message saying that access is denied, because additional authentication is required. These are the things we're looking to add to our roadmap as we continue thinking about what we can do to provide additional content building blocks and tools to build trust, so that our Docker developers can ship code faster than Mario Andretti could ever imagine. Thank you.
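As a rough illustration of the token behavior demonstrated above, here is a minimal Python sketch that drives the standard Docker CLI commands. The username, token value, and repository are hypothetical placeholders, not values from the demo; only the CLI flags themselves are standard.

    # Hedged sketch of the scoped-token demo: a read-only token can pull but
    # not push. USER, READ_ONLY_TOKEN, and REPO are made-up placeholders.
    import subprocess

    USER = "marina-demo"                    # hypothetical Docker ID
    READ_ONLY_TOKEN = "dckr_pat_example"    # hypothetical read-only access token
    REPO = "marina-demo/sample-app:latest"  # hypothetical repository

    # Log in non-interactively, passing the token as the password.
    subprocess.run(
        ["docker", "login", "-u", USER, "--password-stdin"],
        input=READ_ONLY_TOKEN.encode(),
        check=True,
    )

    # Pull succeeds: the read-only scope allows it.
    subprocess.run(["docker", "pull", REPO], check=True)

    # Push is denied: the same token lacks write scope, so Docker Hub rejects
    # it with an "access denied / authentication required" error.
    result = subprocess.run(["docker", "push", REPO])
    print("push exit code:", result.returncode)  # non-zero, as in the demo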
>>Thank you, Marina. It's amazing what you can do to improve trusted content so that you can accelerate your development, move more quickly and more collaboratively, and build upon the great work of others. Finally, we hear over and over from developers working on their applications that they're looking for environments that are consistent, that are the same as production, and that they want their applications to really run anywhere: any environment, any architecture, any cloud. One great example is the recent announcement of Apple Silicon. We heard loudly from developers that they needed Docker to be available for that architecture before they could adopt it and be successful, and we listened. Based on that, we are pleased to share with you Docker Desktop on Apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying in Arm-based cloud environments with a consistent architecture across development and production, or using multi-architecture support, which enables your whole team to collaborate on its application using private repositories on Docker Hub. I'm thrilled to introduce you to Youi Cal, senior director for product management, who will walk you through more of what we're doing to create a great developer experience.

>>I'm a senior director of product management at Docker, and I'd like to jump straight into a demo. This is the Mac mini with the Apple Silicon processor, and I want to show you how you can now do an end-to-end Arm workflow from my M1 Mac mini to a Raspberry Pi. As you can see, we have VS Code and Docker Desktop installed on the Mac mini. I have a small example here: a Raspberry Pi 3 with an LED strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here builds the application. We build the image with the docker buildx command to make the image compatible with all Raspberry Pis using arm64. Part of this build runs with the native power of the M1 chip. I also add the push option to easily share the image with my team so they can give it a try.

>>Docker now creates the local image with the application and uploads it to Docker Hub. After we've built and pushed the image, we can go to Docker Hub and see the new image there. You can also explore a variety of images that are compatible with Arm processors. Now let's go to the Raspberry Pi. I have Docker already installed, and it's running 64-bit Ubuntu. With the docker run command I can run the application, and let's see what happens. You can see Docker is downloading the image automatically from Docker Hub, and when it's running, if it works right, there are some nice colors. And with that, we have an end-to-end workflow for Arm. We're continuing to invest in providing you a great developer experience that's easy to install and easy to get started with, as you saw in the demo. Whether you're interested in the new Mac mini or in developing for Arm platforms in general, we've got you covered with the same experience you've come to expect from Docker, with over 95,000 Arm images on Hub, including many Docker Official Images.

>>We think you'll find what you're looking for. Thank you again to the community that helped us test the tech previews; we're so delighted to hear folks say that the new Docker Desktop for Apple Silicon just works for them. But that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is very important. We're introducing Compose V2, which makes Compose a first-class citizen in the Docker CLI: you no longer need to install a separate compose binary in order to use Compose. Deploying to production is simpler than ever with the new Compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. And if you're interested in running slightly different services when you're debugging versus testing or just doing general development, you can manage that all in one place with the new Compose service profiles. To hear more about what's new in Docker Desktop, please join me in the 3:15 breakout session this afternoon.

>>And now I'd love to tell you a bit more about buildx and convince you to try it, if you haven't already. It's our next-gen build command, and it's no longer experimental. As shown in the demo, with buildx you'll be able to do multi-architecture builds and share those builds with your team and the community on Docker Hub. With buildx you can speed up your build processes with remote caches, or build all the targets in your Compose file in parallel with buildx bake. And there's so much more. If you're using Docker Desktop or Docker CE, you can use buildx. Check out Tonis's talk this afternoon at 3:45 to learn more about buildx.
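For readers who want to see roughly what the multi-architecture workflow above looks like in commands, here is a minimal Python sketch that drives the standard buildx CLI. The builder name, image tag, and Dockerfile location are assumptions for illustration; only the buildx flags themselves are standard.

    # Hedged sketch of the M1-to-Raspberry-Pi workflow demoed above.
    # The builder name and image tag are hypothetical.
    import subprocess

    IMAGE = "myuser/led-rainbow:latest"  # hypothetical Docker Hub repository

    def sh(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create (or reuse) a buildx builder that can target multiple platforms.
    sh(["docker", "buildx", "create", "--name", "demo-builder", "--use"])

    # Build for 64-bit Arm (Raspberry Pi) and x86-64 in one go, and push the
    # multi-arch image to Docker Hub so teammates and devices can pull it.
    sh([
        "docker", "buildx", "build",
        "--platform", "linux/arm64,linux/amd64",
        "-t", IMAGE,
        "--push",
        ".",
    ])

    # On the Raspberry Pi, running the app is then just:
    #   docker run --rm myuser/led-rainbow:latest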
>>And with that, I hope everyone has a great DockerCon, and back over to you, Donnie.

>>Thank you, Youi. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work. Finally, I'd like to wrap up by showing you everything we've announced today and everything we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker Verified Publisher program. We've announced scoped personal access tokens to make it easier for you to have a secure CI pipeline. We've announced Docker dev environments to improve your collaboration with your team. We shared with you Docker Desktop on Apple Silicon, to make sure that Docker runs everywhere you need it to run. And we've announced Docker Compose version two, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well, from audit logs to advanced image management to Compose service profiles, to improve where and how easily you can run Docker.

>>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain, which only grows more important as time goes on. We're going to be optimizing your update experience to make sure you can easily understand the current state of your application and all its components, and keep them all current without worrying about breaking everything as you do so. We're going to make it easier for you to synchronize your work using cloud sync features. We're going to improve collaboration through dev environments and beyond, and we're going to make it easy for you to run your microservices in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better. And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker.

>>I'm Matt Falk, the head of engineering at Orbital Insight, and today I want to talk to you a little bit about data from space. So who am I? Like many of you, I'm a software developer; I've been a software developer at about seven companies so far, and now I'm a head of engineering, so I spend most of my time in meetings, but occasionally I'll still spend time in design discussions and code reviews. In my free time, I still like to dabble in things like Project Euler. So who is Orbital Insight, and what do we do? Orbital Insight is a large data supplier and analytics provider: we take geospatial data from anywhere on the planet, from any overhead sensor, and translate it into insights for the end customer. Specifically, we have a suite of high-performance artificial intelligence and machine learning analytics that run on this geospatial data.

>>We build them specifically to determine natural and human surface-level activity anywhere on the planet. What that really means is that we take any type of data associated with a latitude and longitude, and we identify patterns so that we can detect anomalies. Everything we do is about identifying those patterns to detect anomalies. So, more specifically, what type of problems do we solve?
>>Supply chain intelligence. This is one of the use cases we like to talk about a lot; it's one of the primary verticals we go after right now, and as Scott mentioned earlier, it had a huge impact last year when COVID hit. Specifically, supply chain intelligence is all about identifying movement patterns to and from operating facilities in order to identify changes in those supply chains. How do we do this? For us, we can do things like track the movement of trucks.

>>So, identifying trucks moving from one location to another in aggregate. We can do the same thing with foot traffic, looking at aggregate groups of people moving from one location to another and analyzing their patterns of life. We can look at two different locations to determine how people move from one to the other, or go back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying changes to that supply chain. As I said, last year with COVID everything changed, and supply chains in particular changed incredibly. It was hugely important for customers to know where their goods or products were coming from and where they were going, where there were disruptions in their supply chain, and how that was affecting their overall supply and demand. So, using our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are coming from or going to.

>>So what does our team look like? My team is currently about 50 engineers, spread across four different teams, structured like this. The first team is infrastructure engineering, and this team largely deals with deploying our Docker images using Kubernetes. This team is all about taking Docker images built by other teams, sometimes building the images themselves, and putting them into our production system. Our platform engineering team produces the microservices: they produce microservice Docker images, they develop and test with them locally, their entire environments are dockerized, and they hand those images over to infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing: they develop and test with Docker locally, and they also produce a suite of Docker images that the infrastructure team can then deploy. And lastly, we have our R&D team, and this team specifically produces machine learning algorithms using NVIDIA Docker. Collectively, we've actually built 381 Docker repositories and had 14 million Docker pulls over the lifetime of the company, just a few stats about us.

>>What I'm really getting at here is that you can see Docker images becoming almost a form of communication between these teams. One of the paradigms in software engineering that you're probably familiar with is encapsulation: for a lot of software engineering problems it's really helpful to break the problem down, isolate the different pieces of it, and start building interfaces between the code. This allows you to scale different pieces of the platform or different pieces of your code in different ways: you can scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things we can largely do now is use Docker images as that interface.
>>So, instead of having an entire platform where all teams are talking to each other and everything is mishmashed into a monolithic application, we can now say this team is only able to talk to that team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to that team. That really allows us to scale our development and be much more efficient.

>>Also, I'd like to say we are hiring. We have about 30 open roles in our engineering team that we're looking to fill by the end of this year, so if any of this sounds interesting to you, please reach out after the presentation.

>>So what does our platform do, really? Our platform allows you to answer any geospatial question, and we do this with three different inputs. First off, where do you want to look? We do this with what we call an AOI, or area of interest; you can think of it as a polygon drawn on the map. We have a curated data set of almost 4 million AOIs, which you can search and use for your analysis, but you're also free to build your own. The second question is what you want to look for. We do this with the more interesting part of our platform, our machine learning and AI capabilities. We have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and which locations people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button, and that's how you determine what you want to look for.

>>Lastly, you determine when you want to find what you're looking for. Do you want to look at the next three hours? The last week? Do you want to look every month for the past two years? Whatever the time cadence is, you decide, you hit go, and out pops a time series. That time series tells you, for the place you wanted to look and the thing you wanted to look for, how many, or what percentage, of that thing appears in that area over time. Again, we do all of this to work towards patterns: we use all this data to produce a time series, and from there we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chain, this is extremely valuable for identifying where things change. So we can answer these questions looking at a particular operating facility: what is happening with the level of activity at that facility, where people are coming from and where they're going after visiting it, and when and where that changes. Here you can see a picture of our platform; it's actually showing all the devices in Manhattan over a period of time, in more of a heat-map view, so you can see the hotspots in the area.

>>So really, and this is the heart of the talk, what happened in 2020? For me, like many of you, 2020 was a difficult year. COVID hit, and that changed a lot of what we're doing, not just from an engineering perspective but from an entire company perspective. For us, the motivation really became making sure that we were lowering our costs and increasing innovation simultaneously. Now, those two things often compete with each other.
A lot of times, when you want to increase innovation, that's going to increase your costs, but the challenge last year was how to do both simultaneously. So here are a few stats from our team. In Q1 of last year, we were spending almost $600,000 per month on compute costs. Prior to COVID happening, that wasn't a huge concern for us; it was a lot of money, but it wasn't as critical as it became last year, when we really needed to be much more efficient.

>>The second one is flexibility. We were deployed on a single cloud environment; we were cloud-ready, and that was great, but we wanted to be more flexible. We wanted to be on more cloud environments so that we could reach more customers, and eventually get onto classified networks, extending our customer base as well. From a custom analytics perspective, this is where we get into our traction: last year, over the entire year, we computed 54,000 custom analytics for different users. We wanted to make sure that number kept steadily increasing despite us trying to lower our costs; we didn't want lowering costs to come at the sacrifice of our user base. Lastly, a particular percentage here that I'll say definitely needed to be improved: 75% of our projects never fail. This is where we start to get into the stability of our platform.

>>Now, I'm not saying that 25% of our projects fail outright. The way we measure this is that if you have a particular project or computation that runs every day and any one of those runs fails, we count that as a failure, because from an end-user perspective that's an issue. So this is something we knew we needed to improve on to make our platform more stable, and it's something we really focused on last year. So where are we now? Coming out of the COVID valley, we are starting to soar again. Back in April of last year, we actually paused all development for about four weeks and had the entire engineering team focused on reducing our compute costs in the cloud. We got it down to $200K over the period of a few months.

>>And for the next 12 months, we hit that number every month. This is huge for us; this is extremely important, like I said, in a COVID time period where cost and operating efficiency were everything. For us to do that was a huge accomplishment last year, and something we'll keep doing going forward. One thing I would actually like to highlight here, too, is what allowed us to do that. First off, being in the cloud and being able to migrate things like that was one piece, and we were able to use the different cloud services in a more efficient way. We had very detailed tracking of how we were spending, we improved our data retention policies, and we optimized our processing. However, one additional piece was switching to new technologies; in particular, we migrated to GitLab CI/CD.

>>And because we use Docker, this was extremely, extremely easy. We didn't have to go build new containers or repositories or change our code in order to do this; we were simply able to migrate the containers over and start using the new CI/CD system. In fact, we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds: over the last nine months we've been able to stand up and operate in a second cloud environment.
>>And again, this is something Docker helped with incredibly. We didn't have to go and build all-new interfaces to all the different services or tools in the next cloud provider. All we had to do was build a base cloud infrastructure that abstracts away all the different details of the cloud provider.

>>And then our Docker images just worked: we could move them to another environment and have them up and running, and our platform was ready to go. From a traction perspective, we're about a third of the way through the year, and at this point we've already exceeded the amount of customer analytics we produced last year. This is thanks to a ton more algorithms, that whole suite of new analytics we've been able to build over the past 12 months and will continue to build going forward. So this is a really great outcome for us, because we were able to show that our costs are staying down while our analytics and our customer traction keep growing. From a stability perspective, we improved from 75% to 86%, not yet 99% or three or four nines, but we're getting there. And this is actually thanks to really containerizing and modularizing different pieces of our platform so that we could scale up in different areas. That allowed us to increase stability: this piece of the code works over here and talks through an interface to the rest of the system, we can scale this piece up separately from the rest of the system, and that lets us much more easily identify issues, fix them, and then correct the system overall. So basically this is a summary of where we were last year, where we are now, and how much more successful we are now because of the issues we went through last year, largely brought on by COVID.

>>This is just a screenshot of our solution actually working on supply chain. In particular, it is showing traceability for a distribution warehouse in Salt Lake City, right in the center of the screen; you can see the nice kind of orange-red center. That's the distribution warehouse, and all the lines and dots outside of that are showing where people and trucks are moving from that location. This is really helpful for supply chain companies, because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.
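The platform walkthrough earlier in this talk comes down to turning aggregated movement counts into a time series and then flagging anomalies against the usual pattern. Here is a generic, minimal Python sketch of that idea; it is not Orbital Insight's actual code or API, and the synthetic data, window size, and threshold are illustrative assumptions.

    # Generic sketch: flag anomalies in a daily activity time series using a
    # rolling mean and standard deviation (a simple z-score rule).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    days = pd.date_range("2020-01-01", periods=120, freq="D")
    counts = rng.poisson(lam=100, size=len(days)).astype(float)
    counts[80:90] *= 0.4  # simulate a supply-chain disruption at a facility

    series = pd.Series(counts, index=days, name="truck_visits")

    rolling_mean = series.rolling(window=14, min_periods=14).mean()
    rolling_std = series.rolling(window=14, min_periods=14).std()
    z = (series - rolling_mean) / rolling_std

    # Days that deviate strongly from the recent pattern are the anomalies
    # an analyst (or an alert) would investigate.
    anomalies = series[z.abs() > 3]
    print(anomalies)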

Published Date : May 27 2021


Bratin Saha, Amazon | AWS re:Invent 2020


 

>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS.

>>Welcome back to theCUBE's ongoing coverage of AWS re:Invent virtual. theCUBE has gone virtual too, and continues to bring our digital coverage of events across the globe. It's been a big couple of weeks at re:Invent, and a big week for machine intelligence, machine learning, AI, and new services for customers. With me to discuss the trends in this space is Bratin Saha, who is the vice president and general manager of machine learning services at AWS. Bratin, great to see you. Thanks for coming on theCUBE.

>>Thank you, Dave. Thank you for having me.

>>You're very welcome. Let's get right into it. I remember when SageMaker was announced in 2017; it was really a seminal moment in the whole machine learning space. But take us through the journey over the last few years. What can you tell us?

>>When we came out with SageMaker, customers were telling us that machine learning is hard, and it was only a few large organizations that could truly deploy machine learning at scale. We released SageMaker in 2017, and we have seen really broad adoption of SageMaker across the entire spectrum of industries. Today, the vast majority of machine learning in the cloud happens on AWS; in fact, AWS has more than twice as much machine learning as any other provider, and we saw this morning that more than 90% of the TensorFlow in the cloud and more than 92% of the PyTorch in the cloud happens on AWS. What happened is that customers saw it was much easier to do machine learning once they were using tools like SageMaker.

>>So many customers started by applying a handful of models, and they started to see that they were getting real business value. Machine learning was no longer a niche, no longer a fictional thing; it was something they were getting real business value from, and then it started to proliferate across their use cases. These customers went from deploying tens of models to deploying hundreds and thousands of models; we have one customer that is deploying more than a million models. That is what we have seen: SageMaker really making machine learning broadly accessible to our customers.

>>Yeah. So you probably went through the experimentation phase very quickly, people got the aha moments, and adoption went through the roof. What kind of patterns have you seen in the way people are using data, and maybe some of the problems and challenges that has created for organizations that they've asked you to help them rectify?

>>Yes. In fact, SageMaker is today one of the fastest growing services in AWS history. What we have seen happen is that as customers scaled out their machine learning deployments, they asked us to help them solve the issues that come up when you deploy machine learning at scale. One of the things that happens when you're doing machine learning is that you spend a lot of time preparing the data, cleaning the data, making sure the data is right, so it can train your models. And customers wanted to be able to do that data prep in the same service in which they were doing machine learning.
And hence we launched SageMaker Data Wrangler, where with a few clicks you can connect a variety of data stores, AWS data stores or third-party data stores, and do all of your data preparation.

>>Now, once you've done your data preparation, customers wanted to be able to store that data, and that's why we came out with SageMaker Feature Store. Then customers wanted to be able to take this entire end-to-end pipeline and automate the whole thing, and that is why we came up with SageMaker Pipelines. One of the things customers have asked us to help them address is the issue of statistical bias and explainability, and so we released SageMaker Clarify, which helps customers look at statistical bias across the entire machine learning workflow: when you're doing data processing, before you train your model, and even after you have deployed your model. It also gives you insights into why your model is behaving in a particular way. And then, beyond machine learning in the cloud, many customers have started deploying machine learning at the edge. They want to be able to deploy these models at the edge, and they wanted a solution that says: can I take all of these machine learning capabilities that I have in the cloud, specifically the model management and the MLOps capabilities, and deploy them to edge devices?

>>That is why we launched SageMaker Edge Manager. And then customers said, you know, we still need the basic functionality of training and so on to be faster. So we released a number of enhancements to SageMaker distributed training, in terms of new data parallelism and model parallelism libraries that give the fastest training times on SageMaker across both frameworks. And that is one of the key things we have at AWS: we give customers choice; we don't force them onto a single framework.

>>Okay, great. I think we hit them all except, I don't know if you talked about SageMaker Debugger, but we will. So I want to come back and ask you a couple of questions about these features. It's funny, sometimes people make fun of your names, but I like them, because they say what the service does. People tell me that they spend all their time wrangling data, so you have Data Wrangler; it's all about transformation and cleaning, because you don't want to spend 80% of your time wrangling data, you want to spend 80% of your time driving insights and monetization. So how does one engage with Data Wrangler, and how do you see the possibilities there?

>>So Data Wrangler is part of SageMaker Studio. SageMaker Studio was the world's first fully integrated development environment for machine learning. You come to SageMaker Studio, you have a tab there which is SageMaker Data Wrangler, and then you have a visual UI. With that visual UI, with just a single click, you can connect to AWS data stores like Redshift or Athena, or third-party data stores like Snowflake and Databricks, with MongoDB coming. Then you have a set of built-in data transformations for machine learning, so you take that data and do some interactive processing. Once you're happy with the results, you can just send it off as an automated data pipeline job. It's really the easiest and fastest way today to do machine learning data prep, and it really takes out that 80% you were talking about.
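Data Wrangler itself is driven from the Studio UI, but the kind of managed data-prep job it ultimately hands off can be sketched with the SageMaker Python SDK. The bucket names, the preprocess.py script, the framework version, and the instance settings below are assumptions for illustration, not details from the interview.

    # Hedged sketch of a scripted SageMaker data-prep (processing) job, the
    # sort of step a visual Data Wrangler flow automates. Names are made up.
    import sagemaker
    from sagemaker.sklearn.processing import SKLearnProcessor
    from sagemaker.processing import ProcessingInput, ProcessingOutput

    role = sagemaker.get_execution_role()  # assumes a SageMaker execution context

    processor = SKLearnProcessor(
        framework_version="0.23-1",
        role=role,
        instance_type="ml.m5.xlarge",
        instance_count=1,
    )

    # preprocess.py (not shown) would hold the cleaning/transformation steps
    # assembled interactively.
    processor.run(
        code="preprocess.py",
        inputs=[ProcessingInput(source="s3://my-bucket/raw/",
                                destination="/opt/ml/processing/input")],
        outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                                  destination="s3://my-bucket/prepared/")],
    )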
>>Why has it been so hard to automate these pipelines, to bring CI/CD to data pipelines? Why has that been such a challenge, and how did you resolve it?

>>What has happened is that machine learning deals with both code and data, unlike software, which really only has to deal with code. We had CI/CD tools for software, but someone needed to extend them to operate on both data and code. At the same time, you want to provide reproducibility, lineage and trackability, and getting that whole end-to-end system to work across code and data, across multiple capabilities, was what made it hard. That is where we brought in SageMaker Pipelines, to make this easy for our customers.

>>Got it, thank you. Then let me ask you about Clarify. This is a huge issue in machine intelligence: humans, by their very nature, have biases, and the models they build have bias in them, so you're bringing transparency there. The other problem with AI, and I'm not sure you're solving this one, but please clarify if you are, no pun intended, is that AI is a black box: I don't know how we got to the answer. It seems like you're attacking that, bringing more transparency and really trying to deal with the biases. I wonder if you could talk about how you do that and how people can expect it to affect their operations.

>>I'm glad you asked this question, because customers have asked us about this too, and SageMaker Clarify is really intended to address the questions you brought up. First, it gives you the tools to run a lot of statistical analysis on the data set you started with. Let's say you were creating a model for loan approvals, and you want to make sure that you have an equal number of male applicants and female applicants, and so on. SageMaker Clarify lets you run these kinds of analyses to make sure that your data set is balanced to start with. Once that happens, you train the model, and once you've trained the model, you want to make sure the training process did not introduce any unintended statistical bias. So you can use Clarify again to ask: is the model behaving in the way I expected it to behave, based on the training data I had?

>>Let's say that in your training data set 50% of the male applicants got their loans approved. After training, you can use Clarify to ask: does this model actually predict that 50% of male applicants will get approved? If it's much more or much less, you know you have a problem. And then after that we get to the problem you mentioned, which is how do we unravel the black-box nature of this? We took the first steps last year with Autopilot, where we actually generate notebooks, but SageMaker Clarify makes it much better, because it tells you why your model is predicting the way it's predicting. It gives you the reasons: here's why the model predicted that you would be approved for a loan, and here's why the model said you may not get a loan. So it makes things easier, it gives visibility and transparency, and it helps convert the insights you get from model predictions into actionable insights, because you now know why the model is predicting what it's predicting.

>>That brings up the confidence level. Okay, thank you for that.
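As a rough, generic illustration of the pre- and post-training checks described in this exchange (not Clarify's actual implementation), the two loan-approval checks come down to simple per-group statistics. The tiny data frame below is made up for illustration.

    # Generic sketch of the bias checks discussed above, using pandas only.
    import pandas as pd

    df = pd.DataFrame({
        "sex":       ["M", "M", "F", "F", "M", "F", "M", "F"],
        "approved":  [1,   0,   1,   0,   1,   0,   1,   1],   # ground-truth labels
        "predicted": [1,   1,   0,   0,   1,   0,   1,   0],   # model output
    })

    # Pre-training check: is the data set balanced across the facet at all?
    print(df["sex"].value_counts(normalize=True))

    # Pre-training check: do approval rates in the labels differ by facet?
    label_rates = df.groupby("sex")["approved"].mean()
    print("label approval rates:\n", label_rates)

    # Post-training check: does the model's approval rate per facet track the
    # label approval rate, or did training introduce additional skew?
    pred_rates = df.groupby("sex")["predicted"].mean()
    print("predicted approval rates:\n", pred_rates)
    print("difference (predicted - label):\n", pred_rates - label_rates)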
Let me ask you about distributed training on SageMaker. Help us understand what problem you're solving. You're injecting auto-parallelism; is that about scale? Help us understand that.

>>Yeah. One of the things that's happening is that our customers are starting to train really large models. Three years back they would train models with around 20 million parameters; last year they would train models with a couple of hundred million parameters; now customers are actually training models with billions of parameters. When you have such large models, training can take days and sometimes weeks. So what we have done here involves two concepts. One is that we introduced a way of taking a model and training it in parallel on multiple GPUs; that's what we call the data parallel implementation, and we have our own custom libraries for this, which give you the fastest performance on AWS. The other thing that happens is that customers take some of these models that are really large, with billions of parameters, and we showed one of them today called T5, and these models are so big that they cannot fit in the memory of a single GPU. Today, to train such a model, customers spend weeks of effort trying to parallelize it. What we introduced in SageMaker today is a mechanism that automatically takes these large models and distributes them across multiple GPUs, the auto-parallelization you were talking about, making it much easier and much faster for customers to work with these big models.

>>Well, the GPU is a very expensive resource, and prior to this you would have the GPU waiting, waiting, waiting, load me up. You don't want to do that with expensive resources. Yeah.

>>And one of the things I mentioned before is SageMaker Debugger. One of the things we also came out with today is the SageMaker profiler, which is now part of the Debugger, and it lets you look at your GPU utilization, your CPU utilization, your network utilization and so on. So now you can see, once your training job has started, at which point the GPU utilization has gone down, and you can go in and fix it. This really lets you utilize your resources much better, ultimately reducing your cost of training and making it more efficient. Awesome.
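To ground the distributed-training discussion, here is a minimal sketch of launching a data-parallel training job with the SageMaker Python SDK. The training script, instance type and count, framework versions, and S3 paths are illustrative assumptions rather than details from the interview.

    # Hedged sketch: a SageMaker data-parallel PyTorch training job.
    import sagemaker
    from sagemaker.pytorch import PyTorch

    role = sagemaker.get_execution_role()  # assumes a SageMaker execution context

    estimator = PyTorch(
        entry_point="train.py",          # your PyTorch training script (assumed)
        role=role,
        framework_version="1.8.1",
        py_version="py36",
        instance_count=2,                # scale out across machines
        instance_type="ml.p3.16xlarge",  # multi-GPU instances
        # Enable the SageMaker data parallelism library described above.
        distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    )

    estimator.fit({"training": "s3://my-bucket/training-data/"})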
>>Let's talk about Edge Manager, because Andy Jassy's keynote was interesting where he talks about hybrid, and his vision, Amazon's vision, is basically: we want to bring AWS to the edge, and we see the data center as just another edge node. To me this is another example of AWS's edge strategy. Talk about how that works in practice. How does it work? Am I doing inference at the edge and then bringing data back into the cloud? Am I doing things locally?

>>Yes. What SageMaker Edge Manager does is help you deploy and manage models at the edge; the inference is happening on the edge device. Consider this case: Lenovo has been working with us, and what Lenovo wants to do is take these models and do predictive maintenance on laptops. Say you're an IT shop and you have a couple of hundred thousand laptops; you would want to know when something may go down. So they deploy these predictive maintenance models on the laptops, and the models are doing inference locally on the laptop, but you want to see whether the models are getting degraded and whether the quality is holding up. So what Edge Manager does is, number one, take your models and optimize them so they can run on an edge device, where we get up to a 25x benefit; and then, once you've deployed a model, it helps you monitor the quality of the models by letting you upload data samples to SageMaker, so that you can see if there is drift in your models or any other degradation.

>>All right. And JumpStart is where I go, it's kind of the portal I go to, to access all these cool tools. Is that right?

>>Yep. We have a lot of getting-started material, lots of first-party models, lots of open source models and solutions.

>>We're probably out of time, but I could go on forever. Thanks so much for bringing this knowledge to theCUBE audience. Really appreciate your time.

>>Thank you. Thank you, Dave, for having me.

>>You're very welcome, and good luck with the announcements. And thank you for watching, everybody. This is Dave Vellante for theCUBE, and our coverage of AWS re:Invent 2020 continues right after this short break.

Published Date : Dec 10 2020


December 8th Keynote Analysis | AWS re:Invent 2020


 

>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners.

>>Hi everyone, welcome back to theCUBE's virtual coverage of AWS re:Invent 2020. We are theCUBE virtual. I'm John Furrier, your host, with my co-host Dave Vellante, here for keynote analysis of Swami's machine learning keynote: all things data, and a huge set of announcements, the first-ever machine learning keynote at a re:Invent. Dave, great to see you. You're in Boston, I'm here in Palo Alto, and we're doing theCUBE remote, theCUBE virtual. Great to see you.

>>Yeah, good to be here, John, as always. Wall-to-wall, love it. So, John, how about I give you my key highlights from the keynote today. I had four curated takeaways. The first is that AWS is really trying to simplify machine learning and infuse machine intelligence into all applications. If you think about it, that's good news for organizations, because they don't have to become machine learning experts or invent machine learning; they can buy it from Amazon. I think the second is that they're trying to simplify the data pipeline. The data pipeline today is characterized by a series of hyper-specialized individuals: data engineers, data scientists, quality engineers, analysts, developers. These are folks that largely live in their own swim lanes, and while they collaborate, there's still a fairly linear and complicated data pipeline that a business person or a data product builder has to go through, and Amazon is making some moves on that front to simplify it. Third, they're expanding data access to the line of business, and I think that's a key point, because increasingly, as people build data products and data services that they can monetize for their business, either cutting costs or generating revenue, they can push that into lines of business where there's domain context. And I think the last thing is this theme we talked about the other day, John, of extending Amazon and AWS to the edge; we saw that as well in a number of the machine learning tools Swami talked about.

>>Yeah, it was great. By the way, we're live here in Palo Alto and Boston covering the analysis; there's tons of content on theCUBE, so check out thecube.net, and also check out the re:Invent site, where there's a cube section with links to on-demand videos of all the content we've had. Dave, I've got to say, one of the things that's apparent to me, and this came out of my one-on-one with Andy Jassy and what he talked about in his keynote, is that he kind of teased out this idea of training versus more value-add machine learning, and you saw that in today's announcements. To me, the big revelation was that the training aspect of machine learning is what can be automated away, and there's a lot of controversy around it: recently a Google paper came out, and the author was essentially let go over it.

>>The idea is that these training algorithms, some are saying, cause more harm to the environment than they do good, because of all the compute power they take. So you start to see the positioning of training, which can be automated away and served up with high-powered chips, as what they consider undifferentiated heavy lifting. In my opinion, they didn't say that, but that's clearly what I see coming out of this announcement.
The other thing I saw, Dave, that's notable is that you saw them clearly taking a three-lane approach to machine learning: the advanced builders, the advanced coders and developers, and then the database and data analysts, three swim lanes of personas, of target audiences. Clearly that is in line with SageMaker and the embedded stuff. So, two big revelations: more horsepower required to process training and modeling, and the expansion of the personas that are going to be using machine learning. To me this is a big trend wave we're seeing that validates some of the startups, and of course SageMaker and some of their products.

>>Well, as I was saying at the top, I think Amazon's really working hard on simplifying the whole process. You mentioned training, and a lot of times people are starting from scratch when they have to train and retrain models, so what they're doing is trying to create reusable components and allow people, as you pointed out, to automate and streamline some of that heavy lifting. As well, they talked a lot about doing AI inferencing at the edge. Swami talked about several foundational premises, the first being a foundation of frameworks, and you think about that at the lowest level of their ML stack: they've got GPUs, different processors, Inferentia, all these alternative processors, not just x86. These are very expensive resources, and Swami and his colleagues talked a lot about how, a lot of the time, the alternative processor is sitting there waiting, waiting, waiting. So they're really trying to drive efficiency and speed. They talked a lot about compressing the time it takes to run these models, sometimes from weeks down to days, and from days down to hours and minutes.

>>Yeah, let's unpack these four areas. Let's stay on the firm foundation, because that's their core competency, infrastructure as a service; clearly they're laying that down. You mentioned the processors, but what's interesting is TensorFlow: 92% of cloud TensorFlow is on Amazon. The other thing is that PyTorch, surprisingly, is right up there with massive adoption; the numbers on PyTorch are literally on fire, I was joking about it on Twitter coming in. PyTorch is telling, because it means TensorFlow, which originally came out of Google, is getting a little bit diluted by other frameworks, and then you've got MXNet and some other things out there. So the fact that you've got PyTorch at 91% and TensorFlow at 92% on AWS is a huge validation. That means the majority of machine learning and deep learning development is happening on AWS.

>>Yeah, cloud-based, by the way, just to clarify: that's 90-plus percent of cloud-based TensorFlow and 91% of cloud-based PyTorch running on AWS. Amazingly massive numbers.

>>Yeah. And I think the processors show that it's not trivial to do machine learning, and that's where the Inferentia chip came in; that's kind of where they want to lay down that foundation. They had Trainium, they had Inferentia on the chip side, and then distributed training on SageMaker.
So you got the chip and then you've got Sage makers, the middleware games, almost like a machine learning stack. That's what they're putting out there >>And how bad a Gowdy, which was, which is, which is a patrol also for training, which is an Intel based chip. Uh, so that was kind of interesting. So a lot of new chips and, and specialized just, we've been talking about this for awhile, particularly as you get to the edge and do AI inferencing, you need, uh, you know, a different approach than we're used to with the general purpose microbes. >>So what gets your take on tenant? Number two? So tenant number one, clearly infrastructure, a lot of announcements we'll go through those, review them at the end, but tenant number two, that Swami put out there was creating the shortest path to success for builders or machine learning builders. And I think here you lays out the complexity, Dave butts, mostly around methodology, and, you know, the value activities required to execute. And again, this points to the complexity problem that they have. What's your take on this? >>Yeah. Well you think about, again, I'm talking about the pipeline, you collect data, you just data, you prepare that data, you analyze that data. You, you, you make sure that it's it's high quality and then you start the training and then you're iterating. And so they really trying to automate as much as possible and simplify as much as possible. What I really liked about that segment of foundation, number two, if you will, is the example, the customer example of the speaker from the NFL, you know, talked about, uh, you know, the AWS stats that we see in the commercials, uh, next gen stats. Uh, and, and she talked about the ways in which they've, well, we all know they've, they've rearchitected helmets. Uh, they've been, it's really a very much database. It was interesting to see they had the spectrum of the helmets that were, you know, the safest, most safe to the least safe and how they've migrated everybody in the NFL to those that they, she started a 24%. >>It was interesting how she wanted a 24% reduction in reported concussions. You know, you got to give the benefit of the doubt and assume some of that's through, through the data. But you know, some of that could be like, you know, Julian Edelman popping up off the ground. When, you know, we had a concussion, he doesn't want to come out of the game with the new protocol, but no doubt, they're collecting more data on this stuff, and it's not just head injuries. And she talked about ankle injuries, knee injuries. So all this comes from training models and reducing the time it takes to actually go from raw data to insights. >>Yeah. I mean, I think the NFL is a great example. You and I both know how hard it is to get the NFL to come on and do an interview. They're very coy. They don't really put their name on anything much because of the value of the NFL, this a meaningful partnership. You had the, the person onstage virtually really going into some real detail around the depth of the partnership. So to me, it's real, first of all, I love stat cast 11, anything to do with what they do with the stats is phenomenal at this point. So the real world example, Dave, that you starting to see sports as one metaphor, healthcare, and others are going to see those coming in to me, totally a tale sign that Amazon's continued to lead. The thing that got my attention was is that it is an IOT problem, and there's no reason why they shouldn't get to it. 
I mean, some say that, Oh, concussion, NFL is just covering their butt. They don't have to, this is actually really working. So you got the tech, why not use it? And they are. So that, to me, that's impressive. And I think that's, again, a digital transformation sign that, that, you know, in the NFL is doing it. It's real. Um, because it's just easier. >>I think, look, I think, I think it's easy to criticize the NFL, but the re the reality is, is there anything old days? It was like, Hey, you get your bell rung and get back out there. That's just the way it was a football players, you know, but Ted Johnson was one of the first and, you know, bill Bellacheck was, was, you know, the guy who sent him back out there with a concussion, but, but he was very much outspoken. You've got to give the NFL credit. Uh, it didn't just ignore the problem. Yeah. Maybe it, it took a little while, but you know, these things take some time because, you know, it's generally was generally accepted, you know, back in the day that, okay, Hey, you'd get right back out there, but, but the NFL has made big investments there. And you can say, you got to give him, give him props for that. And especially given that they're collecting all this data. That to me is the most interesting angle here is letting the data inform the actions. >>And next step, after the NFL, they had this data prep data Wrangler news, that they're now integrating snowflakes, Databricks, Mongo DB, into SageMaker, which is a theme there of Redshift S3 and Lake formation into not the other way around. So again, you've been following this pretty closely, uh, specifically the snowflake recent IPO and their success. Um, this is an ecosystem play for Amazon. What does it mean? >>Well, a couple of things, as we, as you well know, John, when you first called me up, I was in Dallas and I flew into New York and an ice storm to get to the one of the early Duke worlds. You know, and back then it was all batch. The big data was this big batch job. And today you want to combine that batch. There's still a lot of need for batch, but when people want real time inferencing and AWS is bringing that together and they're bringing in multiple data sources, you mentioned Databricks and snowflake Mongo. These are three platforms that are doing very well in the market and holding a lot of data in AWS and saying, okay, Hey, we want to be the brain in the middle. You can import data from any of those sources. And I'm sure they're going to add more over time. Uh, and so they talked about 300 pre-configured data transformations, uh, that now come with stage maker of SageMaker studio with essentially, I've talked about this a lot. It's essentially abstracting away the, it complexity, the whole it operations piece. I mean, it's the same old theme that AWS is just pointing. It's its platform and its cloud at non undifferentiated, heavy lifting. And it's moving it up the stack now into the data life cycle and data pipeline, which is one of the biggest blockers to monetizing data. >>Expand on that more. What does that actually mean? I'm an it person translate that into it. Speak. Yeah. >>So today, if you're, if you're a business person and you want, you want the answers, right, and you want say to adjust a new data source, so let's say you want to build a new, new product. Um, let me give an example. Let's say you're like a Spotify, make it up. 
And, and you do music today, but let's say you want to add, you know, movies, or you want to add podcasts and you want to start monetizing that you want to, you want to identify, who's watching what you want to create new metadata. Well, you need new data sources. So what you do as a business person that wants to create that new data product, let's say for podcasts, you have to knock on the door, get to the front of the data pipeline line and say, okay, Hey, can you please add this data source? >>And then everybody else down the line has to get in line and Hey, this becomes a new data source. And it's this linear process where very specialized individuals have to do their part. And then at the other end, you know, it comes to self-serve capability that somebody can use to either build dashboards or build a data product. In a lot of that middle part is our operational details around deploying infrastructure, deploying, you know, training machine learning models that a lot of Python coding. Yeah. There's SQL queries that have to be done. So a lot of very highly specialized activities, what Amazon is doing, my takeaway is they're really streamlining a lot of those activities, removing what they always call the non undifferentiated, heavy lifting abstracting away that it complexity to me, this is a real positive sign, because it's all about the technology serving the business, as opposed to historically, it's the business begging the technology department to please help me. The technology department obviously evolving from, you know, the, the glass house, if you will, to this new data, data pipeline data, life cycle. >>Yeah. I mean, it's classic agility to take down those. I mean, it's undifferentiated, I guess, but if it actually works, just create a differentiated product. So, but it's just log it's that it's, you can debate that kind of aspect of it, but I hear what you're saying, just get rid of it and make it simpler. Um, the impact of machine learning is Dave is one came out clear on this, uh, SageMaker clarify announcement, which is a bias decision algorithm. They had an expert, uh, nationally CFUs presented essentially how they're dealing with the, the, the bias piece of it. I thought that was very interesting. What'd you think? >>Well, so humans are biased and so humans build models or models are inherently biased. And so I thought it was, you know, this is a huge problem to big problems in artificial intelligence. One is the inherent bias in the models. And the second is the lack of transparency that, you know, they call it the black box problem, like, okay, I know there was an answer there, but how did it get to that answer and how do I trace it back? Uh, and so Amazon is really trying to attack those, uh, with, with, with clarify. I wasn't sure if it was clarity or clarified, I think it's clarity clarify, um, a lot of entirely certain how it works. So we really have to dig more into that, but it's essentially identifying situations where there is bias flagging those, and then, you know, I believe making recommendations as to how it can be stamped. >>Nope. Yeah. And also some other news deep profiling for debugger. So you could make a debugger, which is a deep profile on neural network training, um, which is very cool again on that same theme of profiling. The other thing that I found >>That remind me, John, if I may interrupt there reminded me of like grammar corrections and, you know, when you're typing, it's like, you know, bug code corrections and automated debugging, try this. 
>>It wasn't like a better debugger come on. We, first of all, it should be bug free code, but, um, you know, there's always biases of the data is critical. Um, the other news I thought was interesting and then Amazon's claiming this is the first SageMaker pipelines for purpose-built CIC D uh, for machine learning, bringing machine learning into a developer construct. And I think this started bringing in this idea of the edge manager where you have, you know, and they call it the about machine, uh, uh, SageMaker store storing your functions of this idea of managing and monitoring machine learning modules effectively is on the edge. And, and through the development process is interesting and really targeting that developer, Dave, >>Yeah, applying CIC D to the machine learning and machine intelligence has always been very challenging because again, there's so many piece parts. And so, you know, I said it the other day, it's like a lot of the innovations that Amazon comes out with are things that have problems that have come up given the pace of innovation that they're putting forth. And, and it's like the customers drinking from a fire hose. We've talked about this at previous reinvents and the, and the customers keep up with the pace of Amazon. So I see this as Amazon trying to reduce friction, you know, across its entire stack. Most, for example, >>Let me lay it out. A slide ahead, build machine learning, gurus developers, and then database and data analysts, clearly database developers and data analysts are on their radar. This is not the first time we've heard that. But we, as the kind of it is the first time we're starting to see products materialized where you have machine learning for databases, data warehouse, and data lakes, and then BI tools. So again, three different segments, the databases, the data warehouse and data lakes, and then the BI tools, three areas of machine learning, innovation, where you're seeing some product news, your, your take on this natural evolution. >>Well, well, it's what I'm saying up front is that the good news for, for, for our customers is you don't have to be a Google or Amazon or Facebook to be a super expert at AI. Uh, companies like Amazon are going to be providing products that you can then apply to your business. And, and it's allowed you to infuse AI across your entire application portfolio. Amazon Redshift ML was another, um, example of them, abstracting complexity. They're taking, they're taking S3 Redshift and SageMaker complexity and abstracting that and presenting it to the data analysts. So that, that, that individual can worry about, you know, again, getting to the insights, it's injecting ML into the database much in the same way, frankly, the big query has done that. And so that's a huge, huge positive. When you talk to customers, they, they love the fact that when, when ML can be embedded into the, into the database and it simplifies, uh, that, that all that, uh, uh, uh, complexity, they absolutely love it because they can focus on more important things. >>Clearly I'm this tenant, and this is part of the keynote. They were laying out all their announcements, quick excitement and ML insights out of the box, quick, quick site cue available in preview all the announcements. And then they moved on to the next, the fourth tenant day solving real problems end to end, kind of reminds me of the theme we heard at Dell technology worlds last year end to end it. 
So we are starting to see the land grab, in my opinion, Amazon really going after things beyond IaaS and PaaS. They talked about contact centers, Kendra, uh, Lookout for Metrics, and the predictive maintenance services. Then Matt Wood came on and talked about all the massive disruption in the industries, and he said, literally, machine learning will disrupt every industry. They spent a lot of time on that, and they went into computer vision at the edge, which I'm a big fan of. I just loved that product. Clearly, every innovation, I mean, every vertical, Dave, is up for grabs. That was the key message from Dr. Matt Wood. >>Yeah, I totally agree. I see machine intelligence as a top layer of the stack. And as I said, it's going to be infused into all areas. It's not some kind of separate thing, you know, like Kubernetes, where we think of it as some separate thing. It's not; it's going to be embedded everywhere. And I really like Amazon's edge strategy. You were the first to sort of write about it in your keynote preview: Andy Jassy said, we want to bring AWS to the edge, and we see the data center as just another edge node. And so what they're doing is bringing SDKs, they've got a package of sensors, they're bringing appliances. I've said many, many times the developers are going to be, you know, the linchpin to the edge. And so Amazon is bringing its entire, you know, data plane, its control plane, its APIs to the edge and giving builders slash developers the ability to innovate. And I really like that strategy versus, hey, here's a box, it's got an x86 processor inside, throw it over the edge, give it a cool name that has edge in it, and here you go. >>That sounds right, call it the hyper edge. You know, the thing that's true is the data aspect at the edge. I mean, everything's got a database; data warehouses and data lakes are involved in everything, and then some sort of BI or tools to get the data and work with the data for the data analyst. Data feeds machine learning, a critical piece to all this, Dave. I mean, databases used to be a boring field. You know, I have a degree in database design, one of my computer science degrees, and back then no one really cared if you were a database person. Now it's like, man, data is everything. This is a whole new field. This is an opportunity. But also, I mean, are there enough people out there to do all this? >>Well, it's a great point. And I think this is why Amazon is trying to abstract away some of the complexity. I sat in on a private session around databases today and listened to a number of customers. And I will say this, you know, some of it I think was NDA, so I can't say too much, but I will say this: Amazon's philosophy of the database, and you addressed this in your conversation with Andy Jassy, is, across its entire portfolio, to have really fine-grained access to the deep-level APIs across all their services. And he said this to you: we don't necessarily want to be the abstraction layer per se, because when the market changes, that's harder for us to change. We want to have that fine-grained access. And so you're seeing that with database, whether it's, you know, NoSQL, SQL, you know, Aurora, the different flavors of Aurora, DynamoDB, uh, Redshift, uh, you know, RDS, on and on and on. There's just a number of data stores.
And you're seeing, for instance, Oracle take a completely different approach. Yes, they have my SQL cause they know got that with the sun acquisition. But, but this is they're really about put, is putting as much capability into a single database as possible. Oh, you only need one database only different philosophy. >>Yeah. And then obviously a health Lake. And then that was pretty much the end of the, the announcements big impact to health care. Again, the theme of horizontal data, vertical specialization with data science and software playing out in real time. >>Yeah. Well, so I have asked this question many times in the cube, when is it that machines will be able to make better diagnoses than doctors and you know, that day is coming. If it's not here, uh, you know, I think helped like is really interesting. I've got an interview later on with one of the practitioners in that space. And so, you know, healthcare is something that is an industry that's ripe for disruption. It really hasn't been disruption disrupted. It's a very high, high risk obviously industry. Uh, but look at healthcare as we all know, it's too expensive. It's too slow. It's too cumbersome. It's too long sometimes to get to a diagnosis or be seen, Amazon's trying to attack with its partners, all of those problems. >>Well, Dave, let's, let's summarize our take on Amazon keynote with machine learning, I'll say pretty historic in the sense that there was so much content in first keynote last year with Andy Jassy, he spent like 75 minutes. He told me on machine learning, they had to kind of create their own category Swami, who we interviewed many times on the cube was awesome. But a lot of still a lot more stuff, more, 215 announcements this year, machine learning more capabilities than ever before. Um, moving faster, solving real problems, targeting the builders, um, fraud platform set of things is the Amazon cadence. What's your analysis of the keynote? >>Well, so I think a couple of things, one is, you know, we've said for a while now that the new innovation cocktail is cloud plus data, plus AI, it's really data machine intelligence or AI applied to that data. And the scale at cloud Amazon Naylor obviously has nailed the cloud infrastructure. It's got the data. That's why database is so important and it's gotta be a leader in machine intelligence. And you're seeing this in the, in the spending data, you know, with our partner ETR, you see that, uh, that AI and ML in terms of spending momentum is, is at the highest or, or at the highest, along with automation, uh, and containers. And so in. Why is that? It's because everybody is trying to infuse AI into their application portfolios. They're trying to automate as much as possible. They're trying to get insights that, that the systems can take action on. >>And, and, and actually it's really augmented intelligence in a big way, but, but really driving insights, speeding that time to insight and Amazon, they have to be a leader there that it's Amazon it's, it's, it's Google, it's the Facebook's, it's obviously Microsoft, you know, IBM's Tron trying to get in there. They were kind of first with, with Watson, but with they're far behind, I think, uh, the, the hyper hyper scale guys. Uh, but, but I guess like the key point is you're going to be buying this. Most companies are going to be buying this, not building it. And that's good news for organizations. >>Yeah. I mean, you get 80% there with the product. Why not go that way? 
The alternative is try to find some machine learning people to build it. They're hard to find. Um, so the seeing the scale of kind of replicating machine learning expertise with SageMaker, then ultimately into databases and tools, and then ultimately built into applications. I think, you know, this is the thing that I think they, my opinion is that Amazon continues to move up the stack, uh, with their capabilities. And I think machine learning is interesting because it's a whole new set of it's kind of its own little monster building block. That's just not one thing it's going to be super important. I think it's going to have an impact on the startup scene and innovation is going, gonna have an impact on incumbent companies that are currently leaders that are under threat from new entrance entering the business. >>So I think it's going to be a very entrepreneurial opportunity. And I think it's going to be interesting to see is how machine learning plays that role. Is it a defining feature that's core to the intellectual property, or is it enabling new intellectual property? So to me, I just don't see how that's going to fall yet. I would bet that today intellectual property will be built on top of Amazon's machine learning, where the new algorithms and the new things will be built separately. If you compete head to head with that scale, you could be on the wrong side of history. Again, this is a bet that the startups and the venture capitals will have to make is who's going to end up being on the right wave here. Because if you make the wrong design choice, you can have a very complex environment with IOT or whatever your app serving. If you can narrow it down and get a wedge in the marketplace, if you're a company, um, I think that's going to be an advantage. This could be great just to see how the impact of the ecosystem this will be. >>Well, I think something you said just now it gives a clue. You talked about, you know, the, the difficulty of finding the skills. And I think that's a big part of what Amazon and others who were innovating in machine learning are trying to do is the gap between those that are qualified to actually do this stuff. The data scientists, the quality engineers, the data engineers, et cetera. And so companies, you know, the last 10 years went out and tried to hire these people. They couldn't find them, they tried to train them. So it's taking too long. And now that I think they're looking toward machine intelligence to really solve that problem, because that scales, as we, as we know, outsourcing to services companies and just, you know, hardcore heavy lifting, does it doesn't scale that well, >>Well, you know what, give me some machine learning, give it to me faster. I want to take the 80% there and allow us to build certainly on the media cloud and the cube virtual that we're doing. Again, every vertical is going to impact a Dave. Great to see you, uh, great stuff. So far week two. So, you know, we're cube live, we're live covering the keynotes tomorrow. We'll be covering the keynotes for the public sector day. That should be chock-full action. That environment is going to impact the most by COVID a lot of innovation, a lot of coverage. I'm John Ferrari. And with Dave Alante, thanks for watching.

Published Date : Dec 9 2020



Brandon Jung, GitLab | AWS re:Invent 2019


 

>>Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019, brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >>Well, welcome back. We're live in Las Vegas, here on theCUBE, continuing our coverage of day two of AWS re:Invent 2019. In fact, it took until the last interview on the second day for me to be paired up with my guy, Stu Miniman. So what happened is, this is the first interview we've done this way. >>John, you know, I've not been out playing golf. >>Well, and I wouldn't mind if I was; it'd be all right. Brandon, you know, Brandon, you play golf. >>I do. I played college golf, and I have, well, you can't see them, but I have some trousers that might match theirs, and I've proven that a few times. Payne Stewart would be very proud. >>Brandon Jung, VP of alliances at GitLab. And where'd you play college golf, by the way? >>I split some time in Oklahoma and down at Rice, down in Houston. >>Oh, yes. Wow. A Sooner; they have some pretty good golfers there. >>They do. >>Let's first off talk about VP of alliances. >>Sure. >>What do you do? What does that encompass? What's that all about? >>It covers a bunch of pieces. It covers all of the big key partnerships for us. So that's going to be obviously Amazon and the other big cloud providers, a lot of strategic technology partnerships, and then all your system integrators, managed service providers, resellers, and then, functionally, anything else that comes in. And then also the open source space, so I lead a lot of our open source engagement as well. >>What kind of customer base are we talking about here? I mean, for you guys, because it's pretty significant. >>So in the space we've got roughly two to three million users that use GitLab and count on it for building, deploying, and securing their code, and somewhere between a hundred thousand and 200,000 companies that GitLab is being used in now. >>Brandon, it's not just GitLab; you're also on the board for the Linux Foundation. And you know, we're getting close to 2020, so I even saw some people looking back at where open source has come in the last decade. And you know, Git, of course, is one of the predominant drivers of the proliferation of open source. So maybe tell us a little bit about, you know, what your customers come to you for, why GitLab is so critical to what they do. >>Sure. Yeah. Because if we look at history, it maps naturally. In GitLab, Git was where our base was when we started in 2012, 2013. As it's evolved, Git continues to be that core piece you need. So whether you're doing GitOps, infrastructure as code, or application development, you've got to have state, you've got to store your issues, you've got to take care of that. That's just 101 in software development or infrastructure management. So that's where we started. And then, you know, a couple of years later, we picked up and did a bunch of stuff in the CI/CD space. Initially we had them separate, and customers kept saying, gosh, these might work well together. And the Linux world has always been single tool, very sharp, very narrow, so we held off on that for a long time.
Finally we said, okay, we're going to give it a go and ship them together. And that's kind of led to where we are now, which is we think of, you know, GitLab as a single tool for the entire DevOps life cycle.
And that makes it easy for someone to get started to build it, secure it, ship it, all of that from idea to production in the shortest possible time. And so that's kind of how it evolved. And yeah, we've grown up with the open source world ever since. And um, it's an awesome place. All right, so you've got the alliances and we're here at the biggest cloud show there. So help us connect the dots. Get lab AWS. Yeah. Perfect. So if we kind of look back and we go, ah, look at the keynote, right? So Andy talked a whole bunch, front keynote, Goldman Sachs, big talk with Verizon, a lot around the services, new stuff with arm new chips, new, um, a lot of new databases. >>Um, all of that rolled out. Those are services as Amazon looked at it. Our goal, our job is to get those customers onto the Amazon services. We're the tool that helps them develop and deploy those applications. Goldman, huge customer, Verizon, huge customer. So the majority of the keynotes you'd get lab to get to Amazon. So we're that tool that does the application security deployment and um, you know, lets those devs really take advantage of the great services that Amazon delivers. You know, you talk about security is it, is it, um, and obviously it's increased in terms of its importance. We recognize we've, we've seen how vulnerable apps can be and, and these invasion points, is that being reflected in budgets? Are we seeing that? Are people making these kinds of investments or is there still some lip service being paid to it and maybe they need a little more money where their mouth is. >>There's not a shortage of dollars, so I'll be be real straight forward. That is for us, the big growth area is uh, application security in a pipeline. The notion of shift left, um, and it's been, it's actually one of the easier conversations because the CSOs really want to make sure that every piece of code is tested, be it static code, dynamic code, license scanning, all the above. Um, the way they've had to do that and traditionally done it is at the end of a pipeline and they make every dev on happy because they throw it all the way back to the front with the dev. And then I was like, Oh, thank you so much. I did that two weeks ago and now I have to go, why didn't we do it on the front side instead of the back side? You kill the most important thing, which is cycle time, right? >>Cycle time is time from idea to Chimp. So by shifting it left, there's plenty of money and the CSOs love it because just want you to spend it. It's where they spend it. Right. And so now they get all the code tested. The devs love it because they get feedback instead of the CSO saying this is broken. The two old, the second they hit command a couple minutes later, Oh it's broken. They go fix it, make another commit. They're going to move way faster much. Um, so that's really what we get at and yeah, but no short in dollars, the security still the windows, the spend happens, you're saying right on the front side instead of the back shop and try and get full coverage. So a lot of times otherwise if you're trying to do security after someone's developed it, you're not sure. Like are you getting every code, all a piece of code that was developed? Are you getting just a lot of it as you talked about web apps, a lot of it is the focus. Oh the web apps. Cause that's the front end. But intrusion, once it passed the front end, it's a soft interior. You've got to do every single piece of code has to be tested. >>Yeah. It's Brandon. 
So you know what I've heard, especially from, I mean, you know, my peers in the security industry, you know, security needs to be considered the entire way. Security is everyone's job chair's responsibility. I need to think about it. But the other thing that really has changed for people is you talk about CIC. D I need to move fast. Well hold on. The security team's got to review everything. One of the core principles of dev ops is you want to bake it in the process, you need to get them involved. And then there's DevSecOps which pulls all of these pieces together. So tell, tell us how those trends are going and that, you know, speed and security actually go together not opposed. >>Oh yeah. And because, and it's how you measure the, the speed. Cause I think sometimes the question is all back to what is it from it. It's, it's a life cycle. And if that's what you're measuring, being able to do the security earlier is so much faster because you're not having to iterate, um, later. But, um, it's continues to increase. Devs are getting more and more say that's not gonna change anytime soon. Um, empowering those devs to own the security, uh, empowering those devs through the pipeline to be able to deploy into Lambda, into far gate. They love that. And if you could give that and give the security, the visibility, the dashboarding, the understanding of what just went in, um, what code they're using, what the licenses are, that visibility is huge and that allows you to move fast cause it's trust. >>I mean actually, uh, I love the researchers at Dora, you know, do the annual survey, uh, on dev ops and they said, actually if you are a company that tends to deploy less often, it tends to take you much longer to recover and you're not geared to be able to do it. Uh, you know, my background networking and you think about, you know, security is one of those things like, well wait, I want to keep my things stable and not changing for a while, but that means you're less and less secure cause I need to be on the latest patch. I need to be able to update things there. So, uh, you know, CIC D I think leads to should lead to greater security. Do you have some stats around that for your customer as to, you know, how they measure that? >>We have some pretty good velocity. Um, so Goldman went with us and this is real public is they, they started with us and went from about a two week release cycle down to tens, 20 a hundred times a day. Um, and that, I mean that's a company that does a great job in dev, um, but can also be like smaller companies like wag labs that we talked with earlier and they same kind of thing. They went often from a week down to they were doing, they typically do 20 to 30 deployments a day. And again, it just makes you break the pieces smaller, less likely that you're going to introduce dependencies that break something and all that process builds on each other as the door is stuff. If you haven't read, you've read it obviously, but if the users haven't great place to get started and understand how this works. >>Has testing changed or is testing changing in terms of when you establish the criteria, what you're looking for in terms of I guess you have a lot of new capabilities so you've got to change, I assumed your criteria up front do have a little proper, a little more accurate evaluation is that environment it's changed somewhat. I mean testing in application testing it is pretty specific to every comfy. So tools continue to get better. Um, ways of review have gotten a lot better. 
So there are now a lot of capabilities. At the point that you're going to go into deployment, one of the harder pieces is doing your user acceptance testing; it's like, gosh, am I going to see the same thing that a user will? Right? And a lot of this has gotten to the point where we have, one click at the end of the deploy, a review app. >>Anyone in the company can look at exactly what you're about to deploy; it rebuilds everything. So there are some tools that make it faster. But in terms of your load balancing, in terms of your user acceptance testing, a lot of those principles continue to be pretty consistent. >>One of the big things we heard from Andy Jassy is him talking about transformation, and he said you can't just do it incrementally; you need, you know, clear leadership and commitment. We want to hear how you're hearing about this from your customers. How is GitLab helping customers along those transformation journeys? >>Sure. So, I totally agree that it's a cultural piece, without question. I think there are a couple of places. There's obviously the tool piece and just getting everyone on the same page. And we all know this intuitively: we've seen what happens when you go from a Word doc to a Google doc and everyone can edit at the same time. That's transformation: you know what everyone's working on, and you're not duplicating effort. >>And that's really, in many ways, what GitLab is doing, helping the front-end, you know, product manager know exactly what's going on on the infrastructure side, so you communicate in a similar language. The other piece that we work a lot on is that, because GitLab operates an extremely open culture, we publish how we run the company in a handbook that's 2,500 pages, and we're always updating it. We do reviews every time we release, and we've released every single month for the last 120 months in a row. We go through, here's what the release is going to be; it's on YouTube, everyone can see it. When things go wrong, we publish it. If we have an outage, we will live broadcast how we come back from the outage, and we publish all of it for anyone to understand. >>And so, one of the other things is that a lot of our customers are getting started on that journey. It's one thing to have a deck that says, here's what you do for your transformation, for your company. It's another thing when you can literally jump in on a Monday morning on the GitLab call and watch GitLab go through a post-mortem of when we had a small outage. Oh, that's what no-blame looks like. Okay, now I understand that. Hey, what didn't we release that we could have done better? Those are processes you can have on a piece of paper, but it's a different thing when you can walk through them with the company. And it's even better when you're watching the company that's building the same product, the same tool that you're using. So, I mean, that's a cultural decision. >>Yes. I mean, it's got to be, right? I love the no-blame, right? Because you're saying, instead of finger-pointing or castigating, you know, we're going to learn from this. And what impact does that have on a customer when they see you, in real time, solving your problems? >>They know that if they have a question for us, we'll take it seriously, and we're going to handle it in a way where they know when it's going to be resolved.
And that doesn't mean that we always deliver at the exact time a customer asks. But that level of transparency breeds trust, and it also helps a customer quantify what they want. It helps us a huge amount with communication, because they know what we're prioritizing and they understand why. And that isn't something that's typical for a company; it's typically very hard, unless you broadcast everything like we do, to know, well, why are they making that decision? >>And so that's one of the real big reasons that our customers work with us. That's where we get 10,000-plus additional contributors to GitLab as an open source project, and that helps massively, of course. So the velocity comes because there's no difference between a GitLabber, or the thousand GitLabbers in 64 countries, or any one of the 10,000 contributors, or our biggest competitors that regularly make contributions to our landscape. So we have a landscape of, how does DevOps work? Who does stuff well? Hey, no shame if they delivered something better; I want to know that, and I'll make that commit. We will share it with the world that we are not good at that and you are better at it, and you know what? We'll get better. >>Right. It's a winning formula. It's been working really well. I appreciate the time, Brandon. Good seeing you. You've got to love the slacks; wish we could show them, of course, but next time. >>Thanks for having us. >>All right. You're watching theCUBE, covering AWS re:Invent 2019.

Published Date : Dec 5 2019
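As a companion to the shift-left security discussion in the GitLab interview above, here is a hedged sketch of what wiring GitLab's built-in scanning templates into a pipeline can look like when driven through the API with the python-gitlab client. The token, project path, and branch are placeholder assumptions, and the snippet presumes the project does not already have a .gitlab-ci.yml.

```python
# Hypothetical sketch of adding GitLab's built-in security scans to a pipeline
# via the python-gitlab client. Token, project path, and branch are placeholders.
import gitlab

GITLAB_URL = "https://gitlab.com"
PRIVATE_TOKEN = "glpat-xxxxxxxxxxxxxxxxxxxx"  # placeholder token
PROJECT_PATH = "example-group/example-app"    # placeholder project

# CI config that includes GitLab's maintained SAST and secret-detection templates,
# so scans run on every commit instead of at the end of the release cycle.
CI_CONFIG = """
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

stages:
  - test
"""

gl = gitlab.Gitlab(GITLAB_URL, private_token=PRIVATE_TOKEN)
project = gl.projects.get(PROJECT_PATH)

# Commit the CI file to the default branch (assumes it does not already exist).
project.files.create(
    {
        "file_path": ".gitlab-ci.yml",
        "branch": "main",
        "content": CI_CONFIG,
        "commit_message": "Add SAST and secret detection to the pipeline",
    }
)

# Trigger a pipeline and print the first page of jobs it scheduled,
# which should include the scanning jobs from the included templates.
pipeline = project.pipelines.create({"ref": "main"})
for job in pipeline.jobs.list():
    print(job.name, job.status)
```

The included templates add SAST and secret-detection jobs to the test stage, so findings surface on every commit rather than after development is finished, which is the cycle-time point Brandon makes in the interview.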



Peter Burris, Wikibon | Action Item, Feb 9 2018


 

>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (upbeat music) Once again, we're broadcasting from theCUBE studio in beautiful Palo Alto, California, and I have joining me here in the studio George Gilbert, David Floyer, both Wikibon analysts, and remote, welcome Neil Raden and Jim Kobielus. This week, we're going to talk about something that's actually quite important, and it's one of those examples of an innovation in which technology that is maturing in multiple domains is brought together in unique and interesting ways to potentially dramatically revolutionize how work gets done. Specifically, we're talking about something we call augmented programming. The notion of augmented programming borrows from some of the technologies associated with new or declarative low-code development environments, machine learning, and an increasing understanding of the role that automation's going to play, specifically as pertains to human and human-augmented activities. Now, low-code programming has been around for a while. Machine learning's been around for a while, and, increasingly, some of these notions of automation have been around for a while. But it's how they are coming together to create new approaches and new possibilities that can dramatically improve the speed of systems development, the quality of systems development, and, ultimately, very importantly, the ongoing manageability of those systems. So, Jim Kobielus, let's start with you. What are some of the issues associated with augmented programming that users need to be focused on? >> Yeah, well, the primary issue, or, really, the driver, is that we need to increase the productivity of developers greatly, because required of them to build programs, applications faster with fewer resources, and deploy them more rapidly in DevOps environments, and to manage that code, and to optimize that code for 10 zillion downstream platforms from mobile to web to the Internet of Things, and so forth. They need power tooling to be able to drive this process. Now, low-code platforms, you know, that whole low-code space has been around for years. It's very much evolved from what used to be called rapid application development, which itself evolved from the 4GL languages of decades past, and so forth. Looking at it now, here, we're moving towards the end of the second decade of this century. All low-code development space has evolved, it is rapidly emerging into, BPM, on the one hand, orchestration modeling tools. Robotic process automation, on the other hand, to enable the average end user or business analyst to quickly gin up an application based on being able to wire together UI components fairly rapidly, and drive it from the UI on in. What we're seeing now is that more and more machine learning is being used in the process of developing low-code application, or in the low-code development of applications. More machine learning is being used in a variety of capacities, one of which is simply to be able to infer the appropriate program code for external assets like screenshots and wireframes, but also from database schema and so forth. A lot of machine learning is coming to this space in a major way. >> But it sounds, though, there's still going to be some degree of specialization, and the nature of the tools that we might use in this notion of augmented programming. 
So, RPA may be associated with a certain class of applications and environmental considerations, and there'll be other tools, for example, that might be associated with different application considerations and environmental attributes as well. But David Floyer, one of the things that we're concerned about is, a couple weeks ago, we talked about the notion of data-aware middleware, where the idea that, increasingly, we'll see middleware emerge that's capable of moving data in response to the metadata attributes of the data, combined with invisibility to the application patterns. But when we think about this notion of augmented programming, what are some of the potential limits that people have to think about as they consider these tools? >> Peter, that's a very good question. The key for all of these techniques is to use the right tools in the right place. A lot of the environments where the leading edge of this environment assumes an environment where the programmer has access to all of his data, he owns it, and he is the only person there. The challenge is, in many applications, you are sharing data. You are sharing data across the organization, you are sharing data between programmers. Now, this introduces a huge amount of complexity, and there have been many attempts to try and tackle this. There've been data dictionaries, there've been data management, ways of managing this data. They haven't had a very good history. The efforts involved in trying to make those work within an organization have been, at best, spasmodic. >> (laughs) Spasmodic, good word! >> When we go into this environment, I think the key is, make sure that you are applying these tools to the areas initially where somebody does have access to all the data, and then carefully look at it from the point of view of shared data, because you have a whole lot of issues in state environments, which we do not have in non-state environments, and the complexity of locking data, the complexity of many people accessing that data, that requires another set of tools. I'm all in favor of these low-code-type environments, but you have to make sure that you're applying the right tools for the right type of applications. >> And specifically, for example, a lot of metadata that's typically associated with a database is not easily revealed to an application developer, nor an application. And so, you have to be very, very careful about how you exploit that. Now, Neil Raden, there has been over the years, as David mentioned, a number of passes at doing this that didn't go so well, but there are some business reasons to think why this time it might go a little bit better. Talk a little bit about some of the higher-level business considerations that are on the table that may catalyze better adoption this time of these types of tools. >> One thing is that, no matter what kind of an organization you are, whether you're a huge multinational or an SMB or whatever, all of these companies are really rotten with what we call shadow systems. In other words, companies have applications that do what they do, and what they don't do, people cobble together. The vast majority of 'em are done in Access and Excel, still. Even in advanced organizations, you'll find this. If there's a way to eliminate that, because it's a real killer of productivity, then that's a real positive. 
I suppose my concern is that when you deal at that level, how are you going to maintain coherency and consistency in those systems over time without adding, like he said, orchestration of those systems? What David is saying, I think, is really key. >> Yeah, I, go ahead, sorry, Neil. Go ahead. >> No, that's all right. What I was-- >> I think-- >> Peter: Sorry. Bad host. >> David: You think? >> Neil: No, go ahead. >> No, what I was going to say was, and a crucial feature of this is that a lot of times, the application is owned by a business line, and the business line presumes that they own their data, and they have modeled those systems for a certain type of work, for a certain volume of work, for a certain distribution of control, and when you reveal a lot of this stuff, you sometimes break those assumptions. That can lead to real serious breaks in the system. >> You know, they're not always evil, as we like to characterize them. Some of them are actually well-thought-out and really good system, better than anything they could get 'em from the IT organizations. But the point is, they're usually pretty brittle, and they require a lot of effort for the people who develop them to keep them running because they don't use the kind of tools and approaches and platforms and methodologies that lend themselves to good-quality software. I think there's real potential for RTA in that area. >> I think there are also some interesting platforms that are driving to help in this particular area, particularly of applications which go across departments in an organization. ServiceNow, for example, has a very powerful platform for very high-level production of systems, and it's being used a lot of the time to solve problems of procedures, procedures going across different departments, automating those procedures. I think there are some extremely good tools coming out which will significantly help, but they do help more in the serial procedures, rather than the concurrent procedures. >> And there are some expectations about the type of tools you use, and the extensibility of those tools, et cetera, which leads me, anyway, George, to ask the question about some of the machine learning attributes of this. We've got to be careful about machine learning being positioned as the panacea for all business problems, which too often seems to be the case. But we are certainly, it's reasonable to observe that machine learning can, in fact, help us in important ways at understanding how patterns in applications and data are working, how people are working together. Talk a little bit about the machine learning attributes of some of these tools. >> Well, I like to say that every few years, we have a technology we get so excited about that we assume it tastes like chocolate, costs a dollar, and cures cancer. Machine learning is that technology right now. The interesting thing about robotic process automation in many low-code environments is that they're sort of inheriting the mantle of the old application macros, and even cross-application macros from the early desktop office wars. The difference now is, unlike then, there were APIs that those scripts could talk to, and they could then treat the desktop applications as an application platform. As David said, and Neil, we're going through application user interfaces now, and when you want to do a low-code programming environment, you want often to program by example. 
But then you need to generalize parts; you know, when you move this thing to this place, you might now want to generalize that. That's where machine learning can start helping take literal scripts and add more abstract constructs to them. >> So, you're literally digitizing some of the primitives that are in some of these applications, and that allows you to reveal data that machine learning can use to make observations and recommendations about patterns, and actually do code generation. >> And you know, I would add one thing: it's not just about the UI anymore, because we're surfacing, as we were talking earlier, the data-driven middleware. It's another way of looking at what used to be the system catalog. We had big applications all talking to a central database, but now that we have so many repositories, we're sort of extricating the system catalog so that we can look at and curate data in many locations. These tools can access that because they have user interfaces, as well as APIs. And then, in addition, you don't have to go against a database that is unprotected by an application's business logic. More and more, we have microservices and serverless functions that embody the business logic, and you can go against them, and they enforce the rules as well. >> That's great, so, David Floyer-- >> I should point out-- >> Hold on, Jim. Dave Floyer, this is not a technology set that suddenly is emerging on the scene independent of other changes. There are also some important changes in the hardware itself that are making it possible for us to reveal data differently, so that these types of tools and these types of technologies can be applied. I'm specifically thinking about something as mundane as SSD, flash-based storage, and other types of technologies that allow us to do different things with data so that we can envision working with this stuff. Give us a quick rundown on the infrastructure, some of the key technologies making this possible. >> When we look at systems architectures now, what we never had before was fast memory, fast storage. We had very, very slow storage, and we had to design systems to take account of that. What is coming in now is much, much faster storage built on things like NVMe and other fabrics, which really get to any data within microseconds, as opposed to milliseconds. That's thousands of times faster. What you can do with these is achieve an access density to the data that is much, much higher than it was, again many thousands of times higher. That enables you to take a different approach to sharing data. Instead of having to share data at the disk level, you can now, for example, take a snapshot of data. You can allow that snapshot to be the snapshot of, for example, the analytics system on the hour, or on the day, or whatever timescale you want. And then, in parallel, you can run huge amounts of analytics against a snapshot of that same data while the operational system keeps working. There are some techniques there which I think are very exciting, indeed. The other big change is that we're going to be talking machine to machine. Most applications were designed for a human to be the recipient at the other end. One of the differences when you're dealing with machines is that now you have to get your code done in microseconds, as opposed to seconds. Again, orders of magnitude faster.
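To make David's snapshot-for-analytics point concrete, here is a minimal sketch in Python using SQLite's built-in backup facility: it takes a point-in-time copy of an operational database and runs an analytical query against that copy while the operational side keeps accepting writes. The orders table, the file name, and the amounts are hypothetical, and a production system would use storage-level or database-native snapshots rather than SQLite, but the pattern is the same.

import sqlite3

# Operational database; in a real system this would be the live transactional store.
ops = sqlite3.connect("orders.db")  # hypothetical path; any SQLite database works
ops.execute("DROP TABLE IF EXISTS orders")
ops.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
ops.execute("INSERT INTO orders (amount) VALUES (42.0), (17.5)")
ops.commit()

# Take a point-in-time snapshot into a separate in-memory database.
# Connection.backup() copies the source pages without taking the writer offline.
snapshot = sqlite3.connect(":memory:")
ops.backup(snapshot)

# The operational system keeps taking writes after the snapshot was taken...
ops.execute("INSERT INTO orders (amount) VALUES (99.9)")
ops.commit()

# ...while analytics run against the frozen snapshot, unaffected by the new writes.
count, total = snapshot.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(f"snapshot sees {count} orders totaling {total:.2f}")  # 2 orders totaling 59.50

snapshot.close()
ops.close()

The same idea scales up with array- or filesystem-level snapshots on fast NVMe storage: the snapshot is cheap to take, so analytics can run hourly or daily against a consistent view without locking the operational path.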
This is a very exciting area, but when we're looking at low-code, for example, you're still going to need those well-crafted algorithms, that well-crafted, very fast code, as one of the tools of programmers. There's still going to be a need for people who can create these very fast algorithms. An exciting time all the way around for programmers. >> What were you going to say, Jim? And I want to come back and have you talk about DevOps for a second. >> Yeah, I'll add to what David was just saying. Most low-code tools are not entirely no-code, meaning what they do is auto-generate code pursuant to some declarative business specification. Professional programmers can go in and modify that code, tweak it, and optimize it. And I want to tie in now to something that George was talking about, the role of ML in this process. ML can make a huge mess, in the sense that ML can be an enabler for more people who don't know a whole lot about development to build stuff willy-nilly, so there's more code out there than you can shake a stick at, and there are no standards. But also, I'm seeing, and I saw this past week, MIT has a project, they already have a tool, that's able to do this: it's able to use ML to take a segment of code out of one program, transplant it into another application, and modify it so it fits the context of the new application along various attributes, and so forth. What I'm getting at is that, according to what, say, MIT has done, ML can be a tool for enabling reuse, re-contextualization, and tweaking of code. In other words, ML can be a handmaiden of enforcing standards as code gets repurposed throughout these low-code environments. I think ML is a double-edged sword, in terms of enabling stronger or weaker governance over the whole development process. >> Yeah, and I want to add to that, Jim, that it's not just that you can enforce, or at least reveal, standards and compliance, but it also increases the likelihood that we become a little bit more tool-dependent. Or a little bit less tool-dependent, I should say. Going back to what you were talking about, David, it increases the likelihood that people are using the right tool for the right job, which is a pretty crucial element of this, especially as we move into adoption. So, Jim, give us a couple of quick observations on what a development organization is going to have to do differently to get going on utilizing some of these technologies. What are the top two or three things that folks are going to have to think about? >> First of all, in the low-code space, there are general-purpose tools that can bang out code for various target languages and various applications, and there are highly special-purpose tools that can go gangbusters on auto-generating web application code, mobile code, and IoT code. First and foremost, you've got to decide how much of the ocean you want to boil, in terms of low-code. I recommend that if you have a requirement for accelerating, say, mobile code development, then go with low-code tools that are geared to iOS and Android and so forth as your target platforms, and stay there. Don't feel like you have to get some monster suite that can do everything, potentially. That's one critical thing.
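To ground Jim's point about auto-generated code that professionals then refine, and George's earlier point about generalizing recorded scripts, here is a hypothetical Python sketch. The first function is the kind of literal script a program-by-example recorder might emit; the second is the parameterized version a developer, or an ML-assisted refactoring step, could turn it into so the flow becomes a reusable, governable asset. The click and type_text primitives are stand-ins for whatever the RPA or low-code tool actually provides, not a real library.

# Hypothetical UI-automation primitives; stand-ins for the RPA tool's real API.
def click(selector: str) -> None:
    print(f"click {selector}")

def type_text(selector: str, text: str) -> None:
    print(f"type {text!r} into {selector}")

# 1. Literal script, as a program-by-example recorder might emit it:
#    hard-coded values, usable exactly once for one customer and one amount.
def recorded_create_invoice() -> None:
    click("menu:Billing")
    click("button:New Invoice")
    type_text("field:Customer", "Acme Corp")
    type_text("field:Amount", "1250.00")
    click("button:Submit")

# 2. Generalized version: the recorded constants become parameters, so the same
#    flow can be reused, reviewed, and governed like any other shared code asset.
def create_invoice(customer: str, amount: float) -> None:
    click("menu:Billing")
    click("button:New Invoice")
    type_text("field:Customer", customer)
    type_text("field:Amount", f"{amount:.2f}")
    click("button:Submit")

if __name__ == "__main__":
    create_invoice("Acme Corp", 1250.00)
    create_invoice("Globex", 420.00)

The step from the first form to the second is exactly the kind of change a professional developer, or the governance layer Jim describes, would review before the flow is promoted into a shared repository.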
Another critical thing is that the tool you adopt needs to be more than just a development tool. It needs to have capabilities built in to help your team govern those code builds within whatever DevOps, CI/CD, or repository setup you have inside your organization; make sure that the tool you've got plays well with your DevOps environment, with your workflows, with your code repositories. And then, number three, we keep forgetting this, but the front-end development is still not a walk in the woods. In fact, specifying the complex business logic that drives all this code generation is stuff for professional developers more often than not. These are complex; even RPA tools are, quite frankly, not as user-friendly as they potentially could be down the road, because you still need somebody to think through the end-to-end application and then specify, at a declarative level, the steps that need to be accomplished before the RPA tool can do its magic and build something that you might then want to crystallize as a repeatable asset in your organization. >> So it doesn't take the thinking out of application development. >> James: Oh, no, no, no no. >> All right, so, let's do this. Let's hit the action items and see what we all think folks should do next. David Floyer, let me start with you. What's the action item out of this? >> The action item is horses for courses. The right horse for the right course, the right tools for the right job. Understand where things are stateless and where things are stateful, and use the appropriate tools, and, as Jim was just saying, make sure that there is integration of those tools into the current processes and procedures for coding. >> George Gilbert, action item. >> I would say that, building on that, start with pilots that involve one or a couple of simple applications. Or, I should say, one or a couple of enterprise applications, but with less sort of branching, if-then type of logic built in. It could be hardwired-- >> So, simple flows? >> Simple flows, so that over time you can generalize that and play with how the RPA tools or low-code tools can generalize their auto-generated code. >> Peter: Neil Raden, action item. >> My suggestion is that if you involve someone who's going to learn how to use these tools and develop an application or applications for you, make sure that you're dealing with someone who's going to be around for a while, because otherwise, you're going to end up with a lot of orphan code that you can't maintain. We've certainly seen that before. >> David: That's great. >> Peter: Jim Kobielus, action item. >> Yeah, the action item is, approach low-code as tooling for the professional developer, not necessarily as a way to bring in untrained, non-traditional developers. Like Neil said, make sure that the low-code environment itself is there for the long haul, that it'll be managed and used by professional developers, and make sure that they are provided with a front-end visual workspace that helps them do their jobs most effectively, that is user-friendly for them to get stuff done in a hurry. And don't worry about bringing freelance, untrained developers into your organization, or somehow re-tasking your business analysts to become coders. That's probably not the best idea in the long run, for maintainability of the code, if nothing else. >> Certainly not in the intermediate term. Okay, so here's the action item. Here's our Wikibon Action Item.
As digital business progresses, it needs to be able to create digital assets that are predicated on valuable data faster, in a more flexible way, with more business knowledge embedded and imbued directly in how the process works. A new class of tools is emerging that we think will actually allow this to happen more successfully. It combines mature knowledge from the application development world with new insights into how machine learning works, and a new understanding of the impacts of automation on organization. We call these augmented programming tools, and essentially, we call them augmented programming because, in this case, the system is taking on some degree of responsibility, on behalf of the business, to generate code, identify patterns, and ultimately do a better job of maintaining how applications get organized and run. While these technologies have potential power, we have to acknowledge that there's never going to be a one-size-fits-all. In fact, we believe very strongly that we're going to see a range of different tools emerge that will allow developers to take advantage of this approach, given their starting point, the artifacts that are available, and the characteristics of the applications that have to be built. One of the ones that we think is particularly important is robotic process automation, or RPA, which starts with the idea of discovering something about the way applications work by looking at how the application behaves onscreen, encapsulating that, and generalizing it so that it can be used as a tool in future application development work. We also note that these application development technologies will not operate independent of other technology and organizational changes within the business. Specifically, on the technology side, we are encouraged that there's a continuing evolution of hardware technology that's going to take advantage of faster data access utilizing solid-state disks, NVMe over fabric, and new types of system architectures that are much better suited for rapid shared data access. Additionally, we observe that there are new classes of technologies emerging that allow a data control plane to operate based on metadata characteristics, informed by application patterns, often through things like machine learning. One of the organizational issues that we think is really crucial is that folks should not presume that this is going to be a path for taking anybody in the business and turning them into an application developer. You still have to be able to think like an application developer and imagine how you turn a business process into something that looks like a program. Another group that we think has to be considered here is not just the DevOps people, although that's important, but, going down a level, the good old DBAs, who have always suffered through new advances in tools that assume the data in a database is always available, and that they don't have to worry about transaction scaling or the way the database manager is set up. It would be unfortunate if the value of these tools from a collaboration standpoint, to work better with the business and to work better with the younger programmers, ended up being lost because developers continue not to pay attention to how the underlying systems that currently control a lot of the data operate. Okay, once again, we really appreciate you participating.
Thank you, David Floyer and George Gilbert, and on the remote, Neil Raden and Jim Kobielus. We've been talking about augmented programming. This has been Wikibon Action Item. (upbeat music)

Published Date : Feb 9 2018
