Rachel Skaff, AWS | International Women's Day


 

(gentle music) >> Hello, and welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE. I've got a great guest here, CUBE alumni and very impressive, inspiring, Rachel Mushahwar Skaff, who's a managing director and general manager at AWS. Rachel, great to see you. Thanks for coming on. >> Thank you so much. It's always a pleasure to be here. You all make such a tremendous impact with reporting out what's happening in the tech space, and frankly, investing in topics like this, so thank you. >> It's our pleasure. Your career has been really impressive. You worked at Intel for almost a decade, and that company is very tech, very focused on Moore's law, cadence of technology power in the industry. Now at AWS, powering next-generation cloud. What inspired you to get into tech? How did you get here and how have you approached your career journey, because it's quite a track record? >> Wow, how long do we have? (Rachel and John laugh) >> John: We can go as long as you want. (laughs) It's great. >> You know, all joking aside, I think at the end of the day, it's about this simple statement. If you don't get goosebumps every single morning that you're waking up to do your job, it's not good enough. And that's a bit about how I've made all of the different career transitions that I have. You know, everything from building out data centers around the world, to leading network and engineering teams, to leading applications teams, to going and working for, you know, the largest semiconductor in the world, and now at AWS, every single one of those opportunities gave me goosebumps. And I was really focused on how do I surround myself with humans that are better than I am, smarter than I am, companies that plan in decades, but live in moments, companies that invest in their employees and create like artists? And frankly, for me, being part of a company where people know that life is finite, but they want to make an infinite impact, that's a bit about my career journey in a nutshell. >> Yeah. What's interesting is that, you know, over the years, a lot's changed, and a theme that we're hearing from leaders now that are heading up large teams and running companies, they have, you know, they have 20-plus years of experience under their belt and they look back and they say, "Wow, "things have changed and it's changing faster now, "hopefully faster to get change." But they all talk about confidence and they talk about curiosity and building. When did you know that this was going to be something that you got the goosebumps? And were there blockers in your way and how did you handle that? (Rachel laughs) >> There's always blockers in our way, and I think a lot of people don't actually talk about the blockers. I think they make it sound like, hey, I had this plan from day one, and every decision I've made has been perfect. And for me, I'll tell you, right, there are moments in your life that mark a differentiation and those moments that you realize nothing will be the same. And time is kind of divided into two parts, right, before this moment and after this moment. And that's everything from, before I had kids, that's a pretty big moment in people's lives, to after I had kids, and how do you work through some of those opportunities? Before I got married, before I got divorced. Before I went to this company, after I left this company. And I think the key for all of those is just having an insatiable curiosity around how do you continue to do better, create better and make better? 
And I'll tell you, those blockers, they exist. Coming back from maternity leave, hard. Coming back from a medical leave, hard. Coming back from caring for a sick parent or a sick friend, hard. But all of those things start to help craft who you are as a human being, not as a leader, but as a human being, and allows you to have some empathy with the people that you surround yourself with, right? And for me, it's, (sighs) you can think about these blockers in one of two ways. You can think about it as, you know, every single time that you're tempted to react in the same way to a blocker, you can be a prisoner of your past, or you can change how you react and be a pioneer of the future. It's not a blocker when you think about it in those terms. >> Mindset matters, and that's really a great point. You brought up something that's interesting, I want to bring this up. Some of the challenges in different stages of our lives. You know, one thing that's come out of this set of interviews, this, of day and in conversations is, that I haven't heard before, is the result of COVID, working at home brought empathy about people's personal lives to the table. That came up in a couple interviews. What's your reaction to that? Because that highlights that we're human, to your point of view. >> It does. It does. And I'm so thankful that you don't ask about balance because that is a pet peeve of mine, because there is no such thing as balance. If you're in perfect balance, you are not moving and you're not changing. But when you think about, you know, the impact of COVID and how the world has changed since that, it has allowed all of us to really think about, you know, what do we want to do versus what do we have to do? And I think so many times, in both our professional lives and our personal lives, we get caught up in doing what we think we have to do to get ahead versus taking a step back and saying, "Hey, what do I want to do? "And how do I become a, you know, "a better human?" And many times, John, I'm asked, "Hey, "how do you define success or achievement?" And, you know, my answer is really, for me, the greatest results that I've achieved, both personally and professionally, is when I eliminate the word success and balance from my vocabulary, and replace them with two words: What's my contribution and what's my impact? Those things make a difference, regardless of gender. And I'll tell you, none of it is easy, ever. I think all of us have been broken, we've been stretched, we've been burnt out. But I also think what we have to talk about as leaders in the industry is how we've also found endurance and resilience. And when we felt unsteady, we've continued to go forward, right? When we can't decide, the best answer is do what's uncomfortable. And all of those things really stemmed from a part of what happened with COVID. >> Yeah, yeah, I love the uncomfortable and the balance highlight. You mentioned being off balance. That means you're growing, you're not standing still. I want to get your thoughts on this because one thing that has come out again this year, and last year as well, is having a team with you when you do it. So if you're off balance and you're going to stretch, if you have a good team with you, that's where people help each other. Not just pick them up, but like maybe get 'em back on track again. So, but if you're solo, you fall, (laughs) you fall harder. So what's your reaction to that? 
'Cause this has come up, and this comes up in team building, workforce formation, goal setting, contribution. What's your reaction to that? >> So my reaction to that that is pretty simple. Nobody gets there on their own at all, right? Passion and ambition can only take you so far. You've got to have people and teams that are supporting you. And here's the funny thing about people, and frankly, about being a leader that I think is really important: People don't follow for you. People follow for who you help them become. Think about that for a second. And when you think about all the amazing things that companies and teams are able to do, it's because of those people. And it's because you have leaders that are out there, inspiring them to take what they believe is impossible and turn it into the possible. That's the power of teams. >> Can you give an example of your approach on how you do that? How do you build your teams? How do you grow them? How do you lead them effectively and also make 'em inclusive, diverse and equitable? >> Whew. I'll give you a great example of some work that we're doing at AWS. This year at re:Invent, for the first time in its history, we've launched an initiative with theCUBE called Women of the Cloud. And part of Women of the Cloud is highlighting the business impact that so many of our partners, our customers and our employees have had on the social, on the economic and on the financials of many companies. They just haven't had the opportunity to tell their story. And at Amazon, right, it is absolutely integral to us to highlight those examples and continue to extend that ethos to our partners and our customers. And I think one of the things that I shared with you at re:Invent was, you know, as U2's Bono put it, (John laughs) "We'll build it better than we did before "and we are the people "that we've been waiting for." So if we're not out there, advocating and highlighting all the amazing things that other women are doing in the ecosystem, who will? >> Well, I've got to say, I want to give you props for that program. Not only was it groundbreaking, it's still running strong. And I saw some things on LinkedIn that were really impressive in its network effect. And I met at least half a dozen new people I never would have met before through some of that content interaction and engagement. And this is like the power of the current world. I mean, getting the voices out there creates momentum. And it's good for Amazon. It's not just personal brand building for my next job or whatever, you know, reason. It's sharing and it's attracting others, and it's causing people to connect and meet each other in that world. So it's still going strong. (laughs) And this program we did last year was part of Rachel Thornton, who's now at MessageBird, and Mary Camarata. They were the sponsors for this International Women's Day. They're not there anymore, so we decided we're going to do it again because the impact is so significant. We had the Amazon Education group on. It's amazing and it's free, and we've got to get the word out. I mean, talk about leveling up fast. You get in and you get trained and get certified, and there's a zillion jobs out (laughs) there in cloud, right, and partners. So this kind of leadership is really important. What was the key learnings that you've taken away and how do you extend this opportunity to nurture the talent out there in the field? 
Because when you throw the content out there from great leaders and practitioners and developers, it attracts other people. >> It does. It does. So look, I think there's two types of people, people that are focused on being and people who are focused on doing. And let me give you an example, right? When we think about labels of, hey, Rachel's a female executive who launched Women of the Cloud, that label really limits me. I'd rather just be a great executive. Or, hey, there's a great entrepreneur. Let's not be a great entrepreneur. Just go build something and sell it. And that's part of this whole Women of the cloud, is I don't want people focused on what their label is. I want people sharing their stories about what they're doing, and that's where the lasting impact happens, right? I think about something that my grandmother used to tell me, and she used to tell me, "Rachel, how successful "you are, doesn't matter. "The lasting impact that you have "is your legacy in this very finite time "that you have on Earth. "Leave a legacy." And that's what Women of the Cloud is about. So that people can start to say, "Oh, geez, "I didn't know that that was possible. "I didn't think about my career in that way." And, you know, all of those different types of stories that you're hearing out there. >> And I want to highlight something you said. We had another Amazonian on the program for this day earlier and she coined a term, 'cause inside Amazon, you have common language. One of them is bar raising. Raise the bar, that's an Amazonian (Rachel laughs) term. It means contribute and improve and raise the bar of capability. She said, "Bar raising is gender neutral. "The bar is a bar." And I'm like, wow, that was amazing. Now, that means your contribution angle there highlights that. What's the biggest challenge to get that mindset set in culture, in these- >> Oh. >> 'Cause it's that simple, contribution is neutral. >> It absolutely is neutral, but it's like I said earlier, I think so many times, people are focused on success and being a great leader versus what's the contribution I'm making and how am I doing as a leader, you know? And when it comes to a lot of the leadership principles that Amazon has, including bar raising, which means insisting on the highest standards, and then those standards continue to raise every single time. And what that is all about is having all of our employees figure out, how do I get better every single day, right? That's what it's about. It's not about being better than the peer next to you. It's about how do I become a better leader, a better human being than I was yesterday? >> Awesome. >> You know, I read this really cute quote and I think it really resonates. "You meditate to upgrade your software "and you work out to upgrade your hardware." And while it's important that we're all ourselves at work, we can't deny that a lot of times, ourselves still need that meditation or that workout. >> Well, I hope I don't have any zero days in my software out there, so, but I'm going to definitely work on that. I love that quote. I'm going to use that. Thank you very much. That was awesome. I got to ask you, I know you're really passionate about, and we've talked about this, around, so you're a great leader but you're also focused on what's behind you in the generation, pipelining women leaders, okay? Seats at the table, mentoring and sponsorship. What can we do to build a strong pipeline of leaders in technology and business? 
And where do you see the biggest opportunity to nurture the talent in these fields? >> Hmm, you know, that's great, great question. And, you know, I just read a "Forbes" article by another Amazonian, Tanuja Randery, who talked about, you know, some really interesting stats. And one of the stats that she shared was, you know, by 2030, less than 25% of tech specialists will be female, less than 25%. That's only a 6% growth from where we are in 2023, so in seven years. That's alarming. So we've really got to figure out what are the kinds of things that we're going to go do from an Amazon perspective to impact that? And one of the obvious starting points is showcasing tech careers to girls and young women, and talking openly about what a technology career looks like. So specifically at Amazon, we've got an AWS GetIT program that helps schools and educators bring in tech role models to show them what potential careers look like in tech. I think that's one great way that we can help build the pipeline, but once we get the pipeline, we also have to figure out how we don't let that pipeline leak. Meaning how do we keep women and, you know, young women on their tech career? And I think a big part of that, John, is really talking about how hard it is, but it's also greater than you can ever imagine. And letting them see executives that are very authentic and will talk about, geez, you know, the challenges of COVID were a time of crisis and accelerated change, and here's what it meant to me personally and here's what we were able to solve professionally. These younger generations are all about social impact, they're about economic impact and they're about financial impact. And if we're not talking about all three of those, both from how AWS is leading from the front, but how its executives are also taking that into their personal lives, they're not going to want to go into tech. >> Yeah, and I think one of the things you mentioned there about getting people that get IT, good call out there, but also, Amazon's going to train 30 million people, put hundreds of millions of dollars into education. And not only are they making it easier to get in to get trained, but once you're in, even savvy folks that are in there still have to accelerate. And there's more ways to level up, more things are happening, but there's a big trend around people changing careers either in their late 20s, early 30s, or even those moments you talk about, where it's before and after, even later in the careers, 40s, 50s. Leaders like, well, good experience, good training, who were in another discipline who re-skilled. So you have, you know, more certifications coming in. So there's still other pivot points in the pipeline. It's not just down here. And that, I find that interesting. Are you seeing that same leadership opportunities coming in where someone can come into tech older? >> Absolutely. You know, we've got some amazing programs, like Amazon Returnity, that really focuses on how do we get other, you know, how do we get women that have taken some time off of work to get back into the workforce? And here's the other thing about switching careers. If I look back on my career, I started out as a civil engineer, heavy highway construction. And now I lead a sales team at the largest cloud company in the world. And there were, you know, twists and turns around there. I've always focused on how do we change and how do we continue to evolve? So it's not just focused on, you know, young women in the pipeline.
It's focused on all gender and all diverse types throughout their career, and making sure that we're providing an inclusive environment for them to bring in their unique skillsets. >> Yeah, a building has good steel. It's well structured. Roads have great foundations. You know, you got the builder in you there. >> Yes. >> So I have to ask you, what's on your mind as a tech athlete, as an executive at AWS? You know, you got your huge team, big goals, the economy's got a little bit of a headwind, but still, cloud's transforming, edge is exploding. What's your outlook as you look out in the tech landscape these days and how are you thinking about it? What your plans? Can you share a little bit about what's on your mind? >> Sure. So, geez, there's so many trends that are top of mind right now. Everything from zero trust to artificial intelligence to security. We have more access to data now than ever before. So the opportunities are limitless when we think about how we can apply technology to solve some really difficult customer problems, right? Innovation sometimes feels like it's happening at a rapid pace. And I also say, you know, there are years when nothing happens, and then there's years when centuries happen. And I feel like we're kind of in those years where centuries are happening. Cloud technologies are refining sports as we know them now. There's a surge of innovation in smart energy. Everyone's supply chain is looking to transform. Custom silicon is going mainstream. And frankly, AWS's customers and partners are expecting us to come to them with a point of view on trends and on opportunities. And that's what differentiates us. (John laughs) That's what gives me goosebumps- >> I was just going to ask you that. Does that give you goosebumps? How could you not love technology with that excitement? I mean, AI, throw in AI, too. I just talked to Swami, who heads up the AI and database, and we just talked about the past 24 months, the change. And that is a century moment happening. The large language models, computer vision, more compute. Compute's booming than ever before. Who thought that was going to happen, is still happening? Massive change. So, I mean, if you're in tech, how can you not love tech? >> I know, even if you're not in tech, I think you've got to start to love tech because it gives you access to things you've never had before. And frankly, right, change is the only constant. And if you don't like change, you're going to like being irrelevant even less than you like change. So we've got to be nimble, we've got to adapt. And here's the great thing, once we figure it out, it changes all over again. And it's not something that's easy for any of us to operate. It's hard, right? It's hard learning new technology, it's hard figuring out what do I do next? But here's the secret. I think it's hard because we're doing it right. It's not hard because we're doing it wrong. It's just hard to be human and it's hard to figure out how we apply all this different technology in a way that positively impacts us, you know, economically, financially, environmentally and socially. >> And everyone's different, too. So you got to live those (mumbles). I want to get one more question in before we, my last question, which is about you and your impact. When you talk to your team, your sales, you got a large sales team, North America. And Tanuja, who you mentioned, is in EMEA, we're going to speak with her as well. 
You guys lead the front lines, helping customers, but also delivering the revenue to the company, which has been fantastic, by the way. So what's your message to the troops and the team out there? When you say, "Take that hill," like what is the motivational pitch, in a few sentences? What's the main North Star message in today's marketplace when you're doing that big team meeting? >> I don't know if it's just limited to a team meeting. I think this is a universal message, and the universal message for me is find your edge, whatever that may be. Whether it is the edge of what you know about artificial intelligence and neural networks or it's the edge of how do we migrate our applications to the cloud more quickly. Or it's the edge of, oh, my gosh, how do I be a better parent and still be great at work, right? Find your edge, and then sharpen it. Go to the brink of what you think is possible, and then force yourself to jump. Get involved. The world is run by the people that show up, professionally and personally. (John laughs) So show up and get started. >> Yeah as Steve Jobs once said, "The future "that everyone looks at was created "by people no smarter than you." And I love that quote. That's really there. Final question for you. I know we're tight on time, but I want to get this in. When you think about your impact on your company, AWS, and the industry, what's something you want people to remember? >> Oh, geez. I think what I want people to remember the most is it's not about what you've said, and this is a Maya Angelou quote. "It's not about what you've said to people "or what you've done, "it's about how you've made them feel." And we can all think back on leaders or we can all think back on personal moments in our lives where we felt like we belonged, where we felt like we did something amazing, where we felt loved. And those are the moments that sit with us for the rest of our lives. I want people to remember how they felt when they were part of something bigger. I want people to belong. It shouldn't be uncommon to talk about feelings at work. So I want people to feel. >> Rachel, thank you for your time. I know you're really busy and we stretched you a little bit there. Thank you so much for contributing to this wonderful day of great leaders sharing their stories. And you're an inspiration. Thanks for everything you do. We appreciate you. >> Thank you. And let's go do some more Women of the Cloud videos. >> We (laughs) got more coming. Bring those stories on. Back up the story truck. We're ready to go. Thanks so much. >> That's good. >> Thank you. >> Okay, this is theCUBE's coverage of International Women's Day. It's not just going to be March 8th. That's the big celebration day. It's going to be every quarter, more stories coming. Stay tuned at siliconangle.com and thecube.net here, with bringing all the stories. I'm John Furrier, your host. Thanks for watching. (gentle music)

Published Date: Mar 6, 2023


Chris Jones, Platform9 | Finding your "Just Right” path to Cloud Native


 

(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Got a great conversation around Cloud Native, Cloud Native Journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So when we always chat about, when we're at KubeCon. KubeConEU is coming up and in a few, in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old school way of kind of getting stuff done. So that's cool. Security Kubernetes and Containers are all kind of now that next level. So you're starting to see the early adopters moving to the mainstream. Enterprises, a variety of different approaches. You guys are at the center of this. We've had a couple conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enable developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this Container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through in your mind and how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps have been around, DevSecOps is rocking, you're seeing AI, hot trend now. Developers are still in charge. Is there a change to the infrastructure of how developers get their coding done and the infrastructure, setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is the, what is the, I guess what is the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with the, let's start with what they want to do. What's the goal and then how does that happen? >> I think the goal is that promise land. Increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual that gave me a higher level of density, packing of resources. I'm moving to Containers. I'm removing that OS layer again. I'm getting a better density again, but all of a sudden I'm running Kubernetes. What does that, what does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality, is you can't just take a Container and drop it into Kubernetes and say, hey, I'm now Cloud Native. I've got reduced cost, or I've got better resiliency. There's things that your engineering teams need to do to make sure that application is a Cloud Native. 
And then there's what I think is one of the largest shifts of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team have this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it, we gave it all a RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what we've got is that's not a set resource limit, really is like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing over the fence and now it's just going to run on a VM, and that VMs got everything it needs. It's now really running on more, much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because that Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'll do my scale tests as an application developer and tester, and I'd see what resources it needs. I asked for that in the VM, that sets the high watermark, job's done. Well, Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer. I used to pay attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they've wanted, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because, infrastructure's code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at I think the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operation teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air gap environment like a, let's say a telco provider that needs to run a containerized network function and containerized workloads for 5G. 
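(Editor's note: the requests-and-limits shift described above maps to the `resources` block of a container spec. Below is a minimal sketch with a hypothetical workload name, image, and sizing values that are not from the interview.)

```yaml
# Hypothetical Deployment fragment: the "high watermark" that used to be a VM size
# is now declared per container, and it directly affects neighbors on a shared node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                                        # placeholder workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.4.2    # placeholder image
          resources:
            requests:        # what the scheduler reserves for this pod on a shared node
              cpu: "500m"
              memory: "1Gi"
            limits:          # hard ceiling; exceeding the memory limit gets the container OOM-killed
              cpu: "2"
              memory: "2Gi"
```

(Unlike over-provisioning on a hypervisor, these numbers are a contract between the application team that writes the manifest and the platform team that runs the shared cluster, which is the ownership question raised above.)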
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
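(Editor's note: the VMs-alongside-containers pattern referenced here is KubeVirt, where a virtual machine is declared as just another Kubernetes resource. Below is a minimal generic sketch with a placeholder VM name and disk image; it is not a Platform9-specific configuration.)

```yaml
# Hypothetical KubeVirt VirtualMachine: a lifted-and-shifted VM managed by the same
# API server, RBAC, and tooling as the containerized workloads around it.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-billing-vm                                  # placeholder name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/example/legacy-billing-disk:latest   # placeholder disk image
```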
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it now, can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure or run the Kubernetes, but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And that, and then people can get standardized on that. But as you get public cloud and do it yourself, kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from say, some the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you upgrade a hundred worker nodes in place upgrade. We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call Kubernetes friends, your core DNSs, your metrics server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring looking at how to get the best out of what they got. It's not just open the tap of spend the cash like it used to be. Yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at the what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming. 
I mean, there's a lot coming for the apps that are building in Data Native, AI-Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean to use a customer, we have a customer that's in AI/ML and they run their platform at customer sites and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services into their customers that's running their AI/ML platform in their customer's data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services whilst they're enabling our team. And our 24 by seven proactive management are always on assurance to keep that up and running for them. So when something goes bump at the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you for the folks watching that aren't yet a customer of Platform9, what are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth and keeping their organization size the same. So we started on our containerization journey. We went to Kubernetes. We've deployed all these new workloads and our operations team is still six people. We're doing way more with growth less, and I think that's also talking to the strength that we're bringing, 'cause we're, we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not- >> Well you guys do the heavy lifting, keep on top of the Kubernetes, make sure that all the versions are all done. Everything's stable and consistent so they can go on and do the build out and provide their services. That seems to be what you guys are best at. >> Correct, correct. >> And so what's on the roadmap? You have the product, direct product management, you get the keys to the kingdom. What is, what is the focus? What's your focus right now? Obviously Kubernetes is growing up, Containers. We've been hearing a lot at the last KubeCon about the security containers is getting better. You've seen verification, a lot more standards around some things. What are you focused on right now for at a product over there? >> Edge is a really big focus for us. And I think in Edge you can look at it in two ways. The mantra that I drive is Edge must be remote. 
If you can't do something remotely at the Edge, you are using a human being, that's not Edge. Our Edge management capabilities and being in the market for over two years are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies. But unfortunately, regulatory compliance reasons means that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure. We're still closed that network down, or they're still on-prem. So they're two big priorities for us this year. And that on-premise experiences is paramount, even to the point that we will be delivering a way that when you run an on-premise, you can still say, wait a second, well I can send outbound alerts to Platform9. So their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big, they're big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization is something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt, there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? It mean in terms of architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? What's the, what's? How would you talk to a customer and say, what does that mean if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function. I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts content. We're completely compliant upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing in all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct? >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, building that kind of PaaS experience? Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question what's happening? Just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe where for two big contributors to Argo and the Argo projects, they almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things. 
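(Editor's note: the Argo CD approach mentioned here, exposing Kubernetes to developers through Git rather than kubectl, typically centers on an Application resource. Below is a minimal sketch with a placeholder repository URL and paths.)

```yaml
# Hypothetical Argo CD Application: developers commit manifests to Git and the
# controller reconciles the cluster to match what is in the repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/orders-api.git   # placeholder repo
    targetRevision: main
    path: deploy/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true       # remove resources that were deleted from Git
      selfHeal: true    # revert manual drift back to the Git state
```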
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay that, lay it down, and consume it as a service and not have to- >> And you mentioned Intuit. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean go and track the stars on that one. It's just, it's growth. It's unlocking machine learning. Argo workflows can do more than just make things happen. Argo CD I think the approach they're taking is, hey let's make this simple to use, which I think can be lost. And I think credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just that, hey, here's a GitOps tool. It can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well how did we get to production in the first place? And what can be done down there to help as well? I think it's growth expansion of features. They had a huge release just come out in, I think it was 2.6, that brought in things that as a product manager that I don't often look at like really deep technical things and say wow, that's powerful. But they have, they've got some great features in that release that really do solve real problems. >> And as the product, as the product person, who's the target buyer for you? Who's the customer? Who's making that? And you got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well I have access to a Kubernetes cluster. Do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in. And they can see the clusters you've given them access to as a Platform Ops Administrator. >> So job well done for you guys. And your mind is the developers are moving 'em fast, coding and happy. >> Chris: Yeah, yeah. >> And and from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you got efficiency and maintenance costs kind of reduced or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean this is what we're at. Now we're at this scale next level, abstracted away and make it operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with the cloud SecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project Ironic, which is now also available within Kubernetes that is Metal Kubed that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry was, wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved and it's focused on by a number of different open source projects. Not just Ironic and and Metal Kubed, but that's a huge win. That is truly getting your infrastructure. >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems. We had with the Bare Metal piece was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you got a problem with it, just give the person using it or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And when especially it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old. 
It's a fantastic tool, capable of many great things, and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breed, which is solving that tomorrow problem with tomorrow solutions. >> So it's the new solutions versus the old. I like this old way, new way framing. I mean, Terraform is not that old, it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay, with Terraform, what does the new way address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole other level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of that application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifest, be it Helm, Kustomize, or just a straight up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well why can't we do the same thing with the cluster, the inception problem. So even if you're in public cloud, the question is like, well what's calling that API to call that thing to happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of the big pieces to be solved. >> Yeah, and you talk about Edge too and on-premises. I think one of the things I'm observing, and certainly when DevOps was rocking and rolling and infrastructure as code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premise cloud operation and Edge. Is that the new DevOps? I mean 'cause what you're kind of getting at with that old way, new way Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day, or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public, on-premise, and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is that that term where everyone's saying Platform Ops, right? Has it shifted? >> Well you bring up a good point about Terraform. I mean Terraform is well proven. People love it. It's got great use cases and now there seems to be new things happening. We call things like super cloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on.
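Chris's point about version-controlling the manifest, "plain and boring," is easiest to see with the artifact itself. Here is a minimal sketch of rendering a straight-up Kubernetes Deployment manifest from a few reviewed inputs so the output can live in Git, where every change carries an author, a diff, and a reason; the application name and image are placeholders.

```python
# Illustrative only: a plain, boring Deployment manifest rendered from a few
# reviewed inputs so the result can be committed to Git and audited as a diff.
# The application name and image below are placeholders.
import yaml  # PyYAML

def render_deployment(name: str, image: str, replicas: int) -> str:
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {"name": name, "image": image, "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }
    # Stable key ordering keeps the committed file deterministic, so a Git diff
    # shows only the change that was actually made: what changed, and by whom.
    return yaml.safe_dump(manifest, sort_keys=True)

if __name__ == "__main__":
    print(render_deployment("payments", "registry.example.com/payments:1.4.2", replicas=3))
```

The same idea extends to the cluster itself, the "inception problem" raised above: the file that declares the cluster can be version-controlled and reconciled the same way.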
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense. So you just, now it's running and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, short-fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just that people are getting short-tempered. I think it's more that applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now, infrastructure scaling up massively. So, there's a double-edged sword to scale. >> Chris: Yeah, yeah. I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, on one hand, enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting, one little (snaps finger) thing in the software supply chain, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that that's kind of an issue right now? >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote, and one of the things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation at re:MARS last year about space. It's hard to do break fix in space. It's hard to roll someone out to configure a satellite, right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going to the next level. Some pretty cool stuff, and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see, as the product person, incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back, and to use one of those, like, old ITIL terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning, to start with, comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to get that training data.
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. It's been a wide-ranging conversation on Kubernetes and Platform9. I guess my final question would be, how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile steer, you got to look at the customers, you got to look at what's going on in the engineering, and you got to kind of have that arc, this is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up and we're seeing Kubernetes kind of break away with security. They didn't call it KubeCon Security, they called it CloudNativeSecurityCon, and they just had the inaugural event in Seattle, which seemed to go well. So security is kind of breaking out and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of how Platform9 and Kubernetes and Ops evolve? >> To stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform. It's something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well, we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user base, right, we're saying bring your developers in, they can download their kubeconfig. They can see those Containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like, what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus, that should be a specialized capability instead of tooling, and vendors that we work with, that we partner with, that could be brought in as a service? I think it's looking at those trends and making sure that what we bring in has the biggest impact to our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about? When should they call you guys and when should they engage? Take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be in the cloud, that shouldn't be at the Edge, that shouldn't be at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale.
Provide them with a simple platform that's integrated with all of your enterprise identity where they can just start consuming, instead of having to solve these problems themselves. It's those two personas, right, where the problems manifest. What are my operations teams doing, and are they delivering to my company or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment, and how can I get them back? >> I think it's developer productivity, uptime, security are the telltale signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)
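Chris's closing point about feeding fleet telemetry through a continuous improvement loop starts with something mundane: spotting readings that fall outside recent behavior. The sketch below is a deliberately generic illustration of that first step using a rolling z-score; it is not how Platform9's system is built, and the sample data is invented.

```python
# Toy illustration of the "learn from fleet telemetry" idea: flag readings that
# sit far outside the recent rolling window. Generic and illustrative only.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples more than `threshold` std devs from the rolling mean."""
    recent = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(recent) == window:
            mu, sigma = mean(recent), pstdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        recent.append(value)

if __name__ == "__main__":
    # e.g. per-minute API latency in milliseconds, with one obvious spike
    latencies = [12, 14, 13, 15, 12, 13, 14, 13, 12, 15] * 3 + [250, 13, 14, 12]
    for idx, val in detect_anomalies(latencies, window=10):
        print(f"sample {idx}: {val} ms looks anomalous")
```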

Published Date : Feb 17 2023


Nadir Izrael, Armis | Manage Risk with the Armis Platform


 

(upbeat music) >> Today's organizations are overwhelmed by the number of different assets connected to their networks, which now include not only IT devices and assets, but also a lot of unmanaged assets, like cloud, IoT, building management systems, industrial control systems, medical devices, and more. That's not just it, there's more. We're seeing massive volume of threats, and a surge of severe vulnerabilities that put these assets at risk. This is happening every day. And many, including me, think it's only going to get worse. The scale of the problem will accelerate. Security and IT teams are struggling to manage all these vulnerabilities at scale. With the time it takes to exploit a new vulnerability, combined with the lack of visibility into the asset attack surface area, companies are having a hard time addressing the vulnerabilities as quickly as they need. This is today's special CUBE program, where we're going to talk about these problems and how they're solved. Hello, everyone. I'm John Furrier, host of theCUBE. This is a special program called Managing Risk Across Your Extended Attack Surface Area with Armis, new asset intelligence platform. To start things off, let's bring in the co-founder and CTO of Armis, Nadir Izrael. Nadir, great to have you on the program. >> Yeah, thanks for having me. >> Great success with Armis. I want to just roll back and just zoom out and look at, what's the big picture? What are you guys focused on? What's the holy grail? What's the secret sauce? >> So Armis' mission, if you will, is to solve to your point literally one of the holy grails of security teams for the past decade or so, which is, what if you could actually have a complete, unified, authoritative asset inventory of everything, and stressing that word, everything. IT, OT, IoT, everything on kind of the physical space of things, data centers, virtualization, applications, cloud. What if you could have everything mapped out for you so that you can actually operate your organization on top of essentially a map? I like to equate this in a way to organizations and security teams everywhere seem to be running, basically running the battlefield, if you will, of their organization, without an actual map of what's going on, with charts and graphs. So we are here to provide that map in every aspect of the environment, and be able to build on top of that business processes, products, and features that would assist security teams in managing that battlefield. >> So this category, basically, is a cyber asset attack surface management kind of focus, but it really is defined by this extended asset attack surface area. What is that? Can you explain that? >> Yeah, it's a mouthful. I think the CAASM, for short, and Gartner do love their acronyms there, but CAASM, in short, is a way to describe a bit of what I mentioned before, or a slice out of it. It's the whole part around a unified view of the attack surface, where I think where we see things, and kind of where Armis extends to that is really with the extended attack surface. That basically means that idea of, what if you could have it all? What if you could have both a unified view of your environment, but also of every single thing that you have, with a strong emphasis on the completeness of that picture? If I take the map analogy slightly more to the extreme, a map of some of your environment isn't nearly as useful as a map of everything. 
If you had to, in your own kind of map application, you know, chart a path from New York to whichever your favorite surrounding city, but it only takes you so far, and then you sort of need to do the rest of it on your own, not nearly as effective, and in security terms, I think it really boils down into you can't secure what you can't see. And so from an Armis perspective, it's about seeing everything in order to protect everything. And not only do we discover every connected asset that you have, we provide a risk rating to every single one of them, we provide a criticality rating, and the ability to take action on top of these things. >> Having a map is huge. Everyone wants to know what's in their inventory, right, from a risk management standpoint, also from a vulnerability perspective. So I totally see that, and I can see that being the holy grail, but on the vulnerability side, you got to see everything, and you guys have new stuff around vulnerability management. What's this all about? What kind of gaps are you seeing that you're filling in the vulnerability side, because, okay, I can see everything. Now I got to watch out for threat vectors. >> Yeah, and I'd say a different way of asking this is, okay, vulnerability management has been around for a while. What the hell are you bringing into the mix that's so new and novel and great? So I would say that vulnerability scanners of different sorts have existed for over a decade. And I think that ultimately what Armis brings into the mix today is how do we fill in the gaps in a world where critical infrastructure is in danger of being attacked by nation states these days, where ransomware is an everyday occurrence, and where I think credible, up-to-the-minute, and contextualize vulnerability and risk information is essential. Scanners, or how we've been doing things for the last decade, just aren't enough. I think the three things that Armis excels at and completes the security staff today on the vulnerability management side are scale, reach, and context. Scale, meaning ultimately, and I think this is of no news to any enterprise, environments are huge. They are beyond huge. When most of the solutions that enterprises use today were built, they were built for thousands, or tens of thousands of assets. These days, we measure enterprises in the billions, billions of different assets, especially if you include how applications are structured, containers, cloud, all that, billions and billions of different assets, and I think that, ultimately, when the latest and greatest in catastrophic new vulnerabilities come out, and sadly, that's a monthly occurrence these days. You can't just now wait around for things to kind of scan through the environment, and figure out what's going on there. Real time images of vulnerabilities, real time understanding of what the risk is across that entire massive footprint is essential to be able to do things, and if you don't, then lots and lots of teams of people are tasked with doing this day in, day out, in order to accomplish the task. The second thing, I think, is the reach. Scanners can't go everywhere. They don't really deal well with environments that are a mixed IT/OT, for instance, like some of our clients deal with. They can't really deal with areas that aren't classic IT. And in general, these days over 70% of assets are in fact of the unmanaged variety, if you will. 
So combining different approaches from an Armis standpoint of both passive and active, we reach a tremendous scale, I think, within the environment, and ability to provide or reach that is complete. What if you could have vulnerability management, cover a hundred percent of your environment, and in a very effective manner, and in a very scalable manner? And the last thing really is context. And that's a big deal here. I think that most vulnerability management programs hinge on asset context, on the ability to understand, what are the assets I'm dealing with? And more importantly, what is the criticality of these assets, so I can better prioritize and manage the entire process along the way? So with these things in mind, that's what Armis has basically pulled out is a vulnerability management process. What if we could collect all the vulnerability information from your entire environment, and give you a map of that, on top of that map of assets? Connect every single vulnerability and finding to the relevant assets, and give you a real way to manage that automatically, and in a way that prevents teams of people from having to do a lot of grunt work in the process. >> Yeah, it's like building a search engine, almost. You got the behavioral, contextual. You got to understand what's going on in the environment, and then you got to have the context to what it means relative to the environment. And this is the criticality piece you mentioned, this is a huge differentiator in my mind. I want to unpack that. Understanding what's going on, and then what to pay attention to, it's a data problem. You got that kind of search and cataloging of the assets, and then you got the contextualization of it, but then what alarms do I pay attention to? What is the vulnerability? This is the context. This is a huge deal, because your businesses, your operation's going to have some important pieces, but also it changes on agility. So how do you guys do that? That's, I think, a key piece. >> Yeah, that's a really good question. So asset criticality is a key piece in being able to prioritize the operation. The reason is really simple, and I'll take an example we're all very, very familiar with, and it's been beaten to death, but it's still a good example, which is Log4j, or Log4Shell. When that came out, hundreds of people in large organizations started mapping the entire environment on which applications have what aspect of Log4j. Now, one of the key things there is that when you're doing that exercise for the first time, there are literally millions of systems in a typical enterprise that have Log4j in them, but asset criticality and the application and business context are key here, because some of these different assets that have Log4j are part of your critical business function and your critical business applications, and they deserve immediate attention. Some of them, or some Git server of some developer somewhere, don't warrant quite the same attention or criticality as others. Armis helps by providing the underlying asset map as a built-in aspect of the process. It maps the relationships and dependencies for you. It pulls together and clusters together. What applications does each asset serve? So I might be looking at a server and saying, okay, this server, it supports my ERP system. It supports my production applications to be able to serve my customers. It serves maybe my .com website. 
Understanding what applications each asset serves and every dependency along the way, meaning that endpoint, that server, but also the load balancers they're supported by, and the firewalls, and every aspect along the way, that's the bread and butter of the relationship mapping that Armis puts into place to be able to do that, and we also allow users to tweak, add information, connect us with their CMDB or anywhere else where they put this in, but once the information is in, that can serve vulnerability management. It can serve other security functions as well. But in the context of vulnerability management, it creates a much more streamlined process for being able to do the basics. Some critical applications, I want to know exactly what all the critical vulnerabilities that apply to them are. Some business applications, I just want to be able to put SLAs on, that this must be solved within a week, this must be solved within a month, and be able to actually automatically track all of these in a world that is very, very complex inside of an operation or an enterprise. >> We're going to hear from some of your customers later, but I want to just get your thoughts on, anecdotally, what do you hear from them? You're the CTO, co-founder, you're actually going into the big accounts. When you roll this out, what are they saying to you? What are some of the comments? Oh my God, this is amazing. Thank you so much. >> Well, of course. Of course. >> Share some of the comments. >> Well, first of all, of course, that's what they're saying. They're saying we're great. Of course, always, but more specifically, I think this solves a huge gap for them. They are used to tools coming in and discovering vulnerabilities for them, but really close to nothing being able to streamline the truly complex and scalable process of being able to manage vulnerabilities within the environment. Not only that, the integration-led, designer-led deployment and the fact that we are a completely agent-less SaaS platform are extremely important for them. These are times where if something isn't easily deployable for an enterprise, its value is next to nothing. I think that enterprises have come to realize that if something isn't a one click deployment across the environment, it's almost not worth the effort these days, because environments are so complex that you can't fully realize the value any other way. So from an Armis standpoint, the fact that we can deploy with a few clicks, the fact that we immediately provide that value, the fact that we're agent-less, in the sense that we don't need to go around installing a footprint within the environment, and for clients who already have Armis, the fact that it's a flip of a switch, just turn it on, are extremely important. I think, in particular, the fact that Armis' vulnerability management can be deployed on top of the existing vulnerability scanner with a simple one-click integration is huge for them. And I think all of these together are what contribute to them saying how great this is. But yeah, that's it. >> The agentless thing is huge. What's the alternative? What does it look like if they're going to go the other route, slow to deploy, have meetings, launch it in the environment? What's it look like? >> I think anything these days that touches an endpoint with an agent goes through a huge round of approvals before anything goes into an environment. Same goes, by the way, for additional scanners. No one wants to hear about additional scanners.
They've already gone through the effort with some of the biggest tools out there to punch holes through firewalls, to install scanners in different ways. They don't want yet another scanner, or yet another agent. Armis rides on top of the existing infrastructure, the existing agents, the existing scanners. You don't need to do a thing. It just deploys on top of it, and that's really what makes this so easy and seamless. >> Talk about Armis research. Can you talk about, what's that about? What's going on there? What are you guys doing? How do you guys stay relevant for your customers? >> For sure. So one of the, I've made a lot of bold claims throughout, I think, the entire Q and A here, but one of the biggest magic components, if you will, to Armis that kind of help explain what all these magic components are, are really something that we call our collective asset knowledge base. And it's really the source of our power. Think of it as a giant collective intelligent that keeps learning from all of the different environments combined that Armis is deployed at. Essentially, if we see something in one environment, we can translate it immediately into all environments. So anyone who joins this or uses the product joins this collective intelligence in essence. What does that mean? It means that Armis learns about vulnerabilities from other environments. A new Log4j comes out, for instance. It's enough that, in some environments, Armis is able to see it from scanners, or from agents, or from SBOMs, or anything that basically provides information about Log4j, and Armis immediately infers or creates enrichment rules that act across the entire tenant base, or the entire client base of Armis. So very quick response to industry events, whenever something comes out, again, the results are immediate, very up to the minute, very up to the hour, but also I'd say that Armis does its own proactive asset research. We have a huge data set at our disposal, a lot of willing and able clients, and also a lot of partners within the industry that Armis leverages, but our own research is into interesting aspects within the environment. We do our own proactive research into things like TLStorm, which is kind of a bit of a bridging research and vulnerabilities between cyber physical aspect. So on the one hand, the cyber space and kind of virtual environments, but on the other hand, the actual physical space, vulnerabilities, and things like UPSs, or industrial equipment, or things like that. But I will say that also, Armis targets its research along different paths that we feel are underserved. We started a few years back research into firmwares, different types of real time operating systems. We came out with things like URGENT/11, which was research into, on the one hand, operating systems that run on two billion different devices worldwide, on the other hand, in the 40 years it existed, only 13 vulnerabilities were ever exposed or revealed about that operating system. Either it's the most secure operating system in the world, or it's just not gone through enough rigor and enough research in doing this. The type of active research we do is to complement a lot of the research going on in the industry, serve our clients better, but also provide kind of inroads, I think, for the industry to be better at what they do. >> Awesome, Nadir, thanks for sharing the insights. Great to see the research. You got to be at the cutting edge. 
>> You got to investigate, be ready at a moment's notice on all aspects of the operating environment, down to the hardware, down to the packet level, down to any vulnerability, be ready for it. Great job. Thanks for sharing. Appreciate it. >> Absolutely. >> In a moment, Tim Everson's going to join us. He's the CSO of Kalahari Resorts and Conventions. He'll be joining me next. You're watching theCUBE, the leader in high tech coverage. I'm John Furrier. Thanks for watching. (upbeat music)
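The triage model Nadir outlines, raw severity weighted by asset criticality and mapped to an SLA, can be sketched in a few lines. Everything below is hypothetical: the data shapes, weights, and SLA tiers are invented for illustration and are not Armis' actual scoring model or API.

```python
# Hypothetical sketch of criticality-driven vulnerability triage, in the spirit
# of the approach described above. Data shapes, weights, and SLA tiers are
# invented for illustration and are not Armis' scoring model or API.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Asset:
    name: str
    criticality: int      # 1 (low) .. 5 (business critical), set by the organization
    business_apps: tuple  # applications this asset supports

@dataclass
class Finding:
    cve: str
    cvss: float           # 0.0 .. 10.0
    asset: Asset

def priority(finding: Finding) -> float:
    # Weight raw severity by how critical the underlying asset is.
    return finding.cvss * finding.asset.criticality

def sla(finding: Finding) -> timedelta:
    score = priority(finding)
    if score >= 40:
        return timedelta(days=7)   # e.g. "must be solved within a week"
    if score >= 20:
        return timedelta(days=30)  # e.g. "must be solved within a month"
    return timedelta(days=90)

erp_db = Asset("erp-db-01", criticality=5, business_apps=("ERP",))
dev_git = Asset("dev-git-07", criticality=2, business_apps=("internal tooling",))

findings = [
    Finding("CVE-2021-44228", 10.0, erp_db),   # Log4Shell on a business-critical asset
    Finding("CVE-2021-44228", 10.0, dev_git),  # same CVE on a far less critical asset
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve} on {f.asset.name}: priority {priority(f):.0f}, fix within {sla(f).days} days")
```

The point of the example is the ordering: the same CVE lands at the top of the queue when it sits under a critical business application and much lower when it sits on a throwaway development box.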

Published Date : Jun 21 2022



Sahir Azam & Guillermo Rauch | MongoDB World 2022


 

>> We're back at the Big Apple, theCUBE's coverage of MongoDB World 2022. Sahir Azam is here, he's the Chief Product Officer of MongoDB, and Guillermo Rauch who's the CEO of Vercel. Hot off the keynotes from this morning guys, good job. >> Thank you. >> Thank you. >> Thank you for joining us here. Thanks for having us. Guillermo when it comes to modern web development, you know the back-end, the cloud guys got to it kind of sewn up, >> you know- >> Guillermo: Forget about it. >> But all the action's in the front end, and that's where you are. Explain Vercel. >> Yeah so Vercel is the company that pioneers front-end development as serverless infrastructure. So we built Next.js which is the most popular React framework in the world. This is what front-end engineers choose to build innovative UI's, beautiful websites. Companies like Dior and GitHub and TikTok and Twitch, which we mentioned in the keynote, are powering their entire dot-coms or all of their new parts of their dot-coms with Next.js. And Vercel is the serverless platform where you can deploy frameworks like in Next.js and others like Svelte and Vue to create really fast experiences on the web. >> So I hear, so serverless, I hear that's the hot trend. You guys made some announcements today. I mean when you look at the, we have spending data with our friends at ETR right down the street. I mean it's just off the charts, whether it's Amazon, Google, Azure Functions, I mean it's just exploding. >> Sahir: Yeah, it's I think in many ways, it's a natural trend. You know, we talk a lot about, whether it be today's keynote or another industry talks you see around our industry that developers are constantly looking for ways to focus on innovation and the business logic that defines their application and as opposed to managing the plumbing, and management of infrastructure. And we've seen this happen over and over again across every layer of the stack. And so for us, you know MongoDB, we have a bit of, you know sort of a lens of a broad spectrum of the market. We certainly have you know, large enterprises that are modernizing existing kind of core systems, then we have developers all over the world who are building the next big best thing. And that's what led us to partner with Vercel is just the bleeding edge of developers building in a new way, in a much more efficient way. And we wanted to make sure we provide a data platform that fits naturally in the way they want to work. >> So explain to our audience the trade-offs of serverless, and I want to get into sort of how you've resolved that. And then I want to hear from Guillermo, what that means for developers. >> Sahir: Yeah in our case, we don't view it as an either or, there are certain workloads and definitely certain companies that will gravitate towards a more traditional database infrastructure where they're choosing the configuration of their cluster. They want full control over it. And that provides, you know, certain benefits around cost predictability or isolation or perceived benefits at least of those things. And customers will gravitate towards that. Now on the flip side, if you're building a new application or you want the ability to scale seamlessly and not have to worry about any of the plumbing, serverless is clearly the easier model. So over the long term, we certainly expect to see as a mix of things, more and more serverless workloads being built on our platform and just generally in the industry, which is why we leaned in so heavily on investing in Atlas serverless. 
But the flexibility to not be forced into a particular model, but to get the same database experience across your application and even switch between them is an important characteristic for us as we build going forward. >> And you stressed the cost efficiency, and not having to worry about, you know, starting cold. You've architected around that, and what does that mean for a developer? >> Guillermo: For a developer it means that you kind of get the best of both worlds, right? Like you get the best possible performance. Front-end developers are extremely sensitive to this. That's why us pioneering this concept, serverless front-end, has put us in a very privileged position because we have to deliver that really quick time to first byte, that really quick paint. So any of the old trade-offs of serverless are not accepted by the market. You have to be extremely fast. You have to be instant to deliver that front-end content. So what we talked about today for example, with the Vercel Edge network, we're removing all of the cost of that like first hit. That cold start doesn't really exist. And now we're seeing it all across the board, going into the back-end where Mongo has also gotten rid of it. >> Dave: How do you guys collaborate? What's the focus of integration specifically from, you know, an engineering resource standpoint? >> Yeah the main idea is, idea to global app in seconds, right? You have your idea. We give you the framework. We don't give you infrastructure primitives. We give you all the necessary tools to start your application. In practice this means you host it in a Git repo. You import it onto Vercel. You install the Mongo integration. Now your front-end and your data back-end are connected. And then your application just goes global in seconds. >> So, okay. So you've abstracted away the complexity of those primitives, is that correct? >> Guillermo: Absolutely. >> Do developers ever say, "That's awesome but I'd like to get to them every now and then." Or do you not allow that? >> Definitely. We expose all the underlying APIs, and the key thing we hear is that, especially with the push for usage-based billing models, observability is of the essence. So at any time you have to be able to query, in real time, every data point that the platform is observing. We give you performance analytics in real time to see how your front-end is performing. We give you statistics about how often you're querying your back-end and so on, and your cache hit ratios. So what I talked about today in the keynote is, it's not just about throwing more compute at the problem, but the ability to use the edge to your advantage to memoize computation and reuse it across different visits. >> When we think of mission critical historically, you know, you think about going to the ATM, right? I mean a financial transaction. But Mongo is positioning for mission critical applications across a variety of industries. Do we need to rethink what mission critical means? >> I think it's all in the eye of the beholder so to speak. If you're a new business starting up, your software and your application is your entire business. So if you have a cold start latency or God forbid something actually goes down, you don't have a business. So it's just as mission critical to that founder of a new business and new technology as it is, you know, an established enterprise that's running sort of a more, you know, day-to-day application that we may all interact with.
So we treat all of those scenarios with equal fervor and importance right? And many times, it's a lot of those new experiences that become the day-to-day experiences for us globally, and are super important. And we power all of those, whether it be an established enterprise all the way to the next big startup. >> I often talk about COVID as the forced march to digital. >> Sahir: Mm-Hmm. >> Which was obviously a little bit rushed, but if you weren't in digital business, you were out of business. And so now you're seeing people step back and say, "All right, let's be more thoughtful about our digital transformation. We've got some time, we've obviously learned some things, made some mistakes." It's all about the customer experience though. And that becomes mission critical right? What are you seeing Guillermo, in terms of the patterns in digital transformation now that we're sort of exiting the isolation economy? >> One thing that comes to mind is, we're seeing that it's not always predictable how fast you're going to grow in this digital economy. So we have customers in the ecommerce space, they do a drop and they're piggybacking on serverless to give them that ability to instantly scale. And they couldn't even prepare for some of these events. We see that a lot with the Web3 space and NFT drops, where they're building in such a way that they're not sensitive to these massive fluctuations in traffic. They're taking it for granted. We've put in so much work together behind the scenes to support it. But the digital native creator just, "Oh things are scaling from one second to the next like I'm hitting like 20,000 requests per second, no problem Vercel is handling it." But the amount of infrastructural work that's gone behind the scenes in support has been incredible. >> We see that in gaming all the time, you know it's really hard for a gaming company to necessarily predict where in the globe a game's going to be particularly hot. Games get super popular super fast if they're successful, it's really hard to predict. It's another vertical that's got a similar dynamic. >> So gaming, crypto, so you're saying that you're able to assist your customers in architecting so that the website doesn't crash. >> Guillermo: Absolutely. >> But at the same time, if the business dynamic changes, they can dial down. >> Yeah. >> Right and in many ways, slow is the new down, right? And if somebody has a slow experience they're going to leave your site just as much as if it's- >> I'm out of here- >> You were down. So you know, it's really maintaining that really fast performance, that amazing customer experience. Because this is all measured, it's scientific. Like anytime there's friction in the process, you're going to lose customers. >> So obviously people are excited about your keynote, but what have they been saying? Any specific comments you can share, or questions that you got that were really interesting or? >> I'm already getting links to the apps that people are deploying. So the whole idea- >> Come on! >> All over the world. Yeah so it's already working I'm excited. >> So they were showing off, "Look what I did" Really? >> Yeah on Twitter. >> That's amazing. >> I think from my standpoint, I got a question earlier, we were with a bunch of financial analysts and investors, and they said they've been talking to a lot of the customers in the halls.
And just to see, you know, from the last time we were all in person, the number of our customers that are using multiple capabilities across this idea of a developer data platform, you know, certainly MongoDB's been a popular core database open source for a long time. But the new capabilities around search, analytics, mobile being adopted much more broadly to power these experiences is the most exciting thing from our side. >> So from 2019 to now, you're saying substantial uptick in adoption for these features? >> Yeah. And many of them are new. >> Time series as well, that's pretty new, so yeah. >> Yeah and you know, our philosophy of development at MongoDB is to get capabilities in the hands of customers early. Get that feedback to enrich and drive that product-market fit. And over the last three years especially, we've been transitioning from a single product kind of core, you know, non-relational modern database to a data platform, a developer data platform that adds more and more capabilities to power these modern applications. And a lot of those were released during the pandemic. Certainly we talked about them in our virtual conferences and all the Zoom meetings we had over the years. But to actually go talk to all these customers, this is the largest conference we've ever put on, and to get a sense of, wow all the amazing things they're doing with them, it's definitely a different feeling when we're all together. >> So that's interesting, when you have such a hot product, product-led growth which is what Mongo has been in, and you add these new features. They're coming from the developers who are saying, "Hey, we need this." >> Yep. >> Okay so you have a pretty high degree of confidence, but how do you know when you have product-market fit? I mean, is it adoption, usage, renewals? What's your metric? >> Yeah I think it's a mix of quantitative measures that you know, around conversion rates, the size of your funnel, the retention rate, NPS which obviously can be measured, but also just qualitative. You know when you're talking to a developer or a technology executive around what their needs are, and then you see how they actually apply it to solve a problem, it's that balance between the qualitative and the quantitative measurement of things. And you can just sort of, frankly you can feel it. You can see it in the numbers sure, but you can kind of feel that excitement, you can see that adoption and what it empowers people to do. And so to me, as a product leader, it's always a blend of those things. If you get too obsessed with purely the metrics, you can always over-optimize something for the wrong reason. So you have to bring in that qualitative feedback to balance yourself out. >> Right. >> Guillermo, what's next? What do you not have that you want from Sahir and Mongo? >> So the natural next step for serverless computing is the Edge. So we have to auto-scale, we have to tolerate failures. We have to be available. We have to be easy, but we have to be global. And right now we've been doing this by using a lot of techniques like caching and replication and things like this. But the future's about personalizing even more to each visitor depending on where they are. So if I'm in New York, I want to get the latest offers for New York on demand, just for me, and using AI to continue to personalize that experience. So giving the developer these tools in a way where it feels natural to build an application like this.
It doesn't feel like, "Oh I'm going to do this in year 10 if I make it, I'm going to do it since the very beginning." >> Dave: Okay interesting. So that says to me that I'm not going to make a round trip to the cloud necessarily for that experience. So I'm going to have some kind, Apple today, at the Worldwide Developer Conference announced the M2, right. I've been looking at the M1 Ultra, and I'm going wow look at that! And so- >> Sahir: You were talking about that new one backstage. >> I mean it's this amazing pace of Silicon development and they're focusing on the NPU and you look at what Tesla's doing. I mean it's just incredible. So you're going to have some new hardware architecture that emerges. Most of the AI that's done today is modeling in the cloud. You're going to have real-time inferencing at the Edge. So that's not going to do the round trip. There's going to be a data store there, I think it has to be. You're going to persist some of the data, maybe not all of it. So it's a whole new architecture- >> Sahir: Absolutely. >> That's developing. That sounds very disruptive. >> Sahir: Yeah. >> How do you think about that, and how does Mongo play there? Guillermo first. >> What I spent a lot of time thinking about is obviously the developer experience, giving the programmer a programming model that is natural, intuitive, and produces great results. So if they have to think about data that's local because of regulatory reasons for example, how can we let the framework guide them to success? I'm just writing an application I deployed to the cloud and then everything else is figured out. >> Yeah or speed of light is another challenge. (Sahir and Guillermo laugh) >> How can we overcome the speed of light is our next task for sure. >> Well you're working on that aren't you? You've got the best engineers on that one. (Sahir and Guillermo laugh) >> We can solve a lot of problems, I'm not sure of that one. >> So Mongo plays in that scenario or? >> Yeah so I think, absolutely you know, we've been focused heavily on becoming the globally distributed cloud data layer. The back-end data layer that allows you to persist data to align with performance and move data where it needs to be globally or deal with data sovereignty, data nationalism that's starting to rise, but absolutely there is more data being pushed out to the Edge, to your point around processing or inference happening at the Edge. And there's going to be a globally distributed front-end layer as well, where data and processing take place. And so we're focused on one, making sure the data connectivity and the layer is all connected into one unified architecture. We do that in combination with technologies that we have that deal with mobility or edge distribution and synchronization of data with Realm. And we do it with partnerships. We have edge partnerships with AWS and Verizon. We have partnerships with a lot of CVM players who are building out that Edge platform and making sure that MongoDB is either connected to it or just driving that synchronization back and forth. >> I call that unified experience super cloud, Robbie Belson from Verizon calls it the cloud continuum, but that consistent experience for developers whether you're on-prem, whether you're in you know, Azure, Google, AWS, and ultimately the Edge. That's the big- >> That's where it's going. >> White space right now I'm hearing, Guillermo, right? >> I think it'll define the next generation of how software is built.
And we're seeing this almost like a collision course between some of the ideas that the Web3 developers are excited about, which is like decentralization almost to the extreme. But the Web2 also needs more decentralization, because we're seeing it with like, the data needs to be local to me, I need more privacy. I was looking at the latest encryption features in Mongo, like I think Web2 needs to incorporate more of the ideas of Web3 and vice versa to create the best possible consumer experience. Privacy matters more than ever before. Latency for conversion matters more than ever before. And regulations are changing. >> Sahir: Yeah. >> And you talked about Web3 earlier, talked about new protocols, a new distributed you know, decentralized system emerging, new hardware architectures. I really believe that new economics are going to bleed back into the data center, and yeah every 15 years or so this industry gets disrupted. >> Sahir: Yeah. >> Guillermo: Absolutely. >> You know you ain't seen nothing yet guys. >> We all talked about hardware becoming commoditized 10, 15 years ago- >> Yeah of course. >> We get the virtualization, and it's like nope not at all. It's actually a lot of invention happening. >> The lower the price the more the consumption. So guys thanks so much. Great conversation. >> Thank you. >> Really appreciate your time. >> Really appreciate it, I enjoyed the conversation. >> All right and thanks for watching. Keep it right there. We'll be back with our next segment right after this short break. Dave Vellante for theCUBE's coverage of MongoDB World 2022. >> Man Offscreen: Clear. (clapping) >> All right wow. Don't get up. >> Sahir: Okay. >> Is that a Moonwatch? >> Sahir: It is a Speedmaster but it's that the-
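As a concrete illustration of the workflow Guillermo describes above, host the app in a Git repo, import it into Vercel, add the MongoDB integration, here is a minimal sketch of a Next.js API route reading from MongoDB Atlas. It is a sketch of the pattern only, not code from either company; the environment variable, database, and collection names are placeholders invented for the example.

```typescript
// Hypothetical sketch only: a Next.js API route on Vercel reading from MongoDB Atlas.
// The env var, database, and collection names are invented for the example.
import type { NextApiRequest, NextApiResponse } from "next";
import { MongoClient } from "mongodb";

const uri = process.env.MONGODB_URI as string;

// Cache the client outside the handler so warm serverless invocations reuse the
// connection instead of paying the setup cost on every request.
let clientPromise: Promise<MongoClient> | null = null;

function getClient(): Promise<MongoClient> {
  if (!clientPromise) {
    clientPromise = new MongoClient(uri).connect();
  }
  return clientPromise;
}

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const client = await getClient();
  // "shop" / "offers" are placeholder names; the region parameter nods at the
  // per-visitor personalization idea discussed above (e.g. offers for New York).
  const region = typeof req.query.region === "string" ? req.query.region : "global";
  const offers = await client
    .db("shop")
    .collection("offers")
    .find({ region })
    .limit(10)
    .toArray();
  res.status(200).json({ offers });
}
```

Deployed from a Git repo onto Vercel, with the connection string supplied as an environment variable, a route like this is roughly what "front-end and data back-end connected, global in seconds" comes down to in practice.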

Published Date : Jun 8 2022


Anish Dhar & Ganesh Datta, Cortex | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: TheCUBE presents Kubecon and Cloudnativecon Europe, 2022. Brought to you by Red Hat, the cloud native computing foundation and its ecosystem partners. >> Welcome to Valencia, Spain in Kubecon, Cloudnativecon Europe, 2022. I'm Keith Townsend and we are in a beautiful locale. The city itself is not that big, 100,000, I mean, sorry, about 800,000 people. And we got out, got to see a little bit of the sites. It is an amazing city. I'm from the US, it's hard to put in context how a city of 800,000 people can be so beautiful. I'm here with Anish Dhar and Ganesh Datta, Co-founder and CTO of Cortex. Anish you're CEO of Cortex. We were having a conversation. One of the things that I asked my client is what is good. And you're claiming to answer the question about what is quality when it comes to measuring microservices? What is quality? >> Yeah, I think it really depends on the company and I think that's really the philosophy we have. When we built Cortex, is that we understood that different companies have different definitions of quality, but they need to be able to be represented in really objective ways. I think what ends up happening in most engineering organizations is that quality lives in people's heads. The engineers who write the services they're often the ones who understand all the intricacies with the service. What are the downstream dependencies, who's on call for this service? Where does the documentation live? All of these things I think impact the quality of the service. And as these engineers leave the company or they switch teams, they often take that tribal knowledge with them. And so I think quality really comes down to being able to objectively codify your best practices in some way and have that distributed to all engineers in the company. >> And to add to that, I think very concrete examples for an organization that's already modern like their idea of quality might be uptime incidents. For somebody that's like going through a modernization strategy, they're trying to get to the 21st century, they're trying to get to Kubernetes. For them, quality means where are we in that journey? Are you on our latest platforms? Are you running CI, are you doing continuous delivery? Like quality can mean a lot of things and so our perspective is how do we give you the tools to say as an organization, here's what quality means to us. >> So at first, my mind was going through when you said quality, Anish, you started out the conversation about having this kind of non-codified set of measurements, historical knowledge, et cetera. I was thinking observability, measuring how much time does it take to have a transaction. But Ganesh you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So I think when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff. And I think that's very high level and I think that's one perspective of what quality is, but as you're going through this journey, you might say like the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CICD, you're running vulnerability scans. You're using Docker. 
Like what that means to us can be very different. So observability is just one aspect of: are you doing things the right way? Good to us means you're using SLOs. You are tracking those metrics. You're reporting that somewhere. And so that's like one component for our organization of what quality can mean. >> I'm kind of taken back by this because I've not seen someone kind of give the idea. And I think later on, this is the perfect segment to introduce theCUBE clock in which I'm going to give you a minute to kind of like give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... What's the first process you do when you engage in a customer? Does a customer go and get this off of a repository, install it, the open source version, and then what? I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and an on-prem version of Cortex. It's really straightforward. Basically we have a service discovery onboarding flow where customers can connect to different sets of sources for their services. It could be Kubernetes, ECS, Git Repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service. Almost like a single pane of glass to work with the service. And then once you have all of that data inside Cortex, then you can start writing scorecards, which grade the quality of those services across those different verticals Ganesh was talking about. Like whether it's a monolith, a microservice transition, whether it's production readiness or security standards, you can really start tracking that. And then engineers start understanding where the areas of risk are with my service across reliability or security or operational maturity. I think it gives us insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLOs, that is usually something that might not even be measured. So how do you help me understand that I'm lacking a measurable system for tracking SLOs and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability or reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so that service cataloging concept, the service catalog says, hey, here's my APM integration. Then a scorecard, the organization goes in and says, we want every service owner to define their SLOs, we want you to define your thresholds. We want you to be tracking them, are you passing your SLOs? And so we're not being prescriptive about here's what we think your SLOs should be, ours is more around, hey, if you care about SLOs, we're going to help you, we're going to tell the service owners, hey, you need to have at least two SLOs for your service and you've got to be tracking them. And that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important.
It is a component of like a holistic service reliability excellence metric that we care about. >> So what happens when I already have systems for like SLO, how do I integrate that system with Cortex? >> That's one of the coolest things. So the service catalog can be pretty smart about it. So let's say you've sucked in your services from your GitHub. And so now your services are in Cortex. What we can do is we can actually discover from your APM tools, you can say like, hey, for this service, we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into Cortex. And that's the power of the service catalog. The service catalog says, given a service, here's everything about that service. Here's the vulnerability scans. Here's the APM, the monitors, the SLOs, the JIRA tickets, like all that stuff comes into a single place. And then our scorecards product can go back out and say, hey, Datadog, tell me about these SLOs for the service. And so we're going to get that information live and then score your services against that. And so we're like integrating with all of your third party tools and integrations to create that single pane of glass. >> Yeah, and to add to that, I think one of the most interesting use cases with scorecards is, okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with how do we make sure engineers define SLOs, are passing them, and actually care about them. And scorecards can be used to, one, see which teams are actually meeting these guidelines, and then two, let's get those teams adopted on SLOs. Let's track that; you can do all of that in Cortex, which is I think a really interesting use case that we've seen. >> So let's talk about kind of my use case in the end to end process for integrating Cortex into migrations. So I have this monolithic application, I want to break it into microservices and then I want to ensure that I'm delivering if not, you know what, let's leave it a little bit more open ended. How do I know that I'm better at the end of it? I was in a monolith before; how do I measure that now that I'm in microservices and on cloud native, that I'm better? >> That's a good question. I think it comes down to, and we talk about this all the time for our customers that are going through that process. You can't define better if you don't define a baseline, like what does good mean to us? And so you need to start by saying, why are we moving to microservices? Is it because we want teams to move faster? Is it because we care about reliability, uptime? Like what is the core metric that we're tracking? And so you start by defining that as an organization. And that is kind of like a hand wavy thing. Why are we doing microservices? Once you have that, then you define this scorecard. And that's like our golden path. Once we're done doing this microservice migration, can we say like, yes, we have been successful and those metrics that we care about are being tracked? And so where Cortex fits in is from the very first step of creating a service, you can use Cortex to define templates. Like one click, you go in, it spins up a microservice for you that follows all your best practices. And so from there, ideally you're meeting 80% of your standards already. And then you can use scorecards to track historical progress. So you can say, are we meeting our golden path standards?
Like if it's uptime, you can track uptime metrics and scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization? Are you doing CICD and vulnerability scans, like moving faster as a team? You can track that. And so you can start seeing like trends at a per team level, at a per department level, at a per product level saying, hey, we are seeing consistent progress in the metrics that we care about. And this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand wavy thing. We're told that cloud native helps me to do things faster with less defects so that I can do new opportunities. Let's stretch into kind of this non-tech, this new opportunities perspective. I want to be able to move my microservices. I want to be able to move my architecture to microservices, so I reduce call wait time on my customer service calls. So I can easily see how I can measure are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard, but what about this wait time? I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecard means you can pipe in that data to Cortex. And what we recommend customers is track the outcome metrics and track the input metrics as well. And so what is the input metric to call wait time? Like maybe it's the fact that if something goes wrong, we have the run books to quickly roll back to an older version that we know is running. That way MTTR is faster. Or when something happens, we know the owner for that service and we can go back to them and say like, hey, we're going to ping you as an incident commander. Those are kind of the input metrics to, if we do these things, then we know our call wait time is going to drop because we're able to respond faster to incidents. And so you want to track those input metrics. And then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. So we want to see is there a correlation between those input metrics like are we doing things the right way, versus are we seeing the value that we want to come out of that? And so I think that's the value of Cortex is not so much around, hey, we're going to be prescriptive about it. It's here's this framework that will let you track all of that and say, are we doing things the right way and is it giving us the value that we want? And being able to report that update to engineer leadership and say, hey, maybe these services are not doing like we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that I think is super valuable. >> Yeah, absolutely, I think just to touch on the reporting, I think that's one of the most value add things Cortex can provide. If you think about it, the service is atomic unit of your software. It represents everything that's being built and that bubbles up into teams, products, business units, and Cortex lets you represent that. 
So now I can, as a CTO, come in and say, hey, these product lines, are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost like the best way to get the actual health of your engineering organization. >> All right Anish and Ganesh. We're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on. You're 10 seconds in. >> Oh, we can start talking. Okay, well I would say, Anish was just touching on this. For a CTO, their question is how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing. >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand wavy thing that says like, oh, my team is good, my team is bad. We want to come in and define here's what service quality means. And we want that to be a number. You want that to be something that can- >> A goal without a timeline is just a dream. >> And the CTO comes in and they say, here's what we care about. Here's how we're tracking it. Here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. We can give you that visibility that you never had before. >> For that five seconds. >> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia Spain, I'm Keith Townsend. And you're watching theCube. The leader in high tech coverage. (soft music)
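To make the scorecard idea from this conversation a bit more tangible, here is a small, purely illustrative sketch of grading services against codified standards. It is not Cortex's actual API or configuration format; every type, rule, and name below is invented for the example.

```typescript
// Illustrative-only sketch of the scorecard idea discussed above: codifying
// "what good means to us" as objective, weighted rules evaluated per service.
interface Service {
  name: string;
  slos: { objective: number; window: string }[];
  onCallRotation?: string;
  usesCICD: boolean;
}

interface Rule {
  description: string;
  weight: number;
  passes: (svc: Service) => boolean;
}

// A hypothetical "production readiness" standard an organization might define.
const productionReadiness: Rule[] = [
  { description: "Defines at least two SLOs", weight: 3, passes: s => s.slos.length >= 2 },
  { description: "Has an on-call rotation", weight: 2, passes: s => Boolean(s.onCallRotation) },
  { description: "Ships through CI/CD", weight: 1, passes: s => s.usesCICD },
];

// Score one service: the share of weighted rules it passes.
function score(svc: Service, rules: Rule[]): number {
  const total = rules.reduce((sum, r) => sum + r.weight, 0);
  const earned = rules.filter(r => r.passes(svc)).reduce((sum, r) => sum + r.weight, 0);
  return total === 0 ? 1 : earned / total;
}

// In a real setup this catalog would be populated by discovery
// (Git, Kubernetes, APM tools), not declared inline.
const catalog: Service[] = [
  { name: "checkout", slos: [{ objective: 99.9, window: "30d" }], usesCICD: true },
];
catalog.forEach(s => console.log(s.name, Math.round(score(s, productionReadiness) * 100) + "%"));
```

The point of the sketch is only that "quality" becomes a number per service, which can then be rolled up by team, product, or business unit, the reporting use Anish describes; the actual rules would encode whatever good means to that organization.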
And so I think quality really comes down to being able to objectively, like, codify your best practices in some way, and have that distributed to all engineers in the company. >> And to add to that, I think, like, very concrete examples: for an organization that's already modern, their idea of quality might be uptime and incidents. For somebody that's, like, going through a modernization strategy, they're trying to get to the 21st century. They're trying to get to Kubernetes. For them quality means, like, where are we in that journey? Are you on our latest platforms? Are you running CI? Are you doing continuous delivery? Like, quality can mean a lot of things. And so our perspective is, how do we give you the tools to say, as an organization, here's what quality means to us. >> So at first my mind was going through, when you said quality and as you started out the conversation about having this kind of non-codified set of measurements, historical knowledge, et cetera, I was thinking observability, measuring how much time does it take to have a transaction? But Ganesh, you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So I think when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff, and I think that's very high level. And I think that's one perspective of what quality is. But as you're going through this journey, you might say, like, the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CICD, you're running vulnerability scans. You're using Docker. Like, what that means to us can be very different. So observability is just one aspect of, are you doing things the right way? Good to us means you're using SLOs. You are tracking those metrics. You're reporting that somewhere. And so that's, like, one component for our organization of what quality can mean. >> Wow, I'm kind of taken aback by this because I've not seen someone kind of give the idea. And I think later on, this is the perfect segment to introduce theCube clock, in which I'm going to give you a minute to kind of, like, give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... what's the first process you do when you engage in a customer? Does a customer go and get this off of a repository, install it, the open source version, and then what, I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and on-prem version of Cortex. It's really straightforward. Basically we have a service discovery onboarding flow where customers can connect to a different set of sources for their services. It could be Kubernetes, ECS, Git repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service. Almost like a single pane of glass to work with the service. 
And then once you have all of that data inside Cortex, then you can start writing scorecards, which grade the quality of those services across those different verticals Ganesh was talking about. like whether it's a monolith, a microservice transition, whether it's production readiness or security standards, you can really start tracking that. And then engineers start understanding where are the areas of risk with my service across reliability or security or operation maturity. I think it gives us insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLO. That is usually something that is, it might not even be measured. So how do you help me understand that I'm lacking a measurable system for tracking SLO and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability and reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so that service cataloging concept, the service catalog says, hey, here's my APM integration. Then a scorecard, the organization goes in and says, we want every service owner to define their SLOs. We want to define your thresholds. We want you to be tracking them. Are you passing your SLOs? And so we're not being prescriptive about here's what we think your SLOs should be. Ours is more around, hey, we're going to help you like if you care about SLOs, we're going to tell the service owners saying, hey, you need to have at least two SLOs for your service and you've got to be tracking them. And the service catalog that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important. It is a component of like a holistic service reliability excellence metric that we care about. >> So what happens when I already have systems for like SLO, how do I integrate that system with Cortex? >> That's one of the coolest things. So the service catalog can be pretty smart about it. So let's say you've sucked in your services from your GitHub. And so now your services are in Cortex. What we can do is we can actually discover from your APM tools, we can say like, hey, for this service we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into the Cortex. And that's the power of the service catalog. The service catalog says, given a service, here's everything about that service. Here's the vulnerability scans, here's the APM, the monitor, the SLOs, the JIRA ticket, like all that stuff comes into a single place. And then our scorecard product can go back out and say, hey, Datadog, tell me about this SLOs for the service. And so we're going to get that information live and then score your services against that. And so we're like integrating with all of your third party tools and integrations to create that single pan of glass. >> Yeah and to add to that, I think one of the most interesting use cases with scorecards is, okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with how do we make sure engineers defined SLOs are passing them actually care about them? 
And scorecards can be used to one, which teams are actually meeting these guidelines? And then two let's get those teams adopted on SLOs. Let's track that. You can do all of that in Cortex, which is, I think a really interesting use case that we've seen. >> So let's talk about kind of my use case in the end to end process for integrating Cortex into migrations. So I have this monolithic application, I want to break it into microservices and then I want to ensure that I'm delivering you know what, let's leave it a little bit more open ended. How do I know that I'm better at the end of I was in a monolith before, how do I measure that now that I'm in microservices and on cloud native, that I'm better? >> That's a good question. I think it comes down to, and we talk about this all the time for our customers that are going through that process. You can't define better if you don't define a baseline, like what does good mean to us? And so you need to start by saying, why are we moving to microservices? Is it because we want teams to move faster? Is it because we care about reliability up time? Like what is the core metric that we're tracking? And so you start by defining that as an organization. And that is kind of like a hand wavy thing. Why are we doing microservices? Once you have that, then you define the scorecard and that's like our golden path. Once we're done doing this microservice migration, can we say like, yes, we have been successful. And like those metrics that we care about are being tracked. And so where Cortex fits in is from the very first step of creating a service. You can use Cortex to define templates. Like one click, you go in, it spins up a microservice for you that follows all your best practices. And so from there, ideally you're meeting 80% of your standards already. And then you can use scorecards to track historical progress. So you can say, are we meeting our golden path standards? Like if it's uptime, you can track uptime metrics and scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization? Are you doing CICD and vulnerability scans, like moving faster as a team? You can track that. And so you can start seeing like trends at a per team level, at a per department level, at a per product level. Saying, hey, we are seeing consistent progress in the metrics that we care about. And this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand wavy thing. We're told that cloud native helps me to do things faster with less defects so that I can do new opportunities. Let's stretch into kind of this non-tech, this new opportunities perspective. I want to be able to move my microservices. I want to be able to move my architecture to microservices so I reduce call wait time on my customer service calls. So, I could easily see how I can measure are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard. But what about this wait time? I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecard means you can pipe in that data to Cortex. 
And what we recommend customers is track the outcome metrics and track the input metrics as well. And so what is the input metric to call wait time? Like maybe it's the fact that if something goes wrong, we have the run books to quickly roll back to an older version that we know is running. That way MTTR is faster. Or when something happens, we know the owner for that service and we can go back to them and say like, hey, we're going to ping you as an incident commander. Those are kind of the input metrics to, if we do these things, then we know our call wait time is going to drop because we're able to respond faster to incidents. And so you want to track those input metrics and then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. So we want to see is there a correlation between those input metrics? Are we doing things the right way versus are we seeing the value that we want to come out of that? And so I think that's the value of Cortex is not so much around, hey, we're going to be prescriptive about it. It's here's this framework that will let you track all of that and say, are we doing things the right way and is it giving us the value that we want? And being able to report that up to engineering leadership and say, hey, maybe these services are not doing well, like we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that I think is super valuable. >> Yeah, absolutely. I think just to touch on the reporting, I think that's one of the most value-add things Cortex can provide. If you think about it, the service is the atomic unit of your software. It represents everything that's being built, and that bubbles up into teams, products, business units, and Cortex lets you represent that. So now I can, as a CTO, come in and say, hey, these product lines, are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost like the best way to get the actual health of your engineering organization. >> All right, Anish and Ganesh. We're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) >> Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on, you're 10 seconds in. >> Oh, we can start talking. Okay, well I would say, Anish was just touching on this, for a CTO, their question is how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing- >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand wavy thing that says like, oh, my team is good, my team is bad. We want to come in and define here's what service quality means. And we want that to be a number. You want that to be something that you can- >> A goal without a timeline is just a dream. >> And a CTO comes in and they say, here's what we care about, here's how we're tracking it. Here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. 
We can give you that visibility that you never had before. >> For that five seconds. >> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia Spain, I'm Keith Townsend. And you're watching theCube, the leader in high tech coverage. (soft music)
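
To make the scorecard idea in this conversation concrete, here is a minimal sketch of how "service quality as a number" could be codified. It is illustrative only, using made-up rules and a made-up service structure, not Cortex's actual configuration format or API.

```python
# Illustrative only: a toy scorecard evaluator, not Cortex's real schema or API.
# Each rule checks one fact about a service and contributes points to its score.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Service:
    name: str
    owner: str | None = None          # on-call owner / incident commander
    slo_count: int = 0                # SLOs defined in the APM tool
    has_runbook: bool = False
    ci_vulnerability_scan: bool = False

@dataclass
class Rule:
    description: str
    points: int
    check: Callable[[Service], bool]

RULES = [
    Rule("Has a named owner for incident response", 25, lambda s: s.owner is not None),
    Rule("Defines at least two SLOs", 25, lambda s: s.slo_count >= 2),
    Rule("Has a rollback runbook", 25, lambda s: s.has_runbook),
    Rule("Runs vulnerability scans in CI", 25, lambda s: s.ci_vulnerability_scan),
]

def score(service: Service) -> int:
    """Return a 0-100 service quality score for one service."""
    return sum(rule.points for rule in RULES if rule.check(service))

services = [
    Service("payments", owner="team-payments", slo_count=3,
            has_runbook=True, ci_vulnerability_scan=True),
    Service("legacy-billing", slo_count=0),
]

for svc in services:
    print(f"{svc.name}: {score(svc)}/100")   # payments: 100/100, legacy-billing: 0/100
```

The mechanism matters less than the point made above: the rules and weights are the organization's own definition of quality, and the output is a number that can be rolled up per team, per department, or per product line.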

Published Date : May 20 2022

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AnishPERSON

0.99+

Keith TownsendPERSON

0.99+

CortexORGANIZATION

0.99+

80%QUANTITY

0.99+

KeithPERSON

0.99+

Red HatORGANIZATION

0.99+

USLOCATION

0.99+

GaneshPERSON

0.99+

21st centuryDATE

0.99+

100,000QUANTITY

0.99+

10 secondsQUANTITY

0.99+

twoQUANTITY

0.99+

five secondsQUANTITY

0.99+

two thingsQUANTITY

0.99+

firstQUANTITY

0.99+

Valencia, SpainLOCATION

0.99+

800,000 peopleQUANTITY

0.99+

CortexTITLE

0.99+

Valencia SpainLOCATION

0.99+

one elementQUANTITY

0.99+

one aspectQUANTITY

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.99+

CloudnativeconORGANIZATION

0.99+

one perspectiveQUANTITY

0.99+

DatadogORGANIZATION

0.99+

one componentQUANTITY

0.99+

Ganesh DattaPERSON

0.98+

OneQUANTITY

0.98+

SLOTITLE

0.98+

2022DATE

0.98+

first stepQUANTITY

0.98+

KubeconORGANIZATION

0.97+

about 800,000 peopleQUANTITY

0.97+

one clickQUANTITY

0.97+

John Amaral, Slim.AI | DockerCon 2022


 

>> Hello and welcome to theCUBE's DockerCon coverage. I'm John Furrier, host of theCUBE. We've got a great segment here with Slim.AI CEO John Amaral. Stealth mode SaaS company, startup in the DevOps space with tools today and open source around supply chain security with containers, closed beta with developers. John, thanks for coming on. Congratulations for being a platinum sponsor here at DockerCon. Thanks for coming on theCUBE. >> Thanks so much, my pleasure. >> You know, container analysis, management, optimisation. You know, that's super important. But security is at the centre of all the action we're seeing with containers. We've been talking shift left on a lot of CUBE conversations. What does that mean? Is it an outcome? Is it about the product software supply chain, security, malware? All these things are now part of the new normal in cloud native. You guys are at the centre of this, the surface area has changed. All these things are important. Take a minute to explain what you guys are doing with tools and open source. Some of the things you're doing, I know you got a stealth mode product you probably can't talk about, but you got a closed beta. Can you give us a little bit of a teaser? What's Slim.AI about? >> Sure. So Slim.AI is about helping developers build secure containers fast, and that really plays to a few trends in the marketplace that are really apparent and important right now: a federal mandate and a bunch of really highly publicised breaches that have all been caused by software supply chain risks. And software supply chain security has become a really top-of-mind concept for people who secure things and people who develop software and run SaaS. So Slim.AI has built a bunch of capabilities and tools that allow software developers at their desks to better understand and build secure containers that really reduce software supply chain risk as you think about containers being run in production. And we do three things to help developers. One is we help them know everything about their software. It's kind of a core concept of software supply chain security: just know what software is in your containers. Two, another core concept is only ship to production what you need to run. That's all about risk surface and the ability for you to easily make a container small that has as much software reduction in it as possible. And three, remove as many vulnerabilities as possible. The Slim toolset, both our open source and our SaaS data platform, makes that easy for developers to do. >> So basically, you have a nice, clean, secure environment. Know what's in there. Only put in production what's needed, and make sure it's tight and it's trimmed down perfectly. So you're kind of teasing out this concept of slimming, which is in the name of the company. But it really is about surface area of attack around containers, and super important as it becomes more and more prominent in the environment these days. What is container slimming and why is it important for supply chain security? >> Sure. So in the realm of software supply chain security best practises, right, there are three core concepts. One is the idea of an SBOM, that you should know the inventory of all the software that runs in your world. Two, its security posture, signing containers, making sure that the authenticity of the software that you use in production is well understood. And the third is, well, managing exactly what software you ship. 
The first two things I said are simply just inventory and basics about knowing what software you have. But no one answers the question: what software do I need? So I run a container and say, it's a gig and it's got all these packages in it. It comes from the operating system, from Node, etcetera. It's got all this stuff in it. I know the parts that I write my code to. But all that other stuff, what is it? Why is it there? What's the risk in it? That slimming part is all about managing the list of things you actually ship down to the absolute minimum, and with confidence that you know that that code will actually work when it gets to production but be as small as possible. That's what slimming is all about, and it really reduces supply chain risk by lowering the attack surface in your container, but also trimming your supply chain to only the minimum pieces you need, which really causes a lot of improvements in the operational overhead of having software supply chain security. >> It's interesting, as you get more volume and velocity around containers, uh, and automation kicks in, sometimes things are turning on and off and you don't even know. And shift left has been a great trend for getting in the CI/CD pipeline for developer productivity. Really cool. What are some of the consequences that's going on with this? Because then you start to get into some of these areas like some stuff happens that the developers have to come shift back and take care of stuff. So, you know, CTOs and CISOs are really worried about this container dynamic. What's the new thing that's causing the problems here? What's the issue around the management that CTOs and CISOs care about? >> Sure. And I'll talk about the shift left implications as well for that exact point. So as you start to worry about software supply chain security and get a handle on all the software you ship to prod, well, part of that is knowledge is power. But it's also, um, risk and work. As soon as I know about problems with my containers or the risk surface, I've got to do something about it. So we're really getting into the age where everyone has to know about the software they ship. As soon as you know about that, say there's a vulnerability or a package that's a little risky or some surface area you don't really understand, the only place that can be remediated is by going back to the developers and asking them: what is that? How do I remove it? Please do that work. So the software supply chain security knowledge turns into developer security work. Now the problem is that historically, the knowledge was imperfect, and the developer, you know, involvement in that was, I'd say, ad hoc, meaning that developers had best practises and did the best they could. But the scrutiny we have now on minimising this kind of risk is really high. The beautiful part about containers is they're portable, and it's an easily transferrable piece of software. So you have a lot of producers and a lot of consumers of containers. Consumers of containers that care about supply chain risk are now starting to push back on producers saying, take those vulnerabilities out, remove those packages, make this thing more secure, lower the risk profile. This works its way all the way back to the developers, who don't really have the tools, capabilities and automation to do the work I just described easily, and that's an opportunity that Slim is really addressing, making it easy for developers to remove risk. 
>> And that's really the consequences of shifting left without having the slimming. Because what you're saying is, you shift left and that's kind of nulled out because you've got to go back and fix it. The work comes back. >> That's right. And yeah, it's not an easy task for a developer to understand the code that they didn't intentionally put in the container. It's like, okay, there's a package in that operating system. What does it do? I don't know. Do I even use it? I don't know. So there's, like, tonnes of analytic and, I would say, even optimisation questions and work to be done, but they're just not equipped to, because the tooling for that is really immature. Slim's on a mission to make that really easy for them and do it automatically so they don't have to think about it. We just automatically remove stuff you don't use and voila! You've got this, like, perfectly pre-optimised capability. >> You know, this software supply chain is huge, and I remember when open source started, when I was breaking into the business. Now it's at such a height, and such an escalation of new developers. This is a real issue that's going to be resolved. It has to be, because supply chain is part of open source, right? As more code comes in, you got to verify it. You gotta make sure it's slimmed where it needs to be slim and optimised where it needs to be optimised. Huge trend. Um, and so I just love this area. I think it's really innovative and needed. So congratulations on that. You know, I have one more question for you before we close out. Um, you guys are part of the Docker Extensions launch and you're a partner. Why is this important to participate in this programme and what do you guys hope it does for Slim.AI? >> First of all, Docker is the ubiquitous platform. Their hub has millions and millions of containers. We've got millions and millions of developers using Docker Desktop to actually build and work on containers. It's like literally the sandbox for all local work for building containers. It's a fair statement. So inclusion in DockerCon and the relationship we're building with Docker is really important for developers, in that we're bringing these capabilities to the place where developers work and live every day. It's where all the containers live in the world. So we want to have our technology be easy to use with Docker tools. We want to keep developers' workflows and systems and tools of record the same. We just want to help them use those tools better and optimise outputs. To that end, we've worked since our inception to make our tools really, really friendly for Docker and Docker environments too. Um, we are building a Docker extension. At this DockerCon, they're launching their Docker Extensions programme to the worldwide audience. We have been one of the lucky companies that's been selected to build one of the early Docker Desktop plug-ins. It's derived from our capabilities, our SaaS platform and our open source, and it's effectively an MRI machine, an awesome analytic tool that allows any developer to really understand the composition, security and profile of any container they work with. So it's giving sight to the blind, so to speak. It's this new tool to make container analysis easy. >> Well, John, you guys got a great opportunity. Container analysis, management, optimisation, key to security, enabling it and maintaining and sustaining it. And it's changing. I know you guys, your co-founder, also did DockerSlim. 
So you guys are deep in the open source. Congratulations on that. We'll see you at KubeCon. For the remaining time we have, give a plug for the company. Obviously in stealth mode, the product's going to come out later this year. You got a developer preview? What's the company all about? What's the most important story here at DockerCon? >> Sure, just to play it back. So we help developers do three important things. One, know everything about the software in their containers. Two, only ship stuff to production that you need. And three, remove as many vulnerabilities as possible. That's really about managing and understanding the risk surface. It ties right back to software supply chain security, and any developer can use these tools today to build containers that are more secure and better production-grade containers, and it's easy to do. We have an open source project called DockerSlim. Go check it out. It's on GitHub. It's easy to find, and if you go to www.slim.ai you can find access to that. We have tens of thousands of developers, 500,000-plus downloads. We have developers everywhere using those tools today and open source to do the objectives I just said. You can also easily sign up for our SaaS platform, you can use the Docker extension, go ahead and do that, and really get on your journey to make those outcomes reality for you. And really kind of make those SecOps people downstream not have to shift anything left. It's super easy for you to be a great participant in software supply chain security. >> All right. John Amaral, CEO of Slim.AI, still in stealth mode, thanks for coming on. theCUBE's coverage of DockerCon, thanks for watching. I'm John Furrier, host of theCUBE. Back to more DockerCon coverage after the short break.
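
As a rough illustration of the "know everything, ship only what you need" idea discussed above, the sketch below compares a container's full package inventory against the packages an application actually uses. The package names and CVE counts are invented for the example, and real tooling (for instance the open-source DockerSlim project mentioned in the interview) works at the image and filesystem level rather than on hand-written lists like these.

```python
# Hypothetical data: an SBOM-style inventory vs. observed runtime usage for one
# container image. Package names and CVE counts below are invented.

full_inventory = {
    "openssl": 2,       # package -> number of known CVEs (made up)
    "bash": 1,
    "curl": 3,
    "python3": 0,
    "libxml2": 4,
    "app-runtime": 0,
}

observed_in_use = {"python3", "openssl", "app-runtime"}  # what the app actually loads

unused = set(full_inventory) - observed_in_use
removable_cves = sum(full_inventory[p] for p in unused)
total_cves = sum(full_inventory.values())

print(f"Packages shipped: {len(full_inventory)}, actually used: {len(observed_in_use)}")
print(f"Candidates to slim away: {sorted(unused)}")
print(f"Known CVEs removed by slimming: {removable_cves} of {total_cves}")
```

The gap between what is shipped and what is needed is the risk surface described in the conversation; slimming is the act of closing it.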

Published Date : May 11 2022

SUMMARY :


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

John AmaralPERSON

0.99+

John FerryPERSON

0.99+

millionsQUANTITY

0.99+

John KerryPERSON

0.99+

KhanPERSON

0.99+

thirdQUANTITY

0.99+

OneQUANTITY

0.99+

oneQUANTITY

0.99+

threeQUANTITY

0.98+

BothQUANTITY

0.98+

SAS CompanyORGANIZATION

0.98+

DockerTITLE

0.97+

later this yearDATE

0.97+

500,000 plus downloadsQUANTITY

0.97+

three core conceptsQUANTITY

0.97+

todayDATE

0.96+

DrPERSON

0.94+

one more questionQUANTITY

0.94+

git HubTITLE

0.94+

three thingsQUANTITY

0.94+

SECORGANIZATION

0.93+

DioxinORGANIZATION

0.91+

SaasTITLE

0.91+

HawkORGANIZATION

0.89+

Dr.PERSON

0.87+

slim dotORGANIZATION

0.87+

three important thingsQUANTITY

0.85+

Docker ExtensionsORGANIZATION

0.85+

millions of developersQUANTITY

0.85+

DockerCon 2022EVENT

0.83+

Q. KhanPERSON

0.83+

SlimPERSON

0.81+

tens of thousands of developersQUANTITY

0.78+

first two thingsQUANTITY

0.78+

tonnes of analyticQUANTITY

0.76+

slimORGANIZATION

0.76+

CEOPERSON

0.76+

DrORGANIZATION

0.74+

C. TusORGANIZATION

0.74+

FirstQUANTITY

0.74+

Dr KhanPERSON

0.6+

CubeTITLE

0.59+

DockerORGANIZATION

0.57+

SASORGANIZATION

0.57+

CubeORGANIZATION

0.57+

S bahnORGANIZATION

0.51+

Cube CubeCOMMERCIAL_ITEM

0.49+

Slim ToolsetORGANIZATION

0.48+

KhanTITLE

0.45+

SASTITLE

0.38+

Mark Lyons, Dremio | AWS Startup Showcase S2 E2


 

(upbeat music) >> Hello, everyone and welcome to theCUBE presentation of the AWS startup showcase, data as code. This is season two, episode two of the ongoing series covering the exciting startups from the AWS ecosystem. Here we're talking about operationalizing the data lake. I'm your host, John Furrier, and my guest here is Mark Lyons, VP of product management at Dremio. Great to see you, Mark. Thanks for coming on. >> Hey John, nice to see you again. Thanks for having me. >> Yeah, we were talking before we came on camera here on this showcase we're going to spend the next 20 minutes talking about the new architectures of data lakes and how they expand and scale. But we kind of were reminiscing by the old big data days, and how this really changed. There's a lot of hangovers from (mumbles) kind of fall through, Cloud took over, now we're in a new era and the theme here is data as code. Really highlights that data is now in the developer cycles of operations. So infrastructure is code-led DevOps movement for Cloud programmable infrastructure. Now you got data as code, which is really accelerating DataOps, MLOps, DatabaseOps, and more developer focus. So this is a big part of it. You guys at Dremio have a Cloud platform, query engine and a data tier innovation. Take us through the positioning of Dremio right now. What's the current state of the offering? >> Yeah, sure, so happy to, and thanks for kind of introing into the space that we're headed. I think the world is changing, and databases are changing. So today, Dremio is a full database platform, data lakehouse platform on the Cloud. So we're all about keeping your data in open formats in your Cloud storage, but bringing that full functionality that you would want to access the data, as well as manage the data. All the functionality folks would be used to from NC SQL compatibility, inserts updates, deletes on that data, keeping that data in Parquet files in the iceberg table format, another level of abstraction so that people can access the data in a very efficient way. And going even further than that, what we announced with Dremio Arctic which is in public preview on our Cloud platform, is a full get like experience for the data. So just like you said, data as code, right? We went through waves and source code and infrastructure as code. And now we can treat the data as code, which is amazing. You can have development branches, you can have staging branches, ETL branches, which are separate from production. Developers can do experiments. You can make changes, you can test those changes before you merge back to production and let the consumers see that data. Lots of innovation on the platform, super fast velocity of delivery, and lots of customers adopting it in just in the first month here since we announced Dremio Cloud generally available where the adoption's been amazing. >> Yeah, and I think we're going to dig into the a lot of the architecture, but I want to highlight your point you made about the branching off and taking a branch of Git. This is what developers do, right? The developers use GitHub, Git, they bake branches from code. They build on top of other code. That's open source. This is what's been around for generations. Now for the first time we're seeing data sets being taken out of production to be worked on and coded and tested and even doing look backs or even forward looking analysis. This is data being programmed. This is data as code. This is really, you couldn't get any closer to data as code. >> Yeah. 
It's all done through metadata by the way. So there's no actual copying of these data sets 'cause in these big data systems, Cloud data lakes and stuff, and these tables are billions of records, trillions of records, super wide, hundreds of columns wide, thousands of columns wide. You have to do this all through metadata operations so you can control what version of the data basically a individual's working with and which version of the data the production systems are seeing because these data sets are too big. You don't want to be moving them. You can't be moving them. You can't be copying them. It's all metadata and manifest files and pointers to basically keep track of what's going on. >> I think this is the most important trend we've seen in a long time, because if you think about what Agile did for developers, okay, speed, DevOps, Cloud scale, now you've got agility in the data side of it where you're basically breaking down the old proprietary, old ways of doing data warehousing, but not killing the functionality of what data warehouses did. Just doing more volume data warehouses where proprietary, not open. They were different use cases. They were single application developers when used data warehouse query, not a lot of volume. But as you get volume, these things are inadequate. And now you've got the new open Agile. Is this Agile data engineering at play here? >> Yeah, I think it totally is. It's bringing it as far forward in as possible. We're talking about making the data engineering process easier and more productive for the data engineer, which ultimately makes the consumers of that data much happier as well as way more experiments can happen. Way more use cases can be tried. If it's not a burden and it doesn't require building a whole new pipeline and defining a schema and adding columns and data types and all this stuff, you can do a lot more with your data much faster. So it's really going to be super impactful to all these businesses out there trying to be data driven, especially when you're looking at data as a code and branching, a branch off, you can de-risk your changes. You're not worried about messing up the production system, messing up that data, having it seen by end user. Some businesses data is their business so that data would be going all the way to a consumer, a third party. And then it gets really scary. There's a lot of risk if you show the wrong credit score to a consumer or you do something like that. So it's really de-risking... >> Even updating machine learning algorithms. So for instance, if the data sets change, you can always be iterating on things like machine learning or learning algorithms. This is kind of new. This is awesome, right? >> I think it's going to change the world because this stuff was so painful to do. The data sets had gotten so much bigger as you know, but we were still doing it in the old way, which was typically moving data around for everyone. It was copying data down, sampling data, moving data, and now we're just basically saying, hey, don't do that anymore. We got to stop moving the data. It doesn't make any sense. >> So I got to ask you Mark, data lakes are growing in popularity. I was originally down on data lakes. I called them data swamps. I didn't think they were going to be as popular because at that time, distributed file systems like Hadoop, and object store in the Cloud were really cool. So what happened between that promise of distributed file systems and object store and data lakes? What made data lakes popular? 
What made that work in your opinion? >> Yeah, it really comes down to the metadata, which I already mentioned once. But we went through these waves. John you saw we did the EDWs to the data lakes and then the Cloud data warehouses. I think we're at the start of a cycle back to the data lake. And it's because the data lakes this time around with the Apache iceberg table format, with project (mumbles) and what Dremio's working on around metadata, these things aren't going to become data swamps anymore. They're actually going to be functional systems that do inserts updates into leads. You can see all the commits. You can time travel them. And all the files are actually managed and optimized so you have to partition the data. You have to merge small files into larger files. Oh, by the way, this is stuff that all the warehouses have done behind the scenes and all the housekeeping they do, but people weren't really aware of it. And the data lakes the first time around didn't solve all these problems so that those files landing in a distributed file system does become a mess. If you just land JSON, Avro or Parquet files, CSV files into the HDFS, or in S3 compatible, object store doesn't matter, if you're just parking files and you're going to deal with it as schema and read instead of schema and write, you're going to have a mess. If you don't know which tool changed the files, which user deleted a file, updated a file, you will end up with a mess really quickly. So to take care of that, you have to put a table format so everyone's looking at Apache iceberg or the data bricks Delta format, which is an interesting conversation similar to the Parquet and org file format that we saw play out. And then you track the metadata. So you have those manifest files. You know which files change when, which engine, which commit. And you can actually make a functional system that's not going to become a swamp. >> Another trend that's extending on beyond the data lake is other data sources, right? So you have a lot of other data, not just in data lakes so you have to kind of work with that. How do you guys answer the question around some of the mission critical BI dashboards out there on the latency side? A lot of people have been complaining that these mission critical BI dashboards aren't getting the kind of performance as they add more data sources and they try to do more. >> Yeah, that's a great question. Dremio does actually a bunch of interesting things to bring the performance of these systems up because at the end of the day, people want to access their data really quickly. They want the response times of these dashboards to be interactive. Otherwise the data's not interesting if it takes too long to get it. To answer a question, yeah, a couple of things. First of all, from a data source's side, Dremio is very proficient with our Parquet files in an object store, like we just talked about, but it also can access data in other relational systems. So whether that's a Postgres system, whether that's a Teradata system or an Oracle system. That's really useful if you have dimensional data, customer data, not the largest data set in the world, not the fastest moving data set in the world, but you don't want to move it. We can query that where it resides. Bringing in new sources is definitely, we all know that's a key to getting better insights. It's in your data, is joining sources together. And then from a query speed standpoint, there's a lot of things going on here. 
Everything from the Apache Arrow project, which is an in-memory columnar format, so we're not serializing and de-serializing the data back and forth. As well as what we call reflections, which are basically a re-indexing or pre-computing of the data, but we leave it in Parquet format, in an open format in the customer's account, so that you can have aggregates and other things that are really popular in these dashboards pre-computed. So millisecond response, lightning fast, like tricks that a warehouse would do, that the warehouses have been doing forever. Right? >> Yeah, more deals coming in. And obviously the architecture, we'll get into that now, has to handle the growth. And as your customers and practitioners see the volume and the variety and the velocity of the data coming in, how are they adjusting their data strategies to respond to this? Again, Cloud is clearly the answer, not the data warehouse, but what are they doing? What's the strategy adjustment? >> It's interesting when we start talking to folks, I think sometimes it's a really big shift in thinking about data architectures and data strategies when you look at the Dremio approach. It's very different than what most people are doing today around ETL pipelines and then bringing stuff into a warehouse and, oh, the warehouse is too overloaded so let's build some cubes and extracts into the next tier of tools to speed up those dashboards for those tools. And Dremio has totally flipped this on its head and said, no, let's not do all those things. That's time consuming. It's brittle, it breaks. And actually your agility and the scope of what you can do with your data decreases. You go from all your data and all your data sources to smaller and smaller. We actually call it the perimeter of doom, and a lot of people look at this and say, yeah, that kind of looks like how we're doing things today. So from a Dremio perspective, it's really about no copy, try to keep as much data in one place, keep it in one open format and less data movement. And that's a very different approach for people. I think they don't realize how much you can accomplish that way. And your latency shrinks down too. Your actual latency from data created to insight is much shorter. And it's not because of the query response time, that latency is mostly because of data movement and copy and all these things. So you really want to shrink your time to insight. It's not about getting a faster query from a few seconds down, it's about changing the architecture. >> The data drift as they say, interesting there. I got to ask you on the personnel side, team side, you got the technical side, you got the non-technical consumers of the data, you got the data science or data engineering ramping up. We mentioned earlier data engineering being Agile is a key innovation here. As you got to blend the two personas of technical and non-technical people playing with data, coding with data, where are the bottlenecks in this process today? How can data teams overcome these bottlenecks? >> I think we see a lot of bottlenecks in the process today, a lot of data movement, a lot of change requests, update this dashboard. Oh, well, that dashboard update requires an ETL pipeline update, requires a column to be added to this warehouse. So then you've got these personas, like you said, some more technical, less technical, the data consumers, the data engineers. Well, the data engineers are getting totally overloaded with requests and work. 
And it's not even super value-add work to the business. It's not really driving big changes in their culture and insights and new new use cases for data. It's turning through kind of small changes, but it's taking too much time. It's taking days, if not weeks for these organizations to manage small changes. And then the data consumers, the less technical folks, they can't get the answers that they want. They're waiting and waiting and waiting and they don't understand why things are so challenging, how things could take so much time. So from a Dremio perspective, it's amazing to watch these organizations unleash their data. Get the data engineers, their productivity up. Stop dealing with some of the last mile ETL and small changes to the data. And Dremio actually says, hey, data consumers, here's a really nice gooey. You don't need to be a SQL expert, well, the tool will write the joints for you. You can click on a column and say, hey, I want to calculate a new field and calculate that field. And it's all done virtually so it's not changing the physical data sets. The actual data engineering team doesn't even really need to care at that point. So you get happier data consumers at the end of the day. They're doing things more self-service. They're learning about the data and the data engineering teams can go do value-add things. They can re-architecture the platform for the future. They can do POCs to test out new technologies that could support new use cases and bring those into the organization. Things that really add value, instead of just churning through backlogs of, hey, can we get a column added or we change... Everyone's doing app development, AB testing, and those developers are king. Those pipelines stream all this data down when the JSON files change. You need agility. And if you don't have that agility, you just get this endless backlog that you never... >> This is data as code in action. You're committing data back into the main brand that's been tested. That's what developers do. So this is really kind of the next step function. I got to put the customer hat on for a second and ask you kind of the pessimist question. Okay, we've had data lakes, I've got data lakes, it's been data lakes around, I got query engines here and there, they're all over the place, what's missing? What's been missing from the architecture to fully realize the potential of a data lakehouse? >> Yeah, I think that's a great question. The customers say exactly that John. They say, "I've got 22 databases, you got to be kidding me. You showed up with another database." Or, hey, let's talk about a Cloud data lake or a data lake. Again, I did the data lake thing. I had a data lake and it wasn't everything I thought it was going to be. >> It was bad. It was data swamp. >> Yeah, so customers really think this way, and you say, well, what's different this time around? Well, the Cloud in the original data lake world, and I'm just going to focus on data lakes, so the original data lake worlds, everything was still direct attached storage, so you had to scale your storage and compute out together. And we built these huge systems. Thousands of thousands of HDFS nodes and stuff. Well, the Cloud brought the separated compute and storage, but data lakes have never seen separated compute and storage until now. We went from the data lake with directed tap storage to the Cloud data warehouse with separated compute and storage. 
So the Cloud architecture and getting compute and storage separated is a huge shift in the data lake world. And that agility of like, well, I'm only going to apply it, the compute that I need for this question, for this answer right now, and not get 5,000 servers of compute sitting around at some peak moment. Or just 5,000 compute servers because I have five petabytes or 50 petabytes of data that need to be stored in the discs that are attached to them. So I think the Cloud architecture and separating compute and storage is the first thing that's different this time around about data lakes. But then more importantly than that is the metadata tier. Is the data tier and having sufficient metadata to have the functionality that people need on the data lake. Whether that's for governance and compliance standpoints, to actually be able to do a delete on your data lake, or that's for productivity and treating that data as code, like we're talking about today, and being able to time travel it, version it, branch it. And now these data lakes, the data lakes back in the original days were getting to 50 petabytes. Now think about how big these Cloud data lakes could be. Even larger and you can't move that data around so we have to be really intelligent and really smart about the data operations and versioning all that data, knowing which engine touch the data, which person was the last commit and being able to track all that, is ultimately what's going to make this successful. Because if you don't have the governance in place these days with data, the projects are going to fail. >> Yeah, and I think separating the query layer or SQL layer and the data tier is another innovation that you guys have. Also it's a managed Cloud service, Dremio Cloud now. And you got the open source angle too, which is also going to open up more standardization around some of these awesome features like you mentioned the joints, and I think you guys built on top of Parquet and some other cool things. And you got a community developing, so you get the Cloud and community kind of coming together. So it's the real world that is coming to light saying, hey, I need real world applications, not the theory of old school. So what use cases do you see suited for this kind of new way, new architecture, new community, new programability? >> Yeah, I see people doing all sorts of interesting things and I'm sure with what we've introduced with Dremio Arctic and the data is code is going to open up a whole new world of things that we don't even know about today. But generally speaking, we have customers doing very interesting things, very data application things. Like building really high performance data into use cases whether that's a supply chain and manufacturing use case, whether that's a pharma or biotech use case, a banking use case, and really unleashing that data right into an application. We also see a lot of traditional data analytics use cases more in the traditional business intelligence or dashboarding use cases. That stuff is totally achievable, no problems there. But I think the most interesting stuff is companies are really figuring out how to bring that data. When we offer the flexibility that we're talking about, and the agility that we're talking about, you can really start to bring that data back into the apps, into the work streams, into the places where the business gets more value out of it. Not in a dashboard that some person might have access to, or a set of people have access to. 
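
A minimal sketch of the data-as-code workflow described in this conversation, where a branch is a metadata pointer rather than a copy of the data. The catalog class below is purely hypothetical and invented for illustration; Dremio Arctic exposes this behaviour through its own catalog and SQL interfaces, not a Python client like this.

```python
# Hypothetical metadata catalog: branches are just named pointers to table
# snapshots (lists of data-file references), so "branching" copies no data.

class Catalog:
    def __init__(self):
        self.branches = {"main": {"orders": ["s3://lake/orders/file-001.parquet"]}}

    def create_branch(self, name, source="main"):
        # Copy only the snapshot metadata, never the underlying files.
        self.branches[name] = {t: files[:] for t, files in self.branches[source].items()}

    def append(self, branch, table, data_file):
        self.branches[branch].setdefault(table, []).append(data_file)

    def merge(self, source, target="main"):
        # Naive "last writer wins" promotion of snapshot pointers, for illustration.
        self.branches[target] = {t: files[:] for t, files in self.branches[source].items()}

catalog = Catalog()
catalog.create_branch("etl-experiment")                       # experiment safely
catalog.append("etl-experiment", "orders", "s3://lake/orders/file-002.parquet")

# Production ("main") still sees only the original file until the merge...
assert catalog.branches["main"]["orders"] == ["s3://lake/orders/file-001.parquet"]

# ...and after validation the change is promoted as a metadata operation.
catalog.merge("etl-experiment")
print(catalog.branches["main"]["orders"])
```

The point is the one made above: consumers keep reading production while engineers test against a branch, and promoting the change moves pointers, not data.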
So even in the Dremio Cloud announcement, the press release, there was a customer, they're in Europe, it's called Garvis AI and they do AI for supply chains. It's an intelligent application and it's showing customers transparently how they're getting to these predictions. And they stood this all up in a very short period of time, because it's a Cloud product. They don't have to deal with provisioning, management, upgrades. I think they had their stuff going in like 30 minutes or something, like super quick, which is amazing. The data was already there, and a lot of organizations, their data's already in these Cloud storages. And if that's the case... >> If they have data, they're a use case. This is agility. This is agility coming to the data engineering field, making data programmable, enabling the data applications, the data ops for everybody, for coding... >> For everybody. And for so many more use cases at these companies. These data engineering teams, these data platform teams, whether they're in marketing or ad tech or Fiserv or Telco, they have a list. There's a list about a roadmap of use cases that they're waiting to get to. And if they're drowning underwater in the current tooling and barely keeping that alive, and oh, by the way, John, you can't go higher 30 new data engineers tomorrow and bring on the team to get capacity. You have to innovate at the architecture level, to unlock more data use cases because you're not going to go triple your team. That's not possible. >> It's going to unlock a tsunami of value. Because everyone's clogged in the system and it's painful. Right? >> Yeah. >> They've got delays, you've got bottlenecks. you've got people complaining it's hard, scar tissue. So now I think this brings ease of use and speed to the table. >> Yeah. >> I think that's what we're all about, is making the data super easy for everyone. This should be fun and easy, not really painful and really hard and risky. In a lot of these old ways of doing things, there's a lot of risk. You start changing your ETL pipeline. You add a column to the table. All of a sudden, you've got potential risk that things are going to break and you don't even know what's going to break. >> Proprietary, not a lot of volume and usage, and on-premises, open, Cloud, Agile. (John chuckles) Come on, which path? The curtain or the box, what are you going to take? It's a no brainer. >> Which way do you want to go? >> Mark, thanks for coming on theCUBE. Really appreciate it for being part of the AWS startup showcase data as code, great conversation. Data as code is going to enable a next wave of innovation and impact the future of data analytics. Thanks for coming on theCUBE. >> Yeah, thanks John and thanks to the AWS team. A great partnership between AWS and Dremio too. Talk to you soon. >> Keep it right there, more action here on theCUBE. As part of the showcase, stay with us. This is theCUBE, your leader in tech coverage. I'm John Furrier, your host, thanks for watching. (downbeat music)

Published Date : Apr 26 2022

SUMMARY :



Steve George, Weaveworks & Steve Waterworth, Weaveworks | AWS Startup Showcase S2 E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two of the ongoing series. We're covering exciting start startups in the AWS ecosystem to talk about open source community stuff. I'm your host, Dave Nicholson. And I'm delighted today to have two guests from Weaveworks. Steve George, COO of Weaveworks, and Steve Waterworth, technical marketing engineer from Weaveworks. Welcome, gentlemen, how are you? >> Very well, thanks. >> Very well, thanks very much. >> So, Steve G., what's the relationship with AWS? This is the AWS Startup Showcase. How do Weaveworks and AWS interact? >> Yeah sure. So, AWS is a investor in Weaveworks. And we, actually, collaborate really closely around EKS and some specific EKS tooling. So, in the early days of Kubernetes when AWS was working on EKS, the Elastic Kubernetes Service, we started working on the command line interface for EKS itself. And due to that partnership, we've been working closely with the EKS team for a long period of time, helping them to build the CLI and make sure that users in the community find EKS really easy to use. And so that brought us together with the AWS team, working on GitOps and thinking about how to deploy applications and clusters using this GitOps approach. And we've built that into the EKS CLI, which is an open source tool, is a project on GitHub. So, everybody can get involved with that, use it, contribute to it. We love hearing user feedback about how to help teams take advantage of the elastic nature of Kubernetes as simply and easily as possible. >> Well, it's great to have you. Before we get into the specifics around what Weaveworks is doing in this area that we're about to discuss, let's talk about this concept of GitOps. Some of us may have gotten too deep into a Netflix series, and we didn't realize that we've moved on from the world of DevOps or DevSecOps and the like. Explain where GitOps fits into this evolution. >> Yeah, sure. So, really GitOps is an instantiation, a version of DevOps. And it fits within the idea that, particularly in the Kubernetes world, we have a model in Kubernetes, which tells us exactly what we want to deploy. And so what we're talking about is using Git as a way of recording what we want to be in the runtime environment, and then telling Kubernetes from the configuration that is stored in Git exactly what we want to deploy. So, in a sense, it's very much aligned with DevOps, because we know we want to bring teams together, help them to deploy their applications, their clusters, their environments. And really with GitOps, we have a specific set of tools that we can use. And obviously what's nice about Git is it's a very developer tool, or lots and lots of developers use it, the vast majority. And so what we're trying to do is bring those operational processes into the way that developers work. So, really bringing DevOps to that generation through that specific tooling. >> So Steve G., let's continue down this thread a little bit. Why is it necessary then this sort of added wrinkle? If right now in my organization we have developers, who consider themselves to be DevOps folks, and we give them Amazon gift cards each month. And we say, "Hey, it's a world of serverless, "no code, low code lights out data centers. "Go out and deploy your code. "Everything should be fine." What's the problem with that model, and how does GitOps come in and address that? >> Right. I think there's a couple of things. 
So, for individual developers, one of the big challenges is that, when you watch development teams, like deploying applications and running them, you watch them switching between all those different tabs, and services, and systems that they're using. So, GitOps has a real advantage to developers, because they're already sat in Git, they're already using their familiar tooling. And so by bringing operations within that developer tooling, you're giving them that familiarity. So, it's one advantage for developers. And then for operations staff, one of the things that it does is it centralizes where all of this configuration is kept. And then you can use things like templating and some other things that we're going to be talking about today to make sure that you automate and go quickly, but you also do that in a way which is reliable, and secure, and stable. So, it's really helping to bring that run fast, but don't break things kind of ethos to how we can deploy and run applications in the cloud. >> So, Steve W., let's start talking about where Weaveworks comes into the picture, and what's your perspective. >> So, yeah, Weaveworks has an engine, a set of software, that enables this to happen. So, think of it as a constant reconciliation engine. So, you've got your declared state, your desired state is declared in Git. So, this is where all your YAML for all your Kubernetes hangs out. And then you have an agent that's running inside Kubernetes, that's the Weaveworks GitOps agent. And it's constantly comparing the desired state in Git with the actual state, which is what's running in Kubernetes. So, then as a developer, you want to make a change, or an operator, you want to make a change. You push a change into Git. The reconciliation loop runs and says, "All right, what we've got in Git does not match "what we've got in Kubernetes. "Therefore, I will create story resource, whatever." But it also works the other way. So, if someone does directly access Kubernetes and make a change, then the next time that reconciliation loop runs, it's automatically reverted back to that single source of truth in Git. So, your Kubernetes cluster, you don't get any configuration drift. It's always configured as you desire it to be configured. And as Steve George has already said, from a developer or engineer point of view, it's easy to use. They're just using Git just as they always have done and continue to do. There's nothing new to learn. No change to working practices. I just push code into Git, magic happens. >> So, Steve W., little deeper dive on that. When we hear Ops, a lot of us start thinking about, specifically in terms of infrastructure, and especially since infrastructure when deployed and left out there, even though it's really idle, you're paying for it. So, anytime there's an Ops component to the discussion, cost and resource management come into play. You mentioned this idea of not letting things drift from a template. What are those templates based on? Are they based on... Is this primarily an infrastructure discussion, or are we talking about the code itself that is outside of the infrastructure discussion? >> It's predominantly around the infrastructure. So, what you're managing in Git, as far as Kubernetes is concerned, is always deployment files, and services, and horizontal pod autoscalers, all those Kubernetes entities. Typically, the source code for your application, be it in Java, Node.js, whatever it is you happen to be writing it in, that's, typically, in a separate repository. 
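To make the reconciliation loop described above a little more concrete, here is a minimal, hypothetical sketch in Go of a single reconciliation pass. It is not the Weaveworks agent itself, just the shape of the idea: the Manifest type and the wholesale state replacement are simplifications, the desired state stands in for the deployment files kept in Git, and any drift in the cluster is reverted toward what Git declares.

package main

import (
	"fmt"
	"reflect"
)

// Manifest is a stand-in for the parsed Kubernetes objects kept in Git:
// resource name -> desired spec (heavily simplified for the example).
type Manifest map[string]string

// reconcile runs one pass of the loop: whatever Git declares wins, so a
// missing resource or a manual edit made directly on the cluster both
// converge back to the declared state.
func reconcile(desired, actual Manifest) Manifest {
	if reflect.DeepEqual(desired, actual) {
		fmt.Println("in sync, nothing to do")
		return actual
	}
	fmt.Println("drift detected, re-applying declared state")
	next := Manifest{}
	for name, spec := range desired {
		next[name] = spec // a real agent would apply each object via the API server
	}
	return next
}

func main() {
	gitState := Manifest{"deployment/web": "image=web:1.4 replicas=3"}
	clusterState := Manifest{"deployment/web": "image=web:1.4 replicas=5"} // someone scaled by hand

	clusterState = reconcile(gitState, clusterState) // drift reverted
	clusterState = reconcile(gitState, clusterState) // now a no-op
	fmt.Println(clusterState)
}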
You, typically, don't combine the two. So, you've got one set of repositories, basically, for building your containers, and your CI will run off that, and ultimately push a container into a registry somewhere. Then you have a separate repo, which is your config repo, which declares what version of the containers you're going to run, how many you're going to run, how the services are bound to those containers, et cetera.
One area of concern is that, we're in an environment with DevOps where we started this conversation of trying to help teams to go as quickly as possible. But there's many instances where teams accidentally do things, but, nonetheless, that is a security issue. They deploy something manually into an environment, they forget about it, and that's something which is wrong. So, helping with this kind of policy as code pipeline, ensuring that everything goes through a set of standards could really help teams. And that's why we call it developer guard rails, because this is about helping the development team by providing automation around the outside, that helps them to go faster and relieves them from that mental concern of have they made any mistakes or errors. So, that's one form. And then the other form is the form, where you are going, David, which is really around security dependencies within software, a whole supply chain of concern. And what we can do there, by, again, having a set of standard scanners and policy checking, which ensures that everything is checked before it goes into the environment. That really helps to make sure that there are no security issues in the runtime deployment. Steve W., anything that I missed there? >> Yeah, well, I'll just say, I'll just go a little deeper on the technology bit. So, essentially, we have a library of policies, which get you started. Of course, you can modify those policies, write your own. The library is there just to get you going. So, as a change is made, typically, via, say, a GitHub action, the policy engine then kicks in and checks all those deployment files, all those YAML for Kubernetes, and looks for things that then are outside of policy. And if that's the case, then the action will fail, and that'll show up on the pull request. So, things like, are your containers coming from trusted sources? You're not just pulling in some random container from a public registry. You're actually using a trusted registry. Things like, are containers running as route, or are they running in privileged mode, which, again, it could be a security? But it's not just about security, it can also be about coding standards. Are the containers correctly annotated? Is the deployment correctly annotated? Does it have the annotation fields that we require for our coding standards? And it can also be about reliability. Does the deployment script have the health checks defined? Does it have a suitable replica account? So, a rolling update. We'll actually do a rolling update. You can't do a rolling update with only one replica. So, you can have all these sorts of checks and guards in there. And then finally, there's an admission controller that runs inside Kubernetes. So, if someone does try and squeeze through, and do something a little naughty, and go directly to the cluster, it's not going to happen, 'cause that admission controller is going to say, "Hey, no, that's a policy violation. "I'm not letting that in." So, it really just stops. It stops developers making mistakes. I know, I know, I've done development, and I've deployed things into Kubernetes, and haven't got the conflict quite right, and then it falls flat on its face. And you're sitting there scratching your head. And with the policy checks, then that wouldn't happen. 'Cause you would try and put something in that has a slightly iffy configuration, and it would spit it straight back out at you. >> So, obviously you have some sort of policy engine that you're you're relying on. 
But what is the user experience like? I mean, is this a screen that is reminiscent of the matrix with non-readable characters streaming down that only another machine can understand? What does this look like to the operator? >> Yeah, sure, so, we have a console, a web console, where developers and operators can use a set of predefined policies. And so that's the starting point. And we have a set of recommendations there and policies that you can just attach to your deployments. So, set of recommendations about different AWS resources, deployment types, EKS deployment types, different sets of standards that your enterprise might be following along with. So, that's one way of doing it. And then you can take those policies and start customizing them to your needs. And by using GitOps, what we're aiming for here is to bring both the application configuration, the environment configuration. We talked about this earlier, all of this being within Git. We're adding these policies within Git as well. So, for advanced users, they'll have everything that they need together in a single unit of change, your application, your definitions of how you want to run this application service, and the policies that you want it to follow, all together in Git. And then when there is some sort of policy violation on the other end of the pipeline, people can see where this policy is being violated, how it was violated. And then for a set of those, we try and automate by showing a pull request for the user about how they can fix this policy violation. So, try and make it as simple as possible. Because in many of these sorts of violations, if you're a busy developer, there'll be minor configuration details going against the configuration, and you just want to fix those really quickly. >> So Steve W., is that what the Mega Leaks policy engine is? >> Yes, that's the Mega Leaks policy engine. So, yes, it's a SaaS-based service that holds the actual policy engine and your library of policies. So, when your GitHub action runs, it goes and essentially makes a call across with the configuration and does the check and spits out any violation errors, if there are any. >> So, folks in this community really like to try things before they deploy them. Is there an opportunity for people to get a demo of this, get their hands on it? what's the best way to do that? >> The best way to do it is have a play with it. As an engineer, I just love getting my hands dirty with these sorts of things. So, yeah, you can go to the Mega Leaks website and get a 30-day free trial. You can spin yourself up a little, test cluster, and have a play. >> So, what's coming next? We had DevOps, and then DevSecOps, and now GitOps. What's next? Are we going to go back to all infrastructure on premises all the time, back to waterfall? Back to waterfall, "Hot Tub Time Machine?" What's the prediction? >> Well, I think the thing that you set out right at the start, actually, is the prediction. The difference between infrastructure and applications is steadily going away, as we try and be more dynamic in the way that we deploy. And for us with GitOps, I think we're... When we talk about operations, there's a lots of depth to what we mean about operations. So, I think there's lots of areas to explore how to bring operations into developer tooling with GitOps. So, that's, I think, certainly where Weaveworks will be focusing. >> Well, as an old infrastructure guy myself, I see this as vindication. Because infrastructure still matters, kids. 
And we need sophisticated ways to make sure that the proper infrastructure is applied. People are shocked to learn that even serverless application environments involve servers. So, I tell my 14-year-old son this regularly, he doesn't believe it, but it is what it is. Steve W., any final thoughts on this whole move towards GitOps and, specifically, the Weaveworks secret sauce and superpower. >> Yeah. It's all about (indistinct)... It's all about going as quickly as possible, but without tripping up. Being able to run fast, but without tripping over your shoe laces, which you forgot to tie up. And that's what the automation brings. It allows you to go quickly, does lots of things for you, and yeah, we try and stop you shooting yourself in the foot as you're going. >> Well, it's been fantastic talking to both of you today. For the audience's sake, I'm in California, and we have a gentleman in France, and a gentlemen in the UK. It's just the wonders of modern technology never cease. Thanks, again, Steve Waterworth, Steve George from Weaveworks. Thanks for coming on theCUBE for the AWS Startup Showcase. And to the rest of us, keep it right here for more action on theCUBE, your leader in tech coverage. (upbeat music)
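A footnote to this segment: the guard rails Steve Waterworth lists (trusted registries, no root containers, required annotations, sensible replica counts) boil down to small, mechanical checks run against each deployment before it is admitted. The Go sketch below is a hypothetical illustration of that idea, not the actual policy engine; the registry prefix and the "team" annotation key are invented for the example.

package main

import (
	"fmt"
	"strings"
)

// Deployment holds just the fields these example policies inspect.
type Deployment struct {
	Image       string
	RunAsRoot   bool
	Replicas    int
	Annotations map[string]string
}

// A Policy returns an empty string on success or a violation message.
type Policy func(Deployment) string

var policies = []Policy{
	func(d Deployment) string {
		if !strings.HasPrefix(d.Image, "registry.internal.example/") {
			return "image must come from the trusted registry"
		}
		return ""
	},
	func(d Deployment) string {
		if d.RunAsRoot {
			return "containers must not run as root"
		}
		return ""
	},
	func(d Deployment) string {
		if d.Replicas < 2 {
			return "replica count too low for a rolling update"
		}
		return ""
	},
	func(d Deployment) string {
		if d.Annotations["team"] == "" {
			return "missing required 'team' annotation"
		}
		return ""
	},
}

// check is what a CI step or an admission controller would call: any
// violation blocks the change before it reaches the cluster.
func check(d Deployment) []string {
	var violations []string
	for _, p := range policies {
		if msg := p(d); msg != "" {
			violations = append(violations, msg)
		}
	}
	return violations
}

func main() {
	d := Deployment{Image: "docker.io/random/app:latest", RunAsRoot: true, Replicas: 1}
	for _, v := range check(d) {
		fmt.Println("policy violation:", v)
	}
}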

Published Date : Jan 26 2022



Raziel Tabib & Dan Garfield, Codefresh | AWS Startup Showcase S2 E1 | Open Cloud Innovations


 

(bright music) >> Hi, everyone. Welcome to the CUBE's presentation of the AWS Startup Showcase around open cloud innovations. It's the season two episode one of the ongoing series covering exciting startups from the AWS ecosystem and talking about open source and innovation. I'm John Furrier, your host. Today, we're joined by two great guests. Dan Garfield, chief open source officer and co-founder of Codefresh IO, and Raziel Tabib, CEO and co-founder. Two co-founders in the middle of all the innovation. Gentlemen thanks for coming on. >> Thank you. >> So you guys have a great platform and as cloud native goes mainstream in the enterprise and for developers, the big topic is unification, end-to-end, horizontally scalable, leveraging data. All these things around agile that I call agile cloud next level. This is kind of what we're seeing. The CNCF is growing. You've seen KubeCon every year is more about these kinds of things. Words like orchestration, Kubernetes, container, security. All of those complexities are now at the center of making things easier for developers. This is a key value proposition and you guys at Codefresh are offering really the first enterprise delivery solution powered by Argo, which is an open source project. Again, open source driving really big changes. So let's get into it. And first of all, congratulations, and thanks for working on this project. What's so special about- >> Thank you for that. >> Argo the project, and why have you guys decided to build a platform on it, and where is this coming together? Take us through why this is so important. >> I think Argo has been a very fast growing open source project for multiple reasons. A, it has been built for the new way of building and deploying an application. It's cloud native. You mentioned Kubernetes becoming kind of the de facto way of running application. It's the de facto way to run automation and pipeline. But also Argo has been built from the ground up to the latest practices of how we deploy software. We deploy software now differently. We deploy it using a GitOps practice. We're deploying it using canary blue-green progressive deployment. And Argo has been built around these practices, around these technologies, and has been very much widely adopted by the community. In the past, the KubeCon you've mentioned, Argo was all over the place. And we were very glad to be working with the community to talk about what the next steps with Argo. >> Yeah, it's a really good point. I would like to just follow up on that because you see this being talked about. It always comes up, where is open source really outside of a pure contributors matter? And when you have corporations contributing, you seeing this has been the trend. You saw it with Lyft, with Envoy, companies doing more and more open source. This is part of a big collaboration. And again, this comes back down to this whole why it's relevant and why it's so special with Argo. Continue to talk about relationship because it's not just you guys, it's now community. >> Yeah, I can speak to that. The Argo project is something that we maintain in partnership with several other companies and really our relationship with it is that this is something that we're actively contributing to. This is something that we're helping build the roadmap on and planning the events around and all those kinds of things. And we're doing that because we really believe in this technology and we've built our platform on it. 
So when you deploy Codefresh, you're deploying technology that's built directly on Argo and is designed specifically to solve that problem that you spoke to at the top of the hour. We all want to deliver software faster. We all want to have fewer regressions. We want to have fewer breaking changes. We want software to be super reliable. We want to be comfortable with what we're doing. That's really why we picked Argo, because that technology, to Raziel's point, is delivered in this new way. It's delivered using GitOps. And that's a whole revolution and change in the way that people build and deploy software. And bringing cohesion into that experience is so critical to building the confidence that lets you actually deploy often and frequently and more. >> Dan, if you don't mind just expanding on that one point about the problem you solve, because to me, this has been kind of that evolution. It's almost like, yeah, there's been problems, plural, and opportunities that you saw with those in growing markets like this with DevOps and DevSecOps and now cloud native. What is the catalyst behind all of this? What was the epiphany behind it? How did it get so much momentum? What was it really doing under the covers? >> Well, it's a very simple and easy to use set of tools. And that's one of the big things: if you look at the ideas of GitOps, there's actually a foundation around this that we're part of, called OpenGitOps, a GitOps working group under the CNCF. And those principles of, I want to, yes, have my software defined as code, I want to have my infrastructure defined as code, and I need something monitoring my production runtimes and making sure that the declared desired state is always matching the actual state. Those principles have actually been around for a number of years. And with Kubernetes, we really unlocked an API that allowed us to start doing GitOps, and this is why we bring in Argo and you see the rise of Argo CD and other workflows, and what we've been doing is really because that technology has been unlocked now. So the ability to define how your software is supposed to run, and now your entire software delivery stack should run, all defined and then monitored and then kept in check using the GitOps operator. That critical unlock is what's really driving the massive adoption. And like Raziel said, Argo is the fastest growing and most popular open source project for delivering software. And it's not even close. >> Yeah, this is a really great point. And I want to get into that 'cause I want to know why, what you guys do on your platform versus the open source, and get that relationship settled. Before we get there, though, I want to get your reaction to some of the commentary in the industry, 'cause the GitOps trend has been exploding into new directions. I mean, it used to be a term about 10 years ago called big data. And at the beginning where data was all big data. Now it was DevOps revolution around data as well. But now you're hearing people talk about big code. Like, I mean, the code bases are becoming so huge. So as a developer, you're leveraging large open source code. This idea of the software delivery with existing code and new code just adds to more code. There's more code being developed every day. >> There is more code delivered every day. And I think that organizations realize today, in almost every industry, that they have to pace up how fast and how frequently they update their software delivery.
We're living in a world in which every aspect of our life has been disrupted by software, and organizations realize that they have to keep up and figure out how to deploy software more frequently and more reliably. And I think, as you mentioned, Kubernetes and cloud native really became the de facto way of running applications. I think most organizations have made that decision to move into cloud native. The second question after that is, okay, now we have all these applications running, how fast and how much more frequently can we deploy applications to cloud native? And that's the stage at which we're super excited about Argo and our platform, because that basically streamlines building applications for cloud native, deploying applications to cloud native, and so on. >> Yeah, and I think that highlights the business value. You're getting a lot of the conversations with businesses that say they want the modern application at cloud scale. And at the end of the day, it comes down to speed and security. So how fast can I get the app out? How well does it work? Does it run performant? And does it have security? And I don't want it slow. >> Exactly. Exactly. It kind of oversimplifies it, but that's kind of the net net. So when you look at Argo open source, what's been done and kind of where you guys are taking it. Can you talk about the differences between your enterprise version and the open source version and the interplay there, the relationship, the business model, how customers can play on both sides or understand the difference? >> Sure. >> Go ahead. >> Go ahead, Raziel. Okay, so I think Argo, as you mentioned, is probably the most advanced technology today to run both pipelines and deployments. There are Argo Events to trigger pipelines, Argo Workflows to run those pipelines, Argo CD for GitOps, and Argo Rollouts for canary and blue-green strategies. And the adoption is really exploding. Just as an anecdote, in December we worked with the community and organized the ArgoCon event, in which we had initially kind of thought about 500 attendees. And we ended up with more than 4,000 registrants, and the majority of them are coming from enterprises. Now, as we talked to the community during this conference, we tried to figure out, okay, what are the things that you're still missing that would help you take the benefit you get from Argo to the next level? A few things came up. One is, Argo is a great technology. However, Argo right now is fragmented into four projects. There is Argo Events. There is Argo Workflows. There is Argo CD. And there is Argo Rollouts. And there is a need to bring them all together into a solid platform, one solid runtime that can be easily installed and monitored, all of these in a single UI, in a single control plane. That's one aspect. The second is the scalability. Really being able to manage it centrally across multiple clusters, not in one cluster. And what we bring with the new one, and we're so excited about this platform, is exactly that. It's the first to get all of these four projects in one runtime and one control plane, but it also allows the community to run it across multiple clusters from one place, getting a solution, not just a technology. >> If I may add to that, the value of bringing these projects together, it provides so many insights. So when you're trying to figure out, there's some breaking change that has been made, but you don't necessarily know where it is because you have a lot of microservices that are out there.
You have a lot of teams working on it. By bringing all of these things together, we're able to look at all of the commits, all of the deployments, all of the Jira issues. All of these components combined together, so you really get a single view where you can see everything that's going on. And this is another element where when you're trying to deploy software at scale, you're trying to deliver it faster. People are getting a little bit overwhelmed because there are so many updates and so many different services and so many teams working that they're starting to miss that visibility. So this is what we want to bring into the ecosystem is we really want them that visibility to be super clear. And by bringing all of the Argo components, the Argo tools together, we're able to do that in a single dashboard. >> Yeah, so if I get this right, let me just double click on that because it sounds like, yeah, Argo's great. It's been organically growing, a lot of different components to it, but when you start getting into pushing code in an organization, you have, I call the old-school version control kind of vibe going on where it's like you don't know what's out there and how that affects the system as it's a distributed system, which cloud is. There are consequences when stuff breaks. So we all know that. Is that kind of where you guys are getting at? The challenge is actually the opportunity at the same time where it's all goodness, but then when you start looking at scale and the system impact, is that kind of where the open source and you guys pick up, is that right? >> This is one aspect. I think the second one is that again, when you look at each individual component of Argo, each provide a lot of value by itself. But when you sum it, the value of the sum is greater than the value of the individual. So when you're taking, really the events and workflow, Argo CD and Argo Rollout, and you bring them all together into single runtime. The value of its time is really automation all the way from code to cloud. It's not breaking into, there is like an automation for CI, there's an automation for CD, there's information for progressive delivery. It's actually automated all the way from the Git commit through the GitOps through the deployment strategy, and so on. And being able to monitor it and scale it in the enterprise scale. So, of course, it's helping enterprise and make Argo to some level more crucial for enterprise, if I may say, but second is really bringing all of these components together and get the outcome be greater than the individual parts. >> Yeah, that's a good point. Yeah, make it make a commercial grade, if you will, for enterprise who wants to have support and consistency and whatnot. What other problems are you solving? Dan, can you chime in on the whole, how you guys resolve some of these challenges for the enterprise? Because, again, some stability is key as well, but also the business benefit has got to be there for the development teams. >> Yeah. So there's several. One aspect is that the way that most people operate today is they essentially do a bunch of commands and engage with systems. And then hopefully at the end, they write those things to Git. And this is a little bit backwards if you think about it because there's a situation where you can end up with things in production that were never checked in, or maybe somebody is operating and they're making a change. 
If we look at most of the downtime that's occurred over the last two years, it's because people have flubbed a key when they were typing in a command or something like that. The way that this system works is that we provide an interface, both the CLI and the GUI, where those operations interactions actually end with a Git commit. So rather than doing an operation and then hopefully committing to Git, most of the operations are actually done first in Git, or if there is something that can't be done first in Git, it's maybe bootstrapped and then committed to Git as part of a single command. So this means you have end-to-end traceability. It also means your auditability is way better. And then the second, the other component that we're adding is that security and scale layer. So we are securing these things, we're building in single sign-on, and all those robust security things you would expect to have across all these instances. So many organizations, when they're building their software delivery tools, they have to deploy instances in many locations. And so this is how you end up with companies that have 5,000 instances that are all out of date and insecure. Well with Codefresh, if you need to deploy a component onto this end cluster or something like that, you may have thousands of them. All of those are monitored and taken care of in a centralized way, so I can do all of my updates at once. I can make sure they're all up to date. I'm not running with a bunch of known CVEs or something like that and it's clear. The components are also designed in an architectural way. So that only the information that is needed is ever passed out. So I can have a cluster that is remotely managed, that checks out code, that the control plane never has access to. So this hybrid model has been really popular with our customers. We have customers in healthcare, we have customers in defense and in financial services, all these regulated industries. The flow of information is really critical. So this hybrid model allows you to deploy something that has the ease of a SaaS solution, but has the security of an on-prem solution while being centrally managed and easy to take care of. >> Yeah, it's a platform. It's what it is. It's not a tool. It's not a tool anymore. It's a platform. >> Exactly. >> I think the foundational aspect of this is critical. And you mentioned automation before. If you're going to go end-to-end automation, you have some stuff in the system that whether it hasn't been checked in yet. I mean, we know what this leads to. Disaster or a lot of troubleshooting and disruption. That's what it seems to solve. Am I getting that right? Is that right? >> Yeah. >> Go ahead. >> Yeah, it helps automate the whole process. But as you say, it's really like identify what needs not to be going all the way to production and really kind of avoid vulnerabilities or any flaws in the software. So it automates everything, but in a way that the automation can identify issues and avoid them from coming into the production. >> Well, great stuff here. I've got to ask you guys now that you've got that settled. It's really, I see the value there, how you guys are letting it grow organically and with Argo and then building that platform for businesses and developers. It's really cool. And I see the foundational value there. It just only gets better. How you guys contributing back to open source and helping the wider GitOps and Argo communities? 
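One way to picture Dan's "operations end with a Git commit" point: instead of running an imperative command against the cluster, the operation edits the config repo and commits, and the GitOps operator applies the change on its next pass. The Go sketch below is a hypothetical illustration of that pattern (the repo layout and file name are invented), but it shows why every change automatically leaves an audit trail in Git.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// scaleViaGit records a scale operation as a commit in the config repo
// instead of touching the cluster directly. The GitOps operator picks up
// the change on its next reconciliation and applies it to the cluster.
func scaleViaGit(repoPath, app string, replicas int) error {
	file := filepath.Join(repoPath, "apps", app, "replicas.txt")
	if err := os.WriteFile(file, []byte(fmt.Sprintf("%d\n", replicas)), 0o644); err != nil {
		return err
	}
	msg := fmt.Sprintf("scale %s to %d replicas", app, replicas)
	for _, args := range [][]string{{"add", "."}, {"commit", "-m", msg}} {
		cmd := exec.Command("git", args...)
		cmd.Dir = repoPath
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("git %v failed: %v\n%s", args, err, out)
		}
	}
	// A push (and a pull request, for reviewed changes) would follow here;
	// the point is that the cluster itself is never edited by hand.
	return nil
}

func main() {
	if err := scaleViaGit("./config-repo", "web", 5); err != nil {
		fmt.Println(err)
	}
}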
Because this is, again, the rising tide that's bringing all the boats into the harbor, so to speak. So this is a good trend and people will acknowledge that. So how's this going to work as you guys work back into the open source community? >> So both myself and the other maintainers work closely with the community on the roadmap, making sure that we're addressing issues. I think if you look in the last quarter, we probably have upwards of 40 or 50 different issues that we've solved in terms of fixing a bug or adding features or things like that. So making sure that these tools, which are really the undergirding components of our platform, they have to be really robust. They have to be really strong. And so we're contributing those things back. And then when it comes to the scalability side, these are things that we can build into the platform. So the value should be really clear. I can deploy this, I can manage it myself, I can build tools on top of it. And if I want to start doing it at scale, maybe I want support. That's when I really am going to go to Codefresh and start saying, let's get the enterprise-level platform. >> Awesome. GitOps, a lot of people like some naysayers may say, Hey, it's the latest fad. Is it here to stay? We were talking about big code earlier. GitOps, obviously seeing open source. Just every year, just get better and better and growth. I mean, I remember when I was breaking into the business, you have to sell under the table. Now it's all free and open and getting better every year. Just the growth of code. Is GitOps a fad? How do you talk to people who say that? I mean, besides slapping around saying wake up. I mean, how do you guys address that when people say it's just the latest fad? >> So if I may comment here, and Dan feel free to chime in, I think that GitOps is a continuation of a trend that everything is source code. As a developer, many years ago myself and still writing code, we always wrote code, and code was the source of truth; that's where we write the code. But now code is actually also describing how our application is running in production. And we've already seen kind of where it goes next. We also hear about infrastructure as code. So now we're actually storing in code the way the infrastructure should be. And I think that the benefit of storing all this configuration in source code, which has been built to track changes and to enable rollback, that is just going to be here to stay. And I think that's the new way of doing things. >> All right, gentlemen, great. Closing statements. Please share an update on the company. What's it all about? What event have you got coming? I know you got a big launch. Can you take us through? Take us home. >> Join us on February 1st, we're going to be launching the Codefresh software delivery platform. Raziel and I will be hosting the event. We've got a number of customers, a number of members of the community who are going to be joining us to show off that platform. So you're going to be able to see it in action, see how the features work, and understand the value of it. And you'll see how it works with GitOps. You'll see how it helps you deliver software at scale. That's February 1st. You can get information at codefresh.io. >> Raziel, Dan, thanks for coming on. >> Thank you. >> Pretty good showcase. Thanks for sharing. Congratulations. Great venture. Loved the approach. Love the growth in cloud native, and you guys are sure on the cutting edge.
Fresh code, people love fresh code, codefresh.io. Thanks for coming on. >> Thank you. Thank you. >> Okay, this is the AWS Startup Showcase Open Cloud Innovations. Cloud scale, software, data. That's the future of modern applications being developed, changing the game to the next level. This is the CUBE's coverage season two episode one of the ongoing AWS Startup series here in theCUBE.
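Raziel mentioned Argo Rollouts and canary strategies in passing; for readers new to progressive delivery, the core idea is a staged traffic-shift schedule with an observation window between steps. The Go sketch below is a generic, hypothetical illustration of such a schedule, not Argo Rollouts' actual configuration format.

package main

import (
	"fmt"
	"time"
)

// step is one stage of a canary rollout: send this share of traffic to
// the new version, then watch metrics for the hold period before moving on.
type step struct {
	weight int           // percent of traffic routed to the canary
	hold   time.Duration // observation window before the next step
}

// A typical schedule starts with a small blast radius and widens only if
// nothing regresses; a real controller would gate each step on metrics
// and roll back automatically on failure.
var schedule = []step{
	{weight: 10, hold: 5 * time.Minute},
	{weight: 25, hold: 10 * time.Minute},
	{weight: 50, hold: 10 * time.Minute},
	{weight: 100, hold: 0},
}

func main() {
	for _, s := range schedule {
		fmt.Printf("route %d%% of traffic to the canary, observe for %v\n", s.weight, s.hold)
		// time.Sleep(s.hold) // a demo driver would actually wait here
	}
}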

Published Date : Jan 26 2022



James Watters, VMware | AWS re:Invent 2021


 

(upbeat music) >> Welcome back everyone to theCUBE's continuous coverage of AWS re:Invent 2021. I'm John Furrier, your host of theCUBE. We're here with James Watters, CTO of Modern Applications at VMware here to talk about the big Tanzu cloud native application wave, the modernization's here. James, great to see you. Thanks for coming on. >> Hey John, great to have you back on. And really excited about re:Invent this year. And I've been watching your coverage of it. There's lots of exciting stuff going on in this space. >> Awesome. Well, James, you've been riding the wave of, I would call cloud 1.0, 2.0 what do you want to call it, the initial wave of cloud where the advent of replatforming is there. You know all these benefits and things are moving fast. Things are being developed. A lot of endeavors, things are tracking. Some are kicking, Kubernetes kicks in, and now the big story is over the past year and a half. Certainly the pandemic highlighted is this big wave that's hitting now, which is the real, the modernization of the enterprise, the modernization of software development. And even Amazon was saying that in one of our talks that the sovereign life cycles over it should be completely put away to bed. And that DevOps is truly here. And you add security, you got DevSecOps. So an entirely new, large scale, heavy use of data, new methodologies are all hitting right now. And if you're not on that wave your driftwood, what's your take? >> Oh, I think you're dead right, John, and you know, kind of the first 10 years of working on this for sort of proving that the microservices, the container, the declared of automation, the DevOps patterns were the future. And I think everyone's agreed now. And I think DevSecOps and the trends around app modernization are really around bringing that to scale for enterprises. So the conversations I tend to be having are, Hey, you've done a little Kubernetes. You've done some modern apps and APIs, but how do you really scale this across your enterprise? That's what I think is exciting today. And that's what we're talking about. Some of the tools we're bringing to Amazon to help people achieve faster, consumption, better scale, more security. >> You know, one of the things about VMware that's been impressive over the years is that on the wave of IT, they already had great operational install base. They did a deal with Amazon Ragu did that. I think 2016, that kind of cleared the air. They're not going to do their own cloud or they have cloud efforts kind of solidifies that. And then incomes, Kubernetes, and then you saw a completely different cloud native wave coming in with the Tanzu, the Heptio acquisition. And since then a lot's been done. Can you just take us through the Tanzu evolution because I think this is a cornerstone of what's happening right now. >> Yeah, that's a great question, John. I think that the emergence of Kubernetes as a common set of APIs that every cloud and almost every infrastructure agrees on was a huge one. And the way I talked to our clients about is that VMware is doing a couple of things in this space. The first is that we're recognizing that as an infrastructure or baking Kubernetes into every vSphere, be it vSphere on-prem, be it VMC on Amazon. You're just going to find Kubernetes is a big part of each year. So that's kind of a big step one, but it's in some ways the same way that Amazon is doing with EKS and Azure is doing with AKS, but like every infrastructure provider is bringing Kubernetes everywhere. 
And then that kind of unleashes this really exciting moment where you've got this global control plane that you can program to be your DevSecOps platform. And Kubernetes has this incredible model of extensibility where you can add CRDs and program, right against the Kubernetes APIs with your additional features and functions you want your DevSecOps pipeline. And so it's created this opportunity for Tanzu to kind of have then a global control plane, which we call Tanzu Mission Control to bring all of those Kubernetes running on different clouds together. And then the last thing that we'll talk about a little bit more is this Tanzu Application Platform, which is bringing a developer experience to Kubernetes. So that you're not always starting with what I like to say, like, oh, I have Git, I have Kubernetes, am I done? There's a lot more to the story than that. >> I want to get to this Tanzu Application Platform on EKS. I think that's a big story at VMware. We've seen that, but before we do that for the folks out there watching who are like, I'm now seeing this, whether they're young, new to the industry or enterprises who have replatforming or refactoring, trying to understand what is a modern application. So give us the definition in your words, what is a modern application? >> You know, John, it's a great question. And I tend to start with why and like, hey, how did we get here? And you, you and I both, I think, used to work for the bigger iron vendors back in the day. And we've seen the age of the big box Silicon Valley. I don't know, I worked at Sun just across the aisle here and basically we'd sell you a big box and then once or twice a year, you'd change the software on it. And so in a sense, like there was no chance to do user-oriented design or any of these things. Like you kind of got what you got and you hope to scale it. And then modern applications have been much more of the age of like what you might say, like Instagram or some of these modern apps that are very user-oriented and how you're changing that user interface that user design might change every week based on user feedback. And you're constantly using big data to adjust that modern app experience. And so modern apps to me are inherently iterative and inherently scalable and amenable to change. And that's where the 12 factor application manifesto was written, a blog was written a decade ago, basically saying here's how you can start to design apps to be constantly upgradable. So to me, modern apps, 12 of factors, one of them Kubernetes compatible, but the real point is that they should be flexible to be constantly iterated on maybe at least once a week at a minimum and designed and engineered to do that. And that takes them away from the old vertically scaled apps that kind of ran on 172 processors that you would infrequently update in the past. Those are what you might call like cloud apps. Is that helpful? >> Yeah, totally helpful. And by the way, those old iron vendors, they're now called the on-premise vendors and, you know, HPE, Dell and whatnot, IBM. But the thing about the cloud is, is that you have the true infrastructure as code happening. It's happened, it's happening, but faster and better and greater the goodness there. So you got DevSecOps, which is just DevOps with security. So DevSecOps is the standard now that everyone's shooting for. So what that means is I'm a developer, I just want to write code, the infrastructure got to work for me. So things like Lambda functions are all great things. 
So assuming that there's going to be this now programmable layer for developers just to do stuff. What is, in context to that need, what is the Tanzu Application Platform about and how does it work? >> Yeah, that's a great question, John. So once you have Kubernetes, you have this abundance of programmable, inner infrastructure resources. You can do almost anything with it, right? Like you can run machine learning workflows, you can run microservices, you can build APIs, you can import legacy apps to it, but it doesn't come out of the box with a set of application patterns and a set of controllers that are built for just, you know, modern apps. It comes with sort of a lot of flexibility and it expects you to understand a pretty broad surface area of APIs. So what we're doing is we're following in the footsteps of companies like Netflix and Uber, et cetera, all of which built kind of a developer platform on top of their Kubernetes infrastructure to say, here's your more templatized path to production. So you don't have to configure everything. You're just changing the right parts of the application. And we kind of go through three steps. The first is an application template that says, here's how to build a streaming app on Kubernetes, click here, and you'll get in your version control and we'll build a Kubernetes manifest for it. Two, is an automated containerization, which is we'll take your app and auto create a container for it so that we know it's secure and you can't make a mistake. And then three is that it will auto detect your application and build a Kubernetes deployment for it so that you can deploy it to Kubernetes in a reliable way. We're basically trying to reduce the burden on the developer from having to understand everything about Kubernetes, to really understanding their domain of the application. Does that make sense? >> Yeah, and this kind of is inline, you mentioned Netflix early on. They were one of the pioneers in inside AWS, but they had the full hyperscaler developers. They had those early hardcore devs that are like unicorns. No, you can't hire these people. They're just not many enough in the world. So the world's becoming, I won't say democratization, that's an overused word, but what we're getting to is if I get this right, you're saying you're going to eliminate the heavy lifting, the boring mundane stuff. >> Yeah, even at Netflix as is great of a developers they have, they still built kind of a microservices or an application platform on top of AWS. And I think that's true of Kubernetes today, which if you go to a Kubernetes conference, you'll often see, don't expose Kubernetes to developers. So tons of application platforms starts to really solve that question. What do you expose to a developer when they want to consume Kubernetes? >> So let's ask you, I know you do a lot of customer visits, that's one of the jobs that make you go out in the field which you like doing and working backwards on the customers has been in the DNA of VMware for years. What is the big narrative with the customers? What's their pain point? How else has the pandemics shown them projects that are working and not working, and they want to come out of it with a growth strategy. VMware is now an independent company. You guys got the platform, what are the customers doing with it? >> Well, I'll give you one example. You know, I went out and I was chatting with the retailer, had seen their online sales goes from one billion to like three billion during the pandemic. 
And they had been using kind of packaged shopping cart software before, like a basic online store that they bought and configured. And they realized they needed to get great at modern apps to keep up with customer demand. And so I would say in general, we've seen the drive, the need for modern apps and digital transformation is just really skyrocketing, and everyone's paying attention to it. And then I think they're looking for a trusted partner, and they're debating, do we build it all in-house or do we turn to a partner that can help us build this above the cloud? And I think for the people that want an enterprise trusted brand that'll have a lot of engineering talent behind it, there's been strong interest in Tanzu. And I think the big message we're trying to get out is that Tanzu can not only help you in your on-prem infrastructure, but it can also really help you on public cloud. And I think people are surprised by just how much. >> It's just the common thread I see, and that point is right on: these companies that don't digitize their business and build an application for their customer are going to get taken away by a startup. I mean, we've seen, it's so easy: if you don't have an app for that, you're out of business. I mean, this is like, no, no, it's not like maybe we should do the cloud, let's get proactive. Pretty much it's critical path now for companies. So I'm sure you agree with that, but what's the progress of most of the enterprises? What percentage do you think are having this realization? >> I would say at least 70, 80%, if not more, are there now, and 10 years ago, I used to kind of have to tell stories, like, you know, some startup's going to come along and they might disrupt you, and people kind of give you that like, yeah, yeah, yeah. You know, I get it. And now it's sort of like, hey, someone's already in our market with an API. Tell me how to build API first apps, we need to compete. And that's the difference in the strategic conversation kind of post pandemic and post, you know, the last 10 years. >> All right, final question for you 'cause this is a great thread. As we've seen, having a web interface is not good enough, to your point. You got to have an application that they're engaging with, with all the modern capabilities, because the need's there, the expectation from the customers is there. What new things are you seeing beyond mobile that are coming down the pike for enterprises, obviously web to mobile, mobile to what? What's next? >> I think the thing that's interesting is there is a bigger push to say more and more of what we do should be an API, both internally, like, hey, other teams might want to consume some of these services as a well-formed API. I call it kind of like Stripe envy. Like you look at all these companies, they're like, hey, Stripe's worth a hundred billion dollars now because they built a great API. What about us? And so I've seen a lot of industries, from automotive to of course financial services and others, that are saying, what if we gave our developers internally great APIs? And what if we also exposed those APIs externally? We could get a lot more rapid, fast-moving business than the traditional model we might've had in the past. >> It's interesting, you know, commoditizing and automating away infrastructure or software or capable workflows is actually normal. And if you can unify that in a way that's just better, I mean, you have a lower cost structure, but the value doesn't go away, right?
So I think a lot of this comes down to, beauty's in the eye of the beholder. I mean, that's how DevSecOps works. I mean, it's agile, it's faster, but you still have to achieve the value, and the net is lower cost. What's your take on that? >> Well, I think you're dead right, John. And I think this is what was surprising about Stripe: it was possible before Stripe to go out as a developer and kind of pull together a backend that did payments, but boy, it was hard. And I think that's the same thing with kind of this Tanzu Application Platform and the developer experience focus, is people are realizing they can't hire enough developers. So this is the other thing that's happened during the pandemic and the great resignation, if you will: the war for talent is on. And you know, when I talk to a customer, it's like, we might be able to help you even 30% with your developer productivity; that's like one out of four developers you might not have to recruit. They're all in. And so I think that API-first model and the developer experience model are the same thing, which is like, it doesn't have to just be possible. It should be excellent. >> Well, great insight, learning a lot. Of course, we should move to theCUBE API and we'll plug into your applications. We're here in the studio with our API, James. Great to have you on. Final word, what's your take on this, the big story for re:Invent? If you had to summarize this year's re:Invent going into 2022, what would you say is happening in this industry right now? >> You know, I'm just super excited about the EKS market and how fast it's growing. We're seeing EKS in a lot of places. We're super excited about helping EKS customers scale. And I think it's great to see Amazon adopting that standard API from Kubernetes. And I think it's going to be just awesome to watch the creativity the industry is going to have around it. >> Well, great insight, thanks for coming on. And again, we'll work on that CUBE API for you. The virtualization of theCUBE is here. We're virtual, but we could be in-person, and hope to see you in person soon. Thanks for coming on. >> You too, John, thank you. >> Okay, theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host. Thanks for watching. (upbeat music)
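To make the "templatized path to production" James describes a little more concrete, here is a rough Python sketch of the kind of Kubernetes Deployment an application platform might generate on a developer's behalf. The app name, image, and port are placeholder assumptions for illustration; this is not Tanzu Application Platform's actual output or API, just the official Kubernetes Python client assembling a minimal manifest.

```python
# Sketch: programmatically building the Deployment a templating platform might emit.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

def build_deployment(app_name: str, image: str, port: int = 8080) -> client.V1Deployment:
    """Assemble a minimal Deployment object for a containerized app."""
    container = client.V1Container(
        name=app_name,
        image=image,
        ports=[client.V1ContainerPort(container_port=port)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": app_name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=app_name),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": app_name}),
            template=pod_template,
        ),
    )

if __name__ == "__main__":
    # Uses whatever cluster the local kubeconfig points at (EKS, on-prem, etc.).
    config.load_kube_config()
    deployment = build_deployment("streaming-app", "registry.example.com/streaming-app:1.0")
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point of the platform approach described in the interview is that a developer never writes this by hand; the template and build service produce it, and the developer only changes the parts that are specific to their application.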

Published Date : Dec 1 2021


Stu Miniman, Red Hat | KubeCon 2021 Preview


 

in the beginning there were mainframes a highly centralized secure command and control environment open systems brought a spate of innovation innovations that were powered by machines servers storage arrays networks that had to be configured deployed and managed by specialists virtualization that made that simpler but it was still a machine centric world the cloud devops and importantly containers created an inflection point in the industry where no longer do developers have to do a handoff to an infrastructure guru to deploy and often reconfigure systems which could cause other problems containers essentially codified the infrastructure to the point where developers could now be responsible for the full stack with consistency that allows stretching if you will of applications between on-prem to the cloud across clouds and out to the edge kubernetes in particular has enabled organizations to host applications and containers with automation so you can now deploy as many instances of your application as required and communicate between different services used by those applications in a consistent manner manner what this does is enables rolling updates security patches in a run anywhere environment that is changing how organizations build and manage their applications hello and welcome to this cube conversation and preview to kubecon cloud nativecon north america 2021 i'm pleased to welcome my friend and guest stu miniman director of market insights for cloud platforms at red hat stu man great to see you so good to see you dave thanks for having me you're very welcome so you heard my little spiel up front a little narrative what are the big trends that you're seeing that you're watching that you think people should know about they're important yeah well well dave i'm so glad you started out talking about the application because dave i mean you know my background your background very much too is started in infrastructure and for so long we talked about well let's dig different increments that we talk about the infrastructure but there was that huge divide between the people that run the infrastructure and the people that build and own the applications and when agile and devops came out we talked about not throwing things over the wall but when we look at containers and kubernetes really what it is is an application to build our application to modernize our application to run our application as you said they they have to be more that that right once go anywhere has been something we've wanted for a while and from a developer viewpoint i haven't wanted to think about the infrastructure so we want to enable that we want developers to be able to do their thing what we've done at red hat is try to have that consistency in every environment because kubernetes is only a single a very thin layer there's lots that needs to be done on top of that but one of the biggest trends is from an application standpoint the same thing that we've seen in other environments dave when you say okay well what apps did you have well you know it's great to say i have the cool micro service new stuff but what about older applications what about modernizing things can i lift things over can i have a broader spectrum of applications and yes that's where we are with kubernetes we don't just have stateless applications that are you know written in this new modern way we have a broad spectrum and there's another word that i really keyed off of in your intro talking about automation dave if you talk about scale and you talk 
about automation that's what container was built for if you look at what you know the the predecessor kubernetes was borg at google and if you think about just building things at scale and building things for with automation at their core that's what we've done and that's where this ecosystem is building towards so not saying everybody needs to be google but when you start talking about ai applications when you start talking about different ways to really have automation built into your environment this is where containers and kubernetes really shines because you know that's where we've really gone beyond human scale dave and we've gone to that machine scale so we need to make sure not just to remove humans to remove errors but to be able to have that agility and flexibility and scale which is what offers in this space so all the cool kids of course they want to develop in the cloud but i feel like for every app that's developed in the cloud there's like 10 on prem that are screaming to be modernized and we have a we have a chart on this but so what kind of applications are you seeing going in to containers and kubernetes yeah so so two two charts here for the survey we actually did for kubecon europe leading up to it the one on the left talks about the data is it stateless applications is it stateful applications well what do you know dave it's a mix of both of those right you'll remember dave in the virtualization days it took us about a decade to solve those storage and networking things how do we make sure that things really run at the virtual machine layer how do we have things like moving all over the place and still not break the connection that we had there that was a lot of hard work that we as an industry did well you know here we are six seven years into kubernetes we've solved a lot of those same issues so storage and networking work much better today in kubernetes environments than it did in the early days it started out oh stateless applications but if you look at the data on the second side what kind of applications are there the answer dave is yes you want your cool new modern databases absolutely ai and ml absolutely uh you know through kind of your isv you know more traditional applications the the answer is yes so customers are doing a whole lot of it when i'm meeting with customers one of the first questions we always have dave we've worked on silo busting for for many decades in this industry but if you talk to the infrastructure team and you ask them well what apps are you putting on there if they don't have a good answer the first thing we do is hey you really need to get the developers in the room you really need to understand this because if you stand up a platform just because kubernetes is cool and it's great it helps you build your resume you're not going to have success down the road you want to make sure they're involved up front understand what the requirements so you know kubernetes uh that one of the joke is you know containers and kubernetes add some magic and you know yippee you win it's like well there's a little bit more to that uh to actually have it work you mentioned it took decade plus to actually you know kind of work it out in the virtualization days i mean you remember the api you know stuff and we have the scars from their revenues right exactly but it's interesting when i look at this chart that you know because like you said it started off it's kind of stateless database yes all kinds of applications but database is number one and so you've 
got a lot of stateful applications enterprise apps security sensitive i mean everything's security sensitive today but hyper security sensitive so do you feel like that time frame relative to you know two decades ago is going to be compressed yes it seems like it's compressing quite rapidly absolutely the cncf always puts out a survey around the event as to where adoption is it's a little bit of a self-selecting for the community but containers and kubernetes broad adoption we've really not only crossed the chasm we're into the you know solid majority of of adoption here and yeah the the databases i mean dave you've covered things like the postgres uh world uh companies like crunchy data uh and some of these modern databases are really built for this type of environment and as you said they shouldn't have to think as much about okay i'm in a cloud or i'm in a different cloud this containerized platform that for applications can live in a lot of different places and that goes to kind of what we're seeing changing in the in the infrastructure world uh over the last couple years i'm glad to mention that a database i was interviewing josh uh at the postgres event and he was explaining to me how far kubernetes has actually come and and how much you know more trustworthy it is today still still some gaps but much different than even two or three years ago yeah i guess one of the highlights interesting at the kubecon europe uh there was the general availability of both the pipelines project and the get ops project it was it's argo cd is the project for for get ops and when that went ga for red hat we actually have that built into openshift at ga and not only was it ready to go we actually had a few customers that were ready to say hey we're using this and we're using the production so we had xa insurance one of the largest payers in the globe and the largest bank in turkey uh were two of the ones that we had saying hey we're using this for the audience if you're not familiar with git ops it's everything we use github as the repository of records so that this is kind of if you think about the old days we had the gold cd or the gold server well we do that for our entire stack that whole infrastructure's code that we've been talking about so many years but it will manage that for us so i patch it at the github level and it will enforce what i have in my environment so if somebody oh wait let me make a change no it's constantly validating things at github so it keeps it rather regimented so we've had uh as i mentioned a couple of customers we've seen a lot of interest in the public sector space because of course dave they're very concerned around security and patching and access and we want to keep that least access necessary so if we can keep that at the github level that's one of the things that will help your environment it really ties into the whole kind of git ops ai ops modern environment so it really ties all of it together as to kind of the the culture of the application and the infrastructure so your files your config files your policies same api same console that is how you get the scale yeah absolutely it's we we don't want the people to have to manage that as much you can let them focus on where they're going to add value to the business so let's talk about cloud cloud the definition of cloud is changing the cloud is expanding it's going on-prem there's hybrid connections to to a cloud or multiple clouds across clouds now as seems to be becoming more real we could talk about that and then 
maybe eventually out to the edge they're all real in their own right but how much is actually being connected together is something that i'm interested in but what are you seeing there what role is kubernetes playing yeah so first you talked about where applications live the latest data i've seen from kind of the the industry watchers is what are we dave 20 25 of applications are in the cloud that means there's a lot still in the data center if i look at open shift customers yes do we have a lot of them in the data center but then they are also using the public cloud so we have deep partnerships with amazon and azure to do public services in the cloud and our value is we give consistency across all of those environments so are using data center yes most customers still have data center do you have one or more clouds absolutely you know i used to love the andy jassy line um you know multi-cloud doesn't mean that you spread evenly across all the clouds most customers i talk to they have a primary provider that they partner with but things change over time we've seen plenty of customers go two or three years in and say well i have a strategic initiative sometimes they make an acquisition and they'll do another cloud or you know there's lots of factors why i might be doing more than one cloud there's certain industries where basically you have to have relationships with multiple vendors or there's there's regulations that you need to be concerned about so the answer is yes what we've been talking about more than a decade at red hat is open hybrid cloud and what does that mean today you might have not have planned it out but you're hybrid today and what are you going to be in the next decade you're going to be even more hybrid so edge if we talk about it everyone is talking about one of the biggest trends here is how does kubernetes go out to the edge even more that consistency message that i talked about where does openshift live openshift lives anywhere that red hat enterprise linux lives so rel am i going to have linux out of these small environments without a lot of resources what else are you going to have other than linux that's going to be the foundation of what you have so if i can have management and consistency that push out to all of those environments and we've been building out a portfolio something that you'll see us talking about more at kubecon in la is single node openshift so this is a really small footprint openshift but still have the consistency to work across all these environments and we've had different footprints basically to be able to do edge and remote offices whether you're talking from a service provider out to a full customer premise data center but there's there's a lot going on in the edge space we actually have we already have a public use case with verizon who's doing some of the ai use cases i'm sure you can picture with verizon being such a large telco the touch points that they have not only at the service provider but to their customer environments and openshift is the platform for enabling that innovation i mean if i had a big application portfolio on-prem you know legacy company with you know 100-year history obviously i'm going to be doing some stuff in the cloud i would be building some kind of abstraction layer that would could obviously modernize my on-premise state i would want to i would probably start with amazon i'd want to take advantage of aws cloud native tooling but i would absolutely be doing the same thing in azure and google and i would 
want to build my own cloud right and and and service my customers or or my company have people log into that cloud hide the underlying complexity of the technology and just simplify everything up level it and build a stack around that and probably build it on on openshift why not and of course kubernetes but there are alternatives there's there's eks anywhere for example which presumably is a competitor what do you how is that impacting the marketplace yeah so so dave as you said everybody is kind of extending beyond where they live so microsoft azure has their arc offering google has anthos and amazon was the last one i mean dave you'll remember this when we talked about hybrid and multi-cloud for a bunch of years it was like amazon doesn't talk about hybrid or multi-cloud and you know back when i sat on the analyst side i was like well you can't talk about hybrid and multi-cloud without talking about amazon so they've now uh eks anywhere something they announced back at re invent it just went generally available recently and so they have a distribution of kubernetes that you can use on your own so you could have completely disconnected in your data center running only on vmware is the only way that they support it today and they have in beta there's something called an eks connector so if you want it to be managed from the cloud and have someone more of that consistency they have the way to do that they've had eks which is their kubernetes service in amazon for a bunch of years but as a friend of the program corey quinn says there's actually 17 different ways to run containers in amazon today that's supported by amazon and you laugh at it but you know dave it's it's no different you know remember the storage world okay how many different storage products did emc have do you know how many compute and storage products amazon have they have a lot growing so one of those offerings that they have natively in the console is red hat openshift service for aws so is eksd a competitor well if you're an amazon customer and you want everything amazon and you want to use their environment in a hybrid environment yes you can do that part of the strategy for amazon is outpost we've got on our roadmap to be able to support openshift on outposts so you know we look at our our positioning is we are much more than kubernetes if you talk about the stack of tooling that we build on top of it we've done a real lot to make sure that developers have the tooling that they need from an amazon environment it's just the kubernetes piece it's a in the cloud it's a managed control plane in your own data center it's here's a kubernetes distribution good luck with it if you want monitoring and observability if you want more security if you want all these other pieces you need to build them on top of that as opposed to openshift gives you a full application development platform you know forrester wave we were you know far and away the top and to the right on on that uh spectrum with the leading position for both developers and operators so you know great to see amazon you know i i i hate to say they're like validating something that we do but look everybody's going to do it's true this is true i know that's the marketing line but and and i hate to do the the marketing line but um it's you will you see everyone rolls out their pieces and you say what is the game that they are playing it's amazon wants you to consume as much of their services as you can from a red hat standpoint it's well everywhere that rel can go we can 
go so openshift can live a lot of places we are going to give you the best experience in your data center in amazon in azure in google in your hosted in the edge we're going to work in all of those environments and we've got years of experience with thousands of production employments like in the data center eks anywhere sitting on top of vsphere as far as i know we have at red hat the most production kubernetes deployments on vmware are openshift actually at vmworld i'll be talking about i'm i'm on a panel talking about openshift on vsphere with vmware so long deep partnership that we've had there no one can speak to the breadth and depth of uh what we've done there uh what's the little line amazon always says there's no compression algorithm for experience well i like it okay but that's why i like your edge strategy because i've said many times the edge is going to be won by developers it's not going to be won by taking a you know x86 box throwing it over the fence and saying okay we got edge and i think you know that's tongue-in-cheek i think that the traditional enterprise hardware vendors are understanding that but they're not in a great position with developers you know maybe cisco a little bit with devnet but generally speaking you know vmware obviously uh it always has been struggling the edge is you know the challenge with the edge is you always have to look through it as to what your perspective is so we have a long and deep relationship with a lot of the telecommunications providers uh people will disparage openstack some but that's actually the solutions that we've sold the most into are network function virtualization for the telco and a lot of them have followed what they worked with us on openstack and continued that into openshift and verizon being one of those proof points you've seen my etr data and i tell you openstack keeps popping up and when you dig into it it's oh that's telco there may not be maybe there's not a region there and it's telcos developing their own cloud essentially and you know they're monetizing it so let's talk about um a cncf the ecosystem uh it's we have another slide on this if you guys wouldn't mind bringing it up i mean it's a complicated matter right you got here's the picture i mean it's like you can't read it because there's just so many people that wants to stop this from becoming you know kind of openstack too yeah that's a great question so chris wright our cto i thought really boiled it down really well one of the big problems with openstack is we were building a complete stack so when they said oh there's all these projects it's like okay well we're going to create a big tent and under that big tent you have to have all of these pieces and they all need to work together and while they were modular projects i needed to have that full stack validated and managing and maintaining that was a nightmare what is the cncf landscape it is you know what doesn't hundred more projects that are independent of what they had so yes kubernetes is the one that gets the most attention but takes something like service mesh service mesh has been around for a few years it's hot we're still early on the adoption trend service mesh works with kubernetes but it isn't limited to kubernetes it's one of those venn diagram it works with it but you can also work with my virtual environment it works in other places and that's true of a lot of these projects often they are complementary to kubernetes but i can adopt them standalone so the challenge is it is that 
paradox of choice when you go out there there are some people that want to go to the grocery store and buy all of their various pieces and put it all together well other people will come to us and say hey i just want my developers to get working i don't want them to spend all their time fighting over what they had and at red hat we say great we're going to have an opinionated platform and if you come down later and say oh there's a piece of it i don't want to use or i have some other tool i can have its batters are included they're optional and they're swappable so that's what's nice in this developer environment so you know we also work with you know companies like hashicorp a lot of our customers use vault for their secrets uh you know git lab is is another pure var in this industry that have a lot of developer tools they're not a kubernetes provider they usually sit higher up in the stack than we do so there's a lot of players there's a lot of room for activity and innovation yes we've seen a cambrian explosion of projects there and there has been some consolidation that's part of the job of the cncf is in the observability world they took uh i can't remember there were two projects that were kind of similar and they got them in a room and got them to agree to put them into a single project and put those together so we do see some consolidation over time but there's still room for a lot of growth standards are good but so is optionality i think is your point there so the event is october 11th to the 15th it's actually an in-person event you're planning on being there so i i am it's it's hybrid i know a lot of people will be online the other thing i'd point out there are a lot of day zero events so these are really awesome there's a git ops day there's security day there's so many different pieces i'll actually be for the day zero i'll be emceeing the openshift commons where we get a bunch of end users to just tell their stories projects they're working on deployments that they have have some good partner ecosystem discussion there it's usually a lot of fun we hope a bunch of people come to those in purses and then you know the day itself uh the the three days of the show itself are always hopping and lots of learning to be done uh whether you're there in person or online fantastic so i'm glad you pointed out it is a hybrid event that's kind of the nature of these things these days and i think we'll be for for some time i think potentially indefinitely i think people are realizing hey you know what as much of a pain in the neck as virtual events are we can reach a lot more people and it's a good on-demand experience so have at it stu thanks so much for for coming into the cube studios we miss you glad to see you're thriving and uh good luck at the show and uh we'll see you around the block thanks dave i know i'll be seeing john on the cube there too absolutely okay thanks for watching everybody this is dave vellante we'll see you next time you
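As a rough illustration of the GitOps pattern Stu describes in this segment (the Git repository as the source of truth, with a controller continuously converging the live environment toward it), here is a toy reconcile loop in Python. It is purely conceptual: Argo CD and OpenShift GitOps do far more, and the dictionaries below stand in for real rendered manifests.

```python
import time
from typing import Dict

# Toy model: "desired" state as it would be read from the Git repo,
# and "live" state as reported by the cluster. Real tools diff rendered
# manifests; plain dicts keep the idea visible.
def fetch_desired_state_from_git() -> Dict[str, dict]:
    return {
        "web": {"image": "web:1.4", "replicas": 3},
        "api": {"image": "api:2.1", "replicas": 2},
    }

def fetch_live_state_from_cluster() -> Dict[str, dict]:
    return {
        "web": {"image": "web:1.3", "replicas": 3},  # drifted image tag
        "api": {"image": "api:2.1", "replicas": 2},
    }

def apply(name: str, spec: dict) -> None:
    # In a real controller this would patch the cluster object.
    print(f"applying {name}: {spec}")

def reconcile_once() -> None:
    desired = fetch_desired_state_from_git()
    live = fetch_live_state_from_cluster()
    for name, spec in desired.items():
        if live.get(name) != spec:
            apply(name, spec)  # converge drifted or missing workloads

if __name__ == "__main__":
    for _ in range(3):  # a real controller loops indefinitely
        reconcile_once()
        time.sleep(5)
```

This is why patching at the Git level, as described above, is enough to enforce the environment: anything that drifts from what is in the repository gets pulled back on the next reconcile pass.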

Published Date : Sep 14 2021


Ajay Singh, Pure Storage | CUBEconversation


 

(upbeat music) >> The Cloud essentially turned the data center into an API and ushered in the era of programmable infrastructure. No longer do we think about deploying infrastructure in rigid silos with a hardened outer shell; rather, infrastructure has to facilitate digital business strategies. And what this means is putting data at the core of your organization, irrespective of its physical location. It also means infrastructure generally, and storage specifically, must be accessed as sets of services that can be discovered, deployed, managed, secured, and governed in a DevOps model, or OpsDev, if you prefer. Now, this has specific implications as to how vendor product strategies will evolve and how they'll meet modern data requirements. Welcome to this Cube conversation, everybody. This is Dave Vellante. And with me to discuss these sea changes is Ajay Singh, the Chief Product Officer of Pure Storage. Ajay, welcome. >> Thank you, David, glad to be on. >> Yeah, great to have you, so let's talk about your role at Pure. I think you're the first CPO, what's the vision there? >> That's right, I just joined Pure about eight months ago from VMware as the chief product officer, and you're right, I'm the first chief product officer at Pure. And at VMware I ran the Cloud management business unit, which was a lot about automation and infrastructure as code. And it's just great to join Pure, which has a phenomenal all-flash product set. I kind of call it the iPhone of flash storage, super easy to use. And how do we take that same ease of use, which is at the heart of a Cloud operating principle, and how do we actually take it up to really deliver a modern data experience, which includes infrastructure and storage as code, but then even more beyond that, how do you do modern operations and then modern data services? So super excited to be at Pure. And the vision, if you may, at the end of the day, is to provide, leveraging this modern data experience, a connected and effortless data experience, which allows customers to ultimately focus on what matters for them, their business, by really leveraging and managing and winning with their data. Because ultimately data is the new oil, if you may, and if you can mine it, get insights from it, you can really drive a competitive edge in the digital transformation, and that's what we intend to help our customers do. >> So you joined earlier this year, kind of, I guess, middle of the pandemic. Really I'm interested in kind of your first 100 days, what that was like, what key milestones you set, and now you're into your second 100-plus days. How's that all going? What can you share with us? And that's interesting timing, because you came in kind of post the effects of the pandemic, so you had experience from VMware and then you had to apply that to the product organization. So tell us about that sort of first 100 days and the sort of mission now. >> Absolutely, so as we talked about, the vision around the modern data experience kind of has three components to it: modernizing the infrastructure, and really it's kudos to the team for the work we've been doing, a ton of work in modernizing the infrastructure, I'll briefly talk to that; then modernizing the data and, much more than that, modernizing the operations, I'll talk to that as well; and then of course, down the pike, modernizing data services.
So if you think about it from modernizing the infrastructure, if you think about Pure for a minute, Pure is the first company that took flash to mainstream, essentially bringing what we call consumer simplicity to enterprise storage. The manual for the products was the front and back of a business card, that's it, you plug it in, boom, it's up and running, and then you get proactive AI-driven support, right? So that was kind of the heart of Pure. Now you think about Pure again, what's unique about Pure is that a lot of our competition has dealt with flash at the SSD level, hey, because guess what? All this software was built for hard drives. And so if I can treat NAND as a solid state drive, an SSD, then my software would easily work on it. But with Pure, because we started with flash, we really went straight to the NAND level, as opposed to kind of the SSD layer, and what that does is it gives you greater efficiency, greater reliability and greater performance compared to an SSD, because you can optimize at the chip level as opposed to at the SSD module level. That's one big advantage that Pure has going for itself. And if you look at the physics in the industry for a minute, there's recent data put out by Wikibon early this year, effectively showing that by the year 2026, flash on a dollar per terabyte basis, just the economics of the semiconductor versus the hard disk, is going to be cheaper than hard disk. So this big inflection point is slowly but surely coming that's going to disrupt the hard disk industry; already the high end has been taken over by flash, but hybrid is next and then even the long tail is coming up over there. And so to that extent, our lead, if you may, is the introduction of QLC NAND, QLC NAND that our competition is barely introducing, we've been at it for a while. Just recently this year, in my first 100 days, we introduced the FlashArray//C C40 and C60 drives, which really start to open up our ability to go after the hybrid storage market in a big way. It opens up a big new market for us. So great work there by the team. Also at the heart of it, if you think about it on the NAND side, we have our FlashArray, which is a scale-up, latency-centric architecture, and FlashBlade, which is a scale-out throughput architecture, all operating with NAND. And what that does is it allows us to cover both structured data and unstructured data, tier one apps and tier two apps. So pretty broad data coverage in that journey to the all-flash data center; slowly but surely we're heading over there to the all-flash data center based on the demand economics that we just talked about, and we've done a bunch of releases. And then the team has done a bunch of things around introducing NVMe over fabrics, the kind of thing that you expect them to do. A lot of recognition in the industry for the team from the likes of TrustRadius and Gartner, with FlashArray named a Gartner Peer Insights customers' choice in primary storage, and in the MQ we were the leader. So a lot of kudos and recognition coming to the team as a result. FlashBlade just hit a billion dollars in cumulative revenue, kind of the leader by far in the unstructured data, fast file and object marketplace. And then of course, all the work we're doing around what we call ESG, environmental, social and governance, around reducing carbon footprint, reducing waste, our whole notion of evergreen and non-disruptive upgrades.
We also kind of did a lot of work in that, where we actually announced that over 2,700 customers have actually done non-disruptive upgrades of the technology. >> Yeah, a lot to unpack there. And a lot of this, sometimes people say, oh, it's the plumbing, but the plumbing is actually very important too. 'Cause we're in a major inflection point, when we went from spinning disk to NAND. And it's all about volumes, you're seeing this all over the industry now, you see your old boss, Pat Gelsinger, dealing with this at Intel. And it's all about consumer volumes in my view anyway, because thanks to Steve Jobs, NAND volumes are enormous, and what, two hard disk drive makers left on the planet? I don't know, maybe there's two and a half, but so those volumes drive costs down. And so you're on that curve, and you can debate as to when it's going to happen, but it's not an if, it's a when. Let me shift gears a little bit. Because the Cloud, as I was saying, has ushered in this API economy, this as a service model, and a lot of infrastructure companies have responded. How are you thinking at Pure about the as a service model for your customers? What's the strategy? How is it evolving and how does it differentiate from the competition? >> Absolutely, a great question. It kind of segues into the second part of the modern data experience, which is how do you modernize the operations? And that's where automation and as a service come in, because ultimately the Cloud has validated this model, right? People are looking for outcomes. They care less about how you get there. They just want the outcome. And the as a service model actually delivers these outcomes. And this whole notion of infrastructure as code is kind of the start of it. Imagine if my infrastructure for a developer is just a line of code in a Git repository, in a program that goes through a CI/CD process and automatically kind of is configured and set up, and fits in with Terraform, Ansible, all the different automation frameworks. And so what we've done is we've gone down the path of really building out what I think is modern operations, with this ability to have storage as code. In addition, modern operations is not just storage as code; we've also recently introduced some comprehensive ransomware protection, that's part of modern operations. There's all the threats you hear in the news around ransomware. We introduced what we call SafeMode snapshots that allow you to recover in literally seconds when you have a ransomware attack. We also have, in modern operations, Pure1, which is maybe the leader in AI-driven support to prevent downtime. We actually call you 80% of the time and fix the problems without you knowing about it. That's what modern operations is all about. And then also modern operations says, okay, you've got flash on your on-prem side, but maybe you're even using flash in the public Cloud, so how can I have a seamless multi-Cloud experience? Our Cloud Block Store, which we've introduced on Amazon AWS and Azure, allows one to do that. And then finally, for modern applications, if you think about it, this whole notion of infrastructure as code, as a service, software-driven storage, the Kubernetes infrastructure enables one to really deliver a great automation framework that reduces the labor required to manage the storage infrastructure and deliver it as code.
And we have, kudos to Charlie and the Pure Storage team before my time for the acquisition of Portworx, Portworx today truly delivers storage as code, orchestrated entirely through Kubernetes and in a multi-Cloud, hybrid situation. So it can run on EKS, GKE, OpenShift, Rancher, Tanzu, and it was recently announced as the leader by GigaOm for enterprise Kubernetes storage. We were really proud about that asset. And then finally, the last piece is Pure as a service. That's also all outcome oriented, SLAs. What matters is you sign up for SLAs, and then you get those SLAs, very different from our competition, right? Our competition tends to be a lot more around financial engineering, hey, you can buy it OPEX versus CapEx, but you get the same thing with a lot of professional services. We've really got, I'd say, a couple of years of lead on actually delivering and managing, with SRE engineers, to the SLAs. So a lot of great work there. We recently also introduced Cisco FlashStack, again, FlashStack as a service, again, as a service, a validation of that. And then finally, we also recently did an announcement with Equinix, with their bare metal as a service, where we are a key part of their bare metal as a service offering, again, pushing kind of the as a service strategy. So yes, it's big for us, that's where the puck is skating, with half the enterprises, even on-prem, wanting to consume things in the Cloud operating model. And so that's where we're putting a lot. >> I see, so your contention is, it's not just this CapEx to OPEX; that's kind of the, during the economic downturn of 2007, 2008, the economic crisis, that was the big thing for CFOs. So that's kind of yesterday's news. What you're saying is you're creating a Cloud-like operating model, as I was saying upfront, irrespective of physical location. And I see that as your challenge, the industry's challenge, being: if I'm going to effect the digital transformation, I don't want to deal with the Cloud primitives. I want you to hide the underlying complexity of that Cloud. I want to deal with higher level problems. But so that brings me to digital transformation, which is kind of the now initiative, or I even sometimes call it the mandate. There's not a one size fits all for digital transformation, but I'm interested in your thoughts on the must-take steps, universal steps that everybody needs to think about in a digital transformation journey. >> Yeah, so ultimately the digital transformation is all about how companies gain a competitive edge in this new digital world, where the companies and the competition are changing the game, right? So you want to make sure that you can rapidly try new things, fail fast, innovate and invest, but speed is of the essence, and agility and the Cloud operating model enable that agility. And so what we're also doing is not only are we driving agility in a multicloud kind of data infrastructure, data operations fashion, but we're also taking it a step further. We're also on the journey to deliver modern data services. Imagine on a Pure on-prem infrastructure, along with your different public Clouds that you're working on with the Kubernetes infrastructures, you could, with a few clicks, run Kafka as a service, TensorFlow as a service, Mongo as a service. So I, as a technology team, can truly become a service provider, and not just an on-prem service provider, but a multi-Cloud service provider.
Such that these services can be used to analyze the data that you have, not only your data, but your partner data, third-party public data, and how you can marry those different data sets and analyze them to deliver new insights that ultimately give you a competitive edge in the digital transformation. So you can see data plays a big role there. The data is what generates those insights. Your ability to match that data with partner data, public data, your data, with the analysis on it and services ready to go, is how you get the insights. You can really start to separate yourself from your competition and get on the leaderboard a decade from now when this digital transformation settles down. >> All right, so bring us home, Ajay. Summarize: what does a modern data strategy look like and how does it fit into a digital business or a digital organization? >> So look, at the end of the day, data and analysis both play a big role in the digital transformation. And it really comes down to how do I leverage this data, my data, partner data, public data, to really get that edge. And that links back to our vision. How do we provide that connected and effortless modern data experience that allows our customers to focus on their business? How do they get the edge in the digital transformation? By easily leveraging, managing and winning with their data. And that's the heart of where Pure is headed. >> Ajay Singh, thanks so much for coming inside theCUBE and sharing your vision. >> Thank you, Dave, it was a real pleasure. >> And thank you for watching this Cube conversation. This is Dave Vellante, and we'll see you next time. (upbeat music)
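The "storage as code" idea Ajay describes, where capacity is provisioned from a pipeline rather than a ticket, might look roughly like the Python sketch below. The endpoint, token variable, and payload are hypothetical placeholders, not Pure's actual API; the point is only that provisioning becomes a call a CI/CD system can make and version alongside the application.

```python
# Illustrative only: a made-up REST endpoint standing in for whatever storage
# service or Kubernetes/Portworx layer actually fulfills the request.
import os
import requests

STORAGE_API = os.environ.get("STORAGE_API", "https://storage.example.internal/api/v1")
TOKEN = os.environ["STORAGE_API_TOKEN"]  # injected by the pipeline, never hard-coded

def provision_volume(name: str, size_gib: int, tier: str = "standard") -> dict:
    """Ask the (hypothetical) storage service for a volume and return its details."""
    resp = requests.post(
        f"{STORAGE_API}/volumes",
        json={"name": name, "size_gib": size_gib, "tier": tier},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    vol = provision_volume("orders-db-data", size_gib=200)
    print("provisioned:", vol)
```

Because the request lives in version control and runs in a pipeline, the same outcome-oriented, SLA-style consumption model discussed in the interview applies whether the volume lands on-prem or in a public cloud.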

Published Date : Aug 18 2021


Amanda Silver, Microsoft | DockerCon 2021


 

>>Welcome back to theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host of theCUBE. We're here with Amanda Silver, corporate vice president, product, developer division at Microsoft. Amanda, great to see you, you were on last year at DockerCon. Great to see you again a full year later, we're remote. Thanks for coming on. I know you're super busy with Build happening this week as well. Thanks for making the time to come on theCUBE for DockerCon. >>Thank you so much for having me. Yeah, I'm joining you like many developers around the globe from my personal home office. >>Developers really didn't skip a beat during the pandemic, and again, it was not a good situation, but developers, as you talked about last year, were on the front lines, first responders to creating value, quite frankly. Looking back, you were pretty accurate in your prediction: developers did have an impact this year. They did create the kind of change that really changed the game for people's lives, whether it was developing solutions from a medical standpoint or even keeping systems running, from call centers to making sure people got their goods or services and checks, and kept sanity together. So. >>Yeah, absolutely. I mean, I think developers, you know, get the MVP award for this year, because, you know, at the end of the day they are the digital first responders to the first responders, and the pivot that we've had to make over the past year in terms of supporting remote telehealth, supporting, you know, online retail, curbside pickup, all of these things were done through developers being the ones pushing the way forward. Remote learning, you know, my kids are learning at home right behind me right now, so you might hear them during the interview. That's happening because developers made that happen. >>I can hear it now: "Mom, please stop hogging the bandwidth." They've got a gigabit. Stop it, don't be streaming. My kids are all gaming anyway. Hey, great to have you on, and you had a great keynote. Exciting to see you guys continue the collaboration with Docker, with GitHub and Microsoft, a great combination, it's a one-two-three power punch of value. You guys are really kind of killing it. We heard from Scott, and Dana has been on theCUBE. What's your thoughts on the partnership between the developer division team at Microsoft and Docker? What's it all about this year? What's the next level? >>Well, I mean, I think what's really awesome about this partnership is that we all are basically sharing a common mission. What we want to do is make sure that we're empowering developers, that we're focused on their productivity, and that we're delivering value to them so they can do their job better, so that they can help others. So that's really kind of what drives us day in and day out. So what we focus on is developer productivity, and I think that's a lot of what Dana was talking about in her session. In the developer division specifically, we really try to make sure that we're improving the state of the art for modern developers. So we want to make sure that every keystroke that they take, every mouse move that they make, it sounds like a song, but every one of those matters, because we want to make sure that every developer is writing the code that only they can write. And in terms of the partnership and how that's going, you know, my team and the Docker team have been collaborating a ton on things like Docker Desktop and the Docker CLI tool integrations.
And one of the things that we do is we think about pain points in various workflows. We want to make sure that we're shaving off the edges of all of the user experiences the developers have to go through to piece all of these applications together. So one of the big pain points that we have heard from developers is that signing into the Azure cloud, and especially our sovereign clouds, was challenging. So we contributed back to Docker to actually make it easier to sign into these clouds. And so Docker developers can now use Docker Desktop and the Docker CLI to actually change the Docker context so that it's Azure. So that makes it a lot easier to connect the two. >>Oh, sorry, go ahead. No, I was just going to say, I love the reference to The Police song, "Every Breath You Take," every mouse move. Great, great line there. Uh, but I want to ask you, while you're on this modern cloud, um, discussion: I mean, we have a lot of developers here at DockerCon. As you know, you guys know developers, it's in your ecosystem, a core competency for Microsoft. KubeCon is a very operator-focused event; this is a developer conference. You guys have Build. What is the state of the art for a modern cloud developer? Could you just share your thoughts, because this comes up a lot. You know, what's the state of the art? What's next-gen, new guard? What's legacy? What is the state of the art for a modern cloud developer? >>Fantastic question, and extraordinarily relevant to this particular conference. You know, what I think about oftentimes is really what do the inner loop and the outer loop look like in terms of cycle times? Because at the end of the day, what matters is the time that it takes for you to make that code change, to be able to see it in your test environment, and to be able to deploy it to production and have the confidence that it's delivering the feature set that you need it to, and it's, you know, secure, it's reliable, it's performant. That's what a developer cares about at the end of the day. Um, at the same time, we also need to make sure that we're growing our team to meet our demand, which means we're constantly onboarding new developers. And so I take inspiration from some of the tech elite who have been able to invest significant amounts in tuning their engineering systems. They've been able to make it so that a new developer can join a team in just a couple of minutes or less, that they can actually make a code change, see that be reflected in their application in just a few seconds, and deploy with confidence within hours. And so our goal is to actually be able to take that state of the art metric and democratize it, actually bring it to as many of our customers as we possibly can. >>You mentioned supply chain earlier, and securing that. What are you guys doing with Docker to make that partnership better with registries? Is there any update there in terms of the container registry on Azure? >>Yeah, I mean, you know, we have definitely seen recent events, and it almost seems like never-ending attacks that, you know, increasingly are getting more and more focused on developer watering holes, is how we think about it, kind of developers being a primary target, um, for these malicious hackers. And so it's more important than ever that every developer, um, and Microsoft especially, uh, really take security extraordinarily seriously.
Our engineers are working around the clock to make sure that we are responding to every security incident that we hear about and partnering with our customers to make sure that we're supporting them as well. One of the things that we announced earlier this week at Microsoft Build is that we've actually taken GitHub Actions and we've now integrated that into the Azure Security Center. And so what this means is that, you know, we can now do things like scan for vulnerabilities, um, look at things like who is logging in where, things like that, and actually have that be tracked in the Azure Security Center, so that not just your developers get that notification, but also your IT operations. Um, in terms of the partnership with Docker, you know, this is actually an ongoing partnership to make sure that we can provide more guidance to developers, to make sure that they are following best practices like pulling from a private registry like Docker Hub or Azure Container Registry. So I expect that as time goes on we'll continue to do more in partnership in this space. >>And that's going to give a lot of confidence. Actually, productivity-wise it's going to be a big help for developers. Great stuff, that's always good, good progress. They're moving the needle. Last time we spoke, we talked about tools and setting Azure as the Docker context. Any tooling updates here at DockerCon this year? That's notable. >>Yeah, I mean, I think, you know, there's one major thing that we've been working on which has a big dependency on Docker, and that's GitHub Codespaces. Now, one of the biggest pain points that developers have is setting up a new dev box, which they often have to do when they are onboarding a new employee, or when they're starting a new project, or even if they're just kicking the tires on a new technology that they want to be able to evaluate. And sometimes creating a developer environment can actually take hours, um, and especially when you're trying to create a developer environment that matches somebody else's developer environment, that can take like half a day, and you can spend all of your time just debugging the differences in environment variables, for example. Um, containers actually make that much easier. So what you can do with this service is you can actually create a dev environment spun up in the cloud, and you can access it in seconds, and you get from there a working coding environment and a runtime environment, and this is repeatable via containers. So it means that there's no inadvertent differences introduced by each dev. And you might be interested to know that underneath, this is actually using Dockerfiles and Docker Compose to orchestrate the dev bits and the runtime bits for a whole bunch of different stacks. And so this is something that we're actually working on in collaboration with the Docker team, to have a common YAML format. And in fact this week we actually introduced a couple of app templates so that everybody can see this all in action. So if you check out aka.ms forward slash app template, you can see this in action yourself. >>You guys have always had such a strong developer community, and one thing I love about cloud is it brings more agility, as we always talk about.
>>You guys have always had such a strong developer community, and one thing I love about cloud is that it brings more agility, as we always talk about. But when you start to see the direction the enterprise is growing in now, it's almost like the developer communities are merging — it's no longer all the Linux folks here and the .NET folks there; you've got Windows, you've got cloud, and it's almost a solidification of everyone coming together. And Visual Studio, for instance — last year I think you were talking about it being integrated with Docker Compose, et cetera. How do you see this melting pot emerging? Because at the end of the day you pick the language you love, and you've got DevOps, which is infrastructure as code — it doesn't matter. So give us your take on where we are with that whole progression. >>Well, I definitely think that developer environments, and our approach to them, don't need to be as dogmatic as they've been in the past. I really think you can pick the right tool, language, and developer stack for your team and your experience, and you can be productive — and that's really our goal at Microsoft: to make sure we have tools for every developer and every team, so that they can build any app they want to create, even if that means they're ultimately going to deploy it not to our cloud but to AWS or another competitive cloud. There are a lot of things we've been doing to make that much easier. We have integrated container tools in Visual Studio and Visual Studio Code, and better CLI integrations, like the Docker context we talked about a little earlier. We continue to make it easier to build applications that target containers, and once you create those containers it's much easier to take them to another environment. One example of this kind of work is WSL, the Windows Subsystem for Linux. It makes it a lot easier for developers who prefer a Windows operating system as their environment, and maybe some tools like Visual Studio that run on Windows, to still target Linux as their production environment without any impedance mismatch — they can be as productive as they would be if they had a Linux box as their OS. >>I noticed the session lineup — I've got to call this out and get your reaction. Interesting selection of Microsoft talks: container-based development, where Visual Studio Code is going to show some container action with Node, and then machine learning with Azure containers in VS Code. Interesting how you've got containers with VS Code and now machine learning. What does that tell the world about where Microsoft is at? Because in a way you've got cutting-edge container management on one side with the Docker integration, and now machine learning, which everyone's talking about — shifting left, more automation. Why are these sessions so important, why should people attend, and what's the bottom line? >>Well, like I said, containers basically empower developer productivity. That's what creates the repeatable environments, and that's what allows us to be productive as soon as we possibly can with any tech stack we want to target. So that's almost the ecosystem play.
It's how every developer can contribute to the success of others, and we can amortize the kinds of work we do to set up an environment. So that's what I would say about the container-based development we're doing with both Visual Studio and Visual Studio Code. In terms of the machine learning development: the number of machine learning developers in the world is relatively small, but it's growing, and it's obviously a very important set of developers, because training an ML model requires a significant amount of compute resources, and that's a perfect opportunity to bring in the resources of a public cloud. What's really interesting about that particular developer stack is that it commonly runs on things like Python, and for those of you who have developed in Python, you know just how difficult it is to set up a Python environment with the right interpreter, the right runtime, and the right libraries so that you can get going quickly and be productive. It's actually one of the hardest, most challenging developer stacks to set up. So this allows you to become a machine learning developer without having to spend all of your time just setting up the Python runtime environment.
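The repeatability described here is usually achieved by baking the interpreter and libraries into an image once, so every developer and every training run starts from the same environment. A minimal, illustrative Dockerfile for a Python ML project might look like the following — the file names and entry point are placeholders, not anything specific to Azure's tooling.

    # Dockerfile — one shared definition of the Python environment
    FROM python:3.10-slim
    WORKDIR /app
    COPY requirements.txt .
    # requirements.txt pins exact versions of numpy, pandas, scikit-learn, etc.
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "train.py"]

Anyone on the team then runs the same docker build and docker run, instead of debugging interpreter and library drift on each laptop.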
>>Yeah, that's a nice little call-out on Python — it's a double-edged sword. It's easier to sling code around on one hand; then once you start getting it working, it gets complicated. Great call-out, and a good project. Let me get your thoughts on this other tool you're talking about, Project Tye. This is interesting, because it's a trend we're seeing in a lot of conversations here on theCUBE: too many control planes, too many services. I no longer have that monolithic application, I've got micro-applications with microservices — what the hell is going on with my services? >>Yeah, I think containers brought an incredible amount of productivity in terms of having repeatable environments, both for dev environments, which we've talked about a lot in this interview already, but also obviously for production and test environments — super important. And with that often comes the microservices architecture we're also moving to. The way I view it, the microservices architecture is accompanied by businesses being more focused on the value they can deliver to customers: they're trying to create separation of concerns across the different services they offer, so they can version and improve each of those services independently. But what happens when you have many microservices working together in a SaaS or in some kind of aggregate service or application environment is that it starts to get unwieldy. It's really hard to make it so one microservice can address another microservice and they can pass information back and forth. What used to be easy when you were building a client-server application — because within the server tier all of your code was contained in the same runtime environment — is no longer the case when every microservice is running inside its own container. So the question is, how can we improve programmability by making it easier for one microservice in an application environment to access another service, and all of that context? You want to be able to access the services, the API endpoints, the containers, the ingresses, and make everything work together as though it felt just as easy as server application development. What this also means is that you often need to get all of these different containers running at the same time, and that can be a challenge in the developer and test loop as well. So what Project Tye does is improve that programmability, and it lets you write a single command like tye run to instantiate all of those containers and get them up and running — basically deploy and run your application in that environment — and ultimately make the dev test loop much faster. >>And a productivity gain. Right — they're making it simple to stand up. Great, great stuff.
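For context, Project Tye was an experimental .NET Foundation tool at the time; a typical setup was a small YAML file describing the services plus a single command to run them together. The sketch below is from memory of that experimental project, so treat the field names, paths, and service names as illustrative rather than authoritative.

    # tye.yaml — describe the microservices and their dependencies
    name: storefront
    services:
    - name: frontend
      project: src/frontend/frontend.csproj    # a .NET project built and run by Tye
    - name: orders-api
      project: src/orders/orders.csproj
    - name: redis
      image: redis                             # a dependency pulled as a container
      bindings:
      - port: 6379

    # then, from the repo root:
    #   tye run      -> builds, starts, and wires up all services with service discovery
    #   tye deploy   -> pushes the same application definition to a Kubernetes cluster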
>>Let me ask you a question as we wrap down here, for the folks at DockerCon: are there any particular developments you think are important for the developers in this space? It's very dynamic, a lot of change happening in a good way, but sometimes it's hard to keep track of all the cool stuff. Could you take a minute to share what you think are the most important developments in this space that might be interesting to DockerCon attendees? >>I think the most important things are, first, to recognize that developer environments are moving to containerized environments themselves, so that they can be repeated and shared, and the work of configuring them can be amortized across many developers. Number two: it doesn't matter as much what operating system you're running on your desktop — what matters is ultimately the production environment you're targeting, and I think we're now in a world where all of those things can be mixed and matched. And then the next thing is how we improve microservices programming and development, so that it's easier to target multiple microservices working in aggregate to create a single service experience or a single application, and how we improve the programmability for that. >>You guys have been great supporters of Docker and the community and open source, and of software developers as they transform and become, quite frankly, the superheroes of the transformation, which is refactoring businesses. So this has been a big thing. I'd love to get your thoughts on how this is all coming together inside Microsoft: you've got your division, the developer division, you've got GitHub, you've got Azure, and then historically — you put this up last year — an army of an ecosystem, people who have been contributing and coding with Microsoft and its partners for many, many decades. That's the heart of Microsoft now. How's it all working? What's the news? You've got LinkedIn too — there's no developer model there yet, but probably soon. >>Yeah, that's a pretty broad question, but in some ways I think it's interesting to put it in the context of Microsoft's history. When I think back to the beginning of my career, it was kind of a one-stack shop — it was all about .NET, and of course we want .NET to be the best developer environment it can possibly be. We still want that; we still want .NET to be the most productive developer environment we could possibly build. But at the same time, we have to recognize that not all developers are .NET developers, and we want to make sure that Azure is the most productive cloud for developers. To do that, we have to build fantastic tools and platforms to host Java applications, JavaScript applications, Node.js applications, Python applications — all of those things. All of the developers in the world, we want to make sure they can be productive on our tools and our platforms. And I think that's really the key to what you're speaking of, because when I think about the partnership I have with the GitHub team, or the Azure team, or the Azure Machine Learning team, or the LinkedIn team, a lot of it comes down to helping empower developers, improving their productivity, helping them find new developers to collaborate with, making sure they can do that securely and confidently, and that they can respond to their customers as quickly as they possibly can. And when we think about partnering inside Microsoft with folks like LinkedIn or Office, as an example, a lot of that partnership comes down to improving their colleagues' efficiency — we build the developer tools that Office and LinkedIn are built on top of, and every once in a while we'll make an improvement that gains 5% here, 3% there, and it turns into an incredible amount of impact in terms of operations costs for running those services. >>It's interesting — you mentioned earlier that we're living in a time where you don't have to be dogmatic anymore; you can pick what you like and go with it. You also just mentioned this idea of distributed applications, distributed computing. Distributed applications and microservices go really well together, especially with Docker. Can you share your thoughts on the framework you released called Dapr? >>Yeah, we recently released Dapr — it's spelled D-A-P-R, you can look it up on GitHub — and it's a programming model for common microservices patterns that makes it really easy and automatic to create those kinds of microservices. You can choose to work with your favorite state stores or databases or pub/sub components and get things like CloudEvents for free. You can choose either HTTP or gRPC, so you can get mesh capabilities like service discovery and retries, and you can bring your own secret store and easily call it from any environment variable. It's also, like I was talking about earlier, multilingual — you don't need to embrace .NET as your programming language to benefit from Dapr. It supports many programming languages, and Dapr itself is written in Go. So all developers can benefit from something like Dapr to make it easier to create microservices applications.
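A small sketch of what that sidecar model looks like in practice with Dapr. The app IDs, port numbers, and file names here are invented for illustration; the CLI flags and the v1.0 HTTP API paths shown follow how Dapr is commonly documented, though exact behavior depends on version and configured components.

    # run a service with a Dapr sidecar next to it
    dapr run --app-id orders --app-port 5000 -- python app.py

    # another service can now invoke it through its own sidecar,
    # without knowing where "orders" actually runs
    curl http://localhost:3500/v1.0/invoke/orders/method/status

    # state management goes through the same sidecar API,
    # backed by whichever state store component you configured
    curl -X POST http://localhost:3500/v1.0/state/statestore \
         -H "Content-Type: application/json" \
         -d '[{"key": "order-42", "value": {"status": "shipped"}}]'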
>>Always great to have you on — great update. Take a minute to give an update on what's going on with your division. I know you had the Build conference this week, and Visual Studio has a new preview out. We just talked about a lot of it, but what are the things you want to plug — what you're working on, your goals, your objectives, hiring? Give us the update. >>Yeah, sure. We built integrated container tools in Visual Studio, the Docker extension in Visual Studio Code, and CLI extensions. And even in this most recent release of Visual Studio, Visual Studio 16.10, we added some features to make it easier to use Docker Compose. One example: oftentimes you need to use multiple Docker Compose files together so you can configure different container environments for a single application, but it's hard sometimes to create the right YAML files so you can invoke the containers and microservices you need. This feature gives you a menu of the different Docker Compose files, so you can select the runtime and test environment you need for the portion of the application you're working on. At the end of the day, this is always about developer productivity — like I said, every keystroke matters — and we want to make sure that you as a developer can focus on the code that only you can write. >>Amanda Silver, corporate vice president of product, developer division at Microsoft. Always great to see you and chat with you, even remotely — we'll be back in real life with real events soon as we come out of the pandemic. Thanks for sharing your insight, and congratulations on your success this year and on your announcements here at DockerCon. >>Thank you so much for having me. >>Okay, that's theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host. Thanks for watching.

Published Date : May 28 2021



Intermission 2 | DockerCon 2021


 

>>Welcome back, everyone — we're back to intermission. I'm Hama, in case you forgot, and here with me are Brett and Peter. What a great morning and afternoon we've had. We're now in the home stretch, and I really want to give a shout out to all of you who are sticking with us, especially if you're in a different time zone than Pacific. I jumped into the community rooms — the Spanish one, the Brazilian one, the French one — and everybody is just going strong. So much gratitude for that; thank you for being so involved and really participating. The chat windows in the community rooms are just going nuts, so it's really good to see. And as usual, Peter and Brett had some great, very interactive panels, which were very exciting to watch. But since they were on the panels, I decided to go see some other things, and I attended The Last Mile of Containerization — that was a very good session with a lot of good interactivity. And there was also the talk about container security in the cloud native world — I think that was your panel, Peter; that was very exciting. And I want to share with everybody the numbers we've been seeing for DockerCon Live. So as of — sorry, I said we need a drumroll. We do need a drum roll. Can you do a drum roll for me? >>No, no, no — just a cymbal. >>Okay, good, go. We're at over 22,000 attendees today. That's amazing — and I love the sound effect. The community rooms continue to be really engaged; we're still seeing hundreds of people in those rooms, so again, shout out to everyone who is participating. I felt again like a kid in a candy store — I didn't know which sessions to attend, they were all very interesting. And we're getting some good feedback on Twitter, so I want to read out some more of the tweets we got. One in particular — I don't know whether to feel happy or sad for this person, but his initials are P.W. and he said he was up at two a.m. to watch the keynotes. I'll let you decide whether that's a good thing or not, but we're happy to have you — PW is awesome. There was also someone who said that these features are so needed — the things Docker announced in this morning's keynotes — and that Docker has reacted to our pains, and I think they mean has addressed their pain. That was really gratifying to read. Some other countries I didn't shout out before — this just tells you the breadth and scope of our community: Indonesia, La Paz in Bolivia, Greece, Munich, Ukraine, Oxford in the UK, Australia, the Philippines, and more. And I'm going to do a special shout out to Montreal, because that's where I'm from — so yes, applause for that. I just want to thank all of you. And I want to encourage you — we talked about the power of community — remember we're doing a fundraiser for Covid relief, and all of that money is going to go to UNICEF. Docker is contributing 10,000 and we're doing a GoFundMe; the link is there on the screen. So please donate — even just $1. If each of you donated $1, we would raise over $22,000. So please find it within you to contribute, because the communities that are the most affected are India and Brazil, which are very active Docker communities.
So please give back — I'd really appreciate that. >>Highlighted by the Brazil room. >>You're going to the Brazil room to get them all to donate, exactly. >>I also want to encourage you, if you're interested in participating in our roadmap: our public roadmap is on GitHub, at github.com/docker/roadmap. That's something you can participate in, and you can vote up the features you want to see — we love getting the community involved in our roadmap. And I also want to note — >>Hama, real quick, I'm sorry. I talk about our roadmap all the time, but honestly, folks, our PMs are in there, our CEO is in there, and we do watch it. The roadmap is extremely, extremely important to us. So any features, complaints — join the conversation. That's a great way to interact with Docker and our products; we really highly value the roadmap. Okay, back to you, Hama — sorry. >>Oh, absolutely. And if you want to see us be even more responsive to what you need, participate in that roadmap discussion — that's really great. A couple of things coming up that I want to put the spotlight on: at 3:15 we have What's New with Docker Desktop from our own Docker team, and I highly recommend you attend that session. And of course there's the Women in Tech live panel — very exciting, moderated by yours truly, putting a spotlight on our women captains and our women developers. But I also hear we have a session with JFrog coming up, so Peter, why don't you talk about that a little bit? >>Yeah, we have a session coming up from our partners at JFrog around DevOps patterns and anti-patterns for continuous software updates. Another one I'm extremely excited about is an M1 talk from our very own Tonis from Docker — so if you have an M1 and you're interested in multi-architecture builds, check that out; it's going to be a great talk. Then we have Melissa McKay, also from JFrog, talking about Docker and the container ecosystem. And last but definitely not least — check them all out, they're going to be great — Brett is going to be doing what I think is the best panel, the one I'm going to go watch, and he even made up a new word for it. I'm all about the trending new words today. It's going to be awesome. >>I'm going to have the battle of the panels. >>Well, mine's before yours, so we're not competing. We have two excellent panels in a row to finish off the day, and serverless is basically how we can run containers without managing servers — it doesn't mean you don't have infrastructure, it just means let's not manage servers. And we need to wrap it up. >>Before we do that, I just want to tell everyone that we have a promotion going on: for those who sign up for a Pro or Team subscription, we're offering 20% off. There's the URL — you can check out what the promotion is, and it's for new and returning users using the DockerCon 21 promo code. All the information is on the website, so I really encourage you to check out that 20%-off promotion. Join us for our panels, and we're doing a wrap-up at 5 p.m. where we'll have our own CEO. Look forward to seeing you there. >>All right, thank you too.
All right everyone, we'll see you on the next go-around — coming up next, me and some other people. Awesome. >>Docker has a really varied community. There are a lot of people with completely different backgrounds, completely different experience levels, and completely different goals for how they want to use Docker, and I think that's really interesting. It's always easy to talk about the technology I've used for so many years — I really love Docker, and I can find so many ways that it's useful and great to use in your day-to-day workflows. >>I've used Docker for anything from tracking airplanes with my son, which was a kind of cool project, to more professional projects where we actually built one of the first database-as-a-service offerings using Docker, even before 1.0 was released. And we took it further — we started composing monitoring tools and really taking it to the next level, to the point where I was trying to make everything run in a container. >>I love to use Docker to make disposable projects, so I can download a project and spin it up using Docker Compose or something like that, in a way that every developer who works on the project doesn't even need to know the underlying technology — they just need to run docker compose up and the whole project is up and running. >>Even if you're not using Docker in production, there are a lot of other ways you can use Docker to make your life so much easier. As a developer, you can run your projects on your machine locally. As a tester, you can launch test containers and run the dependencies your project requires — you can run real versions, so that you're as close to production as possible. >>I was able to migrate most of the workloads from our on-prem to the cloud — running complete IDEs inside Docker, using it to replace build scripts, or using it to run not just web applications but to compile C++ code, or projects that really just require some sort of consistency across a team. >>Whether it's a web app or a database, I can control them all the same. That was really the power I saw in Docker: standardization and portability. >>Docker isn't the one that created containers, but it's the one that made them democratically accessible, so everyone can use them. And that effort has made the technology environment so much better for everyone that uses it, both developers and end users. >>This past year has been quite interesting, and I think we're all in the same boat here — no one is an exception — but what we all learned from it is that the community is very important, and to lean on other people for help and assistance. >>Yeah, it's been really challenging, of course, but I think the biggest and most obvious thing I've learned, from both a personal and a business perspective, is just to be ready to adapt to change, and don't be afraid of it either. It's worth noting that you should never take it for granted that the paradigms of the world, or of technology, aren't going to shift drastically and very, very quickly. >>I'm looking forward to what's coming down the pipe with Docker — what more are they going to throw our way to make our lives easier? >>It's very interesting to see the company grow and adapt the way it has.
I mean the company as well as the community — it's been very interesting to see how the return to a developer focus is now the main focus, and I find that very interesting because developers are the ones that really boosted Docker to where it is today. And if we keep encouraging this developer innovation, we'll just see more tools being developed on top of Docker in the future, and that's what I'm really excited to see with Docker and the technology going forward.

Published Date : May 28 2021



LIVE Panel: FutureOps: End-to-end GitOps


 

>>And hello, we're back. I've got my panel, and we are doing things in real time here, so sorry for the delay — a few minutes late. Let's talk about the reason we're here, and we'll go around the room and introduce everybody. I've got three special guests: Ivor, John, and Norm, and we're going to talk about GitOps. I called it FutureOps just because I want to think about what the next thing for it is — at the end we're going to talk about our ideas for what's next for GitOps, because we're all just starting to get into GitOps now, but of course a lot of us are always thinking about what's next, what's better, how we can make this thing better. So we're going to take your questions — that's the reason we're here, to take your questions and answer them, or at least do the best we can for the next hour. All right, let's go around the room and introduce yourselves. My name is Brett, I'm streaming from Virginia Beach, Virginia, United States, I talk about things on the internet, and I sell courses on Udemy about Docker and Kubernetes. Ivor, introduce yourself. >>How's it going, everyone? I'm a software engineer at Axel Springer, currently based in Berlin, and I happen to be Brett's teaching assistant. >>That's right — we're in our courses together almost every day. John? >>Hey everyone, my name is John Harris. I used to work at Docker, and I now work at VMware as a staff field engineer. >>And Norm? >>By the way, you are streaming from Brett, from Brett. >>I am streaming from Brett, from Brett. >>I'm Norm, I'm a distinguished engineer with Booz Allen, and I'm also a Docker Captain. It's good to see Ivor in person, and it's good to see you again, John — it's been a little while. >>It has — the pre-Covid times, right? You were up here in Seattle. >>Yeah, it feels like an eternity ago. >>John, your shirt looks red — it reminds me of the Austin shirt. We all have one of these old limited-edition DockerCon tees. >>That's a classic. >>Yeah, I scored that one last year. Sometimes with these old conference shirts you have to go into people's closets — I'm not saying I did that, but you have to go steal stuff, find ways to get the swag. >>Post-Covid, if you ever come to my place, I'm going to have to lock the closets. >>That's right, that's right. >>The second floor of the Docker HQ in San Francisco, I think it was, was where they kept all the T-shirts — boxes and boxes, floor to ceiling. So every time I went to HQ you'd just grab as many as you could fit in your luggage. I think I have about 10 of these. >>Bring an extra piece of luggage just for your shirt grab. All right, I'm going to start scanning questions, so you don't have to — though you all are welcome to help with that. And I'm going to start us off with the topic. Let's define the parameters: we can talk about anything DevOps here, and we could go down plenty of rabbit holes, but the goal is to talk about GitOps. GitOps, if you haven't heard of it, is essentially using versioning systems like Git — which we've all gotten used to as developers — to track your infrastructure changes, not just your code changes, and then automating that with a bunch of tooling so that the robots take over.
Essentially you have Git as a central source of truth, the Git log as a central source of history, and a bunch of magic little bits in the middle, and then supposedly everything is wonderful and it's all automatic. The reality is that it's often quite messy, quite tricky to get everything working, and the edges of this are not perfect. It's a relatively new thing — probably three, maybe four years old as an official term from Weaveworks. So we're going to get into it. Let's go around the room the same way we did before — not to put anyone on the spot — and say one thing you either like or hate about GitOps from using it. For me, I really love that I can point people to a repo that, hopefully, if they look at the log, is a simple record of what changed in that part of the environment. I remember years past where an executive or some mid-level manager wanted to see what the changes were, or someone outside my team wanted to see what we just changed, and it was: okay, they need access to this system and that dashboard and that spreadsheet and then this other thing — it was always so complicated. Now, in a world where we're using GitHub or Bitbucket or whatever, you can just say: hey, go look at that repo; if there were three commits today, probably three changes happened. I love that particular part about it. Of course it's always more complicated than that. Ivor, I know you've been getting into this stuff recently — any thoughts? >>Yeah, I think my favorite part about GitOps is reproducibility — the ability to just test something, get it up and running, and then tear it down, without worrying about how I configured it the first time. That's my favorite part about it. >>I'm changing your background as we do this. >>I was going to say, did you just do a GitOps push to change his background? >>Just a commit that set the green-screen flag to false — change the background. Yeah. I mean, last year was really my first year of actually using it on anything significant, like a real project, so I still feel like I'm very new to it. John, anything? >>Yeah, it's weird — GitOps is the thing that crystallizes, maybe better than anything else, the grizzled veteran's life cycle of emotions with a technology, because I think it's easy to get super excited about something new. When I first looked into GitOps — I think this was even before it was called GitOps — we were looking at how to use Git as a source of truth, and everything sounds great, right? Everyone knows Git, Git is the source of truth, there's a load of robust tooling, this just makes sense; if everything dies, we can just apply the Git state again — that would be great. And then you go through the trough of despair: oh no, none of this works, the application isn't actually stateless, this doesn't work, and what do we do with secrets, and how do we do this?
How do we get people access in the right place? Then you realize everything is terrible again, and then it equalizes. I think it sounds great on paper, and there are absolutely fantastic things about it, but just having that measured approach matters. I think you put it best in the beginning: you do A, and then there's magic, and then you get C — it's the magic which is... >>The magic is the mystery. >>Right. >>Magic can be good and bad in tech, so... >>Very much so. Yeah, so, in concurrence with John and Ivor, what I like about it is the potential to apply it to shifting security left, and getting closer to stable infrastructure-as-code for the whole environment. And that reconciliation loop reminds me of "what is old is new again," right? Quote-unquote old, in terms of Chef and Puppet, with the reconciliation loop applied in a cleaner interface, into the infrastructure we're already used to once you start really digging into Kubernetes. What I don't like — and this is in concurrence with the other panelists — is that it's relatively new, so it has a learning curve, and it's still a very active environment and community, which means things are changing constantly and there are new ways and new patterns as people explore how to use it. I think that trough of despair is typically figuring out, incrementally, what it actually does for you and what it's not going to solve for you, right, John? So you're in the trough of despair for a bit, and then you realize: okay, this is where it fits, potentially, in my architecture — and like anything, you have to make that trade-off and accept it. But I think it has a lot of promise for compliance and security and all that good stuff. >>Yeah, there's still a lot more potential than there is reality right now. I feel like we're in very early days, especially when you get into tooling that doesn't appreciate GitOps — you're using GitOps and then you use something else, and that tool has no awareness of the concept, so it doesn't flow well with everything you're trying to do in Git — things that aren't state-based, and all that. So this leads me to our first question, from Camden — and there are no dumb questions here, by the way: how is GitOps not just another name for CD? Anybody want to take that? >>I feel like we need Viktor Farcic here — he would have opinions. >>Yeah — one person replied that it's a very specific, opinionated version of CD. That's a great answer. >>It's an implementation of deployment, if you want to use it for that. >>All right — I realize now it's kind of hard, with a mix of a physical and a virtual panel, to figure out who's ready to jump in and answer a question, but I'll take it. I'll do my best inner Viktor and say: it's an implementation of CD, and it's a choice, right?
One can still just do docker build, docker push, and docker pull, and that's fine, or use other technologies to deploy containers and pods and change your Kubernetes infrastructure. GitOps is a different implementation, a different method of doing that same thing at the end of the day. >>I like it. And I think that goes back to your point about it still being early days. To me, what I like about GitOps in that respect is that it's nice to see Kubernetes become a platform where people are experimenting with different ways of doing things. That encourages lots of different patterns, and overall that's a good thing for the community, because not everything needs to settle into only one way of doing things — a lot of different approaches helps people fit the tooling, or Kubernetes itself, to their needs. >>I agree with that. Since we're getting a load of good questions, I want to add one thing real quick — and this comes from the Weaveworks people themselves, because I've had some of them on the show — about what distinguishes it. Older continuous deployment tools feel like the previous generation: continuous deployment could be almost anything. We would consider Jenkins CD if it had an association to a server and did a docker pull and a docker compose up, or if it did a kubectl apply from inside an SSH tunnel or something like that — that was considered CD. GitOps is much more rigid, I think: you have a specific repo that's all about your deployments, and, depending on the tool you're using, a commit to a specific repo or a specific branch of that repo is what kicks off a workflow. And then, secondly, there's an understanding of state. A lot of these tools now have reconciliation, where they look at the cluster, and if things are changing, they go back to Git and the robots take over and commit that: hey, this thing has changed — and maybe you, the human, didn't change it; something else might have. So I think that's how GitOps approaches it: it's more than just a couple of commands run in a script; there needs to be more than that for it to be a GitOps repo. Anyway, that's the takeaway I took from a previous conversation with some of those folks.
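As a concrete illustration of that "repo plus reconciliation" model, this is roughly what an Argo CD Application object looks like — a controller in the cluster keeps live state synced to a path in a Git repo. The repo URL, path, and namespaces below are placeholders; the general shape follows Argo CD's documented CRD, though details vary by version.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-service
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/deploy-configs.git
        targetRevision: main
        path: apps/my-service/production      # only this directory is watched
      destination:
        server: https://kubernetes.default.svc
        namespace: my-service
      syncPolicy:
        automated:
          prune: true      # delete resources that were removed from Git
          selfHeal: true   # revert manual drift back to what Git declares

With selfHeal enabled, the "robots take over" behavior described above is literal: a hand-made kubectl edit gets reverted to whatever the repo declares.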
>>I don't think that last piece should get lost — it's really important. For me, CI and CD are more philosophical ideas, a set of principles: getting an idea or a code change to environments, promoting it. Our existing CD tools, and a lot of the way people think about CD, are very pipeline-driven and imperative: something is triggered by an event, maybe a code push, and then these other things happen in sequence until they either fail or pass, and then we're done. GitOps sits very much on the reconciliation side; it changes to a pull-based model. It's very declarative — it's just looking at the state and automatically pulling changes when they happen, rather than that imperative, trigger-driven model. That's not to say there aren't CD tools that do pull-based, or that GitOps is doing anything wildly revolutionary here, but I think that's one of the main ideas being introduced into existing CD tools and pipelines: the pull-based, reconciliation model, which has a lot in common with Kubernetes and how its controllers work. I think that's the key idea. >>This is a pretty specific one — Tory asks: does anyone have opinions about GitOps in a monorepo? This is getting into religion a little bit: how many repos are too many repos? Any thoughts on that before I rant? >>Go for it. >>Yeah — here's how I'm using it right now in a monorepo. I'm using GitHub, so you have the workflow, and inside the workflow's YAML file I track changes to the workflow itself, as well as to a folder that's basically one service in the monorepo. If any of those things change, it triggers the pipeline to run. That's the simplest thing I could figure out to get set up, using GitHub's workflow paths feature, and it's worked for me.
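For anyone who hasn't used that feature, this is the rough shape of a GitHub Actions workflow scoped to one service's directory in a monorepo — the service path, registry, and workflow name are invented for the example.

    # .github/workflows/orders-service.yml
    name: orders-service
    on:
      push:
        branches: [main]
        paths:
          - 'services/orders/**'                      # only changes under this service trigger the run
          - '.github/workflows/orders-service.yml'    # ...or changes to the workflow itself
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - run: docker build -t registry.example.com/orders:${{ github.sha }} services/orders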
>>Yeah, a lot of this is very tool-specific. Each tool has different levels of support for branching, for multiple repos, and for subdirectories — looking at the diffs to see whether there are changes in a specific directory. Sorry, John, you were going to say something? >>I was just going to say, I've never really done it, but I imagine the same kinds of downsides of monorepo versus multiple repos exist here. You've got blast-radius issues; you've got: how big is the monorepo, and does the tool have to pull or cache the whole thing every time it computes a diff? What's the support for only looking at directories versus — I think we could get way down into a deeper conversation, maybe later, about how we structure our Git repos for GitOps. Do we have a super-granular repo per environment, per app, per cluster, per whatever, or directories per environment, or branches per environment? How is everything organized? It's one of those things where there's never one-size-fits-all, so I'll give the classic consultant "it depends" answer. >>Yeah, for sure — it's very similar to the code struggle, because it depends. It's similar to the problem of teams trying to figure out how many repos to use for their code: should they do microservices, semi-microservices, macroservices? Too many repos means you're doing a bunch of repo management — a bunch of changes on your local system, constantly pulling all these different things — but if you have one big repo, it's a huge monolithic thing you have to deal with, plus path-based issues for tools that only need to look at a specific directory. I keep going back to this: it's a culture thing. What does your team prefer, what's painful for everyone, and what's the loudest pain you need to deal with? Is it repo management, or is it that everyone's in one place and it's really hard to keep too many cooks out of the kitchen, which is a monorepo problem? >>How do we handle security? This is a great one from Tory again — another great question, back to back; that's a first. Security as it pertains to GitOps: anyone who can commit can change the infrastructure, yes? >>Yes. So the tooling you have for your Git repo, and the authentication, authorization, and permissions that you apply to it using a Git server like GitHub or GitLab or whatever your flavor of the day is — that is how security is handled with respect to changes in your GitOps configuration repository. It's completely specific to your implementation of how you handle the Git repositories that the GitOps tooling watches to reconcile changes. As for the permissions of the, for lack of a better term, robot itself — the GitOps tooling like Flux or Argo CD — one would create a user or a service account, or use other authentication measures, to limit its permissions to exactly what it needs to read the repos, push commits, and so on. So that is well within the realm of what you already have for your Git repo.
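In practice, a lot of that comes down to ordinary Git-hosting controls on the config repo: branch protection on the branch the tooling watches, plus required reviewers for the sensitive paths. A CODEOWNERS sketch like the one below is one common way to express it — GitHub and GitLab both support the format, though the team names and paths here are made up.

    # CODEOWNERS at the root of the GitOps config repo
    /clusters/production/   @example-org/platform-team
    /clusters/staging/      @example-org/platform-team @example-org/app-teams
    /apps/                  @example-org/app-teams

Combined with branch protection that requires code-owner review, "anyone who can commit can change the infrastructure" becomes "anyone who can get a reviewed PR merged can change it," which is usually the intent.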
>>Yeah. A related question is from AG: what they like about GitOps, done nicely, is that even a newbie can get stuff done easily; what they dislike is that when you have too many Git repos it becomes too complicated — and I agree. I was joking with a team the other week that a developer used to just make one commit to a feature branch and pass it on to a QA team that would eventually merge it to master. Now they make a commit and a PR for their code, then they go make a PR in the Helm chart to update the thing, and then another PR in the GitOps repo for Argo — probably four or five PRs just to get their code into production. We were talking about the negative of that, but the reality is it's just four or five PRs. It's a repetitive action, but it's one kind of thing; it isn't five different systems with five different methodologies and tooling — one in the web, one on a client, one on a command line I don't remember. >>I think when you get to the scale where those kinds of issues are a problem, you're probably at the scale where you can afford to invest time into automating them. When I've seen this in larger customers or organizations, if they're at the stage where apps are coming up all the time — you know, a 10x or 100x developer-to-operations ratio — and folks are creating Git repos and setting up permissions, then that stuff gets automated, right? Maybe ticket-based systems, or, more often, the same model of reconciliation and operators — and the horrific abuse of CRDs we're seeing in the Kubernetes community right now. A developer creates a CRD that just says: hey, I'm creating a new app called app A. A controller picks up that definition, programmatically creates a Git repo, adds the right permissions by looking up the developers and teams that need access, and automatically creates and templates the namespaces it needs in the clusters and environments it needs, based on some metadata it might read. So those are definite problems, and they're definitely a teething, growing-pain thing, but once you get to that scale you need to step back and say: look, we need to invest time into the operational aspect of this and automate the pain away. >>Yeah, and that ultimately ends in custom tooling, which is hard to avoid at scale. There are almost two conversations here. There's what I call the solo admin, solo DevOps — I bought that domain, solodevops.com — because whenever I'm talking at DockerCon in the real world, I ask people to raise hands (I don't know how we raise hands here): how many of you are the sole person responsible for deploying the app your team makes? And about a quarter of the room raises their hand. I call that solo DevOps. That person can't build all the custom tooling in the world, so they really need Docker-like solutions, where it's opinionated, the workflow is built in, and they don't have to wrangle things together with a bunch of glue — in other words, Bash. And this brings us to a question from Lee: how do you combine GitOps with CI/CD, especially the continuous bit? How do you avoid having a human editing and committing for every single deploy? The complaint the team he was working with has — they've settled on customized templates and a script for routine updates. So as a seed for this, instead of that specific question since it's a little open-ended, tell me whether you agree with this: I look at the image — the Docker image, or container image in general — as an artifact, and that artifact going into the registry with the right tag is, to me, one of the great demarcation points: we're done with CI and we're now into the deployment phase. It doesn't necessarily mean the tooling has a clean cut there, but that artifact being shipped in a specific way — promoted, as we sometimes say. What do you think? Does anyone have opinions on that? I don't even know if that's the right opinion to have.
>>Yeah. And that ultimately ends in custom tooling, which is hard to avoid at scale. I mean, there are almost two conversations here, right? There is what I call the solo admin, solo DevOps; I bought the domain solodevops.com because, whenever I'm at DockerCon in the real world, I ask people to raise hands, and I don't know how we can raise hands here, but I would ask how many of you are the sole person responsible for deploying the app your team makes, and about a quarter of the room would raise their hand. So I call that solo DevOps. That person can't build all the custom tooling in the world, so they really need Docker-like solutions where it's opinionated, the workflow is sort of built in, and they don't have to wrangle things together with a bunch of glue, in other words bash. And so this leads into a question from Lee: he's asking how you combine GitOps with CI/CD, especially the continuous bit. How do you avoid having a human, and this is the complaint the team I was working with had, how do you avoid a human editing and Git-committing for every single deploy? They've settled on customized templates and a script for routine updates. So as a seed for this conversation, instead of that specific question, because it's a little open ended, tell me whether you agree with this: I kind of look at the image artifact, because the Docker image, or container image in general, is an artifact, I view it that way, and that thing going into the registry with the right tag, the tag rather than the label, is to me one of the great demarcation points of "we're done with CI and we're now into the deployment phase." It doesn't necessarily mean the tooling has a clean cut there, but that artifact is being shipped in a specific way, or promoted, as we sometimes say. What do you think? Does anyone have opinions on that? I don't even know if that's the right opinion to have. >>So I think what you're getting at is that GitOps models can trigger the reconciliation loop off of different events. One way is if it notices an image change in the registry; the other is if there's a commit event on a specific repo and branch, and it's up to the person implementing their GitOps model which event to trigger the reconciliation loop off of. You can do both, or one or the other. It also depends on the templating engine you're using on top of Kubernetes, such as Helm or the other ones out there, or, if you're not even doing that, straight YAML. So it kind of just depends, but those are typically the two options one has, and a combination of them, to trigger that event. You can also just trigger it manually, right? You can go to the command line and force a scan, a new reconciliation loop, to occur. So, I don't want to say this, but it depends on what you're trying to do and what makes sense in your pipeline. If you're set up so that you're doing it based off of image tags, then you probably want to use GitOps in a way that uses image tags and the pattern you've established there. If you're not really doing that, and you're more around different branches mapped to different environments, then trigger off of the correct branch. And that's where the permissions also come into play: if you don't want someone to touch production, and you've got the GitOps for your production cluster based off of, say, a main branch, then whoever can push a change to that main branch has the authority to push that change to production, right? So that's your authentication and permissions system, and the same goes for the registry itself. >>Yeah. Sorry, anyone else have any thoughts on that? I was about to go to the next topic. >>I was going to say, I think certain tools dictate the approach. Like, if you're using Argo CD, and correct me if I'm wrong, I think the only way to use it right now is through manifest changes: it looks at a specific directory, and if anything changes, it will do its thing and synchronize the cluster with whatever's in Git. >>Yeah, Flux has both. Yeah, and Flux has both, so it kind of depends. I think you can make Argo do that too, but this is back to what we were saying in the beginning: these things are changing, right? So that might be what it is right now in terms of what triggers the reconciliation loops in the GitOps tooling, but there might be other events in the future that trigger it. And it's not completely standalone, because you still need your tooling to do any kind of testing or whatever else you have in your specific pipeline. So oftentimes you're bolting GitOps into some other part of a broader CI/CD solution. That makes sense. Yeah.
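For reference, here is a minimal sketch of the "branch mapped to an environment" pattern using an Argo CD Application, assuming its v1alpha1 API; the repo URL, path, and namespace are placeholders.

```yaml
# Hypothetical Argo CD Application: the production cluster tracks the main
# branch of a config repo, so merging to main is what changes production.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shop-api-prod              # placeholder
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-config   # placeholder repo
    targetRevision: main           # the "production" branch in this setup
    path: environments/prod/shop-api
  destination:
    server: https://kubernetes.default.svc
    namespace: shop-api
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert out-of-band changes in the cluster
```

Whoever can merge to main in that repo effectively controls production, which is why the branch protections and permissions discussed above matter as much as the tooling itself.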
>>We've got a lot of questions about secrets, people are asking about secrets. >>So my tongue-in-cheek answer to the secrets question is: what are the best practices for Kubernetes secrets? That's the same answer for secrets with GitOps. GitOps, last time I checked and last time I was running this stuff, has nothing to do with secrets in that sense. It's just there to get your stuff running on Kubernetes. So there's probably a really good session on secrets at DockerCon. >>I would agree with you, I agree with you. Yeah, I mean, GitOps tools aside, every project of mine handles secrets differently. And, I can't remember who I was talking to recently, but I'm very bullish on GitHub Actions, I love GitHub Actions. It's not great for deployments yet, but we do have this new thing, GitHub Environments I think it's called, so it at least lets me store secrets per environment, which it didn't have the concept of before. Because if any of you are running Kubernetes out there, you typically end up with more than one Kubernetes; you're going to end up with a lot of clusters at some point, at least multiple, more than two. So you do have to store secrets somewhere, and there's a discussion happening in chat right now where people are talking about Sealed Secrets, which, if you haven't heard of it, go look it up and get versed on what Sealed Secrets is, because it's a fantastic concept for how to store secrets in public. I love it because I'm a big PKI nerd, but it's not the only way and it doesn't fit all models. I have clients that use AWS secrets because they're in AWS, and then they just use the Kubernetes external secrets approach. But again, like Nirmal said, that doesn't really affect GitOps; GitOps is just applying whatever Helm charts or YAML or images you're deploying. GitOps is more about the approach of when the changes happen, and whether it's a push or pull model, like we're talking about. >>I would say there are a bunch of prerequisites to GitOps, secrets being one of them, because the risk of putting a secret into your Git repo, if you haven't figured out your Kubernetes secrets architecture before you start diving into GitOps, is high, and removing secrets from Git repos could be its own industry, right? It's >>a thing, >>how do >>I hide this? How do I obscure this commit that's already now on a dozen machines? >>So there are some prerequisites in terms of when you're ready to adopt GitOps, I think, is the right way to answer that, secrets being one of them. >>I think secrets were the thing that, two or three years ago, gave me the aha moment with GitOps, which was this: the premier thing everyone used to say about GitOps, about why it was great, was that it's the single source of truth, there's no state anywhere else, you just need to look at Git. And then with secrets you realize, along with a bunch of other things down the line, that that is not true and will never be true. So as soon as you can lose the dogmatism about everything being in Git, it's fantastic, as long as you've understood that everything is not going to be in Git. There are things which will absolutely never be in Git, and some tools just don't deal with that; they need to own their own state, and especially in Kubernetes, some controllers own their own state. Sealed Secrets, and other projects like SOPS, and I think there are two or three others, are a great way of dealing with secrets if you want to keep them in Git.
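For readers who haven't seen Sealed Secrets, here is a hedged sketch of what actually ends up in the Git repo; the encrypted value is a placeholder, and in practice it would be produced with the kubeseal CLI against the controller running in your cluster.

```yaml
# Hypothetical SealedSecret, illustrating the "safe to store in a public repo"
# idea: the ciphertext is committed to Git, and only the in-cluster controller
# holding the private key can turn it back into a normal Secret.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials             # placeholder
  namespace: shop-api
spec:
  encryptedData:
    password: AgB4k...             # placeholder ciphertext produced by kubeseal
  template:
    metadata:
      name: db-credentials         # the plain Secret the controller will create
```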
But projects like Vault are more what I would call production-grade secrets strategies, right? And if you're in AWS or another cloud, you're more likely to be using their secrets service. Your secrets policy is maybe not dictated by you; in large organizations it might be dictated by a CISO or a security team. I think, if you're trying to adopt GitOps or you're thinking about it, get the dogmatism of Git as a single point of truth out of your mind, think about GitOps more as a philosophy and a set of best-practice principles, and you will be in much better stead. >>Right? Yeah. >>People are asking more questions in chat, like whether infrastructure as code plus CD, or CI rather, is essentially GitOps. These are all great questions and part of the debate. I'm actually just going to throw this up on screen, and I'll put it in chat, but this is, to me, the source, right? Weaveworks is where they coined the term. If we talk about the history for a minute, and tell me if I'm getting this right: a lot of us were trying to automate all these different parts of the puzzle, but some things might have been infrastructure as code and some things weren't. Some things were sort of "settings as code," like going into Jenkins and typing in secrets, or typing a certain thing into the settings of Jenkins, and that wasn't really in Git. What Weaveworks was going for was a way to have, eventually, almost a two-way state understanding, where Git might change your infrastructure, but your infrastructure might also change and need to be reflected back into Git, if Git is trying to be the single source of truth. And, like you're saying, the reality is that you're never going to have one repo that holds all of your infrastructure; you'd have to have all your Terraform and anything else you're spinning up in there. But anyway, I'm going to put this link in chat. One of the things that guide talks about is what GitOps is not, so it's great to read through the different requirements, and, like I was saying a while ago, having CI, having infrastructure as code, and having tried a little bit of continuous deployment is probably a prerequisite for GitOps. It's hard to just jump into it when you don't already have infrastructure as code, because a machine doing stuff on your behalf means you have to have things documented somewhere in a Git repo. But let me put this in the >>chitty chat. I would like to know if the other panelists agree, but I think GitOps is, I would say, a moderate-level thing; it's not a beginner-level Kubernetes thing, it's moderate to a little more advanced. One can start off using it, but you definitely have to have some prereqs in place, or some understanding of a pattern in place. So what do the other folks think about that opinion? >>I think if you're trying to use GitOps before you know what problem you have, you're probably going to be in trouble, right? It's like having a solution to a problem you don't have yet. I mean, if it's just you and you're just typing kubectl apply, you're one person, right? GitOps doesn't seem like a big jump; why would I do that? Instead of a git commit, I'm just typing kubectl apply. But I think one of the rules from Weaveworks is that none of your developers and none of your admins can have kubectl access to the cluster, because if you do have access and you can just apply something, then that's just infrastructure as code, that's just continuous deployment, that's not really GitOps. GitOps implies that the only way things get into the cluster is through the GitOps automation you're using, with Flux, Argo, and the like.
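To illustrate that "nobody gets write access with kubectl" opinion in the simplest possible terms, here is a minimal sketch using plain Kubernetes RBAC; the group name is a placeholder from whatever identity provider is in use, and the GitOps controller's own service account (not shown) would be the only identity bound to write permissions.

```yaml
# Hypothetical read-only role for humans: they can inspect and debug,
# but cannot create, update, or delete anything directly in the cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: humans-read-only
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]   # deliberately no write verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: humans-read-only
subjects:
  - kind: Group
    name: developers                  # placeholder group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: humans-read-only
  apiGroup: rbac.authorization.k8s.io
```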
We haven't talked about it, but what's the other one that Viktor Farcic talks about? By the way, people are asking about Viktor, because Viktor would love to talk about this stuff, but he's in my next live session, so come back in an hour and a half or whatever, and Viktor is going to be talking about going sysadmin-less with me. >>You've got to ask him nothing but GitOps questions in the next one. >>Confuse him, confuse him. But anyway, it's hard to understand without having tried it; I think conceptually it's a little challenging. >>One thing with GitOps, especially based off the Weaveworks blog post that you just put up there: it's an opinionated way of doing something. It's an opinionated way of delivering changes to an environment, to your Kubernetes environment. We're often not used to seeing things that are this opinionated in the ecosystem, but GitOps is an opinionated thing; it's one way of doing it. There are ways to change it, and there are options, like what we were talking about in terms of the events that trigger reconciliation, but the way it's structured is opinionated both from a tooling perspective, like using Git and so on, and also from a DevOps cultural perspective, right? Like you were talking about: not having anyone use kubectl to change the cluster directly. That's a philosophical opinion that GitOps forces you to adopt; otherwise it kind of breaks the model, and I just want everyone to understand that it is very opinionated in that sense. Yeah. >>Pulumi is another thing on the infrastructure-as-code front; someone's mentioning Pulumi in chat. I actually just had them on my live show, self plug, bret.live, go there, I'm on YouTube every week doing this same thing, these are my friends, and we had Pulumi on in the last couple of weeks and talked about their infrastructure-as-code solution, where you're actually writing code. That's an interesting take, the developer team sort of owning the infrastructure through code rather than YAML as a data language. I don't really have an opinion on it yet, because I haven't used it in production or anything in the real world, and I'm not sure how far they're trying to go toward the GitOps stuff. I will also do a plug for Solomon Hykes, who had a session at the beginning of the day; it's already happened, so you can go back and watch it. It's called, what is it, rethinking application delivery with CUE and BuildKit. Go look it up. This is the co-founder and former CTO of Docker, Solomon Hykes, at the beginning of the day. He has a tool called Dagger. I'm not sure why the title of the talk is about delivering with CUE and BuildKit, but the tool he shows off in there for an hour is called Dagger, and it's an interesting idea on how to apply a lot of this opinionated, automated stuff to deployment. It's GitOps-based, and you use the CUE language.
It's a configuration language, kind of a graph language. I watched most of it and it was a really interesting take. I'm excited to see if that takes off and if people try it, because it's another way to get a little more advanced with your Git-driven deployments without having to stick everything in YAML, which is kind of where we are today with Helm charts and whatnot. All right, more questions about secrets. I think we're not going to say a whole lot more about secrets; basically, put secrets in your cluster to start with, in Kubernetes, encrypted, and then, as it gets harder, you have to find another solution, like when you have five clusters and you don't want to do it five times. That's when you have to go for Vault and AWS secrets and all >>that. Right? I'm going to write it on a Post-it note. Yeah, and cram it into the cluster. Just kidding. >>Yes, there are recordings of this; yes, they will be up later, because these are all going to be on YouTube. Yeah, there's a comment saying detect-secrets or GitGuardian are absolute requirements, I think in reference to your secrets point earlier. Kamal is asking about Kubernetes dropping support for Docker: this is not the place to ask that, but it's basically a non-event. Mirantis has actually just made that same shim available in a different repo, so if you want to keep using Docker with Kubernetes, you can, it's no big deal. Most of us aren't using Docker in our Kubernetes clusters anyway; we're using containerd or whatever is provided to us by our provider. Yeah, thank you so much for all these comments; these are great people helping each other in chat. I feel like we're just here to make sure the chat is available so people can help each other. >>I feel like I want to pick up on something, since you mentioned Pulumi. We're talking about GitOps, and the origination of that, I guess, was deploying applications to clusters, right, picking up deployment manifests. But with Pulumi, and obviously Terraform and those things have been around a long time, folks are starting to apply this elsewhere; I think I found one earlier called Kubestack, the Terraform GitOps framework. But also with the advent of things like Cluster API in the Kubernetes space, where you can declaratively build the infrastructure for your clusters and build the cluster itself, we're not just talking about deploying applications: Cluster API will talk to AWS, spin up VPCs, spin up machines, it will do the same kinds of things that Terraform and those other tools do. I think applying GitOps principles to the infrastructure spin-up, the proper infrastructure-as-code stuff, constantly applying Terraform plans and whatever, constantly applying Cluster API resources and spinning up stuff in those clouds, is a super interesting extension of this area. I'd be curious to hear what the folks think about that. >>Yeah, that's why I picked this topic as one of my three. I got to pick the topics, and I picked the three things that are the most bleeding edge and exciting, where we basically haven't figured it all out yet, we as an industry, so I think we're going to see more ideas on it. What's the one with the popsicle as the icon that Viktor talks about all the time?
It's another GitOps-like tool, but it's GitOps where you use this Kubernetes engine, and, we'll have to look it up. >>You're talking about Crossplane. >>So >>my >>wife is over here with the sound effects, and the first sound effect of the day that she chooses to use is that one. >>All right, can we pick another? Let's find another question, Bret. >>I'm searching, >>there are so many of them. All right, one really quick one: is GitOps only for Kubernetes? I think the main two tools we're talking about, Argo CD and Flux, are mostly geared toward Kubernetes deployments, but it seems like they're organized so that there's a clean abstraction between the agent doing the deployment and the tooling it interacts with. So I would imagine that in the future, and this might be true already, GitOps could be applied to other types of deployments. But right now it's mostly focused on Kubernetes, and treats Kubernetes, or the tooling on top of Kubernetes, say something like Helm, as a first-class citizen. Yeah, back to you, Bret. >>Back to the earlier question, the thing I was looking for is Crossplane. So that's another tool. Viktor has been sharing a lot about Crossplane on YouTube, and it basically runs inside Kubernetes, but it handles your other infrastructure besides your app. It allows you to GitOps your AWS stuff by using the Kubernetes state engine as the way to manage it. I have not used it yet, but he does some really great demos on YouTube. So people are liking this idea of GitOps and trying to figure out, how do we manage state? Because the problem with Terraform is, well, there are many problems, there are always a lot of problems, but in the GitOps world it's not quite the right fit yet. It might be, but it's still largely expected that a person types the command, and it keeps state locally, or in S3, clouds and all that. And the other thing I'm now realizing, going back to the Solomon Hykes demo, is that he was showing it deploying something onto S3 buckets, onto Netlify, onto Google, other things beyond Kubernetes, and saying that it's all a GitOps approach. So I think we're just at the very beginning of this, because it all started with Kubernetes, and now there's a Swarm one, you can look up Swarm GitOps; swarm-sync, I think it's called, on GitHub, which lets you do Swarm-based GitOps-like things. And now we're seeing these other tools coming out saying, we're going to try to do the GitOps concepts, but not for Kubernetes specifically. Infrastructure as code started in certain corners of the world, and now we all just assume you're going to have an infrastructure-as-code way of doing whatever you're doing, and I think GitOps is going to follow that same path, where pretty soon we'll have GitOps for all the cloud stuff, and it won't just be Flux or Argo. And then the weird question is, will Flux and Argo support all those things, or will they just stay focused on Kubernetes apps, Kubernetes stuff? >>There's also, I think this is what you're alluding to,
a trend of using Kubernetes and CRDs to provision and control things that are outside of Kubernetes, like the cloud service providers' services, as if they were first-class entities within Kubernetes, so that you can use the Kubernetes-focused tooling for things that are not Kubernetes, through the Kubernetes interface. Yeah. >>Yeah. >>Yeah, I'm just going to say that sounds like Crossplane. >>Yeah, I mean, for the last couple of years it's been Flux and Argo going back and forth. They're like frenemies, you know, and they've been iterating on these ideas of how to manage this complicated thing that is many Kubernetes clusters. Because Argo, and I don't know if Flux v2 can do this, but Argo can manage multiple clusters now from one cluster, so you can manage other clusters, technically external things, from a single entity. Originally Flux couldn't do that, but I'm going to say that v2 can; I don't actually >>know. I think all of that is going to consolidate in the future, in terms of the common feature set. Iver and John, what do you think? >>I mean, I think it's already begun, right? Didn't they collaborate on a common engine? I don't know whether it's finished yet, but I think they're working towards a common GitOps engine, and then they're just going to layer features on top. But I think that's interesting, because where it runs and what it interacts with matters: if we're talking about a pull-based model, it's decentralized to a certain extent, right? We need Git and we need the agent that's pulling. If we're saying there's something else orchestrating all of that, then we start to fuzzy the model a bit, right? Like, is this state living somewhere else? I think that's interesting as well. I thought Flux was completely decentralized, but I know you install Argo somewhere, Argo has a server as well; it's been a while since I've looked in depth at them. But does that muddy the agent-only pull model? >>I'm reading a... >>Yeah, I would say there's a process of natural selection going on as the CNCF landscape evolves and grows bigger, and a lot of divide and conquer right now. But I think as certain things get more prominent >>and popular, I think >>it starts to trend, and it inspires other things, and then it starts to aggregate and come back into a unified core. Like, for instance, Crossplane: I feel like it shouldn't even really exist. It's a Kubernetes add-on, but it should be built in, it should be built into Kubernetes. Why doesn't this exist already, >>for controlling a cloud? >>Yeah, just having this interface with the cloud provider and being able to... yeah, >>exactly. Yeah, and that kind of happens, because when you start talking about storage providers and networking providers, there are very specific implementations of operators, or just individual controllers, that do operate and control other resources in the cloud, but certainly not universally, right? Not every feature of AWS is available to Kubernetes out of the box.
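To make the Crossplane idea concrete, here is a hedged sketch of a managed cloud resource declared in Git and reconciled by an in-cluster controller; it assumes Crossplane's AWS provider and its S3 Bucket kind, and the exact group, version, and field names vary by provider release.

```yaml
# Hypothetical Crossplane-style managed resource: an S3 bucket declared in Git,
# created and kept in sync in AWS by a controller running inside the cluster.
apiVersion: s3.aws.crossplane.io/v1beta1   # group/version depends on the provider release
kind: Bucket
metadata:
  name: example-artifacts-bucket           # placeholder
spec:
  forProvider:
    locationConstraint: us-west-2
    acl: private
  providerConfigRef:
    name: aws-default                      # cloud credentials are configured separately
```

The appeal, as discussed above, is that the same GitOps repo and the same reconciliation loop that deploy applications can also drive cloud resources, at the cost of needing a cluster in the first place.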
And, you know, one of the challenges with Crossplane is that you've got to have Kubernetes before you can deploy Kubernetes. Like, there's a chicken-and-egg issue: if you're going to use Crossplane for your other infrastructure, it has to run on Kubernetes, so who creates that first Kubernetes in order for you to put it on there? And Viktor talks, in one of his videos, about the same problem with Flux and Argo: with Argo, you can't deploy Argo itself with GitOps. There has to be that initial "I'm a human and I typed some commands on a server and things happened"; they don't really have an easy deployment method for getting Argo up and running using nothing but a git push to an existing system, something like that. So it's an interesting problem of day-one infrastructure, which is, again, only day one; I think day two is way more interesting and hard. But how can we spin these things up if they all depend on each other, and which one is the first to get started? >>I mean, it's true of everything, though. At the end of it you need some kind of big-bang function to start everything. I >>think, without going off on a tangent, I was going to say: if folks have heard of kind, which is Kubernetes in Docker, a mini Kubernetes cluster you can run in Docker where each container runs as a node, that's been a really good way to spin up things like Cluster API clusters, because they bootstrap a local kind cluster, install the manifests, it goes and spins up a full-sized cluster, transfers its resources over there, and then deletes itself, right? So it's kind of bootstrapping itself. And a couple of folks in the community, Jason DeTiberus, I think he works for Equinix Metal, have experimented with an even more minimal, just-an-API-server approach, so you're really just leveraging the Kubernetes ideas of a reconciliation loop and a controller: you just need something to bootstrap with those CRDs, get something going, and then go away again. So I think that's going to be a pattern that comes up more and more. >>Yeah, for sure. And a quick answer to the next question: Angel asked what our thoughts are on GitOps being niche to Git versus other VCS tools. Well, if I knew anyone who was using anything other than Git, I would say, you know, GitOps is a horrible name; it should just be VCS ops or something like that, but that doesn't roll off the tongue, so someone had to come up with the GitOps phrase. But absolutely, it's all about version control solutions used for infrastructure, not just code. Someone in chat asks a great question that we're not going to have time for, but maybe people can reply in chat with what they think, about infrastructure and code, the lines being blurred, and how much infrastructure developers need to know; essentially, they're having to know all the things. So, unfortunately, like every panel here today with this great community, we've got way more questions than we can handle in this time, so we're going to have to wrap it up and say goodbye. Go to the next live panel; I believe the next one is on developer-specific setups, and that's going to be Peter running that panel, something about development in containers, and I'm sure it's going to be great, just like this one.
So let's go around the room: where can people find you on the internet? I'm @bretfisher on Twitter; that's where you can usually find me most days. You? >>Yeah, I'm on Twitter too, I'll put it in the chat; it's kind of confusing because of the TSR seven. >>Okay, yeah, that's right, you can't just say it. You can also look below the video, our faces are there, and if you click on them it tells you our Twitter handles and stuff. John? >>John Harris 85, pretty much everywhere: GitHub, Twitter, Slack, et cetera. >>Yeah, >>and Nirmal, @normalfaults, or just, you know, living on YouTube Live with Bret. >>Yeah, we're all on the Twitter, so go check us out there, and thank you so much for joining. Thank you so much to you all for being here; I really appreciate you taking time out of your busy schedules to join me for a little chit chat. Yes, all the cheers, yes. >>And I think this GitOps loop has been declaratively reconciled. >>Yeah, there we go. And with that, ladies and gentlemen, we bid you adieu; we will see you in the next round, coming up next with Peter. >>Bye.

Published Date : May 28 2021

Dana Lawson, GitHub | DockerCon 2021


 

>>Okay, welcome back to theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host. I've got a great guest here, Dana Lawson, vice president of engineering and technology partnerships at GitHub. Dana, welcome to theCUBE. You're leading the engineering team over at GitHub, and you've been around the block in the cloud and enterprise area. Congratulations, and welcome to theCUBE. >>Well, thanks for having me, John. I am super excited. DockerCon 2021, wow, I can't believe it's been that long, right? >>We've got the keynote coverage: automation, the top trend, DevOps, DevSecOps, developer productivity, the modern era, a lot of action, and DockerCon draws more attendees every year. Containers set up cloud native, and there's a tsunami of new ways that people are programming, new ways teams are formed, new ways people are being super productive through the pandemic. We've seen developers really lead the charge in the virtual work environment. So, a lot of action. First, tell us what's going on in the developer community right now. Give us your take.
>>And I think that we're seeing that naturally. In the keynote, where I mentioned some of the research that we've done, we're seeing developers work more, but we're seeing them work more on open source projects and the things that they want to work on, not necessarily saying, I'm going to go spend 20 more hours at work. Really it's that continuation of, hey, instead of automation being an afterthought, we're going to make it something that's at the forethought of what we're doing. And so what it's really done is increase the time spent on writing great code and hopefully having better uptime. I am a DevOps, SRE, sysadmin, whatever you want to call it, at heart, and forever will be. And so getting to have more time to spend on SLOs, and really on what I call the safety guards, the rails of your system, so that you can go in there and allow everybody to contribute: that's what I think we're seeing, and we're going to continue to see it as things get easier and work out of the virtual box. >>I mean, simple or easy is always a good strategy. I was just reporting for our team on KubeCon and CloudNativeCon: there's more CloudNativeCon going on than KubeCon, because Kubernetes got kind of boring, and that enabled more cloud native development. And the other trend we've been reporting on is end-user contribution to open source. You're starting to see end users, not just the usual suspects like Lyft and whatnot; you're seeing real enterprises having teams contributing into open source in a big way. This is kind of a new, interesting dynamic. What's your take on that? Is that a signal of simplicity? What does it mean? >>I'm going to tell you, I think that companies and big names have realized they were using open source, and they have been all along; it's been around for a minute. Some of our most favorite libraries and frameworks have been open source from the beginning. You hear me talking about Java and Tomcat; that's open source. And so it's really this understanding of the workflow. What we see now is that there should be an investment, because the world's team of open source developers is powering our technology, and why shouldn't we as companies embrace that and actually give back and spend that quality time? Because us innovating together on open source, privately and publicly, just makes everything better for everybody. So I think we're going to continue to see this trend; I'm excited about it. GitHub has done some amazing work in this space with GitHub Sponsors, because we want open source to continue to enable innovation and have people participate, and now we're seeing it with businesses alike. And so I think we're going to see this practice continue, and companies will take a look not only at the technology they're using, but at the open source practices: how do these maintainers and these open source teams ship reliable, quality code that is changing the world, and how can we put those practices into our own development teams for what we're building for our customers? So you're just going to continue to see this. And I think also, with that being said, because the barrier to entry has been lowered by these advancements, we're seeing the rise of the citizen developer as well. We're seeing people all across the company, and some that are much further along with their transformations, participate in a way they never have before.
Whether it's the design part, the design thinking of it, to how do you curate and have a great experience for your customers, we're just seeing participation at all levels of the development stack, and that includes the stuff outside of the actual code being written, because it's all so interconnected. And so, I don't know, I'm excited. I'm excited to see what we're going to unlock by having people participate more than ever, and by having companies invest in that participation. >>I love your enthusiasm, and I agree. I think it's a great time for open source because it has democratized; it is bringing in new people. The aperture of the personas coming in >>is not >>just computer science and engineering: there's this hybrid SRE role developing, and then you've got creative. There's a creativity aspect coming back, and I've been riffing on this for a few years, but I'm kind of seeing this development and I'd love to get your thoughts. It used to be that craftsmanship was involved in building software, and then Agile came in: ship fast and iterate. And now craft is coming back; you're starting to see creativity and the developer experience through collaboration tools and this kind of democratization. What are your thoughts on this? I know you think about this as an engineering leader. Craft, Agile, bring them both together, speed and quality: is craft coming back? >>Craft is definitely coming back, and I think it's because we melt away the mundane stuff, right? We're all hyper-focused on being the first out there, you've got to ship immediately, agile, agile, agile. But what we know is that you can ship a bunch of stuff nobody wants very fast; you can ship a bunch of stuff that hasn't been curated to really solve the problem. You'll be fast, but will it be awesome? I think people demand more. And I really believe that because we've embraced some of these frameworks, workflows, and toolsets, we get to focus on the craft, and that's what we're trying to do, right? Ultimately we want every person that builds to be an innovator, and not just an innovator for innovation's sake, but because they're changing and affecting somebody's life, right? And so when we dig deep and focus on the craft, and we still have that expertise, we're just going to apply it in a very intentional way, versus okay, hurry up, build, build, build, go, go, go, because now it's connected. So we're seeing the rise of that craft, and what I think is going to happen in turn is we're all going to have a better experience; we're all going to reap the benefits of having that expertise. You know, there's a fear sometimes, when we talk about automation and DevOps and interconnected tool systems, that maybe you're taking somebody's job, the daily tasks they were doing before. No way. All we're doing is saying: cool, take the repeatable thing you're doing over and over and over, automate it, and let's focus on the craft. If you're a security person and you want to get down deep and understand where vulnerabilities are going to come from, and things that people haven't even thought of, cool, let's take away some of the other things that we know can be caught and solved without you having to pay attention to them. I think we need that along the whole stack. So it's pretty exciting times. >>Yeah, and we call that undifferentiated heavy lifting: you know, just get it out of the way. Since you brought that up,
let's take automation down that road of experience. What does it mean for the developer? Because this is really an opportunity, right? The phrase I've heard is, if you do it more than a few times, just automate it away. So when is the right time to automate? Where does automation play into the developer experience? When does it make things more productive, and where's the innovation angle? Share your thoughts on how people should look through the prism of automation: productivity versus innovation. What's the automation view there? >>I mean, it is a good little metric: maybe you've done it five times and it's the same thing over and over and over. The question then is, do you have to be the one doing that? So I think it's about finding and defining your own boundary for what you need, right? It's hard to get out there and say we can go apply the same stamp to every workflow. We already tried that with agile frameworks: everybody's going to do Scrum, we're going to combine things, and you know what? It doesn't work. What we really need to do is have teams understand their workflows, do some diagnosis, and ask, where are we in the system? And I think that's where powerful metrics and insights come in, asking, where are we having a slowdown? Where are people spending their time? If people are spending their time doing break-fix, or continuously trying to jam something into a certain pipeline, you have to ask yourself, is this something we should be spending that time on? What if we had that time freed up? And so I do think you can put some good boundaries in there, whatever yours may be. I love some of those rule sets, but really, DevOps and automation start with the process, right? When I develop software, I always think about it through that design-thinking lens of how it will work when I get to it. And if we're focusing on the design aspect and the user experience, then we start looking at the pieces in between, from that code to having people use it, and asking, what do I need to do? And sometimes, depending on your industry, you may have needs that not everybody has, so it's hard to say there's a one-size-fits-all. But there is a good rule: if you've done the same repeatable thing every day, for numerous days, you probably should just go spend the time to automate it. And I think it's the convincing point, right? A lot of us are nerds and engineers at heart, and I love freaking math, so it's like, okay, if we spend two hours building, say, a GitHub Action for a Docker build one time, instead of somebody having to repeat that process, whatever it is, you're giving that time back, and that time is mental capacity, mental capacity that can be applied to something more important, and hopefully the more important thing is the user experience. So yeah, we all have those little systems out there; I say use them, but take a step back. I think the harder part is that yes, you will have to slow down for a minute, which is scary, to go build something repeatable so that you can speed back up, you know?
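As a rough illustration of that "spend two hours once" trade-off Dana describes, here is a minimal sketch of a GitHub Actions workflow that builds and pushes a Docker image on every push to main; the image name and secrets are placeholders, and the action versions reflect roughly what was current at the time.

```yaml
# .github/workflows/docker-build.yml -- hypothetical example
name: docker-build
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2                 # fetch the repo
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # placeholder secret
          password: ${{ secrets.DOCKERHUB_TOKEN }}      # placeholder secret
      - uses: docker/build-push-action@v2
        with:
          push: true
          tags: example-org/example-app:${{ github.sha }}  # placeholder image name
```

Once this exists, nobody on the team has to remember the build-and-push incantation again; the repeatable task runs on every merge.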
>>That's awesome, great insight, and I love the energy. A lot to ask you while you're here, because this is something I've been thinking about: I'm hearing a lot of developers talking about understanding the workflow, which you mentioned is a key thing, and I love that, getting in and understanding the customer experience and working backwards. But that brings up the whole question: how do you form the teams? How do you think about team formation? Because at cloud scale with cloud native, you can use building blocks, you have automation, you can easily compose and then build intellectual property around things, and containers make things easier. So as you start thinking about teams, is it better to have teams focus on, say, workflows, and then decoupled teams? Is there a strategy for general-purpose teams, or how do you look at team formation from the developer perspective to make the experience great and high quality? Is there a state of the art, in your opinion, given the composability and all the ease of use going on? What's the ideal way to think this through? What are your thoughts? >>Oh, you know, I'm going to say there's not one team to rule them all; there's not one team foundation that's going to be applicable everywhere, it's all different, right? Even within the same company, especially at scale, you may have different compositions of your teams, and I think it comes down to: what problems are you trying to solve within your workflow? What are you trying to accomplish? When we step back and think about our CI/CD pipelines, and really code from idea into cloud, I believe in a unified system, because I don't want developers worrying about it and doing one-offs; I'm like, you don't need to know that. And that's been an argument that's going on. I'm a huge Kubernetes fan, and so the debate has been, should the feature developers understand the ins and outs of Kubernetes? I'm going to say something controversial: I'm going to say no. They don't need to know that; they need to know how to monitor, alert, have smart rollbacks, and have a system that does it for them. That's why we have orchestration, that's why we have Docker containers, that's why we have world-class APM and monitoring systems in place, because we've done that hard work. So I would say no, they don't need to know that. But you still have needs, right, depending on where you are in this transformation. Maybe you're still integrating some of these cloud native principles and toolsets, and so you need some SMEs. I really love the embedded SRE model, not embedded like a chipset, but embedded in the team, because that person really should be a mentor and a force multiplier. You don't want to fall into the trap of, oh, we have an SRE on the team, they're going to do all the DevOps stuff. No, no, no: they're going to help you think about your product through a customer lens, right? They're the experts saying, whoa, maybe we should have an SLA because this is a tier-one feature; let's go make sure we build that automation so that we curate this feature with the highest level of availability, but then teach the team how to do that. So now you have this practice as part of the team, right? You're honing your craft, you have this practice now. Does that mean they need to go learn everything about the monitoring suite and the tools used? No, but they should understand how to read the output of it. And so there's not one team size to rule them all.
Unfortunately. But personally, I'll tell you what I'm a fan of: I think you should have flexibility. Once again, think about the points where you need the connective, unified system, right? And then you have this opportunity for developers to have some agency and creative freedom. Maybe you've been on a team that's been working on, I don't know, let's say your audit service; I think every piece of software has some component of audit, some auditability, because you want to know who was using what. Well, after they've done their tour of duty, because most of the cool stuff they've already fixed and turned into a feature set, let them roll onto something else, because you have that connective tissue at the inner points of your system that are always the same, right? We really want repeatability; we want them to just focus on writing the code. And I think because of these advancements, we are unlocking the opportunity for developers to think more broadly, right? Maybe you've been on the platform team and you want to go dip your toes into writing features: well, 90, okay, maybe not 90, but 80 percent of that is everyday repeatable tasks, so focus on that and get that shit out. But then you have the SME, and you're really thinking holistically, as a customer-obsessed team, about what you're building and why. So I love that. There's no one way. >>Yeah, I love the idea of the platform person having more flexibility, because that brings a platform mindset to the other pieces, but also feature acceleration versus product strategy, thinking through the arc of why you're building in the first place, right? And then the embedded SRE, great point there, great callout, because everything's cloud scale now; you've got to have pen tests built into automation, and >>who's going to >>design that? So I think it's really interesting how you're putting that together, and I think it's very relevant. And are there any new things you see happening now with cloud native? You mentioned Kubernetes; the story we've been telling is that Kubernetes got boring, and that's good, right? >>Meaning it's working, >>and people like it, its interoperability. It feels like a unifying connective tissue between under the hood and above, at the application layer. So it's nice, but the consequence is that there's more cloud native going on, which means more services are going to be connected and torn down. You mentioned observability and monitoring; that's important too. So as an engineering leader, that's not another department, right? That's going to be core to the developers. What are your thoughts on how to integrate observability? There are a zillion companies doing it now, but, you know... >>There are a zillion. My thoughts are, heck yeah. Observability isn't at the end of the stack, right? Observability is a part of it, just like quality is a part of it, just like when we think about agile. Let me throw it this way: when Docker came along, we basically had this baby OS encapsulated, instead of on servers, so you can have multiple of them, distributed, right? I think of it like, let's say your team is that Docker container, man; you want everything in there, right? It is a part of the practice. You want your alerting, you want your logging, you want it all wrapped up in this nice little bow, and you want lots of them all working together harmoniously. The same thing can be said about our teams.
We want them to be their own little micro operating system where they have all the resources available for them to go and do the thing that they're intending to do, and not have to worry about that subset. But it also gives them that control, right? So it's building in that layer of abstraction that's needed, but also understanding why it's important. So it's a little bit of both, right? We're not going to curate deep subject matter experts in, you know, the OSI model and every aspect, right? Like we're not going to turn a front-end engineer necessarily into a network engineer. But utilizing the toolsets, having a playbook where it is controlled, maintained and a part of your culture — all that's gonna do is allow you to move faster, and it's going to allow you to see what's really running out there in the wild. And I see these trends happening. I think we're continuing to see the rise of cloud native technologies because applications now are really a set of APIs that go across the world, in and out. And so the way that we develop is slightly different. And so we need to think about, well, how is it orchestrated and deployed? Well, if you have a repeatable pattern — once again, if we go back to that and think of our team (and I promise nobody asked me to come up with this) as like a little Docker container itself. You know, you're gonna write that image into what makes sense for you and have all the resources available, and you're gonna rinse and repeat that over and over and over again. And so, I mean, we're just seeing this continuation of, you know, monitoring, DevOps — sorry, it's not a problem, it's a culture, right? It's not one person's job or a role. It's a part of how you build great software. It's just a practice. >>You mentioned abstraction layers. It used to be conventional wisdom that they were good, but there were trade-offs — performance trade-offs or some overhead. Not anymore. It's good. You can basically build an abstraction layer and say, hey, I don't want to deal with networking anymore, it's gonna make it programmable. >>That's cool, no problem. So you start to see these new innovation patterns, right? So what are you most excited about when you start to see these new kinds of things being brought on that were limited years ago? Like you've got abstraction layers, you see the role of the SRE, you're seeing, um, the democratization of new developers coming in that are bringing new perspectives. We're seeing all these new kinds of ways that are refactoring how people write code. But what are you seeing as the most exciting? >>For me? Honestly, it's the opportunity for anybody to really be a builder, maker, developer, right? You don't have to have a traditional CS degree — if you do, that's awesome, like come and teach us awesome stuff that we probably should know, that's foundational. I don't have a CS degree. You know, we're moving on from these opportunities where it's self-taught to where you actually 100% can go and learn and build and create. We're seeing the rise in these communities.
I feel like these toolsets are really just lowering the barrier of entry for those people that don't have the advantage to go to, like, a four-year school and get a degree — for people that just have a great idea. What excites me is that next developer. You know, we talk about the 100 millionth developer sitting somewhere in the world, just going, I have a great idea and I'm gonna change the world and I don't know how to get started — but they do, they have it at their hands now. You know, if you can go onto a website and get a little bit dangerous with these toolsets, you can go and get your idea to the masses, and what we're going to end up doing is, like you said, democratizing tech. It's going to bring in new ways to think, it's going to change how we interact with systems. We get our blinders on sometimes — especially, you know, I live in Portland on the West Coast of the US — we know that the world is a vast, majorly huge, dynamic, awesome place. The things that work for me may not work for somebody on the other side of the world. The things that I do may not be relevant. But we're going to find that human connection. We're going to continue to say, well, wait a minute, how can we optimize for any human anywhere? How can we help take all these differences but do them in a repeatable pattern? So for me that's exciting — these toolsets that we've been working on for years are now going to be put in people's hands that never thought they could. And that is exciting. And to see the rise of just creativity is what really makes humans special, because we build and make, >>and the fact that it's more inclusive now — becoming more inclusive on all aspects of inclusive, whether it's individuals and coders, types of code. So, uh, integration is the new normal, right? Integrating in, uh, data control planes, all that goodness coming in because of the ease of use of the developer experience. Super awesome. Um, Dana, you're awesome. Great to have you on theCUBE and sharing your energy and insight. Great call-outs on many topics, a lot of gems being dropped there. Thanks for coming on theCUBE. >>Well, thanks for having me. It's been awesome, and DockerCon's been great. I can't wait to see the rest of the show. >>DockerCon 2021, virtual — real life coming back, maybe in physical next year, or hybrid for sure. This is theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host. Thanks for watching
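To ground Dana's point that observability is part of the practice — logging and metrics "wrapped up in a nice little bow" inside each service rather than owned by a separate department — here is a minimal sketch, assuming a Python service and the prometheus_client library. The service and metric names are invented for illustration, not taken from the interview.

```python
# Minimal sketch: logging and metrics live inside the service itself,
# rather than being bolted on later by a separate monitoring team.
# Assumes the prometheus_client package; all names here are illustrative.
import logging
import time

from prometheus_client import Counter, Histogram, start_http_server

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("audit-service")

EVENTS = Counter("audit_events_total", "Audit events recorded by this service")
LATENCY = Histogram("audit_event_seconds", "Time spent recording an audit event")


def record_event(user: str, action: str) -> None:
    """Business logic and its observability are shipped as one unit."""
    with LATENCY.time():
        log.info("user=%s action=%s", user, action)
        EVENTS.inc()


if __name__ == "__main__":
    start_http_server(8000)  # exposes a /metrics endpoint alongside the app
    while True:
        record_event("demo-user", "login")
        time.sleep(5)
```

Run it and the container exposes its own /metrics endpoint on port 8000 while it logs each event — the observability ships with the service instead of being added afterwards.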

Published Date : May 27 2021


Mike Tarselli, TetraScience | CUBE Conversation May 2021


 

>>Yes, welcome to this CUBE Conversation. I'm Lisa Martin, excited about this conversation — it's combining my background in life sciences with technology. Please welcome Mike Tarselli, the chief scientific officer at TetraScience. Mike, I'm so excited to talk to you today. >>Thank you, Lisa, and thank you very much to theCUBE for hosting us. >>Absolutely. So we talk about cloud and data all the time. This is going to be a very interesting conversation, especially because the events of the last — what are we on, 14 months and counting — have really accelerated the need for drug discovery, and really everyone's kind of focused on that. But I want you to talk with our audience about TetraScience — who you guys are, what you do. You were founded in 2014, you just raised $80 million in Series B, but give us an idea of who you are and what you do. >>Got it. TetraScience, what are we? We are digital plumbers, and that may seem funny, but really we are taking the world of data and we are trying to resolve it in such a way that people can actually pipe it from the data sources they have, in a vendor-agnostic way, to the data targets in which they need to consume that data. So, bringing that metaphor a little bit more to life sciences: let's say that you're a chemist and you have a mass spec and an NMR and some other piece of technology, and you need all of those to speak the same language, right? Generally speaking, all of these are going to be made by different vendors. They're all going to have different control software, and they're all going to have slightly different ways of sending their data in. TetraScience takes those all in. We bring them up to the cloud, our cloud-native solution. We harmonize them: we extract the data first and then we actually put it into what we call our special sauce, our intermediate data schema, to harmonize it. So you have sort of a picture and a diagram of what the prototypical mass spec or HPLC or cell-counting data should look like. And then we build pipelines to export that data over to where you need it. So if you need it to live in an ELN, or a LIMS system, or in a visualization tool like Spotfire or Tableau, we've got you covered. So again, we're trying to pipe things from left to right, from sources to targets, and we're trying to do it with scientific context. >>That was an outstanding description — data plumbers who have secret sauce. I never would have thought I would have heard that when I woke up this morning. But I'm going to unpack this more, because one of the things that I read in the press release that just went out a few weeks ago announcing the Series B funding, it said that TetraScience is pioneering a $300 billion greenfield data market and — this is what got my attention — operating without a direct cloud-native and open-platform competitor. Why is that? >>That's right. If you look at the way pharma data is handled today, it tends to be either on-prem solutions with a sort of license model, or a distribution into a company and therefore maintenance costs, professional services, etcetera. Or you're looking at somebody who is maybe cloud, but they're cloud second — you know, they started with their on-prem journey and they said, we should go to the cloud, we should migrate. However, we're cloud first, cloud native. So that's one first strong point.
And the second is that in terms of data harmonization, and in terms of looking at data in a vendor-agnostic way, um, many companies claim to do it. But the real hard test of this — the metal, let's say — is when you can look at this with the scientific contextualization we offer. So yes, you can collect the data and put it on a cloud — okay, great. Yes, you may be able to do an extract, transform and load and move it to somewhere else — okay. But can you actually do that from front to back while retaining all the context of the data, while keeping all of the metadata in the right place, with veracity, with GxP readiness, with data fidelity? And when it gets over to the other side, can somebody say, oh yeah, that's all the data from all the HPLCs we control — I got it, I see where it is, I see where to go get it, I see who created it, I see the full data trail and validation landscape, and I can rebuild that back and I can look back to the old raw source files if I need to. Um, I challenge someone to find another direct company that's doing that today. >>You talk about that context, and the thing that sort of surprises me is, with how incredibly important scientific discovery is and has been since the beginning of time, why has nobody come out in the last seven years and tried to facilitate this for life sciences organizations? >>Right. I would say that people have tried, and I would say that there are definitely strides being made in the open source community, in the data science community, and inside pharma and biotech themselves, on this sort of build motif, right? If you are inside of a company and you understand your own ontology and processes, well, you can probably design an application or a workflow using several different tools in order to get that data there. But will it be generally useful to the bioscience community? One thing we pride ourselves on is, when we productize a connector, as we call it, or an integration, we actually do it with many different companies' generic cases in mind. So we say, okay, you have an HPLC problem over at this top pharma, you have an HPLC problem with this biotech, and you have another one at a CRO. Okay, what are the common points between all of those? Can we actually distill that down to a workflow everyone's going to need — for example, a compliance workflow? So everybody needs compliance, right? So we can actually look into an Empower or a Unicorn operation and we can say, okay, did you sign off on that? Did it come through the right way? Was the data corrupted, etcetera? That's going to be generically useful to everybody, and that's just one example of something we can do right now for anybody in biopharma. >>Let's talk about the events of the last 14 months or so. You mentioned 10X revenue growth in 2020. COVID really highlighted the need to accelerate drug discovery, and we've seen that. But talk to me about some of the things that TetraScience has seen and done to facilitate that. >>Yeah, this past 14 months — I mean, um, I will say that the global pandemic has been a challenge for everyone involved, ourselves as well. We've basically gone to a fully remote workforce. Um, we have tried our very best to stay on top of it with remote collaboration tools, with GitHub, with everything. However, I'll say that it's actually been some of the most successful time in our company's history, because of that sort of lack of any kind of friction from the physical world, right?
We've really been able to dig down and dig deep on our integrations, our connections, our business strategy. And because of that, we've actually been able to deliver a lot of value to customers, because, let's be honest, we don't actually have to be on prem for what we're doing. Since we're not an on-prem solution and we're not an original equipment manufacturer, we don't have to say, okay, we're going to go plug the thing in to the HPLC. We don't have to be there to tune the specific wireless protocols or your AWS protocols — it can all be done remotely. So it's about building good relationships, building trust with our colleagues and clients, and making sure we're delivering and over-delivering every time. And then people say, great — when I select a Tetra solution, I know what's going right to the cloud, I know I can pick my hosting options, I know you're going to keep delivering more value to me every month. Um, thanks. >>I like that you make it sound simple, and actually you bring up a great point, though — one of the many things that was accelerated this last year-plus is the need to be remote, that need to be able to still communicate and collaborate, but also the need to establish and really foster those relationships that you have with existing customers and partners, as everybody was navigating very, very different challenges. I want to talk now about how you're helping customers unlock the problem that's in every industry — data silos and point-to-point integration — so that things can talk to each other. Talk to me about how you're helping customers: where do they start with Tetra? Where do you start that, um, kind of journey to unlock data value? >>Sure. The journey to unlock data value — great question. So first I'll say that customers tend to come to us; it's the oddest thing, and we're very lucky and very grateful for this, but they tend to have heard about what we've done with other companies, and they come to us and say, listen, we've heard about a deployment you've done with Novo Nordisk — I can say that, for example, because, you know, it's publicly known. Um, so they'll say, you know, we hear about what you've done, we understand that you have deep expertise in chromatography or in bioprocess. And they'll say, here's my really sticky problem — what can you do here? And invariably they're going to lay out a long list of instruments and software for us. Um, we've seen lists that go up past 2,000 instruments. Um, and they'll say, here's all the things we need connected, here's four or five different use cases. Um, we'll bring you start to finish, we'll give you 20 scientists in the room to talk through them, and then we get somewhere between two and four weeks to think about that problem and come back and say, here's how we might solve that. Invariably, all of these problems are going to have a data silo somewhere — there's going to be an org where the preclinical doesn't see the biology, or the biology doesn't see the screening, etcetera. So we say, all right, give us one scientist from each of those, hence establishing trust, establishing input from everybody. And collaboratively we'll work with you: we'll set up an architecture diagram, we'll set up a first version of a prototype connector, we'll set up all this stuff they need in order to get moving. We'll deliver value upfront before we've ever signed a contract, and we'll say, is this a good way to go for you?
And they'll say either, no, no thank you, or they'll say, yes, let's go forward — let's do a pilot, a proof of concept, or let's do a full production rollout. And invariably this data silos problem can usually be resolved by, again, these genericized connectors, our intermediate data schema, which talks to and moves things into a common format, right? And then also organizationally: since we're already connecting all these groups in this problem statement, they tend to continue working together even when we're no longer front and center, right? They say, oh, we set up that thing together — let's keep thinking about how to make our data more available to one another. >>Interesting. So culturally, within the organization, it sounds like Tetra is having significant influence there — you know, the collaboration, but also data ownership. Sometimes that becomes a sticky situation where there are owners and they want to retain that control, right? You're laughing — you've been through this before. I'd like to understand a little bit more, though, about the conversation, because typically we're talking about tech, but we're also talking about science. Are you having these technical conversations with scientists as well as IT? What does that actual team from the customer perspective look
We help to solve that and then go one step past and then they'll nudge somebody else in the Oregon. Say, do you see what Petra did over here? Maybe you could use it over here in your process. And so in that way we sort of get this cultural buy in and then increased collaboration inside a single company. >>Talk to me about some customers that you've worked with it. Especially love to know some of the ones that you've helped in the last year where things have been so incredibly dynamic in the market. But give us an insight into maybe some specific customers that work with you guys. >>Sure. I'd love to I'll speak to the ones that are already on our case studies. You can go anytime detector science dot com and read all of these. But we've worked with Prelude therapeutics for example. We looked at a high throughput screening cascade with them and we were able to take an instrument that was basically unloved in a corner at T. Can liquid handler, hook it up into their Ln. And their screening application and bring in and incorporate data from an external party and do all of that together and merge it so they could actually see out the other side a screening cascade and see their data in minutes as opposed to hours or days. We've also worked as you've seen the press release with novo Nordisk, we worked on automating much of their background for their chromatography fleet. Um and finally we've also worked with several smaller biotechs in looking at sort of in stan shih ation, they say well we've just started we don't have an L. N. We don't have a limbs were about to buy these 50 instruments. Um what can you do with us and we'll actually help them to scope what their initial data storage and harmonization strategy should even be. Um so so we're really man, we're at everywhere from the enterprise where its fleets of thousands of instruments and we're really giving data to a large amount of scientists worldwide, all the way down to the small biotech with 50 people who were helping add value there. >>So big range there in terms of the data conversation, I'm curious has have you seen it change in the last year plus with respect to elevating to the C suite level or the board saying we've got to be able to figure this out because as we saw, you know, the race for the Covid 19 vaccine for example. Time to value and and to discovery is so critical. Is that C suite or board involved in having conversations with you guys? >>It's funny because they are but they are a little later. Um we tend to be a scientist and user driven um solution. So at the beginning we get a power user, an engineer or a R and D I. T. Person in who really has a problem to solve. And as they are going through and developing with us, eventually they're going to need either approval for the time, the resources or the budget and then they'll go up to their VP or their CIA or someone else at the executive level and say, let's start having more of this conversation. Um, as a tandem effort, we are starting to become involved in some thought leadership exercises with some larger firms. And we are looking at the strategic aspect through conferences, through white papers etcetera to speak more directly to that C suite and to say, hey, you know, we could fit your industry for dato motif. And then one other thing you said, time to value. So I'll say that the Tetro science executive team actually looks at that as a tract metric. So we're actually looking at driving that down every single week. >>That's outstanding. 
That's a hard one to measure, especially in a market that is so dynamic. But that time to value for your customers is critical. Again, covid sort of surfaced a number of things and some silver linings. But that being able to get hands on the day to make sure that you can actually pull insights from it accelerate facilitate drug discovery. That time to value there is absolutely critical. >>Yeah. I'll say if you look at the companies that really, you know, went first and foremost, let's look at Moderna right? Not our customer by the way, but we'll look at Madonna quickly as an example as an example are um, everything they do is automated, right? Everything they do is cloud first. Everything they do is global collaboration networks, you know, with harmonized data etcetera. That is the model we believe Everyone's going to go to in the next 3-5 years. If you look at the fact that Madonna went from sequence to initial vaccine in what, 50, 60 days, that kind of delivery is what the market will become accustomed to. And so we're going to see many more farmers and biotechs move to that cloud first. Distributed model. All data has to go in somewhere centrally. Everyone has to be able to benefit from it. And we are happy to help them get >>Well that's that, you know, setting setting a new record for pace is key there, but it's also one of those silver linings that has come out of this to show that not only was that critical to do, but it can be done. We have the technology, we have the brain power to be able to put those all user would harmonize those together to drive this. So give me a last question. Give me an insight into some of the things that are ahead for Tetra science the rest of this year. >>Oh gosh, so many things. One of the nice parts about having funding in the bank and having a dedicated team is the ability to do more. So first of course our our enterprise pharma and BioPharma clients, there are plenty more use cases, workflows, instruments. We've just about scratch the surface but we're going to keep growing and growing our our integrations and connectors. First of all right we want to be like a netflix for connectors. You know we just want you to come and say look do they have the connector? No well don't worry. They're going to have it in a month or two. Um so that we can be basically the almost the swiss army knife for every single connector you can imagine. Then we're going to be developing a lot more data apps so things that you can use to derive value from your data out. And then again, we're going to be looking at helping to educate everybody. So how is cloud useful? Why go to the system with harmonization? How does this influence your compliance? How can you do bi directional communication? There's lots of ways you can use. Once you have harmonized centralized data, you can do things with it to influence your order and drive times down again from days and weeks, two minutes and seconds. So let's get there. And I think we're going to try doing that over the next year. >>That's awesome. Never a dull moment. And I, you should partner with your marketing folks because we talked about, you talked about data plumbing the secret sauce and becoming the netflix of connectors. These are three gems that you dropped on this this morning mike. This has been awesome. Thank you for sharing with us what teacher science is doing, how you're really helping to fast track a lot of the incredibly important research that we're all really um dependent on and helping to heal the world through data. 
It's been a pleasure talking with you. >>Haley says I'm a real quickly. It's a team effort. The entire Tetro science team deserves credit for this. I'm just lucky enough to be able to speak to you. So thank you very much for the opportunity. >>And she about cheers to the whole touch of science team. Keep up the great work guys. Uh for mike Roselli, I'm lisa martin. You're watching this cube conversation. >>Mhm.

Published Date : May 13 2021


Siamak Sadeghianfar, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Narrator: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2021 virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hey, welcome back to theCUBE's coverage of KubeCon + CloudNativeCon Europe 2021. Part of the CNCF and ongoing — theCUBE's been in there from the beginning, love this community, theCUBE's proud to support and continue to cover it. We're virtual this year again because of the pandemic, but it looks like we'll be right around the corner for a physical event, hopefully for the next one, fingers crossed. Got a great guest here from Red Hat, Siamak Sadeghianfar, a Senior Principal Product Manager. Welcome to theCUBE. Thanks for coming on. >> Thank you for having me. >> So, this topic's about GitOps, Pipelines, code. Obviously Infrastructure as Code has been the ethos since I can remember, going back to 2008 and the original Clouderati vision, and we were always talking about that. Now it's mainstream. Now it's DevSecOps. So it's now day-two operations, shifting left with security. OpenShift is continuing to take ground — congratulations on that. So my first question is: you guys announced the general availability of OpenShift Pipelines and GitOps at KubeCon. What's this about, and what are the benefits for the customer? Let's get into the news. >> Thanks — and to begin with, thanks for the congrats. This is definitely a hot topic around DevSecOps, and the different variations of that you hear about, some versions of it in FinTech and other verticals as well. The idea here really is that CI/CD has been around for a long time — continuous integration and continuous delivery, as one of the core practices of the DevOps movement. The DevOps movement is quite widespread now; you see reports of above 90% of organizations being in the process of adoption in their journey. And this is one of the main practices, but something that has become quite apparent is that for many of these organizations that are investing more and more in Cloud Native apps and adopting Cloud Native ways of building applications, the tooling and technology that they use for CI/CD — since CI/CD is nothing new — is from the 10-year-old, five-year-old, pre-Kubernetes era, which is not quite Cloud Native. So there is always a clash of, how do I build Cloud Native applications using these technologies that are not really built for the Cloud Native space? And OpenShift Pipelines and OpenShift GitOps are really an opening in this direction, to bring more Cloud Native ways of continuous integration and continuous delivery to customers on OpenShift. >> Got it. So I got to ask you — a couple of questions on this topic I really want to dig into. Can you describe the Cloud Native CI/CD process versus traditional CI/CD? >> Sure. So traditionally, when we think about CI/CD, there are usually these monolithic solutions that are running on a virtual machine, on a type of infrastructure that they use to deploy applications as well, 'cause you need reliability, and you have to be making an assumption about the infrastructure that you're running on. And when you come to Cloud Native infrastructure, you have a much more dynamic infrastructure, with a lot fewer assumptions. You might be running on a public cloud or on-premise infrastructure or different types of public cloud. These environments are often also containerized, so there's a high chance you're running on a container platform, regardless of whether it's public or on premises.
And with containers as a whole, you have different types of disciplines and principles to think in about your infrastructure. So in the Cloud Native way of CI/CD, you're running most likely in a container platform. You don't have dedicated infrastructure; you are running mostly on demand. You scale when there is demand for running CI/CD, for example, rather than dedicating infrastructure to it. And also, from the mode of operation, from an organization perspective, they are more adapted to these decentralized ways of ownership. As part of the DevOps culture, this comes really with that movement — more and more development teams are getting ownership of some portion of the delivery of their applications. And Cloud Native CI/CD solutions focus on supporting these models, where you go away from that central model of control to decentralize and have more ownership, more capabilities within the development teams for delivering applications. >>Okay, so I then have to ask you the next question. It's like you're like a resource, you'd say: Hey Siri, what is GitOps? What is GitOps? 'Cause that's the topic that's been getting a lot of traction, everyone's talking about it. I mean, we know DevOps. So what is the GitOps model? Can you define that? And is that what comes after DevOps? Is it DevOps 2.0? What is the GitOps model? >>That's a very good question. GitOps is nothing really new; it's rather a more descriptive way of DevOps principles. DevOps talks about the cultural changes and mindset and ways of working, and when it comes to the concrete workflow, it is quite open for interpretation. So GitOps is one specific interpretation of how you do continuous integration and continuous delivery — how we implement DevOps. And the concept has been around for a couple of years, but just recently it's gotten a lot of traction within the Cloud Native space. >>So how does GitOps fit into Kubernetes then? 'Cause that's going to be the next dot that we want to connect. How does GitOps fit into Kubernetes? >>So the core principle of GitOps is that you think about everything in your infrastructure and application in a declarative manner. Everything needs to be declared in a number of Git repositories, and you drive your operations through Git workflows — which, if you think about it, is quite similar to how Kubernetes operates. The reason Kubernetes became so popular is because of this declarative way of thinking about your infrastructure: you declare what you expect, and Kubernetes actualizes that on some sort of infrastructure. So GitOps is the exact same concept, but applied not to the infrastructure itself, but to the operations of that infrastructure, the operations of those applications. It becomes a really nice fit together — it's the same mindset, really, applied in a different place. >>It's like Kubernetes is the linchpin or the enabler for GitOps — just a whole nother level. I mean, I think GitOps is essentially DevOps 2.0, in my opinion, because it takes it to a whole nother level for the modern developer, because it allows them to do more. So it's been around for a while — we've been talking about this, it's got a new name, but GitOps as a concept has been around. Why is the increased adoption happening now, in your opinion? Do you have any data, or any facts or opinion, on why there's such an increase in conversation and adoption? >>You had a very accurate point there, that Kubernetes has been a great enabler for DevOps, and later the same applies to GitOps as well, because of that great fit. GitOps the concept has been there, but implementation of it was quite difficult before Kubernetes, and also for non-containerized environments. Kubernetes is a very potent platform for this kind of operation, because the mindset and the ways of working are really native to how Kubernetes thinks. But there is also another driver that has been influential in the rise of GitOps in the last year or two, and this is an observation we see at a lot of our customers: the number of Kubernetes clusters that organizations are deploying is increasing. As their maturity increases, they get more comfortable with the Cloud Native way of working and transform their workflows to become Cloud Native; they move more and more of their infrastructure to Kubernetes clusters. So a new challenge arises with this. Now that I have a larger number of clusters, how do I ensure consistency across all these clusters? Before, I had to deploy an application to a production environment, perhaps, which meant two clusters across two geographical zones. Now I have to deploy to 20 clusters. And these 20 clusters also change over time, so this week it's a different 20 clusters than three weeks from now. So these dynamic ways of working, and the customers maturing in dealing with Kubernetes, operating Kubernetes, have really increased the pace of adoption of GitOps, because it addresses a lot of those challenges that customers are dealing with in this space. >>Yeah, you bring up a really good challenge there, and I think that's worth calling out — this idea of expansion. And I won't say sprawl, because it's not a sprawl of clusters; it's more about provisioning and standing up clusters. And you said they're changing, because the environment has needs and the workloads might have requirements. This makes total sense in a DevOps, kind of GitOps way. So I get that, and I see that definitely happening. So this brings up the question: if I'm a customer, what I'm worried about is, I don't want to have that Hadoop factor, where I build a cluster and it takes too long to manage it, or I can't measure it, or understand the data, or have any observability. So I want to have ease of provisioning and standing up, and I want to have consistency, so that my apps that are using it don't have to be, you know, mangled with or re-coded. So, you know, this combination of ease of deploying, ease of integrating, ease of consuming the clusters becomes a service model. Can you share your thoughts on how that gets solved? >>Yeah, absolutely. So that's a great point, because as this is happening, there is also heterogeneity in this type of Kubernetes infrastructure world. Like, they're all Kubernetes, but this problem also has multiple facets, as customers are running on multiple public clouds, and a combination of that with their on-premise Kubernetes clusters. And they may as well be running OpenShift across all this infrastructure. But the problem GitOps helps customers address is that they can have the exact same operational model across all these apps and infrastructure, regardless of what kind of application it is.
And regardless of where OpenShift is installed, or if you're using it combined with a public cloud managed Kubernetes service, it is the exact same process, because you're relying on the Git workflows, right? And even beyond that, this standard workflow has the benefit of being something many organizations are already familiar with. So if you think about what GitOps operations mean, it is essentially what developers have always been using for developing applications. So this standardizes the operations of both applications and infrastructure as well. >>Let me ask you, as the product manager on the whole pipelining and Kubernetes deployments — in your opinion, share your perspective, real quick, on Kubernetes: where are we at? Because the accelerated adoption has been phenomenal. We've seen it mature this year at KubeCon, and certainly when KubeCon North America happens, you're going to see more and more end-user participation, you're going to see many more end-user use cases. You mentioned clusters are growing. What's the state of Kubernetes from your perspective, from a developer mindset? >>So Kubernetes, I think, has moved from a place where it was seen as only a type of infrastructure for Cloud Native applications, because of the capability that it provides, to a type of infrastructure for any type of application, any type of workload. I think what we have seen over the last two years is a shift to an expansion of the use cases. You talked about Hadoop — if you are a data scientist, or if you are an AI/ML type of developer, or any type of workload really, we see use cases that are coming to the Kubernetes platform as the target type of infrastructure. So that's really where we see Kubernetes at right now: it's really the preferred infrastructure for any type of workload. And I believe this trend is going to keep continuing, to address any challenge that exists that prevents maybe a particular type of workload from running within the platform, and to open that up to developers. Which means, for the developers now, once you learn the platform you are really proficient — you have the skills for any type of application or any type of infrastructure, because they're all standardized. Regardless of what type of application or workload or technology you're specialized in, they're all going to the exact same platform. So it's a very standardized type of skill set across organizations, across the different types of teams that they have. >>Awesome, great, thanks for sharing that insight and definition. You're like a walking dictionary today for our CUBE audience. Thank you for all this good stuff, appreciate it. Final question for you is: what does it mean for developers that are using Jenkins or other cloud-based CI solutions like GitHub Actions? What's the impact to them, with all this, from a working standpoint? 'Cause obviously you've got to make it workable. >>Right. So with CI/CD also, it's great to see that with DevOps adoption there are many organizations that already have processes in place. They're already using a CI tool or a CD tool. They might be using Jenkins — a lot of organizations really use Jenkins, even though it comes with challenges — and you might be using public cloud services or cloud-based CI tools, like GitHub Actions, pipelines and so on. So we are very well aware of the existing investment that many organizational teams have made.
And we make sure that OpenShift as a platform works really well alongside all these different types of CI and CD technology that exist. We want to make sure that for developers starting on OpenShift, they have a really solid Cloud Native foundation for CI/CD. They have strategies included, but replaceable types of strategies, so they have a supportive platform that is Cloud Native, that gives them capability that matches the type of Cloud Native workloads they have on the platform, but that also integrates well with the existing tooling around CI/CD. So they can mix and match, and choose if they want to replace a piece of that with an existing investment they have made, integrated with the rest of the platform. >>Awesome. Well, great to have you on — having the Senior Principal Product Manager on is awesome, to talk about the two new announcements here: OpenShift Pipelines and OpenShift GitOps. Final question — bumper-sticker this for the audience. What's the bottom line with OpenShift Pipelines and GitOps? What's the bottom-line benefit for customers? >>So, OpenShift Pipelines and OpenShift GitOps make it really simple for customers to create Cloud Native pipelines and a GitOps model for delivering applications, and also for making cluster changes across the large range of clusters that they have — making it really simple to grow from that point to many, many clusters and still manage the complexity of this infrastructure that they will be growing into. >>All right. Siamak Sadeghianfar, Senior Principal Product Manager at Red Hat, here for KubeCon + CloudNativeCon Europe. CUBE Conversation — thanks for coming on, appreciate it. >>Thanks John, thanks for having me. >>Okay, CUBE coverage continues. I'm John Furrier with theCUBE. Thanks for watching. (upbeat music)
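To make the declarative, Git-driven model Siamak describes concrete, here is a toy reconciliation loop in Python. Everything in it — the repo URL, the directory layout, the stubbed apply step — is invented for illustration; real GitOps controllers such as Argo CD or Flux implement this far more robustly.

```python
# Toy GitOps reconciliation loop: desired state is read from a Git checkout,
# compared with what is running, and re-applied. All names are invented.
import subprocess
import time

REPO_DIR = "/tmp/desired-state"  # assumed local clone of the config repo
REPO_URL = "https://example.com/org/cluster-config.git"  # hypothetical


def sync_repo() -> None:
    """Fetch the latest declared (desired) state from Git."""
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)


def desired_manifests() -> set[str]:
    out = subprocess.run(
        ["git", "-C", REPO_DIR, "ls-files", "*.yaml"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())


def apply(manifest: str) -> None:
    # In a real cluster this step would be `kubectl apply -f <manifest>`;
    # here we only print, to keep the sketch self-contained.
    print(f"apply {manifest}")


def reconcile() -> None:
    sync_repo()
    for manifest in sorted(desired_manifests()):
        apply(manifest)  # idempotent: re-applying converges live toward desired


if __name__ == "__main__":
    while True:
        reconcile()
        time.sleep(60)  # operations are driven by commits, not by hand
```

The key property is that the loop is idempotent and driven entirely by what is committed to Git, so rolling the same desired state out to 2 or 20 clusters is just running the same loop against each of them.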

Published Date : May 6 2021


Rachel Stephens, RedMonk | theCUBE on Cloud 2021


 

>>From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. Hi, I'm Stu Miniman, and welcome back to theCUBE on Cloud. We're talking about developers. And while so many people remember the meme from 2010 of Steve Ballmer jumping around on stage — developers, developers, developers — uh, many people know what's really important about developers. They probably read the 2013 book called The New Kingmakers by Stephen O'Grady. And I'm really happy to welcome to the program Rachel Stephens, who is an industry analyst with RedMonk, which was co-founded by the aforementioned Stephen O'Grady. Rachel, great to see you. Thank you so much for joining us. >>Thank you so much for having me. I'm excited to be here. >>Well, I've had the opportunity to read some of what you've done. We've interacted on social media. We've gotten to talk at events back when we used to do those in person. And >>I'm so >>glad that you get to come on the program, especially — you were one of the ones I reached out to when we had this developer track. Um, if you could just give our audience a little bit about your background, you know, that developer cred that you have. Because, as I joke, I've got a closet full of hoodies, but, you know, I'm an infrastructure guy by training. I've been learning about, you know, containers and serverless and all this stuff for years, but I'm not myself much of a developer; I've touched a thing or two over the years. >>Yeah. So happy to be here. RedMonk has been around since 2002 and has kind of been beating that developer drum ever since then. Kind of as the company was founded, Steve and James noticed that the decision making of developers was really a driver for what was actually ending up in the enterprise. And that's even more true as cloud came onto the scene and open source exploded, and I think it's become a lot more of a common view now, but in those early days it was probably a little bit more of a controversial opinion. I have been with the firm for coming up on five years now. My work is as an industry analyst; we kind of help people understand bottoms-up technology adoption trends, so that's where I spend my time focusing: what's getting used in the enterprise, why, what kind of trends are happening. So, yeah, that's where we all come from. That's the history of RedMonk in 30 seconds. >>Awesome. Rachel, you talk about the enterprise and developers. For the longest time, I just said there was this huge gap — you talk about bottoms up. It's like, well, developers use the tools that they want; if they don't have to, they don't pay for anything. And the general IT and the business sides of the house were like, I don't know, we don't know what those people in the corner are doing, you know, it's important, and things like that. But today it feels like that's closed a bunch. Where are we, in your estimation? You know, our developers — do they have a clear seat at the table? The title we have for this is whether the enterprise developer is — is "enterprise developer" an oxymoron in 2020 and 2021? >>I think enterprise developers have a lot more practical authority than people give them credit for, especially if you're kind of looking at that old view of the world where everything is driven by a buyer decision or kind of this top-down purchasing motion. And we've really seen that authority of what is getting used, and why, change a lot in the last year.
In the last decade, even more of people who are able to choose the tools that meet the job bring in tools, regardless of whether they maybe have that official approval through the right channels because of the convenience of trying to get things up and running. We are asking developers to do so much right now and to go faster and thio shifting things left. And so the things that they are responsible for incorporating into the way they are building APS is growing. And so, as we are asking developers to do more and to do more quickly, um, the tools that they need to do those, um, tasks to get these APS built is that the decision making us fall into them? This is what I need. This is what needs to come in, and so we're seeing. Basically, the tools that enterprise is air using are the tools that developers want to be using, and they kind of just find their way into the enterprise. >>Now I want to key off what you were talking about. Just developers were being asked to do Mawr and Mawr. We've seen these pendulum swings in technology. There was a time where it was like, Well, I'll outsource it because that'll be easier and maybe it'll be less expensive. And number one we found it necessarily. It wasn't necessarily cheaper. And number two, I couldn't make changes, and I didn't understand what was happening. So when when I talked to Enterprises today, absolutely. I need to have skills that's internally. I need to be able to respond to things fast, and therefore I need skills that I need people that can build what they have. What what do you see? What are those skill sets that are so important today? Uh, you know, we've talked so many times over the years is to you know, there's there's the skills gap. We don't have enough data scientists. We don't have enough developers way. We don't have any of these things. So what do we have and where things trending? >>Yeah, it's It's one of those things for developers where they both have probably the most full tool set that we've seen in this industry in terms of things that are available to them. But it's also really hard because it also indicates that there is just this fragmentation at every level of the stack. And there's this explosion of choice and decisions that is happening up and down the stack of how are we going to build things? And so it's really tricky to be a developer these days and that you are making a lot of decisions and you are wiring a lot of things together and you have to be able to navigate a lot of things. E think. One of the things that is interesting here is that we have seen the phrase like Full stack developer really carried a lot of panache, maybe earlier this decade and has kind of fallen away. Just because we've realized that it's impossible for anybody to be ableto spanned this whole broad spectrum of all of the things we're asking people to dio. So we're seeing this explosion of choice, which is meaning that there is a little bit more focused and where developers are trying to actually figure out what is my niche. What is it that I'm supposed to focus on. And so it's really just this balancing of act of trying to see this big picture of how to get this all put together and also have this focused area realizing that you have to specialize at some point. >>Rachel is such a great point there. We've actually seen that Cambrian explosion of developer tools that are out there. 
If you go to the CNCF landscape and look at everything out there, or go to any of your public cloud providers, there's no way that anybody — even working for those companies — knows a good portion of the tools that are out there, so nobody can be a master of everything. How about from a cloud standpoint? There's the discussion of what do I shift left, what can I just say, okay, this piece of it can be a managed service, I don't need to think about it, versus what skills do I need to have in house? What is it that's important? And obviously, as analysts, we know it varies greatly across companies, but what are some of those top things that enterprises need to make sure they have the skill sets and tools for in house, and what can they push off to their platform of choice? >>Yeah, I think your comment about managed services is really pressing, because one of the trends that we're watching closely is just this rise of managed services. And it ties back into the concept you had before about what an IT team is — like the Nicholas Carr "IT doesn't matter," and we're pushing this all away, and then we realized, oh, we've got to bring that all back. But we also realized that, as enterprises, we really want to be spending our time doing differentiated work, and wiring together your entire infrastructure isn't necessarily differentiated for a lot of companies. So it's trying to find this mix of where can I push my abstraction higher, or find a managed service that can do something for me, and we're seeing that happen at all levels of the stack. And so what we're seeing is this rise of composite apps, where we say, okay, I'm going to pull in back-end APIs from a whole bunch of tools like Twilio or Stripe or Auth0 or Algolia — all of those are great tools that I can incorporate into my app — and I can have this great user interface on top, and then I don't have to worry quite so much about building it all myself. But I am responsible for wiring it all together. So I think it's that wiring-together set of interests that is happening for developers, as that's the tool set they are spending a lot of time with. So we see managed services playing an important role in how apps get composed, and it's the composition of those apps that is happening internally. >>One of the regular research items that I see from RedMonk is languages — what languages, where are the trends going? There's been relative stability, but then some things change. I look at the tools that you mentioned — full stack developer: I talked to a full stack developer a couple of years ago, and he's like, "Terraform is my life, I love everything about it, I've used it forever." And that was 18 months. And I kind of laughed, because I measure a lot of the technology that I've used in decades — not, oh wait, this came out six months ago and it's kind of mature. And of course, CI/CD — come on, if it's six weeks old it's probably gone through a lot of generations. So what do you see? Do you have any research that you can share as to looking forward: what are the skill sets we need, how should we be training our workforce, what do >>we need to >>be looking at in this kind of next decade of cloud? >>Yeah.
So, when you spoke about languages: we do a semi-annual review of language usage as seen on GitHub and in discussion as seen on Stack Overflow, which we fully recognize is not a perfect representation of how these languages are used in the broader world, but those are data sets that we have access to that are relatively large and open — so, just before anyone writes me angry letters that that's not the way we should be doing it. One of the things that we've seen over time is that there is a lot of relative stability in those top-tier languages in terms of how they are used, and there's some movement at the bottom. The trends we're seeing where languages are moving are type safety — having a safer language — and communities that are building upon other communities. So, things like Kotlin, which is able to piggyback off being a JVM-based language and having that support from Google, or TypeScript, which can piggyback off the breadth of deployment of JavaScript. Those are cases where multiple trends that developers are interested in combine with an ecosystem that's already rich and full. And so we're seeing that there's definitely still movement in languages that people are interested in, but also, language on its own is probably pretty stable. As you start to make language choices as a developer, that's not where we're seeing a ton of turnover. Language frameworks, on the other hand — if you're a JavaScript developer and all of a sudden there's this explosion of frameworks that you need to choose from — that may be a different story: a lot more turnover there, and harder to predict. But language trends are a little bit more stable over time. >>Changing over time — you know, boy, I've got to dig into this. Relatively recently I went down the Jamstack ecosystem rabbit hole, and I've been digging into serverless for a number of years. What's your take on that? There are certain people I talk to who are like, I don't even need to be a coder — I can be a marketing person and get things done. And when I talk to some developers, they're like, citizen developers are not developers, come on, you really need to be able to do this. So I'll give you your choice: serverless and some of these trends that kind of expand who can code and develop. >>Yeah. So for both Jamstack and serverless, one of the things that we see early in the iteration of a technology is that it is definitely not going to be the right tool for every app, and the number of apps that the approach will fit will grow as the tooling develops, as you add more functionality over time and all of these platforms expand their capabilities — but it's definitely not the correct tool choice in every case. That said, we do watch both of those areas with extreme interest in terms of what this next generation of apps can look like, and probably will look like in a lot of cases. And I think it is super interesting to think about who gets to build these apps, because one of the things where we probably haven't landed on the right language yet is what we should call these people, because I don't think anyone identifies as a "low-code person." Like, if you're someone from marketing and all of a sudden you can build something technical, that's really cool, and you're excited about that.
Nobody else on your team could build it. You're not walking around saying "I am a low-code marketing person" — that's demeaning. You're like, no, I'm technical, I'm a technical marketer — look what I just did. And if you're someone who codes professionally for a living and you use a low-code tool to get something out the door quickly, >>you don't >>want that demeaned either — oh, that was just low code. Everybody is just trying to solve problems, and everybody is trying to figure out how to do things in the most effective way possible, making trade-offs all the time. And so I don't think that the language of low code really resonates with any of the actual users of low-code tools. And I think that's something that we as an industry need to work on — finding the correct language — because it doesn't feel like we've landed there yet. >>Yeah. Rachel, I want to get your take on careers for developers now, thinking about 2020: everyone is distributed, and there are lots of conversations about where we work. Can we do it remote? Many of the developers I talk to already were remote — I had the chance to interview the head of remote at GitLab; they're over 1,000 people and fully remote. So remote is absolutely a thing for developers. But if you talk about careers, it is no longer, oh hey, here's my CV — it's, I'm on GitHub, you can see the code I've done. We haven't talked about open source yet, so give us your take on developers today, career paths, and the online community there. >>Yeah, this could be a whole conversation of its own; I'll try to pick out my points. I think one of the things that we are trying to figure out in terms of balance is how much we expect people to have done on the side — the side-project hustle — versus exclusively getting your job done and not worrying too much about how many green squares you have on your GitHub profile. And I think it's a really emotional and fraught discussion in a lot of quarters, because it can be exclusionary to say you need to be spending your time on the side working on open source projects — there are people who have very different life circumstances. If you're someone who already has kids, or you're doing elder care, or you are working another job and trying to transition into becoming a developer, it's a lot to ask these people to also have a side hustle. That said, working on open source, having an understanding of how tools are built, having experience and skills that you can point to and contributions you can point to, is probably one of the cleaner ways that you can start to move in the industry and break through into the industry — because you can show your skills to other employers, and you can maybe make your way in as a junior developer because you worked on a project and made those connections. So it's really, again, one of those balancing-act things where there's not a perfect answer, because there really are two correct sides of this argument, and both of those things are true at the same time. It's hard to figure out what that early career path looks like, or even advancing in a career path if you're already a developer — it's tricky.
>>Well, I want to get your take on something else. You know, I go back a decade or two — I started working with Linux about 20 years ago, back in the crazy days when it was just kernel.org and patches everywhere, and lots of different companies trying to figure out what they would be doing. And most of the people contributing to the free software — before we were calling it open source — most of the time it was their side hustle, the thing they did as their passion project. I've seen some research in the last year or so that says the majority of people contributing to open source are doing it for their day job. Obviously there are a lot of big companies and plenty of small companies; when I go to the Linux Foundation shows, you've got whole companies where that's their entire business. So I want to get your take on governance, and on contribution from individuals versus companies — there's a lot of change going on there, plus the public cloud and its impact on what's happening in open source. What are you seeing there? What's good, what's bad, and what do we need to do better as a community? >>Yeah, I think the governance of open source projects is definitely a live conversation that we're having right now: what does this need to look like, what role do companies need to have, and when things are put together, is a contribution or a leadership position in the name of the individual or the name of the company? All of these are live, ongoing conversations in a lot of communities. I think one of the things that is interesting overall, though, is just watching, if you take a really zoomed-out view of what open source looks like: it was at one point deemed a cancer by one of the vendors in the space, and now it is absolutely an inherent part of how most tech vendors — and users — are building and using software today. Open source is really an integral tool in what is happening and being built in the enterprise. And so I think it is natural that this conversation is evolving in terms of what the enterprise's role is here and how we are supposed to govern for that, and I don't think we have landed on all the correct answers yet. But just looking at that long view, it makes sense that this is an area where we are spending some time focusing. >>So, Rachel, without giving away state secrets — we know at RedMonk you do lots of consulting out there — what advice do you give to the industry? We said we're making progress; there are good things there. But if we want to look back from 2030 and say, boy, this has been wonderful for developers, everything is going well — what things will we have done along the way, where will we have made progress? >>Yeah, I think it ties back to the earlier discussion we were having around composite apps and thinking about what that developer experience looks like. I think that right now it is incredibly difficult for developers to be wiring everything together, and there's just so much for developers to do to actually get all of these apps from source to production.
So when we talk with our customers, a lot of our time is spent thinking: how can you not only solve this individual piece of the puzzle, but also figure out how it fits into this broader picture of what the developers are trying to accomplish? How can you think about where your app fits — not just your tool or your project, whatever it is you are working on — not only in terms of your one unique problem space, but where does this problem space fit in the broader landscape? Because I think that's going to be a really key element of what the developer experience looks like in the next decade: helping people actually get everything wired together in a coherent way. >>Rachel, no shortage of work to do there. Really appreciate you joining us — thrilled to finally have you as a CUBE alumni. Thanks so much for joining. >>Thank you for having me. I appreciate it. >>All right, thank you for joining us. This is the developer content for CUBE on Cloud. I'm Stu Miniman, and as always, thank you for watching theCUBE.
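As an illustrative aside on the composite-app pattern Rachel describes — pulling in managed back-end services and spending your time wiring them together rather than building every capability in-house — here is a minimal, hypothetical Python sketch. The service choices (Stripe for payments, Twilio for notifications) simply mirror the examples mentioned in the conversation; the API keys, phone numbers, and amounts are placeholders, not anything from the interview.

```python
# Hedged sketch of a "composite app": the differentiated logic is thin, and most of
# the heavy lifting is delegated to managed back-end services.
import os

import stripe                    # pip install stripe
from twilio.rest import Client   # pip install twilio

stripe.api_key = os.environ["STRIPE_API_KEY"]
twilio_client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])


def charge_and_notify(amount_cents: int, customer_phone: str) -> str:
    """Create a payment with Stripe, then confirm it to the customer via Twilio SMS."""
    intent = stripe.PaymentIntent.create(
        amount=amount_cents,
        currency="usd",
        payment_method_types=["card"],
    )
    twilio_client.messages.create(
        to=customer_phone,
        from_=os.environ["TWILIO_FROM_NUMBER"],
        body=f"Thanks! Your payment {intent.id} for ${amount_cents / 100:.2f} was created.",
    )
    return intent.id


if __name__ == "__main__":
    charge_and_notify(2000, "+15550000000")  # placeholder values
```

The point of the sketch is the shape of the work: the developer mostly decides which services to pull in and wires them together, which is exactly the composition described above.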

Published Date : Jan 22 2021



Insights for All


 

>>Yeah. >>Welcome back for our last session of the day: how to deliver career-making business outcomes with Search and AI. We're very lucky to be hearing from Canadian Tire, one of Canada's largest and most successful retailers, which has empowered 4,500 employees to maximize the value of data with self-service insights. So joining us today we have Yarrow Baturin, who is the manager of merch analytics and planning support at Canadian Tire, and also Andrea Frisk, who is an engagement manager at ThoughtSpot. So over to you both — thanks so much for being here, and with that, I'll pass the mic to you guys. >>Thank you for having us. I think I'll start with an introduction of who I am, what I do at Canadian Tire, and what Canadian Tire is all about. As manager of merch analytics at Canadian Tire, I support the merchant organization with reporting tools and the BI platform to enable decision making on a day-to-day basis. What is Canadian Tire? Canadian Tire is one of the largest retailers in Canada, serving Canadians across a number of lines of business spanning automotive, fixing, living, playing, and S&G departments. We have a number of banners, including SportChek, Mark's, Party City, and PHL, covering more than 1,700 locations. So as an organization we have a vast variety of data, whether it's product or loyalty. Now, as time goes on, the number of asks, the number of data points, and the complexity of the analysis have been increasing, and traditional analytical tools such as Excel and Microsoft Access do a fine job but start hitting their limitations. So we started on the journey of exploring which other BI platforms would be suitable for our needs. The criteria we thought about as we started on that journey were to enable customization as well as democratization of data. What does that mean? It means we wanted to ensure that each one of the end users has the ability to create their own versions of a report while having consistency from the data standpoint. We also wanted to ensure that they're able to create their own ad hoc search queries and draw insights based on the desired business needs, as each one of our lines of business, each one of our departments, is quite unique in its nature. And this is where ThoughtSpot comes into play — it checked off all the boxes. As current customers, or as potential customers, you will discover that this is the tool that allows that ad hoc search ability within a matter of seconds, and the ability to visualize the information and create curated pinboards for each one of the business units, depending on what the needs are. Andrea will talk a little bit more about how we gained adoption, what the usage was like, and how we implemented the tool successfully in the organization. >>Okay. So, I actually used to work for Canadian Tire, and during that time I helped to build training and engage users to really kick-start our use cases and the ongoing process of adopting ThoughtSpot throughout Canadian Tire. One of the reasons that we moved to ThoughtSpot was that there was a need to evolve in order to see the wealth of data that we had coming in. So the existing reporting — and this is the standard ThoughtSpot fix — it brings the data to everyone and makes it more accessible, so you get more out of your data. We wanted to provide users with the ability to customize what they could see and personalize the information so that they could get their specific business requirements out of the data, rather than relying on the weekly, monthly, or quarterly reporting, which was usually fairly generic, without the ability to deep dive in. This gave the users the agility to optimize their campaigns, optimize product merchandising — seeing where products are, or where there are maybe supply chain gaps — and just really open this up so trillions of rows become accessible to the Canadian Tire user base. I think that's the slide.
So we want to provide users with the ability to customize what they could see and personalized three information so that they could get their specific business requirements out of the data rather than relying on the weekly monthly quarterly reporting. That was all usually fairly generic eso without the ability to deep dive in. So this gave the users the agility thio optimize their campaigns, optimize product murder, urgency where products are or where there's maybe supply chain gaps. Andi just really bring this out for trillions of rose to become accessible. Thio the Canadian tire. That's what user base think. That's the slide. >>That's the slight, Um So as Andrea talked about the business use of the particular tool, let's talk a little bit about how we set it up and a wonderful journey of how it's evolved. So we first implemented 5.3 version of that spot on the Falcon server on we've been adding horsepower to it over time. Now mhm. What I want to stress is the importance off the very first, Data said. That goes into the tool toe. Actually engage the users and to gain the adoption and to make sure there is no argument whether the tool is accurate or not. So what we've started with is a key p I marked layer with all the major metrics that we have and all the available permutations and combinations off the dimensions, whether it's a calendar dimension, proud of dimension or, let's say, customer attribute now, as we started with that data set, we wanted to make sure that we're we have the ability to add and the dimensions right. So now, as we're implementing the tool, we're starting to add in more dimension tables to satisfy the needs off our clients if you want to call it that way as they want to evolve their analytics. So we started adding in some of the store attributes we started adding in some of the product attributes on when I refer to a product attributes, let's say, uh, it involves costs and involves prices involved in some of the strategic internal pieces that we're thinking about now as the comprehensive mark contains right now, in our instance, close to five billion records. This is where it becomes the one source of truth for people declaring information against right so as they go in, we also wanted to make sure when they Corey thought spot there, we're really Onley. According one source of data. One source of truth. It became apparent over time, obviously, that more metrics are needed. They might not be all set up in that particular mark. And that's when we went on the journey off implementing some of the new worksheets or some of the new data sets particularly focused on the four looking pieces. And uh, that's where it becomes important to say This is how you gain the interest and keep the interests of the public right. So you're not just implementing a number off data sets all at once and then letting the users be you're implementing pieces and stages. You're keeping the interest thio, the tool relevant. You're keeping, um, the needs of the public in mind. Now, as you can imagine on the Falcon server piece, um, adding in the horsepower capacity might become challenging the mawr. Billions of Rosie erratic eso were actually in the middle of transitioning our environment to azure in snowflake so that we can connect it. Thio embrace capability of thoughts cloud. 
And that's why I'm looking forward to 2021: I truly believe this will enable us to increase the speed of adoption, increase the speed of getting insights out of the tool, and scale with regard to the new data sets that we're thinking about implementing as we continue our ThoughtSpot journey. >>Okay, so, how we drove adoption to 4,500-plus users. When we first started to approach our use case with the merchants within Canadian Tire, we had meetings with the users our use case was going to serve and found out: what are they searching for, where are they typically looking, and what existing reports are available to them? We sought out the things where you're pulling this on your own, or someone else is pulling this data, because it's not accessible yet. And we really used that as our foundation to determine, one, what data we needed to initially bring into the system, but also to create those launchpad pinboards that had the base information the users were going to need, so that we could — twofold — make it easy for them to adopt the tool and also quickly start to deactivate or discontinue those reports, as in: these are now only available in ThoughtSpot. Because with the formatting within ThoughtSpot around dates, it's really easy to make this year's report, last year's report, and so on, and just have everything roll over every month or every quarter. So that was some of the pre-work foundation when we originally did it, but really it's been a lot of training — a lot of training. We conducted a lot of in-person training, obviously pre-COVID. We trained the group that we targeted, which was the merchants and all of the surrounding support groups — we had planners going in and training as well — so that everyone who was closely connected to the merchants had an idea of what ThoughtSpot was, how to use it, and where the reports were. We rolled it out that way, and then it started to spread like wildfire. The merchants started to engage with supply chain to have conversations, or the merchants were engaging with vendors to negotiate pricing, and they're creating these reports and getting access to the information so quickly, and sharing it out, that we had other groups just coming to us asking, how do I get into ThoughtSpot, how can I get in? And so, on top of those groups, we also sought out other heavy analytics groups, such as supply chain, where we felt they could have the same benefits if they onboarded into ThoughtSpot with their data as well, and then just continued to evolve the training rollout. We continued to engage with the users, >>so >>we had a newsletter briefly, just to keep informing users of the new data coming in, or when we actually upgraded our system — here are the new features that you'll start seeing. We did virtual trainings and maintained an FAQ document with the incoming questions from the users, and then eventually evolved into self-guided learning, so that users coming from a group where maybe we'd already done a full rollout could come in and have the opportunity to learn how to use ThoughtSpot, with examples that were relevant to the business, and really get started. So then each use case, after our initial one, started to build into a formula of the things that we needed to have: first, you need to understand it.
Having SMEs ready, and having the database and worksheets built out, became the step-by-step path to drive adoption. From an implementation timeline, I think it took about two months, and about half of that was Canadian Tire figuring out our security, how to get the data in, and the time we needed to set up the environment and get on Falcon. Then, after that initial two months, for each use case that we've come through, we've generally got users trained and SMEs set up within about two to three weeks after the data is ingested. And obviously, once Snowflake is set up and the data starts to feed in, you're really just looking at the two to three weeks, because the data is easily connected in. >>All right, let's talk about some of the use cases. We started with what data we've implemented; Andrea touched upon what user training looked like and what the curated content piece was. Now let's talk a little bit about use cases and how we actually leverage ThoughtSpot to gather insights. The very first one is ultimately the benefit of the tool to the entire organization: real-time insights. To reiterate what Andrea said, we first implemented the tool with our buyers. They're the nucleus of any retail organization, as they work with everybody within the company, and it's the buyers' responsibility to ensure that both the procurement and the sales channels stay afloat at the end of the day. So they need information on a regular basis — they need it fast, they need it timely, and they need it in a fashion they choose to digest it in. Not every business is the same, not every individual is the same; they consume, digest, and analyze information differently. And that's what ThoughtSpot allows you to do, whether it's the search or whether it's a customized pinboard. Now, the supply chain aspect of things: as Andrea mentioned, I work a lot with supply chain. What is the goal of supply chain? To receive product and to be able to ship that product to the stores. Now, as our organization has been growing and doing extremely well — we actually published Q3 results recently — the aspect of prioritization at the DC level becomes very important, and what drives some of that prioritization is the analysis of what the upcoming sales would be for specific products and specific categories. And that's where, again, ThoughtSpot is one of the tools that we've utilized recently to set our prioritization logic from both an inbound and an outbound aspect, because it gives you the most recent results, and the most granular results, depending on the business problem that you're trying to tackle. Now let's chat a little bit about the COVID-19 response, because this one is an extremely interesting case. As the pandemic hit back in March, as you can imagine, everyday life at Canadian Tire became, as our executives referred to it, business unusual. Under business unusual, the speed and the intensity of the insights and the analytics have grown exponentially. And that speed and intensity are driven by the fact that we were trying to ensure we had the right selection of products for our Canadian customers, because that's ultimately the bread and butter of all retailers: the customers, right?
So ThoughtSpot allowed us to see early trends in both sales and inventory patterns: whether we were stocking out of some of the products in specific stores or provinces, and whether we saw uplift in different lines of business depending on the regionality. As the pandemic hit, for example, gyms closed and restaurants closed. And as Canadian Tire carries a wide variety of lines of business, we offer a wide selection of exercise equipment and accessories, cycling products, as well as kitchen appliances and kitchen accessories. All of those items started growing exponentially, and in certain areas more than others. This is where ThoughtSpot comes into play. A typical analysis of what the regionality of sales has been over the last couple of days — which is a lifetime in pandemic terms — could have taken days or weeks for analysts to cobble together in an Excel spreadsheet. Meanwhile, it takes a couple of seconds for someone to query ThoughtSpot and set up a pinboard that can be shared with a wide variety of individuals, rather than forwarding that one Excel spreadsheet that gets manipulated every single time, so you don't get the right insight. So again, from a merch, supply chain, and COVID-response aspect of things, ThoughtSpot has been one of those blessings and one of those amazing tools to utilize to improve the speed of insights, the speed of analytics, and the speed of decision making that ultimately impacts the consumer at the store level. So Andrea talked about the 4,500 users that we have — that number is cool. But what I've recently liked to focus on — Andrea and I are laughing because I think the last time we spoke at a larger forum with the ThoughtSpot community, we had only 500 users. >>That was in the beginning of the year, in February, and we were aiming to have like 1,000. >>Exactly. So mission accomplished — we've got 4,500 employees now. Everybody asks me, yeah, that's a big number, but how many times do people actually log in on a weekly or daily basis? I'm more interested in that statistic. Lately we've had more than 400 users on a weekly basis. What's been cool lately is the exponential growth of ad hoc queries: throughout October we reached 75,000 ad hoc queries in our system and about 13,000 pinboard views. Why is that significant? We started off, in that January of 2020 that Andrea refers to, with about 40,000 to 45,000 ad hoc queries a month. So that was cool, but at the end of the day we were able to double that amount as more people migrate to ad hoc searches from pinboard views, and that's a tremendous phenomenon, because that's what ThoughtSpot is all about. I touched a little bit upon exercise and cycling: these are our quarterly results for Q2, which showed tremendous growth that we did not plan for and that we were able to achieve with, ultimately, the individuals who work throughout the organization — whether it's the merch organization or the supply chain side of the business — coming together and utilizing a BI platform, tools such as ThoughtSpot; we can see triple-digit growth results. So what's next for us? Users, ad hoc searches — that's fantastic. I would still like to get to more than 1,200 people on a weekly basis. The cool number to me would be if all of our lifetime users were getting into the tool on a weekly basis.
That would be cool. And what's proven to be true is that ultimately the only way to achieve it is to keep surprising and delighting them — and you're surprising and delighting them with the functionality of the tool, with more relevant content, and ultimately with data. Adding in more data is possible through ETLs, and it's possible through pulling that information manually, but it's expensive — expensive not in the monetary sense, but expensive from a time standpoint, all of those aspects of things. So what I'm looking forward to is migrating our platform to Azure and Snowflake and being able to scale our insights accordingly. Adding more data, adding more insights, more individual worksheets and data sets for people to query against, helps each individual learn, get some of the insights, and helps my team in particular be more well versed in the data that exists throughout the organization. And now Andrea can touch upon how we scale it further and how each individual can become better with this wonderful tool. >>Yeah, so as Yarrow mentioned, the ad hoc searches going up is sort of a little internal victory, because our starting platform had really been to build the pinboards to replicate what the users were already expecting. That was how we easily got people in, and then we just cut off the tap to whatever the previous report was. It gave them a way to get into the tool and understand the information. So now that they're using ad hoc, it really means they understand the tool — they have the data literacy to access the information and use it how they need. That's a really cool piece that worked for Canadian Tire, a very report-oriented and report-heavy organization; it was a good starting platform, so seeing those ad hoc searches go up is great. One of the ways that we scaled out of our initial group — I kind of mentioned this earlier, so I sort of stepped on my own toes here — is that once it was a proven success with the merchants, it started to spread through word of mouth, and we sought out the analyst teams. We really just kept driving the insights, finding the data, and learning more about the pieces of the business — as much as Yarrow would like to think he knows everything about everything, he only knows what he knows. So you have to continue to cultivate the internal champions to really keep growing the adoption. Find the SMEs who are excited about the possibility of using ThoughtSpot and what they can do with it. You need to find those people, because they're the ones who are going to be excited to have this rapid access to the information, and also to be able to spend less time telling a user how to access it in ThoughtSpot than they would running the report. Because, as Yarrow mentioned, we had basically hit a curiosity tax: you didn't want to search for things, or you didn't want to ask questions of the data, because it was so cumbersome — it took too much time to get the data, and if you didn't know exactly what you were looking for, it was worse. You wouldn't run a query and think, oh, that's interesting, let me now run another query on all that information to get more data. It's just not time-effective or resource-effective. So scaling the adoption is really about cultivating the people who are really into it as well.
From a personal development perspective, as a user — I mean, who doesn't like being the smartest person in the room? ThoughtSpot sort of provides that possibility. It makes it easier for you to get recognized for delivering results and valuable insights and driving the business forward. So be that all-star, be the trailblazer with all the answers, and then you can find what you really like — helping the organization realize the power of ThoughtSpot — and maybe make it into a career. >>Amazing. I love that you've joined us, Andrea — such an amazing career trajectory, no bias at all on my part. So, heaps of great information there. Thank you both so much for sharing your story on driving such amazing adoption and the impact that you've been able to make at the organization. We've got a couple of minutes remaining, so just enough time for questions. So, Andrea, our first question is for you: from your experience, what is one thing you would recommend to new ThoughtSpot users? >>Yeah, I would say be curious and creative. There's one phrase that we used a lot in training, which was "just mess around in the tool" — it sort of became a catchphrase, and it really is true. Just try it and use it; you can't break it. So just play around, try it — the only limitation on what you're going to find is your own creativity. And the last thing I would say is don't get trapped by trying to replicate things exactly as they were. "This is how we've always done it" isn't necessarily the best move, and it isn't necessarily going to find new insights. The change forces you to look at things from a different perspective and find new value in the data. >>Yeah, absolutely — sage advice there. And another one here for Yarrow. So our theme for Beyond this year is "analytics meets cloud, open for everyone." In your experience, what does that mean for you? >>Wonderful question. Okay, so to me, in short, it means scale, and it means turning a no into a yes. Let me elaborate — Andrea is laughing at me a little bit. >>I can talk >>fancy too. Okay, so scale: from the scale perspective, cloud, as I touched upon throughout our conversation and our presentation, enables your ability to store more data and have access to more data without necessarily employing a number of ETL developers and working through a number of security aspects across different data sources. Now, turning a no into a yes: what does that mean? With more data and more scalability, the analytics possibilities become infinite. Throughout my career at Canadian Tire and other organizations, if you don't have access to data or you don't have the necessary granularity, you always have to tell individuals: no, it's not possible, I'm not able to deliver that result. And quite often that becomes the norm — saying no becomes the norm. I think what we're all striving towards here on this call, as part of the conference, is turning that no into a yes, and then making yes the new standard, the new norm, as we have more access to the data and more access to the insights. So that would be my answer. >>Love it. Amazing. Well, that kind of brings an end to this session. So thank you, everyone, for joining us today, and to wrap up this stream:
Don't miss the upcoming product roadmap session. We'll be sticking around to speak to some of the speakers you heard earlier today in the Meet the Experts roundtable, and you can absolutely continue the conversation with this live Q&A — so you've got an opportunity here to ask the questions that maybe keep you up at night. Stay tuned for Meet the Experts: secrets to scaling analytics adoption, after the product roadmap session. Thanks, everyone, and thank you again for joining us. Guys, appreciate it. >>Thank you. Thanks. Thanks.

Published Date : Dec 10 2020



External Data | Beyond.2020 Digital


 

>>Welcome back, and thanks for joining us for our second session: external data, your new leading indicators. We'll be hearing from industry leaders as they share best practices and challenges in leveraging external data. This panel will be a true conversation on the art of the possible. All right, let's get to >>it >>today. We're excited to be joined by ThoughtSpot's Chief Data Strategy Officer, Cindy Howson; Deloitte's Chief Data Officer, Juan Tello; the founder and CEO of Eagle Alpha, Emmett Kilduff; and Snowflake's VP of Data Marketplace and Customer Product Strategy, Matt Glickman. Cindy, without further ado, the floor is yours. >>Thank you, Mallory. And I am thrilled to have this brilliant team joining us from around the world — they each bring a very unique perspective. So I'm going to start with the person furthest away. Emmett, welcome. Where are you joining us from? >>Thanks for having us, Cindy. I'm joining from Dublin, Ireland. >>Great. And tell us a little bit about Eagle Alpha. What do you do? >>From a company's perspective, think of Eagle Alpha as an aggregator of all the external data sets. And a word I'll use a few times today: a big advantage we can bring companies is that we have a data concierge service. There's so much data out there — we can help identify the right data sets depending on the specific needs of the company. >>Yeah. And so, Emmett, you know, people think I kind of shocked the industry going from Gartner to a tech startup. You have had a brave journey as well, going from financial services to starting this company, really pioneering it, with, I think, the most data sets of any of these. Is that right? >>Yes, it was a big jump to go from Morgan Stanley — to leave the comforts of that environment for a PowerPoint deck and myself raising funding eight years ago. So it was a big jump, and we were very early in our market. It's in the last few years that there's been real momentum and adoption by various types of verticals. The hedge funds were first, then maybe private equity, but corporates are following quite quickly from behind. They will be the biggest users, in our view, by a significant distance. >>Yeah, great. Thank you, Emmett. So we're going to go a little farther afield now, but back to the U.S. Juan, where are you joining us from? >>Hey, Cindy, thanks for having me. I'm joining you from Houston, Texas. >>Great — it used to be my home. I can probably see Rice University back there. And you have a distinct perspective, serving both Deloitte customers externally but also internally. Can you tell us about that? >>Yeah, absolutely. So I serve as Deloitte Consulting's chief data officer, and as a professional services firm, I have the responsibility for overseeing our overall data agenda, which includes both the way we use data and insights to run and operate our own business, but also how we develop data and insight services that we then take to market, and how we serve our clients. >>Great. Thank you, Juan. And last but not least, Matt Glickman — kind of in my own backyard in New York. Right, Matt? >>Correct. I haven't been into the city in many months, but yes, based in New York. >>Okay, great. And so, Matt, you and Emmett are also brave pioneers in this space, and I'm remembering a conversation you and I shared when you were still at J.P. Morgan, I believe — sorry, at Goldman Sachs. Can you share that with us? >>Sure. I made the move back in 2015.
That was when everyone — my wife included — thought I was crazy. I don't know if I would call it comfortable, as Emmett did, but I had been there for a long time and had suffered, in some ways, a lot of the pains we're talking about today, given the number of data sets and the amount of new data sets that are always in demand. Having run analytics teams at Goldman, I saw the pain and realized that this pain was not unique to Goldman Sachs — it was being replicated everywhere across the industry in a mind-boggling way. And I had the fortuitous luck to have one of Snowflake's founders come to pitch Snowflake to Goldman a little bit early — they became a customer later, but this was a little bit early, in 2014. And I realized that this was clearly the answer from first principles, and that if I ever was going to leave, this was a problem I was acutely aware of. I was also aware of how much demand there was in financial services for a better solution, and how the cloud could really solve this problem — in particular, the ability to not have to move data in and out of these organizations. This was something I saw as the future. This was the sort of price that people just expected to pay: if you needed data, there was a method you had to use — you either FTP'd data in and out, or you had data that was being dropped off, maybe in newer ways in cloud buckets or via APIs; you had to suck all this data down and reconstruct it, and God forbid the formats changed. It was a nightmare. And then, when you had issues with the data, what you were seeing internally looked nothing like what the data vendors were seeing, because they were on a completely different system, maybe modeled completely differently. But this was just the way things were. Everyone had firewalls, everyone had their own data centers, there was no other way, and it was super costly. I won't even share the details of the errors that would occur and the pain that would come from that. What I realized — and it was confirmed by what I saw at Snowflake at the time — was that once everyone moves to run their actual workloads in the cloud, where you're now beyond your firewall, you'll have all this scale. But on top of that, you'll be able to point at data from these vendors — whether the traditional data vendors or this new wave of alternative data vendors, for example like the ones that Eagle Alpha brings together — and bring all these data sets together with your own internal data without moving it. This was a fundamental shift. In some ways it was a side effect of everyone moving to the cloud for cost and scale and elasticity, but a side effect of that is what we talked about at Snowflake Summit — and, you know, yesterday — this notion of a data cloud that would connect data between regions, between cloud vendors, between customers, in a way where you can now reference data just like you reference websites today. I don't download CNN.com; I point at it, and it points me to something else. I'm always seeing the latest version, obviously, and we can all collaborate on what I'm seeing on that website. That's the same thing that can now happen with data. So I saw this as what was possible, and I distinctly asked the question of the CEO at the time: is this possible?
And not only was it possible, it was a fundamental construct built into the way that Snowflake was delivered. And then, lastly — and this is what we learned, and I think this is what Emmett has also been touting — it's all great if data is out there, and even if you lower that bar of access so data doesn't have to move: how do I know? If I'm back sitting at Goldman Sachs, how do I know what data is available to me now in this connected data network? So we released our data marketplace, which was a very different kind of marketplace from those of the past. For us, it was really a global catalog that would let a data consumer know what data was available, but also level the playing field. Now Eagle Alpha, or even a new alternative data vendor building something in their basement, can publish that data set so the world can see it and consume it — and that's aligned to Snowflake's core business, where we don't have to be competing or taking any kind of custody of that data. So adding that catalog to this now-ubiquitous access really changed the game. And now I seem like a genius for making this move, but back then, like I said, I seemed insane. >>Well, given that Snowflake was the hottest IPO, like, ever, you were a genius — doing this six years in advance. I think we all agree on that. But a lot of this is still visionary; some of the most leading companies are already doing it. But, Juan, what is your take? Are your best-in-class customers still moving the data? Or are they at least thinking about data monetization? What are you seeing from your perspective? >>Yeah, I mean, the overall appreciation and understanding of, one, "I've got to get my house in order around my data," is something that has been understood and acted upon. And I do agree that there is a shift now that says data silos alone aren't necessarily going to bring me new and unique insights, and so enriching that with external third-party data is absolutely the shift that we're seeing our customers undergo. What I find extremely interesting in this space, and what some of the most mature clients are doing, is really taking advantage of these data marketplaces but also building data partnerships — where there is a win-win scenario for the organizations involved. And that could be retail customers or life science customers: with the pandemic, we saw companies that weren't naturally sharing information now building these data partnerships that mutually benefit all the organizations that are part of that value chain. And I think that's the really important criterion: how we see our clients be extremely successful at this is that the partnership has benefits on both sides of the equation — both the data provider and the consumer of that data. And there has to be some way to ensure that both parties are learning, gaining new insights to support whatever their business objectives are. >>Yeah. Great, Juan.
So those data partnerships get across the full value chain of sharing data and analytics. Emmett, you work on both sides of the equation here, helping companies — let's say data providers, maybe ones with human mobility data — monetize that, but also people that are new to it. Where are you seeing the top use cases? >>Interestingly — and I agree with Juan — on the supply side, one of the interesting trends is that we're seeing a lot more data coming from large corporates, whether they're listed or private equity backed, as opposed to data startups that are earning money just through data monetization. I think that's a great trend, and I think it means a lot of the best data is yet to come. In terms of the tough economy and how that's changed things, I think the category that's had the most momentum — and your slide references this — is geolocation data. That was the category at our conference in December 2018 that was tipped as the category to watch in 2019, and it didn't become that at all; there were some regulatory concerns for certain types of geo data. But with COVID-19, it's been absolutely critical for governments, ministries of finance, central banks, and municipalities to crunch that data to understand what's happening on a real-time basis. And from a company perspective, it's obviously critical as well, in terms of planning when customers might be back on the high street and so forth. Traditionally, consumer transaction data has been the most popular of all the 26 categories in our taxonomy, but geo is definitely catching up. Your slide talked about it being a tough economy — just one point to contradict that: for certain pockets of our clients, e-commerce companies are having a field day, obviously, and they are very data-driven and tech-literate, and they are a really good client base for us because they're incredibly hungry for more data to help drive various decision making.
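As a concrete, hypothetical illustration of the "point at external data instead of moving it" idea Matt describes earlier in the panel, here is a minimal Python sketch using the Snowflake connector to join an internal sales table to a geolocation data set mounted from a marketplace share. Every account detail, database, table, and column name below is a placeholder; the only assumption is that the provider's share has already been added as a database in the consumer's account.

```python
# Hedged sketch: query shared (external) data in place and join it to internal data,
# with no FTP drops or file reconstruction involved.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="analyst",          # placeholder
    password="***",          # placeholder
    warehouse="ANALYTICS_WH",
)

query = """
    SELECT s.store_id,
           s.week,
           s.net_sales,
           g.foot_traffic_index                         -- column from the provider's share
    FROM   internal_db.sales.weekly_store_sales AS s
    JOIN   geo_provider_share.public.foot_traffic AS g  -- database created from the share
           ON s.store_id = g.store_id AND s.week = g.week
    ORDER  BY s.week
"""

cur = conn.cursor()
cur.execute(query)
for row in cur.fetchmany(20):
    print(row)  # (store_id, week, net_sales, foot_traffic_index)
cur.close()
conn.close()
```

The design point is that the external table is referenced, not copied: the provider keeps publishing, and the consumer always queries the latest version alongside internal data.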
We want to be able to take advantage of that at the pace and speed that data is being created. So going from a manual catalog to an automated data catalog, I think, is a key capability for us. But then, you know, to your second point, Cindy, is how do I then connect that to our own internal data to drive greater insights into how we run our business or how we serve our customers. And that one really is a tricky question, because I think it just depends on what data we're looking to leverage. You know, we have this concept that not all data is created equal. And when you think about governance, and you think about the management of your master data, your internal nomenclature on how you define and run your business, you know, that entire ecosystem begins to get extremely massive, and it gets very broad and very deep. And so for us, you know, governance and master data management is absolutely important, but we took a very prioritized approach on which domains we really need to get right that drive the greatest results for our organization. And so mapping those domains, like client data or employee data, to these external third-party data sources across this catalog was really the unlock for us, versus trying to create this massive connection between all the external data that we're leveraging as well as all of our own internal data. So for us, I think it was a very tailored, prioritized approach to connecting internal data to external data, based on the domains that matter most to our business. >> So if the domains — so customer is an important domain, and maybe that's looking at things, you know, whether it's social media data or customer transactions — you prioritized first by that. Is that right? >> That's correct. That's correct. >> And so then, Matt, I'm going to throw it back to you, because Snowflake is in a unique position. You actually get to see what are the most popular data sets. Is what Juan described playing out? Are you seeing that play out? >> I'd say watch this space. Like you said, I mean, we started with the Data Cloud. We solved that movement problem, which I think was really the barrier — you tended to not even have a chance to focus on this mapping problem. This notion of concordance — I think this is where I see the big next momentum in this space: a flurry of traditional and new startups who deliver this concordance, or knowledge graph, as a service, where this is no longer a problem that I have to solve internal to my organization. The notion of mastering, which again everyone has to do in every organization, like they used to have to do with moving data into the organization, goes away. And this becomes: I find the best of breed for the different scopes of data that I have, and it's delivered to me as, you know, a cloud service that just takes my data, my internal data, maps it to these second- and third-party data sets, all delivered to me, you know, as a service. >> Yeah, well, that would be brilliant — concordance as a service, or clean master data as a service, using augmented data prep — that would be brilliant. So let's hope we get there. You know, 2020 has been a wild ride for everyone. If I could ask each of you: imagine what is the art of the possible, looking ahead to the next two years. Emmett, you already mentioned the best is yet to come.
Do you want to drill down on that? What part of the best is yet to come, or what is your art of the possible? >> Just a brief comment on mapping: just this week we published a white paper on mapping, which is available for anyone on eaglealfa.com. It's a massive challenge. It's very difficult to solve just with technology. People have tried to solve it and get a certain level of accuracy, but can't get to 100%, which makes it difficult. If there is a new service coming out that gets to 100%, I'm all ears, and that would be a massive step forward for the entire data industry, even if it comes in a few years' time, let alone next year. Going back to the comment on data, Cindy: yes, I think boards of companies are more and more viewing data as an asset as opposed to an expense or a cost center, and they are looking, therefore, to get their internal house in order, as Juan was saying, but also to monetize the data they are sitting on. Lots of companies are sitting on potentially valuable data. It's not all valuable — in a lot of cases they think it's worth a lot more than it is, being frank — but in some cases there is valuable data, and, if monetized, it can drop to the bottom line. So I think that bodes well right across the world; a lot of the best data is yet to come. And I think a lot of firms like Deloitte are very well positioned to help drive that adoption, because they are the trusted advisor to a lot of these corporates. So that's one thing. I think, from a company perspective, we're still at first base. It's quite frustrating how slow a lot of companies are to move and adopt; some of them haven't hired a CDO, some of them don't have their internal house in order. I think that has to change next year. I think if we have this conference at this time next year, I would expect we would hopefully be close to the tipping point for corporates to use external data — the Malcolm Gladwell tipping point. And the final point I'd make is that I think we'll hopefully start to see multi-department use as opposed to silos. Again, departments and silos hopefully will be more coordinated on the company side. Data could be used by marketing, by sales, by R&D, by strategy, by finance — all using external data. So it really, hopefully, will be coordinated by this time next year. >> Yeah, thank you. So, to your point, there recently was an article, too, about one of the airlines — that their data actually has more value than the company itself now. So I know we're counting on, you know, integrators, trusted advisors like Deloitte, to help us get there. Juan, what do you think? And if I can also drill down: you know, financial services was early to all of this because they needed the early signals. And we talk about, you know, is external data now more valuable than internal? Because we need those early signals in just such a different economy. >> Yeah, I think, you know, for me, it's the seamless integration of all these external data sources and the signals that organizations need, and how to bring those into, you know, the day-to-day operations of your organization, right? So how do you bring those into, you know, your planning process? How do you bring that into your sales process, and so on? I think, for me, success — or where I see the use and adoption of this — is that it's got to get down to that level of operations for organizations
for this to continue to move at the pace and deliver the value that, you know, we're all describing. I think we're going to get there, but I think until organizations truly get down to that level of operations in how they're using this data, it'll sort of seem like a bolt-on, right? So for me, I think it's all about more of that seamless integration. And, to what Matt mentioned around services that could help connect external data with internal data, I'll take that one step beyond and say: how can we have the data connect itself? So I had referenced, you know, automation and machine learning — there are significant advances in terms of how we're seeing mapping occur in an auto-generated fashion. I think this specific space, and again the connection between external and internal data, is a prime example of where we need to disrupt that, you know, sort of traditional data pipeline and try to automate that as much as possible. And let's have the data, you know, connect itself, because it then sort of supports the first concept, which was: how do we make it more seamless and integrated into, you know, the business processes of the organization? >> Yeah, great ones. So you two are thinking those automated, more intelligent data pipelines will get us there faster. Matt, you already gave us one great look ahead. Any more to add to it? >> I'll give you two more. One is a bit controversial, but I'll throw it out there anyway. Going back to the point that Juan made about data partnerships, and what you were saying, Cindy, about, you know, the value of these companies sometimes being more about the data they have than the actual service they provide: I predict you're going to see a wave of mergers and acquisitions that is solely about locking down access to data, as opposed to having data open up to the broader economy — whether that be a retailer or, you know, an insurance company with these prime data assets. You know, they could try to monetize that themselves, but if someone could acquire them and get exclusive access to that data, I think that's going to be a wave of M&A that is going to be like, "Well, we bought this for this amount of money because of their data assets." So I think that's going to be a big wave, and it'll maybe be under the guise of data partnerships, but it will really be about locking down exclusive access to valuable data as opposed to trying to monetize it themselves. That's number one. And then lastly, now that you have this kind of ubiquity of data in this interconnected data network, what we're starting to see, and what I think we're going to see a big wave of, is hyper-personalization of applications — where, instead of having the application have the data itself, have me, Matt at Snowflake, bring my data graph to applications, right? This decoupling — we always talk about how you get data out of these applications; it's sort of the reverse. I'm saying now I want to bring all of my data access that I have — first-, second- and third-party — into my application, instead of having to think about getting all the data out of these applications. Think about how, when you're using a workout app in the consumer space, right, I can connect my Spotify or connect my Apple Music into that app to personalize the experience and bring my music list to it. Imagine if I could do that, you know, in a CRM. Imagine I could do that in a risk management app.
Imagine I could do that in a marketing app, where I can bring my entire data graph with me and personalize that experience, given what I have. And I think, again, you know, partners like ThoughtSpot are in a unique position to help enable that capability for this next wave of applications that really take advantage of this decoupling of data — having data flow into the app tied to me, as opposed to having the app have to know about my data ahead of time. >> Yeah, so that is very forward thinking. So I'll end with a prediction and a best practice. I am predicting that the organizations that really leverage external data — new data sources, not just weather or what have you — and modernize those data flows will outperform the organizations that don't. And as a best practice to getting there, the CDOs that own this and have at least visibility into everything they're purchasing can save millions of dollars in duplicate spend. So, to get there, three key takeaways: identify the leading indicators and market signals, the data you need to better identify them; consolidate those purchases; and please explore the range of data sets and data providers that we have on the ThoughtSpot Atlas Marketplace. Mallory, over to you. >> Wow, thank you. That was incredible. Thank you to all of our panelists for being here and sharing that wisdom. We really appreciate it. For those of you at home, stay close by — our third session is coming right up, and we'll be joined by our partner AWS to see how you can leverage the full power of your data cloud, complete with a demo. Make sure to tune in. See you then.

Published Date : Dec 10 2020


Tres Vance, Red Hat | AWS re:Invent 2020 Public Sector Day


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020 — special coverage sponsored by AWS Worldwide Public Sector. >> Welcome back to theCUBE's coverage. This is theCUBE Virtual, in our coverage of AWS re:Invent with special coverage of the Worldwide Public Sector Day. I'm your host, John Furrier. We are theCUBE, and I'm joined by Tres Vance, hyperscaler partner lead with Red Hat. Tres, welcome to theCUBE. >> Thank you. Great to be here, John. Very happy to be at my first virtual re:Invent, but probably my third re:Invent in a row itself. >> You know, it's super exciting, and usually we're in person, as you mentioned, but theCUBE is virtual, you're virtual — we've got to do it virtual this year. But the game is still the same: it's about learning, it's about getting updates on what's relevant for customers. With the pandemic, a lot of things have been highlighted, and this has been the big fun of re:Invent. You mentioned three years — this is our eighth year; we've been there every year except for the first year. You just look at the growth, right, but it's still the same cadence of more news, more announcements, more higher-level services. You know, with OpenShift we've been following that, with Kubernetes and containers, service meshes — you're seeing microservices, all of this coming together around open source — and public sector is the main benefit of that right now. If you look at most interviews that I've done, the mandate for change in public sector is multifold, in every vertical, education to military, right? So there's a need to get off your butt and get going with cloud if you're in public sector. Tell us more about Red Hat and the partnership around public sector, because I think that's really what we want to dig into. >> Absolutely. And there definitely have been changes this year that have inspired innovation. Red Hat and AWS have been on a path of innovation for quite a while — Red Hat working with the open source community and taking an iterative approach to what we call upstream first, which is essentially to develop in the open source communities, mature those into enterprise-grade products, and then iteratively take those findings back to the open source community. So Red Hat and AWS have had a long history of collaboration, starting all the way back in 2007 with Red Hat Enterprise Linux being available within the AWS console, continuing on to things like AWS Quick Starts, which are reference architectures for how to deploy products that you're managing yourself, and then, more recently — recent being the last, say, four years — to offer an OpenShift managed service within AWS. And now we're continuing that with a joint offering that's going to be forthcoming: the Red Hat OpenShift service on AWS, which will be the first native offering and joint offering with AWS by a third party such as ourselves. So there's a history of innovation there and a history of collaboration, and I think we'll talk a little bit later on in the interview specifically about how that relates to public sector and their unique needs. >> Yeah, well, let's just get in there. What are some of the unique needs? Because there's value in your partnership with AWS — you laid out a bunch of those services, so certainly there are customers that are in need. What specific requirements are there? Can you tell us how Red Hat and AWS work together to meet these challenges? >> Sure.
So the public sector group is composed of many organizations and agencies. When I think of public sector, I think of the federal civilian space, I think about the DoD, and the state and local and education space. All of those elements of public sector have different needs, but there are some standards that are very pervasive in the public sector — things like FIPS, and how you articulate your compliance with particular validated cryptographic modules, or how you express a control statement using something like NIST 800-53, which is critically important for cloud service offerings. And so those are some of the things that Red Hat and AWS have a heritage of working together on, also providing deep explanations for those organizations and their mission, so that they can comfortably move into the cloud and do digital transformation by taking applications that may be on-prem today and having the confidence to move those into the cloud with security and compliance at the forefront. So when I think about the overall mission of government and then the threats to that mission, whether they be state actors or, you know, individuals, there are serious solutions that have been developed in the open source community to provide greater visibility into security, and there are things that the government has done to create frameworks for compliance. And those are things that we work with in the open. So we have a process that we call compliance as code, which can be found both inside of repositories like GitHub and also on our website, where we articulate how our products actually work with those compliance frameworks, the cryptographic validations, and some of the certifications for technology that the government has put forward. >> So if it's compliance as code, like infrastructure as code, which is DevOps — what do you call it, GovDevOps, GovOps, ComplianceOps? It's kind of got a little DevOps vibe there. I mean, this is a really real question: you're talking about making compliance automated. This is what DevOps is all about, right? And this is kind of where it's going. How do you expand more on that? Take a minute to explain. >> Sure. So Red Hat, over the last 20-plus years, has been doing things that are now called DevOps, or DevSecOps, or any number of combinations of those words. But the reality is that we've worked in things like small teams, we've worked to make things like microservices — where you have a very well-defined and discrete service that can be scaled up — and then that's been incorporated into our products. But not only that, we release those things back to the open source community to make the broader Linux platform, for example, and the broader Kubernetes platform stronger, and to also get more visibility into some of those security items, so that there is a level of trust that you can have in the software supply chain that's being created — not only with us, but with the things that customers are building based on these solutions. >> Yeah, that's a good point. Trust, and all that compliance is, too. But also, when you have that trust, now you have a product — you want to actually deploy it or have customers consume it. It hasn't always been easy to get there and get covered. You've got FedRAMP — I mean, I talk to Teresa Carlson about this all the time at AWS. You know, there are all kinds of things, you've got hoops you've got to jump through. How are you guys making that
easier? Because, again, that's another concern. You've got a great channel, you've got the upstream-first model, you've got the open source community. You know, enterprises are certainly doing great, and now you're doing great in public sector. How are you guys making it easier for partners to onramp into all these Fed programs? >> Yeah. So when I think about the application transformation that organizations are going through, we have, especially in the OpenShift environment, what we call the operator framework, which allows operational knowledge to be expressed as code. That's going to be kind of a running theme for us — being able to do these things as code — whether it's things like our compliance operator, which allows you to do testing of a production environment, testing of the operational elements of your infrastructure, to be able to test them for compliance: is FIPS enabled, are our cryptographic libraries being used, and at what levels are they being used — by simply the operating system, or are they being used in the Kubernetes environment? Are they even being used to access AWS services? So one of the big things that is important for Red Hat customers that are moving into the cloud is the depth at which we can leverage the cloud provider services, such as the AWS services, but also bring the application services that the customer may be familiar with on-prem, bring those into the environment, and then be able to test. So you trust, but you verify, and you provide that visibility and ultimately that accountability to the customer that is interested in using your solution in the cloud. And that's what one of those success criteria is going to be. >> Yeah, and speed, too, is a big theme. We're hearing speed, agility — I mean, agility has been talked about all the time with DevOps, DevSecOps, and, you know, all these ops, automation, but speedy deployment. This brings up the point we kind of teed up a little bit at the top of the interview: there's been a big year for disruption — pandemic uncertainty, a polarized political environment, geopolitics, you've got congestion and contention in space, you've got the edge of the network exploding. So there's new paradigm-shifting going on everywhere, right? And all the turmoil — the pandemic specifically — has been driving a lot of change. How has all this disruption accelerated the public sector cloud journey? Because, as we were talking earlier, you know, the public sector didn't have a big IT budget; it was never super funded like enterprises, they're not flush with cash, and the motivation was to kind of go slow. Not anymore — sure, not anymore. >> I think a lot of organizations have drawn inspiration from those factors, right? So you have these factors that say you have a limited budget, and that necessity brings out the innovation, right? And especially for government organizations, the spirit of innovation is something that runs deep in the culture, and when faced with those kinds of things, they actually rise to the occasion. And so I think about things like the US Navy's Compile to Combat in 24 Hours program, which we're part of, and that program is leveraging things like automation, DevSecOps, and agile methods to create new capabilities and new software — and, as the program name says, it's compile to combat in 24 hours. So the idea is that you can have software that is created, a new capability deployed and in theater, within a short period of time.
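To make the compliance-as-code and "is FIPS enabled" testing Tres describes a bit more concrete, here is a minimal, hedged sketch of what such a check could look like as an Ansible playbook. The inventory group name and the playbook structure are illustrative assumptions, not something from this interview; only the /proc/sys/crypto/fips_enabled kernel flag is a standard Linux interface.

    ---
    # Minimal sketch (assumed example): assert that FIPS mode is enabled on a
    # fleet of RHEL hosts. "rhel_servers" is a hypothetical inventory group.
    - name: Verify FIPS mode across the fleet
      hosts: rhel_servers
      gather_facts: false
      tasks:
        - name: Read the kernel FIPS flag
          ansible.builtin.slurp:
            src: /proc/sys/crypto/fips_enabled
          register: fips_flag

        - name: Fail the run if FIPS mode is off
          ansible.builtin.assert:
            that:
              - (fips_flag.content | b64decode | trim) == "1"
            fail_msg: "FIPS mode is not enabled on {{ inventory_hostname }}"

A check like this can gate the rest of a deployment playbook, which is the trust-but-verify pattern described above.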
That's very agile, and it's also a very innovative thing, and that's all leveraging Red Hat's portfolio of products. But it's also their vision and their methodology to actually bring that to life. So we're very fortunate and very glad to be a part of that, and to continue iterating on it that way. >> It's nice to be on the roadmap of the product requirements that are needed now, because the speed is super important, and the role of data and all the things that you're doing — and open source drives that. Tres, great to have you on sharing your insight. Just a personal question: hyperscaler partner leader is your title. What does that mean? Does it mean you're going after the hyperscalers, that the hyperscalers are your partners? Just take a minute to explain what you do — it's fascinating. >> It definitely means that I'm hyperscale, 100%. The other thing it means is that we view the cloud service providers as hyperscalers, right? They have capacity on demand, pay as you go, this very elastic nature to what they do. They offer infrastructure as a service that you can then use as the foundation of your solutions. So as a hyperscaler partner leader, what I do is work very closely with the AWS team. Actually — super long story short — I came from AWS, after spending about three years there, so I understand it pretty well. In this particular case, I'm working with them to bring the whole portfolio of Red Hat products not only onto the cloud for customers to consume in a self-directed manner, but also as we build out more of these managed services across application services, AI and ML. As you mentioned with things like COVID, there are discrete examples of things like business process management and decision making that are used in hospitals and inside of places within the government that are really wrestling with these decisions. So I'm very pleased with, you know, the relationship that we have with AWS. They're a great partner. It's a great opportunity to talk, especially now at re:Invent. So these are all really good things, and I'm really excited to be the hyperscaler partner leader. >> That's great — you have that DNA from the best. You know how to do the working-backwards stuff, you know the cultures, both technical cultures, very customer-centric. So it's a nice fit. Thank you for sharing that, and thanks for the insight into re:Invent and Red Hat. Thank you. >> All right, it was great to be here, and I look forward to learning a lot this re:Invent. >> Great. We'll see you on the interwebs throughout the next couple of weeks. Tres Vance, hyperscaler partner lead, really bringing the cloud to Red Hat and customers in public sector. This is our special coverage of Public Sector Day here at re:Invent, and ongoing coverage on theCUBE Virtual throughout the next couple of weeks. I'm John Furrier, your host. Thanks for watching.

Published Date : Dec 9 2020


Jill Rouleau, Brad Thornton & Adam Miller, Red Hat | AnsibleFest 2020


 

>> (soft upbeat music) >> Announcer: From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello, welcome to theCUBE's coverage of AnsibleFest 2020. We're not in person, we're virtual. I'm John Furrier, your host of theCUBE. We've got a great power panel here of Red Hat engineers. We have Brad Thornton, Senior Principal Software Engineer for Ansible networking; Adam Miller, Senior Principal Software Engineer for Security; and Jill Rouleau, who's the Senior Software Engineer for Ansible Cloud. Thanks for joining me today. Appreciate it. Thanks for coming on. >> Thanks. >> Good to be here. >> We're not in person this year because of COVID — a lot going on, but still a lot of great news coming out of AnsibleFest this year. You guys launched a lot since last year; it's been awesome. Launched the new platform, the automation platform, grown the collections — certified collections from five supported platforms to over 50 — launched the automation services catalog. Brad, let's start with you. Why are customers successful with Ansible in networking? >> Why are customers successful with Ansible in networking? Well, let's take a step back to a bit of classic network engineering, right? Lots of CLI interaction with the terminal — a real opportunity for human error there — and managing thousands of devices from the CLI becomes very difficult. I think one of the reasons why Ansible has done well in the networking space, and why a lot of network engineers find it very easy to use, is because you can still get at it at the CLI. But what we have the ability to do is pull information from the same CLI that you were using manually and show that as structured data, and then let you take that structured data and push it back to the configuration. So what you get when you're using Ansible is a way to programmatically interface and do configuration management across your entire fleet. It brings consistency, stability, and speed, really, to network configuration management. >> You know, one of the big, hottest areas is — I always ask the folks in the cloud what's next after cloud, and pretty much unanimously it's edge, and edge is super important around automation, Brad. What are your thoughts as people start thinking about, okay, I need to have edge devices? How does automation play into that? Because networking and edge kind of go hand in hand there. So what's your thought on that? >> Yeah, for sure. It really depends on what infrastructure you have at the edge. You might be deploying servers at the edge, you may be administering IoT devices, and really, how are you directing that traffic, either into edge compute or back to your data center? I think one of the places Ansible is going to be really critical is administering the network devices along that path — from the edge, from IoT, back to the data center or to the cloud. >> Jill, when you have a cloud, what are your thoughts on that? Because when you think about cloud and multicloud, that's coming around the horizon, you're looking at kind of the operational model. We talked about this a lot last year, around having cloud ops on premises and in the cloud. What should customers think about when they look at the engineering challenges and the development challenges around cloud? >> So cloud gets used for a lot of different things, right?
But if we step back Cloud just means any sort of distributed applications, whether it's on prem in your own data center, on the edge, in a public hosted environment, and automation is critical for making those things work, when you have these complex applications that are distributed across, whether it's a rack, a data center or globally. You need a tool that can help you make sense of all of that. You've got to... We can't manage things just with, Oh, everything is on one box anymore. Cloud really just means that things have been exploded out and broken up into a bunch of different pieces. And there's now a lot more architectural complexity, no matter where you're running that. And so I think if you step back and look at it from that perspective, you can actually apply a lot of the same approaches and philosophies to these new challenges as they come up without having to reinvent the wheel of how you think about these applications. Just because you're putting them in a new environment, like at the edge or in a public Cloud or on a new, private on premise solution. >> It's interesting, you know, I've been really loving the cloud native action lately, especially with COVID, we're seeing a lot of more modern apps come out of that. If I could follow up there, how do you guys look at tools like Terraform and how does Ansible compare to that? Because you guys are very popular in the cloud configuration, you look at cloud native, Jill, your thoughts. >> Yeah. So Terraform and tools like that. Things like cloud formation or heat in the OpenStack world, they do really, really great at things like deploying your apps and setting up your stack and getting them out there. And they're really focused on that problem space, which is a hard problem space that they do a fantastic job with where Ansible tends to come in and a tool like Ansible is what do you do on day two with that application? How do you run an update? How do you manage it in the longterm of something like 60% of the workloads or cloud spend at least on AWS is still just EC2 instances. What do you do with all of those EC2 instances once you've deployed them, once they're in a stack, whether you're managing it, whatever tool you're managing it with, Ansible is a phenomenal way of getting in there and saying, okay, I have these instances, I know about them, but maybe I just need to connect out and run an update or add a package or reconfigure a service that's running on there. And I think you can glue these things together and use Ansible with these other stack deployment based tools really, really effectively. >> Real quick, just a quick followup on that. what's the big pain point for developers right now when they're looking at these tools? Because they see the path, what are some of the pain points that they're living right now that they're trying to overcome? >> I think one of the problems kind of coincidentally is we have so many tools. We're in kind of a tool explosion in the cloud space, right now. You could piece together as as many tools to manage your stack, as you have components in your stack and just making sense of what that landscape looks like right now and figuring out what are the right tools for the job I'm trying to do, that can be flexible and that are not going to box me into having to spend half of my engineering time, just managing my tools and making sense of all of that is a significant effort and job on its own. 
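To ground Jill's day-two point — the stack is deployed with whatever tool you like, and now you just need to update packages on the instances that are already running — here is a minimal, hedged sketch. The group name assumes a dynamic inventory (for example, the amazon.aws.aws_ec2 inventory plugin) has already grouped the instances by tag; that group name and the playbook itself are illustrative, not taken from the panel.

    ---
    # Hedged sketch: day-two patching of EC2 instances that are already deployed.
    # "tag_env_production" is a hypothetical group built from an instance tag.
    - name: Apply pending updates to running instances
      hosts: tag_env_production
      become: true
      tasks:
        - name: Update all packages on RHEL-family hosts
          ansible.builtin.dnf:
            name: "*"
            state: latest
          when: ansible_facts['os_family'] == "RedHat"

        - name: Update all packages on Debian-family hosts
          ansible.builtin.apt:
            upgrade: dist
            update_cache: true
          when: ansible_facts['os_family'] == "Debian"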
>> Yes — too many, may I add. I would joke years ago, during the big data surge, about the tools, the tool train — one we called the tool shed: after a while, you don't know what's in the back or what you're using every day. People get comfortable with the right tools, but the platform becomes a big part of that — thinking holistically, as a system. And Adam, this comes back to security. There are more tools in the security space than ever before. Talking about tool challenges, security is the biggest tool shed; everyone's got tools, they buy everything, but you've got to look at what a platform looks like, and developers just want to have the truth. And when you look at the configuration management piece of it, security is critical. What are your thoughts on the source of truth when it comes into play for these security appliances? >> So the source of truth piece is kind of an interesting one, because this is going to be very dependent on the organization: what type of brownfield environment they've developed, what type of things they rely on, and what types of data they store there. So we have the ability for various sources of truth to come in for your inventory source and the types of information you store with that. This could be tag information on a series of cloud instances or a series of other resources. This could be something you store in a network management tool or a CMDB. This could even be something that you put into a privileged access management system, such as CyberArk or HashiCorp Vault. Because of Ansible's flexibility, and because of the way that everything is put together in a pluggable nature, we have the capability to actually bring in all of these components from anywhere in a brownfield environment, in a pre-existing infrastructure, as well as new decisions that are being made for the enterprise as they move forward. And we can bring all that together and be that infrastructure glue, be that automation component that can tie all these disjoint, loosely coupled, or completely decoupled pieces together. And that's kind of part of that security posture, remediation, various levels of introspection into your environment, these types of things as we go forward — and that's kind of what we're focusing on doing with this. >> What kind of data is stored in the source of truth? >> So what type of data? This could be credentials — it could be single-use credential access. This could be your inventory data for your systems, what target systems you're trying to reach. It could be various attributes of different systems, to be able to classify them and codify them in different ways. It kind of depends. It could be configuration data. You know, we have the ability, with some of the work that Brad and his team are doing, to actually take unstructured data, make it structured, pull it into whatever your chosen source of truth is, store it, and then utilize that to kind of decompose it into different vendor-specific syntax representations and those types of things. So we have a lot of different capability there as well. >> Brad, you mentioned you have a talk on parsing. Can you elaborate on that? And why should network operators care about that? >> Yeah, welcome to 2020 — we're still parsing network configuration and operational state. This is an interesting one. If you had asked me years ago, did I think that we would be investing development time into parsing network configurations with Ansible, I would have said, "Well, I certainly hope not.
"I hope programmability of network devices and the vendors "really have their API's in order." But I think what we're seeing is network containers are still comfortable with the command line. They're still very familiar with the command line and when it comes time to do operational state assessment and health assessment of your network, engineers are comfortable going to the command line and running show commands. So really what we're trying to do in the parsing space is not author brand new parking and parsing engine ourselves, but really leverage a lot of the open source tools that are already out there bringing them into Ansible, so network engineers can now harvest the critical information from usher operational state commands on their network devices. And then once they've gotten to the structure data, things get really interesting because now you can do entrance criteria checks prior to doing configuration changes, right? So if you want to ensure a network device has a very particular operational state, all the BGP neighbors are, for example before pushing configuration changes, what we have the ability to do now is actually parse the command that you would have run from the command line. Use that within a decision tree in your Ansible playbook, and only move forward when the configuration changes. If the box is healthy. And then once the configuration changes are made at the end, you run those same health checks to ensure that you're in a speck can do a steady state and are production ready. So parsing is the mechanism. It's the data that you get from the parsing that's so critical. >> If I had to ask you real quick, just while it's on my mind. You know, people want to know about automation. It's top of mind use case. What are some of these things around automation and configuration parsing, whether it's parsing to other configuration manager, what are the big challenges around automation? Because it's the Holy grail. Everyone wants it now. What are the couches? where's the hotspots that needs to be jumped on and managed carefully? Or the easiest low hanging fruit? >> Well, there's really two pieces to it, right? There's the technology. And then there's the culture. And, and we talk really about a culture of automation, bringing the team with you as you move into automation, ensuring that everybody has the tools and they're familiar with how automation is going to work and how their day job is going to change because of automation. So I think once the organization embraces automation and the culture is in place. On the technology side, low hanging fruit automation can be as simple as just using Ansible to push the commands that you would have previously pushed to the device. And then as your organization matures, and you mature along this kind of path of network automation, you're dealing with larger pieces, larger sections of the configuration. And I think over time, network engineers will become data managers, right? Because they become less concerned about the network, the vendors specific configuration, and they're really managing the data that makes up the configuration. And I think once you hit that part, you've won at automation because you can move forward with Ansible resource modules. You're well positioned to do NETCONF for RESTCONF or... Right once you've kind of grown to that it's the data that we need to be concerned about and it could fit (indistinct) and the operational state management piece, you're going to go through a transformation on the networking side. 
>> So you mentioned-- >> And one thing to note there, if I may, I feel like a piece of this too, is you're able to actually bridge teams because of the capability of Ansible, the breadth of technologies that we've had integrations with and our ability to actually bridge that gap between different technologies, different teams. Once you have that culture of automation, you can start to realize these DevOps and DevSecOps workflow styles that are top of everybody's mind these days. And that's something that I think is very powerful. And I like to try to preach when I have the opportunity to talk to folks about what we can do, and the fact that we have so much capability and so many integrations across the entire industry. >> That's a great point. DevSecOps is totally a hop on. When you have software and hardware, it becomes interesting. There's a variety of different equipment, on the security automation. What kind of security appliances can you guys automate? >> As of today, we are able to do endpoint management systems, enterprise firewalls, security information, and event management systems. We're able to do security orchestration, automation, remediation systems, privileged access management systems. We're doing some threat intelligence platforms. And we've recently added to the I'm sorry, did I say intrusion detection? We have intrusion detection and prevention, and we recently added endpoint security management. >> Huge, huge value there. And I think everyone's wants that. Jill, I've got to ask you about the Cloud because the modules came up. What use cases do you see the Ansible modules in for the public cloud? Because you got a lot of cloud native folks in public cloud, you've got enterprises lifting and shifting, there's a hybrid and multicloud horizon here. What's some of the use cases where you see those Ansible modules fitting well with public level. >> The modules that we have in public cloud can work across all of those things, you know. In our public clouds, we have support for Amazon web services, Azure GCP, and they all support your main services. You can spin up a Lambda, you can deploy ECS clusters, build AMI, all of those things. And then once you get all of that up there, especially looking at AWS, which is where I spend the most time, you get all your EC2 instances up, you can now pull that back down into Ansible, build an inventory from that. And seamlessly then use Ansible to manage those instances, whether they're running Linux or windows or whatever distro you might have them running, we can go straight from having deployed all of those services and resources to managing them and going between your instances in your traditional operating system management or those instances and your cloud services. And if you've got multiple clouds or if you still have on prem, or if you need to, for some reason, add those remote cloud instances into some sort of on-prem hardware load balancer, security endpoint, we can go between all of those things and glue everything together, fairly seamlessly. You can put all of that into tower and have one kind of view of your cloud and your hardware and your on-prem and being able to move things between them. >> Just put some color commentary on what that means for the customer in terms of, is it pain reduction, time savings? How would you classify their value? >> I mean, both. Instead of having to go between a number of different tools and say, "Oh, well for my on-prem, I have to use this. 
"But as soon as I shift over to a cloud, "I have to use these tools. "And, Oh, I can't manage my Linux instances with this tool "that only knows how to speak to, the EC2 to API." You can use one tool for all of these things. So like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's, that's pretty killer. >> All right. Now I get to the fun part. I want you guys to weigh in on the Kubernetes. Adam, if you can start with you, we'll start with you go in and tell us why is Kubernetes more important now? What does it mean? A lot of hype continues to be out there. What's the real meet around Kubernetes what's going on? >> I think the big thing is the modernization of the application development delivery. When you talk about Kubernetes and OpenShift and the capabilities we have there, and you talk about the architecture, you can build a lot of the tooling that you used to have to maintain, to be able to deliver sophisticated resilient architectures in your application stack, are now baked into the actual platform, so the container platform itself takes care of that for you and removes that complexity from your operations team, from your development team. And then they can actually start to use these primitives and kind of achieve what the cloud native compute foundation keeps calling cloud native applications and the ability to develop and do this in a way that you are able to take yourself out of some of the components you used to have to babysit a lot. And that becomes in also with the OpenShift operator framework that came out of originally Coral S, and if you go to operator hub, you're able to see these full lifecycle management stacks of infrastructure components that you don't... You no longer have to actually, maintain a large portion of what you start to do. And so the operator SDK itself, are actually developing these operators. Ansible is one of the automation capabilities. So there's currently three supported there's Ansible, there's one that you just have full access to the Golang API and then helm charts. So Ansible's specifically obviously being where we focus. We have our collection content for the... carries that core, and then also ReHat to OpenShift certified collection's coming out in, I think, a month or so. Don't hold me to the timeline. I'm shoving in trouble for that one, but we have those things going to come out. Those will be baked into the operator's decay that we fully supported by our customer base. And then we can actually start utilizing the Ansible expertise of your operations team to container native of the infrastructure components that you want to put into this new platform. And then Ansible itself is able to build that capability of automating the entire Kubernetes or OpenShift cluster in a way that allows you to go into a brownfield environment and automate your existing infrastructure, along with your more container native, futuristic next generation, net structure. >> Jill this brings up the question. Why don't you just use native public cloud resources versus Kubernetes and Ansible? What's the... What should people know about where you use that, those resources? >> Well, and it's kind of what Adam was saying with all of those brownfield deployments and to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud. There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. 
And with OpenShift, it's one more layer that lets you put everything into a kind of single environment instead of having to break things up and say, "Oh, well, this application has to go here. "And this application has to be in this environment.' You can do that across a public cloud and use a little of this component and a little of that component. But if you can bring everything together in OpenShift and manage it all with the same tools on the same platform, it simplifies the landscape of, I need to care about all of these things and look at all of these different things and keep track of these and are my tools all going to work together and are my tools secure? Anytime you can simplify that part of your infrastructure, I think is a big win. >> John: You know, I think about-- >> The one thing, if I may, Jill spoke to this, I think in the way that a architectural, infrastructure person would, but I want to try to really quick take the business analyst component of it as the hybrid component. If you're trying to address multiple footprints, both on prem, off prem, multiple public clouds, if you're running OpenShift across all of them, you have that single, consistent deployment and development footprint for everywhere. So I don't disagree with anything they said, I just wanted to focus specifically on... That piece is something that I find personally unique, as that was a problem for me in a past life. And that kind of speaks to me. >> Well, speaking of past lives-- >> Having me as an infrastructure person, thank you. >> Yeah. >> Well, speaking of past lives, OpenStack, you look at Jill with OpenStack, we've been covering the Cuba thing when OpenStack was rolling out back in the day, but you can also have private cloud. Where you used to... There's a lot of private cloud out there. How do you talk about that? How do people understand using public cloud versus the private cloud aspect of Ansible? >> Yeah, and I think there is still a lot of private cloud out there and I don't think that's a bad thing. I've kind of moved over onto the public cloud side of things, but there are still a lot of use cases that a lot of different industries and companies have that don't make sense for putting into public cloud. So you still have a lot of these on-prem open shift and on-prem OpenStack deployments that make a ton of sense and that are solving a bunch of problems for these folks. And I think they can all work together. We have Ansible that can support both of those. If you're a telco, you're not going to put your network function, virtualization on USC as to one in spot instances, right? When you call nine one one, you don't want that going through the public cloud. You want that to be on dedicated infrastructure, that's reliable and well-managed and engineered for that use case. So I think we're going to see a lot of ongoing OpenStack and on-prem OpenShift, especially with edge, enabling those types of use cases for a long time. And I think that's great. >> I totally agree with you. I think private cloud is not a bad thing at all. Things that are only going to accelerate my opinion. You look at the VM world, they talked about the telco cloud and you mentioned edge when five G comes out, you're going to have basically have private clouds everywhere, I guess, in my opinion. But anyway, speaking of VMware, could you talk about the Ansible VMware module real quick? >> Yeah, so we have a new collection that we'll be debuting at Ansible Fest this year bore the VMware REST API. 
So the existing VMware modules that we have use the SOAP API for VMware, and they rely on an external Python library that VMware provides. But with vSphere 6.0, and especially in vSphere 6.5, VMware has stepped up with a REST API endpoint that we find is a lot more performant and offers a lot of options. So we built a new collection of VMware modules that will take advantage of that. It's brand new, it's lighter weight, it's much faster, we get better performance out of it, you know, reduced external requirements — you can install it and get started faster. And especially with vSphere 7 continuing to build on this REST API, we're going to see more and more interfaces being exposed so that we can take advantage. We plan to expand it as new interfaces are exposed in that API. It's compatible with all of the existing modules — you can go back and forth, use your existing playbooks, and start introducing these. But I think, especially on the performance side, and especially as we get these larger clouds and more cloud deployments, edge clouds, where you have these private clouds in lots and lots of different places, the performance benefits of this new collection that we're trying to build are going to be really, really powerful for a lot of folks. >> Awesome. Brad, we didn't forget about you; we're going to bring you back in. Network automation has moved towards the resource modules. Why should people care about them? >> Yeah, resource modules — probably, I think, having been a network engineer for so long, some of the most exciting work that has gone into Ansible network over the past year and a half. What the resource modules really do for you is they will reach out to network devices, they will pull back that network-native, that vendor-native configuration. The resource module actually does the parsing for you, so there's none of that with the resource modules, and we return structured data back to the user that represents the configuration. Going back to your question about source of truth: you can take that structured data — maybe for your interface config, your OSPF config, your access-list config — and you can store that data in your source of truth, under source control. And then, where you are moving forward, you really spend time as an engineer managing the data that makes up the configuration, and you can share that data across different platforms. So if you were to look at a lot of the resource modules, the data model that they support is fairly consistent between vendors. As an example, I can pull OSPF configuration from one vendor and, with very small changes, push that OSPF configuration to a different vendor's platform. So really what we've tried to do with the resource modules is normalize the data model across vendors. It'll never be a hundred percent, because there's functionality that exists in one platform that doesn't exist in another and that's exposed through the configuration, but where we could, we have normalized the data model. So I think it's really introducing the concept of network configuration management through data management, and not through CLI commands anymore. >> Yeah, that's a great point. It just expands the network automation vision. And one of the things that's interesting here in this panel is you're talking about cloud holistically — public, multicloud, private, hybrid — security, network automation, as a platform, not just a tool; we're still going to have all kinds of tools out there.
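To ground the resource-module point Brad just made, here is a minimal, hedged sketch of pulling vendor configuration back as structured data and writing it into a repository that can act as the source of truth. The platform, module, and file layout are illustrative assumptions; the "state: gathered" usage is the general shape of the resource modules.

    ---
    # Hedged sketch: gather interface configuration as structured data and
    # store it for a source-of-truth repository. Paths and platform are examples.
    - name: Gather interface configuration as data
      hosts: ios_routers
      gather_facts: false
      tasks:
        - name: Pull the running interface configuration back as structured data
          cisco.ios.ios_interfaces:
            state: gathered
          register: iface_state

        - name: Save the structured data alongside the inventory
          ansible.builtin.copy:
            content: "{{ iface_state.gathered | to_nice_yaml }}"
            dest: "./host_vars/{{ inventory_hostname }}/interfaces.yml"
          delegate_to: localhost

Once the gathered data lives under source control, the same structured document can be fed back to a different vendor's resource module with only small changes, which is the normalization point made above.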
And then the importance of automating the edge. I mean, that's a network game, Brad. I mean, it's a data problem, right? I mean, we all know about networking, moving packets from here to there, but automating the data is critical, and if you have bad data, you don't have... If you have misinformation, it sounds like our current politics, but you know, bad information is bad automation. I mean, what's your thoughts? How do you share that concept with developers out there? What should they be thinking about in terms of the data quality? >> I think that's the next thing we have to tackle as network engineers. It's not, do I have access to the data? You can get the data now from resource modules, you can get the data from NETCONF, from RESTCONF, you can get it from OpenConfig, you can get it from parsing. The question really is, how do you ensure the integrity and the quality of the data that is making up your configurations, and the consistency of the data that you're using to look at operational state? And I think this is where the source of truth really becomes important. If you look at Git as a viable source of truth, you've got all the tools and the mechanisms within Git to use it as your source of truth for network configuration. So network engineers are actually becoming developers in the sense that they're using a GitOps workflow to manage configuration moving forward. It's just really exciting to see that transformation happen. >> Great panel. Thanks for everyone coming on, I appreciate it. We'll just end this by saying, if you guys could just quickly summarize Ansible Fest 2020 virtual, what should people walk away with? What should your customers walk away with this year? What are the key points? Jill, we'll start with you. >> Hopefully folks will walk away with the idea that the Ansible community includes so many different folks from all over, solving lots of different, interesting problems, and that we can all come together and work together to solve those problems in a way that is much more effective than if we were all trying to solve them individually ourselves. By bringing those problems out into the open and working together, we get a lot done. >> Awesome, Brad? >> I'm going to go with collections, collections, collections. We introduced them last year. This year, they are real. Ansible 2.10 that just came out is made up of collections. We've got certified collections on Automation Hub. We've got cloud collections, network collections. So they are here. They're the real thing. And I think it just gets better and deeper with more content moving forward. All right, Adam? >> Going last is difficult, especially following these two. They covered a lot of ground, and I don't really know that I have much to add beyond the fact that when you think about Ansible, don't think about it in a single context. It is a complete automation solution. The capability that we have is very extensible. It's very pluggable, which is a standing ovation to the collections, and the solutions that we can come up with collectively, thanks to ourselves and everybody in the community, are almost infinite. A few years ago, one of the core engineers did a keynote speech using Ansible to automate Philips Hue light bulbs. Like, this is what we're capable of. We can automate Fortune 500 data centers and telco networks, and then we can also automate random IoT devices around your house. Like, we have a lot of capability here, and what we can do with the platform is very unique and something special.
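As a rough illustration of the resource-module and Git source-of-truth workflow Brad describes, the sketch below gathers interface configuration as structured data and commits it to a local repository. The inventory group ios_routers and the /srv/network-sot clone path are hypothetical, and the cisco.ios.ios_interfaces module with its gathered state is used here as one example of a resource module; adapt both to your own platform.

---
# Minimal sketch: use a network resource module to pull vendor config
# back as structured data, then store it in a Git-backed source of truth.
# The "ios_routers" group (network_cli connection) and the local clone at
# /srv/network-sot are assumptions for this example.
- name: Capture interface data into a source of truth
  hosts: ios_routers
  gather_facts: false
  tasks:
    - name: Pull the device's interface configuration as structured data
      cisco.ios.ios_interfaces:
        state: gathered
      register: interfaces

    - name: Make sure the device's directory exists in the repo
      ansible.builtin.file:
        path: "/srv/network-sot/{{ inventory_hostname }}"
        state: directory
        mode: "0755"
      delegate_to: localhost

    - name: Write the structured data into the repository
      ansible.builtin.copy:
        content: "{{ interfaces.gathered | to_nice_yaml }}"
        dest: "/srv/network-sot/{{ inventory_hostname }}/interfaces.yml"
      delegate_to: localhost

    - name: Commit the change so Git history becomes the audit trail
      ansible.builtin.shell: |
        git add {{ inventory_hostname }}/interfaces.yml
        git commit -m "Update interfaces for {{ inventory_hostname }}" || true
      args:
        chdir: /srv/network-sot
      delegate_to: localhost

The same gathered data could later be fed back through a resource module's merged or replaced states, which is the "manage the data, not the CLI commands" idea in practice.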
And it's very much thanks to the community, the team, the open source development way. I just, yeah-- >> (Indistinct) the open source of truth, being collaborative, is what makes it all up, with DevOps and Sec all happening together. Thanks for the insight. Appreciate the time. Thank you. >> Thank you. I'm John Furrier, you're watching theCUBE here for Ansible Fest 2020 virtual. Thanks for watching. (soft upbeat music)

Published Date : Sep 29 2020
