LaDavia Drane, AWS | International Women's Day

(bright music) >> Hello, everyone. Welcome to theCUBE special presentation of International Women's Day. I'm John Furrier, host of theCUBE. This is a global special open program we're doing every year. We're going to continue it every quarter. We're going to do more and more content, getting the voices out there and celebrating the diversity. And I'm excited to have an amazing guest here, LaDavia Drane, who's the head of Global Inclusion Diversity & Equity at AWS. LaDavia, we tried to get you in on AWS re:Invent, and you were super busy. So much going on. The industry has seen the light. They're seeing everything going on, and the numbers are up, but still not there, and getting better. This is your passion, our passion, a shared passion. Tell us about your situation, your career, how you got into it. What's your story? >> Yeah. Well, John, first of all, thank you so much for having me. I'm glad that we finally got this opportunity to speak. How did I get into this work? Wow, you know, I'm doing the work that I love to do, number one. It's always been my passion to be a voice for the voiceless, to create a seat at the table for folks that may not be welcome to certain tables. And so, it's been something that's been kind of the theme of my entire professional career. I started off as a lawyer, went to Capitol Hill, was able to do some work with members of Congress, both women members of Congress, but also, minority members of Congress in the US Congress. And then, that just morphed into what I think has become a career for me in inclusion, diversity, and equity. I decided to join Amazon because I could tell that it's a company that was ready to take it to the next level in this space. And sure enough, that's been my experience here. So now, I'm in it, I'm in it with two feet, doing great work. And yeah, yeah, it's almost a full circle moment for me. >> It's really an interesting background. You have a background in public policy. You mentioned Capitol Hill. That's awesome. DC kind of moves slow, but it's a complicated machinery there. Obviously, as you know, navigating that, Amazon grew significantly. We've been at every re:Invent with theCUBE since 2013, like just one year. I watched Amazon grow, and they've become very fast and also complicated, like, I won't say like Capitol, 'cause that's very slow, but Amazon's complicated. AWS is in the realm of powering a generation of public policy. We had the JEDI contract controversy, all kinds of new emerging challenges. This pivot to tech was great timing because one, (laughs) Amazon needed it because they were growing so fast in a male dominated world, but also, their business is having real impact on the public. >> That's right, that's right. And when you say the public, I'll just call it out. I think that there's a full spectrum of diversity and we work backwards from our customers, and our customers are diverse. And so, I really do believe, I agree that I came to the right place at the right time. And yeah, we move fast and we're also moving fast in this space of making sure that both internally and externally, we're doing the things that we need to do in order to reach a diverse population. >> You know, I've noticed how Amazon's changed from the culture, male dominated culture. Let's face it, it was. And now, I've seen over the past five years, specifically go back five, is kind of in my mental model, just the growth of female leaders, it's been impressive. And there was some controversy. They were criticized publicly for this. 
And we said a few things as well in those, like around 2014. How is Amazon ensuring and continuing to get the female employees feel represented and empowered? What's going on there? What programs do you have? Because it's not just doing it, it's continuing it, right? And 'cause there is a lot more to do. I mean, the half (laughs) the products are digital now for everybody. It's not just one population. (laughs) Everyone uses digital products. What is Amazon doing now to keep it going? >> Well, I'll tell you, John, it's important for me to note that while we've made great progress, there's still more that can be done. I am very happy to be able to report that we have big women leaders. We have leaders running huge parts of our business, which includes storage, customer experience, industries and business development. And yes, we have all types of programs. And I should say that, instead of calling it programs, I'm going to call it strategic initiatives, right? We are very thoughtful about how we engage our women. And not only how we hire, attract women, but how we retain our women. We do that through engagement, groups like our affinity groups. So Women at Amazon is an affinity group. Women in finance, women in engineering. Just recently, I helped our Black employee network women's group launch, BEN Women. And so you have these communities of women who come together, support and mentor one another. We have what we call Amazon Circles. And so these are safe spaces where women can come together and can have conversations, where we are able to connect mentors and sponsors. And we're seeing that it's making all the difference in the world for our women. And we see that through what we call Connections. We have an inclusion sentiment tracker. So we're able to ask questions every single day and we get a response from our employees and we can see how are our women feeling, how are they feeling included at work? Are they feeling as though they can be who they are authentically at Amazon? And so, again, there's more work that needs to be done. But I will say that as I look at the data, as I'm talking to engaging women, I really do believe that we're on the right path. >> LaDavia, talk about the urgent needs of the women that you're hearing from the Circles. That's a great program. The affinity circles, the groups are great. Now, you have the groups, what are you hearing? What are the needs of the women? >> So, John, I'll just go a little bit into what's becoming a conversation around equity. So, initially I think we talked a lot about equality, right? We wanted everyone to have fair access to the same things. But now, women are looking for equity. We're talking about not just leveling the playing field, which is equality, but don't give me the same as you give everyone else. Instead, recognize that I may have different circumstances, I may have different needs. And give me what I need, right? Give me what I need, not just the same as everyone else. And so, I love seeing women evolve in this way, and being very specific about what they need more than, or what's different than what a man may have in the same situation because their circumstances are not always the same and we should treat them as such. >> Yeah, I think that's a great equity point. I interviewed a woman here, ex-Amazonian, she's now a GSI, Global System Integrator. She's a single mom. And she said remote work brought her equity because people on her team realized that she was a single mom. 
And it wasn't the, how do you balance life, it was her reality. And what happened was, she had more empathy with the team because of the new work environment. So, I think this is an important point to call out, that equity, because that really makes things smoother in terms of the interactions, not the assumptions, you have to be, you know, always the same as a man. So, how does that go? What's the current... How would you characterize the progress in that area right now? >> I believe that employers are just getting better at this. It's just like you said, with the hybrid being the norm now, you have an employer who is looking at people differently based on what they need. And it's not a problem, it's not an issue that a single mother says, "Well, I need to be able to leave by 5:00 PM." I think that employers now, and Amazon is right there along with other employers, are starting just to evolve that muscle of meeting the needs. People don't have to feel different. You don't have to feel as though there's some kind of of special circumstance for me. Instead, it's something that we, as employers, we're asking for. And we want to meet those needs that are different in some situations. >> I know you guys do a lot of support of women outside of AWS, and I had a story I recorded for the program. This woman, she talked about how she was a nerd from day one. She's a tomboy. They called her a tomboy, but she always loved robotics. And she ended up getting dual engineering degrees. And she talked about how she didn't run away and there was many signals to her not to go. And she powered through, at that time, and during her generation, that was tough. And she was successful. How are you guys taking the education to STEM, to women, at young ages? Because we don't want to turn people away from tech if they have the natural affinity towards it. And not everyone is going to be, as, you know, (laughs) strong, if you will. And she was a bulldog, she was great. She's just like, "I'm going for it. I love it so much." But not everyone's like that. So, this is an educational thing. How do you expose technology, STEM for instance, and making it more accessible, no stigma, all that stuff? I mean, I think we've come a long way, but still. >> What I love about women is we don't just focus on ourselves. We do a very good job of thinking about the generation that's coming after us. And so, I think you will see that very clearly with our women Amazonians. I'll talk about three different examples of ways that Amazonian women in particular, and there are men that are helping out, but I'll talk about the women in particular that are leading in this area. On my team, in the Inclusion, Diversity & Equity team, we have a program that we run in Ghana where we meet basic STEM needs for a afterschool program. So we've taken this small program, and we've turned their summer camp into this immersion, where girls and boys, we do focus on the girls, can come and be completely immersed in STEM. And when we provide the technology that they need, so that they'll be able to have access to this whole new world of STEM. Another program which is run out of our AWS In Communities team, called AWS Girls' Tech Day. All across the world where we have data centers, we're running these Girls' Tech Day. They're basically designed to educate, empower and inspire girls to pursue a career in tech. Really, really exciting. I was at the Girls' Tech Day here recently in Columbus, Ohio, and I got to tell you, it was the highlight of my year. 
And then I'll talk a little bit about one more, it's called AWS GetIT, and it's been around for a while. So this is a program, again, it's a global program, it's actually across 13 countries. And it allows girls to explore cloud technology, in particular, and to use it to solve real world problems. Those are just three examples. There are many more. There are actually women Amazonians that create these opportunities off the side of their desk in they're local communities. We, in Inclusion, Diversity & Equity, we fund programs so that women can do this work, this STEM work in their own local communities. But those are just three examples of some of the things that our Amazonians are doing to bring girls along, to make sure that the next generation is set up and that the next generation knows that STEM is accessible for girls. >> I'm a huge believer. I think that's amazing. That's great inspiration. We need more of that. It's awesome. And why wouldn't we spread it around? I want to get to the equity piece, that's the theme for this year's IWD. But before that, getting that segment, I want to ask you about your title, and the choice of words and the sequence. Okay, Global Inclusion, Diversity, Equity. Not diversity only. Inclusion is first. We've had this debate on theCUBE many years now, a few years back, it started with, "Inclusion is before diversity," "No, diversity before inclusion, equity." And so there's always been a debate (laughs) around the choice of words and their order. What's your opinion? What's your reaction to that? Is it by design? And does inclusion come before diversity, or am I just reading it to it? >> Inclusion doesn't necessarily come before diversity. (John laughs) It doesn't necessarily come before equity. Equity isn't last, but we do lead with inclusion in AWS. And that is very important to us, right? And thank you for giving me the opportunity to talk a little bit about it. We lead with inclusion because we want to make sure that every single one of our builders know that they have a place in this work. And so it's important that we don't only focus on hiring, right? Diversity, even though there are many, many different levels and spectrums to diversity. Inclusion, if you start there, I believe that it's what it takes to make sure that you have a workplace where everyone knows you're included here, you belong here, we want you to stay here. And so, it helps as we go after diversity. And we want all types of people to be a part of our workforce, but we want you to stay. And inclusion is the thing. It's the thing that I believe makes sure that people stay because they feel included. So we lead with inclusion. Doesn't mean that we put diversity or equity second or third, but we are proud to lead with inclusion. >> Great description. That was fabulous. Totally agree. Double click, thumbs up. Now let's get into the theme. Embracing equity, 'cause this is a term, it's in quotes. What does that mean to you? You mentioned it earlier, I love it. What does embrace equity mean? >> Yeah. You know, I do believe that when people think about equity, especially non-women think about equity, it's kind of scary. It's, "Am I going to give away what I have right now to make space for someone else?" But that's not what equity means. And so I think that it's first important that we just educate ourselves about what equity really is. It doesn't mean that someone's going to take your spot, right? It doesn't mean that the pie, let's use that analogy, gets smaller. 
The pie gets bigger, right? >> John: Mm-hmm. >> And everyone is able to have their piece of the pie. And so, I do believe that I love that IWD, International Women's Day is leading with embracing equity because we're going to the heart of the matter when we go to equity, we're going to the place where most people feel most challenged, and challenging people to think about equity and what it means and how they can contribute to equity and thus, embrace equity. >> Yeah, I love it. And the advice that you have for tech professionals out there on this, how do you advise other groups? 'Cause you guys are doing a lot of great work. Other organizations are catching up. What would be your advice to folks who are working on this equity challenge to reach gender equity and other equitable strategic initiatives? And everyone's working on this. Sustainability and equity are two big projects we're seeing in every single company right now. >> Yeah, yeah. I will say that I believe that AWS has proven that equity and going after equity does work. Embracing equity does work. One example I would point to is our AWS Impact Accelerator program. I mean, we provide 30 million for early stage startups led by women, Black founders, Latino founders, LGBTQ+ founders, to help them scale their business. That's equity. That's giving them what they need. >> John: Yeah. >> What they need is they need access to capital. And so, what I'd say to companies who are looking at going into the space of equity, I would say embrace it. Embrace it. Look at examples of what companies like AWS is doing around it and embrace it because I do believe that the tech industry will be better when we're comfortable with embracing equity and creating strategic initiatives so that we could expand equity and make it something that's just, it's just normal. It's the normal course of business. It's what we do. It's what we expect of ourselves and our employees. >> LaDavia, you're amazing. Thank you for spending the time. My final couple questions really more around you. Capitol Hill, DC, Amazon Global Head of Inclusion, Diversity & Equity, as you look at making change, being a change agent, being a leader, is really kind of similar, right? You've got DC, it's hard to make change there, but if you do it, it works, right? (laughs) If you don't, you're on the side of the road. So, as you're in your job now, what are you most excited about? What's on your agenda? What's your focus? >> Yeah, so I'm most excited about the potential of what we can get done, not just for builders that are currently in our seats, but for builders in the future. I tend to focus on that little girl. I don't know her, I don't know where she lives. I don't know how old she is now, but she's somewhere in the world, and I want her to grow up and for there to be no question that she has access to AWS, that she can be an employee at AWS. And so, that's where I tend to center, I center on the future. I try to build now, for what's to come, to make sure that this place is accessible for that little girl. >> You know, I've always been saying for a long time, the software is eating the world, now you got digital transformation, business transformation. And that's not a male only, or certain category, it's everybody. And so, software that's being built, and the systems that are being built, have to have first principles. Andy Jassy is very strong on this. He's been publicly saying, when trying to get pinned down about certain books in the bookstore that might offend another group. 
And he's like, "Look, we have first principles. First principles is a big part of leading." What's your reaction to that? How would you talk to another professional and say, "Hey," you know this, "How do I make the right call? Am I doing the wrong thing here? And I might say the wrong thing here." And is it first principles based? What's the guardrails? How do you keep that in check? How would you advise someone as they go forward and lean in to drive some of the change that we're talking about today? >> Yeah, I think as leaders, we have to trust ourselves. And Andy actually, is a great example. When I came in as head of ID&E for AWS, he was our CEO here at AWS. And I saw how he authentically spoke from his heart about these issues. And it just aligned with who he is personally, his own personal principles. And I do believe that leaders should be free to do just that. Not to be scripted, but to lead with their principles. And so, I think Andy's actually a great example. I believe that I am the professional in this space at this company that I am today because of the example that Andy set. >> Yeah, you guys do a great job, LaDavia. What's next for you? >> What's next. >> World tour, you traveling around? What's on your plate these days? Share a little bit about what you're currently working on. >> Yeah, so you know, at Amazon, we're always diving deep. We're always diving deep, we're looking for root cause, working very hard to look around corners, and trying to build now for what's to come in the future. And so I'll continue to do that. Of course, we're always planning and working towards re:Invent, so hopefully, John, I'll see you at re:Invent this December. But we have some great things happening throughout the year, and we'll continue to... I think it's really important, as opposed to looking to do new things, to just continue to flex the same muscles and to show that we can be very, very focused and intentional about doing the same things over and over each year to just become better and better at this work in this space, and to show our employees that we're committed for the long haul. So of course, there'll be new things on the horizon, but what I can say, especially to Amazonians, is we're going to continue to stay focused, and continue to get at this issue, and doing this issue of inclusion, diversity and equity, and continue to do the things that work and make sure that our culture evolves at the same time. >> LaDavia, thank you so much. I'll give you the final word. Just share some of the big projects you guys are working on so people can know about them, your strategic initiatives. Take a minute to plug some of the major projects and things that are going on that people either know about or should know about, or need to know about. Take a minute to share some of the big things you guys got going on, or most of the things. >> So, one big thing that I would like to focus on, focus my time on, is what we call our Innovation Fund. This is actually how we scale our work and we meet the community's needs by providing micro grants to our employees so our employees can go out into the world and sponsor all types of different activities, create activities in their local communities, or throughout the regions. And so, that's probably one thing that I would like to focus on just because number one, it's our employees, it's how we scale this work, and it's how we meet our community's needs in a very global way. 
And so, thank you John, for the opportunity to talk a bit about what we're up to here at Amazon Web Services. But it's just important to me, that I end with our employees because for me, that's what's most important. And they're doing some awesome work through our Innovation Fund. >> Inclusion makes the workplace great. Empowerment, with that kind of program, is amazing. LaDavia Drane, thank you so much. Head of Global Inclusion and Diversity & Equity at AWS. This is International Women's Day. I'm John Furrier with theCUBE. Thanks for watching and stay with us for more great interviews and people and what they're working on. Thanks for watching. (bright music)

Published Date: Mar 2, 2023


Breaking Analysis: Enterprise Technology Predictions 2023

(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program. Again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data. If you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital... Digital transformation got, you know, some lip service and SaaS. And then there was other, it's kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with kind of tech spending. Number one is tech spending increases between four and 5%. ETR has currently got it at 4.6% coming into 2023. This has been a consistently downward trend all year. We started, you know, much, much higher as we've been reporting. Bottom line is the fed is still in control. They're going to ease up on tightening, is the expectation, they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending. 
The, the interesting thing about the ETR data, Eric, and I want you to comment on this, the largest companies are the most aggressive to cut. They're laying off, smaller firms are spending faster. They're actually growing at a much larger, faster rate as are companies in EMEA. And that's a surprise. That's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised on all of that. First on the higher level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. The huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing, we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500, global 2000, barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch up on digital transformation, and they're spending money. The largest companies that have the most to lose from a recession are being more trepidatious, obviously. So they're playing a "Wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of their research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called an orphan and widow stock group, right? They are spending more than anyone, more than financials insurance, more than retail consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this. You've, we've shown a chart here. What's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way, you know, far behind, cloud optimization is second, but it, but cloud continues to outpace legacy on-prem spending, no doubt. Somebody, it was, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction, said "All in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over, you know, individual, you know, point products. But I think what's going to happen is all in one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with that best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but data and security also, we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10. 
Another thing on this slide, real quick if I can before I move on, is we had a bunch of people write in and some of the answer options that aren't on this graph but did get cited a lot, unfortunately, is the obvious reduction in staff, hiring freezes, and delaying hardware, were three of the top write-ins. And another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that, but essentially the cost optimization is by and far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And so I actually thank you for bringing that other up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on and we don't see it in the numbers, we don't see it even in the other, there was, I think very little or no mention of cloud repatriation, even though it might be happening in this in a smattering. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up Eric, because I didn't really do a great job with the slide. I hid the, what you've done, because you basically took, this is from the emerging technology survey with 1,181 responses from November. And what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that were showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and take, give us more detail. We're seeing private company valuations are off, you know, 10 to 40%. We saw a sneak, do a down round, but pretty good actually only down 12%. We've seen much higher down rounds. Palo Alto Networks we think is going to get busy. Again, they're an inquisitive company, they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions and we're thinking that the targets are somewhere in this mess of security taxonomy. Other thing we're predicting AI meets cyber big time in 2023, we're going to probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, "crazy this April," (Eric laughing) but give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. So because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is in our public survey, the tesis, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies. Who also are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest. And so I don't know if they necessarily are going to go after Snyk. Snyk, sorry. 
They already have something in that space. What they do need, however, is more on the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that's already existing in their accounts and could be very synergistic to them. BeyondTrust as well, authentication identity. This is something that Palo needs to do to move more down that zero trust path. Now why did I pick Palo first? Because usually they're very inquisitive. They've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that, CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating in, around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe leading up to RSA, we're going to see some movement. I think it's going to pretty, a really exciting time in security right now. >> Awesome. Thank you. Great explanation. All right, let's go on the next one. Number four is, it relates to security. Let's stay there. Zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat in the bone. So first thing we do is we pull out some commentary from, Eric, your roundtable, your insights roundtable. And we have a CISO from a global hospitality firm says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that we want to do. CISOs tell me that they actually can drive forward transformation projects that have zero trust, and because they can accelerate them, because they don't have to go through the hurdle of, you know, getting, making sure that it's secure. Second comment, zero trust closes that last mile where once you're authenticated, they open up the resource to you in a zero trust way. That's a CISO of a, and a managing director of a cyber risk services enterprise. Your thoughts on this? >> I can be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There's a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push through it. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first, large companies were still trying to push people back to the office, and it's going to happen. 
The pendulum will swing back, but hybrid work's not going anywhere. By basically on our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanent in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication software defined aspect, and that's why we think this is going to happen. One last thing, in that CISO round table, I also had somebody say, "Listen, Zscaler is incredible. "They're doing incredibly well pervading the enterprise, "but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge. Do you and me, so I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. That's, but the marketing always leads the reality. So the second part of that is we got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know, I will. Okay, let's bring up the next one. Number five, generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT, we just wrote last week in a breaking analysis with John Furrier and Sarjeet Joha our take on that. We think 2023 does mark a pivot point as natural language processing really infiltrates enterprise tech just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign language commands, and investors are lining up, all the VCs are getting excited about creating something competitive to ChatGPT, according to (indistinct) a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion. But he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's, you know, AI saying, "Oh, it's really not that sophisticated, ChatGPT, "it's kind of like IBM Watson, it's great engineering, "but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched this survey for the February survey. It's in the field now. We circle open AI in this category. They weren't even in the survey, Eric, last quarter. 
So 52% of the ETR survey respondents indicated a positive sentiment toward open AI. I added up all the sort of different bars, we could double click on that. And then I got this inbound from Scott Stevenson of Deep Graham. He said "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse and that's a dud. Microsoft just spent money on open AI and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week. So my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector within a week. It's beating Data- >> 600. 600 in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey. It's number eight overall, already. In a week. This is not hype. This is real. And I could go on the NLP stuff for a while. Not only here are we seeing it in open AI and machine-learning and AI, but we're seeing NLP in security. It's huge in email security. It's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP in this email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, this is, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called, you know, the ChatGPT introduction. He said it reminded him of the Netscape moment, when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six, the cloud expands to supercloud as edge computing accelerates and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Kahn, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you. "In 2023, highly distributed IT environments "will become more the norm "as organizations increasingly deploy hybrid cloud, "multi-cloud and edge settings..." Eric, from one of your round tables, "If my sources from edge computing are coming "from the cloud, that means I have my workloads "running in the cloud. "There is no one better than CloudFlare," That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading, I think you had told me that they're even ahead of Google Cloud in terms of momentum right now. >> That was probably the biggest shock to me in our January 2023 tesis, which covers the public companies in the cloud computing sector. CloudFlare has now overtaken GCP in overall spending, and I was shocked by that. It's already extremely pervasive in networking, of course, for the edge networking side, and also in security. 
This is the number one leader in SaaSi, web access firewall, DDoS, bot protection, by your definition of supercloud, which we just did a couple of weeks ago, and I really enjoyed that by the way Dave, I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think it's, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you're can have those tools in the cloud, Cloudflare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is their really only other direct comp, and they're not there. And these guys are in poll position and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant, they said not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz, obviously crypto is, you know, the killer app for blockchain. Senior IT architect in financial services from your, one of your insight roundtables said quote, "For enterprises to adopt a new technology, "there have to be proven turnkey solutions. "My experience in talking with my peers are, "blockchain is still an open-source component "where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase sent in, you know, one of the predictions, he said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me, Solidity, which is the programming language for Ethereum, "will be in every DevOps pro's playbook, "mirroring the boom in machine-learning. "Newer programming languages like Solidity "will enter the toolkits of devs." His point there, you know, Solidity for those of you don't know, you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But it, Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others, really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either. 
So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real what this technology is, it's a database ledger, and we already have database ledgers in the enterprise. So why is this a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be a DevOps in a very specific use case, and a very sophisticated use case in financial services, most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight, AWS Databricks, Google Snowflake lead the data charge with Microsoft. Keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position for, I pulled data platforms in for analytics, machine-learning and AI and database. So I could grab all these accounts or these vendors and see how they compare in those three sectors. Analytics, machine-learning and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. They're, there's going to be a big focus. They're already started. It's going to be accelerated in 2023 on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit, last summer or last June. Not a lot of people had heard of it, but of course the Databricks crowd, who knows it well. A lot of other open source tooling. There's a company called DBT Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of DBT labs, on at supercloud last week. They are a new disruptor in data that's, they're essentially making, they're API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide. Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I got to ask Google about that. AWS continues to stitch together its bespoke data stores, that's gone down that "Right tool for the right job" path. David Foyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it in, at Reinvent, bringing together NoETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community, you know, about things like Cosmos, but Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up against them, and I don't know if that's the right move for either of those two companies individually, Azure and AWS are building out functionality. Are they as good? No they're not. 
The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of 'em. So (laughing) they're basically collecting their toll, while these two fight it out with each other, and they build out functionality. I think they need to stop focusing on each other, a little bit, and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool. They were known better for that spot, and now they're really trying to play catch-up on that data storage compute spot, and inversely for Snowflake, they were killing it with the compute separation from storage, and now they're trying to get into the MLAI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion there. The other thing to mention is your comment about DBT Labs. If we look at our emerging technology survey, last survey when this came out, DBT labs, number one leader in that data integration space, I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing, it's fourth straight survey consecutively that they've grown. The other name we're seeing there a little bit is Cribl, but DBT labs is by far the number one player in this space. >> All right. Okay, cool. Moving on, let's go to number nine. With Automation mixer resurgence in 2023, we're showing again data. The x axis is overlap or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UI path and Microsoft Power Automate up until the right, that red line, that 40% line is generally considered elevated. UI path is really separating, creating some distance from Automation Anywhere, they, you know, previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way, they loom large with this "Good enough" approach. I will say this, I, somebody sent me a results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR data set. We're definitely seeing a shift from back office to front soft office kind of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year, who want to automate as much as possible and not necessarily, we've seen a lot of layoffs in tech, and people... You're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. At first that comment about Microsoft Power Automate having less citations than UiPath, that's shocking to me. I'm looking at my chart right here where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. So UiPath, looking at it alone, it's doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one. So it's got a net score of over 55%. 
A lot of people citing adoption and increasing. So that's really what you want to see for a name like this. The problem is that just Microsoft is doing its playbook. At the end of the day, I'm going to do a POC, why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how does UiPath, you know, and Automation Anywhere, how do they compete with that? Well, the way they compete with it is they got to have a better product. They got a product that's 10 times better. You know, they- >> Right. >> they're not going to compete based on where the lowest cost, Microsoft's got that locked up, or where the easiest to, you know, Microsoft basically give it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Ensslin, I've interviewed him. Very, very capable individual, is now Co-CEO. So he's kind of bringing that adult supervision in, and really tightening up the go to market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could be good things ahead there for that company. Okay. >> One of the problems, if I could real quick Dave, is what the use cases are. When we first came out with RPA, everyone was super excited about like, "No, UiPath is going to be great for super powerful "projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. The enterprise are using this for base rote tasks, and simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCube that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well you know, "how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, "they like Power Automate, "they can use it themselves, and you know, "it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10, hybrid events are the new category. Look it, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction though, and this is a trend we're seeing, the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with Reinvent, I think Snowflake's going to continue to grow. 
So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there's going to be many more smaller, intimate regional events and road shows. These micro-events, they're going to be stitched together. Digital is becoming a first-class citizen, so people really got to get their digital acts together, and brands are prioritizing earned media, and they're beginning to build their own news networks, going direct to their customers. And so that's a trend we see, and I, you know, we're right in the middle of it, Eric, so you know we're going to, you mentioned RSA, I think that's perhaps going to be one of those crazy ones that continues to grow. It's shrunk, and then it, you know, 'cause last year- >> Yeah, it did shrink. >> Right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller but it was very vibrant, and I think this year's going to be huge. Mobile World Congress is another one, we're going to be there end of Feb. That's obviously a big, big show, but in general, the brands and the technology vendors, even Oracle is going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions. Quantum and maybe some others? Bring us home. >> Yeah, sure. I got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately, you know, the Talends, Informatica, some of those names. The problem there is that the BI tools are kind of including data prep into it already. You know, an example of that is Tableau Prep Builder, and then in addition, advanced NLP is being worked in as well. ThoughtSpot, Intelius, both often say that as their selling point, Tableau has Ask Data, Qlik has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar, or even just speaking into what it needs, and these tools are kind of doing the data prep for it. I don't think that's a, you know, an out in left field type of prediction, but the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion and mindshare. So more and more people are citing it, AWS Neptune's getting its act together, and we're seeing that spending intentions are growing there. TigerGraph is also growing in our survey sample. I just think that the time is now for knowledge graphs to break through, and if I had to do one more, I'd say real-time streaming analytics moves from the very, very rich big enterprises downstream, more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a Cube session power panel with a number of data analysts, and that, you know, streaming, real-time streaming was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey in- >> Been a nuts three weeks for us. (laughing) >> job.
I love the fact that you're doing, you know, the ETS survey now, I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. This is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care Dave. Good seeing you. >> All right, many thanks to our team here, Alex Myerson as production, he manages the podcast force. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hoof is our editor-in-chief. He's at siliconangle.com. He's just a great editing for us. Thank you all. Remember all these episodes that are available as podcasts, wherever you listen, podcast is doing great. Just search "Breaking analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You could DM me at dvellante, or comment on our LinkedIn post and please check out etr.ai. Its data is amazing. Best survey data in the enterprise tech business. This is Dave Vellante for theCube Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)
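On the real-time streaming prediction above, part of why it can move downstream is that the core computation is not exotic. Here is a minimal, dependency-free sketch of a sliding-window aggregation over an event stream; the event shape, window size, and sample values are illustrative assumptions, and a production pipeline would use a streaming engine rather than a single process.

from collections import deque

class SlidingWindowAverage:
    """Rolling average over the last `window_seconds` of observations."""

    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds
        self.events = deque()  # (timestamp, value) pairs, oldest first

    def add(self, timestamp, value):
        self.events.append((timestamp, value))
        # Drop observations that have fallen out of the window.
        while self.events and timestamp - self.events[0][0] > self.window_seconds:
            self.events.popleft()

    def average(self):
        if not self.events:
            return None
        return sum(v for _, v in self.events) / len(self.events)

# Illustrative usage with synthetic latency readings, one every 10 seconds.
window = SlidingWindowAverage(window_seconds=60)
for i, latency_ms in enumerate([120, 95, 101, 340, 98]):
    window.add(timestamp=i * 10, value=latency_ms)
print(window.average())  # average of the readings still inside the 60-second window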

Published Date : Jan 29 2023



Whit Crump, AWS Marketplace | Palo Alto Networks Ignite22


 

>> The Cube presents Ignite 22, brought to you by Palo Alto Networks. >> Hey guys, welcome back to the Cube, the leader in live enterprise and emerging tech coverage. We are live in Las Vegas at the MGM Grand Hotel, Lisa Martin with Dave Vellante, covering Palo Alto Ignite 22 in person for our first time. Dave, we've had some great conversations so far. We've got two days of wall-to-wall coverage. We're gonna be talking with Palo Alto execs, leaders, customers, partners, and we're gonna be talking about the partner ecosystem next. >> Wow. Super important. You know, it's funny, you talk about for a minute, you didn't know where we were. I, I came to Vegas in May. I feel like I never left. Two weeks ago, re:Invent, which was, I, I thought the most awesome re:Invent ever. And it was really all about the ecosystem and the marketplace. So super excited to have that conversation. >> Yeah, we've got Whit Crump joining us, director of Americas business development, worldwide channels and customer programs at AWS Marketplace. Whit, welcome to the Cube. Great to have you. >> Thanks for having me. >> Give us a, you got a big title there. Give us a little bit of flavor of your scope of work at AWS. >> Yeah, sure. So I, I've been with the marketplace team now almost eight years and originally founded our channel programs. And my scope has expanded to not just cover channels, but all things related to customers. So if you think about marketplace having sort of two sides, one being very focused on the ISV, I tend to manage all things related to our end customer and our, our channel partners. >> What is some of the feedback that you're getting from customers and channel partners as the marketplace has evolved so much? >> Yeah. You know, it's, it's, it's been interesting to watch over the course of the years, getting to see it start in its infancy and grow up. One of the things that we hear often from customers and from our channel partners, and maybe not so directly, is it's not about finding the things they necessarily want to buy, although that's important, but it's the actual act of how they're able to purchase things and making that a much more streamlined process, especially in large enterprises where there's a lot of complexity. We wanna make that a lot simpler for our customers. >> I mean, vendor management is such a hassle, right? But, so when I come into the marketplace, it's all there. I got a console, it's integrated, I choose what I want. The billing is simplified. How has that capability evolved since the time that you've been at AWS, and where do you, where do you want to take it? >> Yeah, so when we, we first started marketplace, it was really a pay-as-you-go model. Customers come, they buy whatever, you know, whatever the, the, whatever the solution was. And then it was, you know, charged by the hour and then the year. And one of the things that we discovered through customer and partner feedback was, especially when they're dealing with large enterprise purchases, you know, they want to be able to instantiate those custom price and terms, you know, into that contract while enjoying the benefits of, of marketplace. And that's been, I think, the biggest evolution: it started in 2017 with private offers, 2018 with consulting partner private offers. And then we've added things on over time to streamline procurement for, for customers.
So one of the hottest topics right now, everybody wants to talk about the macro and the headwinds and everything else, but when you talk to customers like, look, I gotta do more with less, less, that's the big theme. Yeah. And, and I wanna optimize my spend. Cloud allows me to do that because I can dial down, I can push storage to, to lower tiers. There's a lot of different things that I can do. Yeah. What are the techniques that people are using in the ecosystem Yeah. To bring in the partner cost optimization. Yeah. >>And so one of the key things that, that partners are, are, are doing for customers, they act as that trusted advisor. And, you know, when using marketplace either directly or through a partner, you know, customers are able to really save money through a licensing flexibility. They're also able to streamline their procurement. And then if there's an at-risk spin situation, they're able to, to manage that at-risk spend by combining marketplace and AWS spin into into one, you know, basically draws down their commitments to, to the company. >>And we talk about ask at-risk spend, you might talk about user or lose IT type of spend, right? Yeah. And so you, you increase the optionality in terms of where you can get value from your cloud spend. That's >>All right. Customers are thinking about their, their IT spend more strategically now more than ever. And so they're not just thinking about how do I buy infrastructure here and then software here, data services, they wanna combine this into one place. It's a lot less to keep up with a lot, a lot less overhead for them. But also just the simplification that you alluded to earlier around, you know, all the billing and vendor management is, and now in one, one streamlined, one streamlined process. Talk >>About that as a facilitator of organizations being able to reduce their risk profile. >>Yeah, so, you know, one of the things that, that came out earlier this year with Forrester was a to were total economic impact studies for both an ISV and for the end customer. But there was also a thought leadership study done where they surveyed over 700 customers worldwide to sort of get their thoughts on procurement and risk profile management. And, and one of the things that was really, you know, really surprising was is was that, you know, I guess it was like over 78% of of respondents DEF stated that they didn't feel like their, their companies had a really well-defined governance model and that over half of software and data purchases actually went outside of procurement. And so the companies aren't really able to, don't, they don't really have eyes on all of this spin and it's substantial >>And that's a, a huge risk for the organization. >>Yeah. Huge risk for the organization. And, and you know, half of the respondents stated outright that like they viewed marketplaces a way for them to reduce their risk profile because they, they were able to have a better governance model around that. >>So what's the business case can take us through that. How, how should a customer think about that? So, okay, I get that the procurement department likes it and the CFO probably likes it, but how, what, what's the dynamic around the business? So if I'm a, let's say I'm, I'm a bus, I'm a business person, I'm a, and running the process, I got my little, I get my procurement reach around. Yeah. What does the data suggest that what's in it from me, right? From a company wide standpoint, you know, what are the, maybe the Forester guys address this. 
So yeah, that overall business case I think is important. >>Yeah, I think, I think one of the big headlines for the end customer is because of license flexibility is that is is about a 10% cost savings in, in license cost. They're able to right size their purchases to buy the things they actually need. They're not gonna have these big overarching ELAs. There's gonna be a lot of other things in there that, that they don't, they don't really aren't gonna really directly use. You're talking about shelfware, you know, that sort of the classic term buy something, it never gets used, you know, also from just a, a getting things done perspective, big piece of feedback from customers is the contracting process takes a long time. It takes several months, especially for a large purchase. And a lot of those discussions are very repetitive. You know, you're talking about the same things over and over again. And we actually built a feature called standardized contract where we talked to a number of customers and ISVs distilled a contract down into a, a largely a set of terms that both sides already agreed to. And it cuts that, that contract time down by 90%. So if you're a legal team in a company, there's only so many of you and you have a lot of things to get done. If you can shave 90% off your time, that that's, that's now you can now work on a lot of other things for the, the corporation. Right. >>A lot of business impact there. You think faster time to value, faster time to market workforce optimization. >>Yeah. Yeah. I mean, it, it, you know, from an ISV standpoint, the measurement is they're, they're able to close deals about 40% faster, which is great for the isv. I mean obviously they love that. But if you're a customer, you're actually getting the innovative technologies you need 40% faster. So you can actually do the work you want to take it to your customers and drive the business. >>You guys recently launched, what is it, vendor Insights? Yeah. Talk a little bit about that, the value. What are some of the things that you're seeing with that? >>Yeah, so that goes into the, the onboarding value add of marketplaces. The number of things that go into, to cutting that time according to Forrester by 75%. But Vendor Insights was based on a key piece, offa impact from customers. So, you know, marketplace is used for, one of the reasons is discoverability by customers, Hey, what is the broader landscape? Look for example of security or storage partners, you know, trying to, trying to understand what is even available. And then the double click is, alright, well how does that company, or how does that vendor fit into my risk profile? You know, understanding what their compliance metrics are, things of that nature. And so historically they would have to, a customer would've to go to an ISV and say, all right, I want you to fill out this form, you know that my questionnaire. And so they would trade this back and forth as they have questions. Now with vendor insights, a customer can actually subscribe to this and they're able to actually see the risk profile of that vendor from the inside out, you know, from the inside of their SaaS application, what does it look like on a real time basis? And they can go back and look at that whenever they want. And you know, the, the, the feedback since the launch has been fantastic. And that, and I think that helps us double down on the already the, the onboarding benefits that we are providing customers. 
>> This, this, I wanna come back to this idea of cost optimization and, and try to tie it into predictability. You know, a lot of people, you know, complain, oh, I got surprised at the end of the month. So if I understand it, Whit, by, by leveraging the marketplace and the breadth that you have in the marketplace, I can say, okay, look, I'm gonna spend X amount on tech. Yeah. And, and this approach allows me to say, all right, because right now procurement, or historically procurement's been a bunch of stovepipes, I can't take from here and easily put it over there. Right. You're saying that this not only addresses the sort of cost optimization, does it also address the predictability challenge? >> Yeah, and I, I think another way to describe that is, is around cost controls. And you know, just from a reporting perspective, you know, we, we have what are called cost utilization reports, or CUR files. And we provide those to customers anytime they want and they can load those into Tableau, use whatever analysis tools that they want to be able to use. And so, and then you can actually tag usage in those reports. And what we're really talking about is helping customers adopt FinOps practices. So, you know, developing directly for the cloud, customers are able to understand, okay, who's using what, when and where. So everyone's informed, that creates a really collaborative environment. It also holds people accountable for their spend. So that, you know, again, talking about shelfware, we bought things we're not gonna use, or we're overusing, people are using software that they probably don't really need to. And so that's, that adds to that predictability: everyone has great visibility into what's happening. >> And there's another, I mean, of course saving money is, is, is in vogue right now because, you know, the headwinds and the economics, et cetera. But there's also another side of the equation, which is, I mean, I see this a lot. You know, the CFO says, the financial people, why is our cloud bill so high? Well, it's because we're actually driving all this revenue. And so, you know, you've seen it so many, so often in companies, you know, the, the spreadsheet analysis says, oh, cut that. Well, what happens to revenue if you cut that? Right? Yeah. So with that visibility, the answer may be, well, actually if we double down on that, yeah, we're actually gonna make more money 'cause we actually have a margin on this and it's, it's got operating leverage. So if we double that, you know, we could, so that kind of cross-organization communication to make better decisions, I think is another key factor. Yeah. >> Huge impact there. Talk ultimately about how the buyer's journey seems to have been really transformed. >> Correct. Right? So if you're, if you're a buyer, you know, initially, to your point, is, you know, I'm just looking for a point solution, right? And then you move on to the next one and the next one. And now, you know, working with our teams and using the platform, you know, and frankly, customers are thinking more strategically about their IT spend holistically. The conversations that we're having with us is, it's not about how do I find the solution today, but here's my forward-looking software spend, or I'm going through a migration, I wanna rationalize the software portfolio I have today as I'm gonna lift and shift it to AWS. You know, what is going to make the trip? What are we gonna discard entirely because it's not really optimized for the cloud?
Or there's that shelf wheel component, which is, hey, you know, maybe 15 to 25% of my portfolio, it's just not even getting utilized. And that, and that's a sunk cost to your point, which is, you know, that's, that's money I could be using on something that really impacts the bottom line in various areas of the business. Right. >>What would you say is the number one request you get or feedback you get from the end customers? And how is that different from what you hear from the channel partners? How aligned or Yeah. Are those >>Vectors? I would say from a customer perspective, one of the key things I hear about is around visibility of spin, right? And I was just talking about these reports and you know, using cost optimization tools, being able to use features like identity and access management, managing entitlements, private marketplaces. Basically them being able to have a stronger governance model in the cloud. For one thing, it's, it's, you know, keeping everybody on track like some of the points I was talking about earlier, but also cost, cost optimization around, you know, limiting vendor sprawl. Are we actually really using all the things that we need? And then from a channel partner perspective, you know, some of the things I talked about earlier about that 40% faster sales cycle, you know, that that TEI or the total economic impact study that was done by Forrester was, was built for the isv. >>But if you're a channel partner sitting between the customer and the isv, you kind of get to, you get a little bit of the best of both worlds, right? You're acting as that, you're acting as that that advisor. And so if you're a channel partner, the procurement streamlining is a huge benefit because the, you know, like you said, saving money is in vogue right now. You're trying to do more with less. So if you're thinking about 20, 27% faster win rates, 40% faster time to close, and you're the customer who's trying to impact the bottom line by, by innovating more, more quickly, those two pieces of feedback are really coming together and meeting in, in the middle >>Throughout 2021, or sorry, 2022, our survey partner, etr Enterprise Technology Research has asked their panel a question is what's your strategy for, you know, doing more with less? By far the number one response has been consolidating redundant vendors. Yes. And then optimizing cloud was, you know, second, but, but way, way lower than that. The number from last survey went from 34%. It's now up to 44% in the January survey, which is in the field, which they gave me a glimpse to last night. So you're seeing dramatic uptick Yeah. In that point. Yeah. And then you guys are helping, >>We, we definitely are. I mean, it, there's the reporting piece so they have a better visibility of what they're doing. And then you think about a, a feature like private marketplace and manage entitlements. So private marketplace enables a customer to create their own private marketplace as the name states where they can limit access to it for certain types of software to the actual in customer who needs to use that software. And so, you know, not everybody needs a license to software X, right? And so that helps with the sprawl comment to your point, that's, that's on the increase, right? Am I actually spending money on things that we need to use? 
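The reporting workflow described above, pulling the cost and usage data, tagging it, and seeing who is spending what, lends itself to simple scripting as well as BI tools like Tableau. A minimal sketch, assuming a Cost and Usage Report exported as CSV; the file name, tag key, and column names follow the common CUR layout but are assumptions here and should be checked against your own report definition.

import pandas as pd

# Minimal sketch: summarize spend by product and by team tag from a
# Cost and Usage Report (CUR) export. Column names are assumptions based
# on the standard CUR CSV layout -- verify against your own report.
cur = pd.read_csv("cur-2023-01.csv", low_memory=False)

spend_by_product = (
    cur.groupby("product/ProductName")["lineItem/UnblendedCost"]
    .sum()
    .sort_values(ascending=False)
)
print(spend_by_product.head(10))

# If cost-allocation tags are activated, usage can be attributed to teams.
tag_col = "resourceTags/user:Team"  # hypothetical tag key
if tag_col in cur.columns:
    spend_by_team = (
        cur.groupby(tag_col)["lineItem/UnblendedCost"].sum().sort_values(ascending=False)
    )
    print(spend_by_team.head(10))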
>>But also on the consolidation front, you, we, we talked with nikesh an hour or so ago, he was mentioning on stage, if you, if you just think of this number of security tools or cybersecurity tools that an organization has on its network, 30 to 50. And we were talking about, well, how does Palo Alto Networks what's realistic in terms of consolidation? But it sounds like what you're doing in the marketplace is giving organizations the visibility, correct, for sure. Into what they're running, usage spend, et cetera, to help facilitate ultimately at some point facilitate a strategic consolidation. >>It's, that's exactly right. And if you, you think about cost optimization, our procurement features, you know, the, the practice that we're trying to help customers around, around finops, it's all about helping customers build a, a modern procurement practice and supply chain. And so that helps with, with that point exactly. The keynotes >>Point. Exactly. So last question for you. What, what's next? What can we expect? >>Oh, so what's next for me is, you know, I, I really want to, you know, my channel business for example, you know, I want to think about enabling new types of partners. So if we've worked really heavily with resellers, we worked very heavily with Palo Alto on the reseller community, how are we bringing in more services partners of various types? You know, the gsi, the distributors, cloud service providers, managed security service providers was in a keynote yesterday listening to Palo Alto talk about their five routes to market. And, you know, they had these bubbles. And so I was like, gosh, that's exactly how I'm thinking about the business is how am I expanding my own footprint to customers that have deeper, I mean, excuse me, to partners that have deeper levels of cloud knowledge, can be more of that advisor, help customers really understand how to maximize their business on aws. And, and you know, my job is to really help facilitate that, that innovative technology through those partners. >>So sounds like powerful force, that ecosystem. Exactly. Great alignment. AWS and Palo Alto, thank you so much for joining us with, we >>Appreciate, thanks for having >>With what's going on at aws, the partner network, the mp, and all that good stuff. That's really the value in it for customers, ISVs and channel partners. I like. We appreciate your insights. >>Thank you. Thanks for having me. Thank you. >>Our guests and Dave Valante. I'm Lisa Martin. You're watching the Cube Lee Leer in live enterprise and emerging tech coverage.

Published Date : Dec 13 2022



Ankur Shah, Palo Alto Networks | AWS re:Invent 2022


 

>> Good afternoon from the Venetian Expo, center, hall, whatever you wanna call it, in Las Vegas. Lisa Martin here. It's day four. I'm not sure what this place is called. Wait, >> What? >> Lisa Martin here with Dave Vellante. This is the Cube. This is day four of a ton of coverage that we've been delivering to you, which, you know, 'cause you've been watching since Monday night, Dave, we are almost at the end, we're almost at the show wrap. Excited to bring back, we've been talking about security, a lot about security. Excited to bring back a, an alumni to talk about that. But what's your final thoughts? >> Well, so just in, in, in the context of security, we've had just three in a row talking about cyber, which is like the most important topic. And I, and I love that we're having Palo Alto Networks on. Palo Alto Networks is the gold standard in security. Talk to CISOs, they wanna work with them. And, and it was, it's interesting because I've been following them for a little bit now, watched them move to the cloud and a couple of little stumbling points. But I said at the time, they're gonna figure it out and, and come rocking back. And they have, and the company's just performing unbelievably well despite, you know, all the macro headwinds that we love to talk about. >> Right. And we're gonna be unpacking all of that with one of our alumni. As I mentioned, Ankur Shah is with us, the SVP and GM at Palo Alto Networks. Ankur, welcome back to the Cube. It's great to see you. It's been a while. >> It's good to be here after a couple years. Yeah, >> Yeah. I think three. >> Yeah, yeah, for sure. Yeah. Yeah. It's a bit of a blur after Covid. >> Everyone's saying that. Yeah. Are you surprised that there are still this many people on the show floor? Cuz I am. >> I am. Yeah. Look, I am not, this is my fourth, last year was probably one third or one fourth of this size. Yeah. But pre-Covid, this is what re:Invent looked like. And it's energizing, it's exciting. It's just good to be doing the good old things. So many people and yeah. Amazing technology and innovation. It's been incredible. >> Let's talk about innovation. I know you guys, Palo Alto Networks, recently acquired Cider Security. Talk to us a little bit about that. How is it gonna complement Prisma? Give us all the scoop on that. >> Yeah, for sure. Look, some of the recent, the cybersecurity attacks that we have seen are related to supply chain, the Colonial Pipeline, many, many supply chain. And the reason for that is the modern software supply chain, not the physical supply chain, the one that AWS announced, but this is the software supply chain, is really incredibly complicated. Complicated, developers that are building and shipping code faster than ever before. And the, the Cider acquisition, at the center, the heart of that, was securing the entire supply chain. The White House came with a new initiative on supply chain security and SBOM, software bill of materials. And we needed a technology, a company, and a set of people who can really deliver to that. And that's why we acquired them for supply chain security, otherwise known as CI/CD security. >> CI/CD security. Yeah. So how will that complement Prisma Cloud? >> Yeah, so look, if you look at our history, at least over the last four years, we have been wanting to, our mission has been to build a single code-to-cloud platform. As you may know, there are over 3000 security vendors in the industry. And we said enough is enough.
We need a platform player who can really deliver a unified cohesive platform solution for our customers because they're sick and tired of buying PI point product. So our mission has been to deliver that code to cloud platform supply chain security was a missing piece and we acquired them, it fits right really nicely into our portfolio of products and solution that customers have. And they'll have a single pin of glass with this. >>Yeah. So there's a lot going on. You've got, you've got an adversary that is incredibly capable. Yeah. These days and highly motivated and extremely sophisticated mentioned supply chain. It's caused a shift in, in CSO strategies, talking about the pandemic, of course we know work from home that changed things. You've mentioned public policy. Yeah. And, and so, and as well you have the cloud, cloud, you know, relatively new. I mean, it's not that new, but still. Yeah. But you've got the shared responsibility model and not, not only do you have the shared responsibility model, you have the shared responsibility across clouds and OnPrem. So yes, the cloud helps with security, but that the CISO has to worry about all these other things. The, the app dev team is being asked to shift left, you know, secure and they're not security pros. Yeah. And you know, kind audit is like the last line of defense. So I love this event, I love the cloud, but customers need help in making their lives simpler. Yeah. And the cloud in and of itself, because, you know, shared responsibility doesn't do that. Yeah. That's what Palo Alto and firms like yours come in. >>Absolutely. So look, Jim, this is a unable situation for a lot of the Cisco, simply because there are over 26 million developers, less than 3 million security professional. If you just look at all the announcement the AWS made, I bet you there were like probably over 2000 features. Yeah. I mean, they're shipping faster than ever before. Developers are moving really, really fast and just not enough security people to keep up with the velocity and the innovation. So you are right, while AWS will guarantee securing the infrastructure layer, but everything that is built on top of it, the new machine learning stuff, the new application, the new supply chain applications that are developed, that's the responsibility of the ciso. They stay up at night, they don't know what's going on because developers are bringing new services and new technology. And that's why, you know, we've always taken a platform approach where customers and the systems don't have to worry about it. >>What AWS new service they have, it's covered, it's secured. And that's why the adopters, McCloud and Palo Alto Networks, because regardless what developers bring, security is always there by their side. And so security teams need just a simple one click solution. They don't have to worry about it. They can sleep at night, keep the bad actors away. And, and that's, that's where Palo Alto Networks has been innovating in this area. AWS is one of our biggest partners and you know, we've integrated with, with a lot of their services. We launch about three integrations with their services. And we've been doing this historically for more and >>More. Are you still having conversations with the security folks? Or because security is a board level conversation, are your conversations going up a stack because this is a C-suite problem, this is a board level initiative? >>Absolutely. 
Look, you know, there was a time about four years ago, like the best we could do is director of security. Now it's a CISO-level conversation, CEO-level conversation, board-level conversation, to your point, simply because, I mean, if, if all your financial stuff is going to public cloud, all your healthcare data, all your supply chain data is going to public cloud, the board is asking a very simple question: what are you doing to secure that? And to be honest, the question is simple. The answer's not, because of all the stuff that we talked about, too many applications, lots and lots of different services, different threat vectors, and the bad actors, the bad guys, are always a step ahead of the curve. And that's why this has become a board-level conversation. They wanna make sure that things are secure from the get-go before, you know, the enterprises go too deep into public cloud adoption. >> I mean, let's shift topics a little bit. There was hope, kind of early this year, that cyber was somewhat insulated from the sort of macro pressures. Nobody's safe. Even the cloud is sort of, you know, facing those, those headwinds, people optimizing costs. But one thing when you talk to customers is, I always like to talk about that, that Optiv graph. We've all seen it, right? And it's just this eye test of tools and it's a beautiful taxonomy, but there's just too many tools. So we're seeing a shift from point tools to platforms because obviously a platform play, and that's a way. So what are you seeing in the, in the field with customers trying to optimize their infrastructure costs with regard to consolidating to platforms? >> Yeah. Look, you rightly pointed out one thing, the cybersecurity industry in general and Palo Alto Networks, knock on wood, the stock's doing well. The macro headwinds haven't impacted the security spend so far, right? Like, time will tell, we'll, we'll see how things go. And one of the primary reasons is that when, you know, the economy starts to slow down, the customers again want to invest in platforms. It's simple to deploy, simple to operationalize. They want a security partner of choice that they know is gonna be by them through the entire journey from code to cloud. And so that's why platform, especially times like these, are more important than they've ever been before. You know, customers are investing in the, the, the product I lead at Palo Alto Networks called Prisma Cloud. It's in the cloud-native application protection platform, CNAPP, space, where once again, customers are investing in platforms from code to cloud and avoiding all the point products for sure. >> Yeah. Yeah. And you've seen it in, in Palo Alto's performance. I mean, not every cyber firm has, is, is- >> You know, I know. Ouch. CrowdStrike. Yeah. >> Was not. Well, you saw that. I mean, and it was, and, and you know, the large customers were continuing to spend, it was the small and mid-size businesses, yeah, that were, were, were a little bit soft. Yeah. You know, it's a really, it's really, I mean, you see Okta now, you know, after they had some troubles, announcing that, you know, their, their, their visibility's a little bit better. So it's, it's very hard to predict right now. And of course if Thoma Bravo is buying you, then your stock price has been up and steady. That's- >> Yeah. Look, I think the key is to have a diversified portfolio of products. Four years ago, before our CEO Nikesh took over the reins of the company, we were a single-product, firewall company. Right.
And over time we have added XDR with the first one to introduce that recently launched x Im, you know, to, to make sure we build an NextGen team, cloud security is a completely net new investment, zero trust with access as workers started working remotely and they needed to make sure enterprises needed to make sure that they're accessing the applications securely. So we've added a lot of portfolio products over time. So you have to remain incredibly diversified, stay strong, because there will be stuff like remote work that slowed down. But if you've got other portfolio product like cloud security, while those secular tailwinds continue to grow, I mean, look how fast AWS is growing. 35, 40%, like $80 billion run rate. Crazy at that, that scale. So luckily we've got the portfolio of products to ensure that regardless of what the customer's journey is, macro headwinds are, we've got portfolio of solutions to help our customers. >>Talk a little bit about the AWS partnership. You talked about the run rate and I was reading a few days ago. You're right. It's an 82 billion arr, massive run rate. It's crazy. Well, what are, what is a Palo Alto Networks doing with aws and what's the value in it to help your customers on a secure digital transformation journey? >>Well, absolutely. We have been doing business with aws. We've been one of their security partners of choice for many years now. We have a presence in the marketplace where customers can through one click deploy the, the several Palo Alto Networks security solutions. So that's available. Like I said, we had launch partner to many, many new products and innovation that AWS comes up with. But always the day one partner, Adam was talking about some of those announcements and his keynote security data lake was one of those. And they were like a bunch of others related to compute and others. So we have been a partner for a long time, and look, AWS is an incredibly customer obsessed company. They've got their own security products. But if the customer says like, Hey, like I'd like to pick this from yours, but there's three other things from Palo Alto Networks or S MacCloud or whatever else that may be, they're open to it. And that's the great thing about AWS where it doesn't have to be wall garden open ecosystem, let the customer pick the best. >>And, and that's, I mean, there's, there's examples where AWS is directly competitive. I mean, my favorite example is Redshift and Snowflake. I mean those are directly competitive products, but, but Snowflake is an unbelievably great relationship with aws. They do cyber's, I think different, I mean, yeah, you got guard duty and you got some other stuff there. But generally speaking, the, correct me if I'm wrong, the e the ecosystem has more room to play on AWS than it may on some other clouds. >>A hundred percent. Yeah. Once again, you know, guard duty for examples, we've got a lot of customers who use guard duty and Prisma Cloud and other Palo Alto Networks products. And we also ingest the data from guard duty. So if customers want a single pane of glass, they can use the best of AWS in terms of guard duty threat detection, but leverage other technology suite from, you know, a platform provider like Palo Alto Networks. So you know, that that, you know, look, world is a complicated place. Some like blue, some like red, whatever that may be. But we believe in giving customers that choice, just like AWS customers want that. Not a >>Problem. 
And at least today they're not like directly, you know, in your space. Yeah. You know, and even if they were, you've got such a much mature stack. Absolutely. And my, my frankly Microsoft's different, right? I mean, you see, I mean even the analysts were saying that some of the CrowdStrike's troubles for, cuz Microsoft's got the good enough, right? So >>Yeah. Endpoint security. Yeah. And >>Yeah, for sure. So >>Do you have a favorite example of a customer where Palo Alto Networks has really helped them come in and, and enable that secure business transformation? Anything come to mind that you think really shines a light on Palo Alto Networks and what it's able to do? >>Yeah, look, we have customers across, and I'm gonna speak to public cloud in general, right? Like Palo Alto has over 60,000 customers. So we've been helping with that business transformation for years now. But because it's reinvented aws, the Prisma cloud product has been helping customers across different industry verticals. Some of the largest credit card processing companies, they can process transactions because we are running security on top of the workloads, the biggest financial services, biggest healthcare customers. They're able to put the patient health records in public cloud because Palo Alto Networks is helping them get there. So we are helping accelerated that digital journey. We've been an enabler. Security is often perceived as a blocker, but we have always treated our role as enabler. How can we get developers and enterprises to move as fast as possible? And like, my favorite thing is that, you know, moving fast and going digital is not a monopoly of just a tech company. Every company is gonna be a tech company Oh absolutely. To public cloud. Yes. And we want to help them get there. Yeah. >>So the other thing too, I mean, I'll just give you some data. I love data. I have a, ETR is our survey partner and I'm looking at Data 395. They do a survey every quarter, 1,250 respondents on this survey. 395 were Palo Alto customers, fortune 500 s and P 500, you know, big global 2000 companies as well. Some small companies. Single digit churn. Yeah. Okay. Yeah. Very, very low replacement >>Rates. Absolutely. >>And still high single digit new adoption. Yeah. Right. So you've got that tailwind going for you. Yeah, >>Right. It's, it's sticky because especially our, our main business firewall, once you deploy the firewall, we are inspecting all the network traffic. It's just so hard to rip and replace. Customers are getting value every second, every minute because we are thwarting attacks from public cloud. And look, we, we, we provide solutions not just product, we just don't leave the product and ask the customers to deploy it. We help them with deployment consumption of the product. And we've been really fortunate with that kind of gross dollar and netten rate for our customers. >>Now, before we wrap, I gotta tease, the cube is gonna be at Palo Alto Ignite. Yeah. In two weeks back here. I think we're at D mgm, right? We >>Were at D MGM December 13th and >>14th. So give us a little, show us a little leg if you would. What could we expect? >>Hey, look, I mean, a lot of exciting new things coming. Obviously I can't talk about it right now. The PR Inc is still not dry yet. But lots of, lots of new innovation across our three main businesses. Network security, public cloud, security, as well as XDR X. Im so stay tuned. You know, you'll, you'll see a lot of new exciting things coming up. >>Looking forward to it. 
>> We are looking forward to it. Last question, Ankur. You, if you had a billboard to place in New York Times Square. Yeah. You're gonna take over the, the, the Times Square Nasdaq. What does the billboard say about why organizations should be working with Palo Alto Networks? Yeah. To really embed security into their DNA. Yeah. >> You know, when Dave said Palo Alto Networks is the gold standard for security, I thought I was gonna steal it. I think it's pretty good, gold standard for security. But I'm gonna go with our mission: cybersecurity partner of choice. We want to be known as that, and that's who we are. >> Beautifully said. Ankur, thank you so much for joining Dave and me on the program. We really appreciate your insights, your time. We look forward to seeing you in a couple weeks back here in Vegas. >> Absolutely. Can't have enough of Vegas. Thank you, Lisa. >> Can't have enough of Vegas? I dunno about that. By this time of the year, I think we can have had enough of Vegas, but we're gonna be able to see you on theCUBE's coverage, which you can catch of Palo Alto Networks' show, Ignite, December, I believe 13th and 14th, on thecube.net. We want to thank Ankur Shah for joining us. For Dave Vellante, this is Lisa Martin. You're watching the Cube, the leader in live enterprise and emerging tech coverage.
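As a footnote to the supply-chain discussion in this segment (SBOMs and CI/CD security), the basic mechanic such tooling automates is straightforward: scan a software bill of materials and block builds that contain known-bad components. Here is a minimal sketch against a CycloneDX-style JSON SBOM; the deny list and file name are made up for illustration, and a real pipeline would query a vulnerability database instead.

import json

# Hypothetical deny list of known-bad component versions.
DENY_LIST = {
    ("log4j-core", "2.14.1"),
    ("openssl", "1.1.1k"),
}

def flag_components(sbom_path):
    """Return (name, version) pairs from a CycloneDX-style SBOM that hit the deny list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    flagged = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in DENY_LIST:
            flagged.append(key)
    return flagged

if __name__ == "__main__":
    hits = flag_components("sbom.json")  # illustrative path
    for name, version in hits:
        print(f"blocked: {name} {version}")
    if hits:
        raise SystemExit(1)  # fail this stage of the pipeline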

Published Date : Dec 2 2022



Manu Parbhakar, AWS & Joel Jackson, Red Hat | AWS re:Invent 2022


 

>> Hello, brilliant humans, and welcome back to Las Vegas, Nevada, where we are live from the AWS re:Invent show floor here with theCUBE. My name is Savannah Peterson, joined with Dave Vellante, and we have a very exciting conversation with you. Two, two companies you may have heard of. We've got AWS and Red Hat in the house. Manu and Joel, thank you so much for being here. Love this little fist bump. Started off, that's right. Before we even got rolling, Manu, you said that you wanted this to be the best segment of, of theCUBE's airing. We're doing over a hundred segments, so you're gonna have to bring the heat. >> We're ready. We're good to go. Are we ready? Yeah, go. We're ready. Let's bring it on. >> We're ready. All right. I'm, I'm ready. Dave's ready. Let's do it. How's the show going for you guys, real quick, before we dig in? >> Yeah, I think after Covid, it's really nice to see that we're back into the 2019 level and, you know, people just want to get out, meet people, have that human touch with each other, and I think a lot of trust gets built as a function of that, so it's super amazing to see our partners and customers here at re:Invent. Yeah, >> And you've got a few in the house. That's true. Just a few maybe, maybe a couple. >> Very few shows can say that, by the way. Yeah, it's maybe a handful. >> I think one of the things we were saying, it's almost like the entire Silicon Valley descended in the expo hall area, so >> Yeah, it's >> For a few different reasons. There's a few different silicon defined. Yeah, yeah, yeah. Don't have strong on for you. So far >> It's, it's, it is amazing. It's the 10th year, right? It's a decade. I think I've been to five and it's, it grows every single year. It's the, you have to be here. It's as simple as that. And customers from every single industry are here too. You don't get, a lot of shows have every single industry and almost every single location around the globe. So it's, it's a must, must be here. >> Well, and the personas evolved, right? I was at re:Invent number two. That was my first, and it was all developers, not all, but a lot of developers. And today it's a business mix, really is. >> Totally, is a business mix. And I just, I've talked about it a little bit down the show, but the diversity on the show floor, it's the first time I've had to wait in line for the ladies' room at a tech conference. Almost a two-decade career. It is, yeah. And it was really refreshing. I'm so impressed. So clearly there's a commitment to community, but also a commitment to diversity. Yeah. And, and it's brilliant to see on the show floor. This is a partnership that is robust and has been around for a little while. Manu, why don't you tell us a little bit about the partnership here? >> Yes. So Red Hat and AWS are best friends, you know, forever together. >> Aw, no wonder we got the fist bumps and all the good vibes coming out. I know, it's great. I love that. >> We have a decade of working together. I think the relationship in the first phase was around running RHEL bundled with EC2. Sure. We have about 70,000 customers that are running RHEL, which are running mission-critical workloads such as SAP, Oracle databases, bespoke applications across a set of verticals. Now, as more and more enterprise customers are finally, you know, endorsing and adopting public cloud, I think that business is just gonna continue to grow. So a, a lot of progress there.
The second iteration has been around developers telling Red Hat and AWS: Hey, listen, it's getting competitive. We want to deliver new features faster and quicker, we want scale, and we want resilience. So there's been this entire push towards DevOps and containers. That's the second chapter, with Red Hat OpenShift on AWS, which launched as a joint managed service in 2021, last year. And I think the third phase, which we're super excited about, is just bringing the ease of consumption, one-click deployment, and then having our customers benefit from the joint committed-spend programs together. So making sure that RHEL and Ansible and JBoss, the entire portfolio of Red Hat products, are available on AWS Marketplace. So that's the one, two, three of our relationship. It's a decade of working together, and best friends are super committed to making sure our customers and partners continue to be successful. >>Yeah, he said it perfectly. 2008, I know you don't like that, but we started with RHEL on demand just in 2008, before EC2 even had a console. So the partnership has been there, like Manu says, for a long time. We've got the partnership, we've got the products up there now, and we just gotta finalize the go-to-market and get that gas on the fire. >>Yeah. So Graviton, Outposts, Local Zones, you led it into all the new stuff. So that portends, I mean, 2008, we're talking two years after the launch of S3. >>That's right. >>Right. So, and now look, is this a harbinger of things to come with these new innovations? >>Yeah, I would say innovation is a key tenet of our partnership, our relationship. So if you look at it from a product standpoint, Red Hat, or RHEL, was one of the first platforms that added support for Graviton, which is basically 40% better price performance than any other distribution. Then that translated into making sure that RHEL is available in all of our regions globally. So this year we launched Switzerland, Spain, and India, and Red Hat was available at launch there, plus support for Nitro, support for Outposts, and ROSA support on Outposts as well. So that relationship, that innovation on the product side, that's pretty visible. That innovation then translates into what we are doing on Marketplace with the one-click deployments we spoke about. And I think the third aspect of the innovation is around making sure that we are making our partners and our customers successful. So one of the things we've done so far is Joel leads a black-belt team that really goes into each customer opportunity, making sure, how can we help you be successful? And we launched, and we should be able to share a link after this, a big playlist that talks about every single use case on how you get successful running OpenShift on AWS. So that innovation on behalf of our customers and partners to make them successful, that's been a key tenet for us together as >>Well. That's right. And that team that Manu is talking about, we're gonna 10x that team this year going into January. Our fiscal year starts in January. Love that. So yeah, there's no hiring freeze over here. Nope. No ma'am. No. Yeah, that's right. And you know what I love about working with AWS, and Manu just said it, all of that's customer driven. Every single event that he just talked about in that timeline, it's customer driven, right?
Customers wanted RHEL on demand, customers want JBoss up in the cloud, Ansible this week, you know, everything's up there now. So it's just getting that go-to-market tight, and we're gonna get that done. >>So what's the algorithm for customer-driven in terms of taking the input? Because if every customer is saying, Hey, this is a >>Really similar >>Question, right? That's what I want. And if 95% of the customers say it, hey, maybe that's a good idea. >>Yeah, that's right. Trends. But >>Yeah. You know, at 30% you might be like, mm, at 20%, you know, how do you guys decide when to put gas on the fire? >>No, I think, as I mentioned, there are about 70,000 large customers that are running RHEL on EC2, and many of these customers are informing our product strategy. So we have close to a couple of thousand power users. We have customer advisory boards, and these customers are informing us: Hey, let's get all of the Red Hat portfolio in Marketplace, support for Graviton, support for Outposts. Why are we not able to dip into the consumption committed-spend programs for both Red Hat and AWS? That's right. So it's these power users, both at the developer level as well as the folks who are actually doing large commercial consumption, who are informing the roadmap for both Red Hat and AWS. >>But do you codify the feedback? >>Yeah, I'm like, I wanna see the database. >>I think it was maybe Jassy, maybe it was Bezos, who said that data beats intuition. So do you take that information and somehow, I mean, it's global, 70,000 customers, right? And they have different weights, different spending patterns, different levels of maturity. How do you codify that and then ultimately make the decision? >>Well, you've got the strategic advisory boards, which are made up of customers and partners, and you've got to get a good slice of your customer base, take their feedback, and do something with it, right? That's the way we do it and codify it at the product level. And open source, that's basically how we work at the product level, right? The most elegant solution in open source wins. And that's pretty much how we do that. >>I would just add, I think it's also the implicit trust that the two companies have built with each other, working in the trenches, making our customers and partners successful over the last decade. I'll give an example. That manifests itself in the context of, say, Amazon and Red Hat just published the entire roadmap for OpenShift: what are the new features that are coming over the next six to nine to 12 months? It's open source, available on GitHub. Customers can see it, and then they can basically come back and give feedback like, Hey, we want HIPAA compliance. We just launched that, and it was a big request that was coming from our >>Customers. That is not any process >>Also for Graviton or Nvidia instances. So I think it's a, >>Here's the thing, the reason I'm pounding on this is because you guys have a pretty high hit rate, and I think as a >>Customer, mildly successful company >>As a customer advocate, the better. If you guys make bets that pay off, it's gonna pay off for customers. Right. And because there's a lot of failures in IT. Yeah. I mean, let's face it. That's >>Right.
And I think, I think you said the key word bets. You place a lot of small bets. Do you have the, the innovation engine to do that? AWS is the perfect place to place those small bets. And then you, you know, pour gas on the fire when, when they take off. >>Yeah, it's a good point. I mean, it's not expensive to experiment. Yeah. >>Especially in the managed service world. Right? >>And I know you love taking things to market and you're a go to market guy. Let's talk gtm, what's got your snow pumped about GTM for 2023? >>We, we are gonna, you know, 10 x the teams that's gonna be focused on these products, right? So we're gonna also come out with a hybrid committed spend program for our customers that meet them where they want to go. So they're coming outta the data center going into a cloud. We're gonna have a nice financial model for them to do that. And that's gonna take a lot of the friction out. >>Yeah. I mean, you've nailed it. I, I think the, the fact that now entire Red Hat portfolio is available on marketplace, you can do it on one click deployment. It's deeply integrated with Amazon services and the most important part that Joel was making now customers can double dip. They can drive benefit from the consumption committed spend programs, both from Red Hat and from aws, which is amazing. Which is a game changer That's right. For many of our large >>Customers. That's right. And that, so we're gonna, we're gonna really go to town on that next year. That's, and all the, all the resources that I have, which are the technology sellers and the sas, you know, the engineers we're growing this team the most out that team. So it's, >>When you say 10 x, how many are you at now? I'm >>Curious to see where you're headed. Tell you, okay. There's not right? Oh no, there's not one. It's triple digit. Yeah, yeah. >>Today. Oh, sweet. Awesome. >>So, and it's a very sizable team. They're actually making sure that each of our customers are successful and then really making sure that, you know, no customer left behind policy. >>And it's a great point that customers love when Amazonians and Red Hats show up, they love it and it's, they want to get more of it, and we're gonna, we're gonna give it to 'em. >>Must feel great to be loved like that. >>Yeah, that's right. Yeah. Yeah. I would say yes. >>Seems like it's safe to say that there's another decade of partnership between your two companies. >>Hope so. That's right. That's the plan. >>Yeah. And I would say also, you know, just the IBM coming into the mix here. Yeah. I, you know, red Hat has informed the way we have turned around our partnership with ibm, essentially we, we signed the strategic collaboration agreement with the company. All of IBM software now runs on Rosa. So that is now also providing a lot of tailwinds both to our rail customers and as well as Rosa customers. And I think it's a very net creative, very positive for our partnership. >>That's right. It's been very positive. Yep. Yeah. >>You see the >>Billboards positive. Yeah, right. Also that, that's great. Great point, Dave. Yep. We have a, we have a new challenge, a new tradition on the cube here at Reinvent where we're, well, it's actually kind of a glamor moment for you, depending on how you leverage it. We're looking for your 32nd hot take your Instagram reel, your sizzle thought leadership, biggest takeaway, most important theme from this year's show. I know you want, right, Joel? I mean, you TM boy, I feel like you can spit the time. >>Yeah. It is all about Rosa for us. 
It is all in on that, that's the native OpenShift offering on aws and that's, that's the soundbite we're going go to town with. Now, I don't wanna forget all the other products that are in there, but Rosa is a, is a very key push for us this year. >>Fantastic. All right. Manu. >>I think our customers, it's getting super competitive. Our customers want to innovate just a >>Little bit. >>The enterprise customers see the cloud native companies. I wanna do what these guys are doing. I wanna develop features at a fast clip. I wanna scale, I wanna be resilient. And I think that's really the spirit that's coming out. So to Joel's point, you know, move to worlds containers, serverless, DevOps, which was like, you know, aha, something that's happening on the side of an enterprise is not becoming mainstream. The business is demanding it. The, it is becoming the centerpiece in the business strategy. So that's been really like the aha. Big thing that's happening here. >>Yeah. And those architectures are coming together, aren't they? That's correct. Right. You know, VMs and containers, it used to be one architecture and then at the other end of the spectrum is serverless. People thought of those as different things and now it's a single architecture and, and it's kind of right approach for the right job. >>And, and a compliments say to Red Hat, they do an incredible job of hiding that complexity. Yeah. Yes. And making sure that, you know, for example, just like, make it easier for the developers to create value and then, and you know, >>Yeah, that's right. Those, they were previously siloed architectures and >>That's right. OpenShift wanna be place where you wanna run containers or virtual machines. We want that to be this Yeah. Single place. Not, not go bolt on another piece of architecture to just do one or the other. Yeah. >>And hey, the hybrid cloud vision is working for ibm. No question. You know, and it's achievable. Yeah. I mean, I just, I've said unlike, you know, some of the previous, you know, visions on fixing the world with ai, hybrid cloud is actually a real problem that you're attacking and it's showing the results. Agreed. Oh yeah. >>Great. Alright. Last question for you guys. Cause it might be kind of fun, 10 years from now, oh, we're at another, we're sitting here, we all look the same. Time has passed, but we are not aging, which is a part of the new technology that's come out in skincare. That's my, I'm just throwing that out there. Why not? What do you guys hope that you can say about the partnership and, and your continued commitment to community? >>Oh, that's a good question. You go first this time. Yeah. >>I think, you know, the, you know, for looking into the future, you need to look into the past. And Amazon has always been driven by working back from our customers. That's like our key tenant, principle number 1 0 1. >>Couple people have said that on this stage this week. Yeah. >>Yeah. And I think our partnership, I hope over the next decade continues to keep that tenant as a centerpiece. And then whatever comes out of that, I think we, we are gonna be, you know, working through that. >>Yeah. I, I would say this, I think you said that, well, the customer innovation is gonna lead us to wherever that is. And it's, it's, it's gonna be in the cloud for sure. I think we can say that in 10 years. 
But yeah, anything from, from AI to the quant quantum computing that IBM's really pushing behind that, you know, those are, those are gonna be things that hopefully we show up on a, on a partnership with Manu in 10 years, maybe sooner. >>Well, whatever happens next, we'll certainly be covering it here on the cube. That's right. Thank you both for being here. Joel Manu, fantastic interview. Thanks to see you guys. Yeah, good to see you brought the energy. I think you're definitely ranking high on the top interviews. We >>Love that for >>The day. >>Thank >>My pleasure >>Job, guys. Now that you're competitive at all, and thank you all for tuning in to our live coverage here from AWS Reinvent in Las Vegas, Nevada, with Dave Valante. I'm Savannah Peterson. You're watching The Cube, the leading source for high tech coverage.
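Manu's point earlier about RHEL being one of the first distributions to support Graviton comes down, on the customer side, to an ordinary EC2 launch with an arm64 image. A minimal sketch with boto3, assuming a RHEL arm64 AMI for your region; the AMI ID and key pair name below are placeholders, not real values:

```python
import boto3

# Placeholder values: look up the current RHEL arm64 AMI for your region
# (via the console or describe_images) and use your own key pair.
RHEL_ARM64_AMI = "ami-0123456789abcdef0"   # hypothetical AMI ID
KEY_PAIR_NAME = "my-keypair"               # hypothetical key pair

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single RHEL instance on a Graviton2-based (arm64) instance type.
response = ec2.run_instances(
    ImageId=RHEL_ARM64_AMI,
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
    KeyName=KEY_PAIR_NAME,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "rhel-on-graviton-demo"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

Roughly the same call works for Outposts or Local Zones; what changes is the subnet and placement you target, not the API.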

Published Date : Nov 30 2022


Mark Terenzoni, AWS | AWS re:Invent 2022


 

(upbeat music) >> Hello, everyone, and welcome back to fabulous Las Vegas, Nevada, where we are here on the show floor at AWS re:Invent. We are theCUBE. I am Savannah Peterson, joined with John Furrier. John, afternoon, day two, we are in full swing. >> Yes. >> What's got you most excited? >> Just got lunch, got the food kicking in. No, we don't get coffee. (Savannah laughing) >> Way to bring the hype there, John. >> No, there's so many people here just in Amazon. We're back to 2019 levels of crowd. The interest levels are high. Next-gen cloud security was a big part of the keynote. This next segment I am super excited about. CUBE alumni, going back to 2013, 10 years ago he was on theCUBE. Now, 10 years later, we're at re:Invent, looking forward to this guest, and it's about security, great topic. >> I don't want to delay us anymore, please welcome Mark. Mark, thank you so much for being here with us. Massive day for you and the team. I know you oversee three different units at Amazon, Inspector, Detective, and the most recently announced, Security Lake. Tell us about Amazon Security Lake. >> Well, thanks Savannah, thanks John for having me. Security Lake has been in the works for a little bit of time, and it got announced today at the keynote, as you heard from Adam. We're super excited because there are a couple of components that are really unique and valuable to our customers within Security Lake. First and foremost, the foundation of Security Lake is an open source project we call OCSF, the Open Cybersecurity Schema Framework. What that allows is for us to work with the vendor community at large in the security space and develop a language where we can all communicate around security data. And that's the language that we put into Security Lake. We have 60 vendors participating in developing that language and partnering within Security Lake. But it's a communal lake, where customers can bring all of their security data into one place, whether it's generated in AWS, on-prem, SaaS offerings, or other clouds, all in one location, in a language that allows analytics to take advantage of it and give better outcomes for our customers. >> So Adam Selipsky's big keynote, he spent the bulk of his time on data and security. Obviously they go well together, we've talked about this in the past on theCUBE. Data is part of security, but this security's a little bit different in the sense that the global footprint of AWS makes it uniquely positioned to manage some security threats. EKS Protection, a very interesting announcement, a runtime layer, but looking inside and outside the containers, probably gives extra telemetry on some of those supply chain vulnerabilities. This is actually a very nuanced point. You've got GuardDuty kind of taking its role. What does it mean for customers? 'Cause there's a lot of things in this announcement that he didn't have time to go into detail on. Unpack all the specifics around what the security announcement means for customers. >> Yeah, so we announced four items in Adam's keynote today within my team. I'll start with GuardDuty for EKS Runtime. It's complementing our existing capabilities for EKS support. So today, Inspector does vulnerability assessment on EKS, or container images in general. GuardDuty does detections of EKS workloads based on log data. Detective does investigation and analysis based on that log data as well. With the announcement today, we go inside the container workloads.
We have more telemetry, more fine-grained telemetry, and ultimately we can provide better detections for our customers to analyze risks within their container workloads. So we're super excited about that one. Additionally, we announced Inspector for Lambda. So Inspector we released last year at re:Invent, and we focused mostly on EKS container workloads and EC2 workloads: single click, automatically assess your environment, start generating assessments around vulnerabilities. We've added Lambda to that capability for our customers. The third announcement we made was Macie sampling. Macie has been around for a while, delivering a lot of value for customers by providing information around their sensitive data within S3 buckets. What we found is many customers want to go and characterize all of the data in their buckets, but some just want to know, is there any sensitive data in my bucket? And the sampling feature allows the customer to find out whether there's sensitive data in the bucket, but we don't have to go through and do all of the analysis to tell you exactly what's in there. >> Unstructured and structured data. Any data? >> Correct, yeah. >> And the fourth? >> The fourth, Security Data Lake? (John and Savannah laughing) Yes. >> Okay, ocean theme. Data lake. >> Very complementary to all of our services, but the unique value in the data lake is that we put the information in the customer's control. It's in their S3 bucket, they get to decide who gets access to it. We've heard from customers over the years that they really have two options around gathering large-scale data for security analysis. One is, we roll our own, and we're security engineers, we're not data engineers. It's really hard for them to build these distributed systems at scale. The second one is, we can pick a vendor or a partner, but we're locked in, it's in their schema and their format, and we're there for a long period of time. With Security Data Lake, they get the best of both worlds. We run the infrastructure at scale for them, put the data in their control, and they get to decide what use case, what partner, what tool gives them the most value on top of their data. >> Is it always a good thing to give the customers that much control? 'Cause you know the old expression, you give 'em a knife, they play with it and they can cut themselves, I mean. But no, seriously, what are the provisions around that? Because control is a big part of the governance. How do you manage the security? How does the customer worry about, if I have too much control, someone makes a mistake? >> Well, what we're finding out today is that many customers have realized that some of their data has been replicated seven times, 10 times, not necessarily maliciously, but because they have multiple vendors that utilize that data to give them different use cases and outcomes. It becomes costly and unwieldy to figure out where all that data is. So by centralizing it, the control is really around who has access to the data. Now, ultimately customers want to make those decisions, and we've made it simple to aggregate this data in a single place. They can develop a home region if they want, where all the data flows into one region, or they can distribute it globally. >> They're in charge. >> They're in charge. But the controls are mostly in the hands of the data governance person in the company, not the security analyst. >> So I'm really curious, you mentioned there's 60 AWS partner companies that have collaborated on the Security Lake.
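Before digging into the partner process, it is worth noting that the Macie sampling feature Mark describes maps to a single API call. A minimal sketch with boto3, assuming the Macie2 CreateClassificationJob parameters shown here (jobType, samplingPercentage, s3JobDefinition); the account ID and bucket name are placeholders:

```python
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# One-time discovery job that only samples a fraction of objects in the bucket,
# answering "is there any sensitive data here?" without classifying everything.
response = macie.create_classification_job(
    jobType="ONE_TIME",
    name="sampled-sensitive-data-check",
    samplingPercentage=10,  # look at roughly 10% of objects instead of all of them
    s3JobDefinition={
        "bucketDefinitions": [
            {
                "accountId": "111122223333",         # placeholder account ID
                "buckets": ["example-data-bucket"],  # placeholder bucket name
            }
        ]
    },
)

print(response["jobId"], response["jobArn"])
```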
Can you tell us a little bit about the process? How long does it take? Are people self-selecting to contribute to these projects? Are you cherry picking? What does that look like? >> It's a great question. There's three levels of collaboration. One is around the open source project that we announced at Black Hat early in this year called OCSF. And that collaboration is we've asked the vendor community to work with us to build a schema that is universally acceptable to security practitioners, not vendor specific and we've asked. >> Savannah: I'm sorry to interrupt you, but is this a first of its kind? >> There's multiple schemes out there developed by multiple parties. They've been around for multiple years, but they've been built by a single vendor. >> Yeah, that's what I'm drill in on a little bit. It sounds like the first we had this level of collaboration. >> There's been collaborations around them, but in a handful of companies. We've really gone to a broad set of collaborators to really get it right. And they're focused around areas of expertise that they have knowledge in. So the EDR vendors, they're focused around the scheme around EDR. The firewall vendors are focused around that area. Certainly the cloud vendors are in their scope. So that's level one of collaboration and that gets us the level playing field and the language in which we'll communicate. >> Savannah: Which is so important. >> Super foundational. Then the second area is around producers and subscribers. So many companies generate valuable security data from the tools that they run. And we call those producers the publishers and they publish the data into Security Lake within that OCSF format. Some of them are in the form of findings, many of them in the form of raw telemetry. Then the second one is in the subscriber side and those are usually analytic vendors, SIM vendors, XDR vendors that take advantage of the logs in one place and generate analytic driven outcomes on top of that, use cases, if you will, that highlight security risks or issues for customers. >> Savannah: Yeah, cool. >> What's the big customer focus when you start looking at Security Lakes? How do you see that planning out? You said there's a collaboration, love the open source vibe on that piece, what data goes in there? What's sharing? 'Cause a big part of the keynote I heard today was, I heard clean rooms, I've cut my antenna up. I'd love to hear that. That means there's an implied sharing aspect. The security industry's been sharing data for a while. What kind of data's in that lake? Give us an example, take us through. >> Well, this a number of sources within AWS, as customers run their workloads in AWS. We've identified somewhere around 25 sources that will be natively single click into Amazon Security Lake. We were announcing nine of them. They're traditional network logs, BBC flow, cloud trail logs, firewall logs, findings that are generated across AWS, EKS audit logs, RDS data logs. So anything that customers run workloads on will be available in data lake. But that's not limited to AWS. Customers run their environments hybridly, they have SaaS applications, they use other clouds in some instances. So it's open to bring all that data in. Customers can vector it all into this one single location if they decide, we make it pretty simple for them to do that. Again, in the same format where outcomes can be generated quickly and easily. >> Can you use the data lake off on premise or it has to be in an S3 in Amazon Cloud? 
>> Today it's in S3 in Amazon. If we hear customers looking to do something different, as you guys know, we tend to focus on our customers and what they want us to do, but they've been pretty happy about what we've decided to do in this first iteration. >> So we got a story about Silicon Angle. Obviously the ingestion is a big part of it. The reporters are jumping in, but the 53rd party sources is a pretty big number. Is that coming from the OCSF or is that just in general? Who's involved? >> Yeah, OCSF is the big part of that and we have a list of probably 50 more that want to join in part of this. >> The other big names are there, Cisco, CrowdStrike, Peloton Networks, all the big dogs are in there. >> All big partners of AWS, anyway, so it was an easy conversation and in most cases when we started having the conversation, they were like, "Wow, this has really been needed for a long time." And given our breadth of partners and where we sit from our customers perspective in the center of their cloud journey that they've looked at us and said, "You guys, we applaud you for driving this." >> So Mark, take us through the conversations you're having with the customers at re:Inforce. We saw a lot of meetings happening. It was great to be back face to face. You guys have been doing a lot of customer conversation, security Data Lake came out of that. What was the driving force behind it? What were some of the key concerns? What were the challenges and what's now the opportunity that's different? >> We heard from our customers in general. One, it's too hard for us to get all the data we need in a single place, whether through AWS, the industry in general, it's just too hard. We don't have those resources to data wrangle that data. We don't know how to pick schema. There's multiple ones out there. Tell us how we would do that. So these three challenges came out front and center for every customer. And mostly what they said is our resources are limited and we want to focus those resources on security outcomes and we have security engines. We don't want to focus them on data wrangling and large scale distributed systems. Can you help us solve that problem? And it came out loud and clear from almost every customer conversation we had. And that's where we took the challenge. We said, "Okay, let's build this data layer." And then on top of that we have services like Detective and Guard Duty, we'll take advantage of it as well. But we also have a myriad of ISV third parties that will also sit on top of that data and render out. >> What's interesting, I want to get your reaction. I know we don't have much time left, but I want to get your thoughts. When I see Security Data Lake, which is awesome by the way, love the focus, love how you guys put that together. It makes me realize the big thing in re:Invent this year is this idea of specialized solutions. You got instances for this and that, use cases that require certain kind of performance. You got the data pillars that Adam laid out. Are we going to start seeing more specialized data lakes? I mean, we have a video data lake. Is there going to be a FinTech data lake? Is there going to be, I mean, you got the Great Lakes kind of going on here, what is going on with these lakes? I mean, is that a trend that Amazon sees or customers are aligning to? >> Yeah, we have a couple lakes already. We have a healthcare lake and a financial lake and now we have a security lake. Foundationally we have Lake Formation, which is the tool that anyone can build a lake. 
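Because the lake is ultimately tables in the customer's own S3 bucket, cataloged through Lake Formation and Glue, one way an analyst or subscriber can sit on top of it is a plain Athena query. A minimal sketch with boto3; the database, table, and column names below are illustrative OCSF-style placeholders rather than the exact names Security Lake generates, and the results bucket is hypothetical:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder names: substitute the Glue database/table Security Lake created
# in your account and an S3 location you own for query results.
DATABASE = "amazon_security_lake_glue_db_us_east_1"   # illustrative
TABLE = "amazon_security_lake_table_cloudtrail"       # illustrative
OUTPUT = "s3://example-athena-results/security-lake/" # placeholder

query = f"""
SELECT time, severity, activity_name
FROM {TABLE}
WHERE severity IN ('High', 'Critical')
ORDER BY time DESC
LIMIT 20
"""

qid = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT},
)["QueryExecutionId"]

# Poll until the query finishes, then print the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```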
And most of our lakes run on top of Lake Foundation, but specialize. And the specialization is in the data aggregation, normalization, enridgement, that is unique for those use cases. And I think you'll see more and more. >> John: So that's a feature, not a bug. >> It's a feature, it's a big feature. The customers have ask for it. >> So they want roll their own specialized, purpose-built data thing, lake? They can do it. >> And customer don't want to combine healthcare information with security information. They have different use cases and segmentation of the information that they care about. So I think you'll see more. Now, I also think that you'll see where there are adjacencies that those lakes will expand into other use cases in some cases too. >> And that's where the right tools comes in, as he was talking about this ETL zero, ETL feature. >> It be like an 80, 20 rule. So if 80% of the data is shared for different use cases, you can see how those lakes would expand to fulfill multiple use cases. >> All right, you think he's ready for the challenge? Look, we were on the same page. >> Okay, we have a new challenge, go ahead. >> So think of it as an Instagram Reel, sort of your hot take, your thought leadership moment, the clip we're going to come back to and reference your brilliance 10 years down the road. I mean, you've been a CUBE veteran, now CUBE alumni for almost 10 years, in just a few weeks it'll be that. What do you think is, and I suspect, I think I might know your answer to this, so feel free to be robust in this. But what do you think is the biggest story, key takeaway from the show this year? >> We're democratizing security data within Security Data Lake for sure. >> Well said, you are our shortest answer so far on theCUBE and I absolutely love and respect that. Mark, it has been a pleasure chatting with you and congratulations, again, on the huge announcement. This is such an exciting day for you all. >> Thank you Savannah, thank you John, pleasure to be here. >> John: Thank you, great to have you. >> We look forward to 10 more years of having you. >> Well, maybe we don't have to wait 10 years. (laughs) >> Well, more years, in another time. >> I have a feeling it'll be a lot of security content this year. >> Yeah, pretty hot theme >> Very hot theme. >> Pretty odd theme for us. >> Of course, re:Inforce will be there this year again, coming up 2023. >> All the res. >> Yep, all the res. >> Love that. >> We look forward to see you there. >> All right, thanks, Mark. >> Speaking of res, you're the reason we are here. Thank you all for tuning in to today's live coverage from AWS re:Invent. We are in Las Vegas, Nevada with John Furrier. My name is Savannah Peterson. We are theCUBE and we are the leading source for high tech coverage. (upbeat music)
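To make the OCSF idea Mark describes concrete, here is a heavily simplified sketch of what a normalized event might look like once a producer has mapped it into an OCSF-style shape. The field names follow the general pattern of the published schema but are abbreviated and illustrative; the actual classes and attributes live in the OCSF project on GitHub:

```python
# Illustrative only: a pared-down, OCSF-style representation of a single
# authentication finding, as a producer might publish it into Security Lake.
# Field names are simplified; the real schema defines many more attributes.
ocsf_style_event = {
    "class_name": "Authentication",   # event class (illustrative)
    "activity_name": "Logon",
    "severity": "High",
    "status": "Failure",
    "time": 1669766400000,            # epoch milliseconds
    "actor": {"user": {"name": "example-user", "uid": "AIDAEXAMPLE"}},
    "src_endpoint": {"ip": "203.0.113.10"},
    "cloud": {"provider": "AWS", "region": "us-east-1"},
    "metadata": {
        "product": {"vendor_name": "ExampleVendor", "name": "ExampleEDR"},
        "version": "1.0.0",           # schema version (illustrative)
    },
    "unmapped": {"raw_log_id": "abc-123"},  # anything the producer could not map
}

# Because every producer emits the same shape, a subscriber can filter events
# from EDR tools, firewalls, and AWS services with one piece of code.
if ocsf_style_event["severity"] in ("High", "Critical"):
    print(ocsf_style_event["class_name"], ocsf_style_event["actor"]["user"]["name"])
```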

Published Date : Nov 29 2022


Scott Castle, Sisense | AWS re:Invent 2022


 

>>Good morning fellow nerds, and welcome back to AWS re:Invent. We are live from the show floor here in Las Vegas, Nevada. My name is Savannah Peterson, joined with my fabulous co-host John Furrier. Day two keynotes are rolling. >>Yes. >>What do you think of this? >>This is the day where everything comes, the cork gets popped off the bottle, all the announcements start flowing out tomorrow. You'll hear machine learning from Swami, a lot more in depth around AI probably. And then developers with Werner Vogels, the CTO who wrote the seminal paper in the early two thousands around web services. So again, just another great year of next-level cloud. Big discussion of data in the keynote; the bulk of the time was talking about data and business intelligence, making business transformation easier. Is that what people want? They want the easy button, and we're gonna talk a lot about that in this segment. I'm really looking forward to this interview. >>Easy button. We all want the >>Easy, we want the easy button. >>I love that you brought up champagne. It really feels like a champagne moment for the AWS community as a whole. Being here on the floor feels a bit like the before times. I don't want to jinx it. Our next guest, Scott Castle, from Sisense. Thank you so much for joining us. How are you feeling? How's the show for you going so far? >>Oh, this is exciting. It's really great to see the changes that are coming in AWS. It's great to see the excitement and the activity around how we can do so much more with data, with compute, with visualization, with reporting. It's fun. >>It is very fun. I just got a note, I think you have the coolest last name of anyone we've had on the show so far, Castle. Oh, thank you. I'm here for it. I'm sure no one's ever said that before. But just in case our audience isn't familiar, tell us about Sisense. >>Sisense is an embedded analytics platform. We're used to take the queries and the analysis that you can power off of Aurora and Redshift and everything else, and bring it to the end user in the applications they already know how to use. So it's all about embedding insights into tools. >>Embedded has been a real theme. Nobody wants to, I keep using the analogy of multiple tabs, nobody wants to have to leave where they are. They want it all to come to them. Yep. Now, this space is older than I think everyone at this table, BI's been around since 1958. Yep. How do you see Sisense playing a role in the evolution there, now that we're in a different generation of analytics? >>Yeah, I mean, BI started, as you said, in '58 with Peter Luhn's paper that he wrote for IBM, and it became popular in the late eighties and early nineties. And that was gen-one BI, that was Cognos and Business Objects and Lotus 1-2-3, think green-and-black-screen days. And the way things worked back then is, if you ran a business and you wanted to get insights about that business, you went to IT with a big check in your hand and said, Hey, can I have a report? And they'd come back with, here's a report. And it wasn't quite right, so you'd go back and cycle, cycle, cycle, and eventually you'd get something. And it wasn't great, it wasn't all that accurate, but it's what we had. And then that whole thing changed in about 2004, when self-service BI became a thing. And the whole idea was, instead of going to IT with a big check in your hand, how about you make your own charts? >>And that was totally transformative. Everybody started doing this and it was great.
And it was all built on semantic modeling and having very fast databases and data warehouses. Here's the problem: the tools to get to those insights needed to serve both business users, like you and me, and also power users who could do a lot more complex analysis and transformation. And as the tools got more complicated, the barrier to entry for everyday users got higher and higher, to the point where now, look at Gartner and Forrester and IDC this year, they're all reporting the same statistic: between 10 and 20% of knowledge workers have learned business intelligence, and everybody else is just waiting in line for a data analyst or a BI analyst to get a report for them. And that's why the focus on embedded is suddenly showing up so strong, because little startups have been putting analytics into their products. People are seeing, oh my, this doesn't have to be hard. It can be easy, it can be intuitive, it can be native. Well, why don't I have that for my whole business? So suddenly there's a lot of focus on, how do we embed analytics seamlessly? How do we embed the investments people make in machine learning and data science? How do we bring those back to the users who can actually operationalize that? Yeah. And that's what Sisense does. Yeah. >>Yeah, it's interesting. Savannah, you know, data processing used to be what the IT department was called back in the day, data processing. Now data processing is what everyone wants to do. There's a ton of data. We saw the keynote this morning with Adam Selipsky; there was almost a standing ovation, big applause for his announcement around ML-powered forecasting with QuickSight Q. My point is, people want automation. They want to have this embedded semantic layer in where they are, not having all the process of ETL or all the muck that goes on with aligning the data, all this stuff that goes on. How do you make it easier? >>Well, to be honest, I would argue that they don't want that. I think they think they want that, 'cause that feels easier. But what users actually want is the insight, right when they are about to make a decision. If you have an ML-powered forecast, and Sisense has had that built in for years, you don't need it two weeks before, or a week after in a report somewhere. You need it when you're about to decide, do I hire more salespeople, or do I put a hundred grand into a marketing program? It's putting that insight at the point of decision that's important. And you don't wanna be waiting to dig through a lot of infrastructure to find it. You just want it when you need it. >>What's the alternative, from a time standpoint? So real-time insight, which is what you're saying. Yep. What's the alternative? If they don't have that, what's the alternative? >>The alternative is what we are currently seeing in the market. You hire a bunch of BI analysts and data analysts to do the work for you, and you hire enough that your business users can ask questions and get answers in a timely fashion. And by the way, if you're paying attention, there aren't enough data analysts in the whole world to do that. Good luck. >>I really empathize with that. I used to work for a 3D printing startup, and I have, I mean, I would call them PTSD flashbacks, of standing behind our BI guy with my list of queries and things that I wanted to learn more about our e-commerce platform and our marketplace and community.
And it would take weeks and I mean this was only in 2012. We're not talking 1958 here. We're talking, we're talking, well, a decade in, in startup years is, is a hundred years in the rest of the world life. But I think it's really interesting. So talk to us a little bit about infused and composable analytics. Sure. And how does this relate to embedded? Yeah. >>So embedded analytics for a long time was I want to take a dashboard I built in a BI environment. I wanna lift it and shift it into some other application so it's close to the user and that is the right direction to go. But going back to that statistic about how, hey, 10 to 20% of users know how to do something with that dashboard. Well how do you reach the rest of users? Yeah. When you think about breaking that up and making it more personalized so that instead of getting a dashboard embedded in a tool, you get individual insights, you get data visualizations, you get controls, maybe it's not even actually a visualization at all. Maybe it's just a query result that influences the ordering of a list. So like if you're a csm, you have a list of accounts in your book of business, you wanna rank those by who's priorities the most likely to churn. >>Yeah. You get that. How do you get that most likely to churn? You get it from your BI system. So how, but then the question is, how do I insert that back into the application that CSM is using? So that's what we talk about when we talk about Infusion. And SI started the infusion term about two years ago and now it's being used everywhere. We see it in marketing from Click and Tableau and from Looker just recently did a whole launch on infusion. The idea is you break this up into very small digestible pieces. You put those pieces into user experiences where they're relevant and when you need them. And to do that, you need a set of APIs, SDKs, to program it. But you also need a lot of very solid building blocks so that you're not building this from scratch, you're, you're assembling it from big pieces. >>And so what we do aty sense is we've got machine learning built in. We have an LQ built in. We have a whole bunch of AI powered features, including a knowledge graph that helps users find what else they need to know. And we, we provide those to our customers as building blocks so that they can put those into their own products, make them look and feel native and get that experience. In fact, one of the things that was most interesting this last couple of couple of quarters is that we built a technology demo. We integrated SI sensee with Office 365 with Google apps for business with Slack and MS teams. We literally just threw an Nlq box into Excel and now users can go in and say, Hey, which of my sales people in the northwest region are on track to meet their quota? And they just get the table back in Excel. They can build charts of it and PowerPoint. And then when they go to their q do their QBR next week or week after that, they just hit refresh to get live data. It makes it so much more digestible. And that's the whole point of infusion. It's bigger than just, yeah. The iframe based embedding or the JavaScript embedding we used to talk about four or five years >>Ago. APIs are very key. You brought that up. That's gonna be more of the integration piece. How does embedable and composable work as more people start getting on board? It's kind of like a Yeah. A flywheel. Yes. What, how do you guys see that progression? Cause everyone's copying you. 
We see that, but this is a, this means it's standard. People want this. Yeah. What's next? What's the, what's that next flywheel benefit that you guys coming out with >>Composability, fundamentally, if you read the Gartner analysis, right, they, when they talk about composable, they're talking about building pre-built analytics pieces in different business units for, for different purposes. And being able to plug those together. Think of like containers and services that can, that can talk to each other. You have a composition platform that can pull it into a presentation layer. Well, the presentation layer is where I focus. And so the, so for us, composable means I'm gonna have formulas and queries and widgets and charts and everything else that my, that my end users are gonna wanna say almost minority report style. If I'm not dating myself with that, I can put this card here, I can put that chart here. I can set these filters here and I get my own personalized view. But based on all the investments my organization's made in data and governance and quality so that all that infrastructure is supporting me without me worrying much about it. >>Well that's productivity on the user side. Talk about the software angle development. Yeah. Is your low code, no code? Is there coding involved? APIs are certainly the connective tissue. What's the impact to Yeah, the >>Developer. Oh. So if you were working on a traditional legacy BI platform, it's virtually impossible because this is an architectural thing that you have to be able to do. Every single tool that can make a chart has an API to embed that chart somewhere. But that's not the point. You need the life cycle automation to create models, to modify models, to create new dashboards and charts and queries on the fly. And be able to manage the whole life cycle of that. So that in your composable application, when you say, well I want chart and I want it to go here and I want it to do this and I want it to be filtered this way you can interact with the underlying platform. And most importantly, when you want to use big pieces like, Hey, I wanna forecast revenue for the next six months. You don't want it popping down into Python and writing that yourself. >>You wanna be able to say, okay, here's my forecasting algorithm. Here are the inputs, here's the dimensions, and then go and just put it somewhere for me. And so that's what you get withy sense. And there aren't any other analytics platforms that were built to do that. We were built that way because of our architecture. We're an API first product. But more importantly, most of the legacy BI tools are legacy. They're coming from that desktop single user, self-service, BI environment. And it's a small use case for them to go embedding. And so composable is kind of out of reach without a complete rebuild. Right? But with SI senses, because our bread and butter has always been embedding, it's all architected to be API first. It's integrated for software developers with gi, but it also has all those low code and no code capabilities for business users to do the minority report style thing. And it's assemble endless components into a workable digital workspace application. >>Talk about the strategy with aws. You're here at the ecosystem, you're in the ecosystem, you're leading product and they have a strategy. We know their strategy, they have some stuff, but then the ecosystem goes faster and ends up making a better product in most of the cases. 
If you compare, and I know they'll take me to school on that, but that's pretty much what we report on. Mongo's doing a great job, they have databases. So you kind of see this balance. How are you guys playing in the ecosystem? What's the feedback? What's it like? What's going on? >>AWS is actually really our best partner. And the reason why is because AWS has been clear for many, many years: they build componentry, they build services, they build infrastructure, they build Redshift, they build all these different things, but they need vendors to pull it all together into something usable. And fundamentally, that's what Sisense does. I mean, we didn't invent SQL, right? We didn't invent the other underlying query engines either. These are underlying analytics technologies, but we're taking the bricks out of the briefcase and assembling them into something that users can actually deploy for their use cases. And so for us, AWS is perfect, because they focus on the hard bits, the underlying technologies; we assemble those and make them usable for customers, and we get the distribution. And of course AWS loves that, 'cause it drives more compute and it drives more consumption. >>How much do they pay you to say that in the keynote? >>That was a wonderful pitch. >>Absolutely. We always say, hey, they've got a lot of great goodness in the cloud, but they're not always the best at the solutions they're trying to bring out, and you guys are making these solutions for customers. That resonates with what they've got at Amazon. >>For example, last year we did a technology demo with Comprehend, where we put Comprehend inside of a semantic model, and we would compile it and then send it back to Redshift. It takes Comprehend, which is a very cool service, but you kind of gotta be a coder to use it. >>I've been hearing a lot of hype about the semantic layer. What is going on with that? >>The semantic layer is what connects the actual data, the tables in your database, with how they're connected and what they mean, so that a user like you or me who's saying, I want a bar chart with revenue over time, can just work with revenue and time. And the semantic layer translates between what we said and what the database knows about. >>So it speaks English and then converts it to data language. >>Exactly. >>Right. >>Yeah, it's facilitating the exchange of information. And I love this. I like that you actually talked about it in the beginning, the knowledge graph, and helping people figure out what they might not know. I am not a BI analyst by trade, and I don't always know what's possible to know. And I think it's really great that you're doing that education piece. I'm sure, especially working with AWS companies, depending on their scale, that's gotta be a big part of it. How much does the community play a role in your product development? >>It's huge, because I'll tell you, one of the challenges in embedding is someone sees an amazing experience in Outreach or in Seismic and says, I want that, and I want it to be exactly the way my product is built, but I don't wanna learn a lot. And so what you want to do is have a community of people who have already built things, who can help lead the way. And our community, we launched a new version of the Sisense community in early 2022, and we've seen 450% growth in that community. And we've gone from an average of one response, >>450%.
>>450%! I just want to put a little exclamation point on that. That's awesome. >>We've tripled our organic activity. So now if you post in the Sisense community, it used to be you'd get one response, maybe from us, maybe from a customer. Now it's up to three, and it's continuing to trend up. >>It's amazing how much people are willing to help each other if you just get into the platform. >>Do it. It's great. I mean, business is so competitive. >>I think it's time for the Instagram challenge, the reels. John, we have a new thing we're going to run by you. We just call it the bumper sticker for re:Invent instead of the Instagram reel. If we were going to do an Instagram reel for 30 seconds, what would be your take on what's going on this year at re:Invent and what you guys are doing? What's the most important story you would share with folks on Instagram? >>You know, what's been interesting to me is the story with Redshift composable, sorry, no, composable, Redshift Serverless. One of the things I've been seeing... >>We know you're thinking about composable a lot. It's in there, it's in your mouth. >>So the fact that Redshift Serverless is now kind of becoming the de facto standard changes something for my customers. Because one of the challenges with Redshift that I've seen in production is that as people use it more, you've got to get more boxes, and you have to manage that. The fact that serverless is now available, and is the default, means people are just seeing Redshift as a very fast, very responsive repository. And that plays right into the story I'm telling, because I'm telling them it's not that hard to put some analysis on top of things. So maybe it's a narrow Instagram reel, but it's an important one. >>Yeah. And that makes it better for you, because you get to embed that, and you get access to better data, faster data, higher quality, relevant, updated. >>Yep. As it goes out to that 80% of knowledge workers, they have a consumer-grade expectation of experience. They're expecting that five-millisecond response time. They're not waiting 2, 3, 4, 5, 10 seconds; they're not trained on the old expectations. And so it matters a lot. >>Final question for you. Five years out from now, if things progress the way they're going, with more innovation around data, this front end being very usable, the semantic layer kicking in, Lambda and serverless coming in and helping out along the way, what's the experience going to look like for a user? What's it in your mind's eye? >>I think it shifts almost every role in a business towards being a quantitative one: hey, this is what I saw, this is my hypothesis, and this is what came out of it, so here's what we should do next. I'm really excited to see that sort of scientific method move into more functions in the business, because for decades it's been the domain of a few people like me doing strategy, but now I'm seeing it in CSMs, in support people, in sales engineers and line engineers. That's going to be a big shift. >>Awesome. Thank you, Scott. Thank you so much. This has been a fantastic session. We wish you the best at Sisense. John, always a pleasure to share the stage with you. Thank you to everybody who's tuning in; tell us your thoughts.
We're always eager to hear what features have got you most excited. And as you know, we will be live here from Las Vegas at re:Invent, from the show floor, 10 to 6 all week except for Friday; we'll give you Friday off. With John Furrier, my name's Savannah Peterson. We're theCUBE, the leader in high tech coverage.
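As a rough illustration of the integration pattern Scott describes earlier in the conversation (scoring text with Comprehend while Redshift, including a serverless workgroup, handles the data), here is a hedged sketch using boto3. The workgroup, database, table, and column names are assumptions made up for the example, and this is not the compiled semantic-model integration Sisense demoed; it only shows the two AWS APIs involved.

```python
import time
import boto3

# Placeholder names: the serverless workgroup, database, table, and columns
# below are assumptions for this illustration.
redshift = boto3.client("redshift-data")
comprehend = boto3.client("comprehend")

# 1. Run a query against a Redshift Serverless workgroup (no cluster to manage).
run = redshift.execute_statement(
    WorkgroupName="analytics-serverless",
    Database="dev",
    Sql="SELECT review_id, review_text FROM product_reviews LIMIT 50;",
)

# 2. Poll until the statement finishes (simplified; real code would back off
# and handle failures more gracefully).
while True:
    status = redshift.describe_statement(Id=run["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status != "FINISHED":
    raise RuntimeError(f"query did not finish: {status}")

# 3. Score each review with Comprehend and keep the label next to the id.
scored = []
for row in redshift.get_statement_result(Id=run["Id"])["Records"]:
    review_id, text = row[0]["stringValue"], row[1]["stringValue"]
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    scored.append((review_id, sentiment["Sentiment"]))

print(scored[:5])
```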

Published Date : Nov 29 2022

Dev Ittycheria, MongoDB | Cube Conversation: Partner Exclusive


 

>>Hi, I'm John Furrier with theCUBE. We're here for a special exclusive conversation with Dev Ittycheria, the CEO of MongoDB, a well-established, leading platform that has been around for well over a decade and continues to become the platform of choice for high-performance data. This modern data stack that's emerging is a big part of the story here at re:Invent 2022, on top of an already high-performing cloud with chips and silicon, specialized instances. The world's going to be getting faster, smaller, higher performance, lower cost, specialized. Dev, thanks for taking the time with me today. >>John, it's great to be here. Thank you for having me. >>Do you see yourself as an ISV, or do you just go with that because it's kind of a nomenclature? >>When I think of the term ISV, I think of the notion of someone building an end solution for a customer to get something done, whereas what we're building is essentially a developer data platform, and we have thousands of ISVs who build software applications on our platform. So how could we be an ISV? By definition, we enable people to do so many different things, and they can be the largest companies in the world trying to transform their business or startups trying to disrupt existing industries or create new ones. That's how our customers view MongoDB, and the whole Atlas platform basically enables them to do some amazing things. The reason for that is that we believe what we're enabling developers to do is reduce the friction and the work required to build modern applications, through the document model, which is really intuitive to the way developers think and code, and through the distributed nature of the platform. Things like sharding: no other company on the planet offers the capabilities we do to enable people to build the most highly performant and scalable applications. What we also do is enable people to run different types of workloads on our platform. So we have transactional, we have search, we have time series, we enable people to do things like sophisticated device synchronization from the edge to the back end, we do graph, and we do real-time analytics. Being able to consolidate all of that for developers on one elegant, unified platform really makes it attractive to build on MongoDB. >>You know, you guys are a featured partner of AWS, and I would speculate, I don't know if you can comment on this, but I would imagine you probably produce a lot of revenue for Amazon, because you really can't turn off EC2 when you're doing database work; you kind of crank it all the time. You guys are a top partner. How long have you been a partner with AWS? What's the relationship? >>The relationship's been strong. Actually, Amazon spoke at one of our first user conferences in 2013, and since then we've been working together. We've been at re:Invent since essentially 2015, and we've been a premier partner, an Emerald sponsor, for the last, I think, four or five years. So we're very committed to the relationship, and I think we have a lot of things in common. We care a lot about customers, and for us, our customers are developers. We care a lot about removing friction from their day-to-day work so they can move fast, seize new opportunities, and respond to new threats. And so consequently, I think the partnership, by the nature of our common objectives, has really come together. >>Talk about the journey of Mongo. You look back at the history, you go back to the old LAMP stack days, right? The developer traction is really well known. And I remember the conversations over the years: Mongo doesn't scale. Every year we heard something along those lines, because it just kept scaling. I heard the same thing about AWS back in the 2013 timeframe: oh, it's really not ready for prime time, it's for hobbyists, not so much builders, maybe startup cloud. But that developer traction has translated. Can you take us through the journey of Mongo, where it is now, and kind of look back and take us through the state of the art today? >>Right. So just for those in your audience who don't know too much about MongoDB, I'll start with the background. The company was founded by developers. It was basically the CTO and some key developers from DoubleClick who really saw the challenges and the limitations of the relational database architecture, because they were trying to serve billions of ads per day and constantly had to work around the constraints of a relational database. So they essentially decided, why don't we just build a database that we'd want to use? And that was the catalyst for starting MongoDB. The first thing they focused on was, rather than a tabular data structure, a document data structure. Why documents? Because it's much more natural and intuitive to work with data in documents: you can set parent-child relationships, and how you think about the relationships in the data is much more natural in a document than trying to connect data across hundreds of different tables. And so that enabled developers to just move so much faster. The second thing they focused on was building a truly distributed architecture, not some adjunct architecture that maybe made the existing one a little bit more scalable. They really built, from the ground up, a truly distributed architecture, where you can do native replication, you can do sharding, and you can do it on a global basis. That was the other profound thing they did. And since then, what we've also done is recognize that the document model is truly a superset of other models. So we've enabled other capabilities: search, joins, very transaction-intensive use cases on MongoDB where we're fully ACID compliant, so you have the highest forms of data guarantees; very sophisticated things like time series; device synchronization; and real-time analytics, because we can carve off read-only nodes to read and query data in real time rather than having to offload that data into a data warehouse. So that enables developers to build a wide variety of applications on MongoDB, and they get one unified developer interface. It's highly elegant and seamless, and essentially the cost and tax of managing multiple point tools goes away. >>Yeah, we're seeing a lot of activity on Atlas. You know, cloud adoption really is putting a lot of pressure on these systems, and you're seeing companies in the ecosystem and AWS stepping up. You guys are doing a great job, but we're seeing a lot more acceleration around it, and staying on premises for certain use cases, yet you've got the cloud growing for workloads as well, and you get this hybrid steady state as an operational mode; I'd call that part of the classic cloud adoption track record. You guys are an example of multiple iterations in cloud, you're doing a lot more, and we're starting to see this tipping point with others, with customers coming in on that same pattern: building platforms on top of AWS, on top of the primitives, more horsepower, higher-level services, industry-specific capabilities with data. I mean, this is a new kind of cloud, kind of a next generation. You've got the classic high-performance infrastructure getting better and better, but now you've got this new application platform; it reminds me of the old ASP model, if you will. So are you seeing customers doing things differently? Can you share your reaction to this role of a new kind of SaaS platform that isn't just an application, it's deeper than that? What's going on here? We call it supercloud. >>Yeah, so essentially, what a lot of our customers are doing, and by the way, we have over 37,000 customers of all shapes and sizes, from the largest companies in the world to cutting-edge startups, is building applications on MongoDB. Why do they choose MongoDB? Because essentially it's the fastest way to innovate, and the reason it's the fastest way to innovate is that they can work with data so much more easily than on other types of architecture. The document model is profoundly a breakthrough way to work with data and make it very, very easy. So customers are building these modern applications: applications built on microservices and event-driven architectures, addressing sophisticated use cases like time series, and ultimately now they're getting into machine learning. We have a bunch of companies building machine learning applications on top of MongoDB. The reason they're doing that is, one, they get the benefit of building and working with data so much more easily than on any other platform, and it's highly scalable and performant in a way no other platform is. So literally they can run their workloads locally, in one availability zone, or basically anywhere in the world, and we also offer multi-cloud capabilities, which I can get into later. >>Let's talk about the performance side. I know from speaking with Amazon folks that every year it's the same story: they're really working on the physics, they're getting the chips, they want to squeeze as much energy out of them as they can. I've never met a developer who said they want to run their workload on a slower platform or slower hardware. "We want slower hardware," said no developer, right? No one wants to do that. >>Correct. >>So you guys have a lot of experience tuning with Graviton instances, we're seeing a lot more AWS EC2 instance types, and we're seeing a lot more integrated end-to-end stories. Data is now security; it's tied into the modern, hybrid data stack. There's a lot going on around hardware performance specialization, the role of data, and this modern data stack emerging. What are your thoughts on that? >>I think if you had asked me when the cloud started going vogue, in the later part of the last decade, and told me that sitting here 12 or 15 years later we would be talking about chip processing speeds, I'd probably have thought, nah, we would have moved on by then. But what's really clear is that customers, to your point, care about performance, and they care about price performance, right? So with AWS's investments in Graviton, we have actually deployed a significant portion of our Atlas fleet on Amazon on Graviton. They've built other chipsets like Trainium and Inferentia for training models and running inferences, and they're doing things like Nitro. What that really speaks to is that the cloud providers are focusing on the price performance of, as you call it, their primitives and their infrastructure, and that infrastructure layer is still very, very important. And if you look at their revenue, about 60 to 70% of it comes from that pure infrastructure. So to your point, they can't offer a second-class solution and still win. And given that they're now seeing a lot of competition from Azure, which is building its own chipsets, and from Google, which is obviously already doing that and building specialized chipsets for machine learning, you're seeing these cloud providers compete. They have to really compete to make their platform the most performant and the most price-competitive in the marketplace, which gives us a great platform to build on to enable developers to build these incredibly performant applications that customers now demand. >>I think that's a really great point. It's so funny, Dev, because I remember when we stopped talking speeds and feeds; we weren't talking about boxes anymore. That was old data-center thinking, where speeds and feeds were super important. But we're kind of coming back to that in the cloud now, in distributed architecture: as you put your platforms out there for developers, you have to run fast. You can't give the developer subpar performance; they'll go somewhere else. Again, no one says, I want to go on the slower platform, unless it's some sort of policy based on price. For the most part, it's got to run fast. So you've got the tale of two clouds going on here: you've got Amazon's classic IaaS, which keeps getting faster under the hood, and then you've got the new abstraction layers of higher-level services. That's where you guys are bridging this new generational shift, where it's like, hey, I can run a headless application, I can run a SaaS app that's refactored with data. So you're seeing a lot more innovation with developers running stuff in the CI/CD pipeline that was once IT, and you're seeing security and data operations emerging as a structural change in how companies are transforming on the business side. What's your reaction to that business transformation and the role of the developer? >>Right. So I have to give amazing kudos to AWS and the Amazon team for what they've built. Obviously, they're the ones who created the cloud industry, and they continue to push the innovation in the space. Today they have over 300 services, and obviously no startup today is building anything not on the cloud, because there are so many building blocks to start with. But what we have found from talking to our customers is that, in some ways, the onus is still on the customer to figure out which building block to use and how to stitch together the applications and solutions they want to build. And what we have done is take an opinionated point of view and say: we will enable you to do that using one data model, one elegant user experience, one way to address the broadest set of use cases. Amazon today offers, I think, 17 or 18 different types of databases. We don't think having a tool for every job makes sense, because over time the tax and cost of learning, managing, and supporting those different technologies just doesn't make a lot of sense, or becomes cost-prohibitive. So we think offering one data model and one elegant experience is a better way. But clearly customers have choice. They can use Amazon's primitives and those second-layer services, as you described them, or they can use us. Fortunately, we've seen a lot of customers come to us with our approach, and so does Amazon. And I have to give kudos again: Amazon is very customer obsessed, so we have a great relationship with them, both technically, in terms of the product integrations we do, and in working with them in the field on joint customer opportunities. >>Speaking of which, while you mentioned that, I want to ask you: how is the marketplace relationship going with AWS? Some of the partners are really seeing great economics and joint selling, or them selling your stuff, so there's a real revenue pop there in that relationship. Can you comment on that? >>We have been working in the marketplace for many years now, more from a field point of view, where customers could leverage their existing commitments to AWS and apply Atlas spend against those commitments. There were also some sales incentives for people in the field to work together, so that everyone won should we collectively win a customer. What we recently announced is a pay-as-you-go offering, where a customer on the Amazon marketplace can spin up an Atlas instance with no commitment. It's so easy. We're just pushing the envelope to reduce the friction for people to use Atlas on AWS, and it's working very well. The uptake has been very strong, and we feel like we're just getting started, because we're so excited about the results we're seeing. >>You know, one of the things that's not core in the keynote theme, but I think is an underlying message that's clear in the industry, is developer productivity. You said making things easy is a big deal: self-service, getting in and trying things. That's what developer-friendly tools and platforms are like. So I have to ask you, because this comes up a lot in our business conversations: if you take the digital transformation concept to its completion, as a thought exercise, you completely transform a company with technology; that is the business transformation outcome. Take it to completion and you'd say, okay, the company is the app, the company is the data. It's not a department serving the business, it is the business. I think this is the next big mountain to climb: companies that do transform are technology companies, not a department like IT. So a lot of companies are saying, wait a minute, why would we have a department? It should be the company. What's your view on this? >>Yeah, so I've had the good fortune of being able to talk to thousands of customers all over the world. And you know, John, one thing they never tell me is that they're innovating too quickly. In fact, they always tell me the reverse: they tell me all the obstacles and impediments they have to being able to move fast. So one of the reasons they gravitate to MongoDB is the speed with which they can build applications; to your point, developer productivity. And by definition, developer productivity is a proxy for innovation. The faster your developers can move, the faster they can push out code, the faster they can iterate and build new solutions or add capabilities to existing applications, the faster you can innovate, either to seize new opportunities or to respond to new threats in your business. And that resonates with every C-level executive. And to your point, the developer is not some side hustle they think about once in a while; it's core to the business. Developers have amassed an enormous amount of power and influence. Their engineering teams are front and center in how they think about building capabilities and building their business. That's also, to your point, why every company is now becoming a software company, because software enables, defines, or creates almost every company's value proposition. >>You know, it makes me smile, because I love operating systems; one of my hobbies in college was systems programming, and I remember the network operating systems, and now it's the cloud. Everything's got specialized capabilities, and that's a big theme here at re:Invent. If you look at the announcements Monday night with Peter DeSantis, you've got new instances, new chips. This whole specialized-component idea is like an engine: you've got a core and you've got other subsystems. This is going to be an integral part of how companies architect their platforms, or what Adam calls the landing zone, or whatever they want to call it. You've got to start seeing new architectural thinking from companies. Can you share your experience on how companies should look at this opportunity, with this plethora of goodness on the hardware side, chips and instances? Because now you can mix and match; you've got everything you need to not quite roll your own, but really build foundational, high-performance capabilities. >>Yeah, so I think this is where Amazon is really enabling all companies, including companies like MongoDB, to push the envelope on innovation. For example, here's the next big hurdle for us: I think we've seen two big platform shifts over the last 15 years, the shift to mobile and the shift to cloud. I believe the next big platform shift is going from dumb apps to smart apps, where you're building in machine learning, AI, and very sophisticated automation. When you start automating human decision making, rather than looking at a dashboard and saying, okay, I see the data, now I have to do this, you can automate that into your applications and, leveraging real-time data, make your applications that much smarter. And that ultimately becomes a developer challenge. So we feel really good about our position in taking advantage of those next big trends in software, leveraging the price-performance curves that Amazon continues to push in terms of hardware performance, networking performance, and storage to build that next generation of modern applications. >>Okay, so let me get this straight. You have next-generation intelligent smart apps, and you have AI generative solutions coming around the corner. This is a pretty good position for Mongo to be in with data. I mean, this is what you do; you're right in the middle of the action. What's it like? You must be trying to shake the world awake, and the world's starting to wake up now to this. >>Well, we're really excited and bullish about the future. We think we're well positioned because, to your point, we have amassed an amazing amount of developer mindshare. We are the most popular modern data platform out there; there are developers in almost every corner of the planet using us to do something. And to your point, leveraging data and these advances in machine learning and AI, we think the more AI becomes democratized, not done by a bunch of data scientists sitting in some corner office but with developers having the tools to build these very sophisticated, smart applications, the better we're positioned. So that's obviously going to be a focus for us. Frankly, I think this is going to be a 10-to-15-year run, and we're just getting started in this whole area. >>I think you guys are really well positioned, and that's a great point. Adam mentioned to me, and he said it on stage, that the role of the data analyst kind of goes away: everyone's a data analyst, right? You'll still see specialization in core data engineering, which is kind of like an SRE role for data, so data ops and data as code are a big deal, making data applications. So again, exciting times, and you guys are well positioned. If you had to bumper-sticker the event this week here at re:Invent, how would you categorize this point in time? Adam's a great leader; he's going to help educate customers on how to use technology for business advantage and transformation. Andy did a great job making technology great and innovative and setting the table; Adam's got to bring it to the enterprises and businesses. So it's an interesting point in time we're in now. How would you categorize this year's re:Invent? >>Right. I think the tech world is pivoting towards what I'd call rationalization, or cost optimization. Over the last 10 years, obviously, it was all about speed, speed, speed. I think people still value speed, but they want to do it with some sort of predictable cost model, and I think you're going to see a lot more focus on cost and cost optimization. That's where we think having one platform is, by definition, a vendor-consolidation way for people to cut costs, so they can still move fast but don't have to incur the tax of using a whole bunch of different point tools. So we think we're well positioned, and the bumper sticker I think about is essentially: do more for less with MongoDB. >>Yeah, and the developers are on the front lines. Great stuff. You guys are a great partner, a top partner at AWS, and it's a great reflection on where you've been, but really on where you are now, and a great opportunity. Dev Ittycheria, thank you so much for spending the time. It's been great following Mongo and the continued rise of developers on the front lines really driving the business, and they are, I know, driving the business. I think they're going to continue: smart apps, intelligent apps, AI, generative apps are coming. I mean, this is real. >>Thanks, John. It's great speaking with you. >>Yeah, thanks. Thanks so much. Okay.

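For readers who have not worked with the document model Dev describes above, here is a small illustrative sketch using PyMongo. The connection string, collection, and field names are placeholders invented for the example; the point is that a customer and its related contacts and orders live in one nested document, and queries follow the same shape as the data, rather than being joined across several tables.

```python
from pymongo import MongoClient

# Assumes a locally reachable MongoDB instance; connection details are placeholders.
client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Parent-child data modeled as one nested document instead of several joined tables.
db.customers.insert_one({
    "name": "Acme Corp",
    "tier": "enterprise",
    "contacts": [
        {"name": "Dana", "role": "billing"},
        {"name": "Lee", "role": "engineering"},
    ],
    "orders": [
        {"order_id": "A-1001", "total": 1250.00, "items": ["widget", "gizmo"]},
        {"order_id": "A-1002", "total": 310.50, "items": ["widget"]},
    ],
})

# Queries use the same nested structure, so no joins are required.
big_spenders = db.customers.find({"orders.total": {"$gt": 1000}})
for doc in big_spenders:
    print(doc["name"])
```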
Published Date : Nov 24 2022

Kirk Haslbeck, Collibra, Data Citizens 22


 

(atmospheric music) >>Welcome to theCUBE coverage of Data Citizens 2022, Collibra's customer event. My name is Dave Vellante. With us is Kirk Haslbeck, who's the Vice President of Data Quality at Collibra. Kirk, good to see you. Welcome. >>Thanks for having me, Dave. Excited to be here. >>You bet. Okay, we're going to discuss data quality and observability. It's a hot trend right now. You founded a data quality company, OwlDQ, and it was acquired by Collibra last year. Congratulations. And now you lead data quality at Collibra. So we're hearing a lot about data quality right now. Why is it such a priority? Take us through your thoughts on that. >>Yeah, absolutely. It's definitely exciting times for data quality, which, you're right, has been around for a long time. So why now, and why is it so much more exciting than it used to be? It could seem a bit stale, but we all know that companies use more data than ever before, the variety has changed, and the volume has grown. And while that remains true, there are a couple of other hidden factors at play in why this is becoming so important now. You could break it down simply: think about if you and I, Dave, were going to build a new healthcare application and monitor the heartbeat of individuals. Imagine if we get that wrong, what the ramifications could be, what those incidents would look like. Or maybe we try to build a new trading algorithm with a crossover strategy, where the 50-day average crosses the 10-day average. Imagine if the data underlying the inputs to that is incorrect; we would probably have major financial ramifications. So it kind of starts there: everybody's realizing that we're all data companies, and if we're using bad data, we're likely making incorrect business decisions. But I think there are a couple of other things at play. I bought a car not too long ago, and my dad called and asked, "How many cylinders does it have?" And I realized in that moment that I might have failed him, because I didn't know. I used to ask those types of questions about anti-lock brakes and cylinders, and whether it's manual or automatic. And I realized I now just buy a car that I hope works; it's so complicated, with all the computer chips, that I really don't know that much about it. And that's what's happening with data. We're just loading so much of it, and it's so complex, that the way companies consume it is that the IT function brings in a lot of data and then syndicates it out to the business. And it turns out that the individuals loading and consuming all of this data for the company may not actually know that much about the data itself, and that's not even their job anymore. We'll talk more about that in a minute, but that's really what's setting the foreground for this observability play, and why everybody's so interested: we're becoming less close to the intricacies of the data, and we just expect it to always be there and be correct. >>You know, the other thing too about data quality, and for years we did the MIT CDOIQ event (we didn't do it last year; COVID messed everything up), but the observation I would make there, and I'd like your thoughts on it, is that data quality used to be information quality, this back-office function, and then it became sort of front office in financial services, government, and healthcare, these highly regulated industries. And then the whole chief data officer thing happened, and people flipped the bit from data as a risk to data as an asset. And now, as we say, we're going to talk about observability. So the whole quality issue has really become front and center, because data is so fundamental, hasn't it? >>Yeah, absolutely. I mean, let's imagine we pull up our phones right now and I go to my favorite stock ticker app and check out the Nasdaq market cap. I really have no idea if that's the correct number. I know it's a number, it looks large, it's in a numeric field. And that's kind of what's going on: there are so many numbers, and they're coming from all these different sources and data providers, and they're getting consumed and passed along. But there isn't really a way to tactically put controls on every number and metric across every field we plan to monitor. With the scale that we've achieved, even in the early days before Collibra, and what's been so exciting, is that we have these types of observation techniques, these data monitors, that can actually track the past performance of every field at scale. And why that's so interesting, and why I think the CDO is listening intently to this topic nowadays, is that maybe we can surface all of these problems with the right data observability solution at the right scale, and just be alerted on breaking trends. So we're shifting away from this world where you must write a condition, and when that condition breaks it's what was always known as a break record, toward breaking trends and root cause analysis. And is it possible to do that with less human intervention? I think most people are seeing now that it's going to have to be a software tool and a computer system; it's not ever going to be based on one or two domain experts anymore. >>So how does data observability relate to data quality? Are they two sides of the same coin? Are they cousins? What's your perspective on that? >>Yeah, it's super interesting. It's an emerging market, so the language is changing and a lot of the topic areas are changing. The way I like to break it down, because the lingo is constantly a moving target in this space, is breaking records versus breaking trends. I could write a condition that says when this thing happens it's wrong, and when it doesn't, it's correct. Or I could look for a trend, and I'll give you a good example. Everybody's talking about fresh data and stale data, and why would that matter? Well, if your data never arrived, or only part of it arrived, or it didn't arrive on time, it's likely stale, and there will not be a condition you could write that would show you all the goods and the bads. That was the traditional approach, the data quality break record; the modern approach is: you lost a significant portion of your data, or it did not arrive in time to make that decision accurately, and that's a hidden concern. Some people call this freshness; we call it stale data. But it all points to the same idea: the thing you're observing may not be a data quality condition anymore, it may be a breakdown in the data pipeline. And with thousands of data pipelines in play for every company out there, there's more than a couple of these happening every day. >>So what's the Collibra angle on all this? You made the acquisition, you've got data quality and observability coming together, and you guys have a lot of expertise in this area. But you hear about provenance of data, you just talked about stale data, and there's the whole trend toward real time. How is Collibra approaching the problem, and what's unique about your approach? >>Well, I think where we're fortunate is our background. Myself and the team lived this problem for a long time in the Wall Street days, about a decade ago, and we saw it from many different angles. And what we came up with, before it was called data observability or reliability, was basically the underpinnings of that. So we're a little bit ahead of the curve there when most people evaluate our solution; it's more advanced than some of the observation techniques that currently exist. But we've also always covered data quality, and we believe that people want to know more, they need more insights, and they want to see break records and breaking trends together so they can correlate the root cause. We hear that all the time: "I have so many things going wrong; just show me the big picture. Help me find the thing that, if I fixed it today, would make the most impact." So we're really focused on root cause analysis, business impact, and connecting it with lineage and catalog metadata. And as that grows, you can actually achieve total data governance. At this point, with the acquisition of what was a lineage company years ago, and then my company, OwlDQ, now Collibra Data Quality, Collibra may be the best positioned for total data governance and intelligence in the space. >>Well, you mentioned financial services a couple of times, and some examples; remember the flash crash in 2010? Nobody had any idea what that was; they would just say, "Oh, it's a glitch," so they didn't understand the root cause of it. So this is a really interesting topic to me. We know at Data Citizens '22 that you're announcing new products, right? It is your yearly event. What's new? Give us a sense of what products are coming out, specifically around data quality and observability. >>Absolutely. There's always a next thing on the forefront, and the one right now is the hyperscalers in the cloud. So you have databases like Snowflake and BigQuery, and Databricks with Delta Lake, and SQL pushdown. Ultimately what that means is that a lot of people are storing and loading data even faster, in a SaaS-like model. And we've started to hook into these databases, and while we've always worked with the same databases in the past (they're supported today), we're now doing something called native database pushdown, where the entire compute and data activity happens in the database. And why that is so interesting and powerful now is that everyone's concerned with something called egress: did my data, which I've spent all this time and money with my security team securing, ever leave my hands? Did it ever leave my secure VPC, as they call it? With these native integrations that we're building, and about to unveil here as kind of a sneak peek for next week at Data Citizens, we're now doing all compute and data operations in databases like Snowflake. And what that means is that with no install and no configuration you can log into the Collibra Data Quality app and have all of your data quality running inside the database that you've probably already picked as your go-forward, secured database of choice. So we're really excited about that. And I think if you look at the whole landscape of network cost, egress cost, data storage, and compute, what people are realizing is that it's extremely efficient to do it in the way that we're about to release here next week. >>So this is interesting, because what you just described, you mentioned Snowflake, you mentioned Google, and also Databricks. You know, Snowflake has the Data Cloud; if you put everything in the Data Cloud, okay, you're cool. But then Google's got the open data cloud, if you watched Google Next, and Databricks doesn't call it the data cloud, but they have their open-source take on the same idea. So you have all these different approaches, and there's really been no way, up until now I'm hearing, to really understand the relationships between all of them and have confidence across them. It's like Zhamak Dehghani says: it should just be a node on the mesh. I don't care if it's a data warehouse or a data lake, or where it comes from; it's a point on that mesh, and I need tooling to have confidence that my data is governed and has the proper lineage and provenance. And that's what you're bringing to the table. Is that right? Did I get that right? >>Yeah, that's right. And for us, it's not that we haven't been working with those great cloud databases, but it's the fact that we can now send them the instructions, the operating ability to crunch all of the calculations, the governance, the quality, and get the answers back. And what that's doing is basically zero network cost, zero egress cost, zero latency. So when you log into BigQuery tomorrow using our tool, or Snowflake, for example, you have instant data quality metrics, instant profiling, instant lineage, and access and privacy controls, things of that nature that just become less onerous. What we're seeing is that there's so much technology out there, just like all of the major brands that you mentioned, but how do we make it easier? The future is about fewer clicks, faster time to value, faster scale, and eventually lower cost, and we think this positions us to be the leader there. >>I love this example, because we've had the debate about whether the cloud guys are going to own the world, and of course now we're seeing that the ecosystem is finding so much white space to add value and connect across clouds. Sometimes we call it supercloud, or interclouding. All right, Kirk, give us your final thoughts on the trends we've talked about and on Data Citizens '22. >>Absolutely. Well, I think one big trend is discovery and classification; we're seeing that across the board. People used to just want to know whether a field was a zip code; nowadays, with the amount of data that's out there, they want to know where everything is, where their sensitive data is, whether it's redundant, and to be told everything inside of three to five seconds. And with that comes wanting to know how fast they can get controls and insights out of their tools on all of these hyperscale databases. So I think we're going to see more one-click solutions, more SaaS-based solutions, and solutions that hopefully prove faster time to value on all of these modern cloud platforms. >>Excellent. All right, Kirk Haslbeck, thanks so much for coming on theCUBE and previewing Data Citizens '22. Appreciate it. >>Thanks for having me, Dave. >>You're welcome. All right, and thank you for watching. Keep it right there for more coverage from theCUBE. (atmospheric music)

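To make Kirk's break-records-versus-breaking-trends distinction concrete, here is a small illustrative sketch (not Collibra's implementation) of the second kind of check: rather than a hand-written rule, today's row count or arrival lag is compared against a learned baseline, and anything several deviations off the trend is flagged as a freshness or partial-load problem.

```python
from statistics import mean, stdev


def trend_alert(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's value if it falls outside the trend learned from history.

    `history` holds observed daily row counts (or arrival lags) for a dataset;
    no explicit break-record rule like "rows > 10000" is written by hand.
    """
    if len(history) < 7:          # not enough history to learn a trend yet
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) > threshold * sigma


# Example: a feed that normally lands ~50k rows a day suddenly delivers 3k,
# a stale or partial-load signal even though every individual row looks valid.
daily_rows = [49800, 50120, 50550, 49900, 51010, 50230, 49750]
print(trend_alert(daily_rows, today=3000))   # True -> raise an observability alert
```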
Published Date : Nov 2 2022

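And as a rough sketch of the pushdown pattern discussed in the interview above, the profiling arithmetic can be expressed as SQL and executed inside the warehouse, so only a one-row summary ever leaves the database. The connection details and the orders table are assumptions for illustration; this shows the general shape of the idea, not Collibra's generated SQL.

```python
import snowflake.connector  # assumes the snowflake-connector-python package

# Placeholder credentials; in practice these would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="dq_service", password="***",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)

# The quality metrics are computed where the data lives; only a tiny summary
# (counts, null rates, date range) crosses the network, so egress stays near zero.
PROFILE_SQL = """
SELECT
    COUNT(*)                          AS row_count,
    COUNT_IF(customer_id IS NULL)     AS null_customer_ids,
    COUNT(DISTINCT customer_id)       AS distinct_customers,
    MIN(order_date)                   AS earliest_order,
    MAX(order_date)                   AS latest_order
FROM orders;
"""

cur = conn.cursor()
try:
    cur.execute(PROFILE_SQL)
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```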
Collibra Data Citizens 22


 

>>Collibra is a company that was founded in 2008 right before the so-called modern big data era kicked into high gear. The company was one of the first to focus its business on data governance. Now, historically, data governance and data quality initiatives, they were back office functions and they were largely confined to regulatory regulated industries that had to comply with public policy mandates. But as the cloud went mainstream, the tech giants showed us how valuable data could become and the value proposition for data quality and trust. It evolved from primarily a compliance driven issue to becoming a lynchpin of competitive advantage. But data in the decade of the 2010s was largely about getting the technology to work. You had these highly centralized technical teams that were formed and they had hyper specialized skills to develop data architectures and processes to serve the myriad data needs of organizations. >>And it resulted in a lot of frustration with data initiatives for most organizations that didn't have the resources of the cloud guys and the social media giants to really attack their data problems and turn data into gold. This is why today for example, this quite a bit of momentum to rethinking monolithic data architectures. You see, you hear about initiatives like data mesh and the idea of data as a product. They're gaining traction as a way to better serve the the data needs of decentralized business Uni users, you hear a lot about data democratization. So these decentralization efforts around data, they're great, but they create a new set of problems. Specifically, how do you deliver like a self-service infrastructure to business users and domain experts? Now the cloud is definitely helping with that, but also how do you automate governance? This becomes especially tricky as protecting data privacy has become more and more important. >>In other words, while it's enticing to experiment and run fast and loose with data initiatives kinda like the Wild West, to find new veins of gold, it has to be done responsibly. As such, the idea of data governance has had to evolve to become more automated. And intelligence governance and data lineage is still fundamental to ensuring trust as data. It moves like water through an organization. No one is gonna use data that isn't trusted. Metadata has become increasingly important for data discovery and data classification. As data flows through an organization, the continuously ability to check for data flaws and automating that data quality, they become a functional requirement of any modern data management platform. And finally, data privacy has become a critical adjacency to cyber security. So you can see how data governance has evolved into a much richer set of capabilities than it was 10 or 15 years ago. >>Hello and welcome to the Cube's coverage of Data Citizens made possible by Calibra, a leader in so-called Data intelligence and the host of Data Citizens 2022, which is taking place in San Diego. My name is Dave Ante and I'm one of the hosts of our program, which is running in parallel to data citizens. Now at the Cube we like to say we extract the signal from the noise, and over the, the next couple of days, we're gonna feature some of the themes from the keynote speakers at Data Citizens and we'll hear from several of the executives. Felix Von Dala, who is the co-founder and CEO of Collibra, will join us along with one of the other founders of Collibra, Stan Christians, who's gonna join my colleague Lisa Martin. 
I'm gonna also sit down with Laura Sellers, the Chief Product Officer at Collibra. We'll talk about some of the announcements and innovations they're making at the event, and then we'll dig in further to data quality with Kirk Haslbeck. >>He's the vice president of Data Quality at Collibra, an amazingly smart dude who founded OwlDQ, a company that he sold to Collibra last year. Now, many companies didn't make it through the Hadoop era; they missed the industry waves and they became driftwood. Collibra, on the other hand, has evolved its business. They've leveraged the cloud, expanded the product portfolio, and leaned in heavily to some major partnerships with cloud providers, as well as receiving a strategic investment from Snowflake earlier this year. So it's a really interesting story that we're thrilled to be sharing with you. Thanks for watching, and I hope you enjoy the program. >>Last year theCUBE covered Data Citizens, Collibra's customer event. And the premise that we put forth prior to that event was that despite all the innovation that's gone on over the last decade or more with data — starting with the Hadoop movement, we had data lakes, we had Spark, the ascendancy of programming languages like Python, the introduction of frameworks like TensorFlow, the rise of AI, low code, no code, et cetera — businesses still find it's too difficult to get more value from their data initiatives. And we said at the time, maybe it's time to rethink data innovation. While a lot of the effort has been focused on more efficiently storing and processing data, perhaps more energy needs to go into thinking about the people and the process side of the equation, meaning making it easier for domain experts to gain insights from data, trust the data, and begin to use that data in new ways, fueling data products, monetization and insights. Data Citizens 2022 is back, and we're pleased to have Felix Van de Maele, who is the founder and CEO of Collibra. He's on theCUBE. We're excited to have you, Felix. Good to see you again. >>Likewise Dave. Thanks for having me again. >>You bet. All right, we're gonna get the update from Felix on the current data landscape, how he sees it, why data intelligence is more important now than ever, and get current on what Collibra has been up to over the past year and what's changed since Data Citizens 2021. And we may even touch on some of the product news. So Felix, we're living in a very different world today with businesses and consumers. They're struggling with things like supply chains, uncertain economic trends, and we're not just snapping back to the 2010s. That's clear, and that's really true as well in the world of data. So what's different in your mind in the data landscape of the 2020s from the previous decade, and what challenges does that bring for your customers? >>Yeah, absolutely. And I think you said it well, Dave, in the intro: that rising complexity and fragmentation in the broader data landscape hasn't gotten any better over the last couple of years. When we talk to our customers, that level of fragmentation, the complexity — how do we find data that we can trust, that we know we can use — has only gotten more difficult. So that trend is continuing. I think what is changing is that that trend has become much more acute.
Well, the other thing we've seen over the last couple of years is that the level of scrutiny that organizations are under with respect to data — as data becomes more mission critical, as data becomes more impactful and important — the level of scrutiny with respect to privacy, security and regulatory compliance is only increasing as well, which again is really difficult in this environment of continuous innovation, continuous change, continuously growing complexity and fragmentation. >>So it's become much more acute. And to your earlier point, we do live in a different world. The past couple of years we could probably just kind of brute force it, right? We could focus on the top line; there were enough investments to be had. I think nowadays organizations are in a very different environment, where there's much more focus on cost control, productivity, efficiency: how do we truly get value from that data? So again, I think it's just another incentive for organizations to now truly look at data and to scale data, not just from a technology and infrastructure perspective, but how do you actually scale data from an organizational perspective, right? You said it: the people and process. How do we do that at scale? And that's only becoming much more important. And we do believe that the economic environment we find ourselves in today is going to be a catalyst for organizations to really take this on more seriously, if you will, than they maybe have in the past. >>You know, I don't know when you guys founded Collibra whether you had a sense as to how complicated it was gonna get, but you've been on a mission to really address these problems from the beginning. How would you describe your mission, and what are you doing to address these challenges? >>Yeah, absolutely. We started Collibra in 2008, so in some sense in the last financial crisis. And that was really the start of Collibra, where we found product-market fit working with large financial institutions to help them cope with the increasing compliance requirements they were faced with because of the financial crisis. And kind of here we are again in a very different environment, of course, almost 15 years later, but with data only becoming more important. Our mission — to deliver trusted data for every user, every use case, and across every source — frankly has only become more important. So it's been an incredible journey over the last 14, 15 years, but I think we're still relatively early in our mission to, again, be able to provide everyone — and that's why we call it data citizens; we truly believe that everyone in the organization should be able to use trusted data in an easy manner. That mission is only becoming more important and more relevant. We definitely have a lot more work ahead of us, because we are still relatively early in that journey. >>Well, that's interesting, because in my observation it takes seven to 10 years to actually build a company, and the fact that you're still in the early days is kind of interesting. I mean, Collibra's had a good 12 months or so since we last spoke at Data Citizens. Give us the latest update on your business. What do people need to know about your current momentum? >>Yeah, absolutely.
Again, there's a long tail of organizations that are only now maturing their data practices, and we've seen that influence a lot of the business growth we've seen: broader adoption of the platform. We work with some of the largest organizations in the world, whether it's Adobe, Heineken, Bank of America, and many more. We now have over 600 enterprise customers, all industry leaders, in every single vertical. So it's really exciting to see that and to continue to partner with those organizations. On the partnership side, again, a lot of momentum in the market with some of the cloud partners like Google, Amazon, Snowflake, Databricks and others, right? As those new, modern data infrastructures and modern data architectures are definitely all moving to the cloud, it's a great opportunity for us, our partners and of course our customers to help them transition to the cloud even faster. >>And so we see a lot of excitement and momentum there. We did an acquisition about 18 months ago around data quality and data observability, which we believe is an enormous opportunity. Of course data quality isn't new, but I think there are a lot of reasons why we're so excited about quality and observability now. One is around leveraging AI and machine learning, again to drive more automation. And the second is that those data pipelines that are now being created in the cloud, in these modern data architectures, have become mission critical. They've become real time. And so monitoring and observing those data pipelines continuously has become absolutely critical, so we're really excited about that as well. And on the organizational side, I'm sure you've heard the term data mesh, something that's gaining a lot of momentum, rightfully so. It's really the type of governance that we've always believed in: federated, focused on domains, giving a lot of ownership to different teams. I think that's the way to scale data organizations, and so that aligns really well with our vision, and from a product perspective we've seen a lot of momentum with our customers there as well. >>Yeah, you know, a couple things there. I mean, the acquisition of OwlDQ — Kirk Haslbeck and their team — it's interesting. Data quality used to be this back office function, really confined to highly regulated industries. It's come to the front office; it's top of mind for chief data officers. Data mesh — you mentioned you guys are a connective tissue for all these different nodes on the data mesh. That's key. And of course we see you at all the shows. You're a critical part of many ecosystems and you're developing your own ecosystem. So let's chat a little bit about the products. We're gonna go deeper into products later on at Data Citizens 22, but we know you're debuting some new innovations, whether it's under the covers in security, or making data more accessible for people just dealing with workflows and processes, as you talked about earlier. Tell us a little bit about what you're introducing. >>Yeah, absolutely. We're super excited, a ton of innovation.
And if we think about the big theme — like I said, we're still relatively early in this journey towards that mission of data intelligence, that really bold and compelling mission — many customers are just starting on that journey, and we wanna make it as easy as possible for organizations to actually get started, because we know that's important. And for the organizations and customers that have been with us for some time, there's still a tremendous amount of opportunity to expand the platform further, and again, to make it easier to accomplish that mission and vision around the data citizen: that everyone has access to trustworthy data in a very easy way. So that's really the theme of a lot of the innovation that we're driving. >>A lot of ease of adoption and ease of use, but also: how do we make sure that Collibra becomes this kind of mission-critical enterprise platform — from a security, performance, architecture, scale and supportability perspective — so that we're truly able to deliver that kind of enterprise, mission-critical platform. So that's the big theme from an innovation perspective. From a product perspective, there's a lot of new innovation that we're really excited about. A couple of highlights. One is around the data marketplace. Again, a lot of our customers have plans in that direction: how do we make it easy? How do we make available a true kind of shopping experience, so that anybody in your organization can, in a very easy, search-first way, find the right data product, find the right dataset, that they can then consume. Usage analytics: how do we help organizations drive adoption, show them where things are working really well and where they have opportunities. Homepages, again, to make things easy for anyone in your organization to get started with Collibra. You mentioned the workflow designer; again, we have a very powerful enterprise platform. >>One of our key differentiators is the ability to really drive a lot of automation through workflows. And now we've provided a new low-code, no-code workflow designer experience, so customers can really take it to the next level. There's a lot more new product around Collibra Protect, which, in partnership with Snowflake — which has been a strategic investor in Collibra — is focused on how we make access governance easier. How are we able to make sure that, as you move to the cloud, things like access management and masking around sensitive data, PII data, are managed in a much more effective way? We're really excited about that product. There's more around data quality. Again, how do we get that deployed as easily and quickly and widely as we can? Moving that to the cloud has been a big part of our strategy. >>So we've launched more of the data quality cloud product, as well as making use of those native compute capabilities in platforms like Snowflake, Databricks, Google, Amazon and others. And so we're delivering a capability that we call pushdown: actually pushing the compute for data quality monitoring down into the underlying platform, which again, from a scale, performance and ease-of-use perspective, is gonna make a massive difference. And then, more broadly, we talked a little bit about the ecosystem. Again, integrations: we talk about being able to connect to every source.
Integrations are absolutely critical, and we're really excited to deliver new integrations with Snowflake, Azure and Google Cloud Storage as well. So there's a lot coming out. The team has been hard at work, and we are really excited about what we're bringing to market. >>Yeah, a lot going on there. I wonder if you could give us your closing thoughts. You talked about the marketplace; you think about data mesh, you think of data as product, one of the key principles; you think about monetization. This is really different from what we've been used to in data, where just getting the technology to work has been so hard. So how do you see the future? Give us your closing thoughts, please. >>Yeah, absolutely. And I think we're really at a pivotal moment, and I think you said it well. We all know the constraints and the challenges with data: how to actually do data at scale. And while we've seen a ton of innovation on the infrastructure side, we fundamentally believe that just getting a faster database is important, but it's not gonna fully solve the challenges and truly deliver on the opportunity. And that's why now is really the time to deliver this data intelligence vision, this data intelligence platform. We are still early; making it as easy as we can is our mission. And so I'm really excited to see how the market's gonna evolve over the next few quarters and years. I think the trend is clearly there. When we talk about data mesh — this kind of federated approach focused on data products — it's just another signal that we believe a lot of organizations are now at the point where they understand the need to go beyond just the technology. I really think about how we actually scale data as a business function, just like we've done with IT, with HR, with sales and marketing, with finance. That's how we need to think about data. I think now is the time, given the economic environment that we are in, with much more focus on cost control, productivity and efficiency. Now's the time we need to look beyond just the technology and infrastructure to think about how to scale data, how to manage data at scale. >>Yeah, it's a new era. The next 10 years of data won't be like the last, as I always say. Felix, thanks so much, and good luck in San Diego. I know you're gonna crush it out there. >>Thank you Dave. >>Yeah, it's a great spot for an in-person event, and of course the content post-event is gonna be available at collibra.com, and you can of course catch theCUBE coverage at thecube.net and all the news at siliconangle.com. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. >>Hi, I'm Jay from Collibra's Data Office. Today I want to talk to you about Collibra's Data Intelligence Cloud. We often say Collibra is a single system of engagement for all of your data. Now, when I say data, I mean data in the broadest sense of the word, including reference data and metadata. Think of metrics, reports, APIs, systems, policies, and even business processes that produce or consume data. Now, the beauty of this platform is that it ensures all of your users have an easy way to find, understand, trust, and access data. But how do you get started? Well, here are seven steps to help you get going.
>>One, start with the data. What's data intelligence without data? Leverage the Collibra Data Catalog to automatically profile and classify your enterprise data wherever that data lives: databases, data lakes or data warehouses, whether in the cloud or on premise. >>Two, you'll then wanna organize the data, and you'll do that with data communities. This can be by department, line of business or functional team — however your organization organizes work and accountability — and for that you'll establish community owners. Communities make it easy for people to navigate through the platform and find the data, and they help create a sense of belonging for users. An important and related side note here: we find it's typical in many organizations that data is thought of as just an asset, and IT and data offices are viewed as its owners — really the central teams performing analytics as a service for the enterprise. We believe data is more than an asset; it's a true product that can be converted to value. And that also means establishing business ownership of data, where strategy and ROI come together with subject matter expertise. >>Okay, three. Next, back to those communities: there, the data owners should explain and define their data — not just the tables and columns, but also the related business terms, metrics and KPIs. These objects, which we call assets, are typically organized into business glossaries and data dictionaries. I definitely recommend starting with the topics that are most important to the business. Four, those steps enable you and your users to have some fun with it: linking everything together builds your knowledge graph, also known as a metadata graph, by relating these assets together. For example, linking a data set to a KPI to a report now enables your users to see what we call the lineage diagram, which visualizes where the data in your dashboards actually came from, what the data means, and who's responsible for it (a small sketch of this kind of linking follows this segment). Speaking of which, here's five: leverage the Collibra Trusted Business Reporting solution on the marketplace, which comes with workflows for those owners to certify their reports, KPIs and data sets. >>This helps reinforce trust in their data. Six, easy-to-navigate dashboards or landing pages right in your platform for your company's business processes are the most effective way for everyone to better understand and take action on data. Here's a pro tip: use the Dashboard Design Kit on the marketplace to help you build compelling dashboards. Finally, seven: promote the value of this to your users, and be sure to schedule enablement office hours and new employee onboarding sessions to get folks excited about what you've built and implemented. Better yet, invite all of those community and data owners to these sessions so that they can show off the value they've created. Those are my seven tips to get going with Collibra. I hope these have been useful. For more information, be sure to visit collibra.com.
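To make step four concrete, here is a minimal sketch of the kind of asset linking Jay describes: a tiny, hypothetical metadata graph (not Collibra's API) that relates a data set to a KPI to a report and then walks the lineage for a dashboard asset. The asset names and graph structure are illustrative assumptions only.

```python
# Hypothetical sketch, not Collibra's API: a tiny metadata graph linking
# a data set -> KPI -> report, then walking lineage for a dashboard asset.
from collections import defaultdict

class MetadataGraph:
    def __init__(self):
        self.assets = {}                   # asset id -> asset type
        self.upstream = defaultdict(list)  # asset id -> ids it derives from

    def add_asset(self, asset_id, asset_type):
        self.assets[asset_id] = asset_type

    def link(self, downstream_id, upstream_id):
        """Record that `downstream_id` is derived from `upstream_id`."""
        self.upstream[downstream_id].append(upstream_id)

    def lineage(self, asset_id):
        """Walk upstream links to answer 'where did this data come from?'."""
        path, stack = [], [asset_id]
        while stack:
            current = stack.pop()
            path.append((current, self.assets.get(current, "unknown")))
            stack.extend(self.upstream.get(current, []))
        return path

g = MetadataGraph()
g.add_asset("sales_orders", "data set")
g.add_asset("quarterly_revenue", "KPI")
g.add_asset("executive_dashboard", "report")
g.link("quarterly_revenue", "sales_orders")
g.link("executive_dashboard", "quarterly_revenue")

for asset, kind in g.lineage("executive_dashboard"):
    print(f"{kind}: {asset}")
```

Walking the graph from the report back to the source data set is, in miniature, what a lineage diagram renders visually.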
>>Welcome to theCUBE's coverage of Data Citizens 2022, Collibra's customer event. My name is Dave Vellante. With us is Kirk Haslbeck, who's the vice president of Data Quality at Collibra. Kirk, good to see you. Welcome. >>Thanks for having me, Dave. Excited to be here. >>You bet. Okay, we're gonna discuss data quality and observability. It's a hot trend right now. You founded a data quality company, OwlDQ, and it was acquired by Collibra last year. Congratulations. And now you lead data quality at Collibra. So we're hearing a lot about data quality right now. Why is it such a priority? Take us through your thoughts on that. >>Yeah, absolutely. It's definitely exciting times for data quality, which, you're right, has been around for a long time. So why now, and why is it so much more exciting than it used to be? The standard answer is a bit stale: we all know that companies use more data than ever before, the variety has changed and the volume has grown. And while I think that remains true, there are a couple of other hidden factors at play as to why this is becoming so important now. You could break it down simply and think about it like this, Dave: if you and I were gonna build a new healthcare application and monitor the heartbeat of individuals, imagine if we get that wrong — what the ramifications could be, what those incidents would look like. Or, maybe better yet, we try to build a new trading algorithm with a crossover strategy, where the 50-day average crosses the 10-day average (there's a short sketch of that idea after this answer). >>And imagine if the data underlying the inputs to that is incorrect. We would probably have major financial ramifications. So it kind of starts there: everybody's realizing that we're all data companies, and if we are using bad data, we're likely making incorrect business decisions. But I think there are a couple of other things at play. I bought a car not too long ago, and my dad called and said, "How many cylinders does it have?" And I realized in that moment I might have failed him, because I didn't know. I used to ask those types of questions about anti-lock brakes and cylinders and whether it's manual or automatic, and I realized I now just buy a car that I hope works. It's so complicated with all the computer chips, I really don't know that much about it. >>And that's what's happening with data. We're just loading so much of it, and it's so complex, that the way companies consume it in the IT function is that they bring in a lot of data and then syndicate it out to the business. And it turns out that the individuals loading and consuming all of this data for the company may not actually know that much about the data itself — and that's not even their job anymore. So we'll talk more about that in a minute, but that's really what's setting the foreground for this observability play and why everybody's so interested: we're becoming less close to the intricacies of the data, and we just expect it to always be there and be correct.
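To make the crossover example concrete, here is a small, hedged sketch — synthetic prices and pandas, not anything Kirk or Collibra ships — of a 50-day/10-day moving-average crossover, and how a single corrupted input value can change the signals it emits.

```python
# Hedged illustration, not a Collibra feature: a 50-day/10-day moving-average
# crossover signal, and how one corrupted price can distort it.
import pandas as pd
import numpy as np

rng = np.random.default_rng(7)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))  # synthetic price series

def crossover_signal(series):
    fast = series.rolling(10).mean()
    slow = series.rolling(50).mean()
    # +1 where the fast average crosses above the slow, -1 where it crosses below
    return (fast > slow).astype(int).diff().fillna(0)

clean_signal = crossover_signal(prices)

bad = prices.copy()
bad.iloc[200] = 0.0          # a single bad tick, e.g. a dropped decimal point
dirty_signal = crossover_signal(bad)

print("signals that changed:", int((clean_signal != dirty_signal).sum()))
```

The point of the sketch is simply that the algorithm itself can be perfectly correct while one undetected bad input changes the decisions it produces.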
>>You know, the other thing too about data quality: for years we did the MIT CDOIQ event — we didn't do it last year, COVID messed everything up — but the observation I would make is that data quality used to be information quality, used to be this back office function, and then it became sort of front office with financial services and government and healthcare, these highly regulated industries. And then the whole chief data officer thing happened, and people sort of flipped the bit from data as a risk to data as an asset. And now, as we say, we're gonna talk about observability. So the whole quality issue has really become front and center, because data's so fundamental, hasn't it? >>Yeah, absolutely. I mean, let's imagine we pull up our phones right now and I go to my favorite stock ticker app and check out the NASDAQ market cap. I really have no idea if that's the correct number. I know it's a number, it looks large, it's in a numeric field. And that's kind of what's going on. There are so many numbers, and they're coming from all of these different sources and data providers, and they're getting consumed and passed along. But there isn't really a way to tactically put controls on every number and metric across every field we plan to monitor. With the scale that we've achieved — even in the early days, before Collibra — what's been so exciting is we have these types of observation techniques, these data monitors, that can actually track the past performance of every field at scale. And why that's so interesting, and why I think the CDO is listening intently to this topic nowadays, is that maybe we can surface all of these problems with the right data observability solution at the right scale, and then just be alerted on breaking trends. So we're shifting away from this world where you must write a condition, and when that condition breaks, that was always known as a break record. But what about breaking trends and root cause analysis? And is it possible to do that with less human intervention? I think most people are seeing now that it's going to have to be a software tool and a computer system. It's not ever going to be based on one or two domain experts anymore. >>So how does data observability relate to data quality? Are they two sides of the same coin? Are they cousins? What's your perspective on that? >>Yeah, it's super interesting. It's an emerging market, so the language is changing and the lingo is constantly moving — it's a moving target. The way I like to break it down is really breaking records versus breaking trends. I could write a condition: when this thing happens, it's wrong, and when it doesn't, it's correct. Or I could look for a trend, and I'll give you a good example. Everybody's talking about fresh data and stale data, and why would that matter? Well, if your data never arrived, or only part of it arrived, or it didn't arrive on time, it's likely stale, and there is no condition you could write that would show you all the goods and the bads. That was your traditional data quality approach of break records. The modern-day approach is: you lost a significant portion of your data, or it did not arrive on time to make that decision accurately. And that's a hidden concern. Some people call this freshness; we call it stale data. But it all points to the same idea: the thing you're observing may not be a data quality condition anymore. It may be a breakdown in the data pipeline. And with thousands of data pipelines in play for every company out there, there's more than a couple of these happening every day.
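Here is a minimal sketch of the "breaking trends" idea Kirk describes, under stated assumptions: instead of a hand-written break-record rule, the normal daily row count of a feed is learned from recent history, and a day that falls well outside that band is flagged as a likely stale or partial load. The numbers and threshold are illustrative, not Collibra's implementation.

```python
# A minimal, hypothetical sketch of "breaking trends" vs. "break records":
# learn the normal daily row count for a feed and flag days that fall far
# below the learned band (late or partial loads), with no hand-written rule.
import statistics

def breaking_trend(history, today, sigmas=3.0):
    """history: recent daily row counts for a pipeline; today: latest count."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid div-by-zero on flat feeds
    z = (today - mean) / stdev
    return z < -sigmas, z          # True -> the feed looks stale or partial

row_counts = [10_120, 9_980, 10_250, 10_040, 10_110, 9_950, 10_180]  # last week
for todays_count in (10_090, 4_300):          # a normal day, then a partial load
    alert, z = breaking_trend(row_counts, todays_count)
    print(f"count={todays_count:>6}  z={z:+.1f}  alert={alert}")
```

A production monitor would track many fields and metrics per data set the same way; the mechanism is the same, just at scale.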
>>So what's the Collibra angle on all this? You made the acquisition, you've got data quality and observability coming together, and you guys have a lot of expertise in this area. But you hear about provenance of data, you just talked about stale data, the whole trend toward real time. How is Collibra approaching the problem, and what's unique about your approach? >>Well, I think where we're fortunate is, with our background, myself and the team sort of lived this problem for a long time, in the Wall Street days about a decade ago. And we saw it from many different angles. And what we came up with, before it was called data observability or reliability, was basically the underpinnings of that. So we're a little bit ahead of the curve there: when most people evaluate our solution, it's more advanced than some of the observation techniques that currently exist. But we've also always covered data quality, and we believe that people want to know more, they need more insights, and they want to see break records and breaking trends together so they can correlate the root cause. And we hear that all the time: I have so many things going wrong, just show me the big picture, help me find the thing that, if I were to fix it today, would make the most impact. So we're really focused on root cause analysis, business impact, and connecting it with lineage and catalog metadata. And as that grows, you can actually achieve total data governance. At this point, with the acquisition of what was a lineage company years ago, and then my company OwlDQ, now Collibra Data Quality, Collibra may be the best positioned for total data governance and intelligence in the space. >>Well, you mentioned financial services a couple of times, and some examples — remember the flash crash in 2010? Nobody had any idea what that was; they just said, "Oh, it's a glitch," so they didn't understand the root cause of it. So this is a really interesting topic to me. So we know at Data Citizens 22 that you're announcing — you've gotta announce new products, right, at your yearly event. What's new? Give us a sense as to what products are coming out, but specifically around data quality and observability. >>Absolutely. There's always a next thing on the forefront, and the one right now is these hyperscalers in the cloud. So you have databases like Snowflake and BigQuery, and Databricks with Delta Lake and SQL pushdown. And ultimately what that means is a lot of people are storing and loading data even faster in a SaaS-like model. And we've started to hook into these databases. And while we've always worked with the same databases in the past — they're supported today — we're now doing something called native database pushdown, where the entire compute and data activity happens in the database. And why that is so interesting and powerful now is that everyone's concerned with something called egress: did my data, which I've spent all this time and money with my security team securing, ever leave my hands? Did it ever leave my secure VPC, as they call it? >>And with these native integrations that we're building and about to unveil — here's kind of a sneak peek for next week at Data Citizens — we're now doing all compute and data operations in databases like Snowflake. And what that means is, with no install and no configuration, you can log into the Collibra Data Quality app and have all of your data quality running inside the database that you've probably already picked as your go-forward, secured database of choice. So we're really excited about that. And I think if you look at the whole landscape of network cost, egress cost, data storage and compute, what people are realizing is that it's extremely efficient to do it in the way that we're about to release here next week.
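As a rough illustration of the pushdown idea — not Collibra's actual product code — the sketch below expresses a few quality checks as SQL and lets the warehouse run them, so only small aggregate results ever leave the platform. The connection parameters, database and table names are placeholders.

```python
# A rough, hypothetical sketch of pushdown (not Collibra's implementation):
# the quality checks are expressed as SQL and computed inside the warehouse,
# so no rows leave the platform -- only small aggregates come back.
import snowflake.connector  # pip install snowflake-connector-python

CHECKS_SQL = """
SELECT
    COUNT(*)                                        AS row_count,
    SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END)  AS null_emails,
    MAX(load_ts)                                    AS latest_load
FROM analytics.public.customers
"""

conn = snowflake.connector.connect(
    account="my_account", user="dq_service", password="***", warehouse="DQ_WH"
)
try:
    row_count, null_emails, latest_load = conn.cursor().execute(CHECKS_SQL).fetchone()
    null_rate = null_emails / max(row_count, 1)
    print(f"rows={row_count}  null_email_rate={null_rate:.2%}  latest_load={latest_load}")
finally:
    conn.close()
```

The design point is simply that the compute (and therefore the data) stays where it already lives; the monitoring tool only consumes the summarized results.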
>>So this is interesting, because what you just described — you mentioned Snowflake, you mentioned Google, and actually you mentioned Databricks. You know, Snowflake has the data cloud. If you put everything in the data cloud, okay, you're cool. But then Google's got the open data cloud, if you heard Google Next, and now Databricks doesn't call it the data cloud, but they have, like, the open source data cloud. So you have all these different approaches, and there's really no way, up until now I'm hearing, to really understand the relationships between all those and have confidence across them. It's like Zhamak Dehghani says: you should just be a node on the mesh. And I don't care if it's a data warehouse or a data lake or where it comes from, but it's a point on that mesh, and I need tooling to be able to have confidence that my data is governed and has the proper lineage and provenance. And that's what you're bringing to the table. Is that right? Did I get that right? >>Yeah, that's right. And for us, it's not that we haven't been working with those great cloud databases; it's the fact that we can send them the instructions now. We can send them the operating ability to crunch all of the calculations, the governance, the quality, and get the answers. And what that's doing is basically zero network cost, zero egress cost, zero latency of time. And so when you log into BigQuery tomorrow using our tool, or say Snowflake, for example, you have instant data quality metrics, instant profiling, instant lineage, and access and privacy controls — things of that nature that just become less onerous. What we're seeing is there's so much technology out there, just like all of the major brands that you mentioned, but how do we make it easier? The future is about fewer clicks, faster time to value, faster scale, and eventually lower cost. And we think that this positions us to be the leader there. >>I love this example, because Barry talks about how the cloud guys are gonna own the world, and of course now we're seeing that the ecosystem is finding so much white space to add value and connect across clouds. Sometimes we call it supercloud, or interclouding. All right, Kirk, give us your final thoughts on the trends that we've talked about and Data Citizens 22. >>Absolutely. Well, I think one big trend is discovery and classification. We're seeing that across the board. It used to be enough to know that a field was a zip code; nowadays, with the amount of data that's out there, people wanna know where everything is, where their sensitive data is, whether it's redundant — tell me everything, inside of three to five seconds. And with that, they want to know, across all of these hyperscale databases, how fast they can get controls and insights out of their tools. So I think we're gonna see more one-click solutions, more SaaS-based solutions, and solutions that hopefully prove faster time to value on all of these modern cloud platforms. >>Excellent. All right, Kirk Haslbeck, thanks so much for coming on theCUBE and previewing Data Citizens 22. Appreciate it. >>Thanks for having me, Dave. >>You're welcome. Right, and thank you for watching. Keep it right there for more coverage from theCUBE. Welcome to theCUBE's virtual coverage of Data Citizens 2022. My name is Dave Vellante, and I'm here with Laura Sellers, who's the Chief Product Officer at Collibra, the host of Data Citizens. Laura, welcome. Good to see you. >>Thank you. Nice to be here. >>Yeah, your keynote at Data Citizens this year focused on your mission to drive ease of use and scale.
Now, when I think about it, historically, fast access to the right data at the right time, in a form that's really easily consumable, has been kind of challenging, especially for business users. Can you explain to our audience why this matters so much, and what's actually different today in the data ecosystem to make this a reality? >>Yeah, definitely. So I think what we really need, and what I hear from customers every single day, is that we need a new approach to data management. What inspired me to come to Collibra a little over a year ago was really the fact that they're very focused on bringing trusted data to more users, across more sources, for more use cases. And so as we look at what we're announcing with these innovations around ease of use and scale, it's really about making teams more productive in getting started with, and being able to manage, data across the entire organization. So we've been very focused on richer experiences, a broader ecosystem of partners, as well as a platform that delivers the performance, scale and security that our users and teams need and demand. So as we look at — oh, go ahead. >>I was gonna say, when I look back at the last 10 years, it was all about getting the technology to work, and it was just so complicated. But please carry on; I'd love to hear more about this. >>Yeah. You know, Collibra is a system of engagement for data, and we really are working on bringing that entire system of engagement to life for everyone to leverage, here and now. So what we're announcing on the ease-of-use side of the world is, first, our data marketplace. This is the ability for all users to discover and access data quickly and easily — shop for it, if you will. The next thing we're also introducing is the new homepage. It's really about the ability to drive adoption and have users find data more quickly. And then the two other areas on the ease-of-use side are our world of usage analytics — one of the big pushes and passions we have at Collibra is to help with this data-driven culture that all companies are trying to create, and also to help with data literacy; with something like usage analytics, it's really about driving adoption of the Collibra platform, understanding what's working, who's accessing it, and what's not — and then, finally, we're also introducing what's called workflow designer. We love our workflows at Collibra; it's a big differentiator to be able to automate business processes. The designer is really about a way for more people to be able to create those workflows, collaborate on those workflows, as well as for people to be able to easily interact with them. So a lot of exciting things when it comes to ease of use, to make it easier for all users to find data. >>Yes, there's definitely a lot to unpack there. You mentioned this idea of shopping for the data. That's interesting to me. Why this metaphor — or analogy, I always get those confused; let's go with analogy — and why is it so important to data consumers? >>I think when you look at the world of data, and I talked about this system of engagement, it's really about making it more accessible to the masses. And what users are used to is a shopping experience like your Amazon, if you will.
And so having a consumer-grade experience, where users can quickly go in and find the data, trust that data, understand where the data's coming from, and then be able to quickly access it, is the idea of being able to shop for it — just making it as simple as possible and really speeding the time to value for any of the business analysts and data analysts out there. >>Yeah, I think you see a lot of discussion about rethinking data architectures, putting data in the hands of the users and business people, decentralized data, and of course that's awesome. I love that. But of course, then you have to have self-service infrastructure and you have to have governance, and those are really challenging. And I think so many organizations are facing adoption challenges when it comes to enabling teams generally, especially domain experts, to adopt new data technologies. The tech comes fast and furious, you've got all these open source projects, and it gets really confusing. Of course that risks security, governance and all that good stuff, and you've got all this jargon. So where do you see the friction in adopting new data technologies? What's your point of view, and how can organizations overcome these challenges? >>You're dead on. There's so much technology, and there's so much to stay on top of, which is part of the friction, right? It's just being able to stay ahead of, and understand, all the technologies that are coming. You also see that there are so many more sources of data, and people are migrating data to the cloud and migrating to new sources. Where the friction comes in is really the ability to understand where the data came from and where it's moving to, and then also to be able to put the access controls on top of it, so people are only getting access to the data that they should be getting access to. So one of the other things we're announcing, with all of the innovations that are coming, is what we're doing around performance and scale. With all of the data movement, with all of the data that's out there, the first thing we're launching in the world of performance and scale is our world of data quality. >>It's something that Collibra has been working on for the past year and a half: we're launching the ability to have data quality in the cloud. It's currently an on-premise offering, but we'll now be able to carry that over into the cloud for us to manage that way. We're also introducing the ability to push down data quality into Snowflake. So this is, again, one of those challenges: making sure that the data you have is high quality as you move forward. And really, we're just reducing friction. You already have Snowflake stood up; it's not another machine for you to manage, it's just pushdown capabilities into Snowflake to be able to track that quality. Another thing that we're launching with that is what we call Collibra Protect. And this is the ability for users to ingest metadata, understand where the PII data is, and then set policies up on top of it — so very quickly be able to set policies and have them enforced at the data level. So anybody in the organization is only getting access to the data they should have access to.
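To illustrate what "policies enforced at the data level" can look like in practice, here is a hedged sketch using Snowflake's own column masking mechanism. It shows the general pattern of column-level enforcement in the warehouse, not what Collibra Protect actually generates; the role, table, column and policy names are assumptions.

```python
# A hedged sketch of policy enforcement "at the data level": a column-level
# masking rule applied in the warehouse itself, so consumers without a PII role
# never see raw values. Illustrates the generic Snowflake mechanism only;
# names, roles and connection details are placeholders.
import snowflake.connector

POLICY_STATEMENTS = [
    # Only members of the PII_ANALYST role see real email addresses.
    """
    CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING) RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val ELSE '*** MASKED ***' END
    """,
    # Attach the policy to the sensitive column.
    "ALTER TABLE analytics.public.customers MODIFY COLUMN email SET MASKING POLICY email_mask",
]

conn = snowflake.connector.connect(
    account="my_account", user="governance_svc", password="***", role="POLICY_ADMIN"
)
try:
    cur = conn.cursor()
    for stmt in POLICY_STATEMENTS:
        cur.execute(stmt)   # later queries on customers.email are masked by role
finally:
    conn.close()
```

The appeal of driving this from a governance catalog is that the classification ("this column is PII") and the enforcement (the masking rule) stay connected, rather than living in two separate tools.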
>>This topic of data quality is interesting. It's something that I've followed for a number of years. It used to be a back office function, really confined to highly regulated industries like financial services, healthcare and government. You look back over a decade ago, you didn't have this worry about personal information; GDPR and the California Consumer Privacy Act have since become so much more important. The cloud has really changed things in terms of performance and scale, and of course partnering with Snowflake — it's all about sharing data and monetization, anything but a back office function. So it was kind of smart that you guys were early on there, and of course attracting them as an investor as well was very strong validation. What can you tell us about the nature of the relationship with Snowflake? I'm specifically interested in joint engineering and product innovation efforts, beyond the standard go-to-market stuff. >>Definitely. So you mentioned they became a strategic investor in Collibra about a year ago — a little less than that, I guess. We've been working with them, though, for over a year, really tightly with their product and engineering teams to make sure that Collibra is adding real value. All pieces of our unified platform are touching Snowflake. And when I say that, what I mean is: first, we're able to ingest data with Snowflake, which has always existed; we're able to profile and classify that data; and we're announcing with Collibra Protect this week that you're now able to create those policies on top of Snowflake and have them enforced. So again, people can get more value out of their Snowflake more quickly, as far as time to value, with our policies for all business users to be able to create. >>We're also announcing Snowflake Lineage 2.0. This is the ability to take stored procedures in Snowflake and understand the lineage of where the data came from and how it was transformed within Snowflake, as well as the data quality pushdown. As I mentioned, data quality — you brought it up — is a big industry push, and one of the things I think Gartner mentioned is that companies are losing up to $15 million by not having great data quality. So this pushdown capability for Snowflake really is, again, a big ease-of-use push for us at Collibra: the ability to push it into Snowflake, take advantage of the data source and the engine that already lives there, and make sure you have the right quality. >>I mean, the nice thing about Snowflake is, if you play in the Snowflake sandbox, you can get a high degree of confidence that the data sharing can be done in a safe way. Bringing Collibra into the story allows me to have that data quality and that governance that I need. You know, we've said many times on theCUBE that one of the notable differences in cloud this decade versus last decade — there are obvious differences just in terms of scale and scope, but it's shaping up to be about the strength of the ecosystems. That's really a hallmark of these big cloud players. It's a key factor for innovating, accelerating product delivery, filling gaps in the hyperscale offerings, 'cause you've got more mature stack capabilities, and it creates this flywheel momentum, as we often say. So my question is, how do you work with the hyperscalers — whether it's AWS or Google, whomever — and what do you see as your role, and what's the Collibra sweet spot? >>Yeah, definitely.
So, you know, one of the things I mentioned early on is that the broader ecosystem of partners is what it's all about. And so we have that strong partnership with Snowflake. We're also doing more with Google around GCP and Collibra Protect there, but also a tighter Dataplex integration. So, similar to what you've seen with our strategic moves around Snowflake, and really covering the broad ecosystem of what Collibra can do on top of that data source, we're extending that to the world of Google as well, and the world of Dataplex. We also have great partners in the SIs. Infosys is somebody we spoke with at the conference who's done a lot of great work with Levi's; they're really important for helping people with their whole data strategy and driving that data-driven culture, with Collibra being the core of it. >>Laura, we're gonna end it there, but I wonder if you could kind of put a bow on this year and the event — your perspectives. So just give us your closing thoughts. >>Yeah, definitely. I wanna say this is one of the biggest releases Collibra's ever had — definitely the biggest one since I've been with the company, a little over a year. We have all these great new product innovations coming to really drive the ease of use, to make data more valuable for users everywhere and companies everywhere. And so it's all about everybody being able to easily find, understand, trust and get access to that data going forward. >>Well, congratulations on all the progress. It was great to have you on theCUBE — first time, I believe — and I really appreciate you taking the time with us. >>Yes, thank you for your time. >>You're very welcome. Okay, you're watching the coverage of Data Citizens 2022 on theCUBE, your leader in enterprise and emerging tech coverage. >>So data modernization oftentimes means moving some of your storage and compute to the cloud, where you get the benefit of scale and security and so on. But ultimately it doesn't take away the silos that you have. We have more locations, more tools and more processes with which we try to get value from this data. To do that at scale in an organization, the people involved in this process have to understand each other. So you need to unite those people across those tools, processes and systems with a shared language. When I say customer, do you understand the same thing as when you hear customer? Are we counting them in the same way? That shared language unites us, and it gives the organization as a whole the opportunity to get the maximum value out of their data assets. Then they can democratize data, so everyone can properly use that shared language to find, understand and trust the data assets that are available. >>And that's where Collibra comes in. We provide a centralized system of engagement that works across all of those locations and combines all of those different user types across the whole business. At Collibra, we say we're United by Data, and that also means that we're united by data with our customers. So here is some data about some of our customers. There was the case of an online do-it-yourself platform that grew their revenue almost three times from a marketing campaign that put the right product in the hands of the right people.
Another case that comes to mind is a financial services organization that saved over 800K every year because they were able to reuse the same data in different kinds of reports. Before, it was spread out over different tools, processes and silos; the platform brought them together, so they realized, oh, we're actually using the same data — let's find a way to make this more efficient. And the last example that comes to mind is that of a large home mortgage loan provider with a very complex landscape — a very complex architecture, legacy and cloud, et cetera — and they're using our platform to unite all the people and those processes and tools to get a common view of data and to manage their compliance at scale. >>Hey everyone, I'm Lisa Martin covering Data Citizens 22, brought to you by Collibra. This next conversation is gonna focus on the importance of data culture. One of our CUBE alumni is back: Stan Christiaens is Collibra's co-founder and its Chief Data Citizen. Stan, it's great to have you back on theCUBE. >>Hey Lisa, nice to be back. >>So we're gonna be talking about the importance of data culture, data intelligence, maturity, all those great things. When we think about the data revolution that every business is going through, it's so much more than technology innovation; it also really requires cultural transformation, community transformation, and those are challenging for customers to undertake. Talk to us about what you mean by data citizenship and the role that creating a data culture plays in that journey. >>Right. So, as you know, our event is called Data Citizens because we believe that, in the end, a data citizen is anyone who uses data to do their job. And we believe that in today's organizations you have a lot of people — most of the employees in an organization — who are somehow going to be a data citizen, right? So you need to make sure that these people are aware of it, and you need people to have the skills and competencies to do with data what's necessary, and that's at all levels, right? So what does it mean to have a good data culture? It means that if you're building a beautiful dashboard to try and convince your boss that we need to make this decision, your boss is also open to that and able to interpret the data presented in the dashboard to actually make that decision and take that action, right? >>And once you have that throughout the organization, that's when you have a good data culture. Now, that's a continuous effort for most organizations, because they're always moving; somehow they're hiring new people. And it has to be a continuous effort because we've seen that, on the one hand, organizations keep adding data sources and all that data keeps flowing, which in itself creates a lot of risk. But on the other hand of the equation, you have the benefits. You might look at regulatory drivers: we have to do this, right? But it's much better right now to consider the competitive drivers, for example. And we did an IDC study earlier this year — quite interesting; I can recommend it to anyone — and one of the conclusions they found, as they surveyed over a thousand people across organizations worldwide, is that the ones who are higher in maturity —
the organizations that really look at data as an asset, look at data as a product, and actively try to be better at it — have three times as good a business outcome as the ones who are lower on the maturity scale, right? So you can say, okay, I'm doing this data culture thing for everyone, waking them up as data citizens, and I'm doing this for competitive reasons and for regulatory reasons; you're trying to bring both of those together, and the ones that get data intelligence right are successful and competitive. That's what we're seeing out there in the market. >>Absolutely. We know that, just generally, Stan, right, the organizations that are really creating a data culture and enabling everybody within the organization to become data citizens — we know that in theory they're more competitive, they're more successful. But the IDC study that you just mentioned demonstrates they're three times more successful and competitive than their peers. Talk about how Collibra advises customers to create that community, that culture of data, when it might be challenging for an organization to adapt culturally. >>Of course it's difficult for an organization to adapt, but it's also necessary, as you just said. Imagine that you're a modern-day organization — laptops, what have you — and you're not using those, right? Or you're delivering them throughout the organization but not enabling your colleagues to actually do something with that asset. The same thing is true with data today, right? If you're not properly using the data asset and competitors are, they're going to get more of an advantage. So as to how you get this done, how you establish this: there are a couple of angles to look at, Lisa. One angle is obviously the leadership, whereby whoever is the boss of data in the organization — you typically have multiple bosses there, like chief data officers; sometimes there are multiple, and they may have different titles — so I'm just gonna summarize it as a data leader for a second. >>So whoever that is, they need to make sure that there's a clear vision, a clear strategy for data. And that strategy needs to include the monetization aspect: how are you going to get value from data? Now, that's one part, because then you have leadership in the organization around the business value. And that's important, because those people — their job, in essence, really is to make everyone in the organization think about data as an asset. And I think that's the second part of the equation of getting it right: it's not enough to just have that leadership out there; you also have to get the hearts and minds of the data champions across the organization. You really have to win them over. And if you have those two combined, and obviously a good technology to connect those people and have them execute on their responsibilities — such as a data intelligence platform like Collibra's — then you're in place to really start upgrading that culture, inch by inch, if you will. >>Yes, I like that: the recipe for success. So you are the co-founder of Collibra. You've worn many different hats along this journey. Now you're building Collibra's own data office. I like how, before we went live, we were talking about Collibra drinking its own champagne — I always love to hear stories about that. You're speaking at Data Citizens 2022.
Talk to us about how you are building a data culture within Collibra, and what maybe some of the specific projects are that Collibra's data office is working on. >>Yes, and it is indeed Data Citizens. There are a ton of speakers here; I'm very excited. You know, we have Barb from MIT speaking about data monetization; we have Dilla, added at the last minute. So a really exciting agenda — can't wait to get back out there, essentially. So, over the years — we've been doing this since 2008, so a good number of years, and I think we have another decade of work ahead in the market, just to be very clear; data is here to stick around, as are we. And myself, you know, when you start a company — we were four people, if you will — everybody's wearing all sorts of hats at the time. But over the years I've run presales, sales, partnerships, product, et cetera. And as our company got a little bit bigger — we're now a thousand-plus people in the company — >>I believe systems and processes become a lot more important. So we said, Collibra is now about the size of our customers — we're getting there in terms of organization structure, processes, systems, et cetera — so it's really time for us to put our money where our mouth is and set up our own data office, which is what we see at customer organizations worldwide. Organizations have HR units, they have a finance unit, and over time they'll all have a department, if you will, that is responsible somehow for the data. So we said, okay, let's try to set an example that other people can take away from. So we set up a data strategy, we started building data products, took care of the data infrastructure — all that sort of good stuff. And in doing all of that, Lisa, exactly as you said, we said, okay, we need to also use our own product and our own practices, and from that use, learn how we can make the product better, learn how we can make the practice better, and share that learning with the market. On Monday mornings we sometimes refer to it as eating our own dog food; on Friday evenings we refer to it as drinking our own champagne. I like it. So we had a driver to do this — there's a clear business reason — so we included that in the data strategy, and that's a little bit of our origin. Now, how do we organize this? We have three pillars, and by no means is this a template that everyone should follow; this is just the organization that works at our company, but it can serve as inspiration. So we have a pillar which is data science: the data product builders, if you will, or the people who help the business build data products. We have the data engineers, who help keep the lights on for that data platform, to make sure that the data products can run, the data can flow, and the quality can be checked. >>And then we have the data intelligence or data governance builders, where we have those data governance and data intelligence stakeholders who help the business as a sort of data partner to the business stakeholders. So that's how we've organized it. And then we started following the Collibra approach, which is: what are the challenges that our business stakeholders have in HR, finance, sales, marketing, all over? And how can data help overcome those challenges? And from those use cases, we then just started to build a roadmap and started executing on the use cases. And the important ones are very simple — we see them with our customers as well — people talk about the catalog, right?
The catalog for the data scientists to know what's in their data lake, for example, and for the people in privacy, so they have their process registry and they can see how the data flows.

>>So that's a starting place, and that turns into a marketplace, so that if new analysts and data citizens join Collibra, they immediately have a place to go to, to look at and see, okay, what data is out there for me as an analyst or a data scientist or whatever to do my job, right? So they can immediately get access to data. And another one that we see is around trusted business reporting. We're seeing that since, you know, self-service BI allowed everyone to make beautiful dashboards, you know, pie charts (my pet peeve is the pie chart, because I love pie and you shouldn't always be using pie charts), there's been a proliferation of those reports. And now executives don't really know, okay, should I trust this report or that report? They're reporting on the same thing, but the numbers seem different, right? So that's why we have trusted business reporting: so that when a dashboard, a data product essentially, is built, we know that all the right steps are being followed and that whoever is consuming it can be quite confident in the result. Right? >>Absolutely. >>Exactly. Yes. >>Absolutely. Talk a little bit about some of the key performance indicators that you're using to measure the success of the data office. What are some of those KPIs?

>>KPIs and measuring is a big topic in the chief data officer profession, I would say, and again, it always varies with your organization, but there are a few that we use that might be of interest. We use those pillars, right? And we have metrics across those pillars. So, for example, a pillar on the data engineering side is gonna be more related to uptime, right? Is the data platform up and running? Are the data products up and running? Is the quality in them good enough? Is it going up? Is it going down? What's the usage? But also, and especially if you're in the cloud and if consumption's a big thing, you have metrics around cost, for example, right? So that's one set of examples. Another one is around the data science and data products: are people using them? Are they getting value from them?

>>Can we calculate that value in a monetary perspective, right? Yeah. So that we can continue to say to the rest of the business: we're tracking all those numbers, and those numbers indicate that value is generated, and roughly how much value, estimated, in that region. And then you have some data intelligence, data governance metrics, which is, for example, you have a number of domains in a data mesh. People talk about being the owner of a data domain, for example, like product or customer. So how many of those domains do you have covered? How many of them are already part of the program? How many of them have owners assigned? How well are these owners organized, executing on their responsibilities? How many tickets are open or closed? How many data products are built according to process? And so on and so forth. So these are a set of examples of KPIs. There are a lot more, but hopefully those can already inspire the audience. >>Absolutely. So we've talked about the rise of chief data offices, and it's only accelerating. You mentioned this is like a 10 year journey. So if you were to look into a crystal ball, what do you see in terms of the maturation of data offices over the next decade?
>>So we've seen indeed the role sort of grow up. I think in 2010 there may have been like 10 chief data officers or something; Gartner has exact numbers on them. But then they grew, you know, across industries, and the number is estimated to be about 20,000 right now. Wow. And they evolved in a sort of stack of competencies: defensive data strategy, because the first chief data officers were more regulatory driven; offensive data strategy; support for the digital program; and now it's all about data products, right? So as a data leader, you now need all of those competencies and need to include them in your strategy.

>>How is that going to evolve for the next couple of years? I wish I had one of those balls, right? But essentially I think for the next couple of years there's gonna be a lot of people, you know, still moving along those four levels of the stack. A lot of people I see are still in version one and version two of the chief data officer role. So you'll see over the years that's gonna evolve toward more digital and more data products. So for the next years, my prediction is it's all about data products, because that's an immediate link between data and the business, essentially, right? Right. So that's gonna be important, and quite likely some new things will be added on which nobody can predict yet, but we'll see those pop up in a few years. I think there's gonna be a continued challenge for the chief data officer role to become a real executive role, as opposed to, you know, somebody who claims that they're executive but then they're not, right?

>>So the real reporting level, into the board, into the CEO for example, will continue to be a challenging point. But the ones who do get that done will be the ones that are successful, and the ones who get that done will be the ones that do it on the basis of data monetization, right? Connecting value to the data and making that value clear to all the data citizens in the organization, right? And in that sense, they'll need to have both, you know, technical audiences and non-technical audiences aligned, of course. And they'll need to focus on adoption. Again, it's not enough to just have your data office be involved in this; it's really important that you're waking up data citizens across the organization and you make everyone in the organization think about data as an asset. >>Absolutely. Because there's so much value that can be extracted when organizations really strategically build that data office and democratize access across all those data citizens. Stan, this is an exciting arena. We're definitely gonna keep our eyes on this. Sounds like a lot of evolution and maturation coming, from the data office perspective and from the data citizen perspective. And as the data show that you mentioned in that IDC study, and you mentioned Gartner as well, organizations have so much more likelihood of being successful and being competitive. So we're gonna watch this space. Stan, thank you so much for joining me on theCUBE at Data Citizens '22. We appreciate it. >>Thanks for having me over. >>From Data Citizens '22, I'm Lisa Martin, you're watching theCUBE, the leader in live tech coverage. >>Okay, this concludes our coverage of Data Citizens 2022, brought to you by Collibra. Remember, all these videos are available on demand at thecube.net. And don't forget to check out siliconangle.com for all the news and wikibon.com for our weekly Breaking Analysis series, where we cover many data topics and share survey research from our partner ETR, Enterprise Technology Research.
If you want more information on the products announced at Data Citizens, go to collibra.com. There are tons of resources there. You'll find analyst reports, product demos. It's really worthwhile to check those out. Thanks for watching our program and digging into Data Citizens 2022 on the Cube, your leader in enterprise and emerging tech coverage. We'll see you soon.
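As a rough editorial illustration of the data office KPIs Stan describes above (domain ownership coverage, open versus closed tickets), here is a minimal Python sketch. The field names and sample values are assumptions made for illustration only; they are not from Collibra's product or from the interview.

```python
# Minimal sketch of two data-office KPIs mentioned above:
# (1) share of data domains with an owner assigned, (2) open vs. closed tickets.
domains = [
    {"name": "customer", "owner": "jane.doe"},   # illustrative records
    {"name": "product", "owner": None},
    {"name": "finance", "owner": "li.wei"},
]
tickets = ["open", "closed", "closed", "open", "closed"]

def ownership_coverage(domains):
    """Percentage of domains that have an owner assigned."""
    owned = sum(1 for d in domains if d["owner"])
    return 100.0 * owned / len(domains)

def ticket_counts(tickets):
    """Simple open/closed ticket tally."""
    return {"open": tickets.count("open"), "closed": tickets.count("closed")}

if __name__ == "__main__":
    print(f"Domain ownership coverage: {ownership_coverage(domains):.0f}%")
    print(f"Tickets: {ticket_counts(tickets)}")
```

In practice these numbers would come from the catalog and ticketing systems rather than hard-coded lists; the point is only that each pillar's KPIs reduce to simple, trackable counts.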

Published Date : Nov 2 2022

Richard Hartmann, Grafana Labs | KubeCon + CloudNativeCon NA 2022


 

>>Good afternoon everyone, and welcome back to theCUBE. I am Savannah Peterson here, coming to you from Detroit, Michigan. We're at KubeCon, day three. Such a series of exciting interviews. We've done over 30, but this conversation is gonna be extra special, don't you think, John? >>Yeah, this is gonna be a good one. Grafana Labs is here with us. We're getting into the conversation of what's going on in the industry, managing and watching the Kubernetes clusters. There have been large scale conversations this week. It's gonna be a good one. >>Yeah. Yeah. I'm very excited. He's also got a fantastic Twitter handle, TwitchiH. Please welcome Richie Hartmann, who is the director of community here at Grafana. Richie, thank you so much for joining us. Thanks >>For having me. >>How's the show been for you? >>Busy. I, I mean, I, I, >>In >>A word. I have a ton of talks, the maintainer things, the governing board sessions, the TLC panel. I run a co-located day. So it's, it's been busy. Yeah. Monday, I didn't have to run anything. That was quite nice. But there >>You, you have your hands in a lot. I'm not even gonna cover it. Looking at your bio, there are so many different things that you're working on. I know that Grafana specifically had some announcements this week. Yeah, >>Yeah, yeah. We had quite a few. The two largest ones: one, we now have a full Kubernetes integration on Grafana Cloud. So our approach is generally extremely open source first. We try to push stuff into the exporters, like into the open source exporters, into mixins, into things which are out there as open source for anyone to use. But that's a little bit like a tool set, not a ready-made solution. So when we talk integrations, we actually talk about things where you get this one-click experience: you log into your Grafana Cloud, you click "I have a Kubernetes," which probably most of us have, and things just work. You just ingest the data; you don't have to write dashboards, you don't have to write alerts, you don't have to write everything just to get started. You get extremely opinionated dashboards, SLOs, alerts, again, all those things made by experts, so anyone can use them. And you don't have to reinvent the wheel for every single user. So that's the one. The other is, >>It's a big deal. >>Oh yeah, it is. Yeah. It is. We're investing heavily in integrations, of course. I mean, I don't have to convince anyone that Prometheus is a de facto standard in everything cloud native. But again, it's sometimes a little bit hard to handle, or a little bit not easy to get into. So smoothing this path of onboarding yourself onto this stack and onto those types of solutions, yes, is what a lot of people need. 'Course, if you look at the statistics from KubeCon, and we just heard this in the governing board session yesterday, like 60% of the people here are first time attendees. So there's a lot of people who just come into this thing and who need, like: this is your path, this is where you should be going, or at least, if you want to go there, this is how to get there. >>Here's your runway for takeoff. Yes. Yeah. I think that's a really good point. And I love that you had those numbers. I was curious. I had seen on Twitter, speaking of Twitter, I had seen that there were a lot of people here coming for the first time. You're a community guy. Are we at an inflection point where this community is about to continue to scale? >>That's a very good question.
Which I can't really answer. So I mean, >>Obviously I bet you're gonna try. >>Covid changed a few things. Yeah. Probably most people, >>A couple things. I mean, you know, casually, it's like such a gentle way of putting that, that was >>Beautiful. I'm gonna say yes, it's just gonna explode. All these newcomers are gonna learn Prometheus. They're gonna roll in with OpenMetrics, OpenTelemetry. I love it, >>You know, but at the same time, like, KubeCon is ramping back up. But if you look at the registration numbers between Valencia and Detroit, it was more or less the same. Interesting. So it didn't go back onto this growth trajectory which it was on, like, up to 2019. I expect this to pick up again. But also with the economic situation, everything, I, I don't think >>It's, I think the jury's still out on hybrid. I think there's a lot, lot more hybrid. Let's see how the projects are gonna go. That's what I think is gonna be the tell sign. How many people are participating? How are the projects advancing? Some of the momentum, >>I mean, from the project level, most of this is online anyway. Of course. That's how open source works, right? I've been working for >>Ages. That's >>'Cause you don't have any travel budget, or any office, or, It's >>Always been that way. >>Yeah, precisely. So the projects are arguably spearheading this development, and the online numbers, I have some numbers in my head, but I'm not a hundred percent certain, but they're higher for this time in Detroit than in Valencia, as far as I know. Cool. So that is growing, and it's grown in parallel, which also is great, 'cause it's much more accessible, much more inclusive. You don't have to have a budget of at least, let's say, I don't know, two to five k to fly over the pond and attend this thing. You can just do it from your home. So that is a lot more inclusive. And I expect this to basically be a second, more or less orthogonal, growth path. But the best thing about KubeCon is the hallway track: just meeting people, talking to people, and that kind of thing is not really possible with, >>It's, it's great to see people >>In person. No, and it makes such a difference. I mean, yeah. Even interviewing people in person too. I mean, and this whole, I mean, CNCF, this whole community, every company here is community first. It's how these projects come to be. I think it's awesome. I feel like you've got something you want to say, Johnny. >>Yeah. And I love some of the advancements. Rich, Richie, we talked last time about, you know, OpenTelemetry, OpenMetrics. You're involved in dashboards. Yeah. One of the themes here is ease of use, simplicity, developer productivity. Where do you see the ease of use going from a project standpoint? Prometheus, as you mentioned, is everywhere; it's pretty much in all corners of the world. Yep. And new people coming in. How, how are you making it easier? What's going on? Give us the update on that. >>So we also had, funnily enough, precisely this topic in the TC panel just a few hours ago: about ease of use and about how to make things easier to handle. Like, if developers just want to get into the cloud native scene, they have, like, we did some napkin math, maybe 10 tools at least which you have to be somewhat proficient in to just get started, which is honestly horrendous. Yeah. 'Course.
Like, with a server, I just had my server, I install my thing and it runs. Maybe I need a database, but that's roughly it. And this needs to change again. Like, it's nice that everything is unraveled and you don't have those service boundaries which you had before. You can do all the horizontal scaling, you can do all the automatic scaling, all those things, they're super nice. But at the same time, this complexity, which used to be nicely compartmentalized, was deliberately broken up. And so it's becoming a lot harder; like, we need to find new ways to compartmentalize this complexity back to human understandable levels again, in particular as we keep onboarding new and new and new people. Of course it's just not good use of anyone's time to just learn the basics again and again and again. This is something which should be just compartmentalized and automated away. >>We were talking to Matt Klein earlier and he was talking about, as projects become mature and all over the place and have reach and usage, you gotta work on the boring stuff. Yes. And when it's boring, that means you have success. Yes. But then you gotta work on the plumbing. What are some of the things that you guys are working on? Because people are relying on the product. >>Oh yeah. So, with my Prometheus head on, the highlight feature is exponential or native or sparse histograms; there are, like, three different names for one single concept. If you know Prometheus, you currently have hard bucket boundaries, where I say my latency is lower or equal to two seconds, one second, a hundred milliseconds, what have you. And I can put stuff into those histogram buckets according to those predefined levels, which is extremely efficient, but, like, on the code level. It's not very nice for the humans, 'cause you need to understand your system before you're able to choose good cutoff points. And if you add new ones, that's completely fine. But if you want to actually change them, 'cause you figured out that you made a fundamental mistake, you're going to have a break in the continuity of your observability data, and you cannot undo this into the past. So this is just gone. Native histograms, on the other hand, allow me to, okay, I'm not going to get into the math, but basically you define a single formula, and there comes a good default; if you have good reasons, then you can change it, but if you don't, just don't touch it. >>For the people who are into the math: hit him up on Twitter, TwitchiH, you'll get that math. >>So the, >>The thing is, people want the math, believe me. >>Oh >>Yeah. I mean, we don't have time, but hit him up. Yeah. >>There's PromCon in two weeks in Munich, and there will be a whole talk about, like, the dirty details of all of this stuff. But the high level answer is: it just does what people would expect it to do. And with very little overhead you get highly, or high resolution, histograms, which is really important for a lot of use cases. But this is not just Prometheus. With my OpenMetrics head on, the 2.0 feature, like the breaking highlight feature of OpenMetrics 2.0, will be, you guessed it, precisely the same. And with my OpenTelemetry head on: lo and behold, the same underlying technology is being put, or has been put, into OpenTelemetry.
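To make the fixed-bucket behavior Hartmann describes concrete, here is a minimal sketch using the official Python prometheus_client library. The metric name and the bucket boundaries (100 ms, 1 s, 2 s, as in his example) are illustrative assumptions; native or sparse histograms themselves are not shown, since client support for them varies by language and version.

```python
# Classic Prometheus histogram: bucket cutoffs must be chosen up front.
# Changing these boundaries later breaks the continuity of past data,
# which is exactly the problem native (sparse/exponential) histograms remove.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "demo_request_duration_seconds",      # illustrative metric name
    "Latency of demo requests in seconds",
    buckets=(0.1, 1.0, 2.0),              # 100 ms, 1 s, 2 s: hard boundaries
)

def handle_request():
    # Simulate a request and record its latency into the fixed buckets.
    REQUEST_LATENCY.observe(random.uniform(0.05, 2.5))

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
        time.sleep(0.5)
```

The design point is that the three cutoffs encode an up-front guess about the system; a native histogram instead derives buckets from a single growth factor, so operators do not have to make that guess at all.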
And we've worked for month and month and month and even longer between all the different projects to assert that we have one single standard which is actually compatible with each other, of course. One of the worst things which you can have in the cloud ecosystem is if you have subtly different things and they break in subtly wrong ways; like, it's much better to just not work than to break in a way which is just a little bit wrong, 'cause you won't figure this out until it's too late. So we spent, like, with all three hats, we spent insane amounts of time on making this happen and making this nice. >>Savannah, one of the things, we have so much going on at KubeCon. I mean, just you're unpacking, like, probably another day of CUBE. We can't go four days, but >>OpenTelemetry. >>I know, I know. I'm the same. >>Challenge accepted. >>Sorry, we're gonna stay here. >>They shut the lights off on us last night. >>They'll literally pull the plug on us. Yeah, yeah, yeah, yeah. They've done that before. It's not the first time; we go until they kick us out. We love, love doing this. But OpenTelemetry has got a lot of news too. So that's, we haven't really talked much about that. >>We haven't at >>All. So there's a lot of stuff going on that, I won't call it boring. That's like code words. That's CUBE talk for "it's working." Yeah. So it's not bad, but there's a lot of stuff going on. Like OpenTelemetry, OpenMetrics, this is the stuff that matters, 'cause when you go in at large scale, that's key. It's just, what are we missing, all the stuff. >>No, >>What are we missing? What are people missing? What's going on at the show that you think is not actually being reported on? I mean, there's a lot of hype; web assembly, for instance, got a lot >>Of hype. Oh yeah, I was gonna say, I'm glad you're asking this, because you've already mentioned about seven different hats that you wear. I can only imagine how many hats are actually in your hat cabinet. But you are someone with your fingers in a lot of different things. So you can kind of give us a state of the union. Yeah. So go ahead. Let's talk about >>It. So I think you already hit a few good points. Ease of use is definitely one of them, and improving the developer experience and not having this, like, wall of pain. Yeah. That is one of the really big ones. It's going to be interesting, 'cause it is boring, it is janitorial, and it needs a different type of persona. A lot of, or maybe not most, but a large fraction of developers like the shiny stuff. And we could see this in Prometheus, where, like, initially the people who contributed the most were those restless people who need to fix that one thing: this is impossible, I'm going to do it. Which changed over the years, where the people who now contribute the most are of the janitorial type. Like, keep things boring, keep things running, still have substantial changes, but more on the maintenance level. >>Yeah. The maintainers. I was just gonna bring that >>Up. Yeah. On the keep-things-boring-while-still-pushing-them-forward. Yeah. And the thing about ease of use is, a lot of this is boring. A lot of this is strategy. A lot of this is toil. A lot of this takes lots of research, also in areas where developers are not really good, like UX, for example, and UI; like, most software developers are really bad at those, 'cause they just think differently from normal humans, I guess. >>So that's an interesting observation that you just made.
We could unpack that on a whole other show as well. >>So the thing is, this is going to be interesting for the open source scene, of course. This needs deliberate investment by companies who assign people to those projects and say, okay, fix that one thing or make it easier to use, what have you. That is a lot easier with first party products and projects from companies, 'cause they can invest directly into the thing and they see much more of a value prop. It's kind of normal by now to allow developers, or even assign developers, onto open source projects. That's not so much the case for the TPMs, for the architects, for the UX and UI people, for the documentation people; there's not as much awareness that this is also driving value for everyone. Yes. And also there's not as much of it. >>Yeah, that's a great point. This whole workflow, this production system of open source, which has grown and keeps growing and will keep growing, needs to be funded. And one of the things we were talking about earlier in another session is the recession we're potentially hitting and the global issues, macroeconomics, that might force some of these projects or companies not to get VC >>Funding. It's such a theme at the show. So, >>So to me, it's just not about VC funding. There are other funding mechanisms that are community oriented. There are companies participating, there are other mechanisms. Richie, if you could have your wishlist of how things could progress in open source, what would you want to see happen in terms of how things are funded, how things are executed? 'Cause developers are going to run businesses. 'Cause ultimately, if you follow digital transformation to completion, IT and developers aren't a department serving the business; they are the business. And that's coming fast. You know, what has to happen, in your opinion? If you had the magic wand, what would you snap your fingers to make happen? >>If I had a magic wand, that's very different from what is achievable. But let, let's >>Go with, okay, go with the magic wand first. 'Cause we'll riff on that. So >>I'm here for dreams. Yeah, yeah, >>Yeah. I mean, I've been in open source for more than two decades now, and most of the open source is being driven forward by people who are not being paid for it. So, for example, Grafana is the first time I'm actually paid by a company to do my community work. It's always been on the side. Of course I believe in it and I like doing it; I'm also not bad at it. And so I just kept doing it. But it was, like, at night, on the weekends and everything. And to be honest, it's still at night and on the weekends, but the majority of it is during paid company time, which is awesome. Yeah. Most of the people who have driven this space forward are not in this position. They're doing it at night, they're doing it on the weekends. They're doing it out of dedication to a cause. Yeah. >>The commitment is insane. >>Yeah. At the same time, you have companies, mostly hyperscalers, and either they have really big cloud offerings or they have really big advertisement businesses, or both. And they're extracting a huge amount of value which has been created in large part elsewhere. Like, yes, they employ a ton of developers, but a lot of the technologies they built on, and the shoulders of the giants they stand upon, are really poorly paid.
And there are some efforts, like, I think, the core foundations which redistribute a little bit of money and such. But if I had my magic wand, everyone who is in open source and actually drives things forward gets, I don't know, 20% of the value which they create, just magically, somehow. Yeah. >>Or other companies don't extract as much value and redistribute more, like put more full-time engineers onto projects, or whichever. Like, that would be the ideal state, where the people who actually make the thing out of dedication are not more or less left on the sideline. Of course they're too dedicated to just say, okay, I'm not doing this anymore, you figure this stuff out, and let things crumble and falter. So, I mean, it's like with nurses and such, who just, like, they know they have something which is important and they keep doing it, because they believe in it. >>I think this is an opportunity to start messaging this narrative, because yeah, absolutely, now we're at an inflection point where there's a big community. There is a shared responsibility, in my opinion, to not just spread the wealth but make sure that it's equally balanced, and I think there's a way to do that. I don't know how yet, but I see that more than ever: it's not just come in, raid the kingdom, steal all the jewels, monetize it, and throw some token money around. >>Well, and the burnout. Yeah, I mean, the other thing that I'm thinking about too is, you know, it's the financial aspect of this, it's the cognitive load. And I'm curious, actually, when I ask you this question: how do you avoid burnout? You do a million different things, and, you know, I'm sure the open source community, that passion, >>Yeah. So is it just writing code? >>Oh, my, my software engineering days are firmly over. I'm like the cat herder and the janitor and that type of thing. I don't really write code anymore. >>So how do you avoid burnout? >>So, I did crash head-first into burnout a few years ago. It was not nice. But that was still when I had, like, a full day job, and that day job was super intense, and on top I did all the things. To be honest, a lot of the people who do this are really dedicated and are really bad at setting boundaries between work >>And life. That's why I bring it up. Yeah. Literally why I bring it up. Yeah. >>I'm firmly in that area, and I don't claim I have this fully figured out yet. It's also even more risky to some extent, because, like, it's good if you're paid for this and you can do it during your work time. But on the other hand, if it's so nice, and if your hobby and your job are almost completely intersectional, it >>Becomes really, the lines are blurry. >>Yeah. And then, yeah, like with work from home, you don't even commute anywhere anymore. You just sit down at your computer and you just have fun doing your stuff, and all of a sudden it's deep at night and you're still like, I want to keep going. >>I know. I was gonna say, passion is something we all have in common here. >>That's the key. That is the key point there: the passion project becomes the job. But now the contribution is interesting, because now this ecosystem has a commercial aspect. Again, this is the balance between commercialization and keeping that organic production system that's called open source.
I mean, it's so fascinating, and this is amazing. I want to continue that conversation. It's >>Awesome. Yeah. Yeah. This is great. Richard, this entire conversation has been excellent. Thank you so much for joining us. How can people find you? I mean, I gave them your Twitter handle, but if they wanna find out more about Grafana, Prometheus, and the 1,700 things you do? >>For Grafana, grafana.com; for Prometheus, prometheus.io; for my own stuff, GitHub slash RichiH slash talks. Of course, I track all my talks in there. I currently don't have a personal website 'cause I stopped bothering, but that repository is very much where you find what I do; for example, the recording link for this will be uploaded to that GitHub. >>Yeah. Great follow. You also run a lot of events and a lot of community activity. Congratulations to you. Also, we talked about this last time: the largest IRC network on earth, you ran that, and built a data center from scratch. What happened? You've done >>That? >>He hasn't done a cloud hyperscaler to compete with Amazon yet. That's the next one. Why don't you put that on the >>Plate? We'll be sure to feature whatever Richie does next year on theCUBE. >>I'm game. Yeah. >>Fantastic. On that note, Richie, again, thank you so much for being here. John, always a pleasure. Thank you. And thank you for tuning in to us here live from Detroit, Michigan on theCUBE. My name is Savannah Peterson, and here's to hoping that you find balance in your life this weekend.

Published Date : Oct 28 2022

Kirk Haslbeck, Collibra | Data Citizens '22


 

(bright upbeat music) >> Welcome to theCUBE's coverage of Data Citizens 2022, Collibra's customer event. My name is Dave Vellante. With us is Kirk Haslbeck, who's the Vice President of Data Quality at Collibra. Kirk, good to see you. Welcome. >> Thanks for having me, Dave. Excited to be here. >> You bet. Okay, we're going to discuss data quality, observability. It's a hot trend right now. You founded a data quality company, OwlDQ, and it was acquired by Collibra last year. Congratulations! And now you lead data quality at Collibra. So we're hearing a lot about data quality right now. Why is it such a priority? Take us through your thoughts on that. >> Yeah, absolutely. It's definitely exciting times for data quality, which, you're right, has been around for a long time. So why now, and why is it so much more exciting than it used to be? I think it's a bit of a stale answer, but we all know that companies use more data than ever before and the variety has changed and the volume has grown. And while I think that remains true, there are a couple other hidden factors at play that everyone's so interested in as to why this is becoming so important now. And I guess you could kind of break this down simply and think about if, Dave, you and I were going to build, you know, a new healthcare application and monitor the heartbeat of individuals, imagine if we get that wrong, what the ramifications could be? What those incidents would look like? Or maybe better yet, we try to build a new trading algorithm with a crossover strategy where the 50 day crosses the 10 day average. And imagine if the data underlying the inputs to that is incorrect. We'll probably have major financial ramifications in that sense. So, it kind of starts there, where everybody's realizing that we're all data companies and if we are using bad data, we're likely making incorrect business decisions. But I think there's kind of two other things at play. I bought a car not too long ago and my dad called and said, "How many cylinders does it have?" And I realized in that moment, I might have failed him because I didn't know. And I used to ask those types of questions about anti-lock brakes and cylinders and whether it's manual or automatic, and I realized I now just buy a car that I hope works. And it's so complicated with all the computer chips, I really don't know that much about it. And that's what's happening with data. We're just loading so much of it. And it's so complex that the way companies consume it in the IT function is that they bring in a lot of data and then they syndicate it out to the business. And it turns out that the individuals loading and consuming all of this data for the company actually may not know that much about the data itself, and that's not even their job anymore. So, we'll talk more about that in a minute, but that's really what's setting the foreground for this observability play and why everybody's so interested. It's because we're becoming less close to the intricacies of the data and we just expect it to always be there and be correct. >> You know, the other thing too about data quality: for years we did the MIT CDOIQ event; we didn't do it last year, COVID messed everything up. But the observation I would make there, and I'd love your thoughts, is that data quality, which used to be called information quality, used to be this back office function, and then it became sort of front office with financial services and government and healthcare, these highly regulated industries.
And then the whole chief data officer thing happened and people were realizing, well, they sort of flipped the bit from data as a risk to data as an asset. And now, as we say, we're going to talk about observability. And so it's really become front and center, just the whole quality issue, because data's fundamental, isn't it? >> Yeah, absolutely. I mean, let's imagine we pull up our phones right now and I go to my favorite stock ticker app and I check out the NASDAQ market cap. I really have no idea if that's the correct number. I know it's a number, it looks large, it's in a numeric field. And that's kind of what's going on. There are so many numbers, and they're coming from all of these different sources and data providers, and they're getting consumed and passed along. But there isn't really a way to tactically put controls on every number and metric across every field we plan to monitor. But with the scale that we've achieved, even in the early days before Collibra, what's been so exciting is we have these types of observation techniques, these data monitors, that can actually track past performance of every field at scale. And why that's so interesting, and why I think the CDO is listening really intently nowadays to this topic, is: maybe we could surface all of these problems with the right solution of data observability and with the right scale, and then just be alerted on breaking trends. So we're sort of shifting away from this world of "you must write a condition, and then when that condition breaks"; that was always known as a break record. But what about breaking trends and root cause analysis? And is it possible to do that with less human intervention? And so I think most people are seeing now that it's going to have to be a software tool and a computer system. It's not ever going to be based on one or two domain experts anymore. >> So, how does data observability relate to data quality? Are they sort of two sides of the same coin? Are they cousins? What's your perspective on that? >> Yeah, it's super interesting. It's an emerging market, so the language is changing a lot and the topic areas are changing. The way that I like to say it or break it down, because the lingo is constantly a moving target in this space, is really breaking records versus breaking trends. I could write a condition: when this thing happens it's wrong, and when it doesn't, it's correct. Or I could look for a trend, and I'll give you a good example. Everybody's talking about fresh data and stale data, and why would that matter? Well, if your data never arrived, or only part of it arrived, or it didn't arrive on time, it's likely stale, and there will not be a condition that you could write that would show you all the good and the bads. That was kind of your traditional approach of data quality break records. But your modern day approach is: you lost a significant portion of your data, or it did not arrive on time to make that decision accurately, on time. And that's a hidden concern. Some people call this freshness, we call it stale data, but it all points to the same idea: the thing that you're observing may not be a data quality condition anymore. It may be a breakdown in the data pipeline. And with thousands of data pipelines in play for every company out there, there's more than a couple of these happening every day.
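As a rough illustration of the distinction Kirk draws between a break record (a hand-written condition) and a breaking trend (a deviation from past behavior, including freshness), here is a small Python sketch. The thresholds, sample row counts, and time window are assumptions for illustration only, not Collibra's implementation.

```python
# Sketch: a static rule ("break record") versus a trend check ("breaking trend")
# over daily row counts, plus a simple freshness/staleness check.
from datetime import datetime, timedelta, timezone
from statistics import mean, stdev

row_count_history = [10_120, 9_980, 10_340, 10_055, 10_210]  # illustrative history
todays_row_count = 6_400
last_arrival = datetime.now(timezone.utc) - timedelta(hours=9)

def break_record(value, minimum=0):
    """Static, hand-written condition: the count must not be negative."""
    return value >= minimum

def breaking_trend(history, value, sigmas=3.0):
    """Flag values far outside the historical distribution."""
    mu, sd = mean(history), stdev(history)
    return abs(value - mu) > sigmas * sd

def is_stale(arrival_time, max_lag_hours=6):
    """Freshness check: data should have landed within the expected window."""
    return datetime.now(timezone.utc) - arrival_time > timedelta(hours=max_lag_hours)

if __name__ == "__main__":
    print("Break record violated:", not break_record(todays_row_count))
    print("Breaking trend detected:", breaking_trend(row_count_history, todays_row_count))
    print("Data is stale:", is_stale(last_arrival))
```

The hand-written rule passes even though roughly a third of the expected rows are missing; only the trend and freshness checks catch it, which is the point being made above.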
>>So what's the Collibra angle on all this? You made the acquisition, you've got data quality and observability coming together, you guys have a lot of expertise in this area, but you hear provenance of data, you just talked about stale data, the whole trend toward real time. How is Collibra approaching the problem, and what's unique about your approach? >>Well, I think where we're fortunate is with our background. Myself and the team, we sort of lived this problem for a long time in the Wall Street days, about a decade ago, and we saw it from many different angles. And what we came up with, before it was called data observability or reliability, was basically the underpinnings of that. So we're a little bit ahead of the curve there when most people evaluate our solution; it's more advanced than some of the observation techniques that currently exist. But we've also always covered data quality, and we believe that people want to know more, they need more insights, and they want to see break records and breaking trends together so they can correlate the root cause. And we hear that all the time: I have so many things going wrong, just show me the big picture, help me find the thing that, if I were to fix it today, would make the most impact. So we're really focused on root cause analysis, business impact, connecting it with lineage and catalog metadata. And as that grows, you can actually achieve total data governance. At this point, with the acquisition of what was a lineage company years ago, and then my company OwlDQ, now Collibra Data Quality, Collibra may be the best positioned for total data governance and intelligence in the space. >>Well, you mentioned financial services a couple of times and some examples; remember the flash crash in 2010? Nobody had any idea what that was, they just said, "Oh, it's a glitch," so they didn't understand the root cause of it. So this is a really interesting topic to me. So we know at Data Citizens '22 that you're announcing, you've got to announce new products, right? Your yearly event. What's new? Give us a sense as to what products are coming out, but specifically around data quality and observability. >>Absolutely. There's always a next thing on the forefront. And the one right now is these hyperscalers in the cloud. So you have databases like Snowflake and BigQuery, and Databricks, Delta Lake, and SQL pushdown. And ultimately what that means is a lot of people are storing and loading data even faster, in a SaaS-like model. And we've started to hook in to these databases. And while we've always worked with those same databases in the past, they're supported today, we're now doing something called native database pushdown, where the entire compute and data activity happens in the database. And why that is so interesting and powerful now is everyone's concerned with something called egress: did my data, that I've spent all this time and money with my security team securing, ever leave my hands? Did it ever leave my secure VPC, as they call it? And with these native integrations that we're building, and about to unveil here as kind of a sneak peek for next week at Data Citizens, we're now doing all compute and data operations in databases like Snowflake. And what that means is, with no install and no configuration, you could log into the Collibra Data Quality app and have all of your data quality running inside the database that you've probably already picked as your go-forward, secure database of choice. So we're really excited about that.
And I think if you look at the whole landscape of network cost, egress cost, data storage and compute, what people are realizing is it's extremely efficient to do it in the way that we're about to release here next week. >>So this is interesting, because what you just described, you mentioned Snowflake, you mentioned Google, oh, actually you mentioned, yeah, Databricks. Snowflake has the data cloud. If you put everything in the data cloud, okay, you're cool, but then Google's got the open data cloud, if you heard at Google Next, and now Databricks doesn't call it the data cloud, but they have, like, the open source data cloud. So you have all these different approaches, and there's really no way, up until now I'm hearing, to really understand the relationships between all those and have confidence across them. It's like (indistinct), you should just be a node on the mesh. And I don't care if it's a data warehouse or a data lake or where it comes from; it's a point on that mesh, and I need tooling to be able to have confidence that my data is governed and has the proper lineage, provenance. And that's what you're bringing to the table. Is that right? Did I get that right? >>Yeah, that's right. And for us, it's not that we haven't been working with those great cloud databases, but it's the fact that we can send them the instructions now, we can send them the operating ability to crunch all of the calculations, the governance, the quality, and get the answers. And what that's doing, it's basically zero network cost, zero egress cost, zero latency of time. And so when you log into BigQuery tomorrow using our tool, or let's say Snowflake, for example, you have instant data quality metrics, instant profiling, instant lineage and access privacy controls, things of that nature that just become less onerous. What we're seeing is there's so much technology out there, just like all of the major brands that you mentioned, but how do we make it easier? The future is about fewer clicks, faster time to value, faster scale, and eventually lower cost. And we think that this positions us to be the leader there. >>I love this example because everybody talks about, wow, the cloud guys are going to own the world, and of course now we're seeing that the ecosystem is finding so much white space to add value, connect across clouds. Sometimes we call it supercloud, or interclouding. Alright, Kirk, give us your final thoughts on the trends that we've talked about and Data Citizens '22. >>Absolutely. Well, I think one big trend is discovery and classification. We're seeing that across the board: people used to just need to know it was a zip code, and nowadays, with the amount of data that's out there, they want to know where everything is, where their sensitive data is, whether it's redundant; tell me everything, inside of three to five seconds. And with that comes, they want to know, in all of these hyperscale databases, how fast they can get controls and insights out of their tools. So I think we're going to see more one-click solutions, more SaaS-based solutions, and solutions that hopefully prove faster time to value on all of these modern cloud platforms. >>Excellent. All right, Kirk Haslbeck, thanks so much for coming on theCUBE and previewing Data Citizens '22. Appreciate it. >>Thanks for having me, Dave. >>You're welcome. All right, and thank you for watching. Keep it right there for more coverage from theCUBE.
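As a hedged sketch of the pushdown idea Kirk describes, the snippet below sends an aggregate-only SQL query to the warehouse, so the heavy computation runs in-database and only summary numbers, never raw rows, cross the network. The table and column names are invented for illustration, and a real deployment would use the chosen warehouse's own Python connector to create the connection.

```python
# Sketch of a pushdown-style quality check: compute runs inside the warehouse,
# only small aggregates leave it (the egress/cost point made above).
# `conn` is any DB-API 2.0 style connection object supplied by the caller.

QUALITY_SQL = """
SELECT
    COUNT(*) AS row_count,
    SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END) AS null_customer_ids,
    MAX(loaded_at) AS last_loaded_at
FROM orders
"""  # table and column names are illustrative assumptions

def run_pushdown_check(conn, max_null_ratio=0.01):
    """Run the aggregate query in-database and evaluate the result locally."""
    cur = conn.cursor()
    cur.execute(QUALITY_SQL)
    row_count, null_ids, last_loaded_at = cur.fetchone()
    null_ratio = (null_ids / row_count) if row_count else 0.0
    return {
        "row_count": row_count,
        "null_customer_id_ratio": null_ratio,
        "last_loaded_at": last_loaded_at,
        "passed": null_ratio <= max_null_ratio,
    }

# Usage (connection setup depends on the warehouse's own connector):
#   result = run_pushdown_check(conn)
#   print(result["passed"], result)
```

The trade-off is simply where the work happens: pulling the rows out to check them pays network and egress costs on every run, while shipping the check to the data keeps those costs near zero.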

Published Date : Oct 24 2022

Noor Shadid, Wells Fargo | AnsibleFest 2022


 

(melodic music) >> Good afternoon. Welcome back to Chicago. Lisa Martin here with John Furrier. Day one of our coverage of Ansible Fest 2022. John, it's great to be back in person. People are excited to be here. >> Yeah. We've had some great conversations with folks from Ansible and the community and the partner side. >> Yeah. One of the things I always love talking about John, is talking with organizations that have been around for a long time that maybe history, maybe around nearly a hundred years, how are they embracing technology to modernize? Yeah, we got a great segment here with the financial services leader, end user of Ansible. So it's be great segment. >> Absolutely. Please welcome Noor Shadid to the program, the senior SVP, excuse me, senior technology manager at Wells Fargo. Noor it's great to have you on theCUBE. Thank you for joining us. >> Of course. Happy to be here. >> Thanks. >> Talk a little bit about technology at Wells Fargo. I was mentioning to you I've been a longtime customer and I've seen the bank evolve incredibly so in the years I've been with it. But... >> Yeah. >> ...talk about Wells Fargo was a technology-driven company. >> Yeah. So I like to consider Wells, right? Being in a financial institution company. So I consider us a technology company that does banking as a customer, right? Like we were talking about. There's so much that we've been able to release over the couple of years, right? I mean, decades worth of automation and technology has been coming out, but lately, right? The way we provide for our customers, how fast at scale, what we're doing for our customers, it's been, it's been significant, right? And I think our goal is always how can we enhance the process for our customers and how can we provide them the next best thing? And I think technology has really allowed us to evolve with our customers. >> The customers. We are so demanding these days. Right? I think one of the things that short supplied in the last two years was patience and tolerance. >> Yes. >> People. And I don't think that's going to rubber band back? >> Yeah. No, I don't think so. >> So how, talk to us about how Wells is using automation to really drive innovation and, surprise and delight those customers on a minute by minute basis. >> Yeah. And so, you know, if you think about banking, we've been able, with automation, we've been able to bring banking into the 21st century. You do not have to go to a branch to manage your money anymore. You do not have to go, you know, go to deposit your check inside of a branch. You can do it through your mobile app, right? That's driven by automation and innovation, right? And, you know, we have all of these back ends tools working for us to help get us to this next generation of, of banking. We can instantly send money to each other. We don't have to worry about, I need to go and figure out how I'm going to get money to this person and I need to wait, you know, X amount of days. You, you have the ability and you have, you feel safe being able to manage your money at the organization. And so automation has really allowed us to get to this place where we can constantly enhance and provide features and reliability to our customers. >> It's interesting you mentioned that you guys are a technology can have it do banking reminds me of the old iPhone analogy. It's a computer that happens to make phone calls. >> Yeah. >> So like, this is the similar mindset. How do you guys keep up? >> Yeah. >> With the technology? >> So it's tough, right? 
Because there's so much that comes out. And I think the only thing that's constant in technology is change, right? Because it's constantly evolving. But what we do is we, integrate very well with these new tools. We do proof of concepts where we try to, you know, what's on the market, what's hot, how can we involve, like, how can we involve these new tools in our processes? How can we provide a better end result for our customers by bringing in these new tools? So we have a lot of different teams that bring, you know, their jobs are to like, do these proof of concepts and help us build and evolve our own strategies, right? So it keeps us, it keeps us on our toes and I think it keeps, you know, all these new things that are coming out in the market. We're a part of it. We want to evolve with those, what the latest and greatest is. And it's, it's been working right as customers of financial services and us managing our money through, you know, through banks. It's been great. >> So the business is the application. >> Yes. >> And how do you guys make that happen when it comes down to getting the teams aligned? What's the culture like? Explain. >> Yeah. So at Wells we have evolved so much over the, over the last few years. The culture right now is we want to make changes. You know, we are making changes. We want to drive through innovation. We want to be able to provide our, you know, it's a developer centric approach right now, right? We want to push to the next and the greatest. And so everybody is excited and everybody's adapting to all of what's happening in the environment right now. So it's been great because we are able to use all of these new features and tools and things that we were just talking about by allowing our developers to do that work and allowing people to learn these new skills and be able to apply them in their jobs, which is now creating this, you know, a better result for our customers because we're releasing at such a faster pace. And at scale. >> Talk about how, you talked about multiple groups in the organization really investing in innovative technology. How do you get buy-in? What's that sort of pyramid like up to the top level? >> Yeah. >> Because to your point, you're making changes very quickly and consumers demand it. >> Yep. >> You can do everything from home these days. >> Yep. >> You don't have to go into a branch. >> Yeah, yeah. >> Which has changed dramatically in the last it's. >> Powerful few years. Yeah. >> But how, what's that buy-in conversation like from our leadership? >> Yeah. If you don't have leadership buy-in, it's very difficult to make those changes happen. But we at Wells have such a strong support from our leadership to be a part of the change and be, you know, constantly evolve and get better. So the way we work, cause we're such a large organization, you know, we bring in our business, you know, our business teams and we talk to them about what is it that's best going to better our customers. How do we also not just support external but internal, right? How do we provide these automated tools or processes for people to want to do this next work and, and do these, you know, these new releases for our customers. And so we bring in our business partners and, and we bring in our leadership and, our stakeholders and we kind of present to them, you know, this is what we're trying to do. This is the return that you'll get. This is what our customers will also receive. And this is, you know, this is how we keep evolving with that. 
>> How has the automation culture changed? Because a big discussion here is reuse, teamwork, I call it multiplayer kind of organizations where people are working together. 'Cause that's a big theme of automation. >> Yeah. >> Reuse, leverage. >> Yep. >> Can you explain how you guys look at that? >> Yeah. It's changed the way that we do banking, because we're eliminating a lot of the repetitive tasks and the toil, because we have partners that are developing these, you know, services. So specifically with Ansible, we have these playbooks. Rather than having every customer write the same playbook but with their own little, you know, flavor to it, we're able to create these generic patterns that customers can just consume, simply by going into a tool, filling out that playbook template, credentials, or whatever it is that they need, and executing it. They don't have to worry about developing something from scratch. And it also allows our customers to feel safe, because they don't have to have those skills out of the box to be able to use these automation tools, right? They can use what's already been written and executed. >> So that makes things go faster. What are the benefits? Speed? >> Faster, stability, right? We're now speed, stability, scalability, because we're now able to use this at scale. It's not just individual teams trying to do this within small spaces. We're able to be reliable, right? Automation allows us to be reliable internally and for our customers, because there's no human intervention when you're automating, right? You have these opportunities now for people to just, it's one click, you know, a one-click solution, or you're end to end. You got self-healing involved. It's really driving the way that we do our work today. >> So automation sounds like it's really fueling the internal employee experience at Wells... >> Yes. >> ...as well as the customer experience. And those two things are like this to me. They're inextricably linked. >> A hundred percent, because they need to be together, right? You want your internal teams to also be happy, because they want to be able to develop these solutions and provide these automation opportunities for our teams, right? And so with the customers, they're constantly seeing these great features come out, right? We can, you know, with AI/ML today, we're now able to detect fraud significantly better than what we could've done a couple years ago. And developers are excited to be able to do that, right? To be able to learn all these new tools and new technologies. >> What's interesting about Wells is you guys are like an edge application. Obviously everyone's got banking in their hand. FinTech, obviously money's involved. So there are people interested in getting that money. >> Yeah. >> Security, hackers or whatnot. So when you've got speed and you've got the consistency, I get that. As you look at securing the app, that becomes a big part of it. What's the conversation like there? >> Yeah. >> 'Cause that's the number one concern. And it's an edge app. I got my mobile, I got my desktop. >> Yeah. >> Everything's in the cloud, on premise. >> Yeah. And I think for us, security is number one. You know, we want to make sure that we are providing the best for our customers and that they feel safe. Banking, whatever financial service you're working with, you want to feel like you can trust your money with those services. Right?
So what we do is we make sure that our security partners are with us from day one. They're a part of the process. They're automating their pieces as well. We don't want to rely on humans to do a lot of the manual work and do the checking and the logging. You want it to be through automation and new tools, right? You want it to be done through trusted services. You don't, you know, security is right there with us. They're part of our technology organization. They are in the technology org. So they're the ones that are helping us get to that next generation to provide, you know, more secure processes and services for customers. >> And that's key for trust. >> Yes. >> And trust is critical to reduce churn and to, you know, increase the customer lifetime value. But, but people, I mean, especially with the amount of generations that are alive today in banking, you need to be able to deliver that trust intrinsically to any customer. >> Yes, a hundred percent. And you want to be able to not only trust the service but yourself that you can do it. You know, when you go into your app and you make a payment, or when you go in and you want to send, you know, you want to send money to a different, you know, a different bank account, you want to be able to know that what you just did is secure and is where you plan to send it. And so being able to create that environment and provide those services is, is everything right for our customers. >> What are some of the state-of-the-art kind of techniques or trade craft around building apps? 'Cause I mean, basically you're digitally transformed. I mean, you guys are technology first. >> Yeah. >> The app is the company. >> Yeah. >> That's, that's the bank. How do you stay current? What's some of the state of the art things that you guys do that wasn't around just a few years ago? >> Yeah, I mean, right now just using, we're using tools like Terraform and Ansible. We're making sure that those two are hand in hand working well together. So when we work on provisioning, when we, during provisioning where it's all, you know, it's automated, fully end to end, you know, AI ops, right? Being able to detect reoccurring issues that are happening. So if you have a incident we want to learn from that incident and we want to be able to create, you know, incident tickets without having to rely on a human to find that, you know, that problem that was occurring and self-healing, right? All of this is starting to evolve and bringing in the, the proper alerting tools, bringing in the pro, you know, the right automation tools to allow that self-healing to work. That's, you know, these are things that we didn't have, you know, year, decade ago. This is all coming out now as we're starting to progress and, and really take innovation and, you know, automation itself.... >> What's the North star internally when you guys say, hey, you know, down five years down the road, bridge to the future, we're transforming, we've continued to innovate. Scale is a big deal. Data, data sovereignty, all these things are coming up. And what's the internal conversation like when you talk about a future state? >> Yeah, I think right now we're on our cloud transformation journey, right? We're moving right now. We have workloads into our two CSPs or public cloud. Also providing a better service for infrastructure and being able to provide services internally at a faster space, right? 
So moving into the public cloud, making sure everything's virtualized, moving away from hard, you know, physical hardware or physical servers. That's kind of the journey that we're on right now. Right? Also, machine learning. We want to be able to rely on these, you know, bots. We want to be able to rely on, on things learning from what we're doing so that we don't make the same mistakes again. >> Where would you say the most value or the highest ROI that you've gotten from automation today? Where is that in the organization? >> There's so much, but what I mean because of all of the work that we're doing, there's a lot that I could list, but what I will say is that the ability to allow self-healing in our environments without causing issues is a very big return. Automating failovers, right? I think a lot of our financial institutions have made that a priority where they want to make sure that their applications are active, active and also that when things do go wrong, there is something in place to make sure that that incident actually doesn't, you know, take down any problems. I think it's just also investing in people. Right now, the market is hot and we want to make sure that people feel like they're being able to contribute, they're using the latest and greatest tools. They're able to upskill within our own environments at the firm. And I think our organization does an amazing job of prioritizing people. And so we see the return because we're prioritizing people. And I think, you know, a lot of institutions are trying, you know, people first, people first. But I can say that at Wells, because we are actually driving this, we're allowing, you know, we're enforcing that. We want our engineers to get the certifications. We're providing, you know, vouchers so that people can get those clouds certifications. It's when you do that and you put people first, everything kind of comes together. And I think, you know, a lot of what we see in our industry, it's not really the technology that's the problem, it's process because you're so, you know, we're working at large scales. Our environments are massive. So, you know, my three years at Wells have seen a significant amount of change that has really driven us to be.... >> On that point better. How about changing of the roles? IT, I mean, back in the day, IT serves the business, you know, IT is the business now, right? As, as you've been pointing out. What does the roles change of as automation scales in, is it the operator? I mean, we know what's going on with dev's devs are doing more IT in the CICD pipe lining. >> Yep. >> So we see that velocity check, good cloud native development. What's the op scene look like? It seems to be a multi-tool role. >> Yeah. >> Where the versatility of the skill set... >> Yep. >> ...is the quick learner. >> Yep, able to adapt. >> And yeah, what's your view on this new persona that's emerging from this new opportunity? >> Yeah, and I think it's a great question because if you think about where we're going, and even the term DevOps, right? It means so many things to different people. But literally when you think about what DevOps is allowing our developers and our operations to work together on one team, it's allowing, you know, our operation engineers aren't, you know, years ago, ops engineers were not doing the development work. They were relying on somebody to do the development work and they were just supporting making sure our systems were always available, right? 
Our ops engineers are now doing the development work. They're able to contribute, they're writing their own playbooks. They're able to take them into production and ensure that they're being used correctly. We are a change-driven execution organization. Everything is driven through change, and allowing our ops engineers or production engineers to write their own playbooks, right? And they know what's happening in the environment. It's powerful. >> Yeah. You're seeing DevOps become a job title. >> Yeah (laughs). >> Used to be like a function, a philosophy... >> Yeah, yeah. >> ...and then SREs... >> SREs. >> SREs are like, how many servers do you have? I don't know, a cloud. What's next? (all laugh) >> What's next? Yeah, I think with SREs, you know, it's important that if you have site reliability engineers, you're working towards, you know, those non-functional requirements... >> Yeah. >> ...making sure that you're handling those key components that are required to ensure that our systems, our applications, and our integrations, you know, are up there and they're meeting the standards that we set for them. >> And I think Red Hat Ansible nailed it here, because infrastructure as code, we get that, and configuration as code, but ops as code really is that SRE outcome. SRE also came from the Google background, but that means infrastructure's just doing its thing. >> Yes. >> The ops is automated. >> Yes. >> That's an interesting concept. >> Yeah, because it's still new, right? A lot of organizations used to see, and they probably still see, operations as being the, you know, their role is just to make sure that the lights are on, and they have specific access so they, you know, they're not touching code. But the people that are doing the work and know the environment should really be the ones creating the content for it. So yeah, I mean, it's crazy what's happening now. >> So I've got an analogy, it's going to be a banking analogy, but for tech. You know, back in the day with automation it was, "Oh, it's going to put my job out of business, ATMs are going to put the teller out of business," and there are more tellers now than there were before the ATMs. So that metaphor applies to tech, where people are like, "What's automating away? Is it my job?" And actually, people know it's not. >> Yeah. >> But what does that free up? So if you assume, if you believe that's good, you say, okay, all the grunt work and the low-level undifferentiated heavy lifting gets automated away. >> Yeah. >> Great. What does that free up the talent to do? >> Yeah, and that's great that you bring it up, because I think people have a fear of automation, especially people that weren't doing automation in the past and whose roles are now the ones being automated out. They're fearful that they don't have a space, a role anymore. But that's not the case at all. What we prioritize is, now that those engineers have this new skill set, apply it. Start using it to be a part of this transformation, right? We went from physical to virtual, to now, you know, we're moving into the public cloud, right? And for that transformation, you need people who are ramping up their skill sets. You know, one of the tools that I own at Wells is Terraform, and right now our priority is we're trying to ramp up the organization to learn Terraform, right?
We want people to learn, you know, this new syntax, this new, you know, HCL. And, you know, people have been automating some of the stuff that they're doing in their day to day, and now they're trying to learn something new so that they can contribute to this new transformation. >> So new functionality, higher value services? >> Yes, yeah. >> It brings tremendous opportunity for those folks involved in automation. >> Yes. >> On so many levels. >> Yep. >> Last question, Noor, for you: as we are rounding out calendar year 2022 and entering into 2023, that patience that we talked about is still not coming back. What's next for Wells as a technology company that does banking? >> I mean, you name it, we're working on it, because we want to be able to deliver the best for our customers. And I think right now, you know, it's our digital transformation strategy, and moving into the public cloud and getting our applications re-architected so that we are moving into microservice-driven apps, right? We're moving these workloads into the public cloud in a seamless way. We're not lifting and shifting, so that we're not causing more problems in the environment. Right. And I think our goal is, right, like I was saying earlier, people, and evolving with the technology that's coming out. We are a part of the change, and we are happy to be a part of that change and making those changes happen. >> People first. >> Awesome, awesome stuff. >> Automation first sounds outstanding, and I will never look at Wells Fargo as a bank again. >> Yeah. (laughter) >> Perfect. Perfect. >> Yeah, that's awesome. >> It's been such a pleasure having you on the program, talking about how transformative Wells has been and continues to be. >> Yeah. >> We appreciate your insights and your time. >> Thank you. >> Thank you so much. It was lovely being here. My pleasure. Thank you guys. >> For our guest and John Furrier, I'm Lisa Martin. You've been watching theCUBE all day, I'm sure, live from Chicago at AnsibleFest 2022. We hope you have a wonderful rest of your day, and John and I will see you tomorrow morning.
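A brief editorial aside on the playbook reuse pattern Noor described earlier in this interview, where teams consume a shared, generic playbook by filling in their own parameters instead of writing one from scratch: one way to sketch that consumption step in code is with the ansible-runner Python library, passing team-specific values as extra vars. The project path, playbook name, inventory, and variables below are hypothetical and purely illustrative; they are not Wells Fargo's actual tooling.

```python
# Illustrative sketch: consuming a shared, generic Ansible playbook by supplying
# team-specific parameters (the "fill out the playbook template" step).
# Assumes the ansible-runner package and a hypothetical shared project directory.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/opt/automation/shared",    # hypothetical shared project dir
    playbook="patch_linux_fleet.yml",              # generic, reusable playbook
    inventory="inventories/team_payments.ini",     # each team points at its own hosts
    extravars={                                    # team-specific inputs
        "maintenance_window": "02:00-04:00",
        "reboot_allowed": True,
    },
)

print(f"status={result.status} rc={result.rc}")    # e.g. "successful" / 0
```

The design point is that the playbook itself stays generic and centrally maintained; teams supply only inventory and variables, which is what makes the pattern safe for people who are still building their Ansible skills.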

Published Date : Oct 19 2022


Michael Ouissi, IFS | IFS Unleashed 2022


 

(soft music) >> Hey, welcome back to theCUBE's coverage from Miami of IFS Unleashed 2022, Lisa Martin here with you. We've had great conversations today with IFS execs, customers, partners. The ecosystem is quite robust and quite strong. And we've had some alumni on, and I've got another alum who's back with me, Michael Ouissi, the group's COO of IFS. Michael, welcome back to theCUBE. >> Thanks for having me, my pleasure. >> It's great to be back in person. >> Absolutely. >> It was great to walk into the keynote this morning and see a full room. I was talking with Darren Roos, your CEO, earlier this morning, and I said it must have felt great to walk out on stage and actually see a sea of people, customers and partners who want to engage and get that relationship with IFS just turbocharged. >> Absolutely. I mean, it's been three years, we haven't had this buzz, this energy, and the opportunity to actually see all our customers and also show our customers who we are, how we are evolving and how we're becoming a different company over the past four years. >> And it's impressive what IFS has done in that timeframe. All the conversations I've had today really reflect the strategy, the strong strategy and vision that this company has. But I was looking at some of the financials and saw that in your first half of 2022, which ended in June, there was tremendous growth. ARR up 33%, and I think recurring revenue is in the 70 percent range now. Lots of new customers, a lot of trust that existing customers are showing to the company. >> Yeah, absolutely. Look, and I think the secret sauce is that we have focused on where our strengths are, we haven't gone astray, we haven't tried to actually capture growth in any other vertical. We are really very religious about where we're going, and there, where we are going, we are going deep, and we really are trying to be the best version of ourselves for our customers and for those customers' business transformation needs. >> Talk a little bit about that vertical specialization. It's something that we don't see very often, but throughout all of my conversations today with your executives, IFS executives, with customers, with partners, that domain expertise, really the granularity of the domain expertise, is really resonant. IFS has achieved that in those five key verticals in which you have such specialization. >> Yeah, look, I mean, I would love to take credit for having been the person who has done that, but IFS has, over the past 35 years, really had this very strong focus. But what actually was important, when you try to double a business in the space of four years, is not to be tempted to go away from that but actually double down on exactly that, and see the opportunity in those verticals and make sure that our customers actually are getting the attention and the functionality they deserve. >> Let's talk about customers. Over 10,000 customers right now. I was also in the keynote this morning, where Christian Pedersen was sharing that, in its first 18 months, IFS Cloud has over 400,000 users. So the growth is tremendous. The customer loyalty is ostensible in those verticals. Talk about customers and their influence on the company, the direction the technology goes, the evolution, that kind of stuff. >> Yeah, I mean, look, as I said, we are all about the depth of the functionality, and that means that we need to listen to our customers, we need to listen to what's going on in the industries. We also need to not just listen, but we need to think forward. >> Yeah.
>> We need to have some thought leadership on what we think is going to emerge, and then test that with our customers again. So our customers are at the core of everything we do. When we engage with a customer, we start with trying to understand their business in depth. We've got our own methodology around that, and we don't just try to push technology onto them, but we are trying to understand what their business drivers are, and then actually try to apply technology to what enables them to deliver on those business transformation objectives they've got. >> What are some of the changes or the waves that you've seen, especially the last couple of years during the pandemic, when we saw so many customers pivot, we need to transform digitally to stay alive, and then those that did that well enough to be competitive and to thrive? Talk to me about some of the changes, as the group's COO, that you've seen. >> Yeah, so when you go back, I mean, there are two types of transformation, business and digital transformation, but they are the same thing, they're just different sides of the coin. And when I talk about business transformation, what we're seeing a lot is, and there's this big buzzword, servitization, out there, but customers going to service models, customers trying to build an end-to-end business that is more viable, more sustainable, more successful in how they develop great moments of service for their customers, that is something we are seeing a lot. And during this business transformation, digital transformation has become a means to that end. And that is something where customers have matured a lot, where in the past we have seen a lot of the IoT, AI, machine learning, cloud, everything was a means or a purpose in itself, and that has changed. It's now become actually a means to an end. It's become a means to actually deliver a business transformation and a business outcome that is meaningful for their customers. >> Has to be meaningful for their customers. I love how IFS talks about enabling your customers to deliver those moments of service. And when we think of it in our consumer lives, many of us flew here, and you think about what's the moment of service for an airline? Well, it's being able to get on that plane on time, have it leave on time and meet my expectations as a demanding consumer. But regardless of whether we're talking about aerospace, energy, manufacturing, engineering, the customers on the other end expect to have an integrated, seamless experience that's not fragmented, that is able to deliver moments of service that then help drive up their revenue. So what IFS is doing is so embedded in what your customers are able to deliver to their customers. >> Yeah, absolutely. And look, if you look at all the things that have to come together to actually have a plane take off at the right point in time, or if you take any other example, there are so many things that need to go right. Crew scheduling, you need to have the right crew at the right point in time. You need to have them actually with the right experience to fly the right plane. You need to have airplane maintenance going right to have the plane available at the right point in time, and no technical failures, and so on and so forth.
And we look at that as between customers, the people, and the assets that an organization has, you need to coordinate between all those dimensions in everything you do to make sure that this one moment of service where your plane takes off on time, you actually catch your connecting flight at the other end, that this actually is being delivered. And that's what drives us, that's what customers are driving into our product development, into how we embed AI, machine learning and so on in our technology to make it relevant to exactly that moment of service. >> That's what we as those consumers want. We want relevance, we want personalization, we want that relationship to know who we are and how to serve us best. Let's dig into the Jotun case study. He was going to join us, our CEO was going to join us, couldn't make it. Talk to me a little bit about Jotun, what type of business is it and then let's kind of start unpacking how they're leveraging IFS technology. >> Yeah, so Jotun is the seventh largest paints and coatings manufacturer in the world. And they've got obviously a home decoration part of the business, but they've got an industrial part of the business where one large part of the business is also a marines part. So they actually provide paints, coating, for all sorts of large ships and it's quite astonishing what you learn about that customer. I mean, we are now partnering with them for more than 20 years, so we are very intimate with that customer obviously. But when you see all of a sudden, three, four years ago, they started going onto a journey where they looked at apart from paint and coating, what actually can I provide to my customer in the marine industry to actually make their business more efficient, to actually make it easier for them to get a ship from A to B in an efficient way, in a timely way and so on. And they developed something called Hull Skating Solutions and those Hull Skating Solutions are integrating all sorts of weather data, all sorts of other data and provide them to the marine companies that actually then help them drive this... Well, actually get this ship in a more efficient way from A to B. And at the same time, also where there's predictions as to when you need to clean that ship, and they've got Hull Skating Solutions, which then actually clean the ship automatically as well. So it's quite an astonishing thing for a paints and coating manufacturer to then think about what do I need to know about my customer's business to provide that additional service to my customer? Great solution and great way of dealing with or delivering that great moment of service to their customers. >> Absolutely, the evolution of that business from paint manufacturing into the marine industry is not a stretch based on how you described it, but it's very innovative. How is IFS enabling them to do that and do it well? >> Well, one, they went on a modernization program for all their factories for all these kinds of things that they need to integrate then deliver to their customers. And we are in the central part in being that agile partner that actually delivers those technology solutions that enable them to, well, first of all think about that service, provide that service to their customers and make sure that they run a very efficient, very integrated version of IFS and can actually harmonize globally to make sure that wherever the customer is, they can deliver on that promise. >> Fantastic, let's talk a little bit about from your team's perspective, the go to market. 
We talked about the five verticals in which IFS specializes: energy, aerospace and defense, engineering, manufacturing, and there's one I'm missing. >> Utilities. >> Utilities, of course. >> Yeah. >> In terms of the domain expertise, are there vertical teams that are focused? I imagine that there are. Talk to me a little bit about that specialization from that lens. >> So obviously, I mean, there are so many dimensions. There are our sales teams, there are our pre-sales teams, there are our industry teams, which actually are working with the customers on receiving their feedback, on actually providing thought leadership, and then organizing the feedback loop into our development teams, who are then providing these solutions that hopefully our customers will cherish. So we are very specialized in that respect. We are driving the industry specialization. We've got a complete aerospace and defense business unit. We are, in the market unit, specializing in the industries where we work in the various different territories with just those industry teams. We've got specialization in the pre-sales teams. So we take that really deep and very seriously, to make sure that whenever we talk to a customer, we also have the understanding, and we have also got the curiosity to understand more of the customer's business, and that is something that is part of the IFS DNA. >> It's a differentiating part of IFS' DNA, not only having the domain expertise, and a lot of people talk about, well, we've got to meet the customer where they are, wherever they are digitally, wherever they are in business transformation, but you're actually talking the customer's language. >> Yeah. >> By industry, which I would imagine really helps to not only solidify that relationship, but you actually get to really do a double click and get much more tightly connected with the customers and the outcomes that they're wanting to achieve, so that those moments of service happen. >> Well, that's so true. And actually this is not just while we are selling to the customers, but it's actually throughout the whole life cycle of this application and the technology, in Jotun's case more than two decades. And we've got a lot of customers who have actually been with us that long, because we don't run away once we've implemented a solution, but we actually stay close to it, because first of all, we want to learn from our customers continuously. We want to actually give to our customers also what we are learning outside of the conversations we have with these customers. And we make sure that these customers continuously evolve how they think about their business, how they think about the application of our technology, and then in turn, we can actually develop technology again for their use cases. >> It's a flywheel. >> It's a complete flywheel, and that creates loyalty. >> Yeah. >> That actually creates the longstanding relationships we have with many, many of our customers, yeah. >> I was speaking with a number of your executives, Marne Martin was here and we were talking about brand recognition and the loyalty, but also that intimate customer knowledge that IFS really works hard to gain with its customers. 'Cause as consumers, we bleed into our business lives and we have very little tolerance, very little patience. I think that was one of the things in COVID that went away. People were just not tolerating this rapid change, and we had no choice. But I don't know that patience is going to come back at the level in which we experienced it before COVID.
So customers expect businesses and brands to know them and help anticipate what's next for me, how do I get there? And it sounds to me like IFS has really nailed that from a customer relationship perspective. >> As I said, I mean, it's really part of our DNA, and we try to preserve that culture while we're doubling our business, and hopefully doubling our business in the next three years again, because that is really the secret sauce to being that successful, not only with our existing customers, but also with the net new customers. And we are driving almost 50% of our revenue, which is very, very much a benchmark in the industry, from net new customers that we're winning, while we're actually keeping or staying close to our existing customers and trying to apply that knowledge to our net new customers. >> Yeah. >> But it's something that we absolutely have to preserve to be as successful as we've been in the past four years, also in the next four years. >> So coming off a great first half in the summer, when I teased Darren, "Any nuggets you want to say?" He said financials for Q3 are coming out in the next couple of weeks. And I said, I imagine that trajectory is up and to the right. >> Yeah. >> What are some of the things, Michael, that excite you for where you've seen this company go in your time there, and the rocket ship that it seems to be on today? >> Yeah, look, I mean, what's amazing to me is... And if I look back, I joined four and a half years ago, and only the first one and a half years were under normal circumstances. >> Right. >> The other three years were a major pandemic, now a major war and recession, and we've got all sorts of economic and macroeconomic headwinds. And what impresses me about the company, about our customers, about our employees, is the resilience we've got to just carry on with what we're doing. And I mean, I don't give too much away when I say we had a pretty good Q3 as well, and we are looking forward to a really good 2022 as a full year. And there are no excuses that the organization makes, it has just carried on. We are facing the economic headwinds, and we are going through that time hugely successfully. And I'm very optimistic about the year, and about 2023 as well. >> Fantastic, it's kind of hard to believe that calendar year 2023 is literally around the corner. But Michael, it's been great having you on theCUBE. Thank you for coming back, talking about what's going on at IFS from the overall COO's perspective, the customer synergies that IFS has, the work that you do to really get granular in those industries. It's impressive, and congratulations on the success. We'll have to have you back next year to talk about what else is new. >> Thank you very much, Lisa. >> All right, my pleasure. >> Thank you. >> For Michael Ouissi, I'm Lisa Martin. You're watching theCUBE's coverage live from Miami on the show floor of IFS Unleashed. We'll be back with our final guest in just a minute. (soft music)

Published Date : Oct 12 2022


Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally


 

hello I'm John Furrier with thecube and welcome to this special presentation of the cube and Horizon 3.ai they're announcing a global partner first approach expanding their successful pen testing product Net Zero you're going to hear from leading experts in their staff their CEO positioning themselves for a successful Channel distribution expansion internationally in Europe Middle East Africa and Asia Pacific in this Cube special presentation you'll hear about the expansion the expanse partner program giving Partners a unique opportunity to offer Net Zero to their customers Innovation and Pen testing is going International with Horizon 3.ai enjoy the program [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're here with Jennifer Lee head of Channel sales at Horizon 3.ai Jennifer welcome to the cube thanks for coming on great well thank you for having me so big news around Horizon 3.aa driving Channel first commitment you guys are expanding the channel partner program to include all kinds of new rewards incentives training programs help educate you know Partners really drive more recurring Revenue certainly cloud and Cloud scale has done that you got a great product that fits into that kind of Channel model great Services you can wrap around it good stuff so let's get into it what are you guys doing what are what are you guys doing with this news why is this so important yeah for sure so um yeah we like you said we recently expanded our Channel partner program um the driving force behind it was really just um to align our like you said our Channel first commitment um and creating awareness around the importance of our partner ecosystems um so that's it's really how we go to market is is through the channel and a great International Focus I've talked with the CEO so you know about the solution and he broke down all the action on why it's important on the product side but why now on the go to market change what's the what's the why behind this big this news on the channel yeah for sure so um we are doing this now really to align our business strategy which is built on the concept of enabling our partners to create a high value high margin business on top of our platform and so um we offer a solution called node zero it provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture um so we our company vision we have this tagline that states that our pen testing enables organizations to see themselves Through The Eyes of an attacker and um we use the like the attacker's perspective to identify exploitable weaknesses and vulnerabilities so we created this partner program from a perspective of the partner so the partner's perspective and we've built It Through The Eyes of our partner right so we're prioritizing really what the partner is looking for and uh will ensure like Mutual success for us yeah the partners always want to get in front of the customers and bring new stuff to them pen tests have traditionally been really expensive uh and so bringing it down in one to a service level that's one affordable and has flexibility to it allows a lot of capability so I imagine people getting excited by it so I have to ask you about the program What specifically are you guys doing can you share any details around what it means for the partners what they get what's in it for them can you just break down some of the mechanics and mechanisms or or details yeah yep um you know we're 
really looking to create business alignment um and like I said establish Mutual success with our partners so we've got two um two key elements that we were really focused on um that we bring to the partners so the opportunity the profit margin expansion is one of them and um a way for our partners to really differentiate themselves and stay relevant in the market so um we've restructured our discount model really um you know highlighting profitability and maximizing profitability and uh this includes our deal registration we've we've created deal registration program we've increased discount for partners who take part in our partner certification uh trainings and we've we have some other partner incentives uh that we we've created that that's going to help out there we've we put this all so we've recently Gone live with our partner portal um it's a Consolidated experience for our partners where they can access our our sales tools and we really view our partners as an extension of our sales and Technical teams and so we've extended all of our our training material that we use internally we've made it available to our partners through our partner portal um we've um I'm trying I'm thinking now back what else is in that partner portal here we've got our partner certification information so all the content that's delivered during that training can be found in the portal we've got deal registration uh um co-branded marketing materials pipeline management and so um this this portal gives our partners a One-Stop place to to go to find all that information um and then just really quickly on the second part of that that I mentioned is our technology really is um really disruptive to the market so you know like you said autonomous pen testing it's um it's still it's well it's still still relatively new topic uh for security practitioners and um it's proven to be really disruptive so um that on top of um just well recently we found an article that um that mentioned by markets and markets that reports that the global pen testing markets really expanding and so it's expected to grow to like 2.7 billion um by 2027. 
so the Market's there right the Market's expanding it's growing and so for our partners it's just really allows them to grow their revenue um across their customer base expand their customer base and offering this High profit margin while you know getting in early to Market on this just disruptive technology big Market a lot of opportunities to make some money people love to put more margin on on those deals especially when you can bring a great solution that everyone knows is hard to do so I think that's going to provide a lot of value is there is there a type of partner that you guys see emerging or you aligning with you mentioned the alignment with the partners I can see how that the training and the incentives are all there sounds like it's all going well is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this yeah absolutely so we work with all different kinds of Partners we work with our traditional resale Partners um we've worked we're working with systems integrators we have a really strong MSP mssp program um we've got Consulting partners and the Consulting Partners especially with the ones that offer pen test services so we they use us as a as we act as a force multiplier just really offering them profit margin expansion um opportunity there we've got some technology partner partners that we really work with for co-cell opportunities and then we've got our Cloud Partners um you'd mentioned that earlier and so we are in AWS Marketplace so our ccpo partners we're part of the ISP accelerate program um so we we're doing a lot there with our Cloud partners and um of course we uh we go to market with uh distribution Partners as well gotta love the opportunity for more margin expansion every kind of partner wants to put more gross profit on their deals is there a certification involved I have to ask is there like do you get do people get certified or is it just you get trained is it self-paced training is it in person how are you guys doing the whole training certification thing because is that is that a requirement yeah absolutely so we do offer a certification program and um it's been very popular this includes a a seller's portion and an operator portion and and so um this is at no cost to our partners and um we operate both virtually it's it's law it's virtually but live it's not self-paced and we also have in person um you know sessions as well and we also can customize these to any partners that have a large group of people and we can just we can do one in person or virtual just specifically for that partner well any kind of incentive opportunities and marketing opportunities everyone loves to get the uh get the deals just kind of rolling in leads from what we can see if our early reporting this looks like a hot product price wise service level wise what incentive do you guys thinking about and and Joint marketing you mentioned co-sell earlier in pipeline so I was kind of kind of honing in on that piece sure and yes and then to follow along with our partner certification program we do incentivize our partners there if they have a certain number certified their discount increases so that's part of it we have our deal registration program that increases discount as well um and then we do have some um some partner incentives that are wrapped around meeting setting and um moving moving opportunities along to uh proof of value gotta love the education driving value I have to ask you so you've been around the industry 
you've seen the channel relationships out there, and you're seeing companies old school and new school. Horizon3.ai is kind of that new school: very cloud-specific, a lot of leverage with, as we mentioned, AWS and all the clouds. Why is the company so hot right now? Why did you join them, and why are people attracted to this company? What's the attraction, what's the vibe, what did you see in this company? >> Well, like I said, it's very disruptive and it's really in high demand right now, just because it's new to market and a newer technology. We can collaborate with a manual pen tester, we can allow our customers to run their pen tests with no specialty teams, and, like I said, we can allow our partners to actually build profitable businesses: they can use our product to increase their services revenue and build their business model around our services. >> What's interesting about the pen test thing is that it's very expensive and time-consuming, and the people who do them are very talented people who could be working on bigger things for customers, so bringing this into the channel helps them. If you look at the price delta between a pen test and what you're offering, that's a huge margin gap between the street price of, say, today's pen test and what you offer. When you show people that, do they say it's too good to be true? What are some of the things people say when you show them that? Do they scratch their heads, like, come on, what's the catch here? >> Right, so the cost savings is huge for us, and then also, like I said, working as a force multiplier with a pen testing company that offers the services: they can do their annual manual pen tests that may be required around compliance regulations, and then we can act as the continuous verification of their security, which they can run weekly. So it's just an addition to what they're offering already, and an expansion. >> So, Jennifer, thanks for coming on theCUBE, really appreciate you coming on and sharing the insights on the channel. What's next, what can we expect from the channel group, what are you thinking? >> Right, so we're really looking to expand our channel footprint, very strategically, and we've got some big plans for Horizon3.ai. >> Awesome, well, thanks for coming on, really appreciate it. You're watching theCUBE, the leader in high tech enterprise coverage. (music) >> Hello, and welcome to theCUBE's special presentation with Horizon3.ai, with Rainer Richter, vice president of EMEA (Europe, Middle East and Africa) and Asia Pacific, APAC, for Horizon3. Welcome to this special CUBE presentation, thanks for joining us. >> Thank you for the invitation. >> So, Horizon3.ai driving global expansion, big international news, with a partner-first approach. You're expanding internationally, so let's get into it. You're driving this newly expanded partner program to new heights; tell us about it. What are you seeing in the momentum, why the expansion, what's all the news about? >> Well, I would say internationally we have a similar situation to the US: there is a global shortage of well-educated
penetration testers on the one hand; on the other side, we have a rising demand for network and infrastructure security, and with our approach of autonomous penetration testing I believe we are totally on top of the game. Especially as we are also now starting with an international instance: that means, for example, if a customer in Europe is using our service node zero, he will be connected to a node zero instance which is located inside the European Union, and therefore he doesn't have to worry about the conflict between the European GDPR regulations and the US CLOUD Act. So I would say there we have a really good package for our partners, so that they can provide differentiators to their customers. >> We've had great conversations here on theCUBE with the CEO and founder of the company around the leverage of the cloud and how successful that's been for the company. Honestly, I can connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market, because you've got great cloud scale with the security product, and you're having success with great leverage there. What's the momentum on the channel partner program internationally? Why is it so important to you: is it just the regional segmentation, is it the economics, why the momentum? >> Well, there are multiple issues. First of all, there is a rising demand for penetration testing, and don't forget that internationally we have a much higher number, or percentage, of SMB and mid-market customers. Most of these customers typically didn't even have a pen test done once a year; for them, pen testing was just too expensive. Now, with our offering together with our partners, we can provide different ways customers can get autonomous pen testing done more than once a year, at even lower cost than they had with a traditional manual pen test. That is because we have our Consulting Plus package, which is typically for pen testers: they can go out and do their pen tests much faster at many customers, one after the other, so they can do more pen tests at a lower, more attractive price. On the other side there are others, or even the same ones, who are providing node zero as an MSSP service, so they can go after SMB customers, saying, okay, you only have a couple of hundred IP addresses, no worries, we have the perfect package for you. And then you have, let's say, the mid-market, the companies with thousands and more employees, and they might even have an annual subscription, very traditional. But for all of them it's the same: the customer or the service provider doesn't need a piece of hardware, they only need to install a small Docker container, and that's it. That makes it so smooth to go in and say, okay, Mr. Customer, we just put this virtual attacker into your network, and that's it; all the rest is done, and within three clicks they can act like a pen tester with 20 years of experience. >> That's going to be very channel-friendly and partner-friendly, I can almost imagine. So I have to ask you, and thank you for calling out that breakdown and segmentation, that was very helpful for me to understand: I want to follow up, if you don't mind, on what type of partners you are seeing the most traction with, and why. >> Well, I would say at the beginning you typically have the
innovators, the early adopters, typically boutique-sized partners. They start because they are always looking for innovation, and those are the ones who start in the beginning. So we have a wide range of partners, most of them even managed by the owner of the company, so they immediately understand, okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other ones. Or we have those who offer pen test services but did not have their own pen testers, so they had to go out on the open market and source pen testing experts to get the pen test done at a particular customer. Now, with node zero, they're totally independent: they can go out and say, okay, Mr. Customer, here's the service, that's it, we turn it on, and within an hour you're up and running. >> Totally. And those pen tests are usually expensive and hard to do; now it's right in line with the sales delivery. Pretty interesting for a partner. >> Absolutely. But on the other hand, we are not killing the pen testers' business. What we're providing with node zero is, I would call it, the foundational work: having ongoing penetration testing of the infrastructure and the operating systems. The pen testers themselves can concentrate in the future on things like application pen testing, for example, which are services we're not touching. So we're not killing the pen tester market; we're just taking over the ongoing, let's say, foundation work, call it that way. >> Yeah, that was one of my questions I was going to ask. There's a lot of interest in this autonomous pen testing, one, because it's expensive to do, and because those skills are required, in demand, and expensive. So you kind of cover the entry level and the blockers that are in there. I've had people say to me that the pen test becomes a blocker for getting things done, so there's been a lot of interest in autonomous pen testing and in organizations having that posture. And it's an overseas issue too, because now you have that ongoing thing. Can you explain that particular benefit for an organization of continuously verifying its posture? >> Yep, certainly. Typically you have to do your patches, you have to bring in new versions of operating systems, of different services, of some components, and they are always bringing new vulnerabilities. The difference here is that with node zero we are telling the customer, or the partner, which are the executable vulnerabilities. Previously they might have had a vulnerability scanner, and this vulnerability scanner brought up hundreds or even thousands of CVEs, but didn't say anything about which of them are really executable. Then you need an expert digging into one CVE after the other, finding out: is it really executable, yes or no? That is where you need highly paid experts, of which we have a shortage. With node zero, now we can say, okay, we tell you exactly which ones you should work on, because those are the ones which are executable, and we rank them according to the risk level and how easily they can be used. And then the good thing is, in contrast to the traditional penetration test, they don't have to wait a year for the next pen test to find out if the fix was effective; they just run the next scan and see, yes, the vulnerability is closed, it's gone.
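To make the triage he is describing concrete, here is a minimal Python sketch, not Horizon3 code, of cutting a scanner's flood of findings down to the ones that were actually proven exploitable and then ranking them by how easily they can be used. The field names, the ordering rule, and the sample data are illustrative assumptions.

```python
# Minimal sketch (illustrative only): keep the findings an attack actually
# proved executable, then rank them by how easily they can be used.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float               # scanner severity, 0-10
    proven_exploitable: bool  # did an automated attack actually succeed against it?
    ease: float               # 0 (hard) .. 1 (trivial) to use in an attack path

def triage(findings):
    """Drop anything not proven exploitable, then sort by ease of use and severity."""
    executable = [f for f in findings if f.proven_exploitable]
    return sorted(executable, key=lambda f: (f.ease, f.cvss), reverse=True)

if __name__ == "__main__":
    raw = [
        Finding("web01",   "CVE-0000-0001", 9.8, False, 0.1),  # scary on paper, never reached
        Finding("print01", "CVE-0000-0002", 3.1, True,  0.9),  # "low", but trivially usable
        Finding("db02",    "CVE-0000-0003", 7.5, True,  0.4),
    ]
    for f in triage(raw):
        print(f"{f.host:8} {f.cve:14} cvss={f.cvss:<4} ease={f.ease}")
```

The point of the ordering is exactly the one made in the conversation: a "low" finding that is trivially usable can matter more than a high-severity finding that nothing can actually reach.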
>> The time is really valuable, and if you're doing any DevOps or cloud native work, you're always pushing new things, so ongoing pen testing is actually a benefit just in general, as a kind of hygiene. Really interesting solution, and bringing that global scale is going to be a new coverage area for us for sure. I have to ask you, if you don't mind answering: what particular region are you focused on, or plan to target, for this next phase of growth? >> At this moment we are concentrating on the countries inside the European Union plus the United Kingdom. Logically, I'm based in the Frankfurt area, which means we cover more or less the countries just around: the DACH region (Germany, Switzerland, Austria) plus the Netherlands. But we also already have partners in the Nordics, like in Finland and Sweden, and we have partners already in the UK, and it's rapidly growing. For example, we are now starting some activities in Singapore and also in the Middle East area. Very importantly, depending on the way business is done, we currently try to concentrate on those countries where we can have at least English as an accepted business language. >> Great. Is there any particular region you're having the most success with right now? It sounds like the European Union is kind of the first wave; what's the mix? >> Yes, that's definitely the first wave, and now we're also getting the European instance up and running. It's clearly our commitment to the market, saying, okay, we know there are certain dedicated requirements and we take care of them. We're just launching it; we're building up the instance in the AWS service center here in Frankfurt, also with some dedicated hardware in a data center in Frankfurt, where we have, with DE-CIX by the way, the highest internet interconnection bandwidth on the planet. So we have very short latency to wherever you are on the globe. >> That's a great call-out, and a great benefit too; I was going to ask about that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific? >> I would say the benefit for them is clearly that they can talk with customers and offer them penetration testing which they didn't even think about before, because penetration testing in a traditional way was simply too expensive for them, too complex, the preparation time was too long, and they didn't even have the capacity to support an external pen tester. Now with this service you can go in and say, Mr. Customer, we can do a test with you in a couple of minutes: once we have installed the Docker container, within 10 minutes we have the pen test started, that's it, and then we just wait. And I would say we are seeing so many aha moments now, because on the partner side, when they see node zero working for the first time, it's like, wow, that is great. Then they go out to customers and show it, typically at the beginning mostly to the friendly customers, and it's, wow, that's great, I need that. I would say the feedback from the partners is that this is a service where I do not have to evangelize the customer: everybody understands penetration testing, I don't have to describe what it is, they understand it.
The customer understands immediately: yes, penetration testing, good, I know I should do it, but it's too complex, too expensive. Now, with node zero, for example as an MSSP service provided by one of our partners, it's getting easy. >> Yeah, it's a great benefit there. I've got to say, I'm a huge fan of what you're doing. I like this continuous automation; that's a major benefit to anyone doing DevOps or any kind of modern application development. This is just a godsend for them, this is really good. And like you said, the pen testers that are doing it were kind of coming down from their expertise to do things that should have been automated; now they get to focus on the bigger-ticket items. That's a really big point. >> So we free them: we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically the application testing, which is currently far away from being automated. >> Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation; I really appreciate it. >> Thank you, you're welcome. >> Okay, this is theCUBE special presentation: check out pen test automation, international expansion, Horizon3.ai, a really innovative solution. In our next segment, Chris Hill, sector head for strategic accounts, will discuss the power of Horizon3.ai and Splunk in action. You're watching theCUBE, the leader in high tech enterprise coverage. (music) >> Welcome back, everyone, to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're with Chris Hill, sector head for strategic accounts and federal at Horizon3.ai, a great, innovative company. Chris, great to see you, thanks for coming on theCUBE. >> Yeah, great to meet you, John; long-time listener, first-time caller, so excited to be here with you guys. >> We were talking before camera: you were at Splunk back in 2013, and I think 2012 was our first Splunk .conf. Talk about being in the right place at the right time; now we're at another inflection point, and Splunk continues to be relevant, continuing to have that data driving security and that interplay. And your CEO, former CTO of Splunk as well, now at Horizon, who's been on before; you have a really innovative product. But don't wait for a breach to find out if you're logging the right data: this is the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us, what are some of the challenges you see where this is relevant for Splunk and Horizon3.ai as you expand node zero internationally? >> Yeah, well, my role within Splunk was working with our most strategic accounts, and I look back to 2013 and think about the sales process, like working with our smaller customers. It was still very siloed back then: I was selling to an IT team that was using this for IT operations, and we would generally even say, yeah, although we do security, we weren't really designed for it, we're a log management tool. I'm sure you remember back then, John, we were sort of stepping into the security space, and in the public sector domain that I was in, security was 70% of what we did.
When I look back at the transformation I was witnessing in that digital transformation, when I look at 2019 to today, you look at how the IT team and the security teams have been forced to break down those barriers. They used to be siloed away and would not communicate: the security guys would be like, oh, this is my box, IT, you're not allowed in. Today you can't get away with that. And I think the value that we bring, and of course Splunk has been a huge leader in that space and continues to innovate across the board, but what we're seeing in the space, and I was talking with Patrick Coughlin, the SVP of security markets, about this, is that what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data. Splunk itself, as you know, is an ingest engine, right? The great reason people bought it was you could build these really fast dashboards and grab intelligence out of it, but without data it doesn't do anything. So how do you bring more data in, and most importantly, from a customer perspective, how do you bring the right data in? If you think about what node zero and what we're doing at Horizon3 is: sure, we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. So this whole thought of, oh crud, my customers, oh yeah, we've got a pen test coming up, it's going to be six weeks, everyone's going to sit on their hands, call me back in two months, Chris, we'll talk to you then: that's not a real efficient way to test your environment. And shoot, we saw that with Uber this week, right? That's a case where we could have helped. >> Could you explain the Uber thing? Because it was a contractor. Just give a quick highlight of what happened, so you can connect the dots. >> Yeah, no problem. It was one of those games where they try to test an environment, and what the attacker did was keep calling them about MFA, saying, I need to reset my password, I need my password set right, and eventually the customer service guy said, okay, I'm resetting it. Once he had it reset and bypassed the multi-factor authentication, he was then able to get in and gain access to, I think not the domain, but part of that network. He then pivoted over to what I would assume is a VMware or some virtual machine that had notes with all of the credentials for logging into various domains, and so within minutes they had access. And that's the sort of stuff that we do. You think about the cacophony of tools that are out there in a zero-trust type architecture: I'm going to get a Zscaler, I'm going to have an Okta, I have a Splunk, and then a SOAR system, and, I don't mean to name names, we have CrowdStrike or SentinelOne in there. It's a cacophony of things that don't work together; they weren't designed to work together. And we have seen so many times in our business, through our customer support and just working with customers when we do their pen tests, that there will be 5,000 servers out there, three are misconfigured, and those three misconfigurations will create the open door. Because remember, the hacker only needs to be right once; the defender needs to be right all the time, and that's the challenge.
And that's what I'm really passionate about: what we're doing here at Horizon3. I see this digital transformation, migration, and security going on, and we're at the tip of the spear. It's why I joined Snehal in coming on this journey, and I'm just super excited about where the path is going and super excited about the relationship with Splunk. I'll get into more details on some of the specifics of that. >> Well, you're nailing it. We've been doing a lot of things on Supercloud and this next-gen environment. You're really seeing DevOps, and obviously DevSecOps has already won; the IT role has moved to the developer, and shift left is an indicator of that, one of many examples: higher-velocity code, software supply chain. You hear these things, and that means it's now in the developer's hands, replaced by the new ops, the DataOps teams, and security, where there's a lot of horizontal thinking. To your point about access, there's no more perimeter, and you're a hundred percent right: they only need to be right one time to get in there, and once you're in, you can hang out, move around, move laterally. Big problem. Okay, so we get that. Now, the challenge for these teams, as they're transitioning organizationally, is how they figure out what to do. This is the next step; they already have Splunk, so now they're kind of in transition while protecting for a hundred percent ratio of success. So how would you look at that and describe the challenges? What do they do, what are the teams facing with their data, and what action do they take next? >> So let's use some vernacular that folks will know. If I think about DevSecOps, we both know what that means: I'm going to build security into the app. But it also normally speaks to SecDevOps: how am I building security around the perimeter of what's going on inside my ecosystem, and what is it doing? So if you think about what we're able to do with somebody like Splunk: we can pen test the entire environment from soup to nuts. I'm going to test the endpoints through to everything else, I'm going to look for misconfigurations, I'm going to look for exposed credentials, I'm going to look for anything I can in the environment, and again, I'm going to do it at light speed. And what we're doing for that SecDevOps space is asking: did you detect that we were in your environment? Did we alert Splunk or the SIEM that there's someone in the environment laterally moving around? More importantly, did they log us in their environment, did that log trigger an alert, did they alert on us? And then finally, most importantly for every CISO out there, did they stop us? That's how we do this, and speaking with Snehal before, we've come up with what we call find, fix, verify. What we do is go in and act as the attacker, in a production environment. We go in un-credentialed, with no agents, but we use an assumed breach model, which means we're going to put a Docker container in your environment, and then we're going to fingerprint the environment. We're going to go out and do an asset survey, and that's not something that Splunk does super well: can Splunk see all the assets, do the same assets marry up? We're going to log all of that data and then load it into Splunk or the SIEM or the other logging tools, just to have it in the enterprise; that's an immediate value-add that they've got.
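As one illustration of the "log all that data and load it into Splunk" step, here is a minimal Python sketch that pushes findings into Splunk through its HTTP Event Collector. The HEC URL, token, index, sourcetype, and the shape of each finding are placeholders and assumptions, not Horizon3's actual integration.

```python
# Minimal sketch (assumptions noted): forward pen test findings to Splunk via
# the HTTP Event Collector (HEC). The URL, token, index, sourcetype, and the
# structure of each finding are placeholders, not Horizon3's real API.
import json
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def send_findings(findings):
    headers = {"Authorization": f"Splunk {HEC_TOKEN}"}
    for finding in findings:
        payload = {
            "sourcetype": "pentest:finding",  # hypothetical sourcetype
            "index": "security",              # hypothetical index
            "event": finding,
        }
        resp = requests.post(SPLUNK_HEC_URL, headers=headers,
                             data=json.dumps(payload), timeout=10)
        resp.raise_for_status()

if __name__ == "__main__":
    send_findings([
        {"host": "10.0.0.12", "technique": "credential_reuse", "severity": "high"},
        {"host": "10.0.0.40", "technique": "llmnr_poisoning",  "severity": "medium"},
    ])
```

Once events like these are indexed, the detect, log, alert, and stop questions above reduce to ordinary searches and alerts against that data.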
And then we've got the fix. Once we've completed our pen test, we're going to generate a report; we can talk about these a little later, but the reports will show an executive summary, the assets that we found (which is your asset discovery aspect), and a fix report. The fix report, I think, is probably the most important one: it will go down and identify what we did, how we did it, and then how to fix it. From that, the pen tester or the organization should fix those issues, then go back and run another test, and then validate, like a change-detection environment, to see: hey, did those fixes take place? And, you know, Snehal, when he was the CTO of JSOC, shared with me a number of times that there would be 15 more items on next week's punch sheet that they didn't know about, and it has to do with how they were prioritizing the CVEs and whatnot, because they would take all CVEs as critical or non-critical. We're able to create context in that environment that feeds better information into Splunk, and that brings up the efficiency for Splunk specifically, and for the teams out there. >> By the way, the burnout thing is real. This whole "I just finished my list and I've got 15 more": the list just keeps growing. How does node zero specifically help Splunk teams be more efficient? That's the question I want to get at, because this seems like a very scalable way for Splunk customers and service teams to be more efficient. >> So today, in our early interactions with customers, we've seen five things, and I'll start with identifying the blind spots. That's kind of what I just talked about with you: did we detect, did we log, did we alert, did they stop node zero? I'd put that in more layman's, third-grade terms, if I was going to beat a fifth grader at this game: we can be the sparring partner for a Splunk Enterprise customer, a Splunk Essentials customer, someone using Splunk SOAR, or even just an enterprise Splunk customer that may be a small shop with three people and just wants to know, where am I exposed? By creating and generating these reports, and then having the API that actually generates the dashboard, they can take all of these events that we've logged and log them in. Where that then comes in is number two: how do we prioritize those logs? How do we create visibility into the logs that have critical impacts? Again, as I mentioned earlier, not all CVEs are high impact, and also not all are low, right? If you daisy-chain a bunch of low CVEs together, boom, I've got a mission-critical issue that needs to be fixed now, such as a credential moving to an NT box that's got a text file with a bunch of passwords on it. That would be very bad. And then third would be verifying that you have all of the hosts. One of the things that Splunk's not particularly great at, and they'll admit it themselves, is that they don't do asset discovery. So, dude, what assets do we see, and what are they logging from? And then, for every event that they're able to identify, one of the cool things we can do is actually create this low-code, no-code environment, so that
Splunk customers can use Splunk SOAR to actually triage events and prioritize where they're being routed within it, to optimize the SOC team's time to triage any given event, obviously reducing MTTR. And then finally, I think one of the neatest things you'll see us develop is our ability to build glass tables. Behind me you'll see one of our triage events and how we build a Lockheed Martin kill chain on it with a glass table, which is very familiar to the community. In the not-too-distant future we're going to have the ability to let people search and observe on those IOCs; and if people aren't familiar with the term, an IOC is an indicator of compromise. That's a vector we want to drill into, and of course, who's better at drilling into the data than Splunk? >> Yeah, this is an awesome synergy there. I can see a Splunk customer going, man, this just gives me so much more capability, actionability, and also real understanding. And I think this is what I want to dig into, if you don't mind: understanding that critical impact. You've got the data, data ingest, and data's data, but the question is what not to log, where are things misconfigured. These are critical questions, so can you talk about what it means to understand critical impact? >> Yeah. Going back to the things I just spoke about: a lot of those CVEs where you'll see low, low, low, and then you daisy-chain them together and suddenly it's like, oh, this is high now. But then there's the other impact: if you're a Splunk customer, and I had several of them, I had one customer with terabytes of McAfee data being brought in, and it was like, all right, there's a lot of other data that you probably also want to bring in, but they could only afford, or wanted, to do certain data sets, and they didn't know how to prioritize or filter those data sets. We provide that opportunity to say, hey, these are the critical ones to bring in, and there are also ones that you don't necessarily need to bring in, because a low CVE in this case really does mean a low CVE, like an iLO server, or the print server where your admin credentials are sitting on a printer. There will be credentials on that; that's something a hacker might go in to look at. So although the CVE on it is low, if you daisy-chain it with somebody that's able to get into that, you might say, ah, that's high, and we would then potentially rank it, using our AI logic, to say that's a moderate: put it on the scale and prioritize it, versus all of these scanners that are just going to give you a bunch of CVEs and good luck. >> And translating that, if I can, and tell me if I'm wrong, that kind of speaks to that whole lateral movement challenge, right? The print server is a great example: looks stupid, low end, who's going to want to deal with the print server? Oh, but it's connected into a critical system, there's a path. Is that kind of what you're getting at? >> Yeah. I use "daisy chain," I think that's from the community I came from, but it's just lateral movement. That's exactly what they're doing, and those low-level, low-critical lateral movements are where the hackers are getting in. That's the beautiful thing about the Uber example: who would have thought? I've got my multi-factor authentication going, and a human made a mistake. We can't expect humans not to make mistakes; we're
fallible, right? The reality is, once they were in the environment, they could have protected themselves by running enough pen tests to know that they had certain exposed credentials. That would have stopped the breach, and they had not done that in their environment, and I'm not poking at them. >> But it's an interesting trend, though. It's obvious that sometimes those low-end items are also not protected well, so they're easy to get at from a hacker standpoint, but also the people in charge of them can be phished easily, or spear-phished, because they're not paying attention, because they don't have to; no one ever told them, hey, be careful. >> For the community that I came from, John, that's exactly how they'd do it: they would meet you at an international event, introduce themselves as a graduate student (these are nation-state actors), and ask, would you mind reviewing my thesis on such and such? I was at Adobe at the time I was working on this. You'd open the PDF, and whatever that document carried launches. I don't know if you remember, back in the 2008 time frame there were a lot of issues around IP being stolen from the United States by nation states, and that's exactly how they did it. Or LinkedIn: hey, we want to hire you, double the salary. Oh, I'm going to click on that for sure. The one thing I would say to you is, when we look at it, and I think we did 10,000 pen tests last year, it's probably over that now, we have this top 10 list of ways we find people coming into the environment, and the funniest thing is that only one of them is a CVE-related vulnerability. Something like two percent of the attacks are occurring through the CVEs, yet there's all that attention spent on that, and very little attention spent on this pen testing side, which is this continuous threat monitoring space and this vulnerability space where I think we play such an important role. I'm so excited to be part of the tip of the spear on this one. >> Yeah, I'm old enough to know the movie Sneakers, which I loved: professional hackers testing, always testing the environment. I love this. I've got to ask you as we wrap up here, Chris, if you don't mind: the benefits to professional services from this alliance. Big news: Splunk and you guys work well together, we see that clearly. What other benefits do professional services teams see from the Splunk and Horizon3.ai alliance? >> I think, for both of our partners, as we bring these guys together, and many of them already are the same partner, first off the licensing model is probably one of the key areas where we really excel. If you're an end user, you can buy for the enterprise by the number of IP addresses you're using. If you're a partner working with this, there are ways we'll license to MSPs, and there's what that business model for MSPs looks like. But the unique thing we do here is the C-plus license: the Consulting Plus license allows somebody from a small-to-mid-sized firm up to some very large, Fortune 100 consulting firms to buy into a license where they can have unlimited access to as many IPs as they want, but
you can only run one test at a time. And as you can imagine, when we're going in and hacking passwords, checking hashes, and decrypting hashes, that can take a while. But for the right customer it's a perfect tool, and I'm so excited about our ability to go to market with our partners, so that we understand how not just to sell to them, or just to sell through them, but how to sell with them as a good vendor partner. I think that's one thing we've done a really good job of building as we bring it into the market. >> Yeah, and Splunk has had great success with how they've enabled partners and professional services; the services that layer on top of Splunk are multi-fold, tons of great benefits. So you vector right into that and ride that wave without friction. >> And the cool thing is that one of our reports, which can be totally customized with someone else's logo, is something we're going to generate for them. I used to work in another organization, not Splunk, where we did pen testing for customers, and my pen testers would come on site, do the engagement, and leave. Then someone would say, oh shoot, we got another sector that was breached, and they'd call you back four weeks later. By August our entire pen testing team would be sold out, and it would be, well, maybe in March, and they're like, no, no, I've got a breach now. And then when they do go in, they do the pen test, they hand over a PDF, they pat them on the back and say, there's where your problems are, you need to fix them. The reality is that what we're going to generate, completely autonomously with no human interaction, is all the permutations of anything we found and the fix for those permutations. Then, once you've fixed everything, you just go back and run another pen test. For what people pay for one pen test, they can have a tool that does it every Patch Tuesday, and then on Wednesday you triage throughout the week: green, yellow, red. I want to see the colors; show me green, green is good, right, not red. >> And what CIO doesn't want that dashboard? >> It's exactly that, and we can help bring it. I'm really excited about helping drive this with the Splunk team, because they get it; they understand that it's the green-yellow-red dashboard, and how do we help them find more green so that the other guys are in the red. >> Yeah, and get in the data, do the right thing, be efficient with how you use the data, know what to look at. So many things to pay attention to: the combination of both, and then the go-to-market strategy. Real brilliant. Congratulations, Chris, thanks for coming on and sharing this news, with the detail around Splunk in action and around the alliance. Thanks for sharing. >> John, my pleasure. Thanks, and I look forward to seeing you soon. >> All right, great, we'll follow up and do another segment on DevOps and IT and security teams as the new ops, and Supercloud, and a bunch of other stuff. So thanks for coming on. In our next segment, the CEO of Horizon3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high tech enterprise coverage. (music) >> Yeah, the partner program for us has been fantastic. I think prior to that, most organizations, most partners, most MSSPs might not necessarily have a bench at all
for penetration testing. Maybe they subcontract this work out, or maybe they do it themselves, but trying to staff that kind of position can be incredibly difficult. For us, this was a differentiator: a new partnership that allowed us not only to perform services for our customers, but to provide a product by which they can do it themselves. So we work with our customers in a variety of ways. Some of them want more routine testing and perform this themselves, but we're also a certified service provider of Horizon3, able to perform penetration tests, help review the data, provide color, and provide analysis for our customers in a broader sense: not just the black-and-white elements of what's critical, what's high, what's medium, what's low, and what you need to fix, but whether there are systemic issues. This has allowed us to onboard new customers, and this has allowed us to migrate some penetration testing services to us from competitors in the marketplace. But ultimately this is occurring because the product and the outcome are special, unique, and effective. Our customers like what they're seeing, they like the routineness of it, and many of them, again, like doing this themselves, being able to pen test parts of their networks themselves. And there are new use cases: I'm a large organization, I have eight to ten acquisitions per year; wouldn't it be great to have a tool to perform a penetration test, both internal and external, of that acquisition before we integrate the two companies and maybe bring on some risk? It's a very effective partnership, one that has really taken our engineers and our account executives by storm. This is a partnership that's been very valuable to us. (music) >> A key part of the value and business model at Horizon3 is enabling partners to leverage node zero to make more revenue for themselves. Our goal is that sixty percent of our revenue this year will be originated by partners, and that 95% of our revenue next year will be originated by partners, so a key to that strategy is making us an integral part of your business models as a partner. A key quote from one of our partners is that we enable every one of their business units to generate revenue. So let's talk about that in a little more detail. First, if you have a pen test consulting business, take Deloitte as an example: what was six weeks of human labor per pen test has been cut down to four days of labor, using node zero to conduct reconnaissance, find all the juicy, interesting areas of the enterprise that are exploitable, and assess the entire organization, with all of those details then served up to the human to look at, understand, and determine where to probe deeper. So what you see in that pen test consulting business is that node zero becomes a force multiplier: those consulting teams are able to cover way more accounts, and way more IPs within those accounts, with the same or fewer consultants, and that directly leads to profit margin expansion for the pen testing business itself, because node zero is a force multiplier.
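A back-of-the-envelope check on that force-multiplier claim, six weeks of human labor per pen test cut to four days, might look like the sketch below; the team size and billable days per consultant are assumptions for illustration, not figures from the presentation.

```python
# Rough arithmetic on the "six weeks down to four days" claim. Team size and
# billable days per consultant are assumed values for illustration.
LABOR_BEFORE_DAYS = 6 * 5            # six weeks of human labor per engagement
LABOR_AFTER_DAYS = 4                 # four days of human labor per engagement
BILLABLE_DAYS_PER_CONSULTANT = 220   # assumption
TEAM_SIZE = 10                       # assumption

team_days = TEAM_SIZE * BILLABLE_DAYS_PER_CONSULTANT
before = team_days // LABOR_BEFORE_DAYS
after = team_days // LABOR_AFTER_DAYS

print(f"Engagements per year, before: {before}")   # 73 with these assumptions
print(f"Engagements per year, after:  {after}")    # 550 with these assumptions
print(f"Coverage multiplier: {after / before:.1f}x")
```

Under these assumed numbers the same team covers roughly 7 to 8 times as many engagements, which is the mechanism behind the profit margin expansion described above.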
The second business model here is if you're an MSSP. As an MSSP you're already making money providing defensive cyber security operations for a large volume of customers, so what they'll do is license node zero and use us as an upsell to their MSSP business, to start delivering either continuous red teaming, continuous verification, or purple teaming as a service. In that particular business model they've got an additional line of revenue, where they can increase the spend of their existing customers by bolting on node zero as a purple-team-as-a-service offering. The third business model, or customer type, is if you're an IT services provider. As an IT services provider you make money installing and configuring security products like Splunk or CrowdStrike or Humio, you make money reselling those products, and you make money generating follow-on services to continue to harden your customer environments. So those IT service providers will use us to verify that they've installed Splunk correctly, prove to their customer that Splunk, or CrowdStrike, was installed correctly using our results, and then use our results to drive follow-on services and revenue. And then finally we've got the value-added reseller, which is just a straight-up reseller. Because of how fast our sales cycles are, these VARs are typically able to go from cold email to deal close in six to eight weeks. At Horizon3, a single sales engineer is able to run 30 to 50 POCs concurrently, because our POCs are very lightweight and don't require any on-prem customization or heavy pre-sales or post-sales activity. As a result, we're able to have a small number of sellers driving a lot of revenue and volume for us, and the same thing applies to VARs: there isn't a lot of effort to sell the product or prove its value, so VARs are able to sell a lot more Horizon3 node zero product without having to build up a huge specialist sales organization. So what I'm going to do is talk through scenario three here, the IT service provider, and just how powerful node zero can be in driving additional revenue. Think of it this way: for every one dollar of node zero license purchased by the IT service provider to do their business, it'll generate ten dollars of additional revenue for that partner. In this example, Kidney Group uses node zero to verify that they have installed and deployed Splunk correctly. Kidney Group is a Splunk partner: they sell IT services to install, configure, deploy, and maintain Splunk, and as they deploy Splunk they're going to use node zero to attack the environment and make sure that the right logs, alerts, and monitoring are being handled within the Splunk deployment. So it's a way of doing QA, or verifying that Splunk has been configured correctly, and that's going to be used internally by Kidney Group to prove the quality of the services they've just delivered. Then they're going to show and leave behind that node zero report with their client, and that creates a resell opportunity for Kidney Group to resell node zero to their client, because their client is seeing the reports and the results and saying, wow, this is pretty amazing. Those reports can be co-branded, where it's a pen testing report branded with Kidney Group but it says "powered by Horizon3" under it. From there, Kidney Group is able to take the fix actions report that's automatically generated with every pen test through node zero and use it as the starting point for a statement of work to sell follow-on services to fix all of the problems that node zero identified: fixing LLMNR misconfigurations, fixing or patching VMware, updating credential policies, and so on. So what happens is node zero has found a bunch of problems that the client
often lacks the capacity to fix, and so Kidney Group can use that lack of capacity on the client's side as a sales opportunity for follow-on services. And finally, based on the findings from node zero, Kidney Group can look at that report and say to the customer: customer, if you bought CrowdStrike, you'd be able to prevent node zero from attacking and succeeding the way it did; or if you bought Humio, or Palo Alto Networks, or some privileged access management solution, because of what node zero was able to do with credential harvesting and attacks. As a result, Kidney Group is able to resell other security products within their portfolio (CrowdStrike Falcon, Humio, Palo Alto Networks, Demisto, Phantom, and so on) based on the gaps that were identified by node zero in that pen test. And what that creates is another feedback loop, where Kidney Group will then go use node zero to verify that the CrowdStrike product has actually been installed and configured correctly, and this becomes the cycle: using node zero to verify a deployment, using that verification to drive a bunch of follow-on services and resell opportunities, which then further drives more usage of the product. Now, the way that we license is a usage-based licensing model, so that the partner grows their node zero Consulting Plus license as they grow their business. For example, if you're Kidney Group, then in week one you're going to use node zero to verify your Splunk install; in week two, if you have a pen testing business, you're going to go use node zero as a force multiplier for your pen testing client opportunity; and if you have an MSSP business, then in week three you're going to use node zero to go execute a purple team MSSP offering for your clients. Not necessarily Kidney Group, but if you're a Deloitte or AT&T, these larger companies with multiple lines of business, or if you're Optiv, for instance, all you have to do is buy one Consulting Plus license and you're going to be able to run as many pen tests as you want, sequentially. So you can buy a single license and use that one license to meet your week-one client commitments, and then meet your week two, and then your week three. As you grow your business, you start to run multiple pen tests concurrently: if in week one you've got to verify a Splunk install, run a pen test, and deliver a purple team opportunity, you simply expand the number of Consulting Plus licenses from one license to three licenses. So as you systematically grow your business, you're able to grow your node zero capacity with you, giving you predictable COGS, predictable margins, and once again a 10x additional revenue opportunity for that investment in the node zero Consulting Plus license.
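A small sketch of the sizing logic just described: one Consulting Plus license runs one pen test at a time, so the peak number of concurrent engagements sets the license count, and the scenario's "one dollar of license generates ten dollars of revenue" figure scales with it. The weekly engagement mix and the license price below are made-up illustrations.

```python
# Sketch of the Consulting Plus sizing logic described above. The engagement
# mix and license price are invented for illustration; the 10x follow-on
# figure is the ratio quoted in the scenario.
weekly_engagements = {
    "week1": ["verify_splunk_install"],
    "week2": ["verify_splunk_install", "pen_test_client_A"],
    "week3": ["verify_splunk_install", "pen_test_client_A", "purple_team_mssp"],
}

licenses_needed = max(len(jobs) for jobs in weekly_engagements.values())
license_cost = 30_000        # hypothetical annual cost per license
follow_on_multiplier = 10    # "$1 of node zero license -> $10 of revenue"

spend = licenses_needed * license_cost
print(f"Consulting Plus licenses needed: {licenses_needed}")
print(f"License spend:                   ${spend:,}")
print(f"Projected follow-on revenue:     ${spend * follow_on_multiplier:,}")
```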
>> My name is Snehal, and I'm the co-founder and CEO here at Horizon3. I'm going to talk to you today about why it's important to look at your enterprise through the eyes of an attacker. The challenge I had when I was a CIO in banking, the CTO at Splunk, and serving within the Department of Defense, is that I had no idea whether I was secure until the bad guys showed up. Am I logging the right data? Am I fixing the right vulnerabilities? Are my security tools, which I've paid millions of dollars for, actually working together to defend me? The answer is: I don't know. Does my team actually know how to respond to a breach in the middle of an incident? I don't know; I've got to wait for the bad guys to show up. So the challenge I had was: how do we proactively verify our security posture? I tried a variety of techniques. The first was the use of vulnerability scanners, and the challenge with vulnerability scanners is that being vulnerable doesn't mean you're exploitable. I might have a hundred thousand findings from my scanner, of which maybe five or ten can actually be exploited in my environment. The other big problem with scanners is that they can't chain weaknesses together from machine to machine. If you've got a thousand machines in your environment, or more, what a vulnerability scanner will do is tell you that you have a problem on machine one and, separately, a problem on machine two, but what they can't tell you is that an attacker could use a low from machine one plus a low from machine two to equal a critical in your environment. And what attackers do in their tactics is chain together misconfigurations, dangerous product defaults, harvested credentials, and exploitable vulnerabilities into attack paths across different machines. So, to address the attack paths across different machines, I tried layering in consulting-based pen testing, and the issue is that when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they actually do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest and verify that you've actually fixed the problem. So what happens is you end up with these pen test reports that are incomplete snapshots, quickly going stale. Then, to mitigate that problem, I tried using breach and attack simulation tools, and the struggle with these tools is: one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for, and also had to maintain as my environment changed; and three, these types of tools were not safe to run against production systems, which was the majority of my attack surface. So that's why we went off to start Horizon3.
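The "low on machine one plus a low on machine two equals a critical" point he makes above is easier to see as a path search over how weaknesses connect machines, which a flat scanner report cannot express. The following is a toy Python sketch: the hosts, edges, and labels are invented, and this is not how node zero actually models an environment.

```python
# Toy sketch of chaining individually "low" weaknesses across machines into a
# critical path. Hosts, edges, and labels are invented for illustration.
from collections import deque

# (from_host, to_host, weakness) -- each weakness is "low" on its own
EDGES = [
    ("workstation",  "print_server",      "default credentials (low)"),
    ("print_server", "file_server",       "cached admin password (low)"),
    ("file_server",  "domain_controller", "credential reuse (low)"),
]

def attack_paths(start, target):
    """Breadth-first search over the weakness graph, avoiding revisits."""
    graph = {}
    for src, dst, weakness in EDGES:
        graph.setdefault(src, []).append((dst, weakness))
    queue = deque([(start, [], {start})])
    while queue:
        host, path, seen = queue.popleft()
        if host == target:
            yield path
            continue
        for nxt, weakness in graph.get(host, []):
            if nxt not in seen:
                queue.append((nxt, path + [(host, nxt, weakness)], seen | {nxt}))

for path in attack_paths("workstation", "domain_controller"):
    print("critical chain found:")
    for src, dst, weakness in path:
        print(f"  {src} -> {dst}: {weakness}")
```

Each edge on its own would be ranked low by a scanner; only the end-to-end path to the domain controller shows why the combination is critical.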
So Tony and I met when we were in Special Operations together, and the challenge we wanted to solve was: how do we do infrastructure security testing at scale, by putting the power of a 20-year pen testing veteran into the hands of an IT admin or a network engineer in just three clicks? The whole idea is that we enable these fixers, the blue team, to run node zero, our pen testing product, to quickly find problems in their environment; that blue team will then go off and fix the issues that were found, and then they can quickly rerun the attack to verify that they fixed the problem. And the whole idea is delivering this without requiring custom scripts to be developed, without requiring credentialed agents to be installed, and without requiring the use of external third-party consulting services or professional services: self-service pen testing to quickly drive find, fix, verify. There are three primary use cases that our customers use us for. The first is the SOC manager, who uses us to verify that their security tools are actually effective: to verify that they're logging the right data in Splunk or in their SIEM, to verify that their managed security services provider is able to quickly detect and respond to an attack and to hold them accountable for their SLAs, or that the SOC understands how to quickly detect and respond, measuring and verifying that, or that the variety of tools you have in your stack (most organizations have 130-plus cyber security tools, none of which are designed to work together) are actually working together. The second primary use case is proactively hardening and verifying your systems. This is when the IT admin, the network engineer, is able to run self-service pen tests to verify that their Cisco environment is installed, hardened, and configured correctly, or that their credential policies are set up right, or that their vCenter or WebSphere or Kubernetes environments are actually designed to be secure. What this allows the IT admins and network engineers to do is shift from running one or two pen tests a year to 30, 40, or more pen tests a month, and you can actually wire those pen tests into your DevOps process, or into your detection engineering and change management processes, to automatically trigger pen tests every time there's a change in your environment (a minimal sketch of that kind of trigger is included below). The third primary use case is for those organizations lucky enough to have their own internal red team: they'll use node zero to do reconnaissance and exploitation at scale, and then use the output as a starting point for the humans to step in and focus on the really hard, juicy stuff that gets them on stage at DEF CON. And so these are the three primary use cases.
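Here is the minimal sketch referenced above for the second use case: a post-change hook that kicks off a pen test whenever the environment changes. The trigger URL, token, and response fields are entirely hypothetical placeholders, not a documented node zero API; the point is only the shape of wiring a test into a CI or change-management pipeline.

```python
# Sketch of a post-deploy / change-management hook that triggers a pen test.
# The endpoint, token, and response fields are hypothetical placeholders.
import os
import sys
import requests

TRIGGER_URL = os.environ.get("PENTEST_TRIGGER_URL",
                             "https://pentest.example.com/api/run")
API_TOKEN = os.environ.get("PENTEST_API_TOKEN", "")

def trigger_pentest(change_id: str) -> bool:
    """Start a pen test for this change; return True if no exploitable path was found."""
    resp = requests.post(
        TRIGGER_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"reason": "change_management", "change_id": change_id},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response field: count of exploitable attack paths discovered.
    return resp.json().get("exploitable_paths", 0) == 0

if __name__ == "__main__":
    clean = trigger_pentest(os.environ.get("CHANGE_ID", "local-test"))
    sys.exit(0 if clean else 1)  # fail the pipeline if new attack paths appeared
```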
What we'll do now is zoom into the find-fix-verify loop, because what I've found in my experience is that find-fix-verify is the future operating model for cyber security organizations. What I mean here is: in the find step, using continuous pen testing, what you want to enable is on-demand, self-service pen tests. You want those pen tests to find attack paths at scale, spanning your on-prem infrastructure, your cloud infrastructure, and your perimeter, because attackers don't only stay in one place; they will find ways to chain together a perimeter breach and a credential from your on-prem to gain access to your cloud, or some other permutation. And then the third part of continuous pen testing is that attackers don't focus on critical vulnerabilities anymore; they know we've built vulnerability management programs to reduce those vulnerabilities, so attackers have adapted, and what they do is chain together misconfigurations in your infrastructure, software, and applications with dangerous product defaults, with exploitable vulnerabilities, and with the collection of credentials, through a mix of techniques at scale. Once you've found those problems, the next question is: what do you do about it? You want to be able to prioritize fixing the problems that are actually exploitable in your environment and that truly matter, meaning they're going to lead to domain compromise, domain user compromise, or access to your sensitive data. The second thing you want to fix is making sure you understand what risk your crown-jewels data is exposed to. Where is your crown-jewels data? Is it in the cloud, is it on-prem, has it been copied to a share drive that you weren't aware of? If a domain user was compromised, could they access that crown-jewels data? You want to be able to use the attacker's perspective to secure the critical data you have in your infrastructure. And then finally, as you fix these problems, you want to quickly remediate and retest that you've actually fixed the issue, and this find-fix-verify cycle becomes the accelerator that drives purple team culture. The third part here is verify, and what you want to be able to do in the verify step is verify that your security tools, processes, and people can effectively detect and respond to a breach. You want to integrate that into your detection engineering processes, so that you know you're catching the right security rules, or that you've deployed the right configurations. You also want to make sure that your environment is adhering to best practices around systems hardening and cyber resilience, and finally you want to be able to prove your security posture over time to your board, to your leadership, and to your regulators. So what I'll do now is zoom into each of these three steps. When we zoom into find, here's the first example, using node zero and autonomous pen testing. What an attacker will do is find a way to break through the perimeter. In this example, it's very easy to misconfigure Kubernetes to allow an attacker to gain remote code execution into your on-prem Kubernetes environment and break through the perimeter. From there, what the attacker is going to do is conduct network reconnaissance and then find ways to gain code execution on other machines in the environment. As they get code execution, they start to dump credentials, collect a bunch of NTLM hashes, crack those hashes using open source and dark web available data as part of those attacks, and then reuse those credentials to log in and laterally maneuver throughout the environment. As they laterally maneuver, they can reuse those credentials and use credential spraying techniques and so on to compromise your business email, or to log in as admin into your cloud. This is a very common attack, and rarely is a CVE actually needed to execute it; often it's just a misconfiguration in Kubernetes, with a bad credential policy or password policy, combined with bad practices of credential reuse across the organization. Here's another example, an internal pen test, and this is from an actual customer. They had 5,000 hosts within their environment, they had EDR and UBA tools installed, and they initiated an internal pen test on a single machine. From that single initial access point, node zero enumerated the network, conducted reconnaissance, and found five thousand hosts were accessible. What node zero does under the covers is organize all of that
reconnaissance data into a knowledge graph that we call the cyber terrain map, and that cyber terrain map becomes the key data structure that we use to efficiently maneuver, attack, and compromise your environment. So what node zero will do is try to find ways to get code execution, reuse credentials, and so on. In this customer example they had Fortinet installed as their EDR, but node zero was still able to get code execution on a Windows machine. From there it was able to successfully dump credentials, including sensitive credentials from the LSASS process on the Windows box, and then reuse those credentials to log in as domain admin on the network. And once an attacker becomes domain admin, they have the keys to the kingdom; they can do anything they want. So what happened here? Well, it turns out Fortinet was misconfigured on three out of 5,000 machines: bad automation. The customer had no idea this had happened; they would have had to wait for an attacker to show up to realize it was misconfigured. The second thing is, well, why didn't Fortinet stop the credential pivot and the lateral movement? It turned out the customer didn't buy the right modules or turn on the right services within that particular product. And we see this not only with Fortinet but with Trend Micro and all the other defensive tools, where it's very easy to miss a checkbox in the configuration that will do things like prevent credential dumping. The next story I'll tell you is: attackers don't have to hack in, they log in. So, another infrastructure pen test. A typical technique attackers will take is man-in-the-middle attacks that collect hashes. In this case what an attacker will do is leverage a tool or technique called Responder to collect NTLM hashes that are being passed around the network, and there's a variety of reasons why these hashes are passed around; it's a pretty common misconfiguration. But as an attacker collects those hashes, they start to apply techniques to crack them. They'll pass the hash, and from there they will use open source intelligence, common password structures and patterns, and other types of techniques to try to crack those hashes into cleartext passwords. So here node zero automatically collected hashes, it automatically passed the hashes and cracked those credentials, and then from there it starts to take the domain user IDs and passwords it's collected and tries to access different services and systems in your enterprise. In this case node zero was able to successfully gain access to the Office 365 email environment because three employees didn't have MFA configured. So now what happens is node zero has placement and access in the business email system, which sets up the conditions for fraud, lateral phishing, and other techniques. But what's especially insightful here is that 80 percent of the hashes that were collected in this pen test were cracked in 15 minutes or less. Eighty percent. Twenty-six of the user accounts had a password that followed a pretty obvious pattern: first initial, last initial, and four random digits. The other thing that was interesting is 10 percent of service accounts had their user ID the same as their password, so vmware admin / vmware admin, websphere admin / websphere admin, and so on and so forth. And so attackers don't have to hack in, they just log in with credentials that they've collected.
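As an aside on the weak-password patterns just described, first initial plus last initial plus four digits, and service accounts whose password equals their user ID, here is a minimal Python sketch of the kind of audit a blue team could run over an authorized export of cracked credentials. The field names, account list, and helper function are hypothetical illustrations; this is not Horizon3's implementation.

```python
import re

def audit_cracked_passwords(users, cracked):
    """
    users:   list of dicts like {"sam": "jsmith", "first": "John", "last": "Smith", "is_service": False}
    cracked: dict mapping account name -> recovered cleartext password
             (e.g. from an authorized audit of exported hashes)
    Returns account names grouped by weakness class.
    """
    findings = {"initials_plus_digits": [], "username_equals_password": []}
    for u in users:
        pw = cracked.get(u["sam"])
        if pw is None:
            continue
        # Pattern called out in the talk: first initial + last initial + four digits, e.g. "js4821"
        initials = (u["first"][:1] + u["last"][:1]).lower()
        if initials and re.fullmatch(re.escape(initials) + r"\d{4}", pw.lower()):
            findings["initials_plus_digits"].append(u["sam"])
        # Service accounts whose password is just the account name, e.g. vmware_admin / vmware_admin
        if u.get("is_service") and pw.lower() == u["sam"].lower():
            findings["username_equals_password"].append(u["sam"])
    return findings

if __name__ == "__main__":
    users = [
        {"sam": "jsmith", "first": "John", "last": "Smith", "is_service": False},
        {"sam": "vmware_admin", "first": "", "last": "", "is_service": True},
    ]
    cracked = {"jsmith": "JS4821", "vmware_admin": "vmware_admin"}
    print(audit_cracked_passwords(users, cracked))
```

In practice you would feed this the output of a sanctioned password audit and route the hits into the same find-fix-verify queue described in the rest of the talk.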
The next story here is becoming AWS admin. In this example, once again an internal pen test, node zero gets initial access and discovers 2,000 hosts are network reachable from that environment. It fingerprints and organizes all of that data into a cyber terrain map. From there it fingerprints that HP iLO, the integrated lights-out service, was running on a subset of hosts. iLO is a service that is often not instrumented or observed by security teams, nor is it easy to patch; as a result, attackers know this and immediately go after those types of services. So in this case that iLO service was exploitable and we were able to get code execution on it. iLO stores all the user IDs and passwords in clear text in a particular set of processes, so once we gained code execution we were able to dump all of the credentials and then laterally maneuver to log in to the Windows box next door as admin. On that admin box we were able to gain access to the share drives, and we found a credentials file saved on a share drive. It turned out that credentials file was the AWS admin credentials file, giving us full admin authority to their AWS accounts. Not a single security alert was triggered in this attack, because the customer wasn't observing the iLO service and every step thereafter was a valid login in the environment. And so what do you do? Step one, patch the server. Step two, delete the credentials file from the share drive. And step three, get better instrumentation on privileged access users and logins. The final story I'll tell is a typical pattern that we see across the board, one that combines the various techniques I've described, where an attacker is going to go off and use open source intelligence to find all of the employees that work at your company. From there they're going to look up those employees in dark web breach databases and other sources of information, and then use that as a starting point to password spray to compromise a domain user. All it takes is one employee to reuse a breached password for their corporate email, or all it takes is a single employee to have a weak password that's easily guessable. All it takes is one. And once the attacker is able to gain domain user access, in most shops the domain user is also the local admin on their laptop, and once you're local admin you can dump SAM and get local admin NTLM hashes. You can use that to reuse credentials again as local admin on neighboring machines, and attackers will start to rinse and repeat. Eventually they're able to get to a point where they can dump LSASS, or by unhooking the antivirus, defeating the EDR, or finding a misconfigured EDR as we've talked about earlier, compromise the domain. And what's consistent is that the fundamentals are broken at these shops: they have poor password policies, they don't have least-privilege access implemented, Active Directory groups are too permissive, with domain admin or domain user also being the local admin, AV or EDR solutions are misconfigured or easily unhooked, and so on. What we found in 10,000 pen tests is that user behavior analytics tools never caught us in that lateral movement, in part because those tools require pristine logging data in order to work, and also because it becomes very difficult to find that baseline of normal usage versus abnormal usage of credential logins. Another interesting insight is that there were several marquee brand-name MSSPs defending our customers' environments, and for them it took seven hours to detect and respond to the pen test. Seven hours. The pen test was over in less than two hours. So what you had was an egregious violation of the service level agreements that MSSP had in place, and the customer was able to use us to get service credit and drive accountability of their SOC and of their provider.
The third interesting thing is, in one case it took us seven minutes to become domain admin in a bank. That bank had every Gucci security tool you could buy, yet in 7 minutes and 19 seconds node zero started as an unauthenticated member of the network and was able to escalate privileges, through chaining misconfigurations, lateral movement, and so on, to become domain admin. If it's seven minutes today, we should assume it'll be less than a minute a year or two from now, making it very difficult for humans to detect and respond to that type of blitzkrieg attack. So that's the find. It's not just about finding problems, though; the bulk of the effort should be what to do about it, the fix and the verify. As you find those problems, back to Kubernetes as an example, we will show you the path: here is the kill chain we took to compromise that environment. We'll show you the impact: here's the proof of exploitation that we were able to use to compromise it, and there's the actual command that we executed, so you could copy and paste that command and compromise that kubelet yourself if you want. And then the impact is we got code execution, and we'll actually show you: here is the impact, this is a critical, here's why, it enabled perimeter breach, these are the affected applications. We'll tell you the specific IPs where you've got the problem, how it maps to the MITRE ATT&CK framework, and then we'll tell you exactly how to fix it. We'll also show you what this problem enabled, so you can accurately prioritize why this is important or why it's not important. The next part is accurate prioritization. The hardest part of my job as a CIO was deciding what not to fix. So take SMB signing not required as an example: by default that CVSS-style score is a one out of 10, but this misconfiguration, and it's not a CVE, it's a misconfig, enabled an attacker to gain access to 19 credentials, including one domain admin and two local admins, and access to a ton of data. Because of that context, this is really a 10 out of 10.
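To make that context-based re-scoring concrete before moving on to which occurrences to fix, here is a rough Python sketch of the idea: a finding keeps its vendor base score unless the attack path shows it enabled something worse. The weakness names, impact weights, and the 9.0 "fix now" threshold are illustrative assumptions, not node zero's actual scoring model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    host: str
    weakness: str          # e.g. "SMB signing not required"
    base_score: float      # vendor/CVSS-style severity, 0-10
    enabled: List[str] = field(default_factory=list)  # what the weakness let the attacker do

# Illustrative impact weights; in practice these would come from the attack graph.
IMPACT = {
    "credential_capture": 6.0,
    "local_admin_compromise": 7.0,
    "sensitive_data_access": 8.0,
    "domain_admin_compromise": 10.0,
}

def contextual_score(f: Finding) -> float:
    """Score by the worst thing the weakness actually enabled, not just its base severity."""
    impact = max((IMPACT.get(e, 0.0) for e in f.enabled), default=0.0)
    return max(f.base_score, impact)

def triage(findings: List[Finding]):
    fix_now = [f for f in findings if contextual_score(f) >= 9.0]
    backlog = [f for f in findings if contextual_score(f) < 9.0]
    return fix_now, backlog

findings = [
    Finding("10.0.0.12", "SMB signing not required", 1.0,
            ["credential_capture", "domain_admin_compromise"]),
    Finding("10.0.0.44", "SMB signing not required", 1.0, []),  # same weakness, no downstream impact
]
fix_now, backlog = triage(findings)
print("fix now:", [f.host for f in fix_now], "backlog:", [f.host for f in backlog])
```

The same contextual score can then drive the narrowly scoped one-click verify retest described next.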
You'd better fix this as soon as possible. However, of the seven occurrences that we found, it's only a critical in three out of the seven, and these are the three specific machines, and we'll tell you the exact way to fix it, and you'd better fix these as soon as possible. For these four machines over here, these didn't allow us to do anything of consequence, so, because the hardest part is deciding what not to fix, you can justifiably choose not to fix these four issues right now and just add them to your backlog, and surge your team to fix these three as quickly as possible. And then once you fix these three, you don't have to re-run the entire pen test; you can select these three, then one-click verify and run a very narrowly scoped pen test that is only testing this specific issue. What that creates is a much faster cycle of finding and fixing problems. The other part of fixing is verifying that you don't have sensitive data at risk. So once we become a domain user, we're able to use those domain user credentials and try to gain access to databases, file shares, S3 buckets, git repos, and so on, and help you understand what sensitive data you have at risk. So in this example, a green checkbox means we logged in as a valid domain user, we were able to get read/write access on the database, and this is how many records we could have accessed. We don't actually look at the values in the database, but we'll show you the schema so you can quickly characterize that PII data was at risk here, and we'll do that for your file shares and other sources of data. So now you can accurately articulate the data you have at risk and prioritize cleaning that data up, especially data that would lead to a fine or a big news issue. So that's the find, that's the fix; now we're going to talk about the verify. The key part in verify is embracing and integrating with detection engineering practices. When you think about your layers of security tools, you've got lots of tools in place, on average 130 tools at any given customer, but these tools were not designed to work together. So when you run a pen test, what you want to ask is: did you detect us, did you log us, did you alert on us, did you stop us? And from there what you want to see is, okay, what are the techniques that are commonly used to defeat an environment, to actually compromise it? If you look at the top 10 techniques we use, and there are far more than just these 10, but these are the most often executed, nine out of ten have nothing to do with CVEs. It has to do with misconfigurations, dangerous product defaults, bad credential policies, and how we chain those together to become a domain admin or compromise a host. So what customers will do is, every single attacker command we executed is provided to you as an attacker activity log, so you can actually see every single attacker command we ran, the timestamp it was executed, the hosts it executed on, and how it maps to the MITRE ATT&CK tactics. Our customers will have these attacker logs on one screen, and then they'll go look into Splunk, or Exabeam, or SentinelOne, or CrowdStrike, and say: did you detect us, did you log us, did you alert on us, or not? And to make that even easier, take this example: hey Splunk, what logs did you see at this time on the VMware host? Because that's when node zero was able to dump credentials, and that allows you to identify and fix your logging blind spots. To make that easier we've got app integration, so this is an actual Splunk app in the Splunk App Store, and what you can do is, inside the Splunk console itself, you
can fire up the Horizon 3 node 0 app all of the pen test results are here so that you can see all of the results in one place and you don't have to jump out of the tool and what you'll show you as I skip forward is hey there's a pen test here are the critical issues that we've identified for that weaker default issue here are the exact commands we executed and then we will automatically query into Splunk all all terms on between these times on that endpoint that relate to this attack so you can now quickly within the Splunk environment itself figure out that you're missing logs or that you're appropriately catching this issue and that becomes incredibly important in that detection engineering cycle that I mentioned earlier so how do our customers end up using us they shift from running one pen test a year to 30 40 pen tests a month oftentimes wiring us into their deployment automation to automatically run pen tests the other part that they'll do is as they run more pen tests they find more issues but eventually they hit this inflection point where they're able to rapidly clean up their environment and that inflection point is because the red and the blue teams start working together in a purple team culture and now they're working together to proactively harden their environment the other thing our customers will do is run us from different perspectives they'll first start running an RFC 1918 scope to see once the attacker gained initial access in a part of the network that had wide access what could they do and then from there they'll run us within a specific Network segment okay from within that segment could the attacker break out and gain access to another segment then they'll run us from their work from home environment could they Traverse the VPN and do something damaging and once they're in could they Traverse the VPN and get into my cloud then they'll break in from the outside all of these perspectives are available to you in Horizon 3 and node zero as a single SKU and you can run as many pen tests as you want if you run a phishing campaign and find that an intern in the finance department had the worst phishing behavior you can then inject their credentials and actually show the end-to-end story of how an attacker fished gained credentials of an intern and use that to gain access to sensitive financial data so what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time I'll leave you two things one is what is the AI in Horizon 3 AI those knowledge graphs are the heart and soul of everything that we do and we use machine learning reinforcement techniques reinforcement learning techniques Markov decision models and so on to be able to efficiently maneuver and analyze the paths in those really large graphs we also use context-based scoring to prioritize weaknesses and we're also able to drive collective intelligence across all of the operations so the more pen tests we run the smarter we get and all of that is based on our knowledge graph analytics infrastructure that we have finally I'll leave you with this was my decision criteria when I was a buyer for my security testing strategy what I cared about was coverage I wanted to be able to assess my on-prem cloud perimeter and work from home and be safe to run in production I want to be able to do that as often as I wanted I want to be able to run pen tests in hours or days not weeks or months so I could accelerate that fine fix verify loop I wanted my it admins and 
network engineers with limited offensive experience to be able to run a pen test in a few clicks through a self-service experience, without having to install agents and without having to write custom scripts. And finally, I didn't want to get nickel-and-dimed on having to buy different types of attack modules or different types of attacks; I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted, so I could look at my trends and directions over time. So I hope you found this talk valuable. We're easy to find, and I look forward to seeing you use the product and letting our results do the talking. When you look at the way our pen testing algorithms work, we dynamically select how to compromise an environment based on what we've discovered, and the goal is to become a domain admin, compromise a host, compromise domain users, find ways to encrypt data, steal sensitive data, and so on. But when you look at the top 10 techniques that we ended up using to compromise environments, the first nine have nothing to do with CVEs, and that's the reality. CVEs are, yes, a vector, but less than two percent of CVEs are actually used in a compromise. Oftentimes it's some sort of credential collection, credential cracking, or credential pivoting, and using that to become an admin and then compromising environments from that point on. I'll leave this up for you to read through, and you'll have the slides available, but I found it very insightful that organizations, ourselves included when I was at GE, invested heavily in just standard vulnerability management programs. When I was at DOD, that's all DISA cared about asking us about: our CVE posture. But the attackers have adapted to not rely on CVEs to get in, because they know that organizations are actively looking at and patching those CVEs, and instead they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment. A concrete example: by default, vCenter backups are not encrypted, and so if an attacker finds vCenter, what they'll do is find the backup location, and there are specific vCenter metadata files where the admin credentials are sprinkled through the binaries, so you can actually, as an attacker, find the right metadata file, parse out the binary, and now you've got the admin credentials for the vCenter environment and can start to log in as admin. There's also a bad habit by signal officers and signal practitioners in the Army and elsewhere where the VM notes section of a virtual image has the password for the VM. Well, those VM notes are not stored encrypted, and attackers know this; they're able to go off and find the VMs that are unencrypted, find the notes section, pull out the passwords for those images, and then reuse those credentials across the board.
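The VM-notes habit described above is easy to audit for defensively. Below is a rough Python sketch using pyVmomi that scans VM annotation fields for credential-looking strings; the hostnames and credentials are placeholders, SSL verification is disabled only as a lab shortcut, and exact connection details may vary with your pyVmomi version and vSphere setup.

```python
# Rough defensive sweep for the "passwords in VM notes" habit described above.
# Assumes pyVmomi is installed and the account used has read-only access to inventory.
import re
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

SUSPICIOUS = re.compile(r"(password|passwd|pwd|credential)\s*[:=]", re.IGNORECASE)

def scan_vm_notes(host, user, pwd):
    ctx = ssl._create_unverified_context()   # lab-only shortcut; validate certificates in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        hits = []
        for vm in view.view:
            notes = (vm.config.annotation or "") if vm.config else ""
            if SUSPICIOUS.search(notes):
                hits.append(vm.name)          # report the VM name, never the notes themselves
        return hits
    finally:
        Disconnect(si)

if __name__ == "__main__":
    for name in scan_vm_notes("vcenter.example.local", "readonly@vsphere.local", "..."):
        print(f"VM notes appear to contain a credential: {name}")
```

Run read-only, this gives the blue team the same visibility an attacker gets from unencrypted VM notes, without copying the secrets anywhere else.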
So I'll pause here. Patrick, I'd love to get some commentary from you on these techniques and other things that you've seen, and what we'll do in the last, say, 10 to 15 minutes is roll through a little bit more of what you do about it. Yeah, no, I love it. I think this is pretty exhaustive. What I like about what you've done here is, we've seen double-digit increases in the number of organizations reporting actual breaches year over year for the last three years, and in the zeitgeist we often pegged that on ransomware, which of course is incredibly important and very top of mind. But what I like about what you have here is that we're reminding the audience that the attack surface area, the vectors that matter, have to be more comprehensive than just thinking about ransomware scenarios. Yeah, right on. So let's build on this. When you think about your defense in depth, you've got multiple security controls that you've purchased and integrated, and you've got that redundancy if a control fails, but the reality is that these security tools aren't designed to work together. So when you run a pen test, what you want to ask yourself is: did you detect node zero, did you log node zero, did you alert on node zero, and did you stop node zero? And when you think about how to do that, every single attacker command executed by node zero is available in an attacker log, so you can now see, you know, at the bottom here, vCenter exploit, at that time, on that IP, and how it aligns to MITRE ATT&CK. What you want to be able to do is go figure out: did your security tools catch this or not? That becomes very important in using the attacker's perspective to improve your defensive security controls. And the way we've tried to make this easier, back to, you know, I bleed green in many ways still from my Splunk background, is what our customers do: they'll look at the attacker logs on one screen, and they'll look at what Splunk saw or missed on another screen, and then they'll use that to figure out what their logging blind spots are. Where that becomes really interesting is we've actually built out an integration into Splunk, where there's a Splunk app you can download off of Splunkbase, and you'll get all of the pen test results right there in the Splunk console. From that Splunk console you're going to be able to see these are all the pen tests that were run, these are the issues that were found, so you can look at that particular pen test, here are all of the weaknesses that were identified for it and how they categorize out. For each of those weaknesses you can click on any one of them, the criticals in this case, and then we'll tell you, for that weakness, and this is where the punch line comes in, so I'll pause the video here, for that weakness these are the commands that were executed on these endpoints at this time, and then we'll actually query Splunk for that IP address, or events containing that IP, and these are the source types that surfaced any sort of activity. So what we try to do is help you, as quickly and efficiently as possible, identify the logging blind spots in your Splunk environment based on the attacker's perspective. So as this video plays through, you can see it, Patrick; I'd love to get your thoughts, just seeing so many Splunk deployments and the effectiveness of those deployments, and how this is going to help really elevate the effectiveness of all of your Splunk customers. Yeah, I'm super excited about this. I mean, I think these kinds of purpose-built integrations really move the needle for our customers. At the end of the day, when I think about the power of Splunk, I think about a product I was first introduced to 12 years ago that was an on-prem piece of software, and at the time it sold on sort of perpetual and term licenses, but one thing that made it special was that it could eat data at a speed that nothing else I'd ever seen could. You can ingest massively scalable amounts of data; it
did cool things like schema on read which facilitated that there was this language called SPL that you could nerd out about uh and you went to a conference once a year and you talked about all the cool things you were splunking right but now as we think about the next phase of our growth um we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding and as you look at the as you look at the role of the ciso it's mind-blowing to me the amount of sources Services apps that are coming into the ciso span of let's just call it a span of influence in the last three years uh you know we're seeing things like infrastructure service level visibility application performance monitoring stuff that just never made sense for the security team to have visibility into you um at least not at the size and scale which we're demanding today um and and that's different and this isn't this is why it's so important that we have these joint purpose-built Integrations that um really provide more prescription to our customers about how do they walk on that Journey towards maturity what does zero to one look like what does one to two look like whereas you know 10 years ago customers were happy with platforms today they want integration they want Solutions and they want to drive outcomes and I think this is a great example of how together we are stepping to the evolving nature of the market and also the ever-evolving nature of the threat landscape and what I would say is the maturing needs of the customer in that environment yeah for sure I think especially if if we all anticipate budget pressure over the next 18 months due to the economy and elsewhere while the security budgets are not going to ever I don't think they're going to get cut they're not going to grow as fast and there's a lot more pressure on organizations to extract more value from their existing Investments as well as extracting more value and more impact from their existing teams and so security Effectiveness Fierce prioritization and automation I think become the three key themes of security uh over the next 18 months so I'll do very quickly is run through a few other use cases um every host that we identified in the pen test were able to score and say this host allowed us to do something significant therefore it's it's really critical you should be increasing your logging here hey these hosts down here we couldn't really do anything as an attacker so if you do have to make trade-offs you can make some trade-offs of your logging resolution at the lower end in order to increase logging resolution on the upper end so you've got that level of of um justification for where to increase or or adjust your logging resolution another example is every host we've discovered as an attacker we Expose and you can export and we want to make sure is every host we found as an attacker is being ingested from a Splunk standpoint a big issue I had as a CIO and user of Splunk and other tools is I had no idea if there were Rogue Raspberry Pi's on the network or if a new box was installed and whether Splunk was installed on it or not so now you can quickly start to correlate what hosts did we see and how does that reconcile with what you're logging from uh finally or second to last use case here on the Splunk integration side is for every single problem we've found we give multiple options for how to fix it this becomes a great way to prioritize what fixed actions to automate in your soar platform and 
what we want to get to eventually is being able to automatically trigger soar actions to fix well-known problems like automatically invalidating passwords for for poor poor passwords in our credentials amongst a whole bunch of other things we could go off and do and then finally if there is a well-known kill chain or attack path one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain that actually shows a path to domain admin that I'm sincerely worried about and use it as a glass table over which I could start to layer possible indicators of compromise and now you've got a great starting point for glass tables and iocs for actual kill chains that we know are exploitable in your environment and that becomes some super cool Integrations that we've got on the roadmap between us and the Splunk security side of the house so what I'll leave with actually Patrick before I do that you know um love to get your comments and then I'll I'll kind of leave with one last slide on this wartime security mindset uh pending you know assuming there's no other questions no I love it I mean I think this kind of um it's kind of glass table's approach to how do you how do you sort of visualize these workflows and then use things like sore and orchestration and automation to operationalize them is exactly where we see all of our customers going and getting away from I think an over engineered approach to soar with where it has to be super technical heavy with you know python programmers and getting more to this visual view of workflow creation um that really demystifies the power of Automation and also democratizes it so you don't have to have these programming languages in your resume in order to start really moving the needle on workflow creation policy enforcement and ultimately driving automation coverage across more and more of the workflows that your team is seeing yeah I think that between us being able to visualize the actual kill chain or attack path with you know think of a of uh the soar Market I think going towards this no code low code um you know configurable sore versus coded sore that's going to really be a game changer in improve or giving security teams a force multiplier so what I'll leave you with is this peacetime mindset of security no longer is sustainable we really have to get out of checking the box and then waiting for the bad guys to show up to verify that security tools are are working or not and the reason why we've got to really do that quickly is there are over a thousand companies that withdrew from the Russian economy over the past uh nine months due to the Ukrainian War there you should expect every one of them to be punished by the Russians for leaving and punished from a cyber standpoint and this is no longer about financial extortion that is ransomware this is about punishing and destroying companies and you can punish any one of these companies by going after them directly or by going after their suppliers and their Distributors so suddenly your attack surface is no more no longer just your own Enterprise it's how you bring your goods to Market and it's how you get your goods created because while I may not be able to disrupt your ability to harvest fruit if I can get those trucks stuck at the border I can increase spoilage and have the same effect and what we should expect to see is this idea of cyber-enabled economic Warfare where if we issue a sanction like Banning the Russians from traveling there is a cyber-enabled 
counter punch which is corrupt and destroy the American Airlines database that is below the threshold of War that's not going to trigger the 82nd Airborne to be mobilized but it's going to achieve the right effect ban the sale of luxury goods disrupt the supply chain and create shortages banned Russian oil and gas attack refineries to call a 10x spike in gas prices three days before the election this is the future and therefore I think what we have to do is shift towards a wartime mindset which is don't trust your security posture verify it see yourself Through The Eyes of the attacker build that incident response muscle memory and drive better collaboration between the red and the blue teams your suppliers and Distributors and your information uh sharing organization they have in place and what's really valuable for me as a Splunk customer was when a router crashes at that moment you don't know if it's due to an I.T Administration problem or an attacker and what you want to have are different people asking different questions of the same data and you want to have that integrated triage process of an I.T lens to that problem a security lens to that problem and then from there figuring out is is this an IT workflow to execute or a security incident to execute and you want to have all of that as an integrated team integrated process integrated technology stack and this is something that I very care I cared very deeply about as both a Splunk customer and a Splunk CTO that I see time and time again across the board so Patrick I'll leave you with the last word the final three minutes here and I don't see any open questions so please take us home oh man see how you think we spent hours and hours prepping for this together that that last uh uh 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now uh and I think nist has done some really interesting work here around building cyber resilient organizations that have that has really I think helped help the industry see that um incidents can come from adverse conditions you know stress is uh uh performance taxations in the infrastructure service or app layer and they can come from malicious compromises uh Insider threats external threat actors and the more that we look at this from the perspective of of a broader cyber resilience Mission uh in a wartime mindset uh I I think we're going to be much better off and and will you talk about with operationally minded ice hacks information sharing intelligence sharing becomes so important in these wartime uh um situations and you know we know not all ice acts are created equal but we're also seeing a lot of um more ad hoc information sharing groups popping up so look I think I think you framed it really really well I love the concept of wartime mindset and um I I like the idea of applying a cyber resilience lens like if you have one more layer on top of that bottom right cake you know I think the it lens and the security lens they roll up to this concept of cyber resilience and I think this has done some great work there for us yeah you're you're spot on and that that is app and that's gonna I think be the the next um terrain that that uh that you're gonna see vendors try to get after but that I think Splunk is best position to win okay that's a wrap for this special Cube presentation you heard all about the global expansion of horizon 3.ai's partner program for their Partners have a unique opportunity to take advantage of their node zero product uh 
International go to Market expansion North America channel Partnerships and just overall relationships with companies like Splunk to make things more comprehensive in this disruptive cyber security world we live in and hope you enjoyed this program all the videos are available on thecube.net as well as check out Horizon 3 dot AI for their pen test Automation and ultimately their defense system that they use for testing always the environment that you're in great Innovative product and I hope you enjoyed the program again I'm John Furrier host of the cube thanks for watching

Published Date : Sep 28 2022

Todd Crosley, CrowdStrike & Patrick McDowell, AWS | CrowdStrike Fal.Con 2022


 

hi everybody this is dave vellante and this is day two of the cube's coverage of falcon 2022 we're live from the aria in las vegas everybody was out last night at the brooklyn bowl awesome band customers were dancing a lot of fun a lot of business going on here todd crosley's here he's to my left he's the senior director of cloud partnerships at crowdstrike and patrick mcdowell is the global technical lead for security partners at aws these guys have been partnering for a long time and we're going to dig into that partnership gents welcome to the cube thanks for having us thanks happy birthday you're very welcome todd talk about the the history of the relationship you guys are kind of bet business on each other but take us back sure thing so you know yesterday or the day before the company turned 11 years old or so i think george talked a lot about that the other day but uh we've actually been working closely with the amazon team for more than five years at this point and it's really evolved into a strategic collaboration really so uh from an executive on down into field alignment channel alignment uh the marketing team and and the build team where we we work with patrick and his extended team on different service integrations and different uh you know effectively positive security outcomes for the customers together i mean patrick if you think about the history of aws it's like you guys realized you had lightning in a bottle and then also realized wow and ecosystem play is the way to go and when you go to re invent it's palpable the the ecosystem innovation and the the flywheel effect that you've created but what's aws's perspective on the partnership with crowdstrike yeah it's essential to us and our customers right so we've been doing deep integrations probably since i think the first big one of crowdstrike was with guard duty amazon guard duty which is our uh easy to use threat detection service in aws one click on and their threat intelligence actually build is built directly into that service so an aws customer turns on guard duty it's automatically uh being uh enhanced and enriched with falcon x threat intelligence uh by default yeah so the cloud has become the first line of defense for a lot of the csos that i talk to you know everybody's cloud first cloud first and it's like okay that's awesome because cloud has really good security but then it's okay but if there's some differences i got there's a shared security model that i have to understand and and so when you guys talk to customers i know it's you know one of the leadership principles is you got to be focused you know insanely focused on customers crowdstrike very customer focused as well that's how you sort of created this company that is doing such innovative things what are customers telling you um about how they want you to work together what kind of feedback are you getting any other examples that you might have in the future yeah sure thing i'll go first so that well so they they depend on uh the like you said this shared security model but there's ample opportunity where vendors like crowdstrike and we've worked with patrick's team extensively to to pinpoint areas where we can provide so examples of that would be like on the in compute so like you recently released the graviton processors we've had a recent success with a customer where uh they've walked down their digital transformation journey they had they were looking to switch over to the graviton processors and we work closely with patrick's team to say okay 
we're going to certify our sensor uh on that particular area of compute so the customer continue to enjoy crowdstrike in our single-platform cloud-first native platform to say okay you've got skill sets on the on-prem environment your endpoint environment and good news you're switching to graviton no problem we still support that and we've been able to do that by working closely with each other inclusive not just the architects but the product teams work closely together as well yeah in this customer case um you know uh crowdstrike already supported for amazon linux but this customer a very large customer of ours need to move 10 000 ec2 instances to graviton on red hat linux not amazon linux so we got crowdstrike engineering our engineering our architects and we were able to get this customer red hat support for graviton within two months right in production ready to go and unblock this migration so i love the graviton example so what i always default to when somebody says oh we're cloud native i'd say are you running on graviton uh because because graviton is is is uh amazon's custom silicon that complements what you're doing with intel what you're doing with amd and they're all kinds of different instant types but it's based on an arm system and it's delivering new levels of performance and and an energy reduction if i can use that term um and and it's on a new curve yeah and so tremendous cost savings as well right i think out of the box with no change in the application you're getting 20 and that's and i i don't even think you're really driving it as hard as you can is my assessment but you gotta be considerate of these days so but that's an example of of how you're using from a technology standpoint cloud native and then and then sort of partnering does this you know graviton one graviton true graviton three i'm sure there'll be graviton 10 someday no doubt i think it's a good example of us working closely together paying attention to the customer's needs and making sure they don't they don't miss a step and and still stop the breach and pay attention to their security needs so you're part of the apn the amazon partner network yep what do you got to do to be like certified at an elite level there you probably have to go through a lot of hoops and maybe you could describe what you guys do there and how you work together to ensure that a company is adequate and more than adequate for its customers yeah sure thing so we we've participated in and we're certified in for example the security competency area which elevates us amongst other security isvs we're one of the few that have that um we have the well we participate in the well architected program which means that we've demonstrated a common set of criteria and customer references i mean that's a example um another area where we've participated quite a bit is in in the land of digital supply chains notably aws marketplace where we've uh latched on to many of their features and capabilities and participated in strategic programs whether it be um you know including the channel partner or taking a look at traditional private offers or taking a look at like the looping in the entire ecosystem to make sure the customer gets what they need so how do you integrate with things like control tower where where are the seams and how do you make that as seamless as possible for customers or maybe you can explain what control power yeah so uh they have multiple integrations for control tower for their cspm horizon uh it automatically onboards new 
aws accounts so uh you know as you're vending accounts you're giving to more devops teams horizon is automatically deploying and being protected those accounts so it has those guard rails in place for customers in a nice easy to use deployment model that you don't have to think about right so control tower in general is uh it kind of gives customers guard rails an easy button if you're new to aws i'm migrating hey aws can you just tell me the best practices how should i set up my accounts i need a landing zone i'm doing migration so it's really like a wizard for getting started in aws and crowdstrike integrates that with falcon discover and as well as falcon horizon and your age so yeah you guys really don't compete um you know maybe there's some overlap overlap is better than than gaps but you know when you when you take something like you know network firewalls and things like that amazon brings that to the table and then crowdstrike will build on top of that is that correct yeah i'll take this one uh so george has said it crowdstrike is not a network security company right however they have an integration using their threat intelligence on on our amazon network firewall so aws amazon and crouchstrike coming together actually have a joint offering for customers in a space that crowdstrike has never been in before itself so i think that's very exciting so yeah yeah all those integrations that pat's talking about we've actually cataloged the whole thing on a github page where we find that's where customers go they took a look at the integration and the supporting documentation we're like okay yeah this makes sense this these two companies augment each other well and it turns out to be a good outcome and you check you'll take telemetry data from the aws cloud you can take it from you know any your agents can run anywhere right and then you bring that in to the or i guess you sort of you index it i in my term in in the aws cloud enables that because you've got virtually unlimited scaling capability and that's kind of where you guys started yeah cloud native dogma that's right yeah it's a competitive differentiator for us uh i we think it's nice we're a market leader in our space and amazon's a market leader in their space and and we've got a lot of synergy together where do you guys last question where do you guys respectively want to see the the relationship go if you had to put on your binoculars or even telescope where do you want to see this go well i think we're i think we're all in the business of accelerating positive security outcomes for the customer and the what we're doing is we're spending a lot of time educating our respective fields and respective customers to know that these these integrations do in fact exist uh they absolutely complement each other we were in a meeting uh you know maybe six ten months ago we're in a cio said i didn't know that the two that the two products work so well together speaking about the control tower and horizon particular example had i known that i would have bought it uh a lot quicker this is this is a great outcome and the fact that you're working with amazon together is a bit of a relief so that was nice yeah i'm gonna echo what george kirk said in his keynote yesterday that like security's a journey xdr is a journey and i think the work that we did on the open cyber security schema framework which is an open source common uh security language that all vendors can use including aws and crowdstrike i think that is where we're going to see uh the 
the industry rally around in the upcoming year there's so much security data there's a common uh now language that all products and clouds could talk to each other that's right tell tell me more about it ocsf is that right where did that come from and yeah so um it's it's a it's an open source framework and you know both crowdstrike aws and other uh you know players in the industry are like there's a common problem none of our products talk together it's all about customer benefit right so what can we do to democratize security data make things talk well play together everyone wants to do more analytics on lots of data lakes so this is where it's all coming together yeah better collaboration in industry obviously is is needed and then the other piece is education you guys both sort of refer to that that's what i when i come to conferences like this and reinforce as well as a lot of it i mean i remember the first reinforcement was like explaining the shared responsibility model now of course a lot of people understood it but a lot of people didn't when you fast forward to 2022 and reinvent it was a lot more focused on how to really exploit the capabilities that aws has and then here at crowdstrike it's like okay helping practitioners really understand how to take advantage of the full platform and and that's to your point patrick the journey all right guys hey we got to go thanks so much you for having us all right keep it right there fast and furious day two from crowdstrike's falcon 2022. you're watching thecube [Music] you

Published Date : Sep 21 2022

Jack Andersen & Joel Minnick, Databricks | AWS Marketplace Seller Conference 2022


 

>> Welcome back, everyone, to theCUBE's coverage here in Seattle, Washington of AWS's Marketplace Seller Conference. It's the big news within the Amazon Partner Network: combining with Marketplace, forming the Amazon Partner Organization, part of a big reorg as they grow the next-level, next-gen cloud, mid-game on the chessboard. theCUBE's got it covered. I'm John Furrier, host of theCUBE. Great guests here from Databricks, both CUBE alumni: Jack Andersen, GM and VP of the Databricks partnership team for AWS, you handle that relationship, and Joel Minnick, Vice President of Product and Partner Marketing. You guys have the keys to the kingdom with Databricks and AWS. Thanks for joining. Good to see you again. >> Thanks for having us back. Yeah, John, great to be here. >> So I feel like we're at re:Invent 2013, small event, no stage, but there's a real shift happening with procurement. Obviously it's a no-brainer on the micro level, you know, people should be buying online, self-service, at cloud scale, but Amazon's got billions being sold through their marketplace. They've reorganized their partner network. You can see kind of what's going on. They've kind of figured it out: let's put everything together and simplify, make it less of a website marketplace, merge it with the partner organization to have more synergy and frictionless experiences, so everyone can make more money and customers are gonna be happier. >> Yeah, that's right. >> I mean, you run the relationship. You're in the middle of it. >> Well, Amazon's mental model here is that they want the world's best ISVs to operate on AWS so that we can collaborate and co-architect on behalf of customers. And that's exactly what the APO and Marketplace allow us to do: to work with Amazon on these really, you know, unique use cases. >> You know, I interviewed Ali many times over the years. I remember many years ago, I think six, maybe seven years ago, we were talking, and he's like, we're all in on AWS. Obviously now, with the success of Databricks, you've got multiple clouds, and customers have choice, but I remember the strategy early on: it was, we're gonna go deep. So this speaks volumes to the relationship you've had over the years. Jack, take us through the relationship that Databricks has with AWS from a partner perspective, and Joel, from a product perspective, because it's not like you're a Johnny-come-lately, new to the scene, right? You've been there almost since the creation of this wave. What's the relationship, and how does it relate to what's going on today? >> So, most people may not know that Databricks was born on AWS. We actually did our first 100 million of revenue on Amazon. And today we're obviously available on multiple clouds, but we're very fond of our Amazon relationship. And when you look at what the APN allows us to do, we're able to expand our reach and co-sell with Amazon, and Marketplace broadens our reach. And so we think of Marketplace in three different aspects. We've got the Marketplace private offer business, which we've been doing for a number of years. Matter of fact, we're driving well over a hundred percent year-over-year growth in private offers, and we have a nine-figure business, so it's a very significant business. And when a customer uses a private offer, that private offer counts against their private pricing agreement with AWS, so they get pricing power against their private pricing. So it's really important. It goes on their Amazon bill in May.
We launched our pay as you go on demand offering. And in five short months, we have well over a thousand subscribers. And what this does is it really reduces the barriers to entry it's low friction. So anybody in an enterprise or startup or public sector company can start to use data bricks on AWS and pay consumption based model and have it go against their monthly bill. And so we see customers, you know, doing rapid experimentation pilots, POCs, they're, they're really learning the value of that first use case. And then we see rapid use case expansion. And the third aspect is the consulting partner, private offers C P O super important in how we involve our partner ecosystem of our consulting partners and our resellers that are able to work with data bricks on behalf of customers. >>So you got the big contracts with the private offer. You got the product market fit, kind of people iterating with data coming in with, with the buyers you go. And obviously the integration piece all fitting in there. Exactly. Exactly. Okay. So that's that those are the offers that's current and what's in marketplace today. Is that the products, what are, what are people buying? I mean, I guess what's the Joel, what are, what are people buying in the marketplace and what does it mean for >>Them? So fundamentally what they're buying is the ability to take silos out of their organization. And that's, that is the problem that data bricks is out there to solve, which is when you look across your data landscape today, you've got unstructured data, you've got structured data, you've got real time streaming data, and your teams are trying to use all of this data to solve really complicated problems. And as data bricks as the lake house company, what we're helping customers do is how do they get into the new world? How do they move to a place where they can use all of that data across all of their teams? And so we allow them to begin to find through the marketplace, those rapid adoption use cases where they can get rid of these data, warehousing data lake silos they've had in the past, get their unstructured and structured data onto one data platform and open data platform that is no longer adherent to any proprietary formats and standards and something. >>They can very much, very easily integrate into the rest of their data environment, apply one common data governance layer on top of that. So that from the time they ingest that data to the time they use that data to the time they share that data inside and outside of their organization, they know exactly how it's flowing. They know where it came from. They know who's using it. They know who has access to it. They know how it's changing. And then with that common data platform with that common governance solution, they'd being able to bring all of those use cases together across their real time, streaming their data engineering, their BI, their AI, all of their teams working on one set of data. And that lets them move really, really fast. And it also lets them solve challenges. They just couldn't solve before a good example of this, you know, one of the world's now largest data streaming platforms runs on data bricks with AWS. >>And if you think about what does it take to set that up? Well, they've got all this customer data that was historically inside of data warehouses, that they have to understand who their customers are. 
They have all this unstructured data they've built their data science models on, so they can do the right kinds of recommendation engines and forecasting. And then they've got all this streaming data going back and forth between clickstream data from what the customers are doing with their platform and the recommendations they wanna push back out. And if those teams were all working in individual silos, building these kinds of platforms would be extraordinarily slow and complex, but by building it on Databricks, they were able to release it in record time and have grown at record pace. >> That's the product platform impacting product development. Absolutely. I mean, this is like the difference between lagging months of product development to like days. Yes. Pretty much what you're getting at. Yeah. So total agility, I got that. Okay. Now, I'm a customer, I wanna buy in the marketplace, but you've also got a direct sales force out there. So how do you guys look at this? Is there channel conflict? Are there comp programs? Because one of the things I heard today on the stage from AWS's leadership, Chris was up there speaking, and the moment I heard it I thought, hey, this is a CRO conversation, a chief revenue officer conversation, which means someone's getting compensated. So if I'm the sales rep at Databricks, what's my motion to the customer? Do I get paid? Does Amazon sell it? Take us through that. Is there channel conflict, or is there a lift? >> Well, I'd add to what Joel just talked about with the value of the solution: our entire offering is available on AWS Marketplace. So it's not a subset, it's the entire Databricks offering, and >> The flagship, all the top, >> Everything, the flagship, the complete offering. So it's not segmented. It's not a sub-segment. You know, you can use all of our different offerings. Now, when it comes to seller compensation, we view this two different ways, right? One is that AWS is also incented, right, versus selling a native service, to recommend Databricks for the right situation. Same thing with Databricks. Our sales force wants to do the right thing for the customer if the customer wants to use Marketplace as their procurement vehicle. And that really helps customers, because if you get Databricks and five other ISVs together, and let's say each ISV is spending, you're spending a million dollars, you have $5 million of spend. You put that spend through the flywheel with AWS Marketplace, and then you can use that in your negotiations with AWS to get better pricing overall. So that's how we >> Do it. So customers are driving this, it sounds like. Correct, for sure. So they're looking at this as saying, hey, I'm gonna just get purchasing power with all my relationships, because it's a solution architecture market, right? >> Yeah, it makes sense. Because most customers will have a primary and secondary cloud provider. If they can consolidate, you know, multiple ISV spend through that same primary provider, you get pricing >> Power. Okay, Joel, we're gonna date ourselves, at least I will. So back in the old days, it used to be, do a Barney deal with someone: hey, let's go to market together. You gotta get paper, you do a biz dev deal, and then you gotta say, okay, now let's coordinate our sales teams, a lot of moving parts.
So what you're getting at here is that the alternative for Databricks, or any company, is to go find those partners and do deals, versus now Amazon is the center point for the customer, so that you can still do those joint deals. But this seems to be flipping the script a little bit. >> Well, it is, but we still have VARs and consulting partners that are doing implementation work, very valuable advisory work, that can actually work with Marketplace through the CPPO offering. So the marketplace allows multiple ways to procure your >> Solution. So it doesn't change your business structure. It just makes it more efficient. That's >> Correct. >> That's a great way to say it. Yeah. >> That's great. So that just makes it more efficient. So you guys are actually incented to point customers to the marketplace. >> Yes. >> Absolutely. Economically. Yeah. >> Economically, it's the right thing to do for the customer. It's the right thing to do for our relationship with Amazon, especially when it comes back to co-selling, right? Because Amazon now is leaning in with ISVs and making recommendations for, you know, an ISV solution, and our teams are working backwards from those use cases, you know, to collaborate and land them. >> Yeah, I wanna get that out there. Go ahead, Joel. >> So one of the other things I might add to that too, you know, and why this is advantageous for companies like Databricks to work through the marketplace, is it makes it so much easier for customers to deploy a solution. It's literally one click through the marketplace to get Databricks stood up inside of your environment. And so if you're looking at how do I help customers most rapidly adopt these solutions in the AWS cloud, the marketplace is a fantastic accelerator to that. >> You know, it's interesting. I wanna bring this up and get your reaction to it, because to me, I think this is the future of procurement. So from a procurement standpoint, I mean, again, dating myself, EDI back in the old days, you know, all that craziness. Now this is all the internet, basically through the console. I get the infrastructure side, you know, spin up and provision some servers, all been good. You guys have played well there in the marketplace. But now, as we get into more of what I call the business apps, and they brought this up on stage, a little nuance: most enterprises aren't yet there of integrating tech on the business apps into the stack. This is where I think you guys are a use case of success, where you guys have been successful with data integration. It's an integrator's dilemma, not an innovator's dilemma. So like, I want to integrate, so now I have integration points with Databricks, but I want to put an app in there. I want to provision an application, but it has to be built. You don't buy it. You build, you gotta build stuff. And this is the nuance. What's your reaction to that? Am I getting this right, or am I off? Because no one's gonna be buying software like they used to; they buy software to integrate it. >> Yeah. >> No, because everything's integrated. >> I think AWS has done a great job at creating a partner ecosystem, right, to give customers the right tools for the right jobs. And those might be with third parties. Databricks is doing the same thing with our Partner Connect program, right? We've got partners like Fivetran and dbt that, you know, augment and enhance our platform.
And so you're looking at multi-ISV architectures, and all of that can be procured through the AWS Marketplace. >> Yeah. It's almost like, you know, bundling and unbundling. I was talking about this with Dave Vellante about Supercloud, which is, why wouldn't a customer want the best solution in their architecture, period? And it's classic: if someone's got API security or an API gateway, well, you know, I don't wanna be forced to buy something because it's part of a suite. And that's where you see things get suboptimized, where someone dominates a category and they say, oh, you gotta buy my version of this. Yeah. >> Joel and I were talking, we're actually saying what's really important about Databricks is that customers control the data, right? You wanna comment on that? >> Yeah. What you're pushing on there, we think, is exactly the way the market is gonna go: customers want a lot of control over how they build their data stack, and everyone's unique in what tools are the right ones for them. And so one of the places where, philosophically, I think Databricks and AWS have really lined up is we both take an approach that you should be able to have maximum flexibility on the platform. And as we think about the lakehouse, one thing we've always been extremely committed to as a company is building the data platform on an open foundation. And we do that primarily through Delta Lake, and making sure that, to Jack's point, with Databricks the data is always in your control, and it's always stored in a completely open format. And that is one of the things that's allowed Databricks to have the breadth of integrations that it has with all the other data tools out there, because you're not tied into any proprietary format, but instead are able to take advantage of all the innovation that's happening out there in the open source ecosystem. >> When you see other solutions out there that aren't as open as you guys, and you guys are very open, by the way, we love that too, we think that's a great strategy, but what am I foreclosing if I go with something else that's not as open? What's the customer's downside, as you think about what's around the corner in the industry? Because if you believe it's gonna be open, open source, which I think open source software is the software industry, and integration is a big deal, because software's gonna be plentiful. Let's face it, it's a good time to be in the software business, and cloud's booming. So what's the downside, from your Databricks perspective? If a buyer is clicking on Databricks versus that alternative, what potentially should they be nervous about down the road if they go with a more proprietary or locked-in approach? >> Well, I think the challenge with proprietary ecosystems is you become beholden to the ability of that provider to both build relationships and convince other vendors that they should invest in that format. But you're also then beholden to the pace at which that provider is able to innovate. And I think we've seen lots of times over history where, you know, a proprietary format may run ahead for a while on a lot of innovation, but as that market control begins to solidify, that desire to innovate begins to degrade, whereas in the open format... >> So, extract rents versus innovation. >> Exactly. Yeah, exactly. >> But I'll say it, in the open world, you know, you have to continue to innovate. Yeah.
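Joel's point about Delta Lake and open formats is easy to see in code. The sketch below, with made-up paths and data and a local Spark session rather than a Databricks cluster, writes a DataFrame as a Delta table and reads it back; because the files underneath are plain Parquet plus a JSON transaction log, any engine that speaks the open Delta protocol can read them.

```python
# Minimal sketch of the open-format idea: paths and data are illustrative only.
# Requires pyspark and the delta-spark package (delta.io), not a Databricks cluster.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-open-format-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

events = spark.createDataFrame(
    [("2022-09-01", "click", 3), ("2022-09-01", "view", 11)],
    ["date", "event_type", "count"],
)

# Write an open Delta table: Parquet data files plus a _delta_log directory.
events.write.format("delta").mode("overwrite").save("/tmp/events_delta")

# Any Delta-capable engine can read the same files back, not just Databricks.
spark.read.format("delta").load("/tmp/events_delta").show()
```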
And the open source world is always innovating. If you look at the last 10 to 15 years, I challenge you to find, you know, an example where the innovation in the data and AI world is not coming from open source. And so by investing in open ecosystems, that means you are always going to be at the forefront of what is the >> Latest. You know, again, not to date myself again, but you look back at the eighties and nineties, the protocol stacks were proprietary. Yeah. You know, SNA was IBM, DECnet was Digital, and then TCP/IP was part of the open systems interconnect movement, revolutionary, and a big part of that, as well as what my school did. And so like, you know, it didn't standardize the whole stack. It stopped at IP and TCP. Yeah. But that helped interoperate; that created a nice de facto standard. So this is a big part of this mid-game. I call it the chessboard, you know, you got the opening game and the mid-game, then you got the end game, and we're not at the end game yet in cloud. >> There's always some form of lock-in, right? Andy Jassy will address it, you know, when making a decision. But if you're gonna make a decision, you want to reduce it; you don't wanna be limited, right? So I would advise a customer that there could be limitations with a proprietary architecture. And if you look at what every customer's trying to become right now, it's an AI-driven business, right? And so it has to do with, can you get that data out of silos? Can you organize it and secure it? And then can you work with data scientists to feed those models, yeah, in a very consistent manner? And so the tools of tomorrow, to Joel's point, will be open, and we want interoperability with those >> Tools. And choice matters too. And I would say that, you know, the argument for why I think Amazon is not as locked in as maybe some other clouds is that they have to compete directly too. Redshift competes directly with a lot of other stuff, but they can't play the bundling game, because customers are getting savvy to the fact that if you try to bundle an inferior product with something else, it may not work great at all, and they're gonna be onto it. This is >> To Amazon's credit: by having these solutions that may compete with native services in Marketplace, they are providing customers with choice, low >> Price, and access to the core value, exactly, which is the >> Hardware, which is their platform. Okay, so I wanna get you guys' thoughts on something else I see emerging. This is, again, kind of a CUBE rumination moment. So on stage, Chris unpacked a lot of stuff. I mean, this marketplace, they're touching a lot of hot buttons here, you know, pricing, compensation, workflows, services behind the curtain. And one of the things he mentioned was resellers, or channel partners, depending on how you talk about it. We believe, Dave and I believe on theCUBE, that the entire indirect sales channel of the industry is gonna be disrupted radically, because those players were selling hardware in the old days, and software; that game is gonna change. You know, you mentioned you guys have a program; I want to get your thoughts on this. We believe that once this gets set up, they can play in this game and bring their services in, which means that the old reseller channels are gonna be rewritten. They're gonna be refactored with these new kinds of access.
Because you've got scale, you've got money, and you've got product, and you've got customers coming into the marketplace. So if you're a reseller that sold computers to data centers, or software, you know, a value-added reseller or VAR business, >> You've gotta evolve. >> You've gotta be here. Yes. How are you guys working with those partners? Because you say you have a part in the marketplace there. How do I make money if I'm a reseller with Databricks and Amazon? Take me through that use case. >> Well, I'll let Joel comment, but I think it's pretty straightforward, right? Customers need expertise. They need know-how. When we're seeing customers do mass migrations to the cloud, or Hadoop-specific migrations, or data transformation implementations, they need expertise from consulting and SI partners. If those consulting and SI partners happen to resell the solution as well, well, that's another aspect of their business, but I really think it is the expertise that the partners bring to help customers get outcomes. >> Joel, the channel is a big opportunity for Amazon to reimagine this. >> For sure, yeah. And I think, you know, to your comment about how resellers take advantage of that, I think what Jack was pushing on is spot on, which is it's becoming more and more about the expertise you bring to the table, and not just transacting the software, but now actually helping customers make the right choices. And we're seeing, you know, SIs begin to be able to resell solutions and finding a lot of opportunity in that. Yeah. And I think we're seeing traditional resellers begin to move into that SI model as well. And that's gonna be the evolution that >> This gets to at the end of the day. It's about services, for sure, for sure. You've got a great service, you're gonna have high gross profits. And >> I think that the managed service provider business is alive and well, right? Because there are a number of customers that want that type of a service. >> I think that's gonna be a really hot button for you guys. I think the way you guys are open, this channel partner services model coming into the fold really kind of makes for that Supercloud-like experience, where you guys now have an ecosystem. And that's my next question. You guys have an ecosystem going on within Databricks, for sure, on top of this ecosystem. How does that work? This is kinda like, it hasn't been written up in business school case studies yet, this is new. What is this? >> I think, you know, what it comes down to is you're seeing ecosystems begin to evolve around the data platforms, and that's gonna be one of the big kind of new horizons for us. As we think about what drives ecosystems, it's going to be around, well, what's the data platform that I'm using, and then all the tools that have to encircle that to get my business done. And so I think there are, you know, absolutely ecosystems inside of the AWS business on all of AWS's services, across data, analytics, and AI. And then to your point, you are seeing ecosystems now arise around Databricks and its Lakehouse platform as well, as customers are looking at, well, if I'm standing these lakehouses up and I'm beginning to invest in this, then I need a whole set of tools that help me get that done as well. >> I mean, you think about ecosystem theory, we're living a whole nother dream, and I'm not kidding.
It hasn't yet been written up for business school case studies, but we're now in a whole nother connective tissue ecology, where you have dependencies, value proposition economics, connectedness. So you have relationships in these ecosystems. >> And I think one of the great things about relationships with these ecosystems is that there's a high degree of overlap. Yeah. So you're seeing that, you know, the way that the cloud business is evolving, the ecosystem partners of Databricks are the same ecosystem partners of AWS. And so as you build these platforms out into the cloud, you're able to really take advantage of best of breed, the broadest set of solutions out there for >> You. Joel, Jack, I love it, because you know what it means: the best ecosystem will win if you keep it open. Sure, you can see everything. If you're gonna do it in the dark, you know, you don't know the outcome. I mean, this is really what we're talking about. >> And John, can I just add that when I was at Amazon, we had a theory that there's buyers and builders, right? There are very innovative companies that want to build things themselves. We're seeing now that builders want to buy a platform, right? Yeah. And so there's a platform decision being made, and the ecosystem is gonna evolve around the >> Platform. Yeah, and I totally agree. And the word innovation gets kicked around. That's why, you know, when we had our Supercloud panel, it was called the innovator's dilemma, with a slash through it, called the integrator's dilemma. Innovation is the digital transformation. So absolutely, that becomes cliche in a way, but it really becomes more of: are you open? Are you integrating? If APIs are the connective tissue, what does automation, what does the service mesh look like? I mean, a whole nother set of thinking goes on in these new ecosystems and these new products. >> And that thinking has been born in Delta Sharing, right? So the idea that you can have a multi-cloud implementation of Databricks, and actually share data between those two different clouds, that is the next layer on top of the native cloud >> Solution. Well, Databricks has done a good job of building on top of the goodness of, and the CapEx gift from, AWS, but you guys have done a great job taking that and building differentiation into the product. You've got a great customer base and a great growing ecosystem. And again, I think it's a shining example of what every enterprise is going to do: build on top of something, get that operating model, and drive revenue. >> Yeah. >> Whether you're Goldman Sachs or Capital One or XYZ Corporation, >> S&P Global, NASDAQ, right. We've got, you know, the biggest verticals in the world solving tough problems with Databricks. I think we'd be remiss, because if Ali were here, he would really want to thank Amazon for all of the investments across all of the different functions, whether it's the relationship we have with our engineering and service teams, yeah, our marketing teams, you know, product development, and we're gonna be at re:Invent, a big presence at re:Invent. We're looking forward to seeing you there again.
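Jack's Delta Sharing point, sharing data between clouds without copying it into a proprietary system, is also worth a concrete sketch. The open-source delta-sharing Python client reads a share that a data provider has published; the profile file and the share, schema, and table names below are placeholders, not a real published dataset.

```python
# Hypothetical consumer-side sketch of Delta Sharing. The profile file is a small
# JSON credential the data provider hands out; share/schema/table names are made up.
import delta_sharing

profile = "config.share"  # downloaded from the provider; contains endpoint + bearer token
table_url = f"{profile}#sales_share.transactions.daily_orders"

# Load the shared table straight into pandas, regardless of which cloud hosts it.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())

# The same client can also list everything the provider has exposed to us.
client = delta_sharing.SharingClient(profile)
for table in client.list_all_tables():
    print(table.share, table.schema, table.name)
```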
Great to see you at the check. Thanks for having us. Thanks. Going. Okay. Cube coverage here. The world's changing as APN comes to give the marketplace for a new partner organization at Amazon web services, the Cube's got a covered. This should be a very big growing ecosystem as this continues, billions of being sold through the marketplace. Of course the buyers are happy as well. So we've got it all covered. I'm John furry, your host of the cube. Thanks for watching.

Published Date : Sep 21 2022


Sam Kassoumeh, SecurityScorecard | CUBE Conversation


 

(upbeat music) >> Hey everyone, welcome to this CUBE conversation. I'm John Furrier, your host of theCUBE here in Palo Alto, California. We've got Sam Kassoumeh, co-founder and chief operating office at SecurityScorecard here remotely coming in. Thanks for coming on Sam. Security, Sam. Thanks for coming on. >> Thank you, John. Thanks for having me. >> Love the security conversations. I love what you guys are doing. I think this idea of managed services, SaaS. Developers love it. Operation teams love getting into tools easily and having values what you guys got with SecurityScorecard. So let's get into what we were talking before we came on. You guys have a unique solution around ratings, but also it's not your grandfather's pen test want to be security app. Take us through what you guys are doing at SecurityScorecard. >> Yeah. So just like you said, it's not a point in time assessment and it's similar to a traditional credit rating, but also a little bit different. You can really think about it in three steps. In step one, what we're doing is we're doing threat intelligence data collection. We invest really heavily into R&D function. We never stop investing in R&D. We collect all of our own data across the entire IPV force space. All of the different layers. Some of the data we collect is pretty straightforward. We might crawl a website like the example I was giving. We might crawl a website and see that the website says copyright 2005, but we know it's 2022. Now, while that signal isn't enough to go hack and break into the company, it's definitely a signal that someone might not be keeping things up to date. And if a hacker saw that it might encourage them to dig deeper. To more complex signals where we're running one of the largest DNS single infrastructures in the world. We're monitoring command and control malware and its behaviors. We're essentially collecting signals and vulnerabilities from the entire IPV force space, the entire network layer, the entire web app player, leaked credentials. Everything that we think about when we talk about the security onion, we collect data at each one of those layers of the onion. That's step one. And we can do all sorts of interesting insights and information and reports just out of that thread intel. Now, step two is really interesting. What we do is we go identify the attack surface area or what we call the digital footprint of any company in the world. So as a customer, you can simply type in the name of a company and we identify all of the domains, sub domains, subsidiaries, organizations that are identified on the internet that belong to that organization. So every digital asset of every company we go out and we identify that and we update that every 24 hours. And step three is the rating. The rating is probabilistic and it's deterministic. The rating is a benchmark. We're looking at companies compared to their peers of similar size within the same industry and we're looking at how they're performing. And it's probabilistic in the sense that companies that have an F are about seven to eight times more likely to experience a breach. We're an A through F scale, universally understood. Ds and Fs, more likely to experience a breach. A's we see less breaches now. Like I was mentioning before, it doesn't mean that an F is always going to get hacked or an A can never get hacked. If a nation state targets an A, they're going to eventually get in with enough persistence and budget. 
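To make the grading mechanics concrete, here is a deliberately toy sketch of how externally observable signals, including the stale copyright-year example above, could roll up into an A-to-F grade. The factors, weights, and thresholds are invented for illustration; this is not SecurityScorecard's actual methodology.

```python
# Toy illustration only: invented signals, weights, and grade cutoffs.
import datetime
import re

def stale_copyright(html: str, grace_years: int = 2) -> bool:
    """Flag a page whose newest copyright year lags well behind today."""
    years = [int(y) for y in re.findall(r"(?:copyright|\u00a9)\s*(\d{4})", html, re.I)]
    if not years:
        return False
    return max(years) < datetime.date.today().year - grace_years

def letter_grade(findings: dict) -> str:
    """Convert per-category finding counts into a single letter grade."""
    score = 100
    weights = {"open_ports": 3, "leaked_credentials": 10,
               "expired_tls": 5, "stale_content": 1}
    for category, count in findings.items():
        score -= weights.get(category, 2) * count
    score = max(score, 0)
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

findings = {
    "open_ports": 2,
    "leaked_credentials": 1,
    "stale_content": int(stale_copyright("<footer>Copyright 2005 Acme</footer>")),
}
print(letter_grade(findings))  # -> "B" for this example (score 83)
```

The real rating blends many more signal categories and benchmarks companies against peers of similar size and industry, but the shape of the computation, deduct by weighted finding and bucket into letters, is the same idea.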
If the pizza shop on the corner has an F, they may never get hacked because no one cares, but natural correlation, more doors open to the house equals higher likelihood someone unauthorized is going to walk in. So it's really those three steps. The collection, we map it to the surface area of the company and then we produce a rating. Today we're rating about 12 million companies every single day. >> And how many people do you have as customers? >> We have 50,000 organizations using us, both free and paid. We have a freemium tier where just like Yelp or a LinkedIn business profile. Any company in the world has a right to go claim the score. We never extort companies to fix the score. We never charge a company to see the score or fix it. Any company in a world without paying us a cent can go in. They can understand what we're seeing about them, what a hacker could see about their environment. And then we empower them with the tools to fix it and they can fix it and the score will go up. Now companies pay us because they want enterprise capabilities. They want additional modules, insights, which we can talk about. But in total, there's about 50,000 companies that at any given point in time, they're monitoring about a million and a half organizations of the 12 million that we're rating. It sounds like Google. >> If you want to look at it. >> Sounds like Google Search you got going on there. You got a lot of search and then you create relevance, a score, like a ranking. >> That's precisely it. And that's exactly why Google ventures invested in us in our Series B round. And they're on our board. They looked and they said, wow, you guys are building like a Google Search engine over some really impressive threat intelligence. And then you're distilling it into a score which anybody in the world can easily understand. >> Yeah. You obviously have page rank, which changed the organic search business in the late 90s, early 2000s and the rest is history. AdWords. >> Yeah. >> So you got a lot of customer growth there potentially with the opt-in customer view, but you're looking at this from the outside in. You're looking at companies and saying, what's your security posture? Getting a feel for what they got going on and giving them scores. It sounds like it's not like a hacker proof. It's just more of a indicator for management and the team. >> It's an indicator. It's an indicator. Because today, when we go look at our vendors, business partners, third parties were flying blind. We have no idea how they're doing, how they're performing. So the status quo for the last 20 years has been perform a risk assessments, send a questionnaire, ask for a pen test and an audit evidence. We're trying to break that cycle. Nobody enjoys it. They're long tail. It's a trust without verification. We don't really like that. So we think we can evolve beyond this point in time assessment and give a continuous view. Now, today, historically, we've been outside in. Not intrusive, and we'll show you what a hacker can see about an environment, but we have some cool things percolating under the hood that give more of a 360 view outside, inside, and also a regulatory compliance view as well. >> Why is the compliance of the whole third party thing that you're engaging with important? Because I mean, obviously having some sort of way to say, who am I dealing with is important. I mean, we hear all kinds of things in the security landscape, oh, zero trust, and then we hear trust, supply chain, software risk, for example. 
There's a huge trust factor there. I need to trust this tool or this container. And then you got the zero trust, don't trust anything. And then you've got trust and verify. So you have all these different models and postures, and it just seems hard to keep up with. >> Sam: It's so hard. >> Take us through what that means 'cause pen tests, SOC reports. I mean the clouds help with the SOC report, but if you're doing agile, anything DevOps, you basically would need to do a pen test like every minute. >> It's impossible. The market shifted to the cloud. We watched and it still is. And that created a lot of complexity, not to date myself. But when I was starting off as a security practitioner, the data center used to be in the basement and I would have lunch with the database administrator and we talk about how we were protecting the data. Those days are long gone. We outsource a lot of our key business practices. We might use, for example, ADP for a payroll provider or Dropbox to store our data. But we've shifted and we no longer no who that person is that's protecting our data. They're sitting in another company in another area unknown. And I think about 10, 15 years ago, CISOs had the realization, Hey, wait a second. I'm relying on that third party to function and operate and protect my data, but I don't have any insight, visibility or control of their program. And we were recommended to use questionnaires and audit forms, and those are great. It's good hygiene. It's good practice. Get to know the people that are protecting your data, ask them the questions, get the evidence. The challenge is it's point in time, it's limited. Sometimes the information is inaccurate. Not intentionally, I don't think people intentionally want to go lie, but Hey, if there's a $50 million deal we're trying to close and it's dependent on checking this one box, someone might bend a rule a little bit. >> And I said on theCUBE publicly that I think pen test reports are probably being fudged and dates being replicated because it's just too fast. And again, today's world is about velocity on developers, trust on the code. So you got all kinds of trust issues. So I think verification, the blue check mark on Twitter kind of thing going on, you're going to see a lot more of that and I think this is just the beginning. I think what you guys are doing is scratching the surface. I think this outside in is a good first step, but that's not going to solve the internal problem that still coming and have big surface areas. So you got more surface area expanding. I mean, IOT's coming in, the Edge is coming fast. Never mind hybrid on-premise cloud. What's your organizations do to evaluate the risk and the third party? Hands shaking, verification, scorecards. Is it like a free look here or is it more depth to it? Do you double click on it? Take us through how this evolves. >> John it's become so disparate and so complex, Because in addition to the market moving to the cloud, we're now completely decentralized. People are working from home or working hybrid, which adds more endpoints. Then what we've learned over time is that it's not just a third party problem, because guess what? My third parties behind the scenes are also using third parties. So while I might be relying on them to process my customer's payment information, they're relying on 20 vendors behind the scene that I don't even know about. I might have an A, they might have an A. It's really important that we expand beyond that. 
So coming out of our innovation hub, we've developed a number of key capabilities that allow us to expand the value for the customer. One, you mentioned, outside in is great, but it's limited. We can see what a hacker sees and that's helpful. It gives us pointers where to maybe go ask double click, get comfort, but there's a whole nother world going on behind the firewall inside of an organization. And there might be a lot of good things going on that CISO security teams need to be rewarded for. So we built an inside module and component that allows teams to start plugging in the tools, the capabilities, keys to their cloud environments. And that can show anybody who's looking at the scorecard. It's less like a credit score and more like a social platform where we can go and look at someone's profile and say, Hey, how are things going on the inside? Do they have two-factor off? Are there cloud instances configured correctly? And it's not a point in time. This is a live connection that's being made. This is any point in time, we can validate that. The other component that we created is called an evidence locker. And an evidence locker, it's like a secure vault in my scorecard and it allows me to upload things that you don't really stand for or check for. Collateral, compliance paperwork, SOC 2 reports. Those things that I always begrudgingly email. I don't want to share with people my trade secrets, my security policies, and have it sit on their exchange server. So instead of having to email the same documents out, 300 times a month, I just upload them to my evidence locker. And what's great is now anybody following my scorecard can proactively see all the great things I'm doing. They see the outside view. They see the inside view. They see the compliance view. And now they have the holy grail view of my environment and can have a more intelligent conversation. >> Access to data and access methods are an interesting innovation area around data lineage. Tracing is becoming a big thing. We're seeing that. I was just talking with the Snowflake co-founder the other day here in theCUBE about data access and they're building a proprietary mesh on top of the clouds to figure out, Hey, I don't want to give just some tool access to data because I don't know what's on the other side of those tools. Now they had a robust ecosystem. So I can see this whole vendor risk supply chain challenge around integration as a huge problem space that you guys are attacking. What's your reaction to that? >> Yeah. Integration is tricky because we want to be really particular about who we allow access into our environment or where we're punching holes in the firewall and piping data out out of the environment. And that can quickly become unwieldy just with the control that we have. Now, if we give access to a third party, we then don't have any control over who they're sharing our information with. When I talk to CISOs today about this challenge, a lot of folks are scratching their head, a lot of folks treat this as a pet project. Like how do I control the larger span beyond just the third parties? How do I know that their software partners, their contractors that they're working with building their tools are doing a good job? And even if I know, meaning, John, you might send me a list of all of your vendors. I don't want to be the bad guy. I don't really have the right to go reach out to my vendors' vendors knocking on their door saying, hi, I'm Sam. I'm working with John and he's your customer. 
And I need to make sure that you're protecting my data. It's an awkward chain of conversation. So we're building some tools that help the security teams hold the entire ecosystem accountable. We actually have a capability called automatic vendor discovery. We can go detect who are the vendors of a company based on the connections that we see, the inbound and outbound connections. And what often ends up happening John is we're bringing to the attention to our customers, awareness about inbound and outbound connections. They had no idea existed. There were the shadow IT and the ghost vendors that were signed without going through an assessment. We detect those connections and then they can go triage and reduce the risk accordingly. >> I think that risk assessment of vendors is key. I was just reading a story about this, about how a percentage, I forget the number. It was pretty large of applications that aren't even being used that are still on in companies. And that becomes a safe haven for bad actors to hang out and penetrate 'cause they get overlooked 'cause no one's using them, but they're still online. And so there's a whole, I called cleaning up the old dead applications that are still connected. >> That happens all the time. Those applications also have applications that are dead and applications that are alive may also have users that are dead as well. So you have that problem at the application level, at the user level. We also see a permutation of what you describe, which is leftover artifacts due to configuration mistakes. So a company just put up a new data center, a satellite office in Singapore and they hired a team to go install all the hardware. Somebody accidentally left an administrative portal exposed to the public internet and nobody knew the internet works, the lights are on, the office is up and running, but there was something that was supposed to be turned off that was left turned on. So sometimes we bring to company's attention and they say, that's not mine. That doesn't belong to me. And we're like, oh, well, we see some reason why. >> It's his fault. >> Yeah and they're like, oh, that was the contractor set up the thing. They forgot to turn off the administrative portal with the default login credentials. So we shut off those doors. >> Yeah. Sam, this is really something that's not talked about a lot in the industry that we've become so reliant on managed services and other people, CISOs, CIOs, and even all departments that have applications, even marketing departments, they become reliant on agencies and other parties to do stuff for them which inherently just increases the risk here of what they have. So there inherently could be as secure as they could be, but yet exposed completely on the other side. >> That's right. We have so many virtual touch points with our partners, our vendors, our managed service providers, suppliers, other third parties, and all the humans that are involved in that mix. It creates just a massive ripple effect. So everybody in a chain can be doing things right. And if there's one bad link, the whole chain breaks. I know it's like the cliche analogy, but it rings true. >> Supply chain trust again. Trust who you trust. Let's see how those all reconcile. So Sam, I have to ask you, okay, you're a former CISO. You've seen many movies in the industry. Co-founded this company. You're in the front lines. You've got some cool things happening. I can almost imagine the vision is a lot more than just providing a rating and score. 
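Sam's automatic vendor discovery point, inferring who a company's vendors are from the connections its infrastructure actually makes, can be sketched as a trivial aggregation over egress logs. The log records, domains, and "approved vendor" list below are invented; the real capability obviously draws on far richer telemetry.

```python
# Toy sketch: surface likely third-party vendors from outbound connection logs.
from collections import Counter

egress_log = [
    {"src": "app-01", "dest_domain": "api.payroll-provider.example"},
    {"src": "app-01", "dest_domain": "api.payroll-provider.example"},
    {"src": "web-02", "dest_domain": "cdn.analytics-tool.example"},
    {"src": "db-03",  "dest_domain": "backup.unknown-saas.example"},
    {"src": "web-02", "dest_domain": "cdn.analytics-tool.example"},
]

approved_vendors = {"api.payroll-provider.example"}  # went through a risk assessment

traffic = Counter(record["dest_domain"] for record in egress_log)
for domain, hits in traffic.most_common():
    status = "approved" if domain in approved_vendors else "ghost vendor - review"
    print(f"{domain:35s} {hits:3d} connections  [{status}]")
```

Anything that shows up with real traffic but no prior assessment is exactly the shadow IT and ghost vendor case described above.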
I'm sure there's more vision around intelligence, automation. You mentioned vault, wallet capabilities, exchanging keys. We heard at re:Inforce automated reasoning, metadata reasoning. You got all kinds of crypto and quantum. I mean, there's a lot going on that you can tap into. What's your vision where you see SecurityScorecard going? >> When we started the company, the rating was the thing that we sold and it was a language that helped technical and non-technical folks alike level the playing field and talk about risk and use it to drive their strategy. Today, the rating just opens the door to that discussion and there's so much additional value. I think in the next one to two years, we're going to see the rating becomes standardized. It's going to be more frequently asked or even required or leveraged by key decision makers. When we're doing business, it's going to be like, Hey, show me your scorecard. So I'm seeing the rating get baked more and more the lexicon of risk. But beyond the rating, the goal is really to make a world a safer place. Help transform and rise the tide. So all ships can lift. In order to do that, we have to help companies, not only identify the risk, but also rectify the risk. So there's tools we build to really understand the full risk. Like we talked about the inside, the outside, the fourth parties, fifth parties, the real ecosystem. Once we identified where are all the Fs and bad things, will then what? So couple things that we're doing. We've launched a pro serve arm to help companies. Now companies don't have to pay to fix the score. Anybody, like I said, can fix the score completely free of charge, but some companies need help. They ask us and they say, Hey, I'm looking for a trusted advisor. A Sherpa, a guide to get me to a better place or they'll say, Hey, I need some pen testing services. So we've augmented a service arm to help accelerate the remediation efforts. We're also partnered with different industries that use the rating as part of a larger picture. The cyber rating isn't the end all be all. When companies are assessing risk, they may be looking at a financial ratings, ESG ratings, KYC AML, cyber security, and they're trying to form a complete risk profile. So we go and we integrate into those decision points. Insurance companies, all the top insurers, re-insurers, brokers are leveraging SecurityScorecard as an ingredient to help underwrite for cyber liability insurance. It's not the only ingredient, but it helps them underwrite and identify the help and price the risk so they can push out a policy faster. First policy is usually the one that's signed. So time to quote is an important metric. We help to accelerate that. We partner with credit rating agencies like Fitch, who are talking to board members, who are asking, Hey, I need a third party, independent verification of what my CISO is saying. So the CISO is presenting the rating, but so are the proxy advisors and the ratings companies to the board. So we're helping to inform the boards and evolve how they're thinking about cyber risk. We're helping with the insurance space. I think that, like you said, we're only scratching the surface. I can see, today we have about 50,000 companies that are engaging a rating and there's no reason why it's not going to be in the millions in just the next couple years here. 
>> And you got the capability to bring in more telemetry and see the new things, bring that into the index, bring that into the scorecard and then map that to potential any vulnerabilities. >> Bingo. >> But like you said, the old days, when you were dating yourself, you were in a glass room with a door lock and key and you can see who's two folks in there having lunch, talking database. No one's going to get hurt. Now that's gone, right? So now you don't know who's out there and machines. So you got humans that you don't know and you got machines that are turning on and off services, putting containers out there. Who knows what's in those payloads. So a ton of surface area and complexity to weave through. I mean only is going to get done with automation. >> It's the only way. Part of our vision includes not attempting to make a faster questionnaire, but rid ourselves of the process all altogether and get more into the continuous assessment mindset. Now look, as a former CISO myself, I don't want another tool to log into. We already have 50 tools we log into every day. Folks don't need a 51st and that's not the intent. So what we've done is we've created today, an automation suite, I call it, set it and forget it. Like I'm probably dating myself, but like those old infomercials. And look, and you've got what? 50,000 vendors business partners. Then behind there, there's another a hundred thousand that they're using. How are you going to keep track of all those folks? You're not going to log in every day. You're going to set rules and parameters about the things that you care about and you care depending on the nature of the engagement. If we're exchanging sensitive data on the network layer, you might care about exposed database. If we're doing it on the app layer, you're going to look at application security vulnerabilities. So what our customers do is they go create rules that say, Hey, if any of these companies in my tier one critical vendor watch list, if they have any of these parameters, if the score drops, if they drop below a B, if they have these issues, pick these actions and the actions could be, send them a questionnaire. We can send the questionnaire for you. You don't have to send pen and paper, forget about it. You're going to open your email and drag the Excel spreadsheet. Those days are over. We're done with that. We automate that. You don't want to send a questionnaire, send a report. We have integrations, notify Slack, create a Jira ticket, pipe it to ServiceNow. Whatever system of record, system of intelligence, workflow tools companies are using, we write in and allow them to expedite the whole. We're trying to close the window. We want to close the window of the attack. And in order to do that, we have to bring the attention to the people as quickly as possible. That's not going to happen if someone logs in every day. So we've got the platform and then that automation capability on top of it. >> I love the vision. I love the utility of a scorecard, a verification mark, something that could be presented, credential, an image, social proof. To security and an ongoing way to monitor it, observe it, update it, add value. I think this is only going to be the beginning of what I would see as much more of a new way to think about credentialing companies. >> I think we're going to reach a point, John, where and some of our customers are already doing this. They're publishing their scorecard in the public domain, not with the technical details, but an abstracted view. 
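The "set it and forget it" rules Sam describes, watch a vendor list and, when a score drops below a threshold, fire a questionnaire, a Slack alert, or a Jira ticket, amount to a small policy engine. Here is a toy sketch of that control flow; the rule format and stubbed actions are invented, not the product's actual configuration syntax.

```python
# Toy policy engine for vendor-watchlist automation. Rule shape and actions are invented.
GRADE_ORDER = {"A": 5, "B": 4, "C": 3, "D": 2, "F": 1}

rules = [
    {"watchlist": "tier1-critical", "min_grade": "B",
     "actions": ["send_questionnaire", "create_jira_ticket"]},
    {"watchlist": "all-vendors", "min_grade": "D",
     "actions": ["notify_slack"]},
]

def dispatch(action: str, vendor: str, grade: str) -> None:
    # Stub: in practice these would call the Jira/Slack/ServiceNow integrations.
    print(f"[{action}] {vendor} dropped to {grade}")

def evaluate(vendor: str, grade: str, watchlists: set, rules=rules) -> None:
    for rule in rules:
        if rule["watchlist"] not in watchlists:
            continue
        if GRADE_ORDER[grade] < GRADE_ORDER[rule["min_grade"]]:
            for action in rule["actions"]:
                dispatch(action, vendor, grade)

evaluate("acme-payments.example", "C", {"tier1-critical", "all-vendors"})
# -> sends a questionnaire and opens a Jira ticket, because C is below the B floor
#    for the tier-1 watchlist; the all-vendors rule does not fire since C is above D.
```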
And thought leaders, what they're doing is they're saying, Hey, before you send me anything, look at my scorecard securityscorecard.com/securityrating, and then the name of their company, and it's there. It's in the public domain. If somebody Googles scorecard for certain companies, it's going to show up in the Google Search results. They can mitigate probably 30, 40% of inbound requests by just pointing to that thing. So we want to give more of those tools, turn security from a reactive to a proactive motion. >> Great stuff, Sam. I love it. I'm going to make sure when you hit our site, our company, we've got camouflage sites so we can make sure you get the right ones. I'm sure we got some copyright dates. >> We can navigate the decoys. We can navigate the decoys sites. >> Sam, thanks for coming on. And looking forward to speaking more in depth on showcase that we have upcoming Amazon Startup Showcase where you guys are going to be presenting. But I really appreciate this conversation. Thanks for sharing what you guys are working on. We really appreciate. Thanks for coming on. >> Thank you so much, John. Thank you for having me. >> Okay. This is theCUBE conversation here in Palo Alto, California. Coming in from New York city is the co-founder, chief operating officer of securityscorecard.com. I'm John Furrier. Thanks for watching. (gentle music)

Published Date : Aug 18 2022


Nadir Izrael, Armis | Manage Risk with the Armis Platform


 

(upbeat music) >> Today's organizations are overwhelmed by the number of different assets connected to their networks, which now include not only IT devices and assets, but also a lot of unmanaged assets, like cloud, IoT, building management systems, industrial control systems, medical devices, and more. That's not just it, there's more. We're seeing massive volume of threats, and a surge of severe vulnerabilities that put these assets at risk. This is happening every day. And many, including me, think it's only going to get worse. The scale of the problem will accelerate. Security and IT teams are struggling to manage all these vulnerabilities at scale. With the time it takes to exploit a new vulnerability, combined with the lack of visibility into the asset attack surface area, companies are having a hard time addressing the vulnerabilities as quickly as they need. This is today's special CUBE program, where we're going to talk about these problems and how they're solved. Hello, everyone. I'm John Furrier, host of theCUBE. This is a special program called Managing Risk Across Your Extended Attack Surface Area with Armis, new asset intelligence platform. To start things off, let's bring in the co-founder and CTO of Armis, Nadir Izrael. Nadir, great to have you on the program. >> Yeah, thanks for having me. >> Great success with Armis. I want to just roll back and just zoom out and look at, what's the big picture? What are you guys focused on? What's the holy grail? What's the secret sauce? >> So Armis' mission, if you will, is to solve to your point literally one of the holy grails of security teams for the past decade or so, which is, what if you could actually have a complete, unified, authoritative asset inventory of everything, and stressing that word, everything. IT, OT, IoT, everything on kind of the physical space of things, data centers, virtualization, applications, cloud. What if you could have everything mapped out for you so that you can actually operate your organization on top of essentially a map? I like to equate this in a way to organizations and security teams everywhere seem to be running, basically running the battlefield, if you will, of their organization, without an actual map of what's going on, with charts and graphs. So we are here to provide that map in every aspect of the environment, and be able to build on top of that business processes, products, and features that would assist security teams in managing that battlefield. >> So this category, basically, is a cyber asset attack surface management kind of focus, but it really is defined by this extended asset attack surface area. What is that? Can you explain that? >> Yeah, it's a mouthful. I think the CAASM, for short, and Gartner do love their acronyms there, but CAASM, in short, is a way to describe a bit of what I mentioned before, or a slice out of it. It's the whole part around a unified view of the attack surface, where I think where we see things, and kind of where Armis extends to that is really with the extended attack surface. That basically means that idea of, what if you could have it all? What if you could have both a unified view of your environment, but also of every single thing that you have, with a strong emphasis on the completeness of that picture? If I take the map analogy slightly more to the extreme, a map of some of your environment isn't nearly as useful as a map of everything. 
If you had to, in your own kind of map application, you know, chart a path from New York to whichever your favorite surrounding city, but it only takes you so far, and then you sort of need to do the rest of it on your own, not nearly as effective, and in security terms, I think it really boils down into you can't secure what you can't see. And so from an Armis perspective, it's about seeing everything in order to protect everything. And not only do we discover every connected asset that you have, we provide a risk rating to every single one of them, we provide a criticality rating, and the ability to take action on top of these things. >> Having a map is huge. Everyone wants to know what's in their inventory, right, from a risk management standpoint, also from a vulnerability perspective. So I totally see that, and I can see that being the holy grail, but on the vulnerability side, you got to see everything, and you guys have new stuff around vulnerability management. What's this all about? What kind of gaps are you seeing that you're filling in the vulnerability side, because, okay, I can see everything. Now I got to watch out for threat vectors. >> Yeah, and I'd say a different way of asking this is, okay, vulnerability management has been around for a while. What the hell are you bringing into the mix that's so new and novel and great? So I would say that vulnerability scanners of different sorts have existed for over a decade. And I think that ultimately what Armis brings into the mix today is how do we fill in the gaps in a world where critical infrastructure is in danger of being attacked by nation states these days, where ransomware is an everyday occurrence, and where I think credible, up-to-the-minute, and contextualize vulnerability and risk information is essential. Scanners, or how we've been doing things for the last decade, just aren't enough. I think the three things that Armis excels at and completes the security staff today on the vulnerability management side are scale, reach, and context. Scale, meaning ultimately, and I think this is of no news to any enterprise, environments are huge. They are beyond huge. When most of the solutions that enterprises use today were built, they were built for thousands, or tens of thousands of assets. These days, we measure enterprises in the billions, billions of different assets, especially if you include how applications are structured, containers, cloud, all that, billions and billions of different assets, and I think that, ultimately, when the latest and greatest in catastrophic new vulnerabilities come out, and sadly, that's a monthly occurrence these days. You can't just now wait around for things to kind of scan through the environment, and figure out what's going on there. Real time images of vulnerabilities, real time understanding of what the risk is across that entire massive footprint is essential to be able to do things, and if you don't, then lots and lots of teams of people are tasked with doing this day in, day out, in order to accomplish the task. The second thing, I think, is the reach. Scanners can't go everywhere. They don't really deal well with environments that are a mixed IT/OT, for instance, like some of our clients deal with. They can't really deal with areas that aren't classic IT. And in general, these days over 70% of assets are in fact of the unmanaged variety, if you will. 
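
To make the idea of a rated, unified inventory concrete, here is a minimal Python sketch of what an asset record carrying risk and criticality ratings might look like. The field names, scales, and sample assets are illustrative assumptions, not Armis' actual data model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical shape of a unified asset inventory entry.
# Field names and scales are illustrative only, not Armis' schema.
@dataclass
class Asset:
    asset_id: str
    asset_type: str          # e.g. "server", "plc", "medical_device", "container"
    managed: bool            # False for the unmanaged IoT/OT devices mentioned above
    criticality: int         # 1 (low) .. 5 (business critical)
    risk_score: float        # 0.0 .. 10.0, higher = riskier
    applications: List[str] = field(default_factory=list)  # business apps this asset serves

inventory = [
    Asset("srv-001", "server", True, 5, 8.7, ["ERP", "payments"]),
    Asset("cam-104", "ip_camera", False, 2, 6.1, []),
    Asset("plc-017", "plc", False, 4, 7.9, ["line-3 assembly"]),
]

# "You can't secure what you can't see": with a complete inventory, simple
# questions can be answered directly, e.g. list the riskiest unmanaged assets first.
for a in sorted(inventory, key=lambda a: (a.managed, -a.risk_score)):
    print(f"{a.asset_id:8} managed={a.managed!s:5} crit={a.criticality} risk={a.risk_score}")
```
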
So combining different approaches from an Armis standpoint of both passive and active, we reach a tremendous scale, I think, within the environment, and ability to provide or reach that is complete. What if you could have vulnerability management, cover a hundred percent of your environment, and in a very effective manner, and in a very scalable manner? And the last thing really is context. And that's a big deal here. I think that most vulnerability management programs hinge on asset context, on the ability to understand, what are the assets I'm dealing with? And more importantly, what is the criticality of these assets, so I can better prioritize and manage the entire process along the way? So with these things in mind, that's what Armis has basically pulled out is a vulnerability management process. What if we could collect all the vulnerability information from your entire environment, and give you a map of that, on top of that map of assets? Connect every single vulnerability and finding to the relevant assets, and give you a real way to manage that automatically, and in a way that prevents teams of people from having to do a lot of grunt work in the process. >> Yeah, it's like building a search engine, almost. You got the behavioral, contextual. You got to understand what's going on in the environment, and then you got to have the context to what it means relative to the environment. And this is the criticality piece you mentioned, this is a huge differentiator in my mind. I want to unpack that. Understanding what's going on, and then what to pay attention to, it's a data problem. You got that kind of search and cataloging of the assets, and then you got the contextualization of it, but then what alarms do I pay attention to? What is the vulnerability? This is the context. This is a huge deal, because your businesses, your operation's going to have some important pieces, but also it changes on agility. So how do you guys do that? That's, I think, a key piece. >> Yeah, that's a really good question. So asset criticality is a key piece in being able to prioritize the operation. The reason is really simple, and I'll take an example we're all very, very familiar with, and it's been beaten to death, but it's still a good example, which is Log4j, or Log4Shell. When that came out, hundreds of people in large organizations started mapping the entire environment on which applications have what aspect of Log4j. Now, one of the key things there is that when you're doing that exercise for the first time, there are literally millions of systems in a typical enterprise that have Log4j in them, but asset criticality and the application and business context are key here, because some of these different assets that have Log4j are part of your critical business function and your critical business applications, and they deserve immediate attention. Some of them, or some Git server of some developer somewhere, don't warrant quite the same attention or criticality as others. Armis helps by providing the underlying asset map as a built-in aspect of the process. It maps the relationships and dependencies for you. It pulls together and clusters together. What applications does each asset serve? So I might be looking at a server and saying, okay, this server, it supports my ERP system. It supports my production applications to be able to serve my customers. It serves maybe my .com website. 
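
The Log4j example above lends itself to a short illustration: the same finding should rank very differently on a business-critical ERP server than on a disposable dev box. The weighting formula below is a made-up placeholder, not Armis' scoring algorithm, and the second CVE is hypothetical.

```python
# Toy prioritization pass over vulnerability findings, in the spirit of the
# Log4j discussion above. Criticality is on a 1-5 scale as in the earlier sketch.
findings = [
    {"asset": "srv-001",   "cve": "CVE-2021-44228", "cvss": 10.0, "criticality": 5},  # ERP server
    {"asset": "git-dev-9", "cve": "CVE-2021-44228", "cvss": 10.0, "criticality": 1},  # dev Git box
    {"asset": "lb-edge-2", "cve": "CVE-2022-1234",  "cvss": 7.5,  "criticality": 4},  # hypothetical CVE
]

def priority(f):
    # Blend technical severity with business criticality (placeholder weighting).
    return f["cvss"] * (f["criticality"] / 5.0)

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['asset']:10} {f['cve']:16} priority={priority(f):.1f}")
```
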
Understanding what applications each asset serves and every dependency along the way, meaning that endpoint, that server, but also the load balancers are supported, and the firewalls, and every aspect along the way, that's the bread and butter of the relationship mapping that Armis puts into place to be able to do that, and we also allow users to tweak, add information, connect us with their CMDB or anywhere else where they put this in, but once the information is in, that can serve vulnerability management. It can serve other security functions as well. But in the context of vulnerability management, it creates a much more streamlined process for being able to do the basics. Some critical applications, I want to know exactly what all the critical vulnerabilities that apply to them are. Some business applications, I just want to be able to put SLAs on, that this must be solved within a week, this must be solved within a month, and be able to actually automatically track all of these in a world that is very, very complex inside of an operation or an enterprise. >> We're going to hear from some of your customers later, but I want to just get your thoughts on, anecdotally, what do you hear from? You're the CTO, co-founder, you're actually going into the big accounts. When you roll this out, what are they saying to you? What are some of the comments? Oh my God, this is amazing. Thank you so much. >> Well, of course. Of course. >> Share some of the comments. >> Well, first of all, of course, that's what they're saying. They're saying we're great. Of course, always, but more specifically, I think this solves a huge gap for them. They are used to tools coming in and discovering vulnerabilities for them, but really close to nothing being able to streamline the truly complex and scalable process of being able to manage vulnerabilities within the environment. Not only that, the integration-led, designer-led deployment and the fact that we are a completely agent-less SaaS platform are extremely important for them. These are times where if something isn't easily deployable for an enterprise, its value is next to nothing. I think that enterprises have come to realize that if something isn't a one click deployment across the environment, it's almost not worth the effort these days, because environments are so complex that you can't fully realize the value any other way. So from an Armis standpoint, the fact that we can deploy with a few clicks, the fact that we immediately provide that value, the fact that we're agent-less, in the sense that we don't need to go around installing a footprint within the environment, and for clients who already have Armis, the fact that it's a flip of a switch, just turn it on, are extreme. I think that the fact, in particular, that Armis can be deployed. the vulnerability management can be deployed on top of the existing vulnerability scanner with a simple one-click integration is huge for them. And I think all of these together are what contribute to them saying how great this is. But yeah, that's it. >> The agent listing is huge. What's the alternative? What does it look like if they're going to go the other route, slow to deploy, have meetings, launch it in the environment? What's it look like? >> I think anything these days that touches an endpoint with an agent goes through a huge round of approvals before anything goes into an environment. Same goes, by the way, for additional scanners. No one wants to hear about additional scanners. 
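
To picture the SLA tracking described a moment ago, here is a minimal sketch that assigns a remediation window by asset criticality and flags overdue findings. The tiers, windows, and dates are assumptions for illustration only.

```python
from datetime import date, timedelta

# criticality (1-5) -> days allowed to remediate; placeholder policy
SLA_DAYS = {5: 7, 4: 7, 3: 30, 2: 90, 1: 180}

findings = [
    {"id": "F-1", "asset_criticality": 5, "opened": date(2022, 6, 1)},
    {"id": "F-2", "asset_criticality": 2, "opened": date(2022, 5, 10)},
]

def sla_status(finding, today=date(2022, 6, 20)):
    # Due date = when the finding was opened plus the window for that tier.
    due = finding["opened"] + timedelta(days=SLA_DAYS[finding["asset_criticality"]])
    return ("OVERDUE" if today > due else "ok"), due

for f in findings:
    status, due = sla_status(f)
    print(f"{f['id']}: due {due} -> {status}")
```
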
They've already gone through the effort with some of the biggest tools out there to punch holes through firewalls, to install scanners in different ways. They don't want yet another scanner, or yet another agent. Armis rides on top of the existing infrastructure, the existing agents, the existing scanners. You don't need to do a thing. It just deploys on top of it, and that's really what makes this so easy and seamless. >> Talk about Armis research. Can you talk about, what's that about? What's going on there? What are you guys doing? How do you guys stay relevant for your customers? >> For sure. So one of the, I've made a lot of bold claims throughout, I think, the entire Q and A here, but one of the biggest magic components, if you will, to Armis that kind of help explain what all these magic components are, are really something that we call our collective asset knowledge base. And it's really the source of our power. Think of it as a giant collective intelligent that keeps learning from all of the different environments combined that Armis is deployed at. Essentially, if we see something in one environment, we can translate it immediately into all environments. So anyone who joins this or uses the product joins this collective intelligence in essence. What does that mean? It means that Armis learns about vulnerabilities from other environments. A new Log4j comes out, for instance. It's enough that, in some environments, Armis is able to see it from scanners, or from agents, or from SBOMs, or anything that basically provides information about Log4j, and Armis immediately infers or creates enrichment rules that act across the entire tenant base, or the entire client base of Armis. So very quick response to industry events, whenever something comes out, again, the results are immediate, very up to the minute, very up to the hour, but also I'd say that Armis does its own proactive asset research. We have a huge data set at our disposal, a lot of willing and able clients, and also a lot of partners within the industry that Armis leverages, but our own research is into interesting aspects within the environment. We do our own proactive research into things like TLStorm, which is kind of a bit of a bridging research and vulnerabilities between cyber physical aspect. So on the one hand, the cyber space and kind of virtual environments, but on the other hand, the actual physical space, vulnerabilities, and things like UPSs, or industrial equipment, or things like that. But I will say that also, Armis targets its research along different paths that we feel are underserved. We started a few years back research into firmwares, different types of real time operating systems. We came out with things like URGENT/11, which was research into, on the one hand, operating systems that run on two billion different devices worldwide, on the other hand, in the 40 years it existed, only 13 vulnerabilities were ever exposed or revealed about that operating system. Either it's the most secure operating system in the world, or it's just not gone through enough rigor and enough research in doing this. The type of active research we do is to complement a lot of the research going on in the industry, serve our clients better, but also provide kind of inroads, I think, for the industry to be better at what they do. >> Awesome, Nadir, thanks for sharing the insights. Great to see the research. You got to be at the cutting edge. 
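
The collective asset knowledge base described above boils down to "learn a rule once, replay it everywhere." The sketch below illustrates that propagation pattern with a single hypothetical enrichment rule applied across tenants; it is not a depiction of Armis' internals.

```python
# One enrichment rule, learned from any environment, replayed against every tenant.
enrichment_rules = [
    # (rule id, predicate over an asset record, tag to apply)
    ("log4j-vulnerable",
     # Naive string version comparison; fine for this toy example only.
     lambda a: "log4j" in a.get("packages", {}) and a["packages"]["log4j"] < "2.17.0",
     "vulnerable:log4shell"),
]

tenants = {
    "tenant-a": [{"id": "srv-1", "packages": {"log4j": "2.14.1"}}],
    "tenant-b": [{"id": "srv-9", "packages": {"log4j": "2.17.1"}}],
}

for tenant, assets in tenants.items():
    for asset in assets:
        for rule_id, predicate, tag in enrichment_rules:
            if predicate(asset):
                asset.setdefault("tags", []).append(tag)
                print(f"{tenant}/{asset['id']}: tagged {tag} by rule {rule_id}")
```
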
You got to investigate, be ready for a moment's notice on all aspects of the operating environment, down to the hardware, down to the packet level, down to the any vulnerability, be ready for it. Great job. Thanks for sharing. Appreciate it. >> Absolutely. >> In a moment, Tim Everson's going to join us. He's the CSO of Kalahari Resorts and Conventions. He'll be joining me next. You're watching theCUBE, the leader in high tech coverage. I'm John Furrier. Thanks for watching. (upbeat music)

Published Date : Jun 21 2022

Chris Degnan, Snowflake & Chris Grusz, Amazon Web Services | Snowflake Summit 2022


 

(upbeat techno music) >> Hey everyone, and welcome back to theCUBE's coverage of Snowflake Summit '22 live from Caesar's Forum in beautiful, warm, and sunny Las Vegas. I'm Lisa Martin. I got the Chris and Chris show, next. Bear with me. Chris Degnan joins us again. One of our alumni, the Chief Revenue Officer at Snowflake. Good to have you back, Chris. >> Thank you for having us. >> Lisa: Chris Grusz also joins us. Director of Business Development AWS Marketplace and Service Catalog at AWS. Chris and Chris, welcome. >> Thank you. >> Thank you. >> Thank you. Good to be back in person. >> Isn't it great. >> Chris G: It's so much better. >> Chris D: Yeah. >> Nothing like it. So let's talk. There's been so much momentum, Chris D, at Snowflake the last few years. I mean the momentum at this show since we launched yesterday, I know you guys launched the day before with partners, has been amazing. A lot of change, and it's like this for Snowflake. Talk to us about AWS working together with Snowflake and some of the benefits in it from your customer. And then Chris G, I'll go to you for the same question. >> Chris G: Yep. >> You know, first of all, it's awesome. Like, I just, you know, it's been three years since I've had a Snowflake Summit in person, and it's crazy to see the growth that we've seen. You know, I can't, our first cloud that we ever launched on top of was, was AWS, and AWS is our largest cloud, you know, in in terms of revenue today. And they've been, they just kind of know how to do it right. And they've been a wonderful partner all along. There's been challenges, and we've kind of leaned in together and figured out ways to work together, you know, and to solve those challenges. So, been a wonderful partnership. >> And talk about it, Chris G, from your perspective obviously from a coopetition perspective. >> Yep. >> AWS has databases, cloud data forms. >> Chris G: Yeah. >> Talk to us about it. What was the impetus for the partnership with Snowflake from AWS's standpoint? >> Yeah, well first and foremost, they're building on top of AWS. And so that, by default, makes them a great partner. And it's interesting, Chris and I have been working together for, gosh, seven years now? And the relationship's come a really long way. You know, when we first started off, we were trying to sort out how we were going to work together, when we were competing, and when we're working together. And, you know, you fast forward to today, and it's just such a good relationship. Because both companies work backwards from customers. And so that's, you know, kind of in both of our DNA. And so if the customer makes that selection, we're going to support them, even from an AWS perspective. When they're going with Snowflake, that's still a really good thing for AWS, 'cause there's a lot of associated services that Snowflake either integrates to, or we're integrating to them. And so, it's really kind of contributed to how we can really work together in a co-sell motion. >> Talk to us, talk about that. The joint GOTO market and the co-selling motion from Snowflake's perspective, how do customers get engaged? >> Well, I think, you know, typically we, where we are really good at co-selling together is we identify on premise systems. So whether it's, you know, some Legacy UDP system, some Legacy database solution, and they want to move to the cloud? You know, Amazon is all in on getting everyone to the cloud. 
And I think that's their approach they've taken with us is saying we're really good at accelerating that adoption and moving all these, you know, massive workloads into the cloud. And then to Chris's point, you know, we've integrated so nicely into things like SageMaker and other tool sets. And we, we even have exciting scenarios where they've allowed us to use, you know, some of their Amazon.com retail data sets that we actually use in data sharing via the partnership. So we continue to find unique ways to partner with our great friends at Amazon. >> Sounds like a very deep partnership. >> Chris D: Yeah. Absolutely. >> Chris G: Oh, absolutely, yeah. We're integrating into Snowflake, and they're integrating to AWS. And so it just provides a great combined experience for our customers. And again, that's kind of what we're both looking forward from both of our organizations. >> That customer centricity is, >> Yeah. >> is I think the center of the flywheel that is both that both of you, your companies have. Chris D, talk about the the industry's solutions, specific, industry-specific solutions that Snowflake and AWS have. I know we talked yesterday about the pivot from a sales perspective >> Chris D: Yes. >> That snowflake made in recent months. Talk to us about the industries that you are help, really targeting with AWS to help customers solve problems. >> Yeah. I think there's, you know, we're focused on a number of industries. I think, you know, some of the examples, like I said, I gave you the example of we're using data sharing to help the retail space. And I think it's a really good partnership. Because some of the, some companies view Amazon as a competitor in the retail space, and I think we kind of soften that blow. And we actually leverage some of the Amazon.com data sets. And this is where the partnership's been really strong. In the healthcare space, in the life sciences space, we have customers like Anthem, where we're really focused on helping actually Anthem solve real business problems. Not necessarily like technical problems. It's like, oh no, they want to get, you know, figure out how they can get the whole customer and take care of their whole customer, and get them using the Anthem platform more effectively. So there's a really great, wonderful partnership there. >> We've heard a lot in the last day and a half on theCUBE from a lot of retail customers and partners. There seems to be a lot of growth in that. So there's so much change in the retail market. I was just talking with Click and Snowflake about Urban Outfitters, as an example. And you think of how what these companies are doing together and obviously AWS and Snowflake, helping companies not just pivot during the pandemic, but really survive. I mean, in the beginning with, you know, retail that didn't have a digital presence, what were they going to do? And then the supply chain issues. So it really seems to be what Snowflake and its partner Ecosystem is doing, is helping companies now, obviously, thrive. But it was really kind of like a no-go sort of situation for a lot of industries. >> Yeah, and I think the neat part of, you know, both the combined, you know, Snowflake and AWS solution is in, a good example is DoorDash, you know. They had hyper growth, and they could not have handled, especially during COVID, as we all know. We all used DoorDash, right? We were just talking about it. 
Chipotle, like, you know, like (laughter) and I think they were able to really take advantage of our hyper elastic platforms, both on the Amazon side and the Snowflake side to scale their business and meet the high demand that they were seeing. And that's kind of some of the great examples of where we've enabled customer growth to really accelerate. >> Yeah. Yeah, right. And I'd add to that, you know, while we saw good growth for those types of companies, a lot of your traditional companies saw a ton of benefit as well. Like another good example, and it's been talked about here at the show, is Western Union, right? So they're a company that's been around for a long time. They do cross border payments and cross currency, you know, exchanges, and, you know, like a lot of companies that have been around for a while, they have data all over the place. And so they started to look at that, and that became an inhibitor to their growth. 'Cause they couldn't get a full view of what was actually going on. And so they did a lengthy evaluation, and they ended up going with Snowflake. And, it was great, 'cause it provided a lot of immediate benefits, so first of all, they were able to take all those disparate systems and pull that into Snowflake. So they finally had a single source of the truth, which was lacking before that. So that was one of the big benefits. The second benefit, and Chris has mentioned this a couple times, is the fact that they could use data sharing. And so now they could pull in third data. And now that they had a holistic view of their entire data set, they could pull in that third party data, and now they could get insights that they never could get before. And so that was another large benefit. And then the third part, and this is where the relationship between AWS and Snowflake is great, is they could then use Amazon SageMaker. So one of the decisions that Western Union made a long time ago is they use R for their data science platform, and SageMaker supports R. And so it really allowed them to dovetail the skill sets that they had around data science into SageMaker. They could now look across all of Snowflake. And so that was just a really good benefit. And so it drove the cost down for Western Union which was a big benefit, but the even bigger benefit is they were now able to start to package and promote different solutions to their customers. So they were effectively able to monetize all the data that they were now getting and the information they were getting out of Snowflake. And then of course, once it was in there, they could also use things like Tableau or ThoughtSpot, both of which available in AWS Marketplace. And it allowed them to get all kinds of visualization of data that they never got in the past. >> The monetization piece is, is interesting. It's so challenging for organizations, one, to get that single source view, to be able to have a customer 360, but to also then be able to monetize data. When you're in customer conversations, how do you help customers on that journey, start? Because the, their competitors are clearly right behind them, ready to take first place spot. How do you help customers go, all right this is what we're going to do to help you on this journey with AWS to monetize your data? >> I think, you know, it's everything from, you know, looking at removing the silos of data. So one of the challenges they've had is they have these Legacy systems, and a lot of times they don't want to just take the Legacy systems and throw them into the cloud. 
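
The Western Union pattern described above (consolidate data in Snowflake, then analyze it from a data-science environment such as SageMaker) can be sketched in a few lines of Python, assuming the snowflake-connector-python package. The account, credentials, and table are placeholders, not a real configuration.

```python
import snowflake.connector

# Placeholder connection details; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="PAYMENTS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("""
        SELECT corridor, currency, SUM(amount) AS total_volume
        FROM transfers                       -- hypothetical table
        GROUP BY corridor, currency
    """)
    df = cur.fetch_pandas_all()   # requires pyarrow; returns a pandas DataFrame
    print(df.head())              # from here the data feeds notebooks or ML jobs
finally:
    conn.close()
```
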
They want to say, we need a holistic view of our customer, 360 view of our customer data. And then they're saying, hey, how can we actually monetize that data? That's where we do everything from, you know, Snowflake has the data marketplace where we list it in the data marketplace. We help them monetize it there. And we use some of the data sets from Amazon to help them do that. We use the technologies like Chris said with SageMaker and other tool sets to help them realize the value of their data in a real, meaningful way. >> So this sounds like a very strategic and technical partnership. >> Yeah, well, >> On both sides. >> It's technical and it's GOTO market. So if you take a look at, you know, Snowflake where they've built over 20 integrations now to different AWS services. So if you're using S3 for object storage, you can use Snowflake on top of that. If you want to load up Snowflake with Glue which is our ETL tool, you can do that. If you want to use QuickSite to do your data visualization on top of Snowflake, you can do that. So they've built integration to all of our services. And then we've built integrations like SageMaker back into Snowflake, and so that supports all kinds of specific customer use cases. So if you think of people that are doing any kind of cloud data platform workload, stuff like data engineering, data warehousing, data lakes, it could be even data applications, cyber security, unistore type things, Snowflake does an excellent job of helping our customers get into those types of environments. And so that's why we support the relationship with a variety of, you know, credit programs. We have a lot of co-sell motions on top of these technical integrations because we want to make sure that we not only have the right technical platform, but we've got the right GOTO market motion. And that's super important. >> Yeah, and I would add to that is like, you know one of the things that customers do is they make these large commitments to Amazon. And one of the best things that Amazon did was allow those customers to draw down Snowflake via the AWS Marketplace. So it's been wonderful to his point around the GOTO market, that was a huge issue for us. And, and again, this is where Amazon was innovative on identifying the ways to help make the customer have a better experience >> Chris G: Yeah. >> Chris D: and put the customer first. And this has been, you know, wonderful partnership there. >> Yeah. It really has. It's been a great, it's been really good. >> Well, and the customers are here. Like we said, >> Yep. >> Yes. Yes they are. >> we're north of 10,000 folks total, and customers are just chomping at the bit. There's been so much growth in the last three years from the last time, I think I heard the 2019 Snowflake Summit had about 1500 people. And here we are at 10,000 plus now, and standing-room-only keynote, the very big queue to get in, people turned away, pushed back to an overflow area to be able to see that, and that was yesterday. I didn't even get a chance to see what it was like today, but I imagine it was probably the same. Talk about the, when you're in customer conversations, where do you bring, from a GTM perspective, Where do you bring Snowflake into the conversation? >> Yeah >> Obviously, there's Redshift there, what does that look like? I imagine it follows the customer's needs, challenges. >> Exactly. >> Compelling events. >> Yeah. 
We're always going to work backwards from the customer need, and so that is the starting point for kindling both organizations. And so we're going to, you know, look at what they need. And from an AWS perspective, you know, if they're going with Snowflake, that's a very good thing. Right? 'Cause one of the things that we want to support is a selection experience to our AWS customers and make sure that no matter what they're doing, they're getting a very good, supported experience. And so we're always going to work backwards from the customer. And then once they make that technology decision, then we're going to support them, as I mentioned, with a whole bunch of co-sell resources. We have technical resources in the field. We have credit programs and in, you know, and, of course, we're going to market in a variety of different verticals as well with Snowflake. If you take a look at all the industry clouds that Snowflake has spun up, financial services and healthcare, and media entertainment, you know, those are all very specific use cases that are very valuable to an AWS customer. And AWS is going more and more to market on a vertical approach, and so Snowflake really just fits right in with our overall strategy. >> Right. Sounds like very tight alignment there. That mission alignment that Frank talked about yesterday. I know he was talking about that with respect to customers, but it sounds like there's a mission alignment between AWS and Snowflake. >> Mission alignment, yeah. >> I live that every week. (laughter) >> Sorry if I brought up a pain point. >> Yeah. Little bit. No. >> Guys, what's, in terms of use cases, obviously we've been here for a couple days. I'm sure you've had tremendous feedback, >> Chris G: Yeah. >> from, from customers, from partners, from the ecosystem. What's next, what can we expect to hear next? Maybe give us a preview of re:Invent in the few months. >> Preview of re:Invent. Yeah. No, well, one of the things we really want to start doing is just, you know, making the use case of, of launching Snowflake on AWS a lot easier. So what can we do to streamline those types of experiences? 'Cause a lot of times we'll find that customers, once they buy a third party solution like Snowflake, they have to then go through a whole series of configuration steps, and what can we do to streamline that? And so we're going to continue to work on that front. One of the other places that we've been exploring with Snowflake is how we work with channel partners. And, you know, when we first launched Marketplace it was really more of an app store model that was ISVs on one side and channel partners on the other, and there wasn't really a good fit for channel partners. And so four years ago we retrofitted the platform and have opened it up to resellers like an SHI or SIs like Salam or Deloitte who are top, two top SIs for Snowflake. And now they can use Marketplace to resell those technologies and also sell their services on top of that. So Snowflake's got a big, you know, practice with Salam, as I mentioned. You know, Salam can now sell through Marketplace and they can actually sell that statement of work and put that on the AWS bill all by virtue of using Marketplace, that automation platform. >> Ease of use for customers, ease of use for partners as well. >> Yes. >> And that ease of use is it's no joke. It's, it's not just a marketing term. It's measurable and it's about time-to-value, time-to-market, getting customers ahead of their competition so that they can be successful. 
Guys, thanks for joining me on theCUBE today. Talking about AWS and >> Nice to be back. Nice to be back in person. >> Isn't it nice to be back. It's great to be actually sitting across from another human. >> Exactly. >> Thank you so much for your insights, what you shared about the partnership and where it's going. We appreciate it. >> Thank you. >> Cool. Thank you. >> Thank you. >> All right guys. For Chris and Chris, I'm Lisa Martin, here watching theCUBE live from Las Vegas. I'll be back with my next guest momentarily, so stick around. (Upbeat techno music)

Published Date : Jun 15 2022


Breaking Analysis: Broadcom, Taming the VMware Beast


 

>> From theCUBE studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague CTO David Nicholson, Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned 61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model with financial performance and shareholder value creation as the dominant and perhaps the sole agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom and the broader ecosystem. Let's first look at the key deal points, it's been well covered in the press. But just for the record, $61 billion in a 50/50 cash and stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume 8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate 8.5 billion in EBITDA by year three. That's more than 4 billion in EBITDA relative to VMware's current performance today. In a classic Broadcom M&A approach, the company promises to dilever debt and maintain investment grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues. There's a 40 day go shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide literally in a moment that Broadcom shared on its investor call. Broadcom has more than 20 business units. It's CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one, you agreed to an operating plan with targets for revenue, growth, EBITDA, et cetera, hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll four, perhaps five quarters to turn your business around, if you don't, we'll kill it or sell it if we can. Rule number two, refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business. AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% GNA. You will too. Any questions? 
Now, just to give you a little sense of how Broadcom runs its business and how well run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively. We take the quarterly revenue and multiply by four to get a revenue run rate, and then we calculate the ratios off of the most recent quarter's revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40-plus billion dollar company. Broadcom is growing at more than 20% per year, whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company, of course, VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and what it purchased in CA, has 90% gross margins. But the eye-popper is operating margin. This is all non-GAAP, so it excludes things like stock-based compensation, but Broadcom had 61% operating margin last quarter. That is insanely off the charts compared to VMware's 25%. Oracle's non-GAAP operating margin is 47%, and Oracle is an incredibly profitable company. Now, the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. Its SG&A is 3% of revenue versus 18% for VMware, and R&D spend is almost certainly going to get cut. The other eye-popper is free cash flow as a percentage of revenue: 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that, my dear friends, is why Broadcom, a company with just under $30 billion in revenue, has a market cap of $230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score, or spending momentum, across VMware's portfolio in this chart. Net score essentially measures the net percent of customers that are spending more on a specific product or vendor. The yellow bar is the most recent survey and compares the April '22 survey data to April '21 and January of '22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud Foundation (VCF) and VMware's cross-cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive, and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at vSAN and NSX, which are software and probably profitable, and of course the other products, and see if the investments are paying off. If they are, Broadcom will keep them; if they are not, you can bet your socks they will be sold off or killed. Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black.
And it's the lowest performer on this list in terms of net score or spending momentum. That doesn't mean it's not profitable; it just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember, VMware's growth has been under pressure for the last several years, so it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now, this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got a great core product. It's an iconic name. It's got an awesome ecosystem and a fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world where developers and cloud native are all the rage. It's got a far-flung R&D agenda, going to war in a lot of different places. And it's increasingly fighting this multi-front war with cloud companies and with companies like Cisco, IBM Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom, and that's why the street loves this deal. And we titled this Breaking Analysis "Taming the VMware beast" because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now, one of the things that we get excited about is the future of systems architectures. We published a Breaking Analysis about a year ago talking about AWS's secret weapon with Nitro and its Annapurna custom silicon efforts. Remember, it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba: a trend toward custom silicon, with the Arm-based Nitro, which is AWS's hypervisor and NIC strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86, and how this leads to much faster product cycles, faster tape-outs, lower costs. And our premise was that everyone competing in the data center is going to need a Nitro to be competitive long term, and customers are going to gravitate toward the most economically favorable platform. And as we described the landscape with this chart, we've updated it for this Breaking Analysis, and we'll come back to Nitro in a moment. This is a two-dimensional graphic with net score, or spending momentum, on the vertical axis and overlap, formerly known as market share, or presence within the survey, pervasiveness, on the horizontal axis. And we plot various companies and products, and we've inserted VMware's net score breakdown, the granularity in those colored bars on the bottom right. Net score is essentially the green minus the red, and a couple points on that. VMware in the latest survey has 6% new adoption. That's the lime green. It's interesting. The question Broadcom is going to ask is: how much does it cost you to acquire that 6% new? 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend?
Whatever isn't profitable is going to be cut. Now, that 52% gray area, flat spending, that is ripe for the Broadcom picking. That is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color, and only 3% are defecting, that's the bright red. So a very, very sticky profile. Perfect for Broadcom. Now, the rest of the chart lays out some of the other competitor names, and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum, but what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco has a very big presence in the data center, as does Intel; they're not in the ETR survey, but we've superimposed them. Now, Intel of course is in a dogfight with Nvidia, the Arm ecosystem, AMD, and don't forget China. You see Google Cloud Platform is in there. Oracle is on the chart as well, somewhat lower on the vertical axis; it doesn't have that spending momentum, but it has a big presence. And it owns a cloud, as we've talked about many times, and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy, but very, very financially savvy. You can see IBM and IBM Red Hat in the mix, and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some V-tax avoidance business. Now, notice Symantec and CA. Relatively speaking, in the ETR survey they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure profitability here.
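To make the net score arithmetic concrete, here is a small illustrative Python sketch using the VMware breakdown quoted above (new adoption, spending more, flat, spending less, defecting). The formula, greens minus reds with flat spend excluded, is the one described in the narrative.

```python
# ETR-style net score: "the green minus the red", flat spenders excluded.
# Percentages below are the VMware breakdown quoted in this episode.

breakdown = {
    "new_adoption": 6,      # lime green
    "spending_more": 32,    # forest green (spending up 6% or more)
    "flat": 52,             # gray, excluded from the calculation
    "spending_less": 8,     # pinkish
    "defecting": 3,         # bright red
}

net_score = (
    (breakdown["new_adoption"] + breakdown["spending_more"])
    - (breakdown["spending_less"] + breakdown["defecting"])
)

print(f"VMware net score: {net_score}%")  # -> 27%, respectable but below the elevated 40% line
```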
Now, back to Nitro. VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture, diversifying off x86 and accommodating alternative processors, with a much more efficient price, performance and energy consumption curve. Now, one of the things that we've advocated for, and we said this about Dell and others, including VMware, is to take a page out of AWS's book and start developing custom silicon to better integrate hardware and software and accelerate multi-cloud, or what we call supercloud, that layer above the cloud, not just running on individual clouds. So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily prices, but better cost structures. But it begs the question: what happens to Project Monterey? Hock Tan and Broadcom don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it. And yet Project Monterey could help secure VMware's future in not only the data center but at the edge, and compete more effectively with cloud economics. So we think either Project Monterey is toast, or the VMware team will knock on the door of one of Broadcom's 20-plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone? Be the arms dealer to everyone, be competitive with the cloud and other players out there, and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom silicon. Tesla is doing it, for example, and Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom unless VMware can hide this in plain sight and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey, and our pipe dream of alternatives to Nitro in the data center, could happen. But if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over, or maybe the Monterey team will spin out of VMware and do a Pensando-like deal and demonstrate the viability of this concept, and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted. I just want to give you the background data here. Net score, or spending momentum, is sorted in the left-hand table; that was the y-axis in the previous data set. Shared N, or presence in the data set, is in the right-hand table; the rightmost column is Shared N, sorted top to bottom, and that was the x-axis on the previous chart. The point is, not many on the left-hand side are above the 40% line. VMware Cloud on AWS is. It's expensive, so it's probably profitable, and it's probably a keeper. We'll see about the rest of VMware's portfolio, like what happens to Tanzu, for example. On the right, we drew a red line, just arbitrarily, at those companies and products with more than a hundred mentions in the survey. Everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right? Well, they're not dumb over there, and they know how to run a business, but there is a strategic rationale to this move beyond just rationalizing portfolios, extracting rents, cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after, coming back to, Dell or HPE? It could pick either up for a lot less than VMware, and they've got way more revenue than VMware. Well, it's obvious: software's more profitable, of course, and Broadcom wants to move up the stack. But there's also a trend going on which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEMs, so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places, merchant silicon, which itself is morphing, being one of them. We'll focus on the left-hand side of this chart.
Broadcom correctly believes that the world is shifting from a CPU-centric center of gravity to a connectivity-centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course alternative GPUs, XPUs and NPUs, et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86, which are part of the waste right now in the data center. This is that shifting dynamic of Moore's Law. Moore's Law is not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors you get when you add up all the CPU and GPU and NPU and accelerator capability, et cetera. We've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle, and it's shifting left within that left circle toward components other than the CPU, many of which Broadcom supplies. And then you go back to the middle: value is shifting from that middle section, that traditional data center, up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look, Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the faint of heart, and Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETR's Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input, and here's a little snapshot of what they said. I'll just read this. Broadcom will be looking to invest in the core and divest of any underperforming assets. Right on, it's just what we were saying. This doesn't bode well for future innovation. This is a CTO at a large travel company. Next comment: we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now we're concerned about short term disruption to their tech roadmap, and long term, are they going to split it off and sell it like Symantec was? This is a CISO at a large hospitality organization. Third comment, which I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said: moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud, and we're worried Broadcom will raise prices and just extract rents. The last comment we'll share is: Broadcom sees the cloud, the data center and IoT as their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. The big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back of napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost to move those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware; you're locked and loaded.
Move that into the cloud and your operating cost would double, by his estimates. Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices. You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with, say, Xen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio, to Nutanix? It's not likely. So you're going to pay more for VMware, and that's the price you're going to pay for two decades of better IT. So our advice is: get out ahead of this. Do an application portfolio assessment. If you can move apps to the cloud for less, and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but you're going to have to be selective as to what you actually can move. Forget porting to OpenStack or a do-it-yourself hypervisor, don't even go there. And start building new cloud native apps where it makes sense, and let the VMware stuff go into managed decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud, and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware, said, "We are building the software mainframe." Now, the marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help, that day will soon be here. That's it for today. Thanks to Stephanie Chan, who helps research our topics for Breaking Analysis. Alex Myerson does the production, and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social, and thanks to Rob Hof, who is our editor in chief at siliconangle.com. Remember, these episodes are all available as podcasts; wherever you listen, just search Breaking Analysis podcast. Check out ETR's website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me @DVellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well, and we'll see you next time. (upbeat music)
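Purely as an illustration of the "application portfolio assessment" advice above, and not a tool referenced in the episode, a hypothetical first pass at triaging an estate might look something like this sketch (all field names and figures are invented):

```python
# Hypothetical first-pass triage of a VMware application portfolio,
# following the advice in this episode: move what's cheaper in the cloud,
# consider Nutanix selectively, and leave stable apps where they are.

def triage(app):
    """Return a rough disposition for one application record (illustrative rules only)."""
    if app["cloud_monthly_cost"] < app["vmware_monthly_cost"]:
        return "migrate to cloud"
    if app["nutanix_compatible"] and not app["mission_critical"]:
        return "evaluate Nutanix"
    if app["stable"]:
        return "keep on VMware (build the brick wall)"
    return "plan managed decline / rebuild cloud native"

portfolio = [
    {"name": "billing", "vmware_monthly_cost": 900, "cloud_monthly_cost": 600,
     "nutanix_compatible": True, "mission_critical": True, "stable": True},
    {"name": "legacy-crm", "vmware_monthly_cost": 400, "cloud_monthly_cost": 1100,
     "nutanix_compatible": False, "mission_critical": False, "stable": True},
]

for app in portfolio:
    print(app["name"], "->", triage(app))
```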

Published Date : May 28 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Stephanie Chan | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Symantec | ORGANIZATION | 0.99+
Rob Hof | PERSON | 0.99+
Alex Myerson | PERSON | 0.99+
April 22 | DATE | 0.99+
HP | ORGANIZATION | 0.99+
David Floyer | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
HPE | ORGANIZATION | 0.99+
Paul Maritz | PERSON | 0.99+
Broadcom | ORGANIZATION | 0.99+
VMware | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
Eric Bradley | PERSON | 0.99+
April 21 | DATE | 0.99+
NSX | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Cheryl Knight | PERSON | 0.99+
Dave | PERSON | 0.99+
January | DATE | 0.99+
$61 billion | QUANTITY | 0.99+
8.5 billion | QUANTITY | 0.99+
$2.1 billion | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
EMC | ORGANIZATION | 0.99+
Acropolis | ORGANIZATION | 0.99+
Kristen Martin | PERSON | 0.99+
90% | QUANTITY | 0.99+
6% | QUANTITY | 0.99+
4.7 billion | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
Hock Tan | ORGANIZATION | 0.99+
60% | QUANTITY | 0.99+
44% | QUANTITY | 0.99+
40 day | QUANTITY | 0.99+
61% | QUANTITY | 0.99+
8 billion | QUANTITY | 0.99+
Michael Dell | PERSON | 0.99+
52% | QUANTITY | 0.99+
47% | QUANTITY | 0.99+

Anish Dhar & Ganesh Datta, Cortex | Kubecon + Cloudnativecon Europe 2022


 

>> Narrator: TheCUBE presents KubeCon and CloudNativeCon Europe 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and KubeCon and CloudNativeCon Europe 2022. I'm Keith Townsend and we are in a beautiful locale. The city itself is not that big, 100,000, I mean, sorry, about 800,000 people. And we got out, got to see a little bit of the sights. It is an amazing city. I'm from the US; it's hard to put in context how a city of 800,000 people can be so beautiful. I'm here with Anish Dhar and Ganesh Datta, Co-founder and CTO of Cortex. Anish, you're CEO of Cortex. We were having a conversation. One of the things that I ask my clients is: what is good? And you're claiming to answer the question about what is quality when it comes to measuring microservices. What is quality? >> Yeah, I think it really depends on the company, and I think that's really the philosophy we had when we built Cortex: we understood that different companies have different definitions of quality, but they need to be able to be represented in really objective ways. I think what ends up happening in most engineering organizations is that quality lives in people's heads. The engineers who write the services, they're often the ones who understand all the intricacies of the service. What are the downstream dependencies, who's on call for this service, where does the documentation live? All of these things, I think, impact the quality of the service. And as these engineers leave the company or they switch teams, they often take that tribal knowledge with them. And so I think quality really comes down to being able to objectively codify your best practices in some way and have that distributed to all engineers in the company. >> And to add to that, to give very concrete examples: for an organization that's already modern, their idea of quality might be uptime and incidents. For somebody that's going through a modernization strategy, they're trying to get to the 21st century, they're trying to get to Kubernetes. For them, quality means: where are we in that journey? Are you on our latest platforms? Are you running CI, are you doing continuous delivery? Quality can mean a lot of things, and so our perspective is: how do we give you the tools to say, as an organization, here's what quality means to us. >> So at first, when you said quality, my mind was going through... Anish, you started out the conversation about having this kind of non-codified set of measurements, historical knowledge, et cetera. I was thinking observability, measuring how much time it takes to complete a transaction. But Ganesh, you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So I think when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff. And I think that's very high level, and I think that's one perspective of what quality is, but as you're going through this journey, you might say: the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CI/CD, you're running vulnerability scans. You're using Docker.
Like what that means to us can be very different. So observability is just one aspect of: are you doing things the right way? Good to us means you're using SLOs. You are tracking those metrics. You're reporting that somewhere. And so that's one component, for our organization, of what quality can mean. >> I'm kind of taken aback by this, because I've not seen someone frame it that way. And I think later on, this is the perfect segment to introduce theCUBE clock, in which I'm going to give you a minute to give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... what's the first process you do when you engage with a customer? Does a customer go and get this off of a repository, install it, the open source version, and then what? I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and an on-prem version of Cortex. It's really straightforward. Basically we have a service discovery onboarding flow where customers can connect to different sources for their services. It could be Kubernetes, ECS, Git repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service. Almost like a single pane of glass to work with the service. And then once you have all of that data inside Cortex, then you can start writing scorecards, which grade the quality of those services across the different verticals Ganesh was talking about. Whether it's a monolith to microservice transition, whether it's production readiness or security standards, you can really start tracking that. And then engineers start understanding where the areas of risk are with their service, across reliability or security or operational maturity. I think it gives you insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLOs. That is usually something that, it might not even be measured. So how do you help me understand that I'm lacking a measurable system for tracking SLOs, and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much: how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability or reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so there's that service cataloging concept: the service catalog says, hey, here's my APM integration. Then with a scorecard, the organization goes in and says, we want every service owner to define their SLOs, we want you to define your thresholds. We want you to be tracking them. Are you passing your SLOs? And so we're not being prescriptive about what we think your SLOs should be. Ours is more around, hey, if you care about SLOs, we're going to tell the service owners: hey, you need to have at least two SLOs for your service and you've got to be tracking them. And that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important.
It's a component of a holistic service reliability excellence metric that we care about. >> So what happens when I already have systems for, like, SLOs? How do I integrate that system with Cortex? >> That's one of the coolest things. So the service catalog can be pretty smart about it. So let's say you've pulled in your services from your GitHub, and so now your services are in Cortex. What we can do is we can actually discover from your APM tools, we can say: hey, for this service, we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into Cortex. And that's the power of the service catalog. The service catalog says: given a service, here's everything about that service. Here are the vulnerability scans, here's the APM, the monitors, the SLOs, the JIRA tickets; all that stuff comes into a single place. And then our scorecards product can go back out and say: hey, Datadog, tell me about the SLOs for this service. And so we're going to get that information live and then score your services against that. And so we're integrating with all of your third party tools and integrations to create that single pane of glass. >> Yeah, and to add to that, I think one of the most interesting use cases with scorecards is, okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with how do we make sure engineers define SLOs, are passing them, actually care about them. And scorecards can be used to see, one, which teams are actually meeting these guidelines, and then two, let's get those teams adopted on SLOs. Let's track that. You can do all of that in Cortex, which is, I think, a really interesting use case that we've seen. >> So let's talk about my use case and the end-to-end process for integrating Cortex into migrations. So I have this monolithic application, I want to break it into microservices, and then I want to ensure that I'm delivering... if not, you know what, let's leave it a little bit more open ended. How do I know that I'm better at the end? I was in a monolith before; how do I measure, now that I'm in microservices and on cloud native, that I'm better? >> That's a good question. I think it comes down to, and we talk about this all the time with our customers that are going through that process: you can't define better if you don't define a baseline, like what does good mean to us? And so you need to start by saying, why are we moving to microservices? Is it because we want teams to move faster? Is it because we care about reliability and uptime? What is the core metric that we're tracking? And so you start by defining that as an organization. And that is kind of a hand wavy thing: why are we doing microservices? Once you have that, then you define the scorecard, and that's our golden path. Once we're done doing this microservice migration, can we say: yes, we have been successful, and those metrics that we care about are being tracked? And so where Cortex fits in is, from the very first step of creating a service, you can use Cortex to define templates. One click, you go in, it spins up a microservice for you that follows all your best practices. And so from there, ideally you're meeting 80% of your standards already. And then you can use scorecards to track historical progress. So you can say: are we meeting our golden path standards?
Like if it's uptime, you can track uptime metrics in scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization? Are you doing CI/CD and vulnerability scans, moving faster as a team? You can track that. And so you can start seeing trends at a per-team level, at a per-department level, at a per-product level, saying: hey, we are seeing consistent progress in the metrics that we care about, and this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand wavy thing. We're told that cloud native helps me to do things faster with fewer defects so that I can pursue new opportunities. Let's stretch into kind of this non-tech, this new opportunities perspective. I want to be able to move my architecture to microservices so I reduce call wait time on my customer service calls. So I can easily see how I can measure: are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard. But what about this wait time? I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecards means you can pipe that data into Cortex. And what we recommend to customers is: track the outcome metrics and track the input metrics as well. And so what is the input metric to call wait time? Maybe it's the fact that if something goes wrong, we have the runbooks to quickly roll back to an older version that we know is running, that way MTTR is faster. Or when something happens, we know the owner for that service and we can go back to them and say, hey, we're going to ping you as the incident commander. Those are kind of the input metrics to: if we do these things, then we know our call wait time is going to drop, because we're able to respond faster to incidents. And so you want to track those input metrics, and then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. So we want to see: is there a correlation between those input metrics, are we doing things the right way, versus are we seeing the value that we want to come out of that? And so I think that's the value of Cortex. It's not so much, hey, we're going to be prescriptive about it; it's, here's this framework that will let you track all of that and say: are we doing things the right way, and is it giving us the value that we want? And being able to report that up to engineering leadership and say, hey, maybe these services are not doing... we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that, I think, is super valuable. >> Yeah, absolutely. I think just to touch on the reporting, I think that's one of the most value-add things Cortex can provide. If you think about it, the service is the atomic unit of your software. It represents everything that's being built, and that bubbles up into teams, products, business units, and Cortex lets you represent that. So now I can, as a CTO, come in and say: hey, these product lines, are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost like the best way to get the actual health of your engineering organization.
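As a purely hypothetical illustration of the "codify your standards, then grade every service" pattern discussed above, here is a small Python sketch. It is not Cortex's actual scorecard syntax or API; all names and rules are invented to show the shape of the idea.

```python
# Hypothetical sketch of the scorecard pattern described in the interview:
# codify standards as checks, then grade every service against them.
# This is NOT Cortex's real format or API; all names here are invented.

CHECKS = {
    "has_oncall_rotation": lambda svc: svc.get("oncall") is not None,
    "has_two_or_more_slos": lambda svc: len(svc.get("slos", [])) >= 2,
    "runs_vulnerability_scans": lambda svc: svc.get("vuln_scans", False),
    "uses_ci_cd": lambda svc: svc.get("ci_cd", False),
}

def grade(service: dict) -> tuple[int, list[str]]:
    """Return (score out of 100, list of failing checks) for one service."""
    failures = [name for name, check in CHECKS.items() if not check(service)]
    score = round(100 * (len(CHECKS) - len(failures)) / len(CHECKS))
    return score, failures

catalog = [
    {"name": "payments", "oncall": "team-a", "slos": ["latency", "availability"],
     "vuln_scans": True, "ci_cd": True},
    {"name": "legacy-batch", "oncall": None, "slos": [], "vuln_scans": False, "ci_cd": True},
]

for svc in catalog:
    score, failures = grade(svc)
    print(f"{svc['name']}: {score}/100", f"failing: {failures}" if failures else "all checks pass")
```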
>> All right, Anish and Ganesh, we're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on. You're 10 seconds in. >> Oh, we can start talking. Okay, well I would say, Anish was just touching on this. For a CTO, their question is: how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing. >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand wavy thing that says, oh, my team is good, my team is bad. We want to come in and define: here's what service quality means. And we want that to be a number, something that you can- >> A goal without a timeline is just a dream. >> And a CTO comes in and they say: here's what we care about, here's how we're tracking it, here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. We can give you that visibility that you never had before. >> For that five seconds. >> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (soft music)

Published Date : May 20 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Anish | PERSON | 0.99+
Keith Townsend | PERSON | 0.99+
Cortex | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
Keith | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
Ganesh | PERSON | 0.99+
21st century | DATE | 0.99+
100,000 | QUANTITY | 0.99+
10 seconds | QUANTITY | 0.99+
two | QUANTITY | 0.99+
five seconds | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Valencia, Spain | LOCATION | 0.99+
800,000 people | QUANTITY | 0.99+
Cortex | TITLE | 0.99+
Valencia Spain | LOCATION | 0.99+
one element | QUANTITY | 0.99+
one aspect | QUANTITY | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Cloudnativecon | ORGANIZATION | 0.99+
one perspective | QUANTITY | 0.99+
Datadog | ORGANIZATION | 0.99+
one component | QUANTITY | 0.99+
Ganesh Datta | PERSON | 0.98+
One | QUANTITY | 0.98+
SLO | TITLE | 0.98+
2022 | DATE | 0.98+
first step | QUANTITY | 0.98+
Kubecon | ORGANIZATION | 0.97+
about 800,000 people | QUANTITY | 0.97+
one click | QUANTITY | 0.97+

Naina Singh & Roland Huß, Red Hat | Kubecon + Cloudnativecon Europe 2022


 

>> Announcer: "theCUBE" presents KubeCon and CloudNativeCon Europe 2022 brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain and KubeCon and CloudNativeCon Europe 2022. I'm Keith Townsend, my co-host, Paul Gillin, Senior Editor Enterprise Architecture for SiliconANGLE. We're going to talk, or continue to talk to amazing people. The coverage has been amazing, but also the city of Valencia is beautiful. I have to eat a little crow, I landed and I saw the convention center, Paul, have you got out and explored the city at all? >> Absolutely, my first reaction to Valencia when we were out in this industrial section was, "This looks like Cincinnati." >> Yes. >> But then I got on the bus second day here, 10 minutes to downtown, another world, it's almost a middle ages flavor down there with these little winding streets and just absolutely gorgeous city. >> Beautiful city. I compared it to Charlotte, no disrespect to Charlotte, but this is an amazing city. Naina Singh, Principal Product Manager at Red Hat, and Roland Huss, also Principal Product Manager at Red Hat. We're going to talk a little serverless. I'm going to get this right off the bat. People get kind of feisty when we call things like Knative serverless. What's the difference between something like a Lambda and Knative? >> Okay, so I'll start. Lambda is, like a function as a server, right? Which is one of the definitions of serverless. Serverless is a deployment platform now. When we introduced serverless to containers through Knative, that's when the serverless got revolutionized, it democratized serverless. Lambda was proprietary-based, you write small snippets of code, run for a short duration of time on demand, and done. And then Knative which brought serverless to containers, where all those benefits of easy, practical, event-driven, running on demand, going up and down, all those came to containers. So that's where Knative comes into picture. >> Yeah, I would also say that Knative is based on containers from the very beginning, and so, it really allows you to run arbitrary workloads in your container, whereas with Lambda you have only a limited set of language that you can use and you have a runtime contract there which is much easier with Knative to run your applications, for example, if it's coming in a language that is not supported by Lambda. And of course the most important benefit of Knative is it's run on top of Kubernetes, which allows you- >> Yes. >> To run your serverless platform on any other Kubernetes installation, so I think this is one of the biggest thing. >> I think we saw about three years ago there was a burst of interest around serverless computing and really some very compelling cost arguments for using it, and then it seemed to die down, we haven't heard a lot about serverless, and maybe I'm just not listening to the right people, but what is it going to take for serverless to kind of break out and achieve its potential? >> Yeah, I would say that really the big advantage of course of Knative in that case is that you can scale down to zero. I think this is one of the big things that will really bring more people onto board because you really save a lot of money with that if your applications are not running when they're not used. Yeah, I think also that, because you don't have this vendor log in part thing, when people realize that you can run really on every Kubernete platform, then I think that the journey of serverless will continue. 
>> And I will add that for event-driven applications, there hasn't been enough buzz around them yet. There is, but serverless is going to bring a new lease on life to them, right? The other thing is the ease of use for developers. With Knative, we are introducing a new programming model, the functions, where you don't even have to create containers, it will create the containers for you. >> So you create the service, but not the containers? >> Right now, you create the containers and then you deploy them in a serverless fashion using Knative. But the container creation was on the developers, and functions is going to be the third component of Knative that we are developing upstream (Red Hat donated that project), and it is going to bring that code-to-cloud capability. So you bring your code and everything else will be taken care of, so. >> So, I'd call a function or, it's funny, we're kind of circular with this. What used to be, I'd write a function and put it into a container, this service will provide that function, not just call that function, as if I'm developing kind of a low code, no code, not no code, but a low code effort. So if there's a repetitive thing that the community wants to do, you'll provide that as a predefined function or as a service. >> Yeah, exactly. So functions really helps the developer to bring their code into the container, so it's really kind of a new (indistinct) on top of Knative- >> On top of. >> And of course, it's also a more opinionated approach. It's really coming closer to Lambda now, because it also comes with a programming model, which means that you have a certain signature that you have to implement and other stuff. But you can also create your own templates, because at the end what matters is that you have a container at the end that you can run on Knative. >> What kind of applications is serverless really the ideal platform for? >> Yeah, of course the ideal application is an HTTP-based web application that has no state and that has a very non-uniform traffic shape, which means that, for example, if you have a business where you only have spikes at certain times, like maybe for the Super Bowl or Christmas, when selling some merchandise like that, then you can scale up from zero very quickly to an arbitrary high depending on the load. And this is, I think, the big benefit over, for example, Kubernetes Horizontal Pod Autoscaling, where it's more like indirect measures, scaling based on CPU or memory, but here, it directly relates one to one to the traffic that is coming in, to concurrent requests. Yeah, so this helps a lot for non-uniform traffic shapes, and I think this has become one of the ideal use cases. >> Yeah. But I think that is one of the most used or defined ones, but I do believe that you can write almost all applications. There are some, of course, that would not be the right load, but as long as you are handling state through an external mechanism. Let's say, for example, you're using a database to save the state, or you're using a physical volume mount to save the state, it increases the density of your cluster, because when they're running, the containers would pop up, and when your application is not running, the container would go down, and the resources can be used to run any other application that you want to use, right? >> So, when I'm thinking about Lambda, I kind of get the event-driven nature of Lambda. I have an S3 bucket, and if an S3 event is triggered, then my function as a service will start, and that's kind of the listening service.
How does that work with Knative or a Kubernetes-based thing? 'Cause I don't have an event-driven thing that I can think of that kicks off, like, how can I do that in Kubernetes? >> So I'll start. So it is exactly the same thing. In the Knative world, it's the container that's going to come up, and your service in the container will do the processing of that same event that you are talking about. So let's say the notification came from the S3 service when the object got dropped, that would trigger an application. And in the world of Kubernetes and Knative, it's the container that's going to come up with the service in it, do the processing, either find another service or whatever it needs to do. >> So Knative is listening for the event, and when the event happens, then Knative executes the container. >> Exactly. >> Basically. >> So there's the concept of a Knative source, which is kind of an adapter to the external world, for example, for the S3 bucket. And as soon as there is an event coming in, Knative will wake up that service, will transmit this event as a CloudEvent, which is another standard from the CNCF, and then when the service is done, the service spins down again to zero, so that the service is only running when there are events, which is very cost effective, and people really actually like to have this kind of way of dynamic scaling up from zero to one and even higher like that. >> Lambda has been sort of synonymous with serverless in the early going here, is Knative a competitor to Lambda, is it complementary? Would you use the two together? >> Yeah, I would say that Lambda is an offering from AWS, so it's a cloud service there. Knative itself is a platform, so you can run it in the cloud, and there are other cloud offerings like from IBM, but you can also run it on-premises, for example, that's the alternative. So you can also have hybrid scenarios where you really can put one part into the cloud, the other part on-prem, and I think there's a big difference in that you have much more flexibility and you can avoid this kind of vendor lock-in compared to AWS Lambda. >> Because Knative provides specifications and conformance tests, so you can move from one vendor to another. If you are on an IBM offering that's using Knative, and if you go to a Google offering- >> A Google offering. >> That's on Knative, or a Red Hat offering on Knative, it should be seamless, because they're both conforming to the same specifications of Knative. Whereas if you are in Lambda, there are custom deployments, so you are going to be able to run those workloads only on AWS. >> So KnativeCon, a co-located event as part of KubeCon, I'm curious as to the level of effort in the user interaction for deploying Knative. 'Cause when I think about Lambda or Cloud Run or one of the other functions-as-a-service offerings, there is no backend that I have to worry about. And I think this is where some of the debate becomes over serverless versus some other definition. What's the level of lifting that needs to be done to deploy Knative in my Kubernetes environment? >> So if you like... >> Is this something that comes as a base part of the OpenShift install or do I have to like, you know, I have to... >> Go ahead, you answer first. >> Okay, so actually for OpenShift, it's a code layer product. So you have this catalog of operators that you can choose from, and OpenShift Serverless is one part of that. So it's really kind of a one-click install where you also get a default configuration, you can flexibly configure it as you like.
Yeah, we think that's a good user experience, and of course you can go to these cloud offerings like the Google Cloud one or IBM Code Engine, they just have everything set up for you. And there are other different alternatives, you have (indistinct) charts, you can install Knative in different ways, you also have options for the backend systems. For example, we mentioned that when an event comes in, then there's a broker in the middle which dispatches all the events to the services, and there you can have a different backend system like Kafka or AMQ. So you can have a very production-grade messaging system which really is responsible for delivering your events to your services. >> Now, Knative has recently, I'm sorry, did I interrupt you? >> No, I was just going to say that Knative, when we talk about it, we generally just talk about the serverless deployment model, right? And the Eventing gets eclipsed in that. That Eventing, which provides this infrastructure for producing and consuming events, is an inherent part of Knative, right? So you install Knative, you install Eventing, and then you are ready to connect all your disparate systems through events. With CloudEvents, that's the specification we use for consistent and portable events. >> So Knative was recently admitted to, or accepted by, the Cloud Native Computing Foundation, incubating there. Congratulations, it's a big step. >> Thank you. >> Thanks. >> How does that change the outlook for Knative adoption? >> So we get a lot of support now from the CNCF, which is really great, so we could be part of this conference, for example, which was not so easy before that. And we see really a lot of interest, and we also heard before the move that many contributors had not started looking into Knative because of this kind of not being part of a neutral foundation, so they were kind of afraid that the project would go away anytime like that. And we see the adoption really increasing, but slowly at the moment. So we are still ramping up there and we really hope for more contributors. Yeah, that's where we are. >> CNCF is almost synonymous with open source and trust. So, being in CNCF and then having this first KnativeCon event as part of KubeCon, we are hoping, and it's a recent addition to CNCF as well, right? So we are hoping that these events and these interviews will catapult more interest into serverless. So I'm really, really hopeful and I only see positive from here on out for Knative. >> Well, I can sense the excitement. KnativeCon sold out, congratulations on that. >> Thank you. >> I can talk about serverless all day, it's a topic that I really love, it's a fascinating way to build applications and manage applications, but we have a lot more coverage to do today on "theCUBE" from Spain. From Valencia, Spain, I'm Keith Townsend along with Paul Gillin, and you're watching "theCUBE," the leader in high-tech coverage. (gentle upbeat music)
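As a companion to the eventing flow described in this interview, here is a minimal sketch of the receiving side: a small stateless HTTP service that parses the CloudEvents which Knative Eventing delivers as HTTP POSTs. It assumes the Flask and cloudevents Python packages; the route, logging, and port are illustrative rather than taken from the conversation.

```python
from flask import Flask, request
from cloudevents.http import from_http  # CNCF CloudEvents SDK for Python

app = Flask(__name__)

@app.route("/", methods=["POST"])
def receive_event():
    # Knative Eventing wraps each delivery in the CloudEvents format;
    # parse the envelope to get the event type, source, and payload.
    event = from_http(request.headers, request.get_data())
    print(f"type={event['type']} source={event['source']} data={event.data}")
    return "", 204  # acknowledge so the broker does not redeliver

if __name__ == "__main__":
    # Containerize this handler and wire it up as the subscriber of a broker
    # or channel; Knative scales it up from zero when events arrive.
    app.run(host="0.0.0.0", port=8080)
```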

Published Date : May 19 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Keith Townsend | PERSON | 0.99+
Paul Gillin | PERSON | 0.99+
Naina Singh | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Spain | LOCATION | 0.99+
two | QUANTITY | 0.99+
10 minutes | QUANTITY | 0.99+
Roland Huss | PERSON | 0.99+
Valencia | LOCATION | 0.99+
Lambda | TITLE | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
Cincinnati | LOCATION | 0.99+
second day | QUANTITY | 0.99+
Christmas | EVENT | 0.99+
Paul | PERSON | 0.99+
Charlotte | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
OpenShift | TITLE | 0.99+
Super Bowl | EVENT | 0.99+
Knative | ORGANIZATION | 0.99+
one part | QUANTITY | 0.99+
Valencia, Spain | LOCATION | 0.99+
KubeCon | EVENT | 0.99+
Roland Huß | PERSON | 0.98+
KnativeCon | EVENT | 0.98+
S3 | TITLE | 0.98+
one click | QUANTITY | 0.98+
both | QUANTITY | 0.98+
zero | QUANTITY | 0.98+
Google | ORGANIZATION | 0.98+
CNCF | ORGANIZATION | 0.97+
one | QUANTITY | 0.96+
google | ORGANIZATION | 0.96+
theCU | TITLE | 0.95+
CloudNativeCon Europe 2022 | EVENT | 0.95+
today | DATE | 0.95+
Kubernetes | TITLE | 0.95+
first | QUANTITY | 0.94+
one server | QUANTITY | 0.93+
Knative | TITLE | 0.93+
Kubecon | ORGANIZATION | 0.91+
Kubernete | TITLE | 0.91+
Windows | TITLE | 0.9+
CloudEvents | TITLE | 0.9+

Dr. Matt Wood, AWS | AWS Summit SF 2022


 

(gentle melody) >> Welcome back to theCUBE's live coverage of AWS Summit in San Francisco, California. Events are back. AWS Summit in New York City this summer, theCUBE will be there as well. Check us out there. I'm glad to have events back. It's great to have everyone here. I'm John Furrier, host of theCUBE. Dr. Matt Wood is with me, CUBE alumni, now VP of the Business Analytics Division of AWS. Matt, great to see you. >> Thank you, John. It's great to be here. I appreciate it. >> I always call you Dr. Matt Wood because Andy Jassy always says, "Dr. Matt, we would introduce you in the arena." (Matt laughs) >> Matt: The one and only. >> The one and only, Dr. Matt Wood. >> In joke, I love it. (laughs) >> Andy style. (Matt laughs) I think you had walk up music too. >> Yes, we all have our own personalized walk up music. >> So talk about your new role, not a new role, but you're running the analytics business for AWS. What does that consist of right now? >> Sure. So I work. I've got what I consider to be one of the best jobs in the world. I get to work with our customers and the teams at AWS to build the analytics services that millions of our customers use to slice, dice, pivot, better understand their data, look at how they can use that data for reporting, looking backwards. And also look at how they can use that data looking forward, so predictive analytics and machine learning. So whether it is slicing and dicing in the lower level of Hadoop and the big data engines, or whether you're doing ETL with Glue, or whether you're visualizing the data in QuickSight or building your models in SageMaker. I got my fingers in a lot of pies. >> One of the benefits of having CUBE coverage with AWS since 2013 is watching the progression. You were on theCUBE that first year we were at re:Invent in 2013, and look at how machine learning just exploded onto the scene. You were involved in that from day one. It's still day one, as you guys say. What's the big thing now? Look at just what happened. Machine learning comes in and then a slew of services come in. You've got SageMaker, became a hot seller right out of the gate. The database stuff was kicking butt. So all this is now booming. That was a real generational changeover for databases. What's the perspective? What's your perspective on how that's evolved? >> I think it's a really good point. I totally agree. I think for machine learning, there's sort of a Renaissance in machine learning and the application of machine learning. Machine learning as a technology has been around for 50 years, let's say. But to do machine learning right, you need like a lot of data. The data needs to be high quality. You need a lot of compute to be able to train those models and you have to be able to evaluate what those models mean as you apply them to real world problems. And so the cloud really removed a lot of the constraints. Finally, customers had all of the data that they needed. We gave them services to be able to label that data in a high quality way. There's all the compute you need to be able to train the models. And so there you go. And so the cloud really enabled this Renaissance with machine learning. And we're seeing honestly a similar Renaissance with data and analytics. If you look back five to ten years, analytics was something you did in batch, your data warehouse ran an analysis to do reconciliation at the end of the month, and that was it. (John laughs) And so that's when you needed it.
But today, if your Redshift cluster isn't available, Uber drivers don't turn up, DoorDash deliveries don't get made. Analytics is now central to virtually every business, and it is central to virtually every business's digital transformation. And being able to take that data from a variety of sources, be able to query it with high performance, to be able to actually then start to augment that data with real information, which usually comes from technical experts and domain experts, to form wisdom and information from raw data. That's kind of what most organizations are trying to do when they kind of go through this analytics journey. >> It's interesting. Dave Vellante and I always talk on theCUBE about the future. And you look back, the things we're talking about six years ago are actually happening now. And it's not a hyped-up statement to say digital transformation is actually happening now. And there's also times when we bang our fists on the table saying, say, "I really think this is so important." And David says, "John, you're going to die on that hill." (Matt laughs) And so I'm excited that this year, for the first time, I didn't die on that hill. I've been saying- >> Do all right. >> Data as code is the next infrastructure as code. And Dave's like, "What do you mean by that?" We're talking about how data gets... And it's happening. So we just had an event on our AWS startups.com site, a showcase for startups, and the theme was data as code. And interesting new trends are emerging really clearly, the role of a data engineer, right? Like an SRE, what an SRE did for cloud, you have a new data engineering role, because developer onboarding is massively increasing, exponentially, new developers. Data scientists are growing, but the pipelining and managing and engineering as a system, almost like an operating system. >> Kind of as a discipline. >> So what's your reaction to that about this data engineer, data as code? Because if you have horizontally scalable data, you've got to be open, that's hard (laughs), okay? And you got to silo the data that needs to be siloed for compliance reasons. So that's a big policy around that. So what's your reaction to data as code and the data engineering phenomenon? >> It's a really good point. I think with any technology project inside of an organization, success with analytics or machine learning, it's kind of 50% technology and then 50% cultural. And you often have domain experts. Those could be physicians or drug design experts, or they could be financial experts or whoever they might be, they've got deep domain expertise, and then you've got technical implementation teams. And there's kind of a natural, often repulsive force. I don't mean that rudely, but they just don't talk the same language. And so the more complex a domain and the more complex the technology, the stronger that repulsive force. And it can become very difficult for domain experts to work closely with the technical experts to be able to actually get business decisions made. And so what data engineering does, and data engineering is in some cases a team, or it can be a role that you play, it's really allowing those two disciplines to speak the same language. You can think of it as plumbing, but I think of it as like a bridge. It's a bridge between the technical implementation and the domain experts, and that requires a very disparate range of skills.
You've got to understand about statistics, you've got to understand about the implementation, you got to understand about the data, you got to understand about the domain. And if you can put all of that together, that data engineering discipline can be incredibly transformative for an organization, because it builds the bridge between those two groups. >> I was advising some young computer science students at the sophomore, junior level just a couple of weeks ago, and I told them I would ask someone at Amazon this question. So I'll ask you... >> Matt: Okay. >> Since you've been in the middle of it for years, they were asking me, and I was trying to mentor them on how do you become a data engineer, from a practical standpoint? Courseware, projects to work on, how to think, not just coding Python, because everyone's coding in Python, but what else can they do? So I was trying to help them. I didn't really know the answer myself. I was just trying to kind of help figure it out with them. So what is the answer, in your opinion, or the thoughts around advice to young students who want to be data engineers? Because data scientist is pretty clear on what that is. You use tools, you make visualizations, you manage data, you get answers and insights and then apply that to the business. That's an application. That's not standing up a stack or managing the infrastructure. So what does that coding look like? What would your advice be to folks getting into a data engineering role? >> Yeah, I think if you believe this, what I said earlier about 50% technology, 50% culture, the number one technology to learn as a data engineer is the tools in the cloud which allow you to aggregate data from virtually any source into something which is incrementally more valuable for the organization. That's really what data engineering is all about. It's about taking from multiple sources. Some people call them silos, but silos indicate that the storage is kind of fungible or undifferentiated. That's really not the case. Success requires you to have really purpose built, well crafted, high performance, low cost engines for all of your data. So understanding those tools and understanding how to use them, that's probably the most important technical piece. Python and programming and statistics go along with that, I think. And then the most important cultural part, I think is... It's just curiosity. You want to be able to, as a data engineer, you want to have a natural curiosity that drives you to seek the truth inside an organization, seek the truth of a particular problem, and to be able to engage, because you're probably going to make some choices as you go through your career about which domain you end up in. Maybe you're really passionate about healthcare, or you're really just passionate about transportation or media, whatever it might be. And you can allow that to drive a certain amount of curiosity. But within those roles, the domains are so broad you kind of got to allow your curiosity to develop and lead you to ask the right questions and engage in the right way with your teams, because you can have all the technical skills in the world. But if you're not able to help the team's truth seek through that curiosity, you simply won't be successful.
Johnny CUBE, it's me. But he's young and he was saying... His advice was just do projects. >> Matt: And get hands on. Yeah. >> And I was saying, hey, I came from the old days where you get to stand stuff up and you hung on to the assets because you didn't want to kill the project because you spent all this money. And he's like, yeah, with cloud you can shut it down. If you do a project that's not working and you get bad data, no one's adopting it or you don't like it anymore, you shut it down, just do something else. >> Yeah, totally. >> Instantly abandon it and move on to something new. That's a progression. >> Totally! The blast radius of decisions is just way reduced. We talk a lot about in the old world, trying to find the resources and get the funding is like, all right, I want to try out this kind of random idea that could be a big deal for the organization. I need $50 million and a new data center. You're not going to get anywhere. >> And you do a proposal, working backwards documents, all kinds of stuff. >> All that sort of stuff. >> Jump through hoops. >> So all of that is gone. But we sometimes forget that a big part of that is just the prototyping and the experimentation and the limited blast radius in terms of cost, and honestly, the most important thing is time, just being able to jump in there, fingers on keyboards, just try this stuff out. And that's why at AWS, we have... Part of the reason we have so many services, because we want, when you get into AWS, we want the whole toolbox to be available to every developer. And so as your ideas develop, you may want to jump from data that you have that's already in a database to doing realtime data. And then you have the tools there. And when you want to get into real time data, you don't just have Kinesis, you have real time analytics, and you can run SQL against that data. The capabilities and the breadth really matter when it comes to prototyping. >> That's the culture piece, because what was once a dysfunctional behavior, I'm going to go off the reservation and try something behind my boss' back, now is a side hustle or fun project. So for fun, you can just code something. >> Yeah, totally. I remember my first Hadoop projects. I found almost literally a decommissioned set of servers in the data center that no one was using. They were super old. They were about to be literally turned off. And I managed to convince the team to leave them on for me for another month. And I installed Hadoop on them and got them going. That just seems crazy to me now, that I had to go and convince anybody not to turn these servers off. But what it was like when you- >> That's when you came up with Elastic MapReduce, because you said this is too hard, we got to make it easier. >> Basically yes. (John laughs) I was installing Hadoop version Beta 9.9 or whatever. It was like, this is really hard. >> We got to make it simpler. All right, good stuff. I love the walk down memory lane. And also your advice. Great stuff. I think culture is huge. That's why I like Adam's keynote at re:Invent, Adam Selipsky talking about Pathfinders and trailblazers, because that's a blast radius impact when you can actually have innovation organically just come from anywhere. That's totally cool. >> Matt: Totally cool. >> All right, let's get into the product. Serverless has been hot. We hear a lot that EKS is hot. Containers are booming. Kubernetes is getting adopted, still a lot of work to do there. Cloud native developers are booming. Serverless, Lambda.
How does that impact the analytics piece? Can you share the hot products around how that translates? >> Absolutely, yeah. >> Aurora, SageMaker. >> Yeah, I think it's... If you look at kind of the evolution and what customers are asking for, they don't just want low cost. They don't just want this broad set of services. They don't just want those services to have deep capabilities. They want those services to have as low an operating cost over time as possible. So we kind of really got it down. We've built a lot of muscle, a lot of services, about getting up and running and experimenting and prototyping and turning things off and turning them on and turning them off. And that's all great. But actually, in most projects you really only start something once and then stop something once, and maybe there's an hour in between or maybe there's a year. But the real expense in terms of time and operations and complexity is sometimes in that running cost. And so we've heard very loudly and clearly from customers that running cost is just undifferentiated to them. And they want to spend more time on their work. And in analytics, that is slicing the data, pivoting the data, combining the data, labeling the data, training their models, running inference against their models, and less time doing the operational pieces. >> Is that why the service focuses there? >> Yeah, absolutely. It dramatically reduces the skill required to run these workloads at any scale. And it dramatically reduces the undifferentiated heavy lifting, because you get to focus more of the time that you would have spent on the operations on the actual work that you want to get done. And so if you look at something just like Redshift Serverless, that we launched at re:Invent, we have a lot of customers that want to run the cluster, and they want to get into the weeds where there is benefit. We have a lot of customers that say there's no benefit for me, I just want to do the analytics. So you run the operational piece, you're the experts. We run 60 million instance startups every single day. We do this a lot. >> John: Exactly. We understand the operations- >> I just want the answers. Come on. >> So just give me the answers or just give me the notebook or just give me the inference prediction. Today, for example, we announced Serverless Inference. So now once you've trained your machine learning model, you just run a few lines of code or you just click a few buttons and then you've got an inference endpoint that you do not have to manage. And whether you're doing one query against that endpoint per hour or you're doing 10 million, we'll just scale it on the back end. >> I know we don't have a lot of time left, but I want to get your reaction on this. One of the things about the data lakes not being data swamps has been, from what I've been reporting and hearing from customers, is that they want to retrain their machine learning algorithm. They need that data, they need the real time data, and they need the time series data. Even though the time has passed, they've got to store it in the data lake. So now the data lake's main function is reusing the data to actually retrain. It works properly. So a lot of post mortems turn into actual business improvements to make the machine learning smarter, faster. Do you see that same way? Do you see it the same way? >> Yeah, I think it's really interesting >> Or is that just...
>> No, I think it's totally interesting, because it's convenient to kind of think of analytics as a very clear progression from point A to point B. But really, you're navigating terrain for which you do not have a map, and you need a lot of help to navigate that terrain. And so having these services in place, not having to run the operations of those services, being able to have those services be secure and well governed. And we added PII detection today. It's something you can do automatically, to be able to use any unstructured data, run queries against that unstructured data. So today we added text queries. So you can just say, well, you can scan a badge, for example, and say, well, what's the name on this badge? And you don't have to identify where it is. We'll do all of that work for you. It's more like a branch than it is just a normal A to B path, a linear path. And that includes loops backwards. And sometimes you've got to get the results and use those to make improvements further upstream. And sometimes you've got to use those... And when you're downstream, it will be like, "Ah, I remember that." And you come back and bring it all together. >> Awesome. >> So it's a wonderful world for sure. >> Dr. Matt, we're here in theCUBE. Just take the last word and give the update while you're here: what's the big news happening that you're announcing here at Summit in San Francisco, California, and an update on the business analytics group. >> Yeah, we did a lot of announcements in the keynote. I encourage everyone to take a look at that keynote from this morning with Swami. One of the ones I'm most excited about is the opportunity to be able to take dashboards, visualizations. We're all used to using these things. We see them in our business intelligence tools, all over the place. However, what we've heard from customers is like, yes, I want those analytics, I want that visualization, I want it to be up to date, but I don't actually want to have to go from my tools where I'm actually doing my work to another separate tool to be able to look at that information. And so today we announced 1-click public embedding for QuickSight dashboards. So today, literally as easily as embedding a YouTube video, you can take a dashboard that you've built inside QuickSight, cut and paste the HTML, paste it into your application, and that's it. That's what you have to do. It takes seconds. >> And it gets updated in real time. >> Updated in real time. It's interactive. You can do everything that you would normally do. You can brand it, there's no "powered by QuickSight" button or anything like that. You can change the colors, fit it in perfectly with your application. So that's an incredibly powerful way of being able to take an analytics capability that today sits inside its own little fiefdom and put it just everywhere. Very transformative. >> Awesome. And the business is going well. You got the Serverless detail win for you there. Good stuff. Dr. Matt Wood, thank you for coming on theCUBE. >> Anytime. Thank you. >> Okay, this is theCUBE's coverage of AWS Summit 2022 in San Francisco, California. I'm John Furrier, host of theCUBE. Stay with us for more coverage of day two after this short break. (gentle music)
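The 1-click public embedding Matt describes is a copy-and-paste flow in the QuickSight console. For teams that would rather generate embed URLs programmatically, a hedged sketch using the QuickSight embedding API via boto3 is shown below; the account ID, region, and dashboard ID are placeholders, and this anonymous-user API is a related programmatic path rather than the exact console feature announced in the keynote.

```python
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

# Placeholder identifiers; substitute your own account and dashboard.
ACCOUNT_ID = "111122223333"
DASHBOARD_ID = "my-dashboard-id"
DASHBOARD_ARN = (
    f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:dashboard/{DASHBOARD_ID}"
)

# Request a short-lived URL that renders the dashboard for an anonymous viewer.
response = quicksight.generate_embed_url_for_anonymous_user(
    AwsAccountId=ACCOUNT_ID,
    Namespace="default",
    AuthorizedResourceArns=[DASHBOARD_ARN],
    ExperienceConfiguration={"Dashboard": {"InitialDashboardId": DASHBOARD_ID}},
    SessionLifetimeInMinutes=60,
)

# Drop the returned URL into an iframe in your application, much like
# embedding a YouTube video; the dashboard stays interactive and up to date.
print(response["EmbedUrl"])
```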

Published Date : Apr 21 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Johnny Dallas | PERSON | 0.99+
Andy Jackson | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Dave Velanta | PERSON | 0.99+
Dave | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Matt | PERSON | 0.99+
Adam Selipsky | PERSON | 0.99+
10 million | QUANTITY | 0.99+
$50 million | QUANTITY | 0.99+
Matt Wood | PERSON | 0.99+
60 million | QUANTITY | 0.99+
today | DATE | 0.99+
50% | QUANTITY | 0.99+
five | QUANTITY | 0.99+
Adam | PERSON | 0.99+
two groups | QUANTITY | 0.99+
San Francisco, California | LOCATION | 0.99+
16 | QUANTITY | 0.99+
2013 | DATE | 0.99+
Python | TITLE | 0.99+
1-click | QUANTITY | 0.99+
a year | QUANTITY | 0.99+
Today | DATE | 0.99+
Hadoop | TITLE | 0.99+
ten years | QUANTITY | 0.99+
two disciplines | QUANTITY | 0.99+
New York City | LOCATION | 0.99+
San Francisco, California | LOCATION | 0.99+
an hour | QUANTITY | 0.99+
first | QUANTITY | 0.99+
this year | DATE | 0.99+
CUBE | ORGANIZATION | 0.99+
first time | QUANTITY | 0.98+
50 % | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
millions | QUANTITY | 0.98+
AWS Summit | EVENT | 0.98+
YouTube | ORGANIZATION | 0.98+
memory Lane | LOCATION | 0.98+
Uber | ORGANIZATION | 0.98+
20 year old | QUANTITY | 0.97+
day two | QUANTITY | 0.97+
One | QUANTITY | 0.97+
SageMaker | TITLE | 0.97+
AWS Summit 2022 | EVENT | 0.97+
QuickSight | TITLE | 0.96+
both | QUANTITY | 0.96+
Swami | PERSON | 0.96+
50 years | QUANTITY | 0.96+
one | QUANTITY | 0.96+
SQL | TITLE | 0.95+
Elastic MapReduce | TITLE | 0.95+
Dr. | PERSON | 0.94+
Johnny CUBE | PERSON | 0.93+