Image Title

Search Results for second segment:

Joseph Nelson, Roboflow | Cube Conversation


 

(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are in the cutting edge. I think computer vision's been talked about more as just as much as the large language models and these foundational models are merging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine you're hiring regiment. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what we do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read write access if you think about it with a programmer's mind or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100 so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think it's it, as far as I think about it, I've been fortunate to start companies before, start, sell these sorts of things. This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released AR kit, augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Injure, Reddit, the hacker community as well as Product Hunt really liked it. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla model three won the product of the year. So we joked that we share an award with Elon our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point your phone at a board game. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge. Knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presume that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, we made one that solves chess, you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our tool chain and releasing that externally. And so, that's what RoboFlow became. RoboFlow became the internal tools that we used to make these game changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you actually guys knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody. Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time of every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developer's the new defacto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's sell service and it's good and it's got open source community around it, its all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at recent Mobile World Congress, now called MWC, around 5G versus wifi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium they were using it for ships in international ports. So the question was 5G versus wifi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run it in a fully offline environment. Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5g. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have highly available and maybe high availability. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, the RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time in, you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment of running the model and high availability act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case interview at a company that does computer vision to help people see if anyone's jumping the fence on their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use cases, its so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for commuter vision. You might know language models that's completely different animal because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction? >> Because we're such a developer centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone that like a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identifying, and a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing Master's thesis' being worked on with some of your stuff? Is the uptake in both areas of younger pre-graduates? And then inside the workforce, What are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students as you mentioned, just as much as as industries. So we welcome students and I mean, we have research grants that will regularly support for people to publish. I mean we actually have a channel inside our internal slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is relatively, you know, useful or applicable for the business or the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application. So, that's kind of how we see uptick and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in a pretty short order period of time and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD, multi-year investment endeavor. And now with some of the tooling that we're working on in open source technologies and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, presented creation wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kind of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield or pick a specific niche or focus and making that the signature lever to move the market. >> That's right. >> So can you share your thoughts on the startup scene, other founders out there and talk about that? And then I have a couple questions for like the enterprises, the old school, the existing legacy. Little slower, but the startups are moving fast. What are some of the things you're seeing as startups are emerging in this field? >> I think you make a great point that independent of RoboFlow, very successful, especially developer focused businesses, kind of have three customer types. You have the startups and maybe like series A, series B startups that you're building a product as fast as you can to keep up with them, and they're really moving just as fast as as you are and pulling the product out at you for things that they need. The second segment that you have might be, call it SMB but not enterprise, who are able to purchase and aren't, you know, as fast of moving, but are stable and getting value and able to get to production. And then the third type is enterprise, and that's where you have typically larger contract value sizes, slower moving in terms of adoption and feedback for your product. And I think what you see is that successful companies balance having those three customer personas because you have the small startups, small fast moving upstarts that are discerning buyers who know the market and elect to build on tooling that is best in class. And so you basically kind of pass the smell test of companies who are quite discerning in their purchases, plus are moving so quick they're pulling their product out of you. Concurrently, you have a product that's enterprise ready to service the scalability, availability, and trust of enterprise buyers. And that's ultimately where a lot of companies will see tremendous commercial success. I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product and then the enterprise whom allows you to really fund the commercial success of the business, and validate the size of the opportunity in market that's being creative. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause that's what you're talking about is a category, creating. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always juggle with the VC. I'm like, it's so easy. Your job is so easy to pick the winners. What are you talking about its so easy? I go, just watch what the developers jump on. And it's not about who started, it could be someone in the dorm room to the boardroom person. You don't know because that B to C, the C, it's B to D you know? You know it's developer 'cause that's a human right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision? And what's the state of the growth of that portion of this piece? >> Multi modality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multi modality. In computer vision, multi modality is predominantly happening with images and text. And one of the biggest releases in this space is actually two years old now, was clip, contrastive language image pre-training, which took 400 million image text pairs and basically instead of previously when you do classification, you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What clip did is used, you can think about it like, the class for an image being the Instagram caption for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with a computer vision, you can get there more quickly. It also opens up the, I mean. Clip has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multi modality being a theme simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured or it had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multi modality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define clip for the folks that don't know what that means? >> Yeah. Clip is a model architecture, it's an acronym for contrastive language image pre-training and like, you know, model architectures that have come before it captures the almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of clip, just at bigger sizes of bigger encoding's, of longer strings of texture, or larger image windows. But it's been a really exciting advancement that OpenAI released in January, 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's law doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard charging, but laid back. That's my take. How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about giving decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it with the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a preponderance to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just necessarily a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. What are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring all around. Give a quick plug for the company what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that tend themselves to being those that build the product for developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles like we call it field engineering, where you are a technical resource to drive success within customer accounts. And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention John that's kind of novel about the thing that's working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work and there's clusters of people, regular onsite's. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team numbers have started to pull together these resources and rent out like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call this lighthouses because you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)

Published Date : Mar 3 2023

SUMMARY :

Welcome to this CUBE conversation You're in the middle of it. And the wave is still building the company is that you're doing. maybe 2% of the whole economy And as you know, when you it kind of was obvious to you guys? cognizant of the fact that I love that because I think, you know, And so what you do is issue on the infrastructure. and the drone will go and the marketplace when you say, in the sushi that you're eating. And so having the And can you talk about the use case is relatively, you know, and making that the signature What are some of the things you're seeing and pulling the product out at you because of the capabilities, right? in the ways that you the C, it's B to D you know? And one of the biggest releases And can you just real quick, and like, you know, I mean, if you had to like that is because the problems Give a quick plug for the place to be where you can the self-governing begin, you know? and developers are driving the change.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BradPERSON

0.99+

JosephPERSON

0.99+

Joseph NelsonPERSON

0.99+

January, 2021DATE

0.99+

John FurrierPERSON

0.99+

MedtronicORGANIZATION

0.99+

WalmartORGANIZATION

0.99+

2019DATE

0.99+

UberORGANIZATION

0.99+

AppleORGANIZATION

0.99+

JohnPERSON

0.99+

400 millionQUANTITY

0.99+

Evan SpiegelPERSON

0.99+

24 monthsQUANTITY

0.99+

2017DATE

0.99+

RoboFlowORGANIZATION

0.99+

15 minutesQUANTITY

0.99+

RivianORGANIZATION

0.99+

12 monthsQUANTITY

0.99+

20%QUANTITY

0.99+

Cardinal HealthORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

BarcelonaLOCATION

0.99+

WimbledonEVENT

0.99+

roboflow.com/careersOTHER

0.99+

firstQUANTITY

0.99+

second segmentQUANTITY

0.99+

each teamQUANTITY

0.99+

six monthsQUANTITY

0.99+

bothQUANTITY

0.99+

IntelORGANIZATION

0.99+

both worldsQUANTITY

0.99+

2%QUANTITY

0.99+

two years laterDATE

0.98+

Mobile World CongressEVENT

0.98+

UbersORGANIZATION

0.98+

third wayQUANTITY

0.98+

oneQUANTITY

0.98+

a weekQUANTITY

0.98+

Magic SudokuTITLE

0.98+

secondQUANTITY

0.98+

NvidiaORGANIZATION

0.98+

SudokuTITLE

0.98+

MWCEVENT

0.97+

todayDATE

0.97+

billion dollarQUANTITY

0.97+

one single thingQUANTITY

0.97+

over a hundred thousand developersQUANTITY

0.97+

fourQUANTITY

0.97+

thirdQUANTITY

0.96+

ElonORGANIZATION

0.96+

third thingQUANTITY

0.96+

TeslaORGANIZATION

0.96+

JetsonCOMMERCIAL_ITEM

0.96+

ElonPERSON

0.96+

RoboFlowTITLE

0.96+

InstagramORGANIZATION

0.95+

TwilioORGANIZATION

0.95+

twentiesQUANTITY

0.95+

Product Hunt ARTITLE

0.95+

MoorePERSON

0.95+

both researchersQUANTITY

0.95+

one thingQUANTITY

0.94+

How to Make a Data Fabric Smart A Technical Demo With Jess Jowdy


 

(inspirational music) (music ends) >> Okay, so now that we've heard Scott talk about smart data fabrics, it's time to see this in action. Right now we're joined by Jess Jowdy, who's the manager of Healthcare Field Engineering at InterSystems. She's going to give a demo of how smart data fabrics actually work, and she's going to show how embedding a wide range of analytics capabilities, including data exploration business intelligence, natural language processing and machine learning directly within the fabric makes it faster and easier for organizations to gain new insights and power intelligence predictive and prescriptive services and applications. Now, according to InterSystems, smart data fabrics are applicable across many industries from financial services to supply chain to healthcare and more. Jess today is going to be speaking through the lens of a healthcare focused demo. Don't worry, Joe Lichtenberg will get into some of the other use cases that you're probably interested in hearing about. That will be in our third segment, but for now let's turn it over to Jess. Jess, good to see you. >> Hi, yeah, thank you so much for having me. And so for this demo, we're really going to be bucketing these features of a smart data fabric into four different segments. We're going to be dealing with connections, collections, refinements, and analysis. And so we'll see that throughout the demo as we go. So without further ado, let's just go ahead and jump into this demo, and you'll see my screen pop up here. I actually like to start at the end of the demo. So I like to begin by illustrating what an end user's going to see, and don't mind the screen 'cause I gave you a little sneak peek of what's about to happen. But essentially what I'm going to be doing is using Postman to simulate a call from an external application. So we talked about being in the healthcare industry. This could be, for instance, a mobile application that a patient is using to view an aggregated summary of information across that patient's continuity of care or some other kind of application. So we might be pulling information in this case from an electronic medical record. We might be grabbing clinical history from that. We might be grabbing clinical notes from a medical transcription software, or adverse reaction warnings from a clinical risk grouping application, and so much more. So I'm really going to be simulating a patient logging in on their phone and retrieving this information through this Postman call. So what I'm going to do is I'm just going to hit send, I've already preloaded everything here, and I'm going to be looking for information where the last name of this patient is Simmons, and their medical record number or their patient identifier in the system is 32345. And so as you can see, I have this single JSON payload that showed up here of, just, relevant clinical information for my patient whose last name is Simmons, all within a single response. So fantastic, right? Typically though, when we see responses that look like this there is an assumption that this service is interacting with a single backend system, and that single backend system is in charge of packaging that information up and returning it back to this caller. But in a smart data fabric architecture, we're able to expand the scope to handle information across different, in this case, clinical applications. So how did this actually happen? Let's peel back another layer and really take a look at what happened in the background. What you're looking at here is our mission control center for our smart data fabric. On the left we have our APIs that allow users to interact with particular services. On the right we have our connections to our different data silos. And in the middle here, we have our data fabric coordinator which is going to be in charge of this refinement and analysis, those key pieces of our smart data fabric. So let's look back and think about the example we just showed. I received an inbound request for information for a patient whose last name is Simmons. My end user is requesting to connect to that service, and that's happening here at my patient data retrieval API location. Users can define any number of different services and APIs depending on their use cases. And to that end, we do also support full life cycle API management within this platform. When you're dealing with APIs, I always like to make a little shout out on this, that you really want to make sure you have enough, like a granular enough security model to handle and limit which APIs and which services a consumer can interact with. In this IRIS platform, which we're talking about today we have a very granular role-based security model that allows you to handle that, but it's really important in a smart data fabric to consider who's accessing your data and in what context. >> Can I just interrupt you for a second, Jess? >> Yeah, please. >> So you were showing on the left hand side of the demo a couple of APIs. I presume that can be a very long list. I mean, what do you see as typical? >> I mean you could have hundreds of these APIs depending on what services an organization is serving up for their consumers. So yeah, we've seen hundreds of these services listed here. >> So my question is, obviously security is critical in the healthcare industry, and API securities are like, really hot topic these days. How do you deal with that? >> Yeah, and I think API security is interesting 'cause it can happen at so many layers. So, there's interactions with the API itself. So can I even see this API and leverage it? And then within an API call, you then have to deal with all right, which end points or what kind of interactions within that API am I allowed to do? What data am I getting back? And with healthcare data, the whole idea of consent to see certain pieces of data is critical. So, the way that we handle that is, like I said, same thing at different layers. There is access to a particular API, which can happen within the IRIS product, and also we see it happening with an API management layer, which has become a really hot topic with a lot of organizations. And then when it comes to data security, that really happens under the hood within your smart data fabric. So, that role-based access control becomes very important in assigning, you know, roles and permissions to certain pieces of information. Getting that granular becomes the cornerstone of the security. >> And that's been designed in, it's not a bolt on as they like to say. >> Absolutely. >> Okay, can we get into collect now? >> Of course, we're going to move on to the collection piece at this point in time, which involves pulling information from each of my different data silos to create an overall aggregated record. So commonly, each data source requires a different method for establishing connections and collecting this information. So for instance, interactions with an EMR may require leveraging a standard healthcare messaging format like Fire. Interactions with a homegrown enterprise data warehouse for instance, may use SQL. For a cloud-based solutions managed by a vendor, they may only allow you to use web service calls to pull data. So it's really important that your data fabric platform that you're using has the flexibility to connect to all of these different systems and applications. And I'm about to log out, so I'm going to (chuckles) keep my session going here. So therefore it's incredibly important that your data fabric has the flexibility to connect to all these different kinds of applications and data sources, and all these different kinds of formats and over all of these different kinds of protocols. So let's think back on our example here. I had four different applications that I was requesting information for to create that payload that we saw initially. Those are listed here under this operations section. So these are going out and connecting to downstream systems to pull information into my smart data fabric. What's great about the IRIS platform is, it has an embedded interoperability platform. So there's all of these native adapters that can support these common connections that we see for different kinds of applications. So using REST, or SOAP, or SQL, or FTP, regardless of that protocol, there's an adapter to help you work with that. And we also think of the types of formats that we typically see data coming in as in healthcare we have HL7, we have Fire, we have CCDs, across the industry, JSON is, you know, really hitting a market strong now, and XML payloads, flat files. We need to be able to handle all of these different kinds of formats over these different kinds of protocols. So to illustrate that, if I click through these when I select a particular connection on the right side panel, I'm going to see the different settings that are associated with that particular connection that allows me to collect information back into my smart data fabric. In this scenario, my connection to my chart script application in this example, communicates over a SOAP connection. When I'm grabbing information from my clinical risk grouping application I'm using a SQL based connection. When I'm connecting to my EMR, I'm leveraging a standard healthcare messaging format known as Fire, which is a REST based protocol. And then when I'm working with my health record management system, I'm leveraging a standard HTTP adapter. So you can see how we can be flexible when dealing with these different kinds of applications and systems. And then it becomes important to be able to validate that you've established those connections correctly, and be able to do it in a reliable and quick way. Because if you think about it, you could have hundreds of these different kinds of applications built out and you want to make sure that you're maintaining and understanding those connections. So I can actually go ahead and test one of these applications and put in, for instance my patient's last name and their MRN, and make sure that I'm actually getting data back from that system. So it's a nice little sanity check as we're building out that data fabric to ensure that we're able to establish these connections appropriately. So turnkey adapters are fantastic, as you can see we're leveraging them all here, but sometimes these connections are going to require going one step further and building something really specific for an application. So why don't we go one step further here and talk about doing something custom or doing something innovative. And so it's important for users to have the ability to develop and go beyond what's an out-of-the box or black box approach to be able to develop things that are specific to their data fabric, or specific to their particular connection. In this scenario, the IRIS data platform gives users access to the entire underlying code base. So you not only get an opportunity to view how we're establishing these connections or how we're building out these processes, but you have the opportunity to inject your own kind of processing, your own kinds of pipelines into this. So as an example, you can leverage any number of different programming languages right within this pipeline. And so I went ahead and I injected Python. So Python is a very up and coming language, right? We see more and more developers turning towards Python to do their development. So it's important that your data fabric supports those kinds of developers and users that have standardized on these kinds of programming languages. This particular script here, as you can see actually calls out to our turnkey adapters. So we see a combination of out-of-the-box code that is provided in this data fabric platform from IRIS, combined with organization specific or user specific customizations that are included in this Python method. So it's a nice little combination of how do we bring the developer experience in and mix it with out-of-the-box capabilities that we can provide in a smart data fabric. >> Wow. >> Yeah, I'll pause. (laughs) >> It's a lot here. You know, actually- >> I can pause. >> If I could, if we just want to sort of play that back. So we went to the connect and the collect phase. >> Yes, we're going into refine. So it's a good place to stop. >> So before we get there, so we heard a lot about fine grain security, which is crucial. We heard a lot about different data types, multiple formats. You've got, you know, the ability to bring in different dev tools. We heard about Fire, which of course big in healthcare. And that's the standard, and then SQL for traditional kind of structured data, and then web services like HTTP you mentioned. And so you have a rich collection of capabilities within this single platform. >> Absolutely. And I think that's really important when you're dealing with a smart data fabric because what you're effectively doing is you're consolidating all of your processing, all of your collection, into a single platform. So that platform needs to be able to handle any number of different kinds of scenarios and technical challenges. So you've got to pack that platform with as many of these features as you can to consolidate that processing. >> All right, so now we're going into refinement. >> We're going into refinement. Exciting. (chuckles) So how do we actually do refinement? Where does refinement happen? And how does this whole thing end up being performant? Well the key to all of that is this SDF coordinator, or stands for Smart Data Fabric coordinator. And what this particular process is doing is essentially orchestrating all of these calls to all of these different downstream systems. It's aggregating, it's collecting that information, it's aggregating it, and it's refining it into that single payload that we saw get returned to the user. So really this coordinator is the main event when it comes to our data fabric. And in the IRIS platform we actually allow users to build these coordinators using web-based tool sets to make it intuitive. So we can take a sneak peek at what that looks like. And as you can see, it follows a flow chart like structure. So there's a start, there is an end, and then there are these different arrows that point to different activities throughout the business process. And so there's all these different actions that are being taken within our coordinator. You can see an action for each of the calls to each of our different data sources to go retrieve information. And then we also have the sync call at the end that is in charge of essentially making sure that all of those responses come back before we package them together and send them out. So this becomes really crucial when we're creating that data fabric. And you know, this is a very simple data fabric example where we're just grabbing data and we're consolidating it together. But you can have really complex orchestrators and coordinators that do any number of different things. So for instance, I could inject SQL logic into this or SQL code, I can have conditional logic, I can do looping, I can do error trapping and handling. So we're talking about a whole number of different features that can be included in this coordinator. So like I said, we have a really very simple process here that's just calling out, grabbing all those different data elements from all those different data sources and consolidating it. We'll look back at this coordinator in a second when we introduce, or we make this data fabric a bit smarter, and we start introducing that analytics piece to it. So this is in charge of the refinement. And so at this point in time we've looked at connections, collections, and refinements. And just to summarize what we've seen 'cause I always like to go back and take a look at everything that we've seen. We have our initial API connection, we have our connections to our individual data sources and we have our coordinators there in the middle that are in charge of collecting the data and refining it into a single payload. As you can imagine, there's a lot going on behind the scenes of a smart data fabric, right? There's all these different processes that are interacting. So it's really important that your smart data fabric platform has really good traceability, really good logging, 'cause you need to be able to know, you know, if there was an issue, where did that issue happen in which connected process, and how did it affect the other processes that are related to it? In IRIS, we have this concept called a visual trace. And what our clients use this for is basically to be able to step through the entire history of a request from when it initially came into the smart data fabric, to when data was sent back out from that smart data fabric. So I didn't record the time, but I bet if you recorded the time it was this time that we sent that request in and you can see my patient's name and their medical record number here, and you can see that that instigated four different calls to four different systems, and they're represented by these arrows going out. So we sent something to chart script, to our health record management system, to our clinical risk grouping application, into my EMR through their Fire server. So every request, every outbound application gets a request and we pull back all of those individual pieces of information from all of those different systems, and we bundle them together. And from my Fire lovers, here's our Fire bundle that we got back from our Fire server. So this is a really good way of being able to validate that I am appropriately grabbing the data from all these different applications and then ultimately consolidating it into one payload. Now we change this into a JSON format before we deliver it, but this is those data elements brought together. And this screen would also be used for being able to see things like error trapping, or errors that were thrown, alerts, warnings, developers might put log statements in just to validate that certain pieces of code are executing. So this really becomes the one stop shop for understanding what's happening behind the scenes with your data fabric. >> Sure, who did what when where, what did the machine do what went wrong, and where did that go wrong? Right at your fingertips. >> Right. And I'm a visual person so a bunch of log files to me is not the most helpful. While being able to see this happened at this time in this location, gives me that understanding I need to actually troubleshoot a problem. >> This business orchestration piece, can you say a little bit more about that? How people are using it? What's the business impact of the business orchestration? >> The business orchestration, especially in the smart data fabric, is really that crucial part of being able to create a smart data fabric. So think of your business orchestrator as doing the heavy lifting of any kind of processing that involves data, right? It's bringing data in, it's analyzing that information it's transforming that data, in a format that your consumer's not going to understand. It's doing any additional injection of custom logic. So really your coordinator or that orchestrator that sits in the middle is the brains behind your smart data fabric. >> And this is available today? It all works? >> It's all available today. Yeah, it all works. And we have a number of clients that are using this technology to support these kinds of use cases. >> Awesome demo. Anything else you want to show us? >> Well, we can keep going. I have a lot to say, but really this is our data fabric. The core competency of IRIS is making it smart, right? So I won't spend too much time on this, but essentially if we go back to our coordinator here, we can see here's that original, that pipeline that we saw where we're pulling data from all these different systems and we're collecting it and we're sending it out. But then we see two more at the end here, which involves getting a readmission prediction and then returning a prediction. So we can not only deliver data back as part of a smart data fabric, but we can also deliver insights back to users and consumers based on data that we've aggregated as part of a smart data fabric. So in this scenario, we're actually taking all that data that we just looked at, and we're running it through a machine learning model that exists within the smart data fabric pipeline, and producing a readmission score to determine if this particular patient is at risk for readmission within the next 30 days. Which is a typical problem that we see in the healthcare space. So what's really exciting about what we're doing in the IRIS world, is we're bringing analytics close to the data with integrated ML. So in this scenario we're actually creating the model, training the model, and then executing the model directly within the IRIS platform. So there's no shuffling of data, there's no external connections to make this happen. And it doesn't really require having a PhD in data science to understand how to do that. It leverages all really basic SQL-like syntax to be able to construct and execute these predictions. So, it's going one step further than the traditional data fabric example to introduce this ability to define actionable insights to our users based on the data that we've brought together. >> Well that readmission probability is huge, right? Because it directly affects the cost for the provider and the patient, you know. So if you can anticipate the probability of readmission and either do things at that moment, or, you know, as an outpatient perhaps, to minimize the probability then that's huge. That drops right to the bottom line. >> Absolutely. And that really brings us from that data fabric to that smart data fabric at the end of the day, which is what makes this so exciting. >> Awesome demo. >> Thank you! >> Jess, are you cool if people want to get in touch with you? Can they do that? >> Oh yes, absolutely. So you can find me on LinkedIn, Jessica Jowdy, and we'd love to hear from you. I always love talking about this topic so we'd be happy to engage on that. >> Great stuff. Thank you Jessica, appreciate it. >> Thank you so much. >> Okay, don't go away because in the next segment, we're going to dig into the use cases where data fabric is driving business value. Stay right there. (inspirational music) (music fades)

Published Date : Feb 22 2023

SUMMARY :

and she's going to show And to that end, we do also So you were showing hundreds of these APIs depending in the healthcare industry, So can I even see this as they like to say. that are specific to their data fabric, Yeah, I'll pause. It's a lot here. So we went to the connect So it's a good place to stop. So before we get So that platform needs to All right, so now we're that are related to it? Right at your fingertips. I need to actually troubleshoot a problem. of being able to create of clients that are using this technology Anything else you want to show us? So in this scenario, we're and the patient, you know. And that really brings So you can find me on Thank you Jessica, appreciate it. in the next segment,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Joe LichtenbergPERSON

0.99+

Jessica JowdyPERSON

0.99+

JessicaPERSON

0.99+

Jess JowdyPERSON

0.99+

InterSystemsORGANIZATION

0.99+

ScottPERSON

0.99+

PythonTITLE

0.99+

SimmonsPERSON

0.99+

JessPERSON

0.99+

32345OTHER

0.99+

hundredsQUANTITY

0.99+

IRISORGANIZATION

0.99+

eachQUANTITY

0.99+

todayDATE

0.99+

LinkedInORGANIZATION

0.99+

third segmentQUANTITY

0.98+

FireCOMMERCIAL_ITEM

0.98+

SQLTITLE

0.98+

single platformQUANTITY

0.97+

each dataQUANTITY

0.97+

oneQUANTITY

0.97+

singleQUANTITY

0.95+

single responseQUANTITY

0.94+

single backend systemQUANTITY

0.92+

two moreQUANTITY

0.92+

four different segmentsQUANTITY

0.89+

APIsQUANTITY

0.88+

one stepQUANTITY

0.88+

fourQUANTITY

0.85+

Healthcare Field EngineeringORGANIZATION

0.82+

JSONTITLE

0.8+

single payloadQUANTITY

0.8+

secondQUANTITY

0.79+

one payloadQUANTITY

0.76+

next 30 daysDATE

0.76+

IRISTITLE

0.75+

FireTITLE

0.72+

PostmanTITLE

0.71+

everyQUANTITY

0.68+

four different callsQUANTITY

0.66+

JesPERSON

0.66+

a secondQUANTITY

0.61+

servicesQUANTITY

0.6+

evelopersPERSON

0.58+

PostmanORGANIZATION

0.54+

HL7OTHER

0.4+

Today’s Data Challenges and the Emergence of Smart Data Fabrics


 

(intro music) >> Now, as we all know, businesses are awash with data, from financial services to healthcare to supply chain and logistics and more. Our activities, and increasingly, actions from machines are generating new and more useful information in much larger volumes than we've ever seen. Now, meanwhile, our data-hungry society's expectations for experiences are increasingly elevated. Everybody wants to leverage and monetize all this new data coming from smart devices and innumerable sources around the globe. All this data, it surrounds us, but more often than not, it lives in silos, which makes it very difficult to consume, share, and make valuable. These factors, combined with new types of data and analytics, make things even more complicated. Data from ERP systems to images, to data generated from deep learning and machine learning platforms, this is the reality that organizations are facing today. And as such, effectively leveraging all of this data has become an enormous challenge. So, today, we're going to be discussing these modern data challenges and the emergence of so-called "Smart Data Fabrics" as a key solution to said challenges. To do so, we're joined by thought leaders from InterSystems. This is a really creative technology provider that's attacking some of the most challenging data obstacles. InterSystems tells us that they're dedicated to helping customers address their critical scalability, interoperability, and speed-to-value challenges. And in this first segment, we welcome Scott Gnau, he's the global Head of Data Platforms at InterSystems, to discuss the context behind these issues and how smart data fabrics provide a solution. Scott, welcome. Good to see you again. >> Thanks a lot. It's good to be here. >> Yeah. So, look, you and I go back, you know, several years and, you know, you've worked in Tech, you've worked in Data Management your whole career. You've seen many data management solutions, you know, from the early days. And then we went through the hoop, the Hadoop era together and you've come across a number of customer challenges that sort of change along the way. And they've evolved. So, what are some of the most pressing issues that you see today when you're talking to customers and, you know, put on your technical hat if you want to. >> (chuckles) Well, Dave, I think you described it well. It's a perfect storm out there. You know, combined with there's just data everywhere and it's coming up on devices, it's coming from new different kinds of paradigms of processing and people are trying to capture and harness the value from this data. At the same time, you talked about silos and I've talked about data silos through my entire career. And I think, I think the interesting thing about it is for so many years we've talked about, "We've got to reduce the silos and we've got to integrate the data, we've got to consolidate the data." And that was a really good paradigm for a long time. But frankly, the perfect storm that you described? The sources are just too varied. The required agility for a business unit to operate and manage their customers is creating an enormous presser and I think ultimately, silos aren't going away. So, there's a realization that, "Okay, we're going to have these silos, we want to manage them, but how do we really take advantage of data that may live across different parts of our business and in different organizations?" And then of course, the expectation of the consumer is at an all-time high, right? They expect that we're going to treat them and understand their needs or they're going to find some other provider. So, you know, pulling all of this together really means that, you know, our customers and businesses around the world are struggling to keep up and it's forcing a real, a new paradigm shift in underlying data management, right? We started, you know, many, many years ago with data marts and then data warehouses and then we graduated to data lakes, where we expanded beyond just traditional transactional data into all kinds of different data. And at each step along the way, we help businesses to thrive and survive and compete and win. But with the perfect storm that you've described, I think those technologies are now just a piece of the puzzle that is really required for success. And this is really what's leading to data fabrics and data meshes in the industry. >> So what are data fabrics? What problems do they solve? How do they work? Can you just- >> Yeah. So the idea behind it is, and this is not to the exclusion of other technologies that I described in data warehouses and data lakes and so on, but data fabrics kind of take the best of those worlds but add in the notion of being able to do data connectivity with provenance as a way to integrate data versus data consolidation. And when you think about it, you know, data has gravity, right? It's expensive to move data. It's expensive in terms of human cost to do ETL processes where you don't have known provenance of data. So, being able to play data where it lies and connect the information from disparate systems to learn new things about your business is really the ultimate goal. You think about in the world today, we hear about issues with the supply chain and supply and logistics is a big issue, right? Why is that an issue? Because all of these companies are data-driven. They've got lots of access to data. They have formalized and automated their processes, they've installed software, and all of that software is in different systems within different companies. But being able to connect that information together, without changing the underlying system, is an important way to learn and optimize for supply and logistics, as an example. And that's a key use case for data fabrics. Being able to connect, have provenance, not interfere with the operational system, but glean additional knowledge by combining multiple different operational systems' data together. >> And to your point, data is by its very nature, you know, distributed around the globe, it's on different clouds, it's in different systems. You mentioned "data mesh" before. How do data fabrics relate to this concept of data mesh? Are they competing? Are they complimentary? >> Ultimately, we think that they're complimentary. And we actually like to talk about smart data fabrics as a way to kind of combine the best of the two worlds. >> What is that? >> The biggest thing really is there's a lot around data fabric architecture that talks about centralized processing. And in data meshes, it's more about distributed processing. Ultimately, we think a smart data fabric will support both and have them be interchangeable and be able to be used where it makes the most sense. There are some things where it makes sense to process, you know, for a local business unit, or even on a device for real-time kinds of implementations. There are some other areas where centralized processing of multiple different data sources make sense. And what we're saying is, "Your technology and the architecture that you define behind that technology should allow for both where they make the most sense." >> What's the bottom line business benefit of implementing a data fabric? What can I expect if I go that route? >> I think there are a couple of things, right? Certainly, being able to interact with customers in real time and being able to manage through changes in the marketplace is certainly a key concept. Time-to-value is another key concept. You know, if you think about the supply and logistics discussion that I had before, right? No company is going to rewrite their ERP operational system. It's how they manage and run their business. But being able to glean additional insights from that data combined with data from a partner combined with data from a customer or combined with algorithmic data that, you know, you may create some sort of forecast and that you want to fit into. And being able to combine that together without interfering with the operational process and get those answers quickly is an important thing. So, seeing through the silos and being able to do the connectivity, being able to have interoperability, and then, combining that with flexibility on the analytics and flexibility on the algorithms you might want to run against that data. Because in today's world, of course, you know, certainly there's the notion of predictive modeling and relational theory, but also now adding in machine learning, deep learning algorithms, and have all of those things kind of be interchangeable is another important concept behind data fabric. So you're not relegated to one type of processing. You're saying, "It's data and I have multiple different processing engines and I may want to interchange them over time." >> So, I know, well actually, you know, when you said "real time", I infer from that, I don't have a zillion copies of the data and it's not in a bunch of silos. Is that a correct premise? >> You try to minimize your copies of the data? >> Yeah. Okay. >> There's certainly, there's a nirvana that says, "There's only ever one copy of data." That's probably impossible. But you certainly don't want to be forced into making multiple copies of data to support different processing engines unnecessarily. >> And so, you've recently made some enhancements to the data fabric capability that takes it, you know, ostensibly to the next level. Is that the smart piece? Is that machine intelligence? Can you describe what's in there? >> Well, you know, ultimately, the business benefit is be able to have a single source of the truth for a company. And so, what we're doing is combining multiple technologies in a single set of software that makes that software agile and supportable and not fragile for deployment of applications. At its core, what we're saying is, you know, we want to be able to consume any kind of data and I think your data fabric architecture is predicated on the fact that you're going to have relational data, you're going to have document data, you may have key-value store data, you may have images, you may have other things, and you want to be able to not be limited by the kind of data that you want to process. And so that certainly is what we build into our product set. And then, you want to be able to have any kind of algorithm, where appropriate, run against that data without having to do a bunch of massive ETL processes or make another copy of the data and move it somewhere else. And so, to that end, we have, taking our award-winning engine, which, you know, provides, you know, traditional analytic capabilities and relational capabilities, we've now integrated machine learning. So, you basically can bring machine learning algorithms to the data without having to move data to the machine learning algorithm. What does that mean? Well, number one, your application developer doesn't have to think differently to take advantage of the new algorithm. So that's a really good thing. The other thing that happens is if you, you're playing that algorithm where the data actually exists from your operational system, that means the round trip from running the model to inferring some decision you want to make to actually implementing that decision can happen instantaneously, as opposed to, you know, other kinds of architectures, where you may want to make a copy of the data and move it somewhere else. That takes time, latency. Now the data gets stale, your model may not be as efficient because you're running against stale data. We've now taken all of that off the table by being able to pull that processing inside the data fabric, inside of the single source of truth. >> And you got to manage all that complexity. So you got one system, so that makes it, you know, cost-effective, and you're bringing modern tooling to the platform. Is that right? >> That's correct. >> How can people learn more and maybe continue the conversation with you if they have other questions? (both chuckle) >> Call or write. >> Yeah. >> Yeah, I mean, certainly, check out our website. We've got a lot of information about the different kinds of solutions, the different industries, the different technologies. Reach out: scottg@intersystems.com. >> Excellent. Thank you, Scott. Really appreciate it and great to see you again. >> Good to see you. >> All right, keep it right there. We have a demo coming up next. You want to see smart data fabrics in action? Stay tuned. (ambient music)

Published Date : Feb 17 2023

SUMMARY :

Good to see you again. It's good to be here. and I go back, you know, and data meshes in the industry. and this is not to the exclusion data is by its very nature, you know, the best of the two worlds. and be able to be used where and that you want to fit into. and it's not in a bunch of silos. But you certainly don't want to be forced Is that the smart piece? and you want to be able to not be limited so that makes it, you about the different kinds of solutions, great to see you again. data fabrics in action?

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ScottPERSON

0.99+

InterSystemsORGANIZATION

0.99+

DavePERSON

0.99+

Scott GnauPERSON

0.99+

scottg@intersystems.comOTHER

0.99+

one systemQUANTITY

0.99+

bothQUANTITY

0.99+

one copyQUANTITY

0.99+

todayDATE

0.98+

first segmentQUANTITY

0.98+

singleQUANTITY

0.97+

each stepQUANTITY

0.96+

two worldsQUANTITY

0.96+

single sourceQUANTITY

0.96+

single setQUANTITY

0.94+

TodayDATE

0.91+

many years agoDATE

0.84+

zillion copiesQUANTITY

0.73+

one typeQUANTITY

0.71+

oneQUANTITY

0.64+

Today’s Data Challenges and the Emergence of Smart Data Fabrics


 

(upbeat music) >> Now, as we all know, businesses are awash with data, from financial services to healthcare to supply chain and logistics and more. Our activities, and increasingly, actions from machines are generating new and more useful information in much larger volumes than we've ever seen. Now, meanwhile, our data hungry society's expectations for experiences are increasingly elevated. Everybody wants to leverage and monetize all this new data coming from smart devices and innumerable sources around the globe. All this data, it surrounds us, but more often than not, it lives in silos, which makes it very difficult to consume, share, and make valuable. These factors combined with new types of data and analytics make things even more complicated. Data from ERP systems to images, to data generated from deep learning and machine learning platforms, this is the reality that organizations are facing today. And as such, effectively leveraging all of this data has become an enormous challenge. So today, we're going to be discussing these modern data challenges in the emergence of so-called smart data fabrics as a key solution to said challenges. To do so, we're joined by thought leaders from InterSystems. This is a really creative technology provider that's attacking some of the most challenging data obstacles. InterSystems tells us that they're dedicated to helping customers address their critical scalability, interoperability, and speed to value challenges. And in this first segment, we welcome Scott now. He's the global head of data platforms at InterSystems to discuss the context behind these issues and how smart data fabrics provide a solution. Scott, welcome, good to see you again. >> Thanks a lot. It's good to be here. >> Yeah, so look, you and I go back, you know, several years and you've worked in tech. You've worked in data management your whole career. You've seen many data management solutions, you know, from the early days. And then we went through the Hadoop era together. And you've come across a number of customer challenges that sort of changed along the way, and they've evolved. So what are some of the most pressing issues that you see today when you're talking to customers, and, you know, put on your technical hat if you want to? >> Well, Dave, I think you described it well. It's a perfect storm out there, you know, combined with, there's just data everywhere. And it's coming up on devices, it's coming from new different kinds of paradigms of processing and people are trying to capture and harness the value from this data. At the same time, you talked about silos, and I've talked about data silos through my entire career. And I think the interesting thing about it is for so many years we've talked about we've got to reduce the silos, and we've got to integrate the data, we've got to consolidate the data. And that was a really good paradigm for a long time. But frankly, the perfect storm that you described, the sources are just too varied. The required agility for a business unit to operate and manage their customers is creating an enormous pressure. And I think, ultimately, silos aren't going away. So there's a realization that, okay, we're going to have these silos, we want to manage them, but how do we really take advantage of data that may live across different parts of our business and in different organizations? And then, of course, the expectation of the consumer is at an all-time high, right? They expect that we're going to treat them and understand their needs, or they're going to find some other provider. So, you know, pulling all of this together really means that, you know, our customers and businesses around the world are struggling to keep up, and it's forcing a new paradigm shift in underlying data management, right? We started, you know, many, many years ago with data marts and then data warehouses, and then we graduated to data lakes where we expanded beyond just traditional transactional data into all kinds of different data. And at each step along the way, we help businesses to thrive and survive and compete and win. But with the perfect storm that you've described, I think those technologies are now just a piece of the puzzle that is really required for success. And this is really what's leading to data fabrics and data meshes in the industry. >> So what are data fabrics? What problems do they solve? How do they work? Can you just add- >> Yeah, so the idea behind it is, and this is not to the exclusion of other technologies that I described in data warehouses and data lakes and so on. But data fabrics kind of take the best of those worlds, but add in the notion of being able to do data connectivity with provenance as a way to integrate data versus data consolidation. And when you think about it, you know, data has gravity, right? It's expensive to move data. It's expensive in terms of human cost to do ETL processes where you don't have known provenance of data. So being able to play data where it lies and connect the information from disparate systems to learn new things about your business is really the ultimate goal. You think about in the world today, we hear about issues with the supply chain, and supply and logistics is a big issue, right? Why is that an issue? Because all of these companies are data driven. They've got lots of access to data. They have formalized and automated their processes. They've installed software. And all of that software is in different systems within different companies. But being able to connect that information together without changing the underlying system is an important way to learn and optimize for supply and logistics, as an example. And that's a key use case for data fabrics being able to connect, have provenance, not interfere with the operational system, but glean additional knowledge by combining multiple different operational systems' data together. >> And to your point, data is by its very nature, you're distributed around the globe, it's on different clouds, it's in different systems. You mentioned data mesh before. How do data fabrics relate to this concept of data mesh? Are they competing? Are they complimentary? >> Ultimately, we think that they're complimentary. And we actually like to talk about smart data fabrics as a way to kind of combine the best of the two worlds. >> What is that? I mean, the biggest thing really is there's a lot around data fabric architecture that talks about centralized processing. And in data meshes, it's more about distributed processing. Ultimately, we think a smart data fabric will support both and have them be interchangeable and be able to be used where it makes the most sense. There are some things where it makes sense to process, you know, for a local business unit, or even on a device for real time kinds of implementations. There are some other areas where centralized processing of multiple different data sources make sense. And what we're saying is your technology and the architecture that you define behind that technology should allow for both where they make the most sense. >> What's the bottom line business benefit of implementing a data fabric? What can I expect if I go that route? >> I think there are a couple of things, right? Certainly being able to interact with customers in real time and being able to manage through changes in the marketplace is certainly a key concept. Time to value is another key concept. You know, if you think about the supply and logistics discussion that I had before, right? No company is going to rewrite their ERP operational system. It's how they manage and run their business. But being able to glean additional insights from that data combined with data from a partner, combined with data from a customer, or combined with algorithmic data that, you know, you may create some sort of forecast and that you want to fit into. And being able to combine that together without interfering with the operational process and get those answers quickly is an important thing. So seeing through the silos and being able to do the connectivity being able to have interoperability, and then combining that with flexibility on the analytics and flexibility on the algorithms you might want to run against that data. Because in today's world, of course, certainly there's the notion of predictive modeling and relational theory, but also now adding in machine learning, deep learning algorithms, and have all of those things kind of be interchangeable is another important concept behind data fabrics. So you're not relegated to one type of processing. You're saying it's data, and I have multiple different processing engines and I may want to interchange them over time. >> So, I know, well actually, when you said real time, I infer from that I don't have a zillion copies of the data and it's not in a bunch of silos. Is that a correct premise? >> You try to minimize your copies of the data. There's a nirvana that says there's only ever one copy of data. That's probably impossible. But you certainly don't want to be forced into making multiple copies of data to support different processing engines unnecessarily. >> And so you've recently made some enhancements to the data fabric capability that takes it, you know, ostensibly to the next level. Is that the smart piece, is that machine intelligence? Can you describe what's in there? >> Well, you know, ultimately the business benefit is be able to have a single source of the truth for a company. And so what we're doing is combining multiple technologies in a single set of software that makes that software agile and supportable and not fragile for deployment of applications. At its core, what we're saying is, we want to be able to consume any kind of data, and I think your data fabric architecture is predicated on the fact that you're going to have relational data you're going to have document data, you may have key value store data, you may have images, you may have other things, and you want to be able to not be limited by the kind of data that you want to process. And so that certainly is what we build into our product set. And then you want to be able to have any kind of algorithm where appropriate run against that data without having to do a bunch of massive ETL processes or make another copy of the data and move it somewhere else. And so to that end, we have taken our award-winning engine, which, you know, provides traditional analytic capabilities and relational capabilities. We've now integrated machine learning. So you basically can bring machine learning algorithms to the data without having to move data to the machine learning algorithm. What does that mean? Well, number one, your application developer doesn't have to think differently to take advantage of the new algorithms. So that's a really good thing. The other thing that happens is if you're playing that algorithm where the data actually exists from your operational system, that means the roundtrip from running the model to inferring some decision you want to make to actually implementing that decision can happen instantaneously. As opposed to, you know, other kinds of architectures where you may want to make a copy of the data and move it somewhere else. That takes time, latency. Now the data gets stale. Your model may not be as efficient because you're running against stale data. We've now taken all of that off the table by being able to pull that processing inside the data fabric, inside of the single source of truth. >> And you got to manage all that complexity. So you got one system, so that makes it cost effective, and you're bringing modern tooling to the platform. Is that right? >> That's correct. How can people learn more and maybe continue the conversation with you if they have other questions? >> (Scott laughs) Call or write. Yeah, I mean, certainly check out our website. We've got a lot of information about the different kinds of solutions, the different industries, the different technologies. Reach out at scottg@intersystems.com. >> Excellent, thank you, Scott. Really appreciate it. And great to see you again. >> Good to see you. All right, keep it right there. We have a demo coming up next. If you want to see smart data fabrics in action, stay tuned. (upbeat music)

Published Date : Feb 15 2023

SUMMARY :

and innumerable sources around the globe. It's good to be here. that you see today when At the same time, you talked about silos, and this is not to the exclusion And to your point, data the best of the two worlds. and the architecture that you define and that you want to fit into. and it's not in a bunch of silos. But you certainly don't want to be forced Is that the smart piece, is and you want to be able to not be limited And you got to manage the conversation with you if about the different kinds of solutions, And great to see you again. If you want to see smart

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ScottPERSON

0.99+

DavePERSON

0.99+

InterSystemsORGANIZATION

0.99+

scottg@intersystems.comOTHER

0.99+

bothQUANTITY

0.99+

one systemQUANTITY

0.99+

one copyQUANTITY

0.99+

todayDATE

0.98+

first segmentQUANTITY

0.98+

each stepQUANTITY

0.97+

single sourceQUANTITY

0.93+

two worldsQUANTITY

0.92+

many years agoDATE

0.87+

zillion copiesQUANTITY

0.86+

single setQUANTITY

0.84+

one typeQUANTITY

0.83+

TodayDATE

0.67+

oneQUANTITY

0.33+

How to Make a Data Fabric "Smart": A Technical Demo With Jess Jowdy


 

>> Okay, so now that we've heard Scott talk about smart data fabrics, it's time to see this in action. Right now we're joined by Jess Jowdy, who's the manager of Healthcare Field Engineering at InterSystems. She's going to give a demo of how smart data fabrics actually work, and she's going to show how embedding a wide range of analytics capabilities including data exploration, business intelligence natural language processing, and machine learning directly within the fabric, makes it faster and easier for organizations to gain new insights and power intelligence, predictive and prescriptive services and applications. Now, according to InterSystems, smart data fabrics are applicable across many industries from financial services to supply chain to healthcare and more. Jess today is going to be speaking through the lens of a healthcare focused demo. Don't worry, Joe Lichtenberg will get into some of the other use cases that you're probably interested in hearing about. That will be in our third segment, but for now let's turn it over to Jess. Jess, good to see you. >> Hi. Yeah, thank you so much for having me. And so for this demo we're really going to be bucketing these features of a smart data fabric into four different segments. We're going to be dealing with connections, collections, refinements and analysis. And so we'll see that throughout the demo as we go. So without further ado, let's just go ahead and jump into this demo and you'll see my screen pop up here. I actually like to start at the end of the demo. So I like to begin by illustrating what an end user's going to see and don't mind the screen 'cause I gave you a little sneak peek of what's about to happen. But essentially what I'm going to be doing is using Postman to simulate a call from an external application. So we talked about being in the healthcare industry. This could be for instance, a mobile application that a patient is using to view an aggregated summary of information across that patient's continuity of care or some other kind of application. So we might be pulling information in this case from an electronic medical record. We might be grabbing clinical history from that. We might be grabbing clinical notes from a medical transcription software or adverse reaction warnings from a clinical risk grouping application and so much more. So I'm really going to be assimilating a patient logging on in on their phone and retrieving this information through this Postman call. So what I'm going to do is I'm just going to hit send, I've already preloaded everything here and I'm going to be looking for information where the last name of this patient is Simmons and their medical record number their patient identifier in the system is 32345. And so as you can see I have this single JSON payload that showed up here of just relevant clinical information for my patient whose last name is Simmons all within a single response. So fantastic, right? Typically though when we see responses that look like this there is an assumption that this service is interacting with a single backend system and that single backend system is in charge of packaging that information up and returning it back to this caller. But in a smart data fabric architecture we're able to expand the scope to handle information across different, in this case, clinical applications. So how did this actually happen? Let's peel back another layer and really take a look at what happened in the background. What you're looking at here is our mission control center for our smart data fabric. On the left we have our APIs that allow users to interact with particular services. On the right we have our connections to our different data silos. And in the middle here we have our data fabric coordinator which is going to be in charge of this refinement and analysis those key pieces of our smart data fabric. So let's look back and think about the example we just showed. I received an inbound request for information for a patient whose last name is Simmons. My end user is requesting to connect to that service and that's happening here at my patient data retrieval API location. Users can define any number of different services and APIs depending on their use cases. And to that end we do also support full lifecycle API management within this platform. When you're dealing with APIs I always like to make a little shout out on this that you really want to make sure you have enough like a granular enough security model to handle and limit which APIs and which services a consumer can interact with. In this IRIS platform, which we're talking about today we have a very granular role-based security model that allows you to handle that, but it's really important in a smart data fabric to consider who's accessing your data and in what contact. >> Can I just interrupt you for a second? >> Yeah, please. >> So you were showing on the left hand side of the demo a couple of APIs. I presume that can be a very long list. I mean, what do you see as typical? >> I mean you can have hundreds of these APIs depending on what services an organization is serving up for their consumers. So yeah, we've seen hundreds of these services listed here. >> So my question is, obviously security is critical in the healthcare industry and API securities are really hot topic these days. How do you deal with that? >> Yeah, and I think API security is interesting 'cause it can happen at so many layers. So there's interactions with the API itself. So can I even see this API and leverage it? And then within an API call, you then have to deal with all right, which end points or what kind of interactions within that API am I allowed to do? What data am I getting back? And with healthcare data, the whole idea of consent to see certain pieces of data is critical. So the way that we handle that is, like I said, same thing at different layers. There is access to a particular API, which can happen within the IRIS product and also we see it happening with an API management layer, which has become a really hot topic with a lot of organizations. And then when it comes to data security, that really happens under the hood within your smart data fabric. So that role-based access control becomes very important in assigning, you know, roles and permissions to certain pieces of information. Getting that granular becomes the cornerstone of security. >> And that's been designed in, >> Absolutely, yes. it's not a bolt-on as they like to say. Okay, can we get into collect now? >> Of course, we're going to move on to the collection piece at this point in time, which involves pulling information from each of my different data silos to create an overall aggregated record. So commonly each data source requires a different method for establishing connections and collecting this information. So for instance, interactions with an EMR may require leveraging a standard healthcare messaging format like FIRE, interactions with a homegrown enterprise data warehouse for instance may use SQL for a cloud-based solutions managed by a vendor. They may only allow you to use web service calls to pull data. So it's really important that your data fabric platform that you're using has the flexibility to connect to all of these different systems and and applications. And I'm about to log out so I'm going to keep my session going here. So therefore it's incredibly important that your data fabric has the flexibility to connect to all these different kinds of applications and data sources and all these different kinds of formats and over all of these different kinds of protocols. So let's think back on our example here. I had four different applications that I was requesting information for to create that payload that we saw initially. Those are listed here under this operations section. So these are going out and connecting to downstream systems to pull information into my smart data fabric. What's great about the IRIS platform is it has an embedded interoperability platform. So there's all of these native adapters that can support these common connections that we see for different kinds of applications. So using REST or SOAP or SQL or FTP regardless of that protocol there's an adapter to help you work with that. And we also think of the types of formats that we typically see data coming in as, in healthcare we have H7, we have FIRE we have CCDs across the industry. JSON is, you know, really hitting a market strong now and XML, payloads, flat files. We need to be able to handle all of these different kinds of formats over these different kinds of protocols. So to illustrate that, if I click through these when I select a particular connection on the right side panel I'm going to see the different settings that are associated with that particular connection that allows me to collect information back into my smart data fabric. In this scenario, my connection to my chart script application in this example communicates over a SOAP connection. When I'm grabbing information from my clinical risk grouping application I'm using a SQL based connection. When I'm connecting to my EMR I'm leveraging a standard healthcare messaging format known as FIRE, which is a rest based protocol. And then when I'm working with my health record management system I'm leveraging a standard HTTP adapter. So you can see how we can be flexible when dealing with these different kinds of applications and systems. And then it becomes important to be able to validate that you've established those connections correctly and be able to do it in a reliable and quick way. Because if you think about it, you could have hundreds of these different kinds of applications built out and you want to make sure that you're maintaining and understanding those connections. So I can actually go ahead and test one of these applications and put in, for instance my patient's last name and their MRN and make sure that I'm actually getting data back from that system. So it's a nice little sanity check as we're building out that data fabric to ensure that we're able to establish these connections appropriately. So turnkey adapters are fantastic, as you can see we're leveraging them all here, but sometimes these connections are going to require going one step further and building something really specific for an application. So let's, why don't we go one step further here and talk about doing something custom or doing something innovative. And so it's important for users to have the ability to develop and go beyond what's an out of the box or black box approach to be able to develop things that are specific to their data fabric or specific to their particular connection. In this scenario, the IRIS data platform gives users access to the entire underlying code base. So you cannot, you not only get an opportunity to view how we're establishing these connections or how we're building out these processes but you have the opportunity to inject your own kind of processing your own kinds of pipelines into this. So as an example, you can leverage any number of different programming languages right within this pipeline. And so I went ahead and I injected Python. So Python is a very up and coming language, right? We see more and more developers turning towards Python to do their development. So it's important that your data fabric supports those kinds of developers and users that have standardized on these kinds of programming languages. This particular script here, as you can see actually calls out to our turnkey adapters. So we see a combination of out of the box code that is provided in this data fabric platform from IRIS combined with organization specific or user specific customizations that are included in this Python method. So it's a nice little combination of how do we bring the developer experience in and mix it with out of the box capabilities that we can provide in a smart data fabric. >> Wow. >> Yeah, I'll pause. >> It's a lot here. You know, actually, if I could >> I can pause. >> If I just want to sort of play that back. So we went through the connect and the collect phase. >> And the collect, yes, we're going into refine. So it's a good place to stop. >> Yeah, so before we get there, so we heard a lot about fine grain security, which is crucial. We heard a lot about different data types, multiple formats. You've got, you know the ability to bring in different dev tools. We heard about FIRE, which of course big in healthcare. >> Absolutely. >> And that's the standard and then SQL for traditional kind of structured data and then web services like HTTP you mentioned. And so you have a rich collection of capabilities within this single platform. >> Absolutely, and I think that's really important when you're dealing with a smart data fabric because what you're effectively doing is you're consolidating all of your processing, all of your collection into a single platform. So that platform needs to be able to handle any number of different kinds of scenarios and technical challenges. So you've got to pack that platform with as many of these features as you can to consolidate that processing. >> All right, so now we're going into refine. >> We're going into refinement, exciting. So how do we actually do refinement? Where does refinement happen and how does this whole thing end up being performant? Well the key to all of that is this SDF coordinator or stands for smart data fabric coordinator. And what this particular process is doing is essentially orchestrating all of these calls to all of these different downstream systems. It's aggregating, it's collecting that information it's aggregating it and it's refining it into that single payload that we saw get returned to the user. So really this coordinator is the main event when it comes to our data fabric. And in the IRIS platform we actually allow users to build these coordinators using web-based tool sets to make it intuitive. So we can take a sneak peek at what that looks like and as you can see it follows a flow chart like structure. So there's a start, there is an end and then there are these different arrows that point to different activities throughout the business process. And so there's all these different actions that are being taken within our coordinator. You can see an action for each of the calls to each of our different data sources to go retrieve information. And then we also have the sync call at the end that is in charge of essentially making sure that all of those responses come back before we package them together and send them out. So this becomes really crucial when we're creating that data fabric. And you know, this is a very simple data fabric example where we're just grabbing data and we're consolidating it together. But you can have really complex orchestrators and coordinators that do any number of different things. So for instance, I could inject SQL Logic into this or SQL code, I can have conditional logic, I can do looping, I can do error trapping and handling. So we're talking about a whole number of different features that can be included in this coordinator. So like I said, we have a really very simple process here that's just calling out, grabbing all those different data elements from all those different data sources and consolidating it. We'll look back at this coordinator in a second when we introduce or we make this data fabric a bit smarter and we start introducing that analytics piece to it. So this is in charge of the refinement. And so at this point in time we've looked at connections, collections, and refinements. And just to summarize what we've seen 'cause I always like to go back and take a look at everything that we've seen. We have our initial API connection we have our connections to our individual data sources and we have our coordinators there in the middle that are in charge of collecting the data and refining it into a single payload. As you can imagine, there's a lot going on behind the scenes of a smart data fabric, right? There's all these different processes that are interacting. So it's really important that your smart data fabric platform has really good traceability, really good logging 'cause you need to be able to know, you know, if there was an issue, where did that issue happen, in which connected process and how did it affect the other processes that are related to it. In IRIS, we have this concept called a visual trace. And what our clients use this for is basically to be able to step through the entire history of a request from when it initially came into the smart data fabric to when data was sent back out from that smart data fabric. So I didn't record the time but I bet if you recorded the time it was this time that we sent that request in. And you can see my patient's name and their medical record number here and you can see that that instigated four different calls to four different systems and they're represented by these arrows going out. So we sent something to chart script to our health record management system, to our clinical risk grouping application into my EMR through their FIRE server. So every request, every outbound application gets a request and we pull back all of those individual pieces of information from all of those different systems and we bundle them together. And for my FIRE lovers, here's our FIRE bundle that we got back from our FIRE server. So this is a really good way of being able to validate that I am appropriately grabbing the data from all these different applications and then ultimately consolidating it into one payload. Now we change this into a JSON format before we deliver it, but this is those data elements brought together. And this screen would also be used for being able to see things like error trapping or errors that were thrown alerts, warnings, developers might put log statements in just to validate that certain pieces of code are executing. So this really becomes the one stop shop for understanding what's happening behind the scenes with your data fabric. >> Etcher, who did what, when, where what did the machine do? What went wrong and where did that go wrong? >> Exactly. >> Right in your fingertips. >> Right, and I'm a visual person so a bunch of log files to me is not the most helpful. Well, being able to see this happened at this time in this location gives me that understanding I need to actually troubleshoot a problem. >> This business orchestration piece, can you say a little bit more about that? How people are using it? What's the business impact of the business orchestration? >> The business orchestration, especially in the smart data fabric is really that crucial part of being able to create a smart data fabric. So think of your business orchestrator as doing the heavy lifting of any kind of processing that involves data, right? It's bringing data in, it's analyzing that information, it's transforming that data, in a format that your consumer's not going to understand it's doing any additional injection of custom logic. So really your coordinator or that orchestrator that sits in the middle is the brains behind your smart data fabric. >> And this is available today? This all works? >> It's all available today. Yeah, it all works. And we have a number of clients that are using this technology to support these kinds of use cases. >> Awesome demo. Anything else you want to show us? >> Well we can keep going. 'Cause right now, I mean we can, oh, we're at 18 minutes. God help us. You can cut some of this. (laughs) I have a lot to say, but really this is our data fabric. The core competency of IRIS is making it smart, right? So I won't spend too much time on this but essentially if we go back to our coordinator here we can see here's that original that pipeline that we saw where we're pulling data from all these different systems and we're collecting it and we're sending it out. But then we see two more at the end here which involves getting a readmission prediction and then returning a prediction. So we can not only deliver data back as part of a smart data fabric but we can also deliver insights back to users and consumers based on data that we've aggregated as part of a smart data fabric. So in this scenario, we're actually taking all that data that we just looked at and we're running it through a machine learning model that exists within the smart data fabric pipeline and producing a readmission score to determine if this particular patient is at risk for readmission within the next 30 days. Which is a typical problem that we see in the healthcare space. So what's really exciting about what we're doing in the IRIS world is we're bringing analytics close to the data with integrated ML. So in this scenario we're actually creating the model, training the model, and then executing the model directly within the IRIS platform. So there's no shuffling of data, there's no external connections to make this happen. And it doesn't really require having a PhD in data science to understand how to do that. It leverages all really basic SQL like syntax to be able to construct and execute these predictions. So it's going one step further than the traditional data fabric example to introduce this ability to define actionable insights to our users based on the data that we've brought together. >> Well that readmission probability is huge. >> Yes. >> Right, because it directly affects the cost of for the provider and the patient, you know. So if you can anticipate the probability of readmission and either do things at that moment or you know, as an outpatient perhaps to minimize the probability then that's huge. That drops right to the bottom line. >> Absolutely, absolutely. And that really brings us from that data fabric to that smart data fabric at the end of the day which is what makes this so exciting. >> Awesome demo. >> Thank you. >> Fantastic people, are you cool? If people want to get in touch with you? >> Oh yes, absolutely. So you can find me on LinkedIn, Jessica Jowdy and we'd love to hear from you. I always love talking about this topic, so would be happy to engage on that. >> Great stuff, thank you Jess, appreciate it. >> Thank you so much. >> Okay, don't go away because in the next segment we're going to dig into the use cases where data fabric is driving business value. Stay right there.

Published Date : Feb 15 2023

SUMMARY :

for organizations to gain new insights And to that end we do also So you were showing hundreds of these APIs in the healthcare industry So the way that we handle that it's not a bolt-on as they like to say. that data fabric to ensure that we're able It's a lot here. So we went through the So it's a good place to stop. the ability to bring And so you have a rich collection So that platform needs to we're going into refine. that are related to it. so a bunch of log files to of being able to create this technology to support Anything else you want to show us? So in this scenario, we're Well that readmission and the patient, you know. to that smart data fabric So you can find me on you Jess, appreciate it. because in the next segment

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jessica JowdyPERSON

0.99+

Joe LichtenbergPERSON

0.99+

InterSystemsORGANIZATION

0.99+

Jess JowdyPERSON

0.99+

ScottPERSON

0.99+

JessPERSON

0.99+

18 minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

32345OTHER

0.99+

PythonTITLE

0.99+

SimmonsPERSON

0.99+

eachQUANTITY

0.99+

IRISORGANIZATION

0.99+

third segmentQUANTITY

0.99+

EtcherORGANIZATION

0.99+

todayDATE

0.99+

LinkedInORGANIZATION

0.98+

SQLTITLE

0.98+

single platformQUANTITY

0.98+

oneQUANTITY

0.98+

JSONTITLE

0.96+

each data sourceQUANTITY

0.96+

singleQUANTITY

0.95+

one stepQUANTITY

0.94+

one stepQUANTITY

0.94+

single backendQUANTITY

0.92+

single responseQUANTITY

0.9+

two moreQUANTITY

0.85+

single payloadQUANTITY

0.84+

SQL LogicTITLE

0.84+

a secondQUANTITY

0.83+

IRISTITLE

0.83+

four different segmentsQUANTITY

0.82+

PostmanPERSON

0.78+

FIRETITLE

0.77+

SOAPTITLE

0.76+

four different applicationsQUANTITY

0.74+

one stopQUANTITY

0.74+

PostmanTITLE

0.73+

one payloadQUANTITY

0.72+

each ofQUANTITY

0.71+

RESTTITLE

0.7+

Healthcare Field EngineeringORGANIZATION

0.67+

next 30 daysDATE

0.65+

fourQUANTITY

0.63+

these APIsQUANTITY

0.62+

secondQUANTITY

0.54+

GodPERSON

0.53+

everyQUANTITY

0.53+

servicesQUANTITY

0.51+

H7COMMERCIAL_ITEM

0.5+

applicationQUANTITY

0.48+

FIREORGANIZATION

0.38+

XMLTITLE

0.38+

Ignite22 Analysis | Palo Alto Networks Ignite22


 

>>The Cube presents Ignite 22, brought to you by Palo Alto Networks. >>Welcome back everyone. We're so glad that you're still with us. It's the Cube Live at the MGM Grand. This is our second day of coverage of Palo Alto Networks Ignite. This is takeaways from Ignite 22. Lisa Martin here with two really smart guys, Dave Valante. Dave, we're joined by one of our cube alumni, a friend, a friend of the, we say friend of the Cube. >>Yeah, otc. A friend of the Cube >>Karala joined us. Guys, it's great to have you here. It's been an exciting show. A lot of cybersecurity is one of my favorite topics to talk about. But I'd love to get some of the big takeaways from both of you. Dave, we'll start with you. >>A breathing room from two weeks ago. Yeah, that was, that was really pleasant. You know, I mean, I know was, yes, you sat in the analyst program, interested in what your takeaways were from there. But, you know, coming into this, we wrote a piece, Palo Alto's Gold Standard, what they need to do to, to keep that, that status. And we hear it a lot about consolidation. That's their big theme now, which is timely, right? Cause people wanna save money, they wanna do more with less. But I'm really interested in hearing zeus's thoughts on how that's playing in the market. How customers, how easy is it to just say, oh, hey, I'm gonna consolidate. I wanna get into that a little bit with you, how well the strategy's working. We're gonna get into some of the m and a activity and really bring your perspectives to the table. Well, >>It's, it's not easy. I mean, people have been calling for the consolidation of security for decades, and it's, it's, they're the first company that's actually made it happen. Right? And, and I think this is what we're seeing here is the culmination of this long term strategy, this company trying to build more of a platform. And they, you know, they, they came out as a firewall vendor. And I think it's safe to say they're more than firewall today. That's only about two thirds of their revenue now. So down from 80% a few years ago. And when I think of what Palo Alto has become, they're really a data company. Now, if you look at, you know, unit 42 in Cortex, the, the, the Cortex Data Lake, they've done an excellent job of taking telemetry from their products and from the acquisitions they have, right? And bringing that together into one big data lake. >>And then they're able to use that to, to do faster threat notification, forensics, things like that. And so I think the old model of security of create signatures for known threats, it's safe to say it never really worked and it wasn't ever gonna work. You had too many day zero exploits and things. The only way to fight security today is with a AI and ML based analytics. And they have, they're the gold standard. I think the one thing about your post that I would add the gold standard from a data standpoint, and that's given them this competitive advantage to go out and become a platform for a security. Which, like I said, the people have tried to do that for years. And the first one that's actually done it, well, >>We've heard this from some of the startups, like Lacework will say, oh, we treat security as a data problem. Of course there's a startup, Palo Alto's got, you know, whatever, 10, 15 years of, of, of history. But one of the things I wanted to explore with you coming into this was the notion of can you be best of breed and develop a suite? And we, we've been hearing a consistent answer to that question, which is, and, and do you need to, and the answer is, well, best of breed in security requires that full spectrum, that full view. So here's my question to you. So, okay, let's take Esty win relatively new for these guys, right? Yeah. Okay. And >>And one of the few products are not top two, top three in, right? Exactly. >>Yeah. So that's why I want to take that. Yeah. Because in bakeoffs, they're gonna lose on a head-to-head best of breed. And so the customer's gonna say, Hey, you know, I love your, your consolidation play, your esty win's. Just, okay, how about a little discount on that? And you know, these guys are premium priced. Yes. So, you know, are they in essentially through their pricing strategies, sort of creating that stuff, fighting that, is that friction for them where they've got, you know, the customer says, all right, well forget it, we're gonna go stove pipe with the SD WAN will consolidate some of the stuff. Are you seeing that? >>Yeah, I, I, I still think the sales model is that way. And I think that's something they need to work on changing. If they get into a situation where they have to get down into a feature battle of my SD WAN versus your SD wan, my firewall versus your firewall, frankly they've already lost, you know, because their value prop is the suite and, and is the platform. And I was talking to the CISO here that told me, he realizes now that you don't need best of breed everywhere to have best in class threat protection. In fact, best of breed everywhere leads to suboptimal threat protection. Cuz you have all these data data sets that are in silos, right? And so from a data scientist standpoint, right, there's the good data leads to good insights. Well, partial data leads to fragmented insights and that's, that's what the best, best of breed approach gives you. And so I was talking with Palo about this, can they have this vision of being best of breed and platform? I don't really think you can maintain best of breed everywhere across this portfolio this big, but you don't need to. >>That was my second point of my >>Question. That's the point. >>Yeah. And so, cuz cuz because you know, we've talked about this, that that sweets always win in the long run, >>Sweets >>Win. Yeah. But here's the thing, I, I wonder to your your point about, you know, the customer, you know, understanding that that that, that this resonates with them. I, my guess is a lot of customers, you know, at that mid-level and the fat middle are like still sort of wed, you know, hugging that, that tool. So there's, there's work to be done here, but I think they, they, they got it right Because if they devolve, to your point, if they devolve down to that speeds and feeds, eh, what's the point of that? Where's their valuable? >>You do not wanna get into a knife fight. And I, and I, and I think for them the, a big challenge now is convincing customers that the suite, the suite approach does work. And they have to be able to do that in actual customer examples. And so, you know, I I interviewed a bunch of customers here and the ones that have bought into XDR and xor and even are looking at their sim have told me that the, the, so think of soc operations, the old way heavily manually oriented, right? You have multiple panes of glass and you know, and then you've got, so there's a lot of people work before you bring the tools in, right? If done correctly with AI and ml, the machines would do all the heavy lifting and then you'd bring people in at the end to clean up the little bits that were missed, right? >>And so you, you moved to, from something that was very people heavy to something that's machine heavy and machines can work a lot faster than people. And the, and so the ones that I've talked that have, that have done that have said, look, our engineers have moved on to a lot different things. They're doing penetration testing, they're, you know, helping us with, with strategy and they're not fighting that, that daily fight of looking through log files. And the only proof point you need, Dave, is look at every big breach that we've had over the last five years. There's some SIM vendor up there that says, we caught it. Yeah. >>Yeah. We we had the data. >>Yeah. But, but, but the security team missed it. Well they missed it because you're, nobody can look at that much data manually. And so the, I I think their approach of relying heavily on machines to fight the fight is actually the right way. >>Is that a differentiator for them versus, we were talking before we went live that you and I first hit our very first segment back in 2017 at Fort Net. Is that, where do the two stand in your >>Yeah, it's funny cuz if you talk to the two vendors, they don't really see each other in a lot of accounts because Fort Net's more small market mid-market. It's the same strategy to some degree where Fort Net relies heavily on in-house development and Palo Alto relies heavily on acquisition. Yeah. And so I think from a consistently feature set, you know, Fort Net has an advantage there because it, it's all run off their, their their silicon. Where, where Palo's able to innovate very quickly. The, it it requires a lot of work right? To, to bring the front end and back ends together. But they're serving different markets. So >>Do you see that as a differentiator? The integration strategy that Palo Alto has as a differentiator? We talk to so many companies who have an a strong m and a strategy and, and execution arm. But the challenge is always integrating the technology so that the customer to, you know, ultimately it's the customer. >>I actually think they're, they're underrated as a, an acquirer. In fact, Dave wrote a post to a prior on Silicon Angle prior to Accelerate and he, he on, you put it on Twitter and you asked people to rank 'em as an acquirer and they were in the middle of the pack, >>Right? It was, it was. So it was Oracle, VMware, emc, ibm, Cisco, ServiceNow, and Palo Alto. Yeah. Or Oracle got very high marks. It was like 8.5 out of, you know, 10. Yeah. VMware I think was 6.5. Nice. Era was high emc, big range. IBM five to seven. Cisco was three to eight. Yeah. Yeah, right. ServiceNow was a seven. And then, yeah, Palo Alto was like a five. And I, which I think it was unfair. >>Well, and I think it depends on how you look at it. And I, so I think a lot of the acquisitions Palo Altos made, they've done a good job of integrating their backend data and they've almost ignored the front end. And so when you buy some of the products, it's a little clunky today. You know, if you work with Prisma Cloud, it could be a little bit cleaner. And even with, you know, the SD wan that took 'em a long time to bring CloudGenix in and stuff. But I think the approach is right. I don't, I don't necessarily believe you should integrate the front end until you've integrated the back end. >>That's >>The hard part, right? Because UL ultimately what you're gonna get, you're gonna get two panes of glass and one pane of glass and it might look pretty all mush together, but ultimately you're not solving the bigger problem, right. Of, of being able to create that big data like the, the fight security. And so I think, you know, the approach they've taken is the right one. I think from a user standpoint, maybe it doesn't show up as neatly because you don't see the frontend integration, but the way they're doing it is the right way to do it. And I'm glad they're doing it that way versus caving to the pressures of what, you know, the industry might want >>Showed up in the performance of the company. I mean, this company was basically gonna double revenues to 7 billion from 2020 to >>2023. Three. Think about that at that, that >>Make a, that's unbelievable, right? I mean, and then and they wanna double again. Yeah. You know, so, well >>What did, what did Nikesh was quoted as saying they wanna be the first cyber company that's a hundred billion dollars. He didn't give a timeline market cap. >>Right. >>Market cap, right. Do what I wanna get both of your opinions on what you saw and heard and felt this week. What do you think the likelihood is? And and do you have any projections on how, you know, how many years it's gonna take for them to get there? >>Well, >>Well I think so if they're gonna get that big, right? And, and we were talking about this pre-show, any company that's becoming a big company does it through ecosystem >>Bingo. >>Right? And that when you look around the show floor, it's not that impressive. And if that, if there's an area they need to focus on, it's building that ecosystem. And it's not with other security vendors, it's with application vendors and it's with the cloud companies and stuff. And they've got some relationships there, but they need to do more. I actually challenge 'em on that. One of the analyst sessions. They said, look, we've got 800 cortex partners. Well where are they? Right? Why isn't there a cortex stand here with a bunch of the small companies here? So I do think that that is an area they need to focus on. If they are gonna get to that, that market caps number, they will do so do so through ecosystem. Because every company that's achieved that has done it through ecosystem. >>A hundred percent agree. And you know, if you look at CrowdStrike's ecosystem, it's pretty similar. Yeah. You know, it doesn't really, you know, make much, much, not much different from this, but I went back and just looked at some, you know, peak valuations during the pandemic and shortly thereafter CrowdStrike was 70 billion. You know, that's what their roughly their peak Palo Alto was 56, fortune was 59 for the actually diverged. Right. And now Palo Alto has taken the, the top mantle, you know, today it's market cap's 52. So it's held 93% of its peak value. Everybody else is tanking. Even Okta was 45 billion. It's been crushed as you well know. But, so Palo Alto wasn't always, you know, the number one in terms of market cap. But I guess my point is, look, if CrowdStrike could got to 70 billion during Yeah. During the frenzy, I think it's gonna take, to answer your question, I think it's gonna be five years. Okay. Before they get back there. I think this market's gonna be tough for a while from a valuation standpoint. I think generally tech is gonna kind of go up and down and sideways for a good year and a half, maybe even two years could be even longer. And then I think there's gonna be some next wave of productivity innovation that that hits. And then you're gonna, you're almost always gonna exceed the previous highs. It's gonna take a while. Yeah, >>Yeah, yeah. But I think their ability to disrupt the SIM market actually is something I, I believe they're gonna do. I've been calling for the death of the sim for a long time and I know some people at Palo Alto are very cautious about saying that cuz the Splunks and the, you know, they're, they're their partners. But I, I think the, you know, it's what I said before, the, the tools are catching them, but they're, it's not in a way that's useful for the IT pro and, but I, I don't think the SIM vendors have that ecosystem of insight across network cloud endpoint. Right. Which is what you need in order to make a sim useful. >>CISO at an ETR roundtable said, if, if it weren't for my regulators, I would chuck my sim. >>Yes. >>But that's the only reason that, that this person was keeping it. So, >>Yeah. And I think the, the fact that most of those companies have moved to a perpetual MO or a a recurring revenue model actually helps unseat them. Typically when you pour a bunch of money into something, you remember the old computer associate days, nobody ever took it out cuz the sunk dollars you spent to do it. But now that you're paying an annual recurring fee, it's actually makes it easier to take out. So >>Yeah, it's it's an ebb and flow, right? Yeah. Because the maintenance costs were, you know, relatively low. Maybe it was 20% of the total. And then, you know, once every five years you had to do a refresh and you were still locked into the sort of maintenance and, and so yeah, I think you're right. The switching costs with sas, you know, in theory anyway, should be less >>Yeah. As long as you can migrate the data over. And I think they've got a pretty good handle on that. So, >>Yeah. So guys, I wanna get your perspective as a whole bunch of announcements here. We've only been here for a couple days, not a big conference as, as you can see from behind us. What Zs in your opinion was Palo Alto's main message and and what do you think about it main message at this event? And then same question for you. >>Yeah, I, I think their message largely wrapped around disruption, right? And, and they, in The's keynote already talked about that, right? And where they disrupted the firewall market by creating a NextGen firewall. In fact, if you look at all the new services they added to their firewall, you, you could almost say it's a NextGen NextGen firewall. But, but I do think the, the work they've done in the area of cloud and cortex actually I think is, is pretty impressive. And I think that's the, the SOC is ripe for disruption because it's for, for the most part, most socks still, you know, run off legacy playbooks. They run off legacy, you know, forensic models and things and they don't work. It's why we have so many breaches today. The, the dirty little secret that nobody ever wants to talk about is the bad guys are using machine learning, right? And so if you're using a signature based model, all they're do is tweak their model a little bit and it becomes, it bypasses them. So I, I think the only way to fight the the bad guys today is with you gotta fight fire with fire. And I think that's, that's the path they've, they've headed >>Down and the bad guys are hiding in plain sight, you know? >>Yeah, yeah. Well it's, it's not hard to do now with a lot of those legacy tools. So >>I think, I think for me, you know, the stat that we threw out earlier, I think yesterday at our keynote analysis was, you know, the ETR data shows that are, that are that last survey around 35% of the respondents said we are actively consolidating, sorry, 44%, sorry, 35 says we're actively consolidating vendors, redundant vendors today. That number's up to 44%. Yeah. It's by far the number one cost optimization technique. That's what these guys are pitching. And I think it's gonna resonate with people and, and I think to your point, they're integrating at the backend, their beeps are technical, right? I mean, they can deal with that complexity. Yeah. And so they don't need eye candy. Eventually they, they, they want to have that cuz it'll allow 'em to have deeper market penetration and make people more productive. But you know, that consolidation message came through loud and clear. >>Yeah. The big change in this industry too is all the new startups are all cloud native, right? They're all built on Amazon or Google or whatever. Yeah. And when your cloud native and you buy a cloud native integration is fast. It's not like having to integrate this big monolithic software stack anymore. Right. So I I think their pace of integration will only accelerate from here because everything's now cloud native. >>If a customer comes to you or when a customer comes to you and says, Zs help us with this cyber transformation we have, our board isn't necessarily with our executives in terms of execution of a security strategy. How do you advise them where Palo Alto is concerned? >>Yeah. You know, a lot, a lot of this is just fighting legacy mindset. And I've, I was talking with some CISOs here from state and local governments and things and they're, you know, they can't get more budget. They're fighting the tide. But what they did find is through the use of automation technology, they're able to bring their people costs way down. Right. And then be able to use that budget to invest in a lot of new projects. And so with that, you, you have to start with your biggest pain points, apply automation where you can, and then be able to use that budget to reinvest back in your security strategy. And it's good for the IT pros too, the security pros, my advice to, to it pros is if you're doing things today that aren't resume building, stop doing them. Right? Find a way to automate the money your job. And so if you're patching systems and you're looking through log files, there's no reason machines can't do that. And you go do something a lot more interesting. >>So true. It's like storage guys 10 years ago, provisioning loans. Yes. It's like, stop doing that. Yeah. You're gonna be outta a job. And so who, last question I have is, is who do you see as the big competitors, the horses on the track question, right? So obviously Cisco kind of service has led for a while and you know, big portfolio company, CrowdStrike coming at it from end point. You know who, who, who do you see as the real players going for that? You know, right now the market's three to 4%. The leader has three, three 4% of the market. You know who they're all going for? 10, 15, maybe 20% of the market. Who, who are the likely candidates? Yeah, >>I don't know if CrowdStrike really has the breadth of portfolio to compete long term though. I I think they've had a nice run, but I, we might start to see the follow 'em. I think Microsoft is gonna be for middle. They've laid down the gauntlet, right? They are a security vendor, right? We, we were at Reinvent and a AWS is the platform for security vendors. Yes. Middle, somewhere in the middle. But Microsoft make no mistake, they're in security. They've got some good products. I think a lot of 'em are kind of good enough and they, they tie it to the licensing and I'm not sure that works in security, but they've certainly got the ear of a lot of it pros. >>It might work in smb. >>Yeah. Yeah. It, it might. And, and I do like Zscaler. I, I know these guys poo poo the proxy model, but they've, they've done about as much with proxies as you can. And I, I think it's, it's a battle of, I love the, the, the near, you know, proxies are dead and Jay's model, you know, Jay over at c skater throw 'em back at 'em. So I, it's good to see that kind of fight going on between the two. >>Oh, it's great. Well, and, and again, ZScaler's coming at it from their cloud security angle. CrowdStrike's coming at it from endpoint. I, I do think CrowdStrike has an opportunity to build out the portfolio through m and a and maybe ecosystem. And then obviously, you know, Palo Alto's getting it done. How about Cisco? >>Yeah. Cisco's interesting. And I, I think if Cisco can make the network matter in security and it should, right? We're talking about how a lot of you need a lot of forensics to fight security today. Well, they're gonna see things long before anybody else because they have all that network data. If they can tie network security, I, I mean they could really have that business take off. But we've been saying that about Cisco for 20 years. >>But big install based though. Yeah. It's hard for a company, any company to just say, okay, hey Cisco customer sweep the floor and come with us. That's, that's >>A tough thing. They have a lot of good peace parts, right? And like duo's a good product and umbrella's a good product. They've, they've not done a good job. >>They're the opposite of these guys. >>They've not done a good job of the backend integration that, that's where Cisco needs to, to focus. And I do think g G two Patel there fixed the WebEx group and I think he's now, in fact when you talk to him, he's doing very little on WebEx that that group's running itself and he's more focused in security. So I, I think we could see a resurgence there. But you know, they have a, from a revenue perspective, it's a little misleading cuz they have this big legacy base that's in decline while they're moving to cloud and stuff. So, but they, but they, there's a lot of work there're trying to, to tie to network. >>Right. Lots of fuel for conversation. We're gonna have to carry this on, on Silicon angle.com guys. Yes. And Wikibon, lets do see us. Thank you so much for joining Dave and me giving us your insights as to this event. Where are you gonna be next? Are you gonna be on vacation? >>There's nothing more fun than mean on the cube, so, right. What's outside of that though? Yeah, you know, Christmas coming up, I gotta go see family and do the obligatory, although for me that's a lot of travel, so I guess >>More planes. Yeah. >>Hopefully not in Vegas. >>Not in Vegas. >>Awesome. Nothing against Vegas. Yeah, no, >>We love it. We >>Love it. Although I will say my year started off with ces. Yeah. And it's finishing up with Palo Alto here. The bookends. Yeah, exactly. In Vegas bookends. >>Well thanks so much for joining us. Thank you Dave. Always a pleasure to host a show with you and hear your insights. Reading your breaking analysis always kicks off my prep for show and it's always great to see, but predictions come true. So thank you for being my co-host bet. All right. For Dave Valante Enz as Carla, I'm Lisa Martin. You've been watching The Cube, the leader in live, emerging and enterprise tech coverage. Thanks for watching.

Published Date : Dec 15 2022

SUMMARY :

It's the Cube Live at A friend of the Cube Guys, it's great to have you here. You know, I mean, I know was, yes, you sat in the analyst program, interested in what your takeaways were And they, you know, they, they came out as a firewall vendor. And so I think the old model of security of create Palo Alto's got, you know, whatever, 10, 15 years of, of, of history. And one of the few products are not top two, top three in, right? And so the customer's gonna say, Hey, you know, I love your, your consolidation play, And I think that's something they need to work on changing. That's the point. win in the long run, my guess is a lot of customers, you know, at that mid-level and the fat middle are like still sort And so, you know, I I interviewed a bunch of customers here and the ones that have bought into XDR And the only proof point you need, Dave, is look at every big breach that we've had over the last And so the, I I think their approach of relying heavily on Is that a differentiator for them versus, we were talking before we went live that you and I first hit our very first segment back And so I think from a consistently you know, ultimately it's the customer. Silicon Angle prior to Accelerate and he, he on, you put it on Twitter and you asked people to you know, 10. And even with, you know, the SD wan that took 'em a long time to bring you know, the approach they've taken is the right one. I mean, this company was basically gonna double revenues to 7 billion Think about that at that, that I mean, and then and they wanna double again. What did, what did Nikesh was quoted as saying they wanna be the first cyber company that's a hundred billion dollars. And and do you have any projections on how, you know, how many years it's gonna take for them to get And that when you look around the show floor, it's not that impressive. And you know, if you look at CrowdStrike's ecosystem, it's pretty similar. But I, I think the, you know, it's what I said before, the, the tools are catching I would chuck my sim. But that's the only reason that, that this person was keeping it. you remember the old computer associate days, nobody ever took it out cuz the sunk dollars you spent to do it. And then, you know, once every five years you had to do a refresh and you were still And I think they've got a pretty good handle on that. Palo Alto's main message and and what do you think about it main message at this event? So I, I think the only way to fight the the bad guys today is with you gotta fight Well it's, it's not hard to do now with a lot of those legacy tools. I think, I think for me, you know, the stat that we threw out earlier, I think yesterday at our keynote analysis was, And when your cloud native and you buy a cloud native If a customer comes to you or when a customer comes to you and says, Zs help us with this cyber transformation And you go do something a lot more interesting. of service has led for a while and you know, big portfolio company, CrowdStrike coming at it from end point. I don't know if CrowdStrike really has the breadth of portfolio to compete long term though. I love the, the, the near, you know, proxies are dead and Jay's model, And then obviously, you know, Palo Alto's getting it done. And I, I think if Cisco can hey Cisco customer sweep the floor and come with us. And like duo's a good product and umbrella's a good product. And I do think g G two Patel there fixed the WebEx group and I think he's now, Thank you so much for joining Dave and me giving us your insights as to this event. you know, Christmas coming up, I gotta go see family and do the obligatory, although for me that's a lot of travel, Yeah. Yeah, no, We love it. And it's finishing up with Palo Alto here. Always a pleasure to host a show with you and hear your insights.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

DavePERSON

0.99+

CiscoORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Dave ValantePERSON

0.99+

MicrosoftORGANIZATION

0.99+

20%QUANTITY

0.99+

Fort NetORGANIZATION

0.99+

2017DATE

0.99+

93%QUANTITY

0.99+

PaloORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

CarlaPERSON

0.99+

AmazonORGANIZATION

0.99+

IBMORGANIZATION

0.99+

VegasLOCATION

0.99+

threeQUANTITY

0.99+

7 billionQUANTITY

0.99+

GoogleORGANIZATION

0.99+

70 billionQUANTITY

0.99+

2020DATE

0.99+

80%QUANTITY

0.99+

44%QUANTITY

0.99+

Palo Alto NetworksORGANIZATION

0.99+

45 billionQUANTITY

0.99+

52QUANTITY

0.99+

second pointQUANTITY

0.99+

10QUANTITY

0.99+

59QUANTITY

0.99+

yesterdayDATE

0.99+

VMwareORGANIZATION

0.99+

AWSORGANIZATION

0.99+

five yearsQUANTITY

0.99+

two vendorsQUANTITY

0.99+

Palo AltoORGANIZATION

0.99+

KaralaPERSON

0.99+

CrowdStrikeORGANIZATION

0.99+

ibmORGANIZATION

0.99+

15QUANTITY

0.99+

JayPERSON

0.99+

8.5QUANTITY

0.99+

Palo AltosORGANIZATION

0.99+

Dave Valante EnzPERSON

0.99+

two panesQUANTITY

0.99+

two yearsQUANTITY

0.99+

ThreeQUANTITY

0.99+

56QUANTITY

0.99+

bothQUANTITY

0.99+

ChristmasEVENT

0.99+

ServiceNowORGANIZATION

0.99+

second dayQUANTITY

0.99+

oneQUANTITY

0.99+

2023DATE

0.99+

35QUANTITY

0.99+

twoQUANTITY

0.99+

ReinventORGANIZATION

0.98+

The CubeTITLE

0.98+

OneQUANTITY

0.98+

firstQUANTITY

0.98+

WebExORGANIZATION

0.98+

first segmentQUANTITY

0.98+

Palo AltoLOCATION

0.98+

emcORGANIZATION

0.98+

two weeks agoDATE

0.98+

4%QUANTITY

0.98+

Takeaways from Ignite22 | Palo Alto Networks Ignite22


 

>>The Cube presents Ignite 22, brought to you by Palo Alto Networks. >>Welcome back everyone. We're so glad that you're still with us. It's the Cube Live at the MGM Grand. This is our second day of coverage of Palo Alto Networks Ignite. This is takeaways from Ignite 22. Lisa Martin here with two really smart guys, Dave Valante. Dave, we're joined by one of our cube alumni, a friend, a friend of the, we say friend of the Cube. >>Yeah, F otc. A friend of the Cube >>Karala joins us. Guys, it's great to have you here. It's been an exciting show. A lot of cybersecurity is one of my favorite topics to talk about. But I'd love to get some of the big takeaways from both of you. Dave, we'll start with >>You. A breathing room from two weeks ago. Yeah, that was, that was really pleasant. You know, I mean, I know was, yes, you sat in the analyst program, interested in what your takeaways were from there. But, you know, coming into this, we wrote a piece, Palo Alto's Gold Standard, what they need to do to, to keep that, that status. And we hear it a lot about consolidation. That's their big theme now, which is timely, right? Cause people wanna save money, they wanna do more with less. But I'm really interested in hearing zeus's thoughts on how that's playing in the market. How customers, how easy is it to just say, oh, hey, I'm gonna consolidate. I wanna get into that a little bit with you, how well the strategy's working. We're gonna get into some of the m and a activity and really bring your perspectives to the table. Well, >>It's, it's not easy. I mean, people have been calling for the consolidation of security for decades, and it's, it's, they're the first company that's actually made it happen. Right? And, and I think this is what we're seeing here is the culmination of this long-term strategy, this company trying to build more of a platform. And they, you know, they, they came out as a firewall vendor. And I think it's safe to say they're more than firewall today. That's only about two thirds of their revenue now. So down from 80% a few years ago. And when I think of what Palo Alto has become, they're really a data company. Now, if you look at, you know, unit 42 in Cortex, the, the, the Cortex Data Lake, they've done an excellent job of taking telemetry from their products and from the acquisitions they have, right? And bringing that together into one big data lake. >>And then they're able to use that to, to do faster threat notification, forensics, things like that. And so I think the old model of security of create signatures for known threats, it's safe to say it never really worked and it wasn't ever gonna work. You had too many days, zero exploits and things. The only way to fight security today is with a AI and ML based analytics. And they have, they're the gold standard. I think the one thing about your post that I would add, they're the gold standard from a data standpoint. And that's given them this competitive advantage to go out and become a platform for security. Which, like I said, the people have tried to do that for years. And the first one that's actually done it, well, >>We've heard this from some of the startups, like Lacework will say, oh, we treat security as a data problem. Of course there's a startup, Palo Alto's got, you know, whatever, 10, 15 years of, of, of history. But one of the things I wanted to explore with you coming into this was the notion of can you be best of breed and develop a suite? And we, we've been hearing a consistent answer to that question, which is, and, and do you need to, and the answer is, well, best of breed in security requires that full spectrum, that full view. So here's my question to you. So, okay, let's take Estee win relatively new for these guys, right? Yeah. Okay. And >>And one of the few products are not top two, top three in, right? >>Exactly. Yeah. So that's why I want to take that. Yeah. Because in bakeoffs, they're gonna lose on a head-to-head best of breed. And so the customer's gonna say, Hey, you know, I love your, your consolidation play, your esty win's. Just, okay, how about a little discount on that? And you know, these guys are premium priced. Yes. So, you know, are they in essentially through their pricing strategies, sort of creating that stuff, fighting that, is that friction for them where they've got, you know, the customer says, all right, well forget it, we're gonna go stove pipe with the SD WAN will consolidate some of the stuff. Are you seeing that? >>Yeah, I, I, I still think the sales model is that way. And I think that's something they need to work on changing. If they get into a situation where they have to get down into a feature battle of my SD WAN versus your SD wan, my firewall versus your firewall, frankly they've already lost, you know, because their value prop is the suite and, and is the platform. And I was talking with the CISO here that told me, he realizes now that you don't need best of breed everywhere to have best in class threat protection. In fact, best of breed everywhere leads to suboptimal threat protection. Cuz you have all these data data sets that are in silos, right? And so from a data scientist standpoint, right, there's the good data leads to good insights. Well, partial data leads to fragmented insights and that's, that's what the best, best of breed approach gives you. And so I was talking with Palo about this, can they have this vision of being best of breed and platform? I don't really think you can maintain best of breed everywhere across this portfolio this big, but you don't need to. >>That was my second point of my question. That's the point I'm saying. Yeah. And so, cuz cuz because you know, we've talked about this, that that sweets always win in the long run, >>Sweets win. >>Yeah. But here's the thing, I, I wonder to your your point about, you know, the customer, you know, understanding that that that, that this resonates with them. I, my guess is a lot of customers, you know, at that mid-level and the fat middle are like still sort of wed, you know, hugging that, that tool. So there's, there's work to be done here, but I think they, they, they got it right Because if they devolve, to your point, if they devolve down to that speeds and feeds, eh, what's the point of that? Where's their >>Valuable? You do not wanna get into a knife fight. And I, and I, and I think for them the, a big challenge now is convincing customers that the suite, the suite approach does work. And they have to be able to do that in actual customer examples. And so, you know, I I interviewed a bunch of customers here and the ones that have bought into XDR and xor and even are looking at their sim have told me that the, the, so think of soc operations, the old way heavily manually oriented, right? You have multiple panes of glass and you know, and then you've got, so there's a lot of people work before you bring the tools in, right? If done correctly with AI and ml, the machines would do all the heavy lifting and then you'd bring people in at the end to clean up the little bits that were missed, right? >>And so you, you moved to, from something that was very people heavy to something that's machine heavy and machines can work a lot faster than people. And the, and so the ones that I've talked that have, that have done that have said, look, our engineers have moved on to a lot different things. They're doing penetration testing, they're, you know, helping us with, with strategy and they're not fighting that, that daily fight of looking through log files. And the only proof point you need, Dave, is look at every big breach that we've had over the last five years. There's some SIM vendor up there that says, we caught it. Yeah. >>Yeah. We we had the data. >>Yeah. But, but, but the security team missed it. Well they missed it because you're, nobody can look at that much data manually. And so the, I I think their approach of relying heavily on machines to fight the fight is actually the right way. >>Is that a differentiator for them versus, we were talking before we went live that you and I first hit our very first segment back in 2017 at Fort Net. Is that, where do the two stand in your >>Yeah, it's funny cuz if you talk to the two vendors, they don't really see each other in a lot of accounts because Fort Net's more small market mid-market. It's the same strategy to some degree where Fort Net relies heavily on in-house development in Palo Alto relies heavily on acquisition. Yeah. And so I think from a consistently feature set, you know, Fort Net has an advantage there because it, it's all run off their, their their silicon. Where, where Palo's able to innovate very quickly. The, it it requires a lot of work right? To, to bring the front end and back ends together. But they're serving different markets. So >>Do you see that as a differentiator? The integration strategy that Palo Alto has as a differentiator? We talk to so many companies who have an a strong m and a strategy and, and execution arm. But the challenge is always integrating the technology so that the customer to, you know, ultimately it's the customer. >>I actually think they're, they're underrated as a, an acquirer. In fact, Dave wrote a post to a prior on Silicon Angle prior to Accelerate and he, he on, you put it on Twitter and you asked people to rank 'em as an acquirer and they were in the middle of the pack, >>Right? It was, it was. So it was Oracle, VMware, emc, ibm, Cisco, ServiceNow, and Palo Alto. Yeah. Or Oracle got very high marks. It was like 8.5 out of, you know, 10. Yeah. VMware I think was 6.5. Naira was high emc, big range. IBM five to seven. Cisco was three to eight. Yeah. Yeah, right. ServiceNow was a seven. And then, yeah, Palo Alto was like a five. And I, which I think it was unfair. Well, >>And I think it depends on how you look at it. And I, so I think a lot of the acquisitions Palo Alto's made, they've done a good job of integrating the backend data and they've almost ignored the front end. And so when you buy some of the products, it's a little clunky today. You know, if you work with Prisma Cloud, it could be a little bit cleaner. And even with, you know, the SD wan that took 'em a long time to bring CloudGenix in and stuff. But I think the approach is right. I don't, I don't necessarily believe you should integrate the front end until you've integrated the back end. >>That's >>The hard part, right? Because UL ultimately what you're gonna get, you're gonna get two panes of glass and one pane of glass and it might look pretty and all mush together, but ultimately you're not solving the bigger problem, right. Of, of being able to create that big data lake to, to fight security. And so I think, you know, the approach they've taken is the right one. I think from a user standpoint, maybe it doesn't show up as neatly because you don't see the frontend integration, but the way they're doing it is the right way to do it. And I'm glad they're doing it that way versus caving to the pressures of what, you know, the industry might want or >>Showed up in the performance of the company. I mean, this company was basically gonna double revenues to 7 billion from 2020 to >>2023. Think about that at that. That makes, >>I mean that's unbelievable, right? I mean, and then and they wanna double again. Yeah. You know, so, well >>What did, what did Nikesh was quoted as saying they wanna be the first cyber company that's a hundred billion dollars. He didn't give a timeline market >>Cap. Right. >>Market cap, right. Do what I wanna get both of your opinions on what you saw and heard and felt this week. What do you think the likelihood is? And and do you have any projections on how, you know, how many years it's gonna take for them to get there? >>Well, >>Well I think so if they're gonna get that big, right? And, and we were talking about this pre-show, any company that's becoming a big company does it through ecosystem >>Bingo >>Go, right? And that when you look around the show floor, it's not that impressive. No. And if that, if there's an area they need to focus on, it's building that ecosystem. And it's not with other security vendors, it's with application vendors and it's with the cloud companies and stuff. And they've got some relationships there, but they need to do more. I actually challenge 'em on that. One of the analyst sessions. They said, look, we've got 800 cortex partners. Well where are they? Right? Why isn't there a cortex stand here with a bunch of the small companies here? So I do think that that is an area they need to focus on. If they are gonna get to that, that market caps number, they will do so do so through ecosystem. Because every company that's achieved that has done it through ecosystem. >>A hundred percent agree. And you know, if you look at CrowdStrike's ecosystem, it's, I mean, pretty similar. Yeah. You know, it doesn't really, you know, make much, much, not much different from this, but I went back and just looked at some, you know, peak valuations during the pandemic and shortly thereafter CrowdStrike was 70 billion. You know, that's what their roughly their peak Palo Alto was 56, fortune was 59 for the actually diverged. Right. And now Palo Alto has taken the, the top mantle, you know, today it's market cap's 52. So it's held 93% of its peak value. Everybody else is tanking. Even Okta was 45 billion. It's been crushed as you well know. But, so Palo Alto wasn't always, you know, the number one in terms of market cap. But I guess my point is, look, if CrowdStrike could got to 70 billion during Yeah. During the frenzy, I think it's gonna take, to answer your question, I think it's gonna be five years. Okay. Before they get back there. I think this market's gonna be tough for a while from a valuation standpoint. I think generally tech is gonna kind of go up and down and sideways for a good year and a half, maybe even two years could be even longer. And then I think there's gonna be some next wave of productivity innovation that that hits. And then you're gonna, you're almost always gonna exceed the previous highs. It's gonna take a while. Yeah. >>Yeah, yeah. But I think their ability to disrupt the SIM market actually is something that I, I believe they're gonna do. I've been calling for the death of the sim for a long time and I know some people of Palo Alto are very cautious about saying that cuz the Splunks and the, you know, they're, they're their partners. But I, I think the, you know, it's what I said before, the, the tools are catching them, but they're, it's not in a way that's useful for the IT pro and, but I, I don't think the SIM vendors have that ecosystem of insight across network cloud endpoint. Right. Which is what you need in order to make a sim useful. >>CISO at an ETR round table said, if, if it weren't for my regulators, I would chuck my sim. >>Yes. >>But that's the only reason that, that this person was keeping it. No. >>Yeah. And I think the, the fact that most of those companies have moved to a perpetual MO or a a recurring revenue model actually helps unseat them. Typically when you pour a bunch of money into something, you remember the old computer associate says nobody ever took it out cuz the sunk dollars you spent to do it. But now that you're paying an annual recurring fee, it's actually makes it easier to take out. So >>Yeah, it's just an ebb and flow, right? Yeah. Because the maintenance costs were, you know, relatively low. Maybe it was 20% of the total. And then, you know, once every five years you had to do a refresh and you were still locked into the sort of maintenance and, and so yeah, I think you're right. The switching costs with sas, you know, in theory anyway, should be less >>Yeah. As long as you can migrate the data over. And I think they've got a pretty good handle on that. So, >>Yeah. So guys, I wanna get your perspective as a whole bunch of announcements here. We've only been here for a couple days, not a big conference as, as you can see from behind us. What Zs in your opinion was Palo Alto's main message and and what do you think about it main message at this event? And then same question for you. >>Yeah, I, I think their message largely wrapped around disruption, right? And, and they, and The's keynote already talked about that, right? And where they disrupted the firewall market by creating a NextGen firewall. In fact, if you look at all the new services they added to their firewall, you, you could almost say it's a NextGen NextGen firewall. But, but I do think the, the work they've done in the area of cloud and cortex actually I think is, is pretty impressive. And I think that's the, the SOC is ripe for disruption because it's for, for the most part, most socks still, you know, run off legacy playbooks. They run off legacy, you know, forensic models and things and they don't work. It's why we have so many breaches today. The, the dirty little secret that nobody ever wants to talk about is the bad guys are using machine learning, right? And so if you're using a signature based model, all they gotta do is tweak their model a little bit and it becomes, it bypasses them. So I, I think the only way to fight the the bad guys today is with you're gonna fight fire with fire. And I think that's, that's the path they've, they've headed >>Down. Yeah. The bad guys are hiding in plain sight, you know? Yeah, >>Yeah. Well it's, it's not hard to do now with a lot of those legacy tools. So >>I think, I think for me, you know, the stat that we threw out earlier, I think yesterday at our keynote analysis was, you know, the ETR data shows that are, that are that last survey around 35% of the respondents said we are actively consolidating, sorry, 44%, sorry, 35 says who are actively consolidating vendors, redundant vendors today that number's up to 44%. Yeah. It's by far the number one cost optimization technique. That's what these guys are pitching. And I think it's gonna resonate with people and, and I think to your point, they're integrating at the backend, their beeps are technical, right? I mean, they can deal with that complexity. Yeah. And so they don't need eye candy. Eventually they, they, they want to have that cuz it'll allow 'em to have deeper market penetration and make people more productive. But you know, that consolidation message came through loud and clear. >>Yeah. The big change in this industry too is all the new startups are all cloud native, right? They're all built on Amazon or Google or whatever. Yeah. And when your cloud native and you buy a cloud native integration is fast. It's not like having to integrate this big monolithic software stack anymore. Right. So I, I think their pace of integration will only accelerate from here because everything's now cloud native. >>If a customer comes to you or when a customer comes to you and says, Zs help us with this cyber transformation we have, our board isn't necessarily aligned with our executives in terms of execution of a security strategy. How do you advise them where Palo Alto is concerned? >>Yeah. You know, a lot, a lot of this is just fighting legacy mindset. And I've, I was talking with some CISOs here from state and local governments and things and they're, you know, they can't get more budget. They're fighting the tide. But what they did find is through the use of automation technology, they're able to bring their people costs way down. Right. And then be able to use that budget to invest in a lot of new projects. And so with that, you, you have to start with your biggest pain points, apply automation where you can, and then be able to use that budget to reinvest back in your security strategy. And it's good for the IT pros too, the security pros, my advice to the IT pros is, is if you're doing things today that aren't resume building, stop doing them. Right. Find a way to automate the money your job. And so if you're patching systems and you're looking through log files, there's no reason machines can't do that. And you go do something a lot more interesting. >>So true. It's like storage guys 10 years ago, provisioning loans. Yes. It's like, stop doing that. Yeah. You're gonna be outta a job. So who, last question I have is, is who do you see as the big competitors, the horses on the track question, right? So obviously Cisco kind of service has led for a while and you know, big portfolio company, CrowdStrike coming at it from end point. You know who, who, who do you see as the real players going for that? You know, right now the market's three to 4%. The leader has three, three 4% of the market. You know who they're all going for? 10, 15, maybe 20% of the market. Who, who are the likely candidates? Yeah, >>I don't know if CrowdStrike really has the breadth of portfolio to compete long term though. I I think they've had a nice run, but I, we might start to see the follow 'em. I think Microsoft is gonna be for middle. They've laid down the gauntlet, right? They are a security vendor, right? We, we were at Reinvent and a AWS is the platform for security vendors. Yes. Middle, somewhere in the middle. But Microsoft make no mistake, they're in security. They've got some good products. I think a lot of 'em are kind of good enough and they, they tie it to the licensing and I'm not sure that works in security, but they've certainly got the ear of a lot of it pros. >>It might work in smb. >>Yeah, yeah. It, it might. And, and I do like Zscaler. I, I know these guys poo poo the proxy model, but they've, they've done about as much with prox as you can. And I, I think it's, it's a battle of, I love the, the, the near, you know, proxies are dead and Jay's model, you know, Jay over at csca, throw 'em back at 'em. So I, it's good to see that kind of fight going on between the >>Two. Oh, it's great. Well, and, and again, ZScaler's coming at it from their cloud security angle. CrowdStrike's coming at it from endpoint. I, I do think CrowdStrike has an opportunity to build out the portfolio through m and a and maybe ecosystem. And then obviously, you know, Palo Alto's getting it done. How about Cisco? >>Yeah, Cisco's interesting. And I I think if Cisco can make the network matter in security and it should, right? We're talking about how a lot of you need a lot of forensics to fight security today. Well, they're gonna see things long before anybody else because they have all that network data. If they can tie network security, I, I mean they could really have that business take off. But we've been saying that about Cisco for 20 years. >>But big install based though. Yeah. It's hard for a company, any company to say, okay, hey Cisco customer sweep the floor and come with us. That's, that's >>A tough thing. They have a lot of good peace parts, right? And like duo's a good product and umbrella's a good product. They've, they've not done a good job. >>They're the opposite of these guys. >>They've not done a good job of the backend integration and that, that's where Cisco needs to, to focus. And I do think g G two Patel there fixed the WebEx group and I think he's now, in fact when you talk to him, he's doing very little on WebEx that that group's running itself and he's more focused in security. So I, I think we could see a resurgence there. But you know, they have a, from a revenue perspective, it's a little misleading cuz they have this big legacy base that's in decline while they're moving to cloud and stuff. So, but they, but they, there's a lot of Rick there trying to, to tie to network. >>Lots of fuel for conversation. We're gonna have to carry this on, on Silicon angle.com guys. Yes. And Wi KeePon. Lets do see us. Thank you so much for joining Dave and me giving us your insights as to this event. Where are gonna be next? Are you gonna be on >>Vacation? There's nothing more fun than mean on the cube. So what's outside of that though? Yeah, you know, Christmas coming up, I gotta go see family and be the obligatory, although for me that's a lot of travel, so I guess >>More planes. Yeah. >>Hopefully not in Vegas. >>Not in Vegas. >>Awesome. Nothing against Vegas. Yeah, no, >>We love it. We love >>It. Although I will say my year started off with ces. Yeah. And it's finishing up with Palo Alto here. The bookends. Yeah, exactly. In Vegas bookends. >>Well thanks so much for joining us. Thank you Dave. Always a pleasure to host a show with you and hear your insights. Reading your breaking analysis always kicks off my prep for show. And it, it's always great to see, but predictions come true. So thank you for being my co-host bet. All right. For Dave Valante Enz as Carla, I'm Lisa Martin. You've been watching The Cube, the leader in live, emerging and enterprise tech coverage. Thanks for watching.

Published Date : Dec 15 2022

SUMMARY :

The Cube presents Ignite 22, brought to you by Palo Alto It's the Cube Live at A friend of the Cube Guys, it's great to have you here. You know, I mean, I know was, yes, you sat in the analyst program, interested in what your takeaways were And I think it's safe to say they're more than firewall today. And so I think the old model of security of create Palo Alto's got, you know, whatever, 10, 15 years of, of, of history. And so the customer's gonna say, Hey, you know, I love your, your consolidation play, And I think that's something they need to work on changing. And so, cuz cuz because you know, we've talked about this, my guess is a lot of customers, you know, at that mid-level and the fat middle are like still sort And so, you know, I I interviewed a bunch of customers here and the ones that have bought into XDR And the only proof point you need, Dave, is look at every big breach that we've had over the last five And so the, I I think their approach of relying heavily on Is that a differentiator for them versus, we were talking before we went live that you and I first hit our very first segment back And so I think from a consistently you know, ultimately it's the customer. Angle prior to Accelerate and he, he on, you put it on Twitter and you asked people to rank you know, 10. And I think it depends on how you look at it. you know, the approach they've taken is the right one. I mean, this company was basically gonna double revenues to 7 billion That makes, I mean, and then and they wanna double again. What did, what did Nikesh was quoted as saying they wanna be the first cyber company that's a hundred billion dollars. And and do you have any projections on how, you know, how many years it's gonna take for them to get And that when you look around the show floor, it's not that impressive. And you know, if you look at CrowdStrike's ecosystem, it's, But I, I think the, you know, it's what I said before, the, the tools are catching I would chuck my sim. But that's the only reason that, that this person was keeping it. you remember the old computer associate says nobody ever took it out cuz the sunk dollars you spent to do it. And then, you know, once every five years you had to do a refresh and you were still And I think they've got a pretty good handle on that. Palo Alto's main message and and what do you think about it main message at this event? it's for, for the most part, most socks still, you know, run off legacy playbooks. Yeah, So I think, I think for me, you know, the stat that we threw out earlier, I think yesterday at our keynote analysis was, And when your cloud native and you buy a cloud native If a customer comes to you or when a customer comes to you and says, Zs help us with this cyber transformation And you go do something a lot more interesting. So obviously Cisco kind of service has led for a while and you know, big portfolio company, I don't know if CrowdStrike really has the breadth of portfolio to compete long term though. I love the, the, the near, you know, proxies are dead and Jay's model, And then obviously, you know, Palo Alto's getting it done. And I I think if Cisco can hey Cisco customer sweep the floor and come with us. And like duo's a good product and umbrella's a good product. And I do think g G two Patel there fixed the WebEx group and I think he's now, Thank you so much for joining Dave and me giving us your insights as to this event. you know, Christmas coming up, I gotta go see family and be the obligatory, although for me that's a lot of travel, Yeah. Yeah, no, We love it. And it's finishing up with Palo Alto here. Always a pleasure to host a show with you and hear your insights.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Lisa MartinPERSON

0.99+

CiscoORGANIZATION

0.99+

Dave ValantePERSON

0.99+

OracleORGANIZATION

0.99+

20%QUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

Fort NetORGANIZATION

0.99+

2017DATE

0.99+

AmazonORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

GoogleORGANIZATION

0.99+

VegasLOCATION

0.99+

CarlaPERSON

0.99+

70 billionQUANTITY

0.99+

80%QUANTITY

0.99+

IBMORGANIZATION

0.99+

10QUANTITY

0.99+

93%QUANTITY

0.99+

Palo AltoLOCATION

0.99+

AWSORGANIZATION

0.99+

five yearsQUANTITY

0.99+

2020DATE

0.99+

Palo Alto NetworksORGANIZATION

0.99+

JayPERSON

0.99+

45 billionQUANTITY

0.99+

7 billionQUANTITY

0.99+

Dave Valante EnzPERSON

0.99+

yesterdayDATE

0.99+

KaralaPERSON

0.99+

PaloORGANIZATION

0.99+

44%QUANTITY

0.99+

ibmORGANIZATION

0.99+

two vendorsQUANTITY

0.99+

35QUANTITY

0.99+

Palo Alto NetworksORGANIZATION

0.99+

Palo AltoORGANIZATION

0.99+

two panesQUANTITY

0.99+

threeQUANTITY

0.99+

ChristmasEVENT

0.99+

VMwareORGANIZATION

0.99+

8.5QUANTITY

0.99+

bothQUANTITY

0.99+

two yearsQUANTITY

0.99+

CrowdStrikeORGANIZATION

0.99+

56QUANTITY

0.99+

oneQUANTITY

0.99+

15QUANTITY

0.99+

second dayQUANTITY

0.99+

firstQUANTITY

0.99+

ReinventORGANIZATION

0.99+

LaceworkORGANIZATION

0.99+

ServiceNowORGANIZATION

0.99+

second pointQUANTITY

0.99+

59QUANTITY

0.99+

emcORGANIZATION

0.99+

4%QUANTITY

0.98+

OneQUANTITY

0.98+

twoQUANTITY

0.98+

todayDATE

0.98+

Ignite22ORGANIZATION

0.98+

two weeks agoDATE

0.98+

NairaORGANIZATION

0.98+

The CubeTITLE

0.98+

2023DATE

0.98+

RickPERSON

0.98+

Noor Faraby & Brian Brunner, Stripe Data Pipeline | AWS re:Invent 2022


 

>>Hello, fabulous cloud community and welcome to Las Vegas. We are the Cube and we will be broadcasting live from the AWS Reinvent Show floor for the next four days. This is our first opening segment. I am joined by the infamous John Furrier. John, it is your 10th year being here at Reinvent. How does >>It feel? It's been a great to see you. It feels great. I mean, just getting ready for the next four days. It's, this is the marathon of all tech shows. It's, it's busy, it's crowd, it's loud and the content and the people here are really kind of changing the game and the stories are always plentiful and deep and just it's, it really is one of those shows you kind of get intoxicated on the show floor and in the event and after hours people are partying. I mean it is like the big show and 10 years been amazing run People getting bigger. You're seeing the changing ecosystem Next Gen Cloud and you got the Classics Classic still kind of doing its thing. So getting a lot data, a lot of data stories. And our guests here are gonna talk more about that. This is the year the cloud kind of goes next gen and you start to see the success Gen One cloud players go on the next level. It's gonna be really fun. Fun week. >>Yes, I'm absolutely thrilled and you can certainly feel the excitement. The show floor doors just opened, people pouring in the drinks are getting stacked behind us. As you mentioned, it is gonna be a marathon and very exciting. On that note, fantastic interview to kick us off here. We're starting the day with Stripe. Please welcome nor and Brian, how are you both doing today? Excited to be here. >>Really happy to be here. Nice to meet you guys. Yeah, >>Definitely excited to be here. Nice to meet you. >>Yeah, you know, you were mentioning you could feel the temperature and the energy in here. It is hot, it's a hot show. We're a hot crew. Let's just be honest about that. No shame in that. No shame in that game. But I wanna, I wanna open us up. You know, Stripe serving 2 million customers according to the internet. AWS with 1 million customers of their own, both leading companies in your industries. What, just in case there's someone in the audience who hasn't heard of Stripe, what is Stripe and how can companies use it along with AWS nor, why don't you start us off? >>Yeah, so Stripe started back in 2010 originally as a payments company that helped businesses accept and process their payments online. So that was something that traditionally had been really tedious, kind of difficult for web developers to set up. And what Stripe did was actually introduce a couple of lines of code that developers could really easily integrate into their websites and start accepting those payments online. So payments is super core to who Stripe is as a company. It's something that we still focus on a lot today, but we actually like to think of ourselves now as more than just a payments company but rather financial infrastructure for the internet. And that's just because we have expanded into so many different tools and technologies that are beyond payments and actually help businesses with just about anything that they might need to do when it comes to the finances of running an online company. So what I mean by that, couple examples being setting up online marketplaces to accept multi-party payments, running subscriptions and recurring payments, collecting sales tax accurately and compliantly revenue recognition and data and analytics. Importantly on all of those things, which is what Brian and I focus on at Stripe. So yeah, since since 2010 Stripes really grown to serve millions of customers, as you said, from your small startups to your large multinational companies, be able to not only run their payments but also run complex financial operations online. >>Interesting. Even the Cube, the customer of Stripe, it's so easy to integrate. You guys got your roots there, but now as you guys got bigger, I mean you guys have massive traction and people are doing more, you guys are gonna talk here on the data pipeline in front you, the engineering manager. What has it grown to, I mean, what are some of the challenges and opportunities your customers are facing as they look at that data pipeline that you guys are talking about here at Reinvent? >>Yeah, so Stripe Data Pipeline really helps our customers get their data out of Stripe and into, you know, their data warehouse into Amazon Redshift. And that has been something that for our customers it's super important. They have a lot of other data sets that they want to join our Stripe data with to kind of get to more complex, more enriched insights. And Stripe data pipeline is just a really seamless way to do that. It lets you, without any engineering, without any coding, with pretty minimal setup, just connect your Stripe account to your Amazon Redshift data warehouse, really secure. It's encrypted, you know, it's scalable, it's gonna meet all of the needs of kind of a big enterprise and it gets you all of your Stripe data. So anything in our api, a lot of our reports are just like there for you to take and this just overcomes a big hurdle. I mean this is something that would take, you know, multiple engineers months to build if you wanted to do this in house. Yeah, we give it to you, you know, with a couple clicks. So it's kind of a, a step change for getting data out of Stripe into your data work. >>Yeah, the topic of this chat is getting more data outta your data from Stripe with the pipelining, this is kind of an interesting point, I want to get your thoughts. You guys are in the, in the front lines with customers, you know, stripes started out with their roots line of code, get up and running, payment gateway, whatever you wanna call it. Developers just want to get cash on the door. Thank you very much. Now you're kind of turning in growing up and continue to grow. Are you guys like a financial cloud? I mean, would you categorize yourself as a, cuz you're on top of aws? >>Yeah, financial infrastructure of the internet was a, was a claim I definitely wanna touch on from your, earlier today it was >>Powerful. You guys are super financial cloud basically. >>Yeah, super cloud basically the way that AWS kind of is the superstar in cloud computing. That's how we feel at Stripe that we want to put forth as financial infrastructure for the internet. So yeah, a lot of similarities. Actually it's funny, we're, we're really glad to be at aws. I think this is the first time that we've participated in a conference like this. But just to be able to participate and you know, be around AWS where we have a lot of synergies both as companies. Stripe is a customer of AWS and you know, for AWS users they can easily process payments through Stripe. So a lot of synergies there. And yeah, at a company level as well, we find ourselves really aligned with AWS in terms of the goals that we have for our users, helping them scale, expand globally, all of those good things. >>Let's dig in there a little bit more. Sounds like a wonderful collaboration. We love to hear of technology partnerships like that. Brian, talk to us a little bit about the challenges that the data pipeline solves from Stripe for Redshift users. >>Yeah, for sure. So Stripe Data Pipeline uses Amazon RedShift's built in data sharing capabilities, which gives you kind of an instant view into your Stripe data. If you weren't using Stripe data pipeline, you would have to, you know, ingest the state out of our api, kind of pull yourself manually. And yeah, I think that's just like a big part of it really is just the simplicity with what you can pull the data. >>Yeah, absolutely. And I mean the, the complexity of data and the volume of it is only gonna get bigger. So tools like that that can make things a lot easier are what we're all looking for. >>What's the machine learning angle? Cause I know there's lots of big topic here this year. More machine learning, more ai, a lot more solutions on top of the basic building blocks and the primitives at adds, you guys fit right into that. Cause developers doing more, they're either building their own or rolling out solutions. How do you guys see you guys connecting into that with the pipeline? Because, you know, data pipelining people like, they like that's, it feels like a heavy lift. What's the challenge there? Because when people roll their own or try to get in, it's, it's, it could be a lot of muck as they say. Yeah. What's the, what's the real pain point that you guys solve? >>So in terms of, you know, AI and machine learning, what Stripe Data Pipeline is gonna give you is it gives you a lot of signals around your payments that you can incorporate into your models. We actually have a number of customers that use Stripe radar data, so our fraud product and they integrate it with their in-house data that they get from other sources, have a really good understanding of fraud within their whole business. So it's kind of a way to get that data without having to like go through the process of ingesting it. So like, yeah, your, your team doesn't have to think about the ingestion piece. They can just think about, you know, building models, enriching the data, getting insights on top >>And Adam, so let's, we call it etl, the nasty three letter word in my interview with them. And that's what we're getting to where data is actually connecting via APIs and pipelines. Yes. Seamlessly into other data. So the data mashup, it feels like we're back into in the old mashup days now you've got data mashing up together. This integration's now a big practice, it's a becoming an industry standard. What are some of the patterns and matches that you see around how people are integrating their data? Because we all know machine learning works better when there's more data available and people want to connect their data and integrate it without the hassle. What's the, what's some of the use cases that >>Yeah, totally. So as Brian mentioned, there's a ton of use case for engineering teams and being able to get that data reported over efficiently and correctly and that's, you know, something exactly like you touched on that we're seeing nowadays is like simply having access to the data isn't enough. It's all about consolidating it correctly and accurately and effectively so that you can draw the best insights from that. So yeah, we're seeing a lot of use cases for teams across companies, including, a big example is finance teams. We had one of our largest users actually report that they were able to close their books faster than ever from integrating all of their Stripe revenue data for their business with their, the rest of their data in their data warehouse, which was traditionally something that would've taken them days, weeks, you know, having to do the manual aspect. But they were able to, to >>Simplify that, Savannah, you know, we were talking at the last event we were at Supercomputing where it's more speeds and feeds as people get more compute power, right? They can do more at the application level with developers. And one of the things we've been noticing I'd love to get your reaction to is as you guys have customers, millions of customers, are you seeing customers doing more with Stripe that's not just customers where they're more of an ecosystem partner of Stripe as people see that Stripe is not just a, a >>More comprehensive solution. >>Yeah. What's going on with the customer base? I can see the developers embedding it in, but once you get Stripe, you're like a, you're the plumbing, you're the financial bloodline if you will for the all the applications. Are your customers turning into partners, ecosystem partners? How do you see that? >>Yeah, so we definitely, that's what we're hoping to do. We're really hoping to be everything that a user needs when they wanna run an online business, be able to come in and maybe initially they're just using payments or they're just using billing to set up subscriptions but down the line, like as they grow, as they might go public, we wanna be able to scale with them and be able to offer them all of the products that they need to do. So Data Pipeline being a really important one for, you know, if you're a smaller company you might not be needing to leverage all of this big data and making important product decisions that you know, might come down to the very details, but as you scale, it's really something that we've seen a lot of our larger users benefit from. >>Oh and people don't wanna have to factor in too many different variables. There's enough complexity scaling a business, especially if you're headed towards IPO or something like that. Anyway, I love that the Stripe data pipeline is a no code solution as well. So people can do more faster. I wanna talk about it cuz it struck me right away on our lineup that we have engineering and product marketing on the stage with us. Now for those who haven't worked in a very high growth, massive company before, these teams can have a tiny bit of tension only because both teams want a lot of great things for the end user and their community. Tell me a little bit about the culture at Stripe and what it's like collaborating on the data pipeline. >>Yeah, I mean I, I can kick it off, you know, from, from the standpoint like we're on the same team, like we want to grow Stripe data pipeline, that is the goal. So whatever it takes to kind of get that job done is what we're gonna do. And I think that is something that is just really core to all of Stripe is like high collaboration, high trust, you know, this is something where we can all win if we work together. You don't need to, you know, compete with like products for like resourcing or to get your stuff done. It's like no, what's the, what's the, the team goal here, right? Like we're looking for team wins, not, you know, individual wins. >>Awesome. Yeah. And at the end of the day we have the same goal of connecting the product and the user in a way that makes sense and delivering the best product to that target user. So it's, it's really, it's a great collaboration and as Brian mentioned, the culture at Stripe really aligns with that as >>Well. So you got the engineering teams that get value outta that you guys are dealing with, that's your customer. But the security angle really becomes a big, I think catalyst cuz not just engineering, they gotta build stuff in so they're always building, but the security angle's interesting cuz now you got that data feeding security teams, this is becoming very secure security ops oriented. >>Yeah, you know, we are really, really tight partners with our internal security folks. They review everything that we do. We have a really robust security team. But I think, you know, kind of tying back to the Amazon side, like Amazon, Redshift is a very secure product and the way that we share data is really secure. You know, the, the sharing mechanism only works between encrypted clusters. So your data is encrypted at rest, encrypted and transit and excuse me, >>You're allowed to breathe. You also swallow the audience as well as your team at Stripe and all of us here at the Cube would like your survival. First and foremost, the knowledge we'll get to the people. >>Yeah, for sure. Where else was I gonna go? Yeah, so the other thing like you kind of mentioned, you know, there are these ETLs out there, but they, you know that that requires you to trust your data to a third party. So that's another thing here where like your data is only going from stripe to your cluster. There's no one in the middle, no one else has seen what you're doing, there's no other security risks. So security's a big focus and it kind of runs through the whole process both on our side and Amazon side. >>What's the most important story for Stripe at this event? You guys hear? How would you say, how would you say, and if you're on the elevator, what's going on with Stripe? Why now? What's so important at Reinvent for Stripe? >>Yeah, I mean I'm gonna use this as an opportunity to plug data pipelines. That's what we focus on. We're here representing the product, which is the easiest way for any user of aws, a user of Amazon, Redshift and a user of Stripe be able to connect the dots and get their data in the best way possible so that they can draw important business insights from that. >>Right? >>Yeah, I think, you know, I would double what North said, really grow Stripe data pipeline, get it to more customers, get more value for our customers by connecting them with their data and with reporting. I think that's, you know, my goal here is to talk to folks, kind of understand what they want to see out of their data and get them onto Stripe data pipeline. >>And you know, former Mike Mikela, former eight executive now over there at Stripe leading the charge, he knows a lot about Amazon here at aws. The theme tomorrow, Adams Leslie keynote, it's gonna be a lot about data, data integration, data end to end Lifeing, you see more, we call it data as code where engineering infrastructure as code was cloud was starting to see a big trend towards data as code where it's more of an engineering opportunity and solution insights. This data as code is kinda like the next evolution. What do you guys think about that? >>Yeah, definitely there is a ton that you can get out of your data if it's in the right place and you can analyze it in the correct ways. You know, you look at Redshift and you can pull data from Redshift into a ton of other products to like, you know, visualize it to get machine learning insights and you need the data there to be able to do this. So again, Stripe Data Pipeline is a great way to take your data and integrate it into the larger data picture that you're building within your company. >>I love that you are supporting businesses of all sizes and millions of them. No. And Brian, thank you so much for being here and telling us more about the financial infrastructure of the internet. That is Stripe, John Furrier. Thanks as always for your questions and your commentary. And thank you to all of you for tuning in to the Cubes coverage of AWS Reinvent Live here from Las Vegas, Nevada. I'm Savannah Peterson and we look forward to seeing you all week.

Published Date : Nov 29 2022

SUMMARY :

I am joined by the infamous John Furrier. kind of goes next gen and you start to see the success Gen One cloud players go Yes, I'm absolutely thrilled and you can certainly feel the excitement. Nice to meet you guys. Definitely excited to be here. Yeah, you know, you were mentioning you could feel the temperature and the energy in here. as you said, from your small startups to your large multinational companies, I mean you guys have massive traction and people are doing more, you guys are gonna talk here and it gets you all of your Stripe data. you know, stripes started out with their roots line of code, get up and running, payment gateway, whatever you wanna call it. You guys are super financial cloud basically. But just to be able to participate and you know, be around AWS We love to hear of technology of it really is just the simplicity with what you can pull the data. And I mean the, the complexity of data and the volume of it is only gonna get bigger. blocks and the primitives at adds, you guys fit right into that. So in terms of, you know, AI and machine learning, what Stripe Data Pipeline is gonna give you is matches that you see around how people are integrating their data? that would've taken them days, weeks, you know, having to do the manual aspect. Simplify that, Savannah, you know, we were talking at the last event we were at Supercomputing where it's more speeds and feeds as people I can see the developers embedding it in, but once you get Stripe, decisions that you know, might come down to the very details, but as you scale, Anyway, I love that the Stripe data pipeline is Yeah, I mean I, I can kick it off, you know, from, So it's, it's really, it's a great collaboration and as Brian mentioned, the culture at Stripe really aligns they gotta build stuff in so they're always building, but the security angle's interesting cuz now you Yeah, you know, we are really, really tight partners with our internal security folks. You also swallow the audience as well as your team at Stripe Yeah, so the other thing like you kind of mentioned, We're here representing the product, which is the easiest way for any user I think that's, you know, my goal here is to talk to folks, kind of understand what they want And you know, former Mike Mikela, former eight executive now over there at Stripe leading the charge, Yeah, definitely there is a ton that you can get out of your data if it's in the right place and you can analyze I love that you are supporting businesses of all sizes and millions of them.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BrianPERSON

0.99+

Mike MikelaPERSON

0.99+

2010DATE

0.99+

Brian BrunnerPERSON

0.99+

StripeORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

Savannah PetersonPERSON

0.99+

Las VegasLOCATION

0.99+

John FurrierPERSON

0.99+

AdamPERSON

0.99+

JohnPERSON

0.99+

10th yearQUANTITY

0.99+

StripesORGANIZATION

0.99+

SavannahPERSON

0.99+

Noor FarabyPERSON

0.99+

1 million customersQUANTITY

0.99+

10 yearsQUANTITY

0.99+

bothQUANTITY

0.99+

RedshiftORGANIZATION

0.99+

stripesORGANIZATION

0.99+

2 million customersQUANTITY

0.99+

Las Vegas, NevadaLOCATION

0.99+

both teamsQUANTITY

0.98+

first timeQUANTITY

0.98+

todayDATE

0.98+

FirstQUANTITY

0.98+

awsORGANIZATION

0.98+

millionsQUANTITY

0.98+

Stripe Data PipelineORGANIZATION

0.97+

this yearDATE

0.97+

oneQUANTITY

0.97+

eight executiveQUANTITY

0.96+

tomorrowDATE

0.96+

first opening segmentQUANTITY

0.96+

millions of customersQUANTITY

0.96+

stripeORGANIZATION

0.91+

Adams LesliePERSON

0.9+

Victoria Avseeva & Tom Leyden, Kasten by Veeam | KubeCon + CloudNativeCon NA 2022


 

>>Hello everyone, and welcome back to the Cube's Live coverage of Cuban here in Motor City, Michigan. My name is Savannah Peterson and I'm delighted to be joined for this segment by my co-host Lisa Martin. Lisa, how you doing? Good. >>We are, we've had such great energy for three days, especially on a Friday. Yeah, that's challenging to do for a tech conference. Go all week, push through the end of day Friday. But we're here, We're excited. We have a great conversation coming up. Absolutely. A little of our alumni is back with us. Love it. We have a great conversation about learning. >>There's been a lot of learning this week, and I cannot wait to hear what these folks have to say. Please welcome Tom and Victoria from Cast by Beam. You guys are swag up very well. You've got the Fanny pack. You've got the vest. You even were nice enough to give me a Carhartt Beanie. Carhartt being a Michigan company, we've had so much love for Detroit and, and locally sourced swag here. I've never seen that before. How has the week been for you? >>The week has been amazing, as you can say by my voice probably. >>So the mic helps. Don't worry. You're good. >>Yeah, so, So we've been talking to tons and tons of people, obviously some vendors, partners of ours. That was great seeing all those people face to face again, because in the past years we haven't really been able to meet up with those people. But then of course, also a lot of end users and most importantly, we've met a lot of people that wanted to learn Kubernetes, that came here to learn Kubernetes, and we've been able to help them. So feel very satisfied about that. >>When we were at VMware explorer, Tom, you were on the program with us, just, I guess that was a couple of months ago. I'm listening track. So many events are coming up. >>Time is a loop. It's >>Okay. It really is. You, you teased some new things coming from a learning perspective. What is going on there? >>All right. So I'm happy that you link back to VMware explorer there because Yeah, I was so excited to talk about it, but I couldn't, and it was frustrating. I knew it was coming up. That was was gonna be awesome. So just before Cuban, we launched Cube Campus, which is the rebrand of learning dot cast io. And Victoria is the great mind behind all of this, but what the gist of it, and then I'll let Victoria talk a little bit. The gist of Cube Campus is this all started as a small webpage in our own domain to bring some hands on lab online and let people use them. But we saw so many people who were interested in those labs that we thought, okay, we have to make this its own community, and this should not be a branded community or a company branded community. >>This needs to be its own thing because people, they like to be in just a community environment without the brand from the company being there. So we made it completely independent. It's a Cube campus, it's still a hundred percent free and it's still the That's right. Only platform where you actually learn Kubernetes with hands on labs. We have 14 labs today. We've been creating one per month and we have a lot of people on there. The most exciting part this week is that we had our first learning day, but before we go there, I suggest we let Victoria talk a little bit about that user experience of Cube Campus. >>Oh, absolutely. So Cube Campus is, and Tom mentioned it's a one year old platform, and we rebranded it specifically to welcome more and, you know, embrace this Kubernetes space total as one year anniversary. We have over 11,000 students and they've been taking labs Wow. Over 7,000. Yes. Labs taken. And per each user, if you actually count approximation, it's over three labs, three point 29. And I believe we're growing as per user if you look at the numbers. So it's a huge success and it's very easy to use overall. If you look at this, it's a number one free Kubernetes learning platform. So for you user journey for your Kubernetes journey, if you start from scratch, don't be afraid. That's we, we got, we got it all. We got you back. >>It's so important and, and I'm sure most of our audience knows this, but the, the number one challenge according to Gartner, according to everyone with Kubernetes, is the complexity. Especially when you're getting harder. I think it's incredibly awesome that you've decided to do this. 11,000 students. I just wanna settle on that. I mean, in your first year is really impressive. How did this become, and I'm sure this was a conversation you two probably had. How did this become a priority for CAST and by Beam? >>I have to go back for that. To the last virtual only Cuban where we were lucky enough to have set up a campaign. It was actually, we had an artist that was doing caricatures in a Zoom room, and it gave us an opportunity to actually talk to people because the challenge back in the days was that everything virtual, it's very hard to talk to people. Every single conversation we had with people asking them, Why are you at cu com virtual was to learn Kubernetes every single conversation. Yeah. And so that was, that is one data point. The other data point is we had one lab to, to use our software, and that was extremely popular. So as a team, we decided we should make more labs and not just about our product, but also about Kubernetes. So that initial page that I talked about that we built, we had three labs at launch. >>One was to learn install Kubernetes. One was to build a first application on Kubernetes, and then a third one was to learn how to back up and restore your application. So there was still a little bit of promoting our technology in there, but pretty soon we decided, okay, this has to become even more. So we added storage, we added security and, and a lot more labs. So today, 14 labs, and we're still adding one every month. The next step for the labs is going to be to involve other partners and have them bring their technologies in the lab. So that's our user base can actually learn more about Kubernetes related technologies and then hopefully with links to open source tools or free software tools. And it's, it's gonna continue to be a, a learning experience for Kubernetes. I >>Love how this seems to be, have been born out of the pandemic in terms of the inability to, to connect with customers, end users, to really understand what their challenges are, how do we help you best? But you saw the demand organically and built this, and then in, in the first year, not only 11,000 as Victoria mentioned, 11,000 users, but you've almost quadrupled the number of labs that you have on the platform in such a short time period. But you did hands on lab here, which I know was a major success. Talk to us about that and what, what surprised you about Yeah, the appetite to learn that's >>Here. Yeah. So actually I'm glad that you relay this back to the pandemic because yes, it was all online because it was still the, the tail end of the pandemic, but then for this event we're like, okay, it's time to do this in person. This is the next step, right? So we organized our first learning day as a co-located event. We were hoping to get 60 people together in a room. We did two labs, a rookie and a pro. So we said two times 30 people. That's our goal because it's really, it's competitive here with the collocated events. It's difficult >>Bringing people lots going on. >>And why don't I, why don't I let Victoria talk about the success of that learning day, because it was big part also her help for that. >>You know, our main goal is to meet expectations and actually see the challenges of our end user. So we actually, it also goes back to what we started doing research. We saw the pain points and yes, it's absolutely reflecting, reflecting on how we deal with this and what we see. And people very appreciative and they love platform because it's not only prerequisites, but also hands on lab practice. So, and it's free again, it's applied, which is great. Yes. So we thought about the user experience, user flow, also based, you know, the product when it's successful and you see the result. And that's where we, can you say the numbers? So our expectation was 60 >>People. You're kinda, you I feel like a suspense is starting killing. How many people came? >>We had over 350 people in our room. Whoa. >>Wow. Wow. >>And small disclaimer, we had a little bit of a technical issue in the beginning because of the success. There was a wireless problem in the hotel amongst others. Oh geez. So we were getting a little bit nervous because we were delayed 20 minutes. Nobody left that, that's, I was standing at the door while people were solving the issues and I was like, Okay, now people are gonna walk out. Right. Nobody left. Kind >>Of gives me >>Ose bump wearing that. We had a little reception afterwards and I talked to people, sorry about the, the disruption that we had under like, no, we, we are so happy that you're doing this. This was such a great experience. Castin also threw party later this week at the party. We had people come up to us like, I was at your learning day and this was so good. Thank you so much for doing this. I'm gonna take the rest of the classes online now. They love it. Really? >>Yeah. We had our instructors leading the program as well, so if they had any questions, it was also address immediately. So it was a, it was amazing event actually. I'm really grateful for people to come actually unappreciated. >>But now your boss knows how you can blow out metrics though. >>Yeah, yeah, yeah, yeah. Gonna >>Raise Victoria. >>Very good point. It's a very >>Good point. I can >>Tell. It's, it's actually, it's very tough to, for me personally, to analyze where the success came from. Because first of all, the team did an amazing job at setting the whole thing up. There was food and drinks for everybody, and it was really a very nice location in a hotel nearby. We made it a colocated event and we saw a lot of people register through the Cuban registration website. But we've done colocated events before and you typically see a very high no-show rate. And this was not the case right now. The a lot of, I mean the, the no-show was actually very low. Obviously we did our own campaign to our own database. Right. But it's hard to say like, we have a lot of people all over the world and how many people are actually gonna be in Detroit. Yeah. One element that also helped, I'm actually very proud of that, One of the people on our team, Thomas Keenan, he reached out to the local universities. Yes. And he invited students to come to learning day as well. I don't think it was very full with students. It was a good chunk of them. So there was a lot of people from here, but it was a good mix. And that way, I mean, we're giving back a little bit to the universities versus students. >>Absolutely. Much. >>I need to, >>There's a lot of love for Detroit this week. I'm all about it. >>It's amazing. But, but from a STEM perspective, that's huge. We're reaching down into that community and really giving them the opportunity to >>Learn. Well, and what a gateway for Castin. I mean, I can easily say, I mean, you are the number, we haven't really talked about casting at all, but before we do, what are those pins in front of you? >>So this is a physical pain. These are physical pins that we gave away for different programs. So people who took labs, for example, rookie level, they would get this p it's a rookie. >>Yes. I'm gonna hold this up just so they can do a little close shot on if you want. Yeah. >>And this is PR for, it's a, it's a next level program. So we have a program actually for IS to beginners inter intermediate and then pro. So three, three different levels. And this one is for Helman. It's actually from previous. >>No, Helmsman is someone who has taken the first three labs, right? >>Yes, it is. But we actually had it already before. So this one is, yeah, this one is, So we built two new labs for this event and it was very, very great, you know, to, to have a ready absolutely new before this event. So we launched the whole website, the whole platform with new labs, additional labs, and >>Before an event, honestly. Yeah. >>Yeah. We also had such >>Your expression just said it all. Exactly. >>You're a vacation and your future. I >>Hope so. >>We've had a couple of rough freaks. Yeah. This is part of it. Yeah. So, but about those labs. So in the classroom we had two, right? We had the, the, the rookie and the pro. And like I said, we wanted an audience for both. Most people stayed for both. And there were people at the venue one hour before we started because they did not want to miss it. Right. And what that chose to me is that even though Cuban has been around for a long time, and people have been coming back to this, there is a huge audience that considers themselves still very early on in their Kubernetes journey and wants to take and, and is not too proud to go to a rookie class for Kubernetes. So for us, that was like, okay, we're doing the right thing because yeah, with the website as well, more rookie users will keep, keep coming. And the big goal for us is just to accelerate their Kubernetes journey. Right. There's a lot of platforms out there. One platform I like as well is called the tech world with nana, she has a lot of instructional for >>You. Oh, she's a wonderful YouTuber. >>She, she's, yeah, her following is amazing. But what we add to this is the hands on part. Right? And, and there's a lot of auto resources as well where you have like papers and books and everything. We try to add those as well, but we feel that you can only learn it by doing it. And that is what we offer. >>Absolutely. Totally. Something like >>Kubernetes, and it sounds like you're demystifying it. You talked about one of the biggest things that everyone talks about with respect to Kubernetes adoption and some of the barriers is the complexity. But it sounds to me like at the, we talked about the demand being there for the hands on labs, the the cube campus.io, but also the fact that people were waiting an hour early, they're recognizing it's okay to raise, go. I don't really understand this. Yeah. In fact, another thing that I heard speaking of, of the rookies is that about 60% of the attendees at this year's cube con are Yeah, we heard that >>Out new. >>Yeah. So maybe that's smell a lot of those rookies showed up saying, >>Well, so even >>These guys are gonna help us really demystify and start learning this at a pace that works for me as an individual. >>There's some crazy macro data to support this. Just to echo this. So 85% of enterprise companies are about to start making this transition in leveraging Kubernetes. That means there's only 15% of a very healthy, substantial market that has adopted the technology at scale. You are teaching that group of people. Let's talk about casting a little bit. Number one, Kubernetes backup, 900% growth recently. How, how are we managing that? What's next for you, you guys? >>Yeah, so growth last year was amazing. Yeah. This year we're seeing very good numbers as well. I think part of the explanation is because people are going into production, you cannot sell back up to a company that is not in production with their right. With their applications. Right? So what we are starting to see is people are finally going into production with their Kubernetes applications and are realizing we have to back this up. The other trend that we're seeing is, I think still in LA last year we were having a lot of stateless first estate full conversations. Remember containers were created for stateless applications. That's no longer the case. Absolutely. But now the acceptance is there. We're not having those. Oh. But we're stateless conversations because everybody runs at least a database with some user data or application data, whatever. So all Kubernetes applications need to be backed up. Absolutely. And we're the number one product for that. >>And you guys just had recently had a new release. Yes. Talk to us a little bit about that before we wrap. It's new in the platform and, and also what gives you, what gives cast. And by being that competitive advantage in this new release, >>The competitive advantage is really simple. Our solution was built for Kubernetes. With Kubernetes. There are other products. >>Talk about dog fooding. Yeah. Yeah. >>That's great. Exactly. Yeah. And you know what, one of our successes at the show is also because we're using Kubernetes to build our application. People love to come to our booth to talk to our engineers, who we always bring to the show because they, they have so much experience to share. That also helps us with ems, by the way, to, to, to build those labs, Right? You need to have the, the experience. So the big competitive advantage is really that we're Kubernetes native. And then to talk about 5.5, I was going like, what was the other part of the question? So yeah, we had 5.5 launched also during the show. So it was really a busy week. The big focus for five five was simplicity. To make it even easier to use our product. We really want people to, to find it easy. We, we were using, we were using new helm charts and, and, and things like that. The second part of the launch was to do even more partner integrations. Because if you look at the space, this cloud native space, it's, you can also attest to that with, with Cube campus, when you build an application, you need so many different tools, right? And we are trying to integrate with all of those tools in the most easy and most efficient way so that it becomes easy for our customers to use our technology in their Kubernetes stack. >>I love it. Tom Victoria, one final question for you before we wrap up. You mentioned that you have a fantastic team. I can tell just from the energy you two have. That's probably the truth. You also mentioned that you bring the party everywhere you go. Where are we all going after this? Where's the party tonight? Yeah. >>Well, let's first go to a ballgame tonight. >>The party's on the court. I love it. Go Pistons. >>And, and then we'll end up somewhere downtown in a, in a good club, I guess. >>Yeah. Yeah. Well, we'll see how the show down with the hawks goes. I hope you guys make it to the game. Tom Victoria, thank you so much for being here. We're excited about what you're doing. Lisa, always a joy sharing the stage with you. My love. And to all of you who are watching, thank you so much for tuning into the cube. We are wrapping up here with one segment left in Detroit, Michigan. My name's Savannah Peterson. Thanks for being here.

Published Date : Oct 28 2022

SUMMARY :

Lisa, how you doing? Yeah, that's challenging to do for a tech conference. There's been a lot of learning this week, and I cannot wait to hear what these folks have to say. So the mic helps. So feel very satisfied about that. When we were at VMware explorer, Tom, you were on the program with us, just, Time is a loop. You, you teased some new things coming from a learning perspective. So I'm happy that you link back to VMware explorer there because Yeah, So we made it completely independent. And I believe we're growing as per user if you look and I'm sure this was a conversation you two probably had. So that initial page that I talked about that we built, we had three labs at So we added storage, Talk to us about that and what, what surprised you about Yeah, the appetite to learn that's So we organized our first learning day as a co-located event. because it was big part also her help for that. So we actually, it also goes back to what How many people came? We had over 350 people in our room. So we were getting a little bit We had people come up to us like, I was at your learning day and this was so good. it was a, it was amazing event actually. Yeah, yeah, yeah, yeah. It's a very I can But it's hard to say like, we have a lot of people all over the world and how Absolutely. There's a lot of love for Detroit this week. really giving them the opportunity to I mean, I can easily say, I mean, you are the number, These are physical pins that we gave away for different Yeah. So we have a program actually So we launched the whole website, Yeah. Your expression just said it all. I So in the classroom we had two, right? And, and there's a lot of auto resources as well where you have like Something like about 60% of the attendees at this year's cube con are Yeah, we heard that These guys are gonna help us really demystify and start learning this at a pace that works So 85% of enterprise companies is because people are going into production, you cannot sell back Talk to us a little bit about that before we wrap. Our solution was built for Kubernetes. Talk about dog fooding. And then to talk about 5.5, I was going like, what was the other part of the question? I can tell just from the energy you two have. The party's on the court. And to all of you who are watching, thank you so much for tuning into the cube.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

Thomas KeenanPERSON

0.99+

Tom LeydenPERSON

0.99+

Savannah PetersonPERSON

0.99+

TomPERSON

0.99+

14 labsQUANTITY

0.99+

DetroitLOCATION

0.99+

twoQUANTITY

0.99+

CarharttORGANIZATION

0.99+

LALOCATION

0.99+

20 minutesQUANTITY

0.99+

85%QUANTITY

0.99+

Tom VictoriaPERSON

0.99+

900%QUANTITY

0.99+

LisaPERSON

0.99+

VictoriaPERSON

0.99+

last yearDATE

0.99+

60 peopleQUANTITY

0.99+

bothQUANTITY

0.99+

two labsQUANTITY

0.99+

60QUANTITY

0.99+

This yearDATE

0.99+

Detroit, MichiganLOCATION

0.99+

Victoria AvseevaPERSON

0.99+

threeQUANTITY

0.99+

MichiganLOCATION

0.99+

11,000 usersQUANTITY

0.99+

Motor City, MichiganLOCATION

0.99+

three labsQUANTITY

0.99+

11,000 studentsQUANTITY

0.99+

one labQUANTITY

0.99+

over 11,000 studentsQUANTITY

0.99+

fiveQUANTITY

0.99+

first yearQUANTITY

0.99+

KubernetesTITLE

0.99+

first applicationQUANTITY

0.99+

30 peopleQUANTITY

0.99+

11,000QUANTITY

0.98+

three daysQUANTITY

0.98+

todayDATE

0.98+

one final questionQUANTITY

0.98+

OneQUANTITY

0.98+

CubeORGANIZATION

0.98+

first learning dayQUANTITY

0.98+

15%QUANTITY

0.98+

pandemicEVENT

0.98+

firstQUANTITY

0.98+

over 350 peopleQUANTITY

0.98+

oneQUANTITY

0.98+

third oneQUANTITY

0.98+

tonightDATE

0.97+

one data pointQUANTITY

0.97+

Over 7,000QUANTITY

0.97+

this weekDATE

0.97+

two new labsQUANTITY

0.97+

later this weekDATE

0.97+

One platformQUANTITY

0.97+

KubeConEVENT

0.96+

One elementQUANTITY

0.96+

HelmsmanPERSON

0.96+

Cube CampusORGANIZATION

0.95+

KastenPERSON

0.95+

KubernetesORGANIZATION

0.95+

about 60%QUANTITY

0.95+

hundred percentQUANTITY

0.95+

Alvaro Celiss and Michal Lesiczka Accelerate Hybrid Cloud with Nutanix & Microsoft


 

>>In late 2009 when the industry was just beginning to offer so-called converged infrastructure, CI Nutanix was skating to the puck, so to speak, meaning unlike conversion infrastructure, which essentially bolted together compute and networking and storage into a single skew that was very hardware centric. Nutanix was focused on creating HCI hyperconverged infrastructure, which was a software led architecture that unified the key elements of data center infrastructure. Now, while both approaches saved time and money, HCI took the concept to new heights of cost savings and simplicity. Hyperconverged infrastructure became a staple of private clouds creating a cloudlike experience. OnPrem. As the public cloud evolved and grew, more and more customers are now taking a cloud first approach to it. So the challenge becomes how do you remodel your IT house so that you can connect your on-prem workloads to the cloud, to both simplify cloud migration, while at the same time creating an identical experience across your estate? >>Hello, and welcome to this special program, Accelerate Hybrid Cloud with Nutanix and Microsoft Made Possible by By Nutanix and produced by the Cube. I'm Dave Ante, one of your hosts today. Now, in this session, we'll hear how Nutanix is evolving its initial vision of simplifying infrastructure, deployment and management to support modern applications by partnering with Microsoft to enable that consistent experience that we talked about earlier, to extend hybrid cloud to Microsoft Azure and take advantage of cloud native tooling. Now, what's really important to stress here, and you'll hear this in our second segment, substantive engineering work has gone into this partnership. A lot of partnerships are sealed with a press release. We sometimes call it a Barney deal. You know, I love you, you love me. Like Barney, the once popular children's dinosaur character. We dig into the critical engineering aspects that enable that seamless connection between on-prem infrastructure and the public cloud. >>Now, in our first segment, Lisa Martin talks to Alro Salise, who is the vice president of Global ISD Commercial Solutions at Microsoft, and Michael Les Chica, who is the vice president of business development for the cloud and database partner ecosystem at Nutanix. Now, after that, Lisa will kick it back to me in our Boston studios to speak with Eric Lockard, who is the corporate vice president of Microsoft Azure specialized, along with Thomas Cornell, who is the senior vice president of products at Nutanix. And Indu Carey, who's the senior vice president of of engineering for NCI and NNC two at Nutanix. And we'll dig deeper into the announcement and it's salient features. Thanks for being with us. We hope you enjoy the program. Over to Lisa. >>Hi everyone. Welcome to our event Accelerate Hybrid Cloud with Nutanix and Microsoft. I'm your host Lisa Martin, and I've got two great guests here with me to give you some exciting news. Please welcome Alva Salise, the Vice President of Global ISD Commercial Solutions at Microsoft, and Michael Les Chika, VP of Business Development Cloud and database partner ecosystem at Nutanix. Guys, it's great to have you on the program. Thanks so much for joining me today. Great to be here. >>Thank you, Lisa. Looking forward, >>Yeah, so let's go ahead and start with you. Talk to me from your lens, what are you seeing in terms of the importance of the role of the the ISV ecosystem and really helping customers make their business outcomes successful? >>Oh, absolutely. Well, first of all, thank you for the invitation and thank you Michael and the Nutanix team for the partnership. The the ISV ecosystem plays a critical role as we support our customers and enable them in their data transformation journeys to create value, to move at their own pace, and more important to be sure that every one of them, as they transform themselves, have the right set of solutions for the long term with high differentiation, cost effectiveness and resiliency, especially given the times that we're living. >>Yeah, that resiliency is getting more and more critical as each day goes on. Ava was sticking with you. We got Microsoft Ignite going on today. What are some of the key themes that we should expect this year and how do they align to Microsoft's vision and strategy? >>Ah, great question. Thank you. When you think about it, we wanna talk about the topics that are very relevant and our customers have asked us to go deeper and, and share with them. One of them, as you may imagine, is how can we do more with less using Azure, especially given the current times that we're living in the, the business context has changed so much, they have different imperative, different different amount of pressure and priorities. How can we help? How can we combine the platform, the value that Microsoft can bring and our Microsoft ISV partner ecosystem to deliver more value and enable them to have their own journey? Actually, in that frame, if I may, we are making this announcement today with Nutanix. I, the Nutanix cloud clusters are often the fastest way on which customers will be able to do that journey into the cloud because it's very consistent with environments that they already know and use on premise. And once they go into the cloud, then they have all the benefit of scale, agility, resiliency, security, and cost benefits that they're looking for. So that topic and this type of announcements will be a big part of what we doing. Ignite, >>Exciting. Michael, let's bring you into the conversation now. Big milestone of our RDTs that the general availability of Nutanix Cloud clusters on Azure. Talk to us about that from Nutanix's perspective and also gimme a little bit of color, Michael, on the partnership, the relationship. >>Yeah, sure, absolutely. So we actually entered a partnership couple years ago, so we've been working on this solution quite a while, but really our ultimate goal from day one was really to make our customers journeys to hybrid cloud simpler and faster. So really for both companies, I think our goal is really being that trusted partner for our customers in their innovation journey. And as mentioned, you know, in the current macroeconomic conditions, really our customers really care about, but they have to be mindful of their bottom line as well. So they're really looking to leverage their existing investments in technology skill sets and leverage the most out of that. So the things like, for example, cost to operations and keeping those things consistent, cost on premises and the cloud are really important as customers are thinking about growth initiatives that they wanna implement. And of course, going to Azure public cloud is an important one as they think about flexibility, scale and modernizing their apps. >>And of course, as we look at the customer landscape, a lot of customers have an on on footprint, right? Whether that's for regulatory reasons for business or other technical reasons. So hybrid cloud has really become an ideal operating model for a lot of the customers that we see today. So really our partnership with Microsoft is critical because together, I really do see our US together simplifying that journey to the public cloud and making sure that it's not only easy but secure and really seamless. And really, I see our partnership as bringing the strengths of each company together, right? So Nutanix, of course, is known in the past versus hyperconverge infrastructure and really breaking down those silos between networking, compute, storage, and simplifying that infrastructure and operations. And our customers love that for the products and our, our NPS score of 90 over the last seven years. And if you look at Azure, at Microsoft, they're truly best in class cloud infrastructure with cutting edge services and innovation and really global scale. So when you think about those two combinations, right, that's really powerful for customers to be able to take their applications and whether they're on or even, and really combining all those various hybrid scenarios. And I think that's something that's pretty unique that we're to offer customers. >>Let's dig into that uniqueness of our, bringing you back into the conversation. You guys are meeting customers where they are helping them to accelerate their cloud transformations, delivering that consistency, you know, whether they're on-prem in Azure, in in the cloud. Talk to me about, from Microsoft's perspective about the significance of this announcement. I understand that the, the preview was oversubscribed, so the demand from your joint customers is clear. >>Thank you, Lisa. Michael, personally, I'm very proud and at the company we're very proud of the world that we did together with Nutanix. When you see two companies coming together with the mission of empowering customers and with the customer at the center and trying to solve real problems in this case, how to drive hybrid cloud and what is the best approach for them, opening more opportunities is, is, is extremely inspiring. And of course the welcome reception that we have from customer reiterates that we generating that value. Now, when you combine the power of Azure, that is very well known by resiliency, the scale, the performance, the elasticity, and the range of services with the reality of companies that might have hundreds or even thousands of different applications and data sources, those cloud journeys are very different for each and every one of them. So how do we combine our capabilities between Nutanix and Microsoft to be sure that that hybrid cloud journey that every one is gonna take can be simplified, you can take away the risk, the complexity on that transformation creates tones of value. >>And that's what a customers are asking us today. Either because they're trying to move and modernize their environment to Azure, or they're bringing their, you know, a enable ordinate services and cluster and data services on premise to a Nutanix platform, we together can combine and solve for that adding more value for any scenario that customers may have. And this is not once and done, this is not that we building, we forget it. It's a partnership that keeps evolving and also includes work that we do with our solution sales alliances that go to market seems to be sure that the customers have diverse service and support to make, to create the outcomes that they're asking us to deliver. >>Talk to me a little bit about the customers that were in the beta, as we mentioned, Alva, the, the preview was oversubscribed. So as I talked about earlier, the demand is clearly there. Talk to me about some of the customers in beta, you can even anonymize them or maybe talk about them by industry, but what, what were some of the, the key things they came to these two companies looking to, to solve, get to the cloud faster, be able to deliver the same sets of services with familiarity so that from a, they're able to do more with less? >>Maybe I could take that one out of our abital lines. It did. It means, but yeah, so like, like we, like you mentioned Lisa, you know, we've had a great preview oversubscribe, we had lots of, of cu not only customers, but also partners battle testing the solution. And you know, we're obviously very pleased now to have GN offered to everyone else, but one of our customers, Camper J was really looking forward to seeing how do they leverage Ncq and Azure to, like I mentioned, reduce that work workload, my, my migration and a risk for that and making sure, hey, some of the applications, maybe we are going to go and rewrite them, refactor them to take them natively to Azure. But there's others where we wanna lift and shift them to Azure. But like I mentioned, it's not just customers, right? We've been working with partners like PCs and Citrix where they share the same goal as Microsoft and Nutanix provides that superior customer experience where whatever the operating model might be for that customer. So they're going to be leveraging NC two on Azure to really provide those hybrid cloud experiences for their solutions on top of building on top of the, the work that we've done together. >>So this really kind of highlights the power of that Alva, the power of the ISV ecosystem and what you're all able to do together to really help customers achieve the outcomes that they individually need. >>A absolutely, look, I mean, we strongly believe that when you partner properly with an V you get to the, to the magical framework, one plus one equals three or more because you are combining superpowers and you are solving the problem on behalf of the customer so they can focus on their business. And this is a wonderful example, a very inspiring one where when you see the risk, the complexity that all these projects normally have, and Michael did a great job framing some of them, and the difference that they have now by having NC to on Azure, it's night and day. And we are fully committed to keep driving this innovation, this partnership on service of our customers and our partner ecosystem because at the same time, making our partners more successful, generating more value for customers and for all of us. >>Abar, can you comment a little bit on the go to market? Like how, how do your joint customers engage? What does that look like from their perspective? >>You know, when you think about the go to market, a lot of that is we have, you know, teams all over the world that will be aligned and working together in service of the customer. There is marketing and demand generation that will be done, that will be also work on enjoying opportunities that we will manage as well as a very tight connection on projects to be sure that the support experience for customers is well aligned. I don't wanna go into too much detail, but I will like to guarantee that our intent is not only to create an incredible technological experience, which the, the development teams are done, but also a great experience for the customers that are going through these projects, interacting with both teams that will work as one in service to empower the customer to achieve the outcomes that they need. >>Yeah, and just to comment maybe a little bit more on what Albar said, you know, it's not just about the product integration or it's really the full end to end experience for our customers. So when we embarked on this partnership with Microsoft, we really thought about what is the right product integration and with our engineering teams, but also how do we go and talk to customers with value prop together and all the way down through to support. So we actually been worked on how do we have a single joint support for our customers. So it doesn't really matter how the customer engages, they really see this as an end to end single solution across two companies. >>And that's so critical given just the, the natural challenges that that organizations face and the dynamics of the macro economic environment that we're living in. For them, for customers to be able to have that really seamless single point of interaction, they want that consistent experience on-prem to the cloud. But from an engagement perspective that you're, what sounds like what you're doing, Michael and Avaro is, is goes a long way to really giving customers a much more streamlined approach so that they can be laser focused on solving the business problems that they have, being competitive, getting products to market faster and all that good stuff. Michael, I wonder if you could comment on maybe the cultural alignment that Nutanix and Microsoft have. I know Microsoft's partner program has been around for decades and decades. Michael, what does that cultural alignment look like from, you know, the sales and marketing folks down to engineering, down to support? >>Yeah, I think honestly that was, that was something that kind of fit really well and we saw really a long alignment from day one. Of course, you know, Nutanix cares a lot about our customer experience, not just within the products, but again, through the entire life cycle to support and so forth. And Microsoft's no different, right? There's a huge emphasis on making sure that we provide the best customer experience and that we're also focusing on solving real world customer problems, right? And really focusing on the biggest problems that customers have. So really culturally it felt, it felt really natural. It felt like we were a single team, although it's, you know, two bar organizations working together, but I really felt like a single team working day in, day out on, on solving customer problems together. >>Yeah, >>Let, go ahead. >>No, I would say, well say Michael, the, the one element that we complement, the, I think the answer was super complete, is the, the fact that we work together from the outside in, look at it from the customer lenses is extremely powerful and inspire, as I mentioned, because that's what it's all about. And when you put the customer at the center, everything else falls in part on its its own place very, very quickly. And then it's hard work and innovation and, you know, doing what we do best, which is combining over superpowers in service of that customer. So that was the piece that, you know, I, I cannot emphasize enough how inspiring he's been. And again, the, the response for the previous is a great example of the opportunity that we have in there. >>And you've taken a lot of complexity out of the customer environment and I can imagine that the GA of Nutanix cloud clusters on Azure is gonna be a huge benefit for customers in every industry. Last question guys, I wanna get both your perspectives on Michael, we'll start with you and then Lvra will wrap with you. What's next? Obviously a lot of exciting stuff. What's next for the partnership of these, these two superheroes together, Michael? >>Yeah, so I think our goal doesn't change, right? I think our North star is to continue to make it easy for our customers to adopt, migrate and modernize their applications, leveraging Nutanix and Microsoft Azure, right? And I think NC two and Azure is just the start of that. So kind of maybe more immediate, like, you know, we mentioned obviously we have, we announced the ga that's J in Americas, but kind of the next more immediate step over the next few months look for us to continue expanding beyond Americas and making sure that we have support across all the global regions. And then beyond that, you know, again, as of our mentioned, it's working from kind of the s backwards. So we're, we're not, no, we're not waiting for ega. We're already working on the next set of solutions saying what are other problems that customer facing, especially across, they're running their workload cross on premises and public cloud, and what are the next set of solutions that we can deliver to the market to solve those real challenges for. >>It sounds really strongly that, that the partnership here, we're talking about Nutanix and Microsoft, it's really Nutanix and Microsoft with the customer at this center. I think you've both done a great job of articulating that there's laser focus there. Our last word to you, what excites you about the momentum that Microsoft and Nutanix have for the customers? >>Well, thank you Lisa. Michael, I will tell you, when you hear the customer feedback on the impact that you're having, that's the most inspiring part because you know you're generating value, you know, you're making a difference, especially in these complex times when the, the partnership gets tested where the, the right, you know, relationship gets built. We're being there for customers is extremely inspiring. Now, as Michael mentioned, this is all about what customer needs and how do we go even ahead of the game, being sure that we're ready not for what is the problem today, but the opportunities that we have tomorrow to keep working on this. We have a huge TA task ahead to be sure that we bring this value globally in the right way with the right quality. Every word, which is a, is never as small fist as you may imagine. You know, the, the world is a big place, but also the next wave of innovations that will be customer driven to keep and, and raise the bar on how, how much more value can we unlock and how much empowerment can we make for the customer to keep in innovating at their own pace, in their own terms. >>Absolutely that customer empowerment's key. Guys, it's been a pleasure talking to you about the announcement Nutanix cloud clusters on Azure of our Michael, thank you for your time, your inputs and helping us understand the impact that this powerhouse relationship is making. >>Thank you for having Lisa and thank you AAR for joining >>Me. Thank you Lisa, Michael, it's been fantastic. I looking forward and thank you to the audience for being here with us. Yeah, stay >>Tuned. Thanks to the audience. Exactly. And stay tuned. There's more to come. We have coming up next, a deeper conversation on the announcement with Dave and product execs from both Microsoft. You won't wanna.

Published Date : Oct 12 2022

SUMMARY :

So the experience that we talked about earlier, to extend hybrid cloud to Microsoft We hope you enjoy the program. Guys, it's great to have you on the program. what are you seeing in terms of the importance of the role of the the ISV ecosystem Well, first of all, thank you for the invitation and thank you Michael and the Nutanix team for the partnership. that we should expect this year and how do they align to Microsoft's vision in that frame, if I may, we are making this announcement today with Nutanix. our RDTs that the general availability of Nutanix Cloud clusters on Azure. So the things like, for example, cost to operations and keeping those And our customers love that for the products and our, our NPS score of 90 Let's dig into that uniqueness of our, bringing you back into the conversation. And of course the welcome reception that we have from customer reiterates that we generating that value. and modernize their environment to Azure, or they're bringing their, you know, Talk to me about some of the customers in beta, you can even anonymize them or maybe talk about them by industry, And you know, we're obviously very pleased now to have GN offered to everyone else, So this really kind of highlights the power of that Alva, the power of the ISV ecosystem and that they have now by having NC to on Azure, it's night and day. you know, teams all over the world that will be aligned and working together in service of Yeah, and just to comment maybe a little bit more on what Albar said, you know, problems that they have, being competitive, getting products to market faster and all that good stuff. It felt like we were a single team, although it's, you know, two bar organizations working together, And when you put the customer we'll start with you and then Lvra will wrap with you. So kind of maybe more immediate, like, you know, we mentioned obviously we have, what excites you about the momentum that Microsoft and Nutanix have for the customers? task ahead to be sure that we bring this value globally in the right way with the right quality. Guys, it's been a pleasure talking to you about the I looking forward and thank you to the audience for being Thanks to the audience.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MichaelPERSON

0.99+

Eric LockardPERSON

0.99+

Lisa MartinPERSON

0.99+

Michael Les ChicaPERSON

0.99+

Alva SalisePERSON

0.99+

MicrosoftORGANIZATION

0.99+

Michael Les ChikaPERSON

0.99+

LisaPERSON

0.99+

NutanixORGANIZATION

0.99+

Thomas CornellPERSON

0.99+

DavePERSON

0.99+

Alro SalisePERSON

0.99+

two companiesQUANTITY

0.99+

Michal LesiczkaPERSON

0.99+

first segmentQUANTITY

0.99+

AmericasLOCATION

0.99+

Indu CareyPERSON

0.99+

BostonLOCATION

0.99+

hundredsQUANTITY

0.99+

late 2009DATE

0.99+

threeQUANTITY

0.99+

both companiesQUANTITY

0.99+

AvaPERSON

0.99+

Dave AntePERSON

0.99+

NNCORGANIZATION

0.99+

NCIORGANIZATION

0.99+

second segmentQUANTITY

0.99+

todayDATE

0.99+

two superheroesQUANTITY

0.99+

tomorrowDATE

0.99+

bothQUANTITY

0.99+

Snehal Antani, Horizon3.ai Market Deepdive


 

foreign welcome back everyone to our special presentation here at thecube with Horizon 3.a I'm John Furrier host thecube here in Palo Alto back it's niho and Tony CEO and co-founder of horizon 3 for deep dive on going under the hood around the big news and also the platform autonomous pen testing changing the game and security great to see you welcome back thank you John I love what you guys have been doing with the cube huge fan been here a bunch of times and yeah looking forward to the conversation let's get into it all right so what what's the market look like and how do you see it evolving we're in a down Market relative to startups some say our data we're reporting on siliconangle in the cube that yeah there might be a bit of downturn in the economy with inflation but the tech Market is booming because the hyperscalers are still pumping out massive scale and still innovating so so you know for the first time in history this is a recession or downturn where there's now Cloud scale players that are an economic engine what's your view on this where's the market heading relative to the downturn and how are you guys navigating that so um I think about it one the there's a lot of belief out there that we're going to hit a downturn and we started to see that we started to see deals get longer and longer to close back in May across the board in the industry we continue to see deals get at least backloaded in the quarter as people understand their procurement how much money they really have to spend what their earnings are going to be so we're seeing this across the board one is quarters becoming lumpier for tech companies and we think that that's going to become kind of the norm over the next over the next year but what's interesting in our space of security testing is a very basic supply and demand problem the demand for security testing has skyrocketed when I was a CIO eight years ago I only had to worry about my on-prem attack surface my perimeter and Insider threat those are my primary threat vectors now if I was a CIO I have to include multiple clouds all of the data in my SAS offerings my Salesforce account and so on as well as work from home threat vectors and other pieces and I've got Regulatory Compliance in Europe in Asia in in the U.S tons of demand for testing and there's just not enough Supply there's only 5 000 certified pen testers in the United States so I think for starters you have a fundamental supply and demand problem that plays to our strength because we're able to bring a tremendous amount of pen testing supply to the table but now let's flip to if you are the CEO of a large security company or whether it's a Consulting shop or so on you've got a whole bunch of deferred revenue in your business model around security testing services and what we've done in our past in previous companies I worked at is if we didn't think we were going to make the money the quarter with product Revenue we would start to unlock some of that deferred Services Revenue to make the number to hit what we expected Wall Street to hit what Wall Street expected of us in testing that's not possible because there's not enough Supply except us so if I'm the CEO of an mssp or a large security company and I need I see a huge backlog of security testing revenue on the table the easy button to convert that to recognized revenue is Horizon 3. and when I think about the next six months and the amount of Revenue misses we're going to see in security shops especially those that can't fulfill their orders I think there's a ripe opportunity for us to win yeah one of the few opportunities where on any Market you win because the forces will drive your flywheel that's exactly right very basic supply and demand forces that are only increasing with pressure and there's no way it takes 10 years just to build a master hacker just it's a very hard complex space we become the easy button to address that supply problem yeah and this and the autonomous aspect makes appsec reviews as new things get pushed with Cloud native developers they're shifting left but still the security policies need to stay Pace as these new vectors threat vectors appear yeah I mean because that's what's happening a new new thing makes a vector possible that's exactly right I think there's two aspects one is the as you in increase change in your environment you need to increase testing they are absolutely correlated the second thing though is you know for 20 years we focused on remote code execution or rces as an industry what was the latest rce that gave an attacker access to my environment but if you look over the past few years that entire mindset has shifted credentials are the new code execution what I mean by that is if I have a large organization with a hundred a thousand ten thousand employees all it takes is one of them to have a password I can crack in credential spray and gain access to as an attacker and once I've gained access to a single user I'm going to systematically snowball that into something of consequence and so I think that the attackers have shifted away from looking for code execution and looked more towards harvesting credentials and cascading credentials from a regular domain user into an admin this brings up the conversation I would like to do it more Deep dive now shift into more of like the real kind of landscape of the market and your positioning and value proposition in that and that is managed services are becoming really popular as we move into this next next wave of super cloud and multi-cloud and hybrid Cloud because I mean multi-cloud and hybrid hybrid than multi-cloud sounds good on paper but the security Ops become big and one of the things we're reporting with here on the cube and siliconangle the past six months is devops has made the developer the IT team because they've essentially run it now in CI CD pipeline as they say that means it's replaced by data Ops or AI Ops or security Ops and data and security kind of go hand in hand so I can see that playing out do you believe that to be true that that's kind of the new operational kind of beach head that's critical and if so secure if data is part of security that makes security the new it yeah I I think that if you think about organizations hell even for Horizon 3 right now I don't need to hire a CIO I'll have a CSO and that CSO will own it and governance risk and compliance and security operations because at the end of the day the most pressing question for me to answer as a CEO is my security posture IIT is a supporting function of that security posture and we see that at say or a growth stage company like Horizon 3 but when I thought about my time at GE Capital we really shifted to this mindset of security by Design architecture as code and it was very much security driven conversation and I think that is the norm going forward and how do you view the idea that you have to enable a managed service provider with security also managing comp and which then manages the company to enable them to have agile security um security is code because what you're getting at is this autonomous layer that's going to be automated away to make the next talented layer whether it's coder or architect scale so the question is what is abstracted away at at automation seems to be the conversation that's coming out of this big cloud native or super cloud next wave of cloud scale I think there's uh there's two Dimensions to that and honestly I think the more interesting Dimension is not the technical side of it but rather think of the Equifax hack a bunch of years ago had Equifax used a managed security services provider would the CEO have been fired after the breach and the answer is probably not I think the CEO would have transferred enough reputational risk in operational risk to the third party mssp to save his job from being you know from him being fired you can look at that across the board I think that if if I were a CIO again I would be hard-pressed to build my own internal security function because I'm accepting that risk as an executive and we saw what just happened at Uber there's a ton of risk coming with that with the with accepting that as a security person so I think in the future the role of the mssp becomes more significant as a mechanism for transferring enough reputational and operational and legal risk to a third party so that you as the Core Company are able to protect yourself and your people now then what you think is a super cloud printables and Concepts being applied at mssp scale and I think that becomes really interesting talk about the talent opportunity because I think the managed service providers point to markets that are growing and changing also having managed service means that the customers can't always hire Talent hence they go to a Channel or a partner this seems to be a key part of the growth in your area talk about the talent aspect of it yeah um think back to what we saw in Cloud so as as Cloud picked up we saw IBM HP other Hardware companies sell more servers but to fewer customers Amazon Google and others right and so I think something similar is going to happen in the security space where I think you're going to see security tools providers selling more volume but to fewer customers that are just really big mssps so that is the the path forward and I think that the underlying Talent issue gives us economies at scale and that's what we saw this with Cloud we're going to see the same thing in the mssp space I've got a density of Talent Plus a density of automation plus a density of of relationships and ecosystem that give mssps a huge economies of scale advantage over everybody else I mean I want to get into the mssp business sounds like I make a lot of money yeah definitely it's profitable no doubt about it like that I got to ask more on the more of the burden side of it because if you're a partner I don't need another training class I don't need another tool I don't need someone saying this is the highest margin product I need to actually downsize my tools so right now there's hundreds of tools that mssps have all the time dealing with and does the customer so tools platforms we've kind of teased this out in previous conversations together but more more relevant to the mssp is what they do to the customers so talk about this uh burden of tools and the socks out there in the in in the landscape how do you how do you view that and what's the conversation like on average an organization has 130 different cyber security tools installed none of those tools were designed to work together none of those tools are from the same vendor and in fact oftentimes they're from vendors that have competing products and so what we don't have and they're still getting breached in the industry we don't have a tools problem we have an Effectiveness problem we have to reduce the number of tools we have get more out of out of the the effectiveness out of the existing infrastructure build muscle memory you know how to detect and respond to a breach and continuously verify that posture I think that's what the the most successful security organizations have mastered the fundamentals and they mastered that by making sure they were effective in detection and response not mastering it by buying the next shiny AI tool on the defensive side okay so you mentioned supply and demand early since you're brought up economics we'll get into the economic equations here when you have great profits that's going to attract more entrance into the marketplace so as more mssps enter the market you're going to start to see a little bit of competition maybe some fud maybe some price competitive price penetration all kinds of different Tactics get out go on there um how does that impact you because now does that impact your price or are you now part of them just competing on their own value what's that mean for the channel as more entrants come in hey you know I can compete against that other one does that create conflict is that an opportunity does are you neutral on that what's the position it's a great question actually I think the way it plays out is one we are neutral two the mssp has to stand on their own with their own unique value proposition otherwise they're going to become commoditized we saw this in the early cloud provider days the cloud providers that were just basically wrapping existing Hardware with with a race to the bottom pricing model didn't survive those that use the the cloud infrastructure as a starting point to build higher value capabilities they're the ones that have succeeded to this day the same Mo I think will occur in mssps which is there's a base level of capability that they've got to be able to deliver and it is the burden of the mssp to innovate effectively to elevate their value problem it's interesting Dynamic and I brought it up mainly because if you believe that this is going to be a growing New Market price erosion is more in mature markets so it's interesting to see that Dynamic come up and we'll see how that handles on the on the economics and just the macro side of it getting more into kind of like the next gen autonomous pen testing is a leading indicator that a new kind of security assessment is here um if I said that to you how do you respond to that what is this new security assessment mean what does that mean for the customer and to the partner and that that relationship down that whole chain yeah um back to I'm wearing a CIO hat right now don't tell me we're secure in PowerPoint show me we're secure Today Show me where we're secure tomorrow and then show me we're secure again next week because that's what matters to me if you can show me we're secure I can understand the risk I'm accepting and articulate it up to my board to my Regulators up until now we've had a PowerPoint tell me where secure culture and security and I just don't think that's going to last all that much longer so I think the future of security testing and assessment is this shift from a PowerPoint report to truly showing me that my I'm secure enough you guys auto-generate those statements now you mentioned that earlier that's exactly right because the other part is you know the classic way to do security reports was garbage in garbage out you had a human kind of theoretically fill out a spreadsheet that magically came up with the risk score or security posture that doesn't work that's a check the box mentality what you want to have is an accurate High Fidelity understanding of your blind spots your threat vectors what data is at risk what credentials are at risk you want to look at those results over time how quickly did I find problems how quickly did I fix them how often did they reoccur and that is how you get to a show me where secure culture whether I'm a company or I'm a channel partner working with Horizon 3.ai I have to put my name on the line and say Here's a service level agreement I'm going to stand behind there's levels of compliance you mentioned that earlier how do you guys help that area because that becomes I call the you know below the line I got to do it anyway usually it's you know they grind out the work but it has to be fundamental because if the threats vectors are increasing and you're handling it like you say you are the way it is real time today tomorrow the next day you got to have that other stuff flow into it can you describe how that works under the hood yeah there's there's two parts to it the first part is that attackers don't have to hack in with zero days they log in with credentials that they found but often what attackers are doing is chaining together different types of problems so if you have 10 different tactics you can chain those together a number of different ways it's not just 10 to the 10th it's it's actually because you don't you don't have to use all the tactics at once this is a very large number of combinations that an attacker can apply upon you is what it comes down to and so at the base level what you want to have is what are the the primary tactics that are being used and those tactics are always being added to and evolving what are the primary outcomes that an attacker is trying to achieve steal your data disrupt your systems become a domain admin and borrow and now what you have is it actually looks more like a chess game algorithm than it does any sort of hard-coded automation or anything else which is based on the pieces on the board the the it infrastructure I've discovered what is the next best action to become a domain admin or steal your data and that's the underlying innovation in IP we've created which is next best action Knowledge Graph analytics and adaptiveness to figure out how to combine different problems together to achieve an objective that an attacker cares about so the 3D chess players out there I'd say that's more like 3D chess are the practitioners implementing it but when I think about compliance managers I don't see 3D chess players I see back office accountants in my mind like okay are they actually even understand what comes out of that so how do you handle the compliance side do you guys just check the boxes there is it not part of it is it yeah I I know I don't Envision the compliance guys on the front lines identifying vectors do you know what it doesn't even know what it means yeah it's a great question when you think about uh the market segmentation I think there are we've seen are three basic types of users you've got the the really mature high frequency security testing purple team type folks and for them we are the the force multiplier for them to secure the environment you then have the middle group where the IT person and the security person are the same individual they are barely Treading Water they don't know what their attack surface is and they don't know what to focus on we end up that's actually where we started with the barely Treading Water Persona and that's why we had a product that helped those Network Engineers become superheroes the third segment are those that view security and compliance as synonymous and they don't really care about continuous they care about running and checking the box for PCI and forever else and those customers while they use us they are better served by our partner ecosystem and that's really so the the first two categories tend to use us directly self-service pen tests as often as they want that compliance-minded folks end up going through our partners because they're better served there steel great to have you on thanks for this deep dive on um under the hood section of the interview appreciate it and I think autonomous is is an indicator Beyond pen testing pen testing has become like okay penetration security but this is not going away where do you see this evolving what's next what's next for Horizon take a minute to give a plug for what's going on with copy how do you see it I know you got good margins you're raising Capital always raising money you're not yet public um looking good right now as they say yeah yeah well I think the first thing is our company strategy is in three chapters chapter one is become the best security testing platform in the industry period that's it and be very good at helping you find and fix your security blind spots that's chapter one we've been crushing it there with great customer attraction great partner traction chapter two which we've started to enter is look at our results over time to help that that GRC officer or auditor accurately assess the security posture of an organization and we're going to enter that chapter about this time next year longer term though the big Vision I have is how do I use offense to inform defense so for me chapter three is how do I get away from just security testing towards autonomous security overall where you can use our security testing platform to identify ways to attack that informs defensive tools exactly where to focus how to adjust and so on and now you've got offset and integrated learning Loop between attack and defense that's the future never been done before Master the art of attack to become a better Defender is the bigger vision of the company love the new paradigm security congratulations been following you guys we will continue to follow you thanks for coming on the Special Report congratulations on the new Market expansion International going indirect that a big way congratulations thank you John appreciate it okay this is a special presentation with the cube and Horizon 3.ai I'm John Furrier your host thanks for watching thank you

Published Date : Oct 11 2022

SUMMARY :

the game and security great to see you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
10 yearsQUANTITY

0.99+

Snehal AntaniPERSON

0.99+

EquifaxORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

EuropeLOCATION

0.99+

JohnPERSON

0.99+

Palo AltoLOCATION

0.99+

GE CapitalORGANIZATION

0.99+

UberORGANIZATION

0.99+

next weekDATE

0.99+

TonyPERSON

0.99+

PowerPointTITLE

0.99+

two partsQUANTITY

0.99+

10 different tacticsQUANTITY

0.99+

tomorrowDATE

0.99+

U.SLOCATION

0.99+

first partQUANTITY

0.99+

United StatesLOCATION

0.99+

John FurrierPERSON

0.99+

AmazonORGANIZATION

0.99+

GRCORGANIZATION

0.99+

third segmentQUANTITY

0.99+

IBMORGANIZATION

0.99+

two aspectsQUANTITY

0.99+

10thQUANTITY

0.99+

AsiaLOCATION

0.99+

first two categoriesQUANTITY

0.99+

three basic typesQUANTITY

0.99+

MayDATE

0.99+

10QUANTITY

0.98+

first timeQUANTITY

0.98+

todayDATE

0.98+

second thingQUANTITY

0.98+

CloudTITLE

0.97+

eight years agoDATE

0.97+

Horizon 3TITLE

0.96+

hundreds of toolsQUANTITY

0.95+

next yearDATE

0.95+

single userQUANTITY

0.95+

horizonORGANIZATION

0.94+

Horizon 3.aiTITLE

0.93+

oneQUANTITY

0.93+

past six monthsDATE

0.93+

hundred a thousand ten thousand employeesQUANTITY

0.92+

5 000 certified pen testersQUANTITY

0.92+

zero daysQUANTITY

0.92+

130 different cyber security toolsQUANTITY

0.91+

next dayDATE

0.9+

waveEVENT

0.89+

Horizon 3.aORGANIZATION

0.88+

threeQUANTITY

0.87+

next six monthsDATE

0.87+

SASORGANIZATION

0.87+

chapter threeOTHER

0.86+

Horizon 3ORGANIZATION

0.85+

lot of moneyQUANTITY

0.82+

first thingQUANTITY

0.77+

CEOPERSON

0.74+

nihoPERSON

0.72+

chapter oneOTHER

0.71+

of years agoDATE

0.7+

chapter twoOTHER

0.7+

two DimensionsQUANTITY

0.7+

past few yearsDATE

0.7+

StreetLOCATION

0.7+

HorizonORGANIZATION

0.7+

3TITLE

0.65+

SalesforceTITLE

0.64+

Wall StreetORGANIZATION

0.63+

twoQUANTITY

0.61+

GoogleORGANIZATION

0.61+

HPORGANIZATION

0.61+

3.aiTITLE

0.6+

CSOTITLE

0.59+

usersQUANTITY

0.5+

WallORGANIZATION

0.5+

TodayDATE

0.47+

Nayaki Nayyar and Nick Warner | Ivanti & SentinelOne Partner to Revolutionize Patch Management


 

hybrid work is the new reality according to the most recent survey data from enterprise technology research cios expect that 65 of their employees will work either as fully remote or in a hybrid model splitting time between remote and in office remote of course can be anywhere it could be home it could be at the beach overseas literally anywhere there's internet so it's no surprise that these same technology executives cite security as their number one priority well ahead of other critical technology initiatives including collaboration software cloud computing and analytics which round out the top four in the etr survey now as we've reported securing endpoints was important prior to the pandemic but the explosion in the past two plus years of remote work and corollary device usage has made the problem even more acute and let's face it managing sprawling i.t assets has always been a pain patch management for example has been a nagging concern for practitioners and with ransomware attacks on the rise it's critical that security teams harden it assets throughout their life cycle staying current and constantly staying on top of vulnerabilities within the threat surface welcome to this special program on the cube enable and secure the everywhere workplace brought to you by ivanti in this program we highlight key partnerships between avanti and its ecosystem to address critical problems faced by technology and security teams in our first segment we explore a collaboration between avanti and sentinel one where the two companies are teaming to simplify patch management my name is dave vellante and i'll be your host today and with me are nayaki nayar who's the president and chief product officer at avanti and nick warner president and security of the security group at sentinel one welcome naki and nick and hackie good to have you back in the cube great to see you guys thank you thank you dave uh really good to be back on cube uh i'm a veteran of cube so thank you for having us and um look forward to a great discussion today yeah you better thanks okay hey good nick nick good to have you on as well what do we need to know about this partnership please so uh if you look at uh we are super excited about this partnership nick thank you for joining us on this session today um when you look at ivanti ivanti has been a leader in two big segments uh we are a leader in unified endpoint management with the acquisition of mobileye now we have a holistic end-to-end management of all devices be it windows linux mac ios you name it right so we have that seamless single pane of glass to manage all devices but in addition to that we are also a leader in risk-based patch management um dave that's what we are very excited about this partnership with the with central one where now we can combine the strength we have in the risk-based patch management with central one's xdr platform and truly help address what i call the need of the hour with our customers for them to be able to detect uh vulnerabilities and being able to remediate them proactively remediate them right so that's what we are super excited about this partnership and nick would love to hand it over to you to talk about uh the partnership and the journey ahead of us thanks and you know from center one's perspective we see autonomous vulnerability assessment and remediation as really necessary given the evolution uh in the sophistication the volume and the ferocity of threats out there and what's really key is being able to remediate risks and machine speed and also identify vulnerability exposure in real time and you know if you look traditionally at uh vulnerability scanning and patch management they've really always been two separate things and when things are separate they take time between the two coordination communication what we're looking to do with our singularity xdr platform is holistically deliver one unified solution that can identify threats identify vulnerabilities and automatically and autonomously leverage patch management to much better protect our customers so maybe maybe that's why patch management is such a challenge for many organizations because as you described nick it's sort of a siloed from security and those worlds are coming together but maybe you guys could address the specific problems that you're trying to solve with this collaboration yeah so if you look at uh just in a holistic level uh dave today cyber crime is at catastrophic heights right and this is not just a cio or a cso issue this is a board issue every organization every enterprise is addressing this at the board level and when you double click on it one of the challenges that we have heard from our customers over and over again is the complexity and the manual processes that are in place for remediation or patching all their operating systems their applications their third party apps and that is where it's very very time consuming very complex very cumbersome and the question is how do we help them automate it right how do we help them remove those manual processes and autonomously intermediate right so which is where this partnership between ivanti and central one helps organizations to bring this autonomous nature to bring those proactive predictive capabilities to detect an issue prioritize that issue based on risk-based prioritization is what we call it and autonomously remediate that issue right so that's where uh this partnership really really uh helps our customers address the the top concerns they have in cyber crime or cyber security got it so prioritization automation nick maybe you could address what are the keys i mean you got to map vulnerabilities to software updates how do you make sure that your the patches there's not a big lag between your patch and and the known vulnerabilities and you've got this diverse set of you know i.t portfolio assets how do you manage all that it's a great question and i and i think really the number one uh issue around this topic is that security teams and it teams are facing a really daunting task of identifying all the time every day all the vulnerabilities in their ecosystem and the biggest problem with this is how do they get context and priority and i think what people have come to realize through the years of dealing with with patch management uh and vulnerability scanning is that patching without the context of what the possible impact or priority of that risk is really comes down to busy work and i think what's so important in a totally interconnected world with attacks happening at machine speed is being able to take that precious asset that we call time and make sure you properly prioritize that how we're doing it from sentinel one singularity xdr perspective is by leveraging autonomous threat information and being able to layer that against vulnerability information to properly view through that lens the highest priority threats and vulnerabilities that you need to patch and then using our single agent technology be able to autonomously remediate and patch those vulnerabilities whether or not it's on a mac a pc server a cloud workload and the beauty of our solution is it gives you proper clarity so you can see the impact of vulnerabilities each and every day in your environment and know that you're doing the right thing in the right order got it okay so the context gives you the risks profile allows you to prioritize and then of course you can you know remediate what else should we know about this this joint solution uh in terms of you know what it is how i engage any other detail on how it addresses the the problem specifically yeah so it's all about race against the time um uh dave when it's how we help our customers uh detect the vulnerability prioritize it and remediate it the attackers are able to weaponize those vulnerabilities and and have an attack right so it's really it's how we help our customers be a lot more proactive and predictive address those vulnerabilities versus um before the attackers really get access to it right so that's where our joint solution in fact i always say whatever edr with this edr or mdr or xdr the r portion of that r is very one he comes in our neurons for patch management or what we call neurons but risk based patch management combined with um central ones xdr is where we truly uh bring the combined solutions to to to life right so the r is where ivanti really plays a big part in uh in the joint solution yeah absolutely the response i mean people i think all agree you're going to get infiltrated that's how you respond to it you know the thing about this topic is when you make a business case a lot of times you'll go to the cfo and say hey if we don't do this we're going to be in big trouble and so it's this fear factor and i get that it's super important but but are there other measurements of success that that you you can share in other words how are customers going to determine the value of this joint solution so it's a mean time to repair let me go nick and then i'm sure you have your uh metrics and how you're measuring the success it's about how we can detect an issue and repair that issue it's reducing that mean time to repair as much as possible and making it as real-time as possible for our customers right that's where the true outcome through success and the metric that customers can track measure and continuously improve on nick you want to add to that for sure yeah you know you make some great great points niaki and what what i would add is um what sentinel one singularity platform is known for is automated and autonomous detection prevention and response and remediation across threats and if you look traditionally at patch management or vulnerability assessment they're typically deployed and run in point-of-time solutions what i mean by that is that they're scans and re-scans the way that advanced edr solutions and xdr solutions such as single one singularity platform work is we're constantly recording everything that's happening on all of your systems in real time and so what we do is literally eliminate the window of opportunity between a patch being uh needed a vulnerability being discovered and you knowing that you have that need for that vulnerability to be patched in your environment you don't have to wait for that 12 or 24-hour window to scan for vulnerabilities you will immediately know it in your network you'll also know the security implications of that vulnerability so you know when and how to prioritize and then furthermore you can take autonomous hatching measures against that so at the end of the day the name of the game in security is time and it's about reducing that window of opportunity for the adversaries for the threat actors and this is a epic leap forward in in doing that for our customers and that capability nick is a function of your powerful agent or is it architecture where's that come from that's a great question it's it's a combination of a couple of things the first is our agent technology which performs constant monitoring on every system every behavior every process running on all your systems live and in real time so this is not a batch process that that kicks up once a day this is always running in the background so the moment a new application is installed the moment a new application version is deployed we know about it we record it instantaneously so if you think about that and layer against getting best in class vulnerability information from a partner like avanti and then also being able to deploy patch management against that you can start to see how you're applying that in real time in your environment and the last thing i i'd like to add is because we're watching everything and then layering it against thread intel and context using our proprietary machine learning technology that that idea of being able to prioritize and escalate is critical because if you talk to security providers there's a couple different uh challenges that they're facing and i would say the top two are alert fatigue and then also human human power limitations and so no security team has enough people on their team and no security teams have an absence of alerts and so the fact that we can prioritize alerts surface the ones that are the most important give context to that and also save them precious hours of their personnel's time by being able to do this autonomously and automatically we're really killing two birds with one stone that's great there's the business case right there you just laid out some other things that we can measure right it all comes back to the data doesn't it we got to go but i'll give you the last word yeah i mean we are super excited about this partnership uh like nick said uh we believe in how we can help our customers discover all the assets we have they have um manage those assets but a big chunk of it is how we help them secure it right secure uh their devices the applications the data that's on those devices the end points and being able to provide an experience a service experience at the end of the day so that end users don't have to worry about securing you don't have to think about security it should be embedded it should be autonomous and it should be contactually personalized right so uh that's the journey we are on and uh thank you nick for this great partnership and look forward to a great journey ahead of us thank you yeah thanks to both of you nick appreciate it okay keep it right there after this quick break we're gonna be back to look at how ivanti is working with other partners to simplify and harden the anywhere workplace you're watching the cube your leader in enterprise and emerging tech coverage [Music] you

Published Date : Sep 16 2022

SUMMARY :

got it okay so the context gives you the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
12QUANTITY

0.99+

65QUANTITY

0.99+

Nick WarnerPERSON

0.99+

avantiORGANIZATION

0.99+

first segmentQUANTITY

0.99+

bothQUANTITY

0.99+

two companiesQUANTITY

0.99+

todayDATE

0.99+

24-hourQUANTITY

0.99+

ivantiORGANIZATION

0.99+

Nayaki NayyarPERSON

0.99+

davePERSON

0.98+

dave vellantePERSON

0.98+

central oneORGANIZATION

0.98+

two separate thingsQUANTITY

0.97+

nayaki nayarPERSON

0.97+

nickPERSON

0.96+

once a dayQUANTITY

0.96+

two birdsQUANTITY

0.96+

firstQUANTITY

0.95+

nakiPERSON

0.95+

top twoQUANTITY

0.94+

one stoneQUANTITY

0.94+

central oneORGANIZATION

0.94+

pandemicEVENT

0.93+

hackiePERSON

0.92+

oneQUANTITY

0.89+

top fourQUANTITY

0.88+

niakiPERSON

0.88+

macCOMMERCIAL_ITEM

0.87+

one of the challengesQUANTITY

0.87+

eachQUANTITY

0.87+

number oneQUANTITY

0.86+

two big segmentsQUANTITY

0.85+

two coordinationQUANTITY

0.8+

singleQUANTITY

0.79+

IvantiORGANIZATION

0.78+

sentinelORGANIZATION

0.76+

every systemQUANTITY

0.74+

every enterpriseQUANTITY

0.73+

everyQUANTITY

0.71+

single paneQUANTITY

0.69+

past two plus yearsDATE

0.68+

lotQUANTITY

0.66+

windows linux mac iosTITLE

0.65+

sentinel oneORGANIZATION

0.63+

SentinelOneORGANIZATION

0.62+

ivantiPERSON

0.6+

every processQUANTITY

0.59+

every organizationQUANTITY

0.56+

timesQUANTITY

0.56+

xdrTITLE

0.54+

theirQUANTITY

0.54+

chiefPERSON

0.53+

coupleQUANTITY

0.47+

nickORGANIZATION

0.39+

Jason Klein, Alteryx | Democratizing Analytics Across the Enterprise


 

>> It's no surprise that 73% of organizations indicate analytics spend will outpace other software investments in the next 12 to 18 months. After all, as we know, data is changing the world, and the world is changing with it. But is everyone's spending resulting in the same ROI? This is Lisa Martin. Welcome to the Cube's presentation of "Democratizing Analytics Across the Enterprise," made possible by Alteryx. An Alteryx-commissioned IDC InfoBrief entitled, Four Ways to Unlock Transformative Business Outcomes From Analytics Investments, found that 93% of organizations are not utilizing the analytics skills of their employees, which is creating a widening analytics gap. On this special Cube presentation, Jason Klein, Product Marketing Director of Alteryx, will join me to share key findings from the new Alteryx-commissioned IDC Brief, and uncover how enterprises can derive more value from their data. In our second segment, we'll hear from Alan Jacobson, Chief Data and Analytics Officer at Alteryx. He's going to discuss how organizations across all industries can accelerate their analytic maturity to drive transformational business outcomes. And then, in our final segment, Paula Hansen, who is the President and Chief Revenue Officer of Alteryx, and Jacqui Van der Leij-Greyling, who is the Global Head of Tax Technology at eBay, they'll join me. They're going to share how Alteryx is helping the global eCommerce company innovate with analytics. Let's get the show started. (upbeat music) Jason Klein joins me next, Product Marketing Director at Alteryx. Jason, welcome to the program. >> Hello, nice to be here. >> Excited to talk with you. What can you tell me about the new Alteryx IDC research which spoke with about 1500 leaders? What nuggets were in there? >> Well, as the business landscape changes over the next 12 to 18 months, we're going to see that analytics is going to be a key component to navigating this change. 73% of the orgs indicated that analytics spend will outpace other software investments. But just putting more money towards technology, it isn't going to solve everything. And this is why everyone's spending is resulting in different ROIs. And one of the reasons for this gap is because 93% of organizations, they're still not fully using the analytics skills of their employees. And this widening analytics gap, it's threatening operational progress by wasting workers' time, harming business productivity, and introducing costly errors. So in this research, we developed a framework of enterprise analytics proficiency that helps organizations reap greater benefits from their investments. And we based this framework on the behaviors of organizations that saw big improvements across financial, customer, and employee metrics. And we're able to focus on the behaviors driving higher ROI. >> So the InfoBrief also revealed that nearly all organizations are planning to increase their analytics spend. And it looks like from the InfoBrief that nearly three quarters plan on spending more on analytics than any other software. And can you unpack what's driving this demand, this need for analytics across organizations? >> Sure, well, first, there's more data than ever before. The data's changing the world, and the world is changing data. Enterprises across the world, they're accelerating digital transformation to capitalize on new opportunities, to grow revenue, to increase margins, and to improve customer experiences. And analytics, along with automation and AI, is what's making digital transformation possible. They're providing the fuel to new digitally enabled lines of business. >> Yet not all analytics spending is resulting in the same ROI. So, what are some of the discrepancies that the InfoBrief uncovered with respect to ROI? >> Well, our research with IDC revealed significant roadblocks across people, processes and technologies, all preventing companies from reaping greater benefits from their investments. So on the people side, for example, only one out of five organizations reported a commensurate investment in upskilling for analytics and data literacy as compared to the technology itself. And next, while data is everywhere, most organizations, 63% in our survey, are still not using the full breadth of data types available. Data has never been this prolific. It's going to continue to grow, and orgs should be using it to their advantage. And lastly, organizations, they need to provide the right analytic tools to help everyone unlock the power of data, yet instead, they're relying on outdated spreadsheet technology. Nine out of 10 survey respondents said that less than half of their knowledge workers are active users of analytics software. True analytics transformation can't happen for an organization in a few select pockets or silos. We believe everyone, regardless of skill level, should be able to participate in the data and analytics process and drive value. >> So if I look at this holistically then, what would you say organizations need to do to make sure that they're really deriving value from their investments in analytics? >> Yeah, sure. So overall, the enterprises that derive more value >> from their data and analytics and achieved more ROI, they invested more aggressively in the four dimensions of enterprise analytics proficiency. So they've invested in the comprehensiveness of analytics, across all data sources and data types, meaning they're applying analytics to everything. They've invested in the flexibility of analytics across deployment scenarios and departments, meaning they're putting analytics everywhere. They've invested in the ubiquity of analytics and insights for every skill level, meaning they're making analytics for everyone. And they've invested in the usability of analytics software, meaning they're prioritizing easy technology to accelerate analytics democratization. >> So are there any specific areas that the survey uncovered where most companies are falling short? Like any black holes organizations need to be aware of from the outset? >> It did. You need to build a data-centric culture, and this begins with people. But we found that the people aspect of analytics is most heavily skewed towards low proficiency. In order to maximize ROI, organizations need to make sure everyone has access to the data and analytics technology they need. Organizations that align their analytics investments with upskilling enjoy higher ROI than orgs that are less aligned. For example, among the high ROI achievers in our survey, 78% had good or great alignment between analytics investments and workforce upskilling, compared to only 64% among those without positive ROI. And as more enterprises adopt cloud data warehouses or cloud data lakes to manage increasingly massive data sets, analytics needs to exist everywhere, especially for those cloud environments. And what we found is organizations that use more data types and more data sources generate higher ROI from their analytics investments. Among those with improved customer metrics, 90% were good or great at utilizing all data sources compared to only 67% among the ROI laggards. >> So interesting that you mentioned people. I'm glad that you mentioned people. Data scientists, everybody talks about data scientists. They're in high demand. We know that, but there aren't enough to meet the needs of all enterprises. So given that discrepancy, how can organizations fill the gap and really maximize the investments that they're making in analytics? >> Right. So analytics democratization, it's no longer optional, but it doesn't have to be complex. So we at Alteryx, we're democratizing analytics by empowering every organization to upskill every worker into a data worker. And the data from this survey shows this is the optimal approach. Organizations with a higher percentage of knowledge workers who are actively using analytics software enjoy higher returns from their analytics investment than orgs still stuck on spreadsheets. Among those with improved financial metrics, AKA the high ROI achievers, nearly 70% say that at least a quarter of their knowledge workers are using analytics software other than spreadsheets compared to only 56% in the low ROI group. Also, among the high ROI performers, 63% said data and analytic workers collaborate well or extremely well, compared to only 51% in the low ROI group. The data from the survey shows that supporting more business domains with analytics and providing cross-functional analytics correlates with higher ROI. So to maximize ROI, orgs should be transitioning workers from spreadsheets to analytics software. They should be letting them collaborate effectively, and letting them do so cross-functionally >> Yeah, that cross-functional collaboration is essential for anyone in any organization and in any discipline. Another key thing that jumped out from the survey was around shadow IT. The business side is using more data science tools than the IT side, and is expected to spend more on analytics than other IT. What risks does this present to the overall organization? If IT and the lines of business guys and gals aren't really aligned? >> Well, there needs to be better collaboration and alignment between IT and the line of business. The data from the survey, however, shows that business managers, they're expected to spend more on analytics and use more analytics tools than IT is aware of. And this is because the lines of business have recognized the value of analytics and plan to invest accordingly. But a lack of alignment between IT and business, this will negatively impact governance, which ultimately impedes democratization and hence, ROI. >> So Jason, where can organizations that are maybe at the outset of their analytics journey, or maybe they're in environments where there's multiple analytics tools across shadow IT, where can they go to Alteryx to learn more about how they can really simplify, streamline, and dial up the value on their investment? >> Well, they can learn more, you know, on our website. I also encourage them to explore the Alteryx community, which has lots of best practices, not just in terms of how you do the analytics, but how you stand up an Alteryx environment. But also to take a look at your analytics stack, and prioritize technologies that can snap to and enhance your organization's governance posture. It doesn't have to change it, but it should be able to align to and enhance it. >> And of course, as you mentioned, it's about people, process and technologies. Jason, thank you so much for joining me today, unpacking the IDC InfoBrief and the great nuggets in there. Lots that organizations can learn, and really become empowered to maximize their analytics investments. We appreciate your time. >> Thank you. It's been a pleasure. >> In a moment, Alan Jacobson, who's the Chief Data and Analytics Officer at Alteryx, is going to join me. He's going to be here to talk about how organizations across all industries can accelerate their analytic maturity to drive transformational business outcomes. You're watching the Cube, the leader in tech enterprise coverage. (gentle music)

Published Date : Sep 13 2022

SUMMARY :

in the next 12 to 18 months. Excited to talk with you. over the next 12 to 18 months, And it looks like from the InfoBrief and the world is changing data. that the InfoBrief uncovered So on the people side, for example, should be able to participate So overall, the enterprises analytics to everything. analytics needs to exist everywhere, and really maximize the investments And the data from this survey shows If IT and the lines of and plan to invest accordingly. that can snap to and really become empowered to maximize It's been a pleasure. at Alteryx, is going to join me.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jason KleinPERSON

0.99+

Paula HansenPERSON

0.99+

Alan JacobsonPERSON

0.99+

Lisa MartinPERSON

0.99+

JasonPERSON

0.99+

AlteryxORGANIZATION

0.99+

Jacqui Van der Leij-GreylingPERSON

0.99+

NineQUANTITY

0.99+

IDCORGANIZATION

0.99+

90%QUANTITY

0.99+

second segmentQUANTITY

0.99+

93%QUANTITY

0.99+

63%QUANTITY

0.99+

eBayORGANIZATION

0.99+

73%QUANTITY

0.99+

78%QUANTITY

0.99+

oneQUANTITY

0.99+

InfoBriefORGANIZATION

0.99+

firstQUANTITY

0.99+

10 survey respondentsQUANTITY

0.97+

five organizationsQUANTITY

0.97+

about 1500 leadersQUANTITY

0.97+

todayDATE

0.97+

Democratizing Analytics Across the EnterpriseTITLE

0.97+

Four Ways to Unlock Transformative Business Outcomes From Analytics InvestmentsTITLE

0.96+

Alteryx IDCORGANIZATION

0.93+

CubeORGANIZATION

0.92+

nearly 70%QUANTITY

0.92+

18 monthsQUANTITY

0.92+

CubePERSON

0.91+

less than halfQUANTITY

0.9+

56%QUANTITY

0.89+

12QUANTITY

0.86+

67%QUANTITY

0.83+

IDC InfoBriefTITLE

0.78+

64%QUANTITY

0.75+

ChiefPERSON

0.73+

51%QUANTITY

0.73+

PresidentPERSON

0.73+

three quartersQUANTITY

0.64+

Chief Revenue OfficerPERSON

0.57+

the reasonsQUANTITY

0.52+

Jason Klein Alteryx


 

>> It's no surprise that 73% of organizations indicate analytics spend will outpace other software investments in the next 12 to 18 months. After all, as we know, data is changing the world, and the world is changing with it. But is everyone's spending resulting in the same ROI? This is Lisa Martin. Welcome to the Cube's presentation of "Democratizing Analytics Across the Enterprise," made possible by Alteryx. An Alteryx-commissioned IDC InfoBrief entitled, Four Ways to Unlock Transformative Business Outcomes From Analytics Investments, found that 93% of organizations are not utilizing the analytics skills of their employees, which is creating a widening analytics gap. On this special Cube presentation, Jason Klein, Product Marketing Director of Alteryx, will join me to share key findings from the new Alteryx-commissioned IDC Brief, and uncover how enterprises can derive more value from their data. In our second segment, we'll hear from Alan Jacobson, Chief Data and Analytics Officer at Alteryx. He's going to discuss how organizations across all industries can accelerate their analytic maturity to drive transformational business outcomes. And then, in our final segment, Paula Hansen, who is the President and Chief Revenue Officer of Alteryx, and Jacqui Van der Leij-Greyling, who is the Global Head of Tax Technology at eBay, they'll join me. They're going to share how Alteryx is helping the global eCommerce company innovate with analytics. Let's get the show started. (upbeat music) Jason Klein joins me next, Product Marketing Director at Alteryx. Jason, welcome to the program. >> Hello, nice to be here. >> Excited to talk with you. What can you tell me about the new Alteryx IDC research which spoke with about 1500 leaders? What nuggets were in there? >> Well, as the business landscape changes over the next 12 to 18 months, we're going to see that analytics is going to be a key component to navigating this change. 73% of the orgs indicated that analytics spend will outpace other software investments. But just putting more money towards technology, it isn't going to solve everything. And this is why everyone's spending is resulting in different ROIs. And one of the reasons for this gap is because 93% of organizations, they're still not fully using the analytics skills of their employees. And this widening analytics gap, it's threatening operational progress by wasting workers' time, harming business productivity, and introducing costly errors. So in this research, we developed a framework of enterprise analytics proficiency that helps organizations reap greater benefits from their investments. And we based this framework on the behaviors of organizations that saw big improvements across financial, customer, and employee metrics. And we're able to focus on the behaviors driving higher ROI. >> So the InfoBrief also revealed that nearly all organizations are planning to increase their analytics spend. And it looks like from the InfoBrief that nearly three quarters plan on spending more on analytics than any other software. And can you unpack what's driving this demand, this need for analytics across organizations? >> Sure, well, first, there's more data than ever before. The data's changing the world, and the world is changing data. Enterprises across the world, they're accelerating digital transformation to capitalize on new opportunities, to grow revenue, to increase margins, and to improve customer experiences. And analytics, along with automation and AI, is what's making digital transformation possible. They're providing the fuel to new digitally enabled lines of business. >> One of the things that the study also showed was that not all analytics spending is resulting in the same ROI. What are some of the discrepancies that the InfoBrief uncovered with respect to the the changes in ROI that organizations are achieving? >> Our research with IDC revealed significant roadblocks across people, processes, and technologies. They're preventing companies from reaping greater benefits from their investments. So for example, on the people side, only one out of five organizations reported a commensurate investment in upskilling for analytics and data literacy, as compared to the technology itself. And next, while data is everywhere, most organizations, 63%, from our survey, are still not using the full breadth of data types available. Yet, data's never been this prolific. It's going to continue to grow, and orgs should be using it to their advantage. And lastly, organizations, they need to provide the right analytics tools to help everyone unlock the power of data. They instead rely on outdated spreadsheet technology. In our survey, 9 out of 10 respondents said less than half of their knowledge workers are active users of analytics software beyond spreadsheets. But true analytic transformation can't happen for an organization in a few select pockets or silos. We believe everyone, regardless of skill level, should be able to participate in the data and analytics process and be driving value. >> Should we retake that, since I started talking over Jason accidentally? >> Yep, absolutely, you can do so. Yep, we'll go back to Lisa's question. Let's retake the question and the answer. >> That'll be not all analog spending results in the same ROI. What are some of the discrepancies? >> Yes, Lisa, so we'll go from your ISO, just so we can get that clean question and answer. >> Okay. >> Thank you for that. on your ISO, we're still speeding, Lisa. So give it a beat in your head, and then on you. >> Yet not all analytics spending is resulting in the same ROI. So, what are some of the discrepancies that the InfoBrief uncovered with respect to ROI? >> Well, our research with IDC revealed significant roadblocks across people, processes and technologies, all preventing companies from reaping greater benefits from their investments. So on the people side, for example, only one out of five organizations reported a commensurate investment in upskilling for analytics and data literacy as compared to the technology itself. And next, while data is everywhere, most organizations, 63% in our survey, are still not using the full breadth of data types available. Data has never been this prolific. It's going to continue to grow, and orgs should be using it to their advantage. And lastly, organizations, they need to provide the right analytic tools to help everyone unlock the power of data, yet instead, they're relying on outdated spreadsheet technology. Nine out of 10 survey respondents said that less than half of their knowledge workers are active users of analytics software. True analytics transformation can't happen for an organization in a few select pockets or silos. We believe everyone, regardless of skill level, should be able to participate in the data and analytics process and drive value. >> So if I look at this holistically then, what would you say organizations need to do to make sure that they're really deriving value from their investments in analytics? >> Yeah, sure. So overall, the enterprises that derive more value from their data and analytics and achieved more ROI, they invested more aggressively in the four dimensions of enterprise analytics proficiency. So they've invested in the comprehensiveness of analytics, across all data sources and data types, meaning they're applying analytics to everything. They've invested in the flexibility of analytics across deployment scenarios and departments, meaning they're putting analytics everywhere. They've invested in the ubiquity of analytics and insights for every skill level, meaning they're making analytics for everyone. And they've invested in the usability of analytics software, meaning they're prioritizing easy technology to accelerate analytics democratization. >> So very strategic investments. Did the survey uncover any specific areas where most companies are falling short, like any black holes that organizations need to be aware of at the outset? >> It did. It did. So organizations, they need to build a data-centric culture. And this begins with people. But what the survey told us is that the people aspect of analytics is the most heavily skewed towards low proficiency. In order to maximize ROI, organizations need to make sure everyone in the organization has access to the data and analytics technology they need. And then the organizations also have to align their investments with upskilling in data literacy to enjoy that higher ROI. Companies who did so experience higher ROI than companies who underinvested in analytics literacy. So among the high ROI achievers, 78% have a good or great alignment between analytics investment and workforce upskilling compared to only 64% among those without positive ROI. And as more orgs adopt cloud data warehouses or cloud data lakes, in order to manage the massively increasing workloads. Can I start that one over? Can I redo this one? >> Sure. >> Yeah >> Of course. Stand by. >> Tongue tied. >> Yep. No worries. >> One second. >> If we could get, if we could do the same, Lisa, just have a clean break. We'll go to your question. Yep. >> Yeah. >> On you Lisa. Just give that a count and whenever you're ready, here, I'm going to give us a little break. On you Lisa. >> So are there any specific areas that the survey uncovered where most companies are falling short? Like any black holes organizations need to be aware of from the outset? >> It did. You need to build a data-centric culture, and this begins with people. But we found that the people aspect of analytics is most heavily skewed towards low proficiency. In order to maximize ROI, organizations need to make sure everyone has access to the data and analytics technology they need. Organizations that align their analytics investments with upskilling enjoy higher ROI than orgs that are less aligned. For example, among the high ROI achievers in our survey, 78% had good or great alignment between analytics investments and workforce upskilling, compared to only 64% among those without positive ROI. And as more enterprises adopt cloud data warehouses or cloud data lakes to manage increasingly massive data sets, analytics needs to exist everywhere, especially for those cloud environments. And what we found is organizations that use more data types and more data sources generate higher ROI from their analytics investments. Among those with improved customer metrics, 90% were good or great at utilizing all data sources compared to only 67% among the ROI laggards. >> So interesting that you mentioned people. I'm glad that you mentioned people. Data scientists, everybody talks about data scientists. They're in high demand. We know that, but there aren't enough to meet the needs of all enterprises. So given that discrepancy, how can organizations fill the gap and really maximize the investments that they're making in analytics? >> Right. So analytics democratization, it's no longer optional, but it doesn't have to be complex. So we at Alteryx, we're democratizing analytics by empowering every organization to upskill every worker into a data worker. And the data from this survey shows this is the optimal approach. Organizations with a higher percentage of knowledge workers who are actively using analytics software enjoy higher returns from their analytics investment than orgs still stuck on spreadsheets. Among those with improved financial metrics, AKA the high ROI achievers, nearly 70% say that at least a quarter of their knowledge workers are using analytics software other than spreadsheets compared to only 56% in the low ROI group. Also, among the high ROI performers, 63% said data and analytic workers collaborate well or extremely well, compared to only 51% in the low ROI group. The data from the survey shows that supporting more business domains with analytics and providing cross-functional analytics correlates with higher ROI. So to maximize ROI, orgs should be transitioning workers from spreadsheets to analytics software. They should be letting them collaborate effectively, and letting them do so cross-functionally >> Yeah, that cross-functional collaboration is essential for anyone in any organization and in any discipline. Another key thing that jumped out from the survey was around shadow IT. The business side is using more data science tools than the IT side, and is expected to spend more on analytics than other IT. What risks does this present to the overall organization? If IT and the lines of business guys and gals aren't really aligned? >> Well, there needs to be better collaboration and alignment between IT and the line of business. The data from the survey, however, shows that business managers, they're expected to spend more on analytics and use more analytics tools than IT is aware of. And this is because the lines of business have recognized the value of analytics and plan to invest accordingly. But a lack of alignment between IT and business, this will negatively impact governance, which ultimately impedes democratization and hence, ROI. >> So Jason, where can organizations that are maybe at the outset of their analytics journey, or maybe they're in environments where there's multiple analytics tools across shadow IT, where can they go to Alteryx to learn more about how they can really simplify, streamline, and dial up the value on their investment? >> Well, they can learn more, you know, on our website. I also encourage them to explore the Alteryx community, which has lots of best practices, not just in terms of how you do the analytics, but how you stand up an Alteryx environment. But also to take a look at your analytics stack, and prioritize technologies that can snap to and enhance your organization's governance posture. It doesn't have to change it, but it should be able to align to and enhance it. >> And of course, as you mentioned, it's about people, process and technologies. Jason, thank you so much for joining me today, unpacking the IDC InfoBrief and the great nuggets in there. Lots that organizations can learn, and really become empowered to maximize their analytics investments. We appreciate your time. >> Thank you. It's been a pleasure. >> In a moment, Alan Jacobson, who's the Chief Data and Analytics Officer at Alteryx, is going to join me. He's going to be here to talk about how organizations across all industries can accelerate their analytic maturity to drive transformational business outcomes. You're watching the Cube, the leader in tech enterprise coverage. (gentle music)

Published Date : Sep 10 2022

SUMMARY :

in the next 12 to 18 months. Excited to talk with you. over the next 12 to 18 months, And it looks like from the InfoBrief and the world is changing data. that the InfoBrief uncovered So for example, on the people side, Let's retake the question and the answer. in the same ROI. just so we can get that So give it a beat in your that the InfoBrief uncovered So on the people side, for example, So overall, the enterprises organizations need to be aware of is that the people aspect We'll go to your question. here, I'm going to give us a little break. to the data and analytics and really maximize the investments And the data from this survey shows If IT and the lines of and plan to invest accordingly. that can snap to and really become empowered to maximize Thank you. at Alteryx, is going to join me.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jason KleinPERSON

0.99+

JasonPERSON

0.99+

Paula HansenPERSON

0.99+

Alan JacobsonPERSON

0.99+

LisaPERSON

0.99+

Lisa MartinPERSON

0.99+

AlteryxORGANIZATION

0.99+

Jacqui Van der Leij-GreylingPERSON

0.99+

IDCORGANIZATION

0.99+

9QUANTITY

0.99+

90%QUANTITY

0.99+

second segmentQUANTITY

0.99+

63%QUANTITY

0.99+

NineQUANTITY

0.99+

93%QUANTITY

0.99+

78%QUANTITY

0.99+

eBayORGANIZATION

0.99+

One secondQUANTITY

0.99+

73%QUANTITY

0.99+

oneQUANTITY

0.99+

InfoBriefORGANIZATION

0.99+

todayDATE

0.99+

firstQUANTITY

0.99+

five organizationsQUANTITY

0.98+

OneQUANTITY

0.98+

about 1500 leadersQUANTITY

0.97+

five organizationsQUANTITY

0.97+

nearly 70%QUANTITY

0.97+

Democratizing Analytics Across the EnterpriseTITLE

0.97+

10 respondentsQUANTITY

0.97+

10 survey respondentsQUANTITY

0.97+

Four Ways to Unlock Transformative Business Outcomes From Analytics InvestmentsTITLE

0.96+

Alteryx IDCORGANIZATION

0.93+

64%QUANTITY

0.93+

CubeORGANIZATION

0.92+

ISOORGANIZATION

0.92+

18 monthsQUANTITY

0.92+

Jason Klein AlteryxPERSON

0.9+

less than halfQUANTITY

0.87+

12QUANTITY

0.86+

four dimensionsQUANTITY

0.83+

67%QUANTITY

0.81+

51%QUANTITY

0.78+

56%QUANTITY

0.76+

CubePERSON

0.75+

ChiefPERSON

0.73+

PresidentPERSON

0.73+

three quartersQUANTITY

0.64+

thingsQUANTITY

0.63+

Chief Revenue OfficerPERSON

0.57+

the reasonsQUANTITY

0.52+

InfoBriefTITLE

0.5+

Alteryx Democratizing Analytics Across the Enterprise Full Episode V1b


 

>> It's no surprise that 73% of organizations indicate analytics spend will outpace other software investments in the next 12 to 18 months. After all as we know, data is changing the world and the world is changing with it. But is everyone's spending resulting in the same ROI? This is Lisa Martin. Welcome to "theCUBE"'s presentation of democratizing analytics across the enterprise, made possible by Alteryx. An Alteryx commissioned IDC info brief entitled, "Four Ways to Unlock Transformative Business Outcomes from Analytics Investments" found that 93% of organizations are not utilizing the analytics skills of their employees, which is creating a widening analytics gap. On this special "CUBE" presentation, Jason Klein, product marketing director of Alteryx, will join me to share key findings from the new Alteryx commissioned IDC brief and uncover how enterprises can derive more value from their data. In our second segment, we'll hear from Alan Jacobson, chief data and analytics officer at Alteryx. He's going to discuss how organizations across all industries can accelerate their analytic maturity to drive transformational business outcomes. And then in our final segment, Paula Hansen, who is the president and chief revenue officer of Alteryx, and Jacqui Van der Leij Greyling, who is the global head of tax technology at eBay, they'll join me. They're going to share how Alteryx is helping the global eCommerce company innovate with analytics. Let's get the show started. (upbeat music) Jason Klein joins me next, product marketing director at Alteryx. Jason, welcome to the program. >> Hello, nice to be here. >> Excited to talk with you. What can you tell me about the new Alteryx IDC research, which spoke with about 1500 leaders, what nuggets were in there? >> Well, as the business landscape changes over the next 12 to 18 months, we're going to see that analytics is going to be a key component to navigating this change. 73% of the orgs indicated that analytics spend will outpace other software investments. But just putting more money towards technology, it isn't going to solve everything. And this is why everyone's spending is resulting in different ROIs. And one of the reasons for this gap is because 93% of organizations, they're still not fully using the analytics skills of their employees, and this widening analytics gap, it's threatening operational progress by wasting workers' time, harming business productivity and introducing costly errors. So in this research, we developed a framework of enterprise analytics proficiency that helps organizations reap greater benefits from their investments. And we based this framework on the behaviors of organizations that saw big improvements across financial, customer, and employee metrics, and we're able to focus on the behaviors driving higher ROI. >> So the info brief also revealed that nearly all organizations are planning to increase their analytics spend. And it looks like from the info brief that nearly three quarters plan on spending more on analytics than any other software. And can you unpack, what's driving this demand, this need for analytics across organizations? >> Sure, well first there's more data than ever before, the data's changing the world, and the world is changing data. Enterprises across the world, they're accelerating digital transformation to capitalize on new opportunities, to grow revenue, to increase margins and to improve customer experiences. And analytics along with automation and AI is what's making digital transformation possible. They're providing the fuel to new digitally enabled lines of business. >> One of the things that the study also showed was that not all analytics spending is resulting in the same ROI. What are some of the discrepancies that the info brief uncovered with respect to the changes in ROI that organizations are achieving? >> Our research with IDC revealed significant roadblocks across people, processes, and technologies. They're preventing companies from reaping greater benefits from their investments. So for example, on the people side, only one out of five organizations reported a commensurate investment in upskilling for analytics and data literacy as compared to the technology itself. And next, while data is everywhere, most organizations, 63% from our survey, are still not using the full breadth of data types available. Yet data's never been this prolific, it's going to continue to grow, and orgs should be using it to their advantage. And lastly organizations, they need to provide the right analytics tools to help everyone unlock the power of data. >> So they- >> They instead rely on outdated spreadsheet technology. In our survey, nine out of 10 respondents said less than half of their knowledge workers are active users of analytics software beyond spreadsheets. But true analytic transformation can't happen for an organization in a few select pockets or silos. We believe everyone regardless of skill level should be able to participate in the data and analytics process and be driving value. >> Should we retake that, since I started talking over Jason accidentally? >> Yep, absolutely we can do so. We'll just go, yep, we'll go back to Lisa's question. Let's just, let's do the, retake the question and the answer, that'll be able to. >> It'll be not all analytics spending results in the same ROI, what are some of the discrepancies? >> Yes, Lisa, so we'll go from your ISO, just so we get that clean question and answer. >> Okay. >> Thank you for that. On your ISO, we're still speeding, Lisa, so give it a beat in your head and then on you. >> Yet not all analytics spending is resulting in the same ROI. So what are some of the discrepancies that the info brief uncovered with respect to ROI? >> Well, our research with IDC revealed significant roadblocks across people, processes, and technologies, all preventing companies from reaping greater benefits from their investments. So on the people side, for example, only one out of five organizations reported a commensurate investment in upskilling for analytics and data literacy as compared to the technology itself. And next, while data is everywhere, most organizations, 63% in our survey, are still not using the full breadth of data types available. Data has never been this prolific. It's going to continue to grow and orgs should be using it to their advantage. And lastly, organizations, they need to provide the right analytic tools to help everyone unlock the power of data, yet instead they're relying on outdated spreadsheet technology. Nine of 10 survey respondents said that less than half of their knowledge workers are active users of analytics software. True analytics transformation can't happen for an organization in a few select pockets or silos. We believe everyone regardless of skill level should be able to participate in the data and analytics process and drive value. >> So if I look at this holistically, then what would you say organizations need to do to make sure that they're really deriving value from their investments in analytics? >> Yeah, sure. So overall, the enterprises that derive more value from their data and analytics and achieve more ROI, they invested more aggressively in the four dimensions of enterprise analytics proficiency. So they've invested in the comprehensiveness of analytics across all data sources and data types, meaning they're applying analytics to everything. They've invested in the flexibility of analytics across deployment scenarios and departments, meaning they're putting analytics everywhere. They've invested in the ubiquity of analytics and insights for every skill level, meaning they're making analytics for everyone. And they've invested in the usability of analytics software, meaning they're prioritizing easy technology to accelerate analytics democratization. >> So very strategic investments. Did the survey uncover any specific areas where most companies are falling short, like any black holes that organizations need to be aware of at the outset? >> It did, it did. So organizations, they need to build a data-centric culture. And this begins with people. But what the survey told us is that the people aspect of analytics is the most heavily skewed towards low proficiency. In order to maximize ROI, organizations need to make sure everyone in the organization has access to the data and analytics technology they need. And then the organizations also have to align their investments with upskilling in data literacy to enjoy that higher ROI. Companies who did so experience higher ROI than companies who underinvested in analytics literacy. So among the high ROI achievers, 78% have a good or great alignment between analytics investment and workforce upskilling compared to only 64% among those without positive ROI. And as more orgs adopt cloud data warehouses or cloud data lakes, in order to manage the massively increasing workloads- Can I start that one over. >> Sure. >> Can I redo this one? >> Yeah. >> Of course, stand by. >> Tongue tied. >> Yep, no worries. >> One second. >> If we could do the same, Lisa, just have a clean break, we'll go your question. >> Yep, yeah. >> On you Lisa. Just give that a count and whenever you're ready. Here, I'm going to give us a little break. On you Lisa. >> So are there any specific areas that the survey uncovered where most companies are falling short? Like any black holes organizations need to be aware of from the outset? >> It did. You need to build a data-centric culture and this begins with people, but we found that the people aspect of analytics is most heavily skewed towards low proficiency. In order to maximize ROI organizations need to make sure everyone has access to the data and analytics technology they need. Organizations that align their analytics investments with upskilling enjoy higher ROI than orgs that are less aligned. For example, among the high ROI achievers in our survey, 78% had good or great alignment between analytics investments and workforce upskilling, compared to only 64% among those without positive ROI. And as more enterprises adopt cloud data warehouses or cloud data lakes to manage increasingly massive data sets, analytics needs to exist everywhere, especially for those cloud environments. And what we found is organizations that use more data types and more data sources generate higher ROI from their analytics investments. Among those with improved customer metrics, 90% were good or great at utilizing all data sources, compared to only 67% among the ROI laggards. >> So interesting that you mentioned people, I'm glad that you mentioned people. Data scientists, everybody talks about data scientists. They're in high demand, we know that, but there aren't enough to meet the needs of all enterprises. So given that discrepancy, how can organizations fill the gap and really maximize the investments that they're making in analytics? >> Right, so analytics democratization, it's no longer optional, but it doesn't have to be complex. So we at Alteryx, we're democratizing analytics by empowering every organization to upskill every worker into a data worker. And the data from this survey shows this is the optimal approach. Organizations with a higher percentage of knowledge workers who are actively using analytics software enjoy higher returns from their analytics investment than orgs still stuck on spreadsheets. Among those with improved financial metrics, AKA the high ROI achievers, nearly 70% say that at least a quarter of their knowledge workers are using analytics software other than spreadsheets compared to only 56% in the low ROI group. Also among the high ROI performers, 63% said data and analytic workers collaborate well or extremely well compared to only 51% in the low ROI group. The data from the survey shows that supporting more business domains with analytics and providing cross-functional analytics correlates with higher ROI. So to maximize ROI, orgs should be transitioning workers from spreadsheets to analytics software. They should be letting them collaborate effectively and letting them do so cross-functionally. >> Yeah, that cross-functional collaboration is essential for anyone in any organization and in any discipline. Another key thing that jumped out from the survey was around shadow IT. The business side is using more data science tools than the IT side. And it's expected to spend more on analytics than other IT. What risks does this present to the overall organization, if IT and the lines of business guys and gals aren't really aligned? >> Well, there needs to be better collaboration and alignment between IT and the line of business. The data from the survey, however, shows that business managers, they're expected to spend more on analytics and use more analytics tools than IT is aware of. And this isn't because the lines of business have recognized the value of analytics and plan to invest accordingly, but a lack of alignment between IT and business. This will negatively impact governance, which ultimately impedes democratization and hence ROI. >> So Jason, where can organizations that are maybe at the outset of their analytics journey, or maybe they're in environments where there's multiple analytics tools across shadow IT, where can they go to Alteryx to learn more about how they can really simplify, streamline, and dial up the value on their investment? >> Well, they can learn more on our website. I also encourage them to explore the Alteryx community, which has lots of best practices, not just in terms of how you do the analytics, but how you stand up in Alteryx environment, but also to take a look at your analytics stack and prioritize technologies that can snap to and enhance your organization's governance posture. It doesn't have to change it, but it should be able to align to and enhance it. >> And of course, as you mentioned, it's about people, process, and technologies. Jason, thank you so much for joining me today, unpacking the IDC info brief and the great nuggets in there. Lots that organizations can learn and really become empowered to maximize their analytics investments. We appreciate your time. >> Thank you, it's been a pleasure. >> In a moment, Alan Jacobson, who's the chief data and analytics officer at Alteryx is going to join me. He's going to be here to talk about how organizations across all industries can accelerate their analytic maturity to drive transformational business outcomes. You're watching "theCUBE", the leader in tech enterprise coverage. >> Somehow many have come to believe that data analytics is for the few, for the scientists, the PhDs, the MBAs. Well, it is for them, but that's not all. You don't have to have an advanced degree to do amazing things with data. You don't even have to be a numbers person. You can be just about anything. A titan of industry or a future titan of industry. You could be working to change the world, your neighborhood, or the course of your business. You can be saving lives or just looking to save a little time. The power of data analytics shouldn't be limited to certain job titles or industries or organizations because when more people are doing more things with data, more incredible things happen. Analytics makes us smarter and faster and better at what we do. It's practically a superpower. That's why we believe analytics is for everyone, and everything, and should be everywhere. That's why we believe in analytics for all. (upbeat music) >> Hey, everyone. Welcome back to "Accelerating Analytics Maturity". I'm your host, Lisa Martin. Alan Jacobson joins me next. The chief of data and analytics officer at Alteryx. Alan, it's great to have you on the program. >> Thanks, Lisa. >> So Alan, as we know, everyone knows that being data driven is very important. It's a household term these days, but 93% of organizations are not utilizing the analytics skills of their employees, which is creating a widening analytics gap. What's your advice, your recommendations for organizations who are just starting out with analytics? >> You're spot on, many organizations really aren't leveraging the full capability of their knowledge workers. And really the first step is probably assessing where you are on the journey, whether that's you personally, or your organization as a whole. We just launched an assessment tool on our website that we built with the International Institute of Analytics, that in a very short period of time, in about 15 minutes, you can go on and answer some questions and understand where you sit versus your peer set versus competitors and kind of where you are on the journey. >> So when people talk about data analytics, they often think, ah, this is for data science experts like people like you. So why should people in the lines of business like the finance folks, the marketing folks, why should they learn analytics? >> So domain experts are really in the best position. They know where the gold is buried in their companies. They know where the inefficiencies are. And it is so much easier and faster to teach a domain expert a bit about how to automate a process or how to use analytics than it is to take a data scientist and try to teach them to have the knowledge of a 20 year accounting professional or a logistics expert of your company. Much harder to do that. And really, if you think about it, the world has changed dramatically in a very short period of time. If you were a marketing professional 30 years ago, you likely didn't need to know anything about the internet, but today, do you know what you would call that marketing professional if they didn't know anything about the internet, probably unemployed or retired. And so knowledge workers are having to learn more and more skills to really keep up with their professions. And analytics is really no exception. Pretty much in every profession, people are needing to learn analytics to stay current and be capable for their companies. And companies need people who can do that. >> Absolutely, it seems like it's table stakes these days. Let's look at different industries now. Are there differences in how you see analytics in automation being employed in different industries? I know Alteryx is being used across a lot of different types of organizations from government to retail. I also see you're now with some of the leading sports teams. Any differences in industries? >> Yeah, there's an incredible actually commonality between the domains industry to industry. So if you look at what an HR professional is doing, maybe attrition analysis, it's probably quite similar, whether they're in oil and gas or in a high tech software company. And so really the similarities are much larger than you might think. And even on the sports front, we see many of the analytics that sports teams perform are very similar. So McLaren is one of the great partners that we work with and they use Alteryx across many areas of their business from finance to production, extreme sports, logistics, wind tunnel engineering, the marketing team analyzes social media data, all using Alteryx, and if I take as an example, the finance team, the finance team is trying to optimize the budget to make sure that they can hit the very stringent targets that F1 Sports has, and I don't see a ton of difference between the optimization that they're doing to hit their budget numbers and what I see Fortune 500 finance departments doing to optimize their budget, and so really the commonality is very high, even across industries. >> I bet every Fortune 500 or even every company would love to be compared to the same department within McLaren F1. Just to know that wow, what they're doing is so incredibly important as is what we're doing. >> So talk- >> Absolutely. >> About lessons learned, what lessons can business leaders take from those organizations like McLaren, who are the most analytically mature? >> Probably first and foremost, is that the ROI with analytics and automation is incredibly high. Companies are having a ton of success. It's becoming an existential threat to some degree, if your company isn't going on this journey and your competition is, it can be a huge problem. IDC just did a recent study about how companies are unlocking the ROI using analytics. And the data was really clear, organizations that have a higher percentage of their workforce using analytics are enjoying a much higher return from their analytic investment, and so it's not about hiring two double PhD statisticians from Oxford. It really is how widely you can bring your workforce on this journey, can they all get 10% more capable? And that's having incredible results at businesses all over the world. An another key finding that they had is that the majority of them said that when they had many folks using analytics, they were going on the journey faster than companies that didn't. And so picking technologies that'll help everyone do this and do this fast and do it easily. Having an approachable piece of software that everyone can use is really a key. >> So faster, able to move faster, higher ROI. I also imagine analytics across the organization is a big competitive advantage for organizations in any industry. >> Absolutely the IDC, or not the IDC, the International Institute of Analytics showed huge correlation between companies that were more analytically mature versus ones that were not. They showed correlation to growth of the company, they showed correlation to revenue and they showed correlation to shareholder values. So across really all of the key measures of business, the more analytically mature companies simply outperformed their competition. >> And that's key these days, is to be able to outperform your competition. You know, one of the things that we hear so often, Alan, is people talking about democratizing data and analytics. You talked about the line of business workers, but I got to ask you, is it really that easy for the line of business workers who aren't trained in data science to be able to jump in, look at data, uncover and extract business insights to make decisions? >> So in many ways, it really is that easy. I have a 14 and 16 year old kid. Both of them have learned Alteryx, they're Alteryx certified and it was quite easy. It took 'em about 20 hours and they were off to the races, but there can be some hard parts. The hard parts have more to do with change management. I mean, if you're an accountant that's been doing the best accounting work in your company for the last 20 years, and all you happen to know is a spreadsheet for those 20 years, are you ready to learn some new skills? And I would suggest you probably need to, if you want to keep up with your profession. The big four accounting firms have trained over a hundred thousand people in Alteryx. Just one firm has trained over a hundred thousand. You can't be an accountant or an auditor at some of these places without knowing Alteryx. And so the hard part, really in the end, isn't the technology and learning analytics and data science, the harder part is this change management, change is hard. I should probably eat better and exercise more, but it's hard to always do that. And so companies are finding that that's the hard part. They need to help people go on the journey, help people with the change management to help them become the digitally enabled accountant of the future, the logistics professional that is E enabled, that's the challenge. >> That's a huge challenge. Cultural shift is a challenge, as you said, change management. How do you advise customers if you might be talking with someone who might be early in their analytics journey, but really need to get up to speed and mature to be competitive, how do you guide them or give them recommendations on being able to facilitate that change management? >> Yeah, that's a great question. So people entering into the workforce today, many of them are starting to have these skills. Alteryx is used in over 800 universities around the globe to teach finance and to teach marketing and to teach logistics. And so some of this is happening naturally as new workers are entering the workforce, but for all of those who are already in the workforce, have already started their careers, learning in place becomes really important. And so we work with companies to put on programmatic approaches to help their workers do this. And so it's, again, not simply putting a box of tools in the corner and saying free, take one. We put on hackathons and analytic days, and it can be great fun. We have a great time with many of the customers that we work with, helping them do this, helping them go on the journey, and the ROI, as I said, is fantastic. And not only does it sometimes affect the bottom line, it can really make societal changes. We've seen companies have breakthroughs that have really made great impact to society as a whole. >> Isn't that so fantastic, to see the difference that that can make. It sounds like you guys are doing a great job of democratizing access to Alteryx to everybody. We talked about the line of business folks and the incredible importance of enabling them and the ROI, the speed, the competitive advantage. Can you share some specific examples that you think of Alteryx customers that really show data breakthroughs by the lines of business using the technology? >> Yeah, absolutely, so many to choose from. I'll give you two examples quickly. One is Armor Express. They manufacture life saving equipment, defensive equipments, like armor plated vests, and they were needing to optimize their supply chain, like many companies through the pandemic. We see how important the supply chain is. And so adjusting supply to match demand is really vital. And so they've used Alteryx to model some of their supply and demand signals and built a predictive model to optimize the supply chain. And it certainly helped out from a dollar standpoint. They cut over a half a million dollars of inventory in the first year, but more importantly, by matching that demand and supply signal, you're able to better meet customer demand. And so when people have orders and are looking to pick up a vest, they don't want to wait. And it becomes really important to get that right. Another great example is British Telecom. They're a company that services the public sector. They have very strict reporting regulations that they have to meet and they had, and this is crazy to think about, over 140 legacy spreadsheet models that they had to run to comply with these regulatory processes and report, and obviously running 140 legacy models that had to be done in a certain order and length, incredibly challenging. It took them over four weeks each time that they had to go through that process. And so to save time and have more efficiency in doing that, they trained 50 employees over just a two week period to start using Alteryx and learn Alteryx. And they implemented an all new reporting process that saw a 75% reduction in the number of man hours it took to run in a 60% run time performance. And so, again, a huge improvement. I can imagine it probably had better quality as well, because now that it was automated, you don't have people copying and pasting data into a spreadsheet. And that was just one project that this group of folks were able to accomplish that had huge ROI, but now those people are moving on and automating other processes and performing analytics in other areas. So you can imagine the impact by the end of the year that they will have on their business, potentially millions upon millions of dollars. And this is what we see again and again, company after company, government agency after government agency, is how analytics are really transforming the way work is being done. >> That was the word that came to mind when you were describing the all three customer examples, transformation, this is transformative. The ability to leverage Alteryx, to truly democratize data and analytics, give access to the lines of business is transformative for every organization. And also the business outcome you mentioned, those are substantial metrics based business outcomes. So the ROI in leveraging a technology like Alteryx seems to be right there, sitting in front of you. >> That's right, and to be honest, it's not only important for these businesses. It's important for the knowledge workers themselves. I mean, we hear it from people that they discover Alteryx, they automate a process, they finally get to get home for dinner with their families, which is fantastic, but it leads to new career paths. And so knowledge workers that have these added skills have so much larger opportunity. And I think it's great when the needs of businesses to become more analytic and automate processes actually matches the needs of the employees, and they too want to learn these skills and become more advanced in their capabilities. >> Huge value there for the business, for the employees themselves to expand their skillset, to really open up so many opportunities for not only the business to meet the demands of the demanding customer, but the employees to be able to really have that breadth and depth in their field of service. Great opportunities there, Alan. Is there anywhere that you want to point the audience to go to learn more about how they can get started? >> Yeah, so one of the things that we're really excited about is how fast and easy it is to learn these tools. So any of the listeners who want to experience Alteryx, they can go to the website, there's a free download on the website. You can take our analytic maturity assessment, as we talked about at the beginning, and see where you are on the journey and just reach out. We'd love to work with you and your organization to see how we can help you accelerate your journey on analytics and automation. >> Alan, it was a pleasure talking to you about democratizing data and analytics, the power in it for organizations across every industry. We appreciate your insights and your time. >> Thank you so much. >> In a moment, Paula Hansen, who is the president and chief revenue officer of Alteryx, and Jacqui Van der Leij Greyling, who's the global head of tax technology at eBay, will join me. You're watching "theCUBE", the leader in high tech enterprise coverage. >> 1200 hours of wind tunnel testing, 30 million race simulations, 2.4 second pit stops. >> Make that 2.3. >> Sector times out the wazoo. >> Way too much of this. >> Velocities, pressures, temperatures, 80,000 components generating 11.8 billion data points and one analytics platform to make sense of it all. When McLaren needs to turn complex data into winning insights, they turn to Alteryx. Alteryx, analytics automation. (upbeat music) >> Hey, everyone, welcome back to the program. Lisa Martin here, I've got two guests joining me. Please welcome back to "theCUBE" Paula Hansen, the chief revenue officer and president at Alteryx, and Jacqui Van der Leij Greyling joins us as well, the global head of tax technology at eBay. They're going to share with you how Alteryx is helping eBay innovate with analytics. Ladies, welcome, it's great to have you both on the program. >> Thank you, Lisa, it's great to be here. >> Yeah, Paula, we're going to start with you. In this program, we've heard from Jason Klein, we've heard from Alan Jacobson. They talked about the need to democratize analytics across any organization to really drive innovation. With analytics, as they talked about, at the forefront of software investments, how's Alteryx helping its customers to develop roadmaps for success with analytics? >> Well, thank you, Lisa. It absolutely is about our customers' success. And we partner really closely with our customers to develop a holistic approach to their analytics success. And it starts of course with our innovative technology and platform, but ultimately we help our customers to create a culture of data literacy and analytics from the top of the organization, starting with the C-suite. And we partner with our customers to build their roadmaps for scaling that culture of analytics, through things like enablement programs, skills assessments, hackathons, setting up centers of excellence to help their organization scale and drive governance of this analytics capability across the enterprise. So at the end of the day, it's really about helping our customers to move up their analytics maturity curve with proven technologies and best practices, so they can make better business decisions and compete in their respective industries. >> Excellent, sounds like a very strategic program, we're going to unpack that. Jacqui, let's bring you into the conversation. Speaking of analytics maturity, one of the things that we talked about in this event is the IDC report that showed that 93% of organizations are not utilizing the analytics skills of their employees, but then there's eBay. How Jacqui did eBay become one of the 7% of organizations who's really maturing and how are you using analytics across the organization at eBay? >> So I think the main thing for us is when we started out was is that, our, especially in finance, they became spreadsheet professionals instead of the things that we really want our employees to add value to. And we realized we had to address that. And we also knew we couldn't wait for all our data to be centralized until we actually start using the data or start automating and being more effective. So ultimately we really started very, very actively embedding analytics in our people and our data and our processes. >> Starting with people is really critical. Jacqui, continuing with you, what were some of the roadblocks to analytics adoption that you faced and how did you overcome them? >> So I think eBay is a very data driven company. We have a lot of data. I think we are 27 years around this year, so we have the data, but it is everywhere. And how do you use that data? How do you use it efficiently? How do you get to the data? And I believe that that is definitely one of our biggest roadblocks when we started out and just finding those data sources and finding ways to connect to them to move forward. The other thing is that people were experiencing a lot of frustration. I mentioned before about the spreadsheet professionals. And there was no, we were not independent. You couldn't move forward, you would've put it on somebody else's roadmap to get the data and to get the information if you want it. So really finding something that everybody could access analytics or access data. And finally we have to realize is that this is uncharted territory. This is not exactly something that everybody is used to working with every day. So how do you find something that is easy, and that is not so daunting on somebody who's brand new to the field? And I would call those out as your major roadblocks, because you always have, not always, but most of the times you have support from the top, and in our case we have, but at the end of the day, it's our people that need to actually really embrace it, and making that accessible for them, I would say is definitely not per se, a roadblock, but basically a block you want to be able to move. >> It's really all about putting people first. Question for both of you, and Paula we'll start with you, and then Jacqui we'll go to you. I think the message in this program that the audience is watching with us is very clear. Analytics is for everyone, should be for everyone. Let's talk now about how both of your organizations are empowering people, those in the organization that may not have technical expertise to be able to leverage data, so that they can actually be data driven. Paula. >> Yes, well, we leverage our platform across all of our business functions here at Alteryx. And just like Jacqui explained, at eBay finance is probably one of the best examples of how we leverage our own platform to improve our business performance. So just like Jacqui mentioned, we have this huge amount of data flowing through our enterprise and the opportunity to leverage that into insights and analytics is really endless. So our CFO Kevin Rubin has been a key sponsor for using our own technology. We use Alteryx for forecasting all of our key performance metrics, for business planning, across our audit function, to help with compliance and regulatory requirements, tax, and even to close our books at the end of each quarter. So it's really going to remain across our business. And at the end of the day, it comes to how do you train users? How do you engage users to lean into this analytic opportunity to discover use cases? And so one of the other things that we've seen many companies do is to gamify that process, to build a game that brings users into the experience for training and to work with each other, to problem solve and along the way, maybe earn badges depending on the capabilities and trainings that they take. And just have a little healthy competition as an employee base around who can become more sophisticated in their analytic capability. So I think there's a lot of different ways to do it. And as Jacqui mentioned, it's really about ensuring that people feel comfortable, that they feel supported, that they have access to the training that they need, and ultimately that they are given both the skills and the confidence to be able to be a part of this great opportunity of analytics. >> That confidence is key. Jacqui, talk about some of the ways that you're empowering folks without that technical expertise to really be data driven. >> Yeah, I think it means to what Paula has said in terms of getting people excited about it, but it's also understanding that this is a journey and everybody is at a different place in their journey. You have folks that's already really advanced who has done this every day. And then you have really some folks that this is brand new or maybe somewhere in between. And it's about how you get everybody in their different phases to get to the initial destination. I say initial, because I believe a journey is never really complete. What we have done is that we decided to invest, and built a proof of concept, and we got our CFO to sponsor a hackathon. We opened it up to everybody in finance in the middle of the pandemic. So everybody was on Zoom and we told people, listen, we're going to teach you this tool, it's super easy, and let's just see what you can do. We ended up having 70 entries. We had only three weeks. So and these are people that do not have a background. They are not engineers, they're not data scientists. And we ended up with a 25,000 hour savings at the end of that hackathon from the 70 entries with people that have never, ever done anything like this before. And there you have the result. And then it just went from there. People had a proof of concept. They knew that it worked and they overcame the initial barrier of change. And that's where we are seeing things really, really picking up now. >> That's fantastic. And the business outcome that you mentioned there, the business impact is massive, helping folks get that confidence to be able to overcome sometimes the cultural barriers is key here. I think another thing that this program has really highlighted is there is a clear demand for data literacy in the job market, regardless of organization. Can each of you share more about how you're empowering the next generation of data workers? Paula, we'll start with you. >> Absolutely, and Jacqui says it so well, which is that it really is a journey that organizations are on and we as people in society are on in terms of upskilling our capabilities. So one of the things that we're doing here at Alteryx to help address this skillset gap on a global level is through a program that we call SparkED, which is essentially a no-cost analytics education program that we take to universities and colleges globally to help build the next generation of data workers. When we talk to our customers like eBay and many others, they say that it's difficult to find the skills that they want when they're hiring people into the job market. And so this program's really developed just to do just that, to close that gap and to work hand in hand with students and educators to improve data literacy for the next generation. So we're just getting started with SparkED. We started last May, but we currently have over 850 educational institutions globally engaged across 47 countries, and we're going to continue to invest here because there's so much opportunity for people, for society and for enterprises, when we close the gap and empower more people with the necessary analytics skills to solve all the problems that data can help solve. >> So SparkED has made a really big impact in such a short time period. It's going to be fun to watch the progress of that. Jacqui, let's go over to you now. Talk about some of the things that eBay is doing to empower the next generation of data workers. >> So we basically wanted to make sure that we kept that momentum from the hackathon, that we don't lose that excitement. So we just launched the program called eBay Masterminds. And what it basically is, is it's an inclusive innovation in each other, where we firmly believe that innovation is for upskilling for all analytics roles. So it doesn't matter your background, doesn't matter which function you are in, come and participate in in this where we really focus on innovation, introducing new technologies and upskilling our people. We are, apart from that, we also said, well, we shouldn't just keep it to inside eBay. We have to share this innovation with the community. So we are actually working on developing an analytics high school program, which we hope to pilot by the end of this year, where we will actually have high schoolers come in and teach them data essentials, the soft skills around analytics, but also how to use Alteryx. And we're working with, actually, we're working with SparkED and they're helping us develop that program. And we really hope that at, say, by the end of the year, we have a pilot and then also next year, we want to roll it out in multiple locations in multiple countries and really, really focus on that whole concept of analytics for all. >> Analytics for all, sounds like Alteryx and eBay have a great synergistic relationship there that is jointly aimed at especially going down the stuff and getting people when they're younger interested, and understanding how they can be empowered with data across any industry. Paula, let's go back to you, you were recently on "theCUBE"'s Supercloud event just a couple of weeks ago. And you talked about the challenges the companies are facing as they're navigating what is by default a multi-cloud world. How does the Alteryx Analytics Cloud platform enable CIOs to democratize analytics across their organization? >> Yes, business leaders and CIOs across all industries are realizing that there just aren't enough data scientists in the world to be able to make sense of the massive amounts of data that are flowing through organizations. Last I checked, there was 2 million data scientists in the world, so that's woefully underrepresented in terms of the opportunity for people to be a part of the analytics solution. So what we're seeing now with CIOs, with business leaders is that they're integrating data analysis and the skillset of data analysis into virtually every job function, and that is what we think of when we think of analytics for all. And so our mission with Alteryx Analytics Cloud is to empower all of those people in every job function, regardless of their skillset, as Jacqui pointed out from people that are just getting started all the way to the most sophisticated of technical users. Every worker across that spectrum can have a meaningful role in the opportunity to unlock the potential of the data for their company and their organizations. So that's our goal with Alteryx Analytics Cloud, and it operates in a multi cloud world and really helps across all sizes of data sets to blend, cleanse, shape, analyze, and report out so that we can break down data silos across the enterprise and help drive real business outcomes as a result of unlocking the potential of data. >> As well as really lessening that skill gap. As you were saying, there's only 2 million data scientists. You don't need to be a data scientist, that's the beauty of what Alteryx is enabling and eBay is a great example of that. Jacqui, let's go ahead and wrap things with you. You talked a great deal about the analytics maturity that you have fostered at eBay. It obviously has the right culture to adapt to that. Can you talk a little bit and take us out here in terms of where Alteryx fits in as that analytics maturity journey continues and what are some of the things that you are most excited about as analytics truly gets democratized across eBay? >> When we're starting up and getting excited about things when it comes to analytics, I can go on all day, but I'll keep it short and sweet for you. I do think we are on the top of the pool of data scientists. And I really feel that that is your next step, for us anyways, is that how do we get folks to not see data scientists as this big thing, like a rocket scientist, it's something completely different. And it's something that is in everybody in a certain extent. So again, partnering with Alteryx who just released the AI ML solution, allowing folks to not have a data scientist program, but actually build models and be able to solve problems that way. So we have engaged with Alteryx and we purchased the licenses, quite a few. And right now through our Masterminds program, we're actually running a four month program for all skill levels, teaching them AI ML and machine learning and how they can build their own models. We are really excited about that. We have over 50 participants without a background from all over the organization. We have members from our customer services. We have even some of our engineers are actually participating in the program. We just kicked it off. And I really believe that that is our next step. I want to give you a quick example of the beauty of this is where we actually just allow people to go out and think about ideas and come up with things. And one of the people in our team who doesn't have a data scientist background at all, was able to develop a solution where there is a checkout feedback functionality on the eBay side where sellers or buyers can verbatim add information. And she built a model to be able to determine what relates to tax specific, what is the type of problem, and even predict how that problem can be solved before we as a human even step in, and now instead of us or somebody going to verbatim and try to figure out what's going on there, we can focus on fixing the error versus actually just reading through things and not adding any value, and it's a beautiful tool and I was very impressed when I saw the demo and definitely developing that sort of thing. >> That sounds fantastic. And I think just the one word that keeps coming to mind, and we've said this a number of times in the program today is empowerment. What you're actually really doing to truly empower people across the organization with varying degrees of skill level, going down to the high school level, really exciting. We'll have to stay tuned to see what some of the great things are that come from this continued partnership. Ladies, I want to thank you so much for joining me on the program today and talking about how Alteryx and eBay are really partnering together to democratize analytics and to facilitate its maturity. It's been great talking to you. >> Thank you, Lisa. >> Thank you so much. (cheerful electronic music) >> As you heard over the course of our program, organizations where more people are using analytics who have deeper capabilities in each of the four Es, that's everyone, everything, everywhere, and easy analytics, those organizations achieve more ROI from their respective investments in analytics and automation than those who don't. We also heard a great story from eBay, great example of an enterprise that is truly democratizing analytics across its organization. It's enabling and empowering line of business users to use analytics, not only focused on key aspects of their job, but develop new skills rather than doing the same repetitive tasks. We want to thank you so much for watching the program today. Remember you can find all of the content on thecube.net. You can find all of the news from today on siliconangle.com and of course alteryx.com. We also want to thank Alteryx for making this program possible and for sponsoring "theCUBE". For all of my guests, I'm Lisa Martin. We want to thank you for watching and bye for now. (upbeat music)

Published Date : Sep 10 2022

SUMMARY :

in the next 12 to 18 months. Excited to talk with you. over the next 12 to 18 months, And it looks like from the info brief and the world is changing data. that the info brief uncovered with respect So for example, on the people side, in the data and analytics and the answer, that'll be able to. just so we get that clean Thank you for that. that the info brief uncovered as compared to the technology itself. So overall, the enterprises to be aware of at the outset? is that the people aspect of analytics If we could do the same, Lisa, Here, I'm going to give us a little break. to the data and analytics and really maximize the investments And the data from this survey shows this And it's expected to spend more and plan to invest accordingly, that can snap to and the great nuggets in there. Alteryx is going to join me. that data analytics is for the few, Alan, it's great to that being data driven is very important. And really the first step the lines of business and more skills to really keep of the leading sports teams. between the domains industry to industry. to be compared to the same is that the majority of them said So faster, able to So across really all of the is to be able to outperform that is E enabled, that's the challenge. and mature to be competitive, around the globe to teach finance and the ROI, the speed, that they had to run to comply And also the business of the employees, and they of the demanding customer, to see how we can help you the power in it for organizations and Jacqui Van der Leij 1200 hours of wind tunnel testing, to make sense of it all. back to the program. going to start with you. So at the end of the day, one of the 7% of organizations to be centralized until we of the roadblocks to analytics adoption and to get the information if you want it. that the audience is watching and the confidence to be able to be a part to really be data driven. in their different phases to And the business outcome and to work hand in hand Jacqui, let's go over to you now. We have to share this Paula, let's go back to in the opportunity to unlock and eBay is a great example of that. and be able to solve problems that way. that keeps coming to mind, Thank you so much. in each of the four Es,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JacquiPERSON

0.99+

PaulaPERSON

0.99+

Jason KleinPERSON

0.99+

Paula HansenPERSON

0.99+

Lisa MartinPERSON

0.99+

Paula HansenPERSON

0.99+

Alan JacobsonPERSON

0.99+

AlteryxORGANIZATION

0.99+

eBayORGANIZATION

0.99+

JasonPERSON

0.99+

International Institute of AnalyticsORGANIZATION

0.99+

LisaPERSON

0.99+

AlanPERSON

0.99+

Alan JacobsonPERSON

0.99+

60%QUANTITY

0.99+

Kevin RubinPERSON

0.99+

Jacqui Van der Leij GreylingPERSON

0.99+

14QUANTITY

0.99+

International Institute of AnalyticsORGANIZATION

0.99+

10%QUANTITY

0.99+

50 employeesQUANTITY

0.99+

63%QUANTITY

0.99+

93%QUANTITY

0.99+

90%QUANTITY

0.99+

nineQUANTITY

0.99+

75%QUANTITY

0.99+

70 entriesQUANTITY

0.99+

16 yearQUANTITY

0.99+

1200 hoursQUANTITY

0.99+

Thomas Bienkowski, Netscout |Netscout Advanced NPR Panel 7 22


 

>>EDR NDR, what are the differences, which one's better? Are they better together? Today's security stack contains a lot of different tools and types of data and fortunate, as you know, this creates data silos, which leads to vis visibility gaps. EDR is endpoint detection and response. It's designed to monitor and mitigate endpoint attacks, which are typically focused on computers and servers, NDR network detection, and response. On the other hand, monitors network traffic to gain visibility into potential or active cyber threats, delivering real time visibility across the broader network. One of the biggest advantages that NDR has over EDR is that bad actors can hide or manipulate endpoint data, pretty easily network data. On the other hand, much harder to manipulate because attackers and malware can avoid detection at the endpoint. NDR, as you're gonna hear is the only real source for reliable, accurate, and comprehensive data. >>All endpoints use the network to communicate, which makes your network data, the ultimate source of truth. My name is Lisa Martin, and today on the special cube presentation, Tom Binkowski senior director of product marketing at net scout, and I are gonna explore the trends and the vital reasons why relying upon EDR is not quite enough. We're also gonna share with you the growing importance of advanced NDR. Welcome to the series, the growing importance of advanced NDR in the first segment, Tom's gonna talk with me about the trends that are driving enterprise security teams to implement multiple cyber security solutions that enable greater visibility, greater protection. We're also gonna explore Gartner's concept of the security operations center, SOC visibility triad, and the three main data sources for visibility, SIM EDR and NDR in segment two, Tom. And I will talk about the role of NDR and how it overcomes the challenges of EDR as Tom's gonna discuss, as you'll hear EDR is absolutely needed, but as he will explain it, can't be solely relied upon for comprehensive cybersecurity. And then finally, we'll come back for a third and final segment to discuss why not all NDR is created equal. Tom's gonna unpack the features and the capabilities that are most important when choosing an NDR solution. Let's do this. Here comes our first segment. >>Hey, everyone kicking things off. This is segment one. I'm Lisa Martin with Tom Binowski, senior director of product marketing at nets scout. Welcome to the growing importance of advanced NDR. Tom, great to have you on the program, >>Glad to be here. >>So we're gonna be talking about the trends that are driving enterprise security teams to implement multiple cyber security solutions that really enable greater visibility and protection. And there are a number of factors that continue to expand the ECAC service for enterprise networks. I always like to think of them as kind of the spreading amorphously you shared had shared some stats with me previously, Tom, some cloud adoption stats for 2022 94% of all enterprises today use a cloud service and more than 60% of all corporate data is store in the cloud. So, Tom, what are some of the key trends that nets scout is seeing in the market with respect to this? >>Yeah, so just to continue that, you know, those stats that, that migration of workloads to the cloud is a major trend that we're seeing in that was exasperated by the pandemic, right along with working from home. Those two things are probably the most dramatic changes that we we see out there today. But along with that is also this growing sophistication of the network, you know, today, you know, your network environment, isn't a simple hub and spoke or something like that. It is a very sophisticated combination of, you know, high speed backbones, potentially up to a hundred gigabits combination with partner networks. You have, like we said, workloads up in, in private clouds, pub public clouds. So you have this hybrid cloud environment. So, and then you have applications that are multi-tiered, there are pieces and parts. And in all of that, some on your premise, some up in a private cloud, some on a public cloud, some actually pulling data off when you a customer network or potentially even a, a partner network. So really, really sophisticated environment today. And that's requiring this need for very comprehensive network visibility, not only for, for cybersecurity purposes, but also just to make sure that those applications and networks are performing as you have designed them. >>So when it comes to gaining visibility into cyber threats, I, you talked about the, the sophistication and it sounds like even the complexity of these networks, Gartner introduced the concept of the security operations, visibility triad, or the SOC visibility triad break that down for us. It consists of three main data sources, but to break those three main data sources down for us. >>Sure. So Gartner came out a few years ago where they were trying to, you know, summarize where do security operations team get visibility into threats and they put together a triad and the three sides of the trier consists of one, the SIM security information event manager, two, the endpoint or, or data that you get from EDR systems, endpoint detection, response systems. And the third side is the network or the data you get from network detection, response systems. And, you know, they didn't necessarily say one is better than the other. They're basically said that you need all three in order to have comprehensive visibility for cybersecurity purposes. >>So talk, so all, all three perspectives are needed. Talk about what each provides, what are the different perspectives on threat detection and remediation? >>Yeah. So let's start with the SIM, you know, that is a device that is gathering alerts or logs from all kinds of different devices all over your network. Be it routers servers, you know, firewalls IDs, or even from endpoint detection and network detection devices too. So it is, it is the aggregator or consumer of all those alerts. The SIM is trying to correlate those alerts across all those different data sources and, and trying to the best it can to bubble up potentially the highest priority alerts or drawing correlations and, and, and, and giving you some guidance on, Hey, here's something that we think is, is really of importance or high priority. Here's some information that we have across these disparate data sources. Now go investigate the disadvantage of the SIM is that's all it gives you is just these logs or, or, or information. It doesn't give you any further context. >>Like what happened, what is really happening at the end point? Can I get visibility into the, into the files that were potentially manipulated or the, the registry setting or what, what happened on the network? And I get visibility into the packet date or things like that. It that's, so that's where it ends. And, and that's where the, so there other two sides of the equation come in, the endpoint will give you that deeper visibility, endpoint detection response. It will look for known and or unknown threats, you know, at that endpoint, it'll give you all kinds of additional information that is occurring in endpoint, whether it be a registry setting in memory on the file, et cetera. But you know, one of, some of its disadvantages, it's really difficult because really difficult to deploy pervasive because it requires an agent and, you know, not all devices can accept an agent, but what it miss, what is lacking is the context on the network. >>So if I was an analyst and I started pursuing from my SIM, I went down to the end point and, and said, I wanna investigate this further. And I hit a, I hit a dead end from some sort, or I realize that the device that's potentially I should be alerted to, or should be concerned about is an IOT device that doesn't even have an agent on it. My next source of visibility is on the network and that's where NDR comes in. It, it sees what's traversing. The entire network provides you visibility into that from both a metadata and even a ultimately a packer perspective. And maybe, you know, could be deployed a little bit more strategically, but you know, it doesn't have the perspective of the endpoint. So you can see how each of these sort of compliments each other. And that's why, you know, Gartner said that, that you need 'em all, then they all play a role. They all have their pros and cons or advantage and disadvantages, but, you know, bringing them and using 'em together is, is the key. >>I wanna kinda dig into some of the, the EDR gaps and challenges, as you talked about as, as the things evolve and change the network, environment's becoming far more sophisticated and as well as threat actors are, and malware is. So can you crack that open more on some of the challenges that EDR is presenting? What are some of those gaps and how can organizations use other, other, other data sources to solve them? >>Yeah, sure. So, you know, again, just be clear that EDR is absolutely required, right? We, we need that, but as sort of these network environments get more complex, are you getting all kinds of new devices being put on the network that devices being brought into the network that may be, you didn't know of B Y O D devices you have, I T devices, you know, popping up potentially by the thousands in, in, in some cases when new applications or world that maybe can't accept an and endpoint detection or an EDR agent, you may have environments like ICS and skate environments that just, you can't put an endpoint agent there. However, those devices can be compromised, right? You have different environments up in the cloud or SaaS environments again, where you may not be able to deploy an endpoint agent and all that together leaves visibility gaps or gaps in, in, in the security operation triad. Right. And that is basically open door for exploitation >>Open door. Go ahead. Sorry. >>Yeah. And then, then you just have the malware and the, and the attackers getting more sophisticated. They, they have malware that can detect an EDR agent running or some anti malware agent running on device. And they'll simply avoid that and move on to the next one, or they know how to hide their tracks, you know, whether it be deleting files, registry, settings, things like that. You know, so it's, that's another challenge that, that, that just an agent faces. Another one is there are certain applications like my SQL that are, you know, have ministry administrative rights into certain parts of the windows operate system that EDR doesn't have visibility into another area that maybe EDR may not have visibility is, is, is in, you know, malware that tries to compromise, you know, hardware, especially like bios or something like that. So there's a number of challenges as sort of the whole network environment and sophistication of bad actors and malware increases. >>Ultimately, I think one of the things that, that we've learned, and, and we've heard from you in this segment, is that doing business in, in today's digital economy, demands, agility, table stakes, right? Absolutely essential corporate digital infrastructures have changed a lot in response to the dynamic environment, but its businesses are racing to the clouds. Dave Alane likes to call it the forced March to the cloud, expanding activities across this globally distributed digital ecosystem. They also sounds like need to reinvent cybersecurity to defend this continuously expanding threat surface. And for that comprehensive network, visibility is, as I think you were saying is really, really fundamental and more advanced network detection is, and responses required. Is that right? >>That's correct. You know, you know, we, we at ESCO, this is, this is where we come from. Our perspective is the network. It has been over for over 30 years. And, and we, as well as others believe that that network visibility, comprehensive network visibility is fundamental for cyber security as well as network performance and application analysis. So it, it, it's sort of a core competency or need for, for modern businesses today. >>Excellent. And hold that thought, Tom, cause in a moment, you and I are gonna be back to talk about the role of NDR and how it overcomes the challenges of EDR. You're watching the cube, the leader in enterprise tech coverage. Hey everyone, welcome back. This is segment two kicking things off I'm Lisa Martin with Tom Binkowski, senior director of product marketing at nets scout, Tom, great to have you back on the program. >>Good to be here. >>We're gonna be talking about the growing importance of advanced NDR in this series. In this segment specifically, Tom's gonna be talking about the role of NDR and how it overcomes the challenges of EDR. So Tom, one of the things that we talked about previously is one of the biggest advantages that NDR has over EDR is that bad actors can hide or manipulate endpoint data pretty easily, whereas network data, much harder to manipulate. So my question, Tom, for you is, is NDR the only real source for reliable, accurate, comprehensive data. >>I'm sure that's arguable, right? Depending on who you are as a vendor, but you know, it's, it's our, our answer is yes, NDR solutions also bring an analyst down to the packet level. And there's a saying, you know, the, the packet is the ultimate source or source of truth. A bad actor cannot manipulate a packet. Once it's on the wire, they could certainly manipulate it from their end point and then blast it out. But once it hits the wire, that's it they've lost control of it. And once it's captured by a network detection or, or network monitoring device, they can't manipulate it. They can't go into that packet store and, and manipulate those packets. So the ultimate source of truth is, is lies within that packet somewhere. >>Got you. Okay. So as you said in segment one EDR absolutely necessary, right. But you did point out it can't organizations can't solely rely on it for comprehensive cybersecurity. So Tom, talk about the benefits of, of this complimenting, this combination of EDR and NDR and, and how can that deliver more comprehensive cybersecurity for organizations? >>Yeah, so, so one of the things we talked about in the prior segment was where EDR, maybe can't be deployed and it's either on different types of devices like IOT devices, or even different environments. They have a tough time maybe in some of these public cloud environments, but that's where NDR can, can step in, especially in these public cloud environments. So I think there's a misconception out there that's difficult to get packet level or network visibility and public clouds like AWS or Azure or Google and so on. And that's absolutely not true. They have all kinds of virtual tapping capabilities that an NDR solution or network based monitoring solution could take advantage of. And one of the things that we know we spoke about before some of that growing trends of migrating workloads to the cloud, that's, what's driving that those virtual networks or virtual taps is providing visibility into the performance and security of those workloads. >>As they're migrated to public clouds, NDR can also be deployed more strategically, you know, prior segment talking about how the, in order to gain pervasive visibility with EDR, you have to deploy an agent everywhere agents can't be deployed everywhere. So what you can do with NDR is there's a lot fewer places in a network where you can strategically deploy a network based monitoring device to give you visibility into not only that north south traffic. So what's coming in and out of your network, but also the, the, the, the east west traffic too west traversing, you know, within your network environment between different points of your op your, your multi-tiered application, things like that. So that's where, you know, NDR has a, a, a little bit more advantage. So fewer points of points in the network, if you will, than everywhere on every single endpoint. And then, you know, NDR is out there continuously gathering network data. It's both either before, during, and even after a threat or an attack is, is detected. And it provides you with this network context of, of, you know, what's happening on the wire. And it does that through providing you access to, you know, layer two through layer seven metadata, or even ultimately packets, you know, the bottom line is simply that, you know, NDR is providing, as we said before, that that network context that is potentially missing or is missing in EDR. >>Can you talk a little bit about XDR that kind of sounds like a superhero name to me, but this is extended detection and response, and this is an evolution of EDR talk to us about XDR and maybe EDR NDR XDR is really delivering that comprehensive cybersecurity strategy for organizations. >>Yeah. So, you know, it's, it's interesting. I think there's a lot of confusion out there in the industry. What is, what is XDR, what is XDR versus an advanced SIM, et cetera. So in some cases, there are some folks that don't think it's just an evolution of EDR. You know, to me, XDR is taking, look at these, all these disparate data sources. So going back to our, when our first segment, we talked about the, the, the security operations center triad, and it has data from different perspectives, as we were saying, right? And XCR, to me is the, is, is trying to bring them all together. All these disparate data source sets or sources bring them together, conduct some level of analysis on that data for the analyst and potentially, you know, float to the top. The most, you know, important events are events that we, that you know, that the system deems high priority or most risky and so on. But as I, as I'm describing this, I know there are many advanced Sims out there trying to do this today too. Or they do do this today. So this there's this little area of confusion around, you know, what exactly is XDR, but really it is just trying to pull together these different sources of information and trying to help that analyst figure out, you know, what, where's the high priority event that's they should be looking at, >>Right? Getting those high priority events elevated to the top as soon as possible. One of the things that I wanted to ask you about was something that occurred in March of this year, just a couple of months ago, when the white house released a statement from president Biden regarding the nation's cyber security, it included recommendations for private companies. I think a lot of you are familiar with this, but the first set of recommendations were best practices that all organizations should already be following, right? Multifactor authentication, patching against known vulnerabilities, educating employees on the phishing attempts on how to be effective against them. And the next statement in the president's release, focus on data safety practices, also stuff that probably a lot of corporations doing encryption maintaining offline backups, but where the statement focused on proactive measures companies should take to modernize and improve their cybersecurity posture. It was vague. It was deploy modern security tools on your computers and devices to continuously look for and mitigate threats. So my question to you is how do, how do you advise organizations do that? Deploy modern security tools look for and mitigate threats, and where do the data sources, the SOC tri that we talked about NDR XDR EDR, where did they help fit into helping organizations take something that's a bit nebulous and really figure out how to become much more secure? >>Yeah, it was, it was definitely a little vague there with that, with that sentence. And also if you, if you, I think if, if you look at the sentence, deploy modern security tools on your computers and devices, right. It's missing the network as we've been talking about there, there's, there's a key, key point of, of reference that's missing from that, from that sentence. Right. But I think what they mean by deploying monitor security tools is, is really taking advantage of all these, these ways to gain visibility into, you know, the threats like we've been talking about, you're deploying advanced Sims that are pulling logs from all kinds of different security devices or, and, or servers cetera. You're, you're deploying advanced endpoint detection systems, advanced NDR systems. And so on, you're trying to use, you're trying to utilize XDR new technology to pull data from all those different sources and analyze it further. And then, you know, the other one we, we haven't even mentioned yet. It was the, so the security operation and automation, right. Response it's now, now what do we do? We've detected something, but now help me automate the response to that. And so I think that's what they mean by leveraging modern, you know, security tools and so on >>When you're in customer conversations, I imagine they're coming to, to Netscale looking for advice like what we just talked through the vagueness in that statement and the different tools that organizations can use. So when you're talking to customers and they're talking about, we need to gain visibility across our entire network, across all of our devices, from your perspective from net Scout's perspective, what does that visibility actually look like and deliver across an organization that does it well? >>Yeah, we, I mean, I think the simple way to put it is you need visibility. That is both broad and deep. And what I mean by broad is that you need visibility across your network, no matter where that network may reside, no matter what protocols it's running, what, you know, technologies is it, is it virtualized or, or legacy running in a hundred gigabits? Is it in a private cloud, a public cloud, a combination of both. So that broadness, meaning wherever that network is or whatever it's running, that's, that's what you need visibility into. It has to be able to support that environment. Absolutely. And the, the, absolutely when I, we talk about being deep it's, it has to get down to a packet level. It can't be, you know, as high as say, just looking at net flow records or something like that, that they are valuable, they have their role. However, you know, when we talk about getting deep, it has to ultimately get down to the packet level and that's, and we've said this in this time that it's ultimately that source of truth. So that, that's what that's, I think that's what we need. >>Got it. That that depth is incredibly important. Thanks so much, Tom, for talking about this in a moment, you and I are gonna be back, we're gonna be talking about why not all NDR is created equally, and Tom's gonna actually share with you some of the features and capabilities that you should be looking for when you're choosing an NDR solution. You're watching the cube, the leader in enterprise tech coverage, >>And we're clear. >>All right. >>10 45. Perfect. You guys are >>Okay. Good >>Cruising. Well, >>Welcome back everyone. This is segment three. I'm Lisa Martin with Tom gin. Kowski senior director of product marketing at nets scout. Welcome back to the growing importance of advanced NDR in this segment, Tom and I are gonna be talking about the fact that not all NDR is created equally. He's gonna impact the features, the capabilities that are most important when organizations are choosing an NDR solution. Tom, it's great to have you back on the program. >>Great, great to be here. >>So we've, we've covered a lot of content in the first two segments, but as we, as we see enterprises expanding their it infrastructure, enabling the remote workforce, which is here to stay leveraging the crowd cloud, driving innovation, the need for cybersecurity approaches and strategies that are far more robust and deep is really essential. But in response to those challenges, more and more enterprises are relying on NDR solutions that fill some of the gaps that we talked about with some of the existing tool sets in the last segment, we talked about some of the gaps in EDR solutions, how NDR resolves those. But we also know that not all NDR tools are created equally. So what, in your perspective, Tom are some of the absolutely fundamental components of NDR tools that organizations need to have for those tools to really be robust. >>Yeah. So we, we, we touched upon this a little bit in the previous segment when we talked about first and foremost, your NDR solution is providing you comprehensive network visibility that must support whatever your network environment is. And it should be in a single tool. It shouldn't have a one vendor per providing you, you know, network visibility in the cloud and another vendor providing network visibility in a local network. It should be a single NDR solution that provides you visibility across your entire network. So we also talked about it, not only does it need to be broadened like that, but also has to be deep too, eventually down to a packet level. So those are, those are sort of fundamental table stakes, but the NDR solution also must give you the ability to access a robust source of layer two or layer three metadata, and then ultimately give you access to, to packets. And then last but not least that solution must integrate into your existing cybersecurity stack. So in the prior segments, we talked a lot about, you know, the, the SIM, so that, that, that NDR solution must have the ability to integrate into that SIM or into your XDR system or even into your source system. >>Let's kind of double click on. Now, the evolution of NDR can explain some of the differences between the previous generations and advanced NDR. >>Yeah. So let's, let's start with what we consider the most fundamental difference. And that is solution must be packet based. There are other ways to get network visibility. One is using net flow and there are some NDR solutions that rely upon net flow for their source of, of, of visibility. But that's too shallow. You ultimately, you need to get deeper. You need to get down to a pack level and that's again where some, so, you know, you, you want to make sure that your NDR or advanced NDR solution is packet based. Number two, you wanna make sure that when you're pulling packets off the wire, you can do it at scale, that full line rate and in any environment, as we, as we spoke about previously, whether it be your local environment or a public cloud environment, number three, you wanna be able to do this when your traffic is encrypted. As we know a lot of, lot of not of network traffic is encrypted today. So you have the ability to have to have the ability to decrypt that traffic and then analyze it with your NDR system. >>Another, another, another one number four is, okay, I'm not just pulling packets off the wire, throwing full packets into a data storage someplace. That's gonna, you know, fill up a disc in a matter of seconds, right? You want the ability to extract a meaningful set of metadata from layer two to layer seven, the OSI model look at key metrics and conducting initial set of analysis, have the ability to index and compress that data, that metadata as well as packets on these local storage devices on, you know, so having the ability to do this packet capture at scale is really important, storing that packets and metadata locally versus up in a cloud to, you know, help with some compliance and, and confidentiality issues. And then, you know, last final least when we talk about integration into that security stack, it's multiple levels of integration. Sure. We wanna send alerts up into that SIM, but we also want the ability to, you know, work with that XDR system to, or that, that source system to drill back down into that metadata packets for further analysis. And then last but not least that piece of integration should be that there's a robust set of information that these NDR systems are pulling off the wire many times in more advanced mature organizations, you know, security teams, data scientists, et cetera. They just want access to that raw data, let them do their own analysis outside, say the user interface with the boundaries of a, of a vendor's user interface. Right? So have the ability to export that data too is really important and advance in the systems. >>Got it. So, so essentially that the, the, the breadth, the visibility across the entire infrastructure, the depth you mentioned going down to a packet level, the scale, the metadata encryption, is that what net scout means when you talk about visibility without borders? >>Yeah, exactly. You know, we, we have been doing this for over 30 years, pulling packets off of wire, converting them using patent technology to a robust set of metadata, you know, at, at full line rates up to a hundred in any network environment, any protocols, et cetera. So that, that's what we mean by that breadth. And in depth of visibility, >>Can you talk a little bit about smart detection if we say, okay, advanced NDR needs to deliver this threat intelligence, but it also needs to enable smart detection. What does net scout mean by that? >>So what you wanna make sure you have multiple methods of detection, not just a methods. So, you know, not just doing behavioral analysis or not just detecting threats based on known indicators or compromise, what you wanna wanna have multiple ways of detecting threats. It could be using statistical behavioral analysis. It could be using curated threat intelligence. It could be using, you know, open source signature engine, like from Sara COTA or other threat analytics, but to, but you also wanna make sure that you're doing this both in real time and have the ability to do it historically. So after a, a threat has been detected, for example, with another, with another product, say an EDR device, you now want the ability to drill into the data from the network that had occurred in, in, you know, prior to this. So historically you want the ability to comb through a historical set of metadata or packets with new threat intelligence that you've you've gathered today. I wanna be able to go back in time and look through with a whole new perspective, looking for something that I didn't know about, but you know, 30 days ago. So that's, that's what we, what we mean by smart detection. >>So really what organizations need is these tools that deliver a far more comprehensive approach. I wanna get into a little bit more on in integration. You talked about that in previous segments, but can you, can you give us an example of, of what you guys mean by smart integration? Is that, what does that deliver for organizations specifically? >>Yeah, we really it's three things. One will say the integration to the SIM to the security operations center and so on. So when, when an ed, when an NDR device detects something, have it send an alert to the SIM using, you know, open standards or, or, or like syslog standards, et cetera, the other direction is from the SIM or from the so, so one, you know, that SIM that, so is receiving information from many different devices that are, or detecting threats. The analyst now wants the ability to one determine if that's a true threat or not a false positive, if it is a true threat, you know, what help me with the remediation effort. So, you know, an example could be an alert comes into a SIM slash. So, and part of the playbook is to go out and grab the metadata packets associated with this alert sometime before and sometime after when that alert came in. >>So that could be part of the automation coming from the SIM slash. So, and then last one, not least is we alluded to this before is having the ability to export that robust set of layer two through layer seven metadata and or packets to a third party data lake, if you will, and where analysts more sophisticated analysts, data scientists, and so on, can do their own correlation, enrich it with their own data, combined it with other data sets and so on, do their own analysis. So it's that three layers of, of integration, if you will, that really what should be an advanced NDR system? >>All right, Tom, take this home for me. How does nets scout deliver advanced NDRs for organizations? >>We do that via solution. We call Omni the security. This is Netscout's portfolio of, of multiple different cyber security products. It all starts with the packets. You know, our core competency for the last 30 years has been to pull packets off the wire at scale, using patented technologies, for example, adapt service intelligence technologies to convert those broad packets into robust set of layer seven layer two through seven metadata. We refer to that data as smart data with that data in hand, you now have the ability to conduct multiple types of threat detection using statistical behavioral, you know, curative threat intelligence, or even open source. So rules engine, you have the ability to detect threats both in real time, as well as historically, but then a solution goes beyond just detecting threats or investigating threats has the ability to influence the blocking of threats too. So we have integrations with different firewall vendors like Palo Alto, for example, where they could take the results of our investigation and then, you know, create policies, blocking policies into firewall. >>In addition to that, we have our own Omni a E D product or our Arbor edge defense. That's, that's a product that sits in front of the firewall and protects the firewall from different types of attacks. We have integration that where you can, you can also influence policies being blocked in the a E and in last but not least, our, our solution integrates this sort of three methods of integration. As we mentioned before, with an existing security system, sending alerts to it, allowing for automation and investigation from it, and having the ability to export our data for, you know, custom analysis, you know, all of this makes that security stack that we've been talking about better, all those different tools that we have. That's that operations triads that we talked about or visibility triad, we talked about, you know, our data makes that entire triad just better and makes the overall security staff better and makes overall security just, just better too. So that, that that's our solution on the security. >>Got it. On the security. And what you've talked about did a great job. The last three segments talking about the differences between the different technologies, data sources, why the complimentary and collaborative nature of them working together is so important for that comprehensive cybersecurity. So Tom, thank you so much for sharing such great and thoughtful information and insight for the audience. >>Oh, you're welcome. Thank you. >>My pleasure. We wanna thank you for watching the program today. Remember that all these videos are available@thecube.net, and you can check out today's news on Silicon angle.com and of course, net scout.com. We also wanna thank net scout for making this program possible and sponsoring the cube. I'm Lisa Martin for Tomski. Thanks for watching and bye for now.

Published Date : Jul 13 2022

SUMMARY :

as you know, this creates data silos, which leads to vis visibility gaps. with you the growing importance of advanced NDR. Tom, great to have you on the program, I always like to think of them as kind of the spreading amorphously you shared had shared some stats with me sophistication of the network, you know, today, you know, your network environment, So when it comes to gaining visibility into cyber threats, I, you talked about the, the sophistication And the third side is the network or the data you get from network detection, So talk, so all, all three perspectives are needed. of the SIM is that's all it gives you is just these logs or, come in, the endpoint will give you that deeper visibility, or advantage and disadvantages, but, you know, bringing them and using 'em together is, is the key. So can you crack that open more on some of the into the network that may be, you didn't know of B Y O D devices you have, or they know how to hide their tracks, you know, whether it be deleting files, as I think you were saying is really, really fundamental and more advanced network detection is, You know, you know, we, we at ESCO, this is, this is where we come from. And hold that thought, Tom, cause in a moment, you and I are gonna be back to talk about the role of NDR So my question, Tom, for you is, is NDR the And there's a saying, you know, So Tom, talk about the benefits of, of this complimenting, And one of the things that we know we spoke about before some the bottom line is simply that, you know, NDR is providing, as we said before, that that network context Can you talk a little bit about XDR that kind of sounds like a superhero name to me, important events are events that we, that you know, that the system deems high So my question to you is And then, you know, the other one we, So when you're talking to customers and they're talking about, And what I mean by broad is that you need visibility across your and Tom's gonna actually share with you some of the features and capabilities that you should be looking for You guys are Tom, it's great to have you back on the program. challenges, more and more enterprises are relying on NDR solutions that fill some of the So in the prior segments, we talked a lot about, you know, the, some of the differences between the previous generations and advanced NDR. So you have the ability to have to have the ability to And then, you know, is that what net scout means when you talk about visibility without borders? a robust set of metadata, you know, at, at full line rates up to a hundred in Can you talk a little bit about smart detection if we say, okay, advanced NDR needs to deliver this threat the data from the network that had occurred in, in, you know, prior to this. So really what organizations need is these tools that deliver a far more comprehensive the so, so one, you know, that SIM that, so is receiving So that could be part of the automation coming from the SIM slash. All right, Tom, take this home for me. and then, you know, create policies, blocking policies into firewall. triads that we talked about or visibility triad, we talked about, you know, our data makes that So Tom, thank you so much for sharing such great and thoughtful information and insight for the audience. Oh, you're welcome. We wanna thank you for watching the program today.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
TomPERSON

0.99+

Lisa MartinPERSON

0.99+

Dave AlanePERSON

0.99+

Tom BinkowskiPERSON

0.99+

Tom BinowskiPERSON

0.99+

Thomas BienkowskiPERSON

0.99+

GartnerORGANIZATION

0.99+

three sidesQUANTITY

0.99+

third sideQUANTITY

0.99+

NetscoutORGANIZATION

0.99+

todayDATE

0.99+

Palo AltoORGANIZATION

0.99+

thirdQUANTITY

0.99+

more than 60%QUANTITY

0.99+

AWSORGANIZATION

0.99+

first segmentQUANTITY

0.99+

over 30 yearsQUANTITY

0.99+

ESCOORGANIZATION

0.99+

BidenPERSON

0.99+

2022DATE

0.99+

March of this yearDATE

0.99+

three main data sourcesQUANTITY

0.99+

two sidesQUANTITY

0.99+

oneQUANTITY

0.99+

bothQUANTITY

0.98+

TodayDATE

0.98+

three thingsQUANTITY

0.98+

single toolQUANTITY

0.98+

thousandsQUANTITY

0.98+

MarchDATE

0.98+

TomskiPERSON

0.98+

30 days agoDATE

0.98+

first two segmentsQUANTITY

0.98+

GoogleORGANIZATION

0.98+

twoQUANTITY

0.98+

XDRTITLE

0.98+

OneQUANTITY

0.98+

net scoutORGANIZATION

0.98+

two thingsQUANTITY

0.97+

94%QUANTITY

0.97+

SQLTITLE

0.97+

firstQUANTITY

0.97+

over 30 yearsQUANTITY

0.97+

singleQUANTITY

0.96+

NetscaleORGANIZATION

0.96+

eachQUANTITY

0.96+

one vendorQUANTITY

0.95+

threeQUANTITY

0.95+

Larry Lancaster & Rod Bagg


 

(bright intro music) >> Full stack observability is all the rage today. As businesses lean in to digital, customer experience becomes ever more important, why? Well, it's obvious. Fickle consumers can switch brands in the blink of an eye or the click of a mouse. Technology companies have sprung into action, and the observability space is getting pretty crowded in an effort to simplify the process of figuring out the root cause of application performance problems without an army of PhDs and lab coats, also known as endlessly digging through logs, for example. We see decades-old software companies that have traditionally done monitoring or log analytics and/or application performance management stepping up their game. These established players, you know, they typically have deep feature sets and sometimes purpose built tools that attack one particular segment of the marketplace, and now, they're pivoting through M&A and some organic development trying to fill gaps in their portfolio, and then you got all these new entrants coming to the market claiming end to end visibility across the so-called modern cloud and now edge-native stacks. Meanwhile, cloud players are gaining traction and participating through a combination of native tooling combined with strong ecosystems to address this problem, but, you know, recent survey research from ETR confirms our thesis that no one company has at all. Here's the thing. Customers just want to figure out the root cause as quickly and efficiently as possible. It's one thing to observe the stack end to end, but the question is who is automating the observers? And that's why we're here today. Hello, my name is Dave Vellante, and welcome to this special "CUBE" presentation where we dig into root cause analysis and, specifically, how one company, Zebrium, is using unsupervised machine learning to detect anomalies and pinpoint root causes and delivering it as an automated service. In this session, we have two deep dives. First, we're going to dig into this exciting new field of RCA, root cause as a service, with two of the founders and technical experts behind Zebrium, and then we bring in two technical experts from Cisco, an early Zebrium customer who ran a POC with Zebrium's service, automating and identifying root cause problems within four very well established and well-known Cisco product lines including Webex client and UCS. I was pretty amazed at the results, and I think you'll be impressed as well. So thanks for being here. Let's get started with me right now is Larry Lancaster who's a founder and CTO of Zebrium, and he's joined by Rod Bagg who's a founder and Vice-President of Engineering at the company. Gents, welcome, thanks for coming on. >> Thanks. >> (indistinct). >> To be here. >> Great to be here. >> All right, Rod, talk to me. Talk to me about software downtime, what root cause means, all the buzzwords in your domain, MTTR and SLO, what do we need to know? >> Yeah, I mean, it's like you said. I mean, it's extremely important to our customers and to most businesses out there to drive up time and avoid as much downtime as possible. So, you know, when you think about it, all of these businesses, most companies nowadays, either their product is software and it's running, you know, running on the web, and that that's how you get a point click or their business depends on it and, you know, internal systems to drive their business and to run it. Now, when that is down, that is hugely impacting to them. So if you take a look, you know, way back, you know, 20, 30 years ago, software was simple. You know, there wasn't much to it. It was pretty monolithic, and maybe it took a couple of people to maintain it and keep it running. It wasn't really anything complicated about it. It was a single tenant piece of software. Today's software is so complicated, often running, you know, maybe hundreds of services to keep that or to actually implement what that software is doing. So as you point out, you know, enter the sort of observability space and the tools that are now in use to help monitor that software and make sure when something goes wrong, they know about it, but there's kind of an interesting stat around the observability space. So when you look at observability in the context or through the lens of the cost of downtime, it's really interesting. So observability tools are about a $20 billion market, okay? But the cost of downtime, even with that in place, is still hundreds of billions of dollars. So you're not taking much of a bite out of what the real problem is. You have to solve root cause and get to that fast. So it's all great to know that something went wrong, but you got to know why, and it it's our contention here that, you know, really, when you take a look at the observability space, you have metrics. That's a great tool. I mean, there's lots of great tools out there, you know, around metrics monitoring that's going to tell you when something went wrong. It's very rarely it's going to tell you why. Similarly for tracing, it's going to point you to where the issue is. It's going to take you through that stack and probably pinpoint where you're being, you know, where it's happening or where something is running slow potentially. So that's great, but again, the root cause of why it's happening is going to be buried in log files, and I can expand on that a little bit more, but, you know, when you're a software developer, and you're writing your software, those log files are a wealth of information. It's just a set of breadcrumbs that are littered with facts about how the software is behaving and why it's doing what it's doing or why it went wrong, and it's that that really gets you to the root cause very fast, and that's, our contention is that these software systems are so complex nowadays, and that the root cause is lying in those logs. So how do you get there fast? You know, we would contend that you better automate that or you're just doomed for failure, and that's where we come in. >> Great. >> Getting to that request. >> Thank you, Rod. You know, it's interesting. You talk about the $20 billion market. There's an analogy with security, right? We spend 80, $100 billion a year on securing our infrastructure, and yet we lose, probably, closer to a trillion dollars a year in breaches, and there's a similar analogy here. 20 billion could be 5x in downtime impacts or more. Okay, let's go to Larry. Tell us a little bit more about Zebrium. I'm interested always to ask a founder why you started the company. Rod touched on that a little bit. You guys have invented this concept of RCAs. What does it mean? What problems does it solve? And how does it solve the problem? Let's get into it. >> Yeah, hey, thanks, Dave. So I think when you said, you know, who's automating the observer? That's a great way to think about it because what observability really means is it's a property of a system that means you can see into it. You can observe the internal state, and that makes it easier to troubleshoot, right? But the problem is if it's too complicated, you just push the bottleneck up to your eyeball. There's only so much a person can filter through manually, right? And I love the way you put that. So that's a great way to think about it is automating the observer. Now, of course, it means that, you know, you reduce your MTTR, you meet your service level objectives, all that stuff, you improve customer experience, that's all true, but it's important to step back and realize like we have cracked a real nut here. People have been trying to figure out how to automate this part of sort of the troubleshooting experience, this human part of finding the root cause indicators for a long time, and until Zebrium came along, I would argue no one's really done it right. So, you know, I think it's also important, you know, as we step back, we can probably look forward five to 10 years and say, "Everyone's going to look back and say, 'How did we do all this manually?'" You're going to see this sort of last mile of observability and troubleshooting is going to be automated everywhere because otherwise, you know, people are just, they're not going to be able to scale their business. So, you know, I think one more thing that's important to point out is, you know, I think Zebrium, you know, it's one thing to have the technology, but we've learned we need to deliver it right where people are today. You can't just expect people to dive into a new tool. So, you know, we're looking at, you know, if you look at Zebrium, you'll put us on your dashboard, and we don't care what kind of a dashboard it is. It could be, you know, Datadog, New Relic, Elastic, Dynatrace, Grafana, AppDynamics, ScienceLogic, we don't care. You know, they're all our friends. So we're more interested in getting to that root cause than trying to fight, you know, these incumbents and all that stuff, yeah. >> Yeah, so interesting. Again, another analogy I think about, you know, you talked about automation, where to look back, and say, "This is what- We're never going to do this again." It's like provisioning LANs. Nobody provisioned LANs anymore. It's all automated. >> That's correct. >> So, Larry, stay with you. The skeptic in me says, "This sounds amazing," but if, you know, it probably too good to be true. Tell us how it works. >> Yeah, so that's interesting. So Cisco came along and they were equally skeptical. So what they did was they took a couple of months, and they did a very detailed study, and they got together 192 incidents across four product lines where they knew that the root cause was in the logs, and they knew what that root cause was because they'd had their best engineers, you know, work on those cases and take detailed notes of the incidents that had taken place, and so they ran that data through the Zebrium software, and what they found was that in more than 95% of those incidents, Zebrium reflected the correct root cause indicators at the correct time. Like that blew us away. When we saw that kind of evidence, Dave, I have to tell you, everyone was just jumping up and down. It was like, you know, it was like the Apollo Command Center, you know, when they finally, (Dave laughs) you know, touchdown on the moon kind of thing. So, you know, it's really exciting at a point in time to be at the company, like just seeing everything finally being proven out according to this vision. I'm going to tell you one more story, which is actually one of my favorites, because we got a chance to work with Seagate Lyve Cloud. So they're, you know, a hyper modern, you know, SaaS business. They're an S3 competitor. Zoom has their files stored on Lyve Cloud to give, you know, to let you know who they are. So, essentially, what happened was they were in alpha, in their early access, and they had an outage, and it was pretty bad. I mean, it went on for longer than a day, actually, before they were completely restored, and it was, you know, fortunately, for them, it was early access. So no one was expecting, you know, uptime, you know, service level objectives and so on, but they were scared because they realized if something like this happens in production, you know, they're screwed. So what they did was they saw Zebrium, they did some research, they saw Zebrium. They went in a staging environment, recreated the exact (indistinct) that they'd had, and what they saw was, immediately, Zebrium pops up a root cause report that tells them exactly the root cause that they took over a day to find. These are the kind of stories that let us know we're onto something transformational. >> Yeah, that's great. I mean, you guys are jumping up and down. I'm sure, we're going to hear from Cisco later. I bet you, they were jumping up and down, too, 'cause they didn't have to do all that heavy lifting anymore. So Rod, Larry's just sort of implying that or, actually, you guys both talked about that your tool's agnostic. So how does one actually use the service? How do I deploy it? >> Yeah, so let me step back. So when we talk about logs, right? Like, you know, all these red crumbs being in logs and everything else. So, you know, they are a great wealth of, you know, information, but people hate dealing with them. I mean, they hate having to go in and figure out what log to look at. In fact, you know, we had one of our, or we've heard from several of our customers now prior to using Zebrium, but when they're, you know, have some issue, and they know there's something wrong, something on their dashboard has told them that something's wrong, maybe a metrics is, you know, taken a blip or something's happened that they know there's a problem, we've heard from them that it can take like a number of hours just to get to the right set of logs, like figuring out over these hundreds of services where the logs are to get to them, maybe searching in a log manager, just to get into the right context even can take hours. So, you know, that's obviously the problem we solve, but, you know, we don't want them just looking at logs. I mean, you know, we don't want to put 'em back in the thing they don't like doing 'cause people don't do what they don't like doing. So we put it up on the dashboard. So if something is going wrong with your metrics, and that's the indicator or maybe it's something with tracing that you're sort of digging through now that you know something's wrong, we will be right on that same dashboard. So we're deployed as a SaaS service. You send us your logs. You click on one of our integrations, and we integrate with all these tools that Larry's talked about, and when we detect anything that is a root cause report, it will show up on your dashboard in the same timeline as those blips in your metrics. So when you see something going wrong, and you know there's an issue, take a look at the portion of your dashboard that is us, and we're going to tell you why. We're going to get you to the why that went wrong. Not no other work could be- You can, you know, also click down and click through to us so that you land up in our portal if you want to do some more digging around if you need to or whatever, maybe to get some context, what have you, but it's fair that you ever need to do that. The answer should be right there on your dashboard, and that's how we expect people to use it. We don't want them digging in logs and going through things. We want it to be right in their workflow. >> Great, thank you, Larry. So Rod, we talked about Cisco. We're going to hear more from them in a moment and Seagate. I would think this is like a perfect solution for a SaaS provider, anybody doing AIOps, do you have some examples of those types of firms leaning into this? >> Yeah, a couple of great, well, I mean, we got many of them, but couple that I'll touch on. We have an actual AIOps company that was looking for, you know, sort of some complimentary technology and so on, and so they decided to just put us through our paces by having one of their own SREs sign up for our service in our SaaS environment and send the logs from their system to us, you know, and just see how we did. So it turned out we ended up talking back to this SRE like a week after he had installed the product, you know, signed up, and then, you know, started sending us logs, and, you know, he was hemming and hawing saying that he was busy like, you know, like every SRE is, and that he didn't have a chance to really do much with us yet, and, you know, we just, you know, having this conversation on the phone, and he comes to tell us that, "Yeah, I've been busy because we had this, you know, terrible outage like, you know, five days ago," and we said like, "Okay, did you actually look on the Zebrium dashboard?" (laughs) And he goes, "You know what? I didn't even think to do it yet. I mean, I'd just been so busy and frazzled." So we have an integration with that company. He hadn't put that integration in so it wasn't in his dashboard yet, but it was certainly on ours. So he went there and he looks on the day like, you know, on the time range of when he had this incident, and right at the very top of the page on our portal was the incident with the root cause, and he was flabbergasted. It literally would've saved him hours and hours and hours. They had this issue going on for over 24 hours, and we had the answer right there in five minutes, and it was crazy, and we get that kind of story. It's just like the Seagate one. If you use us and you have a problem, we're going to detect it, and you're going to hear from Cisco how successful we are at detecting things. I mean, it'll be there when you have a problem. In SaaS companies, you know, one of our customers is Archera. They do cost optimizations for cloud properties, you know, for AWS optimization, Google cloud, and so on, but they use our software, and they have a lot of interaction, obviously, with these cloud vendors and the APIs of those cloud vendors. So, you know, in order to figure out you're costing at AWS, they're using all those APIs. So it turned out, you know, they had some issue where their services were breaking and we had that root cause report right on the screen, again, within five minutes that was pointing to an API problem with Google, and they had changed one of their APIs, and Archera was not aware of it. So their stuff was breaking because of a change downstream that we had caught, and I'll just tell you one last one because it's somewhat related to one of these cloud vendors of, you know, big cloud vendor who had an outage couple of months ago, and it's interesting because, you know, lot of our customers will set up shared Slack channels with us where we're monitoring or seeing their incidents as well as they are. So we get a little Slack representation of the incident that we detected for them or the root cause that we've detected for them, and that's in a shared community channel. So we could see this happening when that AWS outage happened. We could see our customers getting impacted by that AWS outage and the root cause of what was going on there in AWS that was impacting our customers, that was showing up in our incidents. Now, we didn't obviously, you know, have the very root cause of what was going on in AWS per se, but we were getting to the root cause of why our customer's applications were failing, and that was because of issues going on at AWS. >> Very interesting. I mean, I think one of your biggest challenge is going to be getting people's attention because these SREs is so busy, their hair's on fire. (all laughs) You know, he's like, "Hey, chap, I'm going to show you, look at this." >> I tell you. You get their attention, they love it. I mean, this AIOps company, I didn't even tell you the punchline there, but, you know, they had this incident that occurred that we found and, quite literally, the next week, they ended up signing up as a paid customer, so. >> That's great, and Larry, give you the last word. I mean, you know, Rod was talking about, you know, changes in APIs, and, you know, there's still a lot of scripts out there. You guys, if I understand it correctly, run both as a service in the cloud and you can run on-prem, which is important because there's a lot of sensitive information in logs and people don't want to leave. >> That's right, absolutely. >> But, yeah, close it out here. >> Yeah, I mean, you can, that's right, you can run it on-prem, just like we run it in our cloud. You can run it in your cloud or on your own infrastructure. Now, that's all true. You know, I think the one hurdle now that we have left as a company is getting the word out and getting people to believe that this is actually possible and try it for themselves. You don't believe it? Do a POC, try it yourself. And, you know, people have become so jaded by the lack of, you know, real sort of innovation in the software industry for the last 10 years that it's hard to get people to... But guys, you got to give it a shot. I'm telling you. I'm telling you right now, it works, and you'll hear more about that from one of our customers in a minute. >> Alright guys, thanks so much. Great story, really appreciate you sharing. >> Thank you. >> Yeah, thanks, Dave. Appreciate the time. >> Okay, in a moment, we're going to hear from Cisco who is the customer in this case example, and a company that is... Look, they have quite an impressive suite of observability tooling, and they've done a pretty compelling proof of concept with Zebrium using real data on some Cisco products that you've heard of like Webex. So stay tuned and learn about how you can really take advantage of this new technology called root cause as a service. You're watching "theCUBE", the leader in enterprise and emerging tech coverage. (bright outro music)

Published Date : May 25 2022

SUMMARY :

and then you got all these new entrants all the buzzwords in your and that that's how you get a point click why you started the company. Now, of course, it means that, you know, about, you know, you but if, you know, it and it was, you know, I mean, you guys are jumping up and down. I mean, you know, we do you have some examples saying that he was busy like, you know, is going to be getting people's attention but, you know, they had I mean, you know, Rod was talking by the lack of, you know, appreciate you sharing. Appreciate the time. So stay tuned and learn about how you can

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Larry LancasterPERSON

0.99+

DavePERSON

0.99+

RodPERSON

0.99+

SeagateORGANIZATION

0.99+

twoQUANTITY

0.99+

LarryPERSON

0.99+

CiscoORGANIZATION

0.99+

Rod BaggPERSON

0.99+

AWSORGANIZATION

0.99+

ZebriumORGANIZATION

0.99+

WebexORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

UCSORGANIZATION

0.99+

$20 billionQUANTITY

0.99+

20 billionQUANTITY

0.99+

GrafanaORGANIZATION

0.99+

192 incidentsQUANTITY

0.99+

fiveQUANTITY

0.99+

DynatraceORGANIZATION

0.99+

two technical expertsQUANTITY

0.99+

AppDynamicsORGANIZATION

0.99+

ScienceLogicORGANIZATION

0.99+

New RelicORGANIZATION

0.99+

DatadogORGANIZATION

0.99+

FirstQUANTITY

0.99+

five minutesQUANTITY

0.99+

ElasticORGANIZATION

0.99+

ETRORGANIZATION

0.99+

five days agoDATE

0.99+

10 yearsQUANTITY

0.99+

hundredsQUANTITY

0.99+

oneQUANTITY

0.98+

5xQUANTITY

0.98+

more than 95%QUANTITY

0.98+

next weekDATE

0.98+

bothQUANTITY

0.98+

couple of months agoDATE

0.97+

20DATE

0.97+

ZoomORGANIZATION

0.97+

ArcheraORGANIZATION

0.97+

todayDATE

0.97+

Seagate Lyve CloudORGANIZATION

0.96+

over 24 hoursQUANTITY

0.95+

over a dayQUANTITY

0.95+

TodayDATE

0.95+

fourQUANTITY

0.95+

AIOpsORGANIZATION

0.95+

hundreds of servicesQUANTITY

0.94+

decadesQUANTITY

0.94+

one thingQUANTITY

0.92+

DD Dasgupta, Cisco | Simplifying Hybrid Cloud


 

>>The introduction of the modern public cloud in the mid two thousands permanently changed the way we think about it at the heart of it. The cloud operating model attacked one of the biggest problems in enterprise infrastructure, human labor costs more than half of it, budgets were spent on people. And much of that effort added little or no differentiable value to the business. The automation of provisioning management, recovery optimization and decommissioning infrastructure resources has gone mainstream as organizations demand a cloud-like model across all their application infrastructure, irrespective of its physical location. This is not only cut costs, but it's also improved quality and reduced human error. Hello everyone. My name is Dave Vellante and welcome to simplifying hybrid cloud made possible by Cisco today, we're going to explore hybrid cloud as an operating model for organizations or the definition of cloud is expanding. Cloud is no longer an abstract set of remote services, you know, somewhere out in the clouds. >>No, it's an operating model that spans public cloud on premises infrastructure. And it's also moving to edge locations. This trend is happening at massive scale. While at the same time, preserving granular control of resources. It's an entirely new game where it managers must think differently to deal with this complexity. And the environment is constantly changing the growth and diversity of applications continues. And now we're living in a world where the workforce is remote hybrid work is now a permanent state and will be the dominant model. In fact, a recent survey of CIO is by enterprise technology. Research ETR indicates that organizations expect 36% of their workers will be operating in a hybrid mode splitting time between remote work and in office environments. This puts added pressure on the application infrastructure required to support these workers. The underlying technology must be more dynamic and adaptable to accommodate constant change. >>So the challenge for it managers is ensuring that modern applications can be run with a cloud-like experience that spans on-prem public cloud and edge locations. This is the future of it. Now today we have three segments where we're going to dig into these issues and trends surrounding hybrid cloud. First up is Didi Dasgupta, who will set the stage and share with us how Cisco is approaching this challenge. Next we're going to hear from Maneesh Agra wall and Darren Williams, who will help us unpack HyperFlex, which is Cisco's hyper-converged infrastructure offering. And finally, our third segment we'll drill into unified compute more than a decade ago. Cisco pioneered the concept of bringing together compute with networking in a single offering. Cisco frankly changed the legacy server market with UCS unified compute system. The X series is Cisco's next generation architecture for the coming decade, and we'll explore how it fits into the world of hybrid cloud and its role in simplifying the complexity that we just discussed. So thanks for being here. Let's go. >>Okay. Let's start things off. Gus is back on the cube to talk about how we're going to simplify hybrid cloud complexity. DD. Welcome. Good to see you again. >>Hey Dave, thanks for having me. Good to see you again. Yeah, >>Our pleasure here. Uh, look, let's start with big picture. Talk about the trends you're seeing from your customers. >>Well, I think first off every customer, these days is a public cloud customer. They do have their on-premise data centers, but um, every customer is looking to move workloads, use services, cloud native services from the public cloud. I think that's, that's one of the big things that we're seeing, um, while that is happening. We're also seeing a pretty dramatic evolution of the application landscape itself. You've got bare metal applications. You always have virtualized applications. Um, and then most modern applications are, um, are containerized and, you know, managed by Kubernetes. So I think we're seeing a big change in, uh, uh, in the application landscape as well, and probably, you know, triggered by the first two things that I mentioned, the execution venue of the applications, and then the applications themselves it's triggering a change in the it organizations in the development organizations and sort of not only how they work within their organizations, but how they work across, um, all of these different organizations. So I think those are some of the big things that, uh, that I hear about when I talk to customers. >>Well, so it's interesting. I often say Cisco kind of changed the game and in server and compute when it, when it developed the original UCS and you remember there were organizational considerations back then bringing together the server team and the networking team. And of course the bus storage team. And now you mentioned Kubernetes, that is a total game changer with regard to whole the application development process. So you have to think about a new strategy in that regard. So how have you evolved your strategy? What is your strategy to help customers simplify, accelerate their hybrid cloud journey in that context? >>No, I think you're right. Um, back to the origins of UCS, I mean, we widen the networking company, builder server, well, we just enabled with the best networking technology. So we do compute that and now doing something similar on the software, actually the software for our, um, for our and you know, we've been on this journey for about four years. Um, but the software is called intersite and, you know, we started out with intersite being just the element manager management software for Cisco's compute and hyperconverged devices. Um, but then we've evolved it over the last few years because we believe that the customer shouldn't have to manage a separate piece of software would do manage the hardware of the underlying hardware and then a separate tool to connect it to a public cloud. And then the third tool to do optimization, workload optimization or performance optimization or cost optimization, a fourth tool do now manage Kubernetes and not just in one cluster, one cloud, but multi cluster multicloud. >>They should not have to have a fifth tool that does go into observability. Anyway, I can go on and on, but you get the idea. We wanted to bring everything onto that same platform that manage their infrastructure, but it's also the platform that enables the simplicity of hybrid cloud operations, automation. It's the same platform on which you can use to manage the Kubernetes infrastructure, uh, Kubernetes clusters. I mean, whether it's on-prem or in the cloud. So overall that's the strategy, bring it to a single platform and a platform is a loaded word, but we'll get into that a little bit, uh, you know, in this, in this conversation, but that's the overall strategy simplify? >>Well, you know, we brought a platform, I, I like to say platform beats products, but you know, there was a day and you could still point to some examples today in the it industry where, Hey, another tool we can monetize that and another one to solve a different problem. We can monetize that. Uh, and so tell me more about how intersite came about. You obviously sat back, you saw what your customers were going through. You said we can do better. So w tell us the story there. >>Yeah, absolutely. So look, it started with, um, you know, three or four guys in getting in a room and saying, look, we've had this, you know, management software, UCS manager, UCS director, and these are just the Cisco's management, you know, uh, for our softwares, for our own platform. Then every company has their, their own flavor. We said, we took on this ball goal of like, we're not when we rewrite this or we improve on this, we're not going to just write another piece of software. We're going to create a cloud service, or we're going to create a SAS offering because the same is the infrastructure built by us, whether it's on networking or compute or on software, how do our customers use it? Well, they use it to write and run their applications, their SAS services, every customer, every customer, every company today is a software company. >>They live and die by how they work or don't. And so we were like, we want to eat our own dog food here, right? We want to deliver this as a SAS offering. And so that's how it started being on this journey for about four years, tens of thousands of customers. Um, but it was a pretty big boat patient because, you know, um, the big change with SAS is, is you're, uh, as you're familiar today is the job of now managing this, this piece of software is not on the customer, it's on the vendor, right? This can never go down. We have a release every Thursday, new capabilities, and we've learned so much along the way, whether it's around scalability, reliability, um, working with, uh, our own companies, security organizations on what can or cannot be in a SAS service. Um, so again, it's just been a wonderful journey, but, uh, I wanted to point out, we are in some ways eating our own dog food because we built a SAS application that helps other companies deliver their SAS applications. >>So Cisco, I look at Cisco's business model and I compete, I of course, compare it to other companies in the infrastructure business, and obviously a very profitable company or large company you're growing faster than, than, than most of the traditional competitors. And so that means that you have more to invest. You, you, you can, you can afford things like doing stock buybacks, and you can invest in R and D. You don't have to make those hard trade-offs that a lot of your competitors have to make. So It's never enough, right. Never enough. But, but, but in speaking of R and D and innovations that your intro introducing I'm specifically interested in, how are you dealing with innovations to help simplify hybrid cloud in the operations there and prove flexibility and things around cloud native initiatives as well? >>Absolutely. Absolutely. Well, look, I think one of the fundamentals where we're philosophically different from a lot of options that I see in the industry is we don't need to build everything ourselves. We don't, I just need to create a damn good platform with really good platform services, whether it's, you know, around, um, search ability, whether it's around logging, whether it's around, you know, access control, multi-tenants, I need to create a really good platform and make it open. I do not need to go on a shopping spree to buy 17 and a half companies, and then figure out how to stitch it all together. Cause it's, it's almost impossible if it's impossible for us as a vendor, it's, it's three times more difficult, but for the customer who then has to consume it. So that was the philosophical difference in how we went about building in our sites. >>We've created a harden platform that's, that's always on. Okay. And then you, then the magic starts happening. Then you get partners, whether it is, um, you know, infrastructure partners like, uh, you know, some of our storage partners like NetApp or your, you know, others who want their conversion infrastructure is also to be managed or are other SAS offerings and software vendors, um, who have now become partners. Like we do not, we did not write to Terraform, you know, but we partnered with Tashi and now, uh, you know, Terraform services available on the intercept platform. We did not write all the algorithms for workload optimization between a public cloud and on-prem, we partnered with a company called ergonomics. And so that's now an offering on the intercept platform. So that's where we're philosophically different and sort of, uh, you know, w how we have gone about this. >>And, uh, it actually dovetails well into some of the new things that I want to talk about today that we're announcing on the inner side platform where we're actually announcing the ability to attach and, and be able to manage Kubernetes clusters, which are not on prem. They're actually on AWS, on Azure, uh, soon coming on, uh, on GC, on, uh, on GKE as well. So it really doesn't matter. We're not telling a customer if you're comfortable building your applications and running Kubernetes clusters on, you know, in AWS or Azure, stay there, but in terms of monitoring, managing it, you can use in our site is since you're using it on prem, you can use that same piece of software to manage Kubernetes clusters in a public cloud, or even manage the end in, in a, in an easy to instance. So, >>So the fact that you could, you mentioned storage, pure net app. So it's intersite can manage that infrastructure. I remember the hot-seat deal. It caught my attention. And of course, a lot of companies want to partner with Cisco because you've got such a strong ecosystem, but I thought that was an interesting move Turbonomic. You mentioned. And now you're saying Kubernetes in the public cloud, so a lot different than it was 10 years ago. Um, so my last question is how do you see this hybrid cloud evolving? I mean, you had private cloud and you had public cloud, it was kind of a tug of war there. We see these, these, these two worlds coming together. How will that evolve over the next few years? >>Well, I think it's, it's the evolution of the model and really look at depending on, you know, how you're keeping time. But I think one thing has become very clear. Again, we may be eating our own dog food. I mean, innercise is a hybrid cloud SAS applications that we've learned. Some of these lessons ourselves. One thing is referred that customers are looking for a consistent model, whether it's on the edge, on the polo public cloud, on-prem no data center. It doesn't matter if they're looking for a consistent model for operations, for governings or upgrades, or they're looking for a consistent operating model. What my crystal ball doesn't mean. There's going to be the rise of more custom plugs. It's still going to be hybrid. So allegations will want to reside wherever it makes most sense for them, which is most as the data moving data is the most expensive thing. >>So it's going to be located with the data that's on the edge. We on the air colo public cloud doesn't matter, but, um, basically you're gonna see more custom clouds, more industry-specific clouds, you know, whether it's for finance or constipation or retail industry specific, I think sovereign is going to play a huge role. Uh, you know, today, if you look at the cloud providers, you know, American and Chinese companies that these, the rest of the world, when it goes to making, you know, a good digital citizens, they're they're people and, you know, whether it's, gonna play control, um, and then distributed cloud also on edge, um, is, is gonna be the next frontier. And so that's where we are trying to line up our strategy. And if I had to sum it up in one sentence, it's really your cloud, your way, every customer is on a different journey. They will have their choice of like workload data, um, you know, upgrading your liability concerns. That's really what, what we are trying to enable for our customers. >>Uh, you know, I think I agree with doing that custom clouds. And I think what you're seeing is you said every company is a software company. Every company is also becoming a cloud company. They're building their own abstraction layers. They're connecting their on-prem to their, to their public cloud. They're doing that. They're, they're doing that across clouds. And they're looking for companies like Cisco to do the hard work. It give me an infrastructure layer that I can build value on top of, because I'm going to take my financial services business to my cloud model or my healthcare business. I don't want to mess around with it. I'm not going to develop, you know, custom infrastructure like an Amazon does. I'm going to look to Cisco in your R and D to do that. Do you buy that? >>Absolutely. I think, again, it goes back back to what I was talking about with blacks. You got to get the world, uh, a solid open, flexible platform, and it's flexible in terms of the technology flexible in how they want to consume it at some customers are fine with a SAS software. What if I talk to, you know, my friends in the federal team now that does not work so how they want to consume it, they want to, you know, our perspective sovereignty, we talked about it. So, you know, job for an infrastructure vendor like ourselves is give the world an open platform, give them the knobs, give them the right API. Um, but the last thing I would mention is, you know, there's still a place for innovation in hardware. Some of my colleagues are gonna engage into some of those, um, you know, details, whether it's on our X series platform or HyperFlex. Um, but it's really, it's going to, it's going to be software defined to SAS service and then, you know, give the world and open rock-solid platform, >>Got to run on something. All right, thanks DDL. It was a pleasure to have you in the queue. Great to see you. You're welcome in a moment, I'll be back to dig into hyperconverged and where HyperFlex fits and how it may even help with addressing some of the supply chain challenges that we're seeing in the market today.

Published Date : Mar 23 2022

SUMMARY :

abstract set of remote services, you know, somewhere out in the clouds. the application infrastructure required to support these workers. So the challenge for it managers is ensuring that modern applications Gus is back on the cube to talk about how we're going to simplify Good to see you again. Talk about the trends you're seeing from you know, managed by Kubernetes. And of course the bus storage team. Um, but the software is called intersite and, you know, we started out with intersite being It's the same platform on which you can use to manage the Kubernetes but you know, there was a day and you could still point to some examples today in the it industry where, So look, it started with, um, you know, patient because, you know, um, the big change with SAS is, is you're, So Cisco, I look at Cisco's business model and I compete, I of course, compare it to other companies in the infrastructure whether it's around logging, whether it's around, you know, access control, So that's where we're philosophically different and sort of, uh, you know, clusters on, you know, in AWS or Azure, stay there, So the fact that you could, you mentioned storage, pure net app. on, you know, how you're keeping time. data, um, you know, upgrading your liability concerns. I'm not going to develop, you know, custom infrastructure like an Amazon but the last thing I would mention is, you know, there's still a place for innovation in hardware. It was a pleasure to have you in the queue.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Didi DasguptaPERSON

0.99+

Dave VellantePERSON

0.99+

CiscoORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

36%QUANTITY

0.99+

FirstQUANTITY

0.99+

Darren WilliamsPERSON

0.99+

TerraformORGANIZATION

0.99+

threeQUANTITY

0.99+

third toolQUANTITY

0.99+

AWSORGANIZATION

0.99+

10 years agoDATE

0.99+

one sentenceQUANTITY

0.99+

three timesQUANTITY

0.99+

17 and a half companiesQUANTITY

0.99+

a dayQUANTITY

0.99+

fourth toolQUANTITY

0.99+

third segmentQUANTITY

0.99+

DDLPERSON

0.99+

fifth toolQUANTITY

0.98+

GusPERSON

0.98+

todayDATE

0.98+

KubernetesORGANIZATION

0.98+

one clusterQUANTITY

0.98+

two worldsQUANTITY

0.98+

DD DasguptaPERSON

0.98+

one cloudQUANTITY

0.98+

one thingQUANTITY

0.97+

about four yearsQUANTITY

0.97+

four guysQUANTITY

0.97+

first two thingsQUANTITY

0.97+

firstQUANTITY

0.97+

SASTITLE

0.96+

tens of thousandsQUANTITY

0.96+

oneQUANTITY

0.96+

ChineseOTHER

0.96+

KubernetesTITLE

0.96+

three segmentsQUANTITY

0.94+

TashiORGANIZATION

0.94+

SASORGANIZATION

0.93+

KubernetesPERSON

0.93+

more than a decade agoDATE

0.93+

single platformQUANTITY

0.92+

HyperFlexTITLE

0.91+

UCSORGANIZATION

0.91+

AzureTITLE

0.91+

One thingQUANTITY

0.9+

TurbonomicORGANIZATION

0.89+

ergonomicsORGANIZATION

0.89+

ETRORGANIZATION

0.85+

ThursdayDATE

0.82+

more than half ofQUANTITY

0.79+

Maneesh Agra wallPERSON

0.79+

mid two thousandsQUANTITY

0.78+

yearsDATE

0.77+

customersQUANTITY

0.76+

XCOMMERCIAL_ITEM

0.74+

single offeringQUANTITY

0.72+

X seriesCOMMERCIAL_ITEM

0.68+

next few yearsDATE

0.65+

Netscout Threat Report Welcome Lisa Martin


 

>>The pandemic saw a majority of employees working remotely, as we all know, and the world turning to digital services, which caused an uptick in cyber attacks because almost all business was conducted virtually well, the unprecedented events of 2020 led to an enormous and extended upswing in innovation for threat actors. And it's not going away anytime soon. This is according to our colleagues at NetScout and an excerpt from its first half 2021 threat intelligence report. And this event, we're going to unpack Netscout's semi-annual security report for the second half of 2021, which outlines how and why these attacks are carried out and what individuals and businesses can do to prevent attacks. Now, one of the things that NetScout discovered in the second half threat intelligence report is that these cyber attacks, they're not motivated by a single factor on notable example of a recent attack just last month, where government and private websites in Ukraine were knocked offline in a massive distributed denial of service DDoSs attack. >>As Russian troops moved into contested areas in the east of the country. My name is Lisa Martin. And today on this special Q presentation, Richard Hummel joins me manager of threat intelligence at NetScout. He and I are going to explore three of the key findings in the second half of 2021 threat intelligence reports. In the first segment, Richard's going to talk with me about the dark side of DDoS for hire. And one of the things that you're going to learn is that launching DDoS attacks with illicit DDoS for hire services no longer requires a nominal fee in segment two. Richard's going to talk to me about the rise of server class bot net armies. And as Richard will discuss recently, adversaries not only increased the size of IOT botnets, but also conscripted high powered servers into larger button nuts. Then we'll come back for a third and final segment to discuss the vertical industries where attackers really zeroed in for DDoSs attacks in the second half. And here Richard's going to explore some of the verticals that haven't traditionally been in the crosshairs, such as a software publishers and computer manufacturing. All right, guys, let's do this. Here comes our first segment.

Published Date : Mar 22 2022

SUMMARY :

Now, one of the things that NetScout discovered in the second half threat intelligence And one of the things that you're going to learn

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

Richard HummelPERSON

0.99+

RichardPERSON

0.99+

UkraineLOCATION

0.99+

NetScoutORGANIZATION

0.99+

second halfQUANTITY

0.99+

last monthDATE

0.99+

first segmentQUANTITY

0.99+

NetscoutORGANIZATION

0.99+

oneQUANTITY

0.99+

todayDATE

0.98+

thirdQUANTITY

0.98+

pandemicEVENT

0.94+

2020DATE

0.94+

second half of 2021DATE

0.9+

single factorQUANTITY

0.88+

RussianOTHER

0.87+

three of the key findingsQUANTITY

0.82+

half 2021DATE

0.78+

half of 2021DATE

0.75+

secondQUANTITY

0.63+

thingsQUANTITY

0.53+

firstQUANTITY

0.49+

segment twoQUANTITY

0.47+

Cisco: Simplifying Hybrid Cloud


 

>> The introduction of the modern public cloud in the mid 2000s, permanently changed the way we think about IT. At the heart of it, the cloud operating model attacked one of the biggest problems in enterprise infrastructure, human labor costs. More than half of IT budgets were spent on people, and much of that effort added little or no differentiable value to the business. The automation of provisioning, management, recovery, optimization, and decommissioning infrastructure resources has gone mainstream as organizations demand a cloud-like model across all their application infrastructure, irrespective of its physical location. This has not only cut cost, but it's also improved quality and reduced human error. Hello everyone, my name is Dave Vellante and welcome to Simplifying Hybrid Cloud, made possible by Cisco. Today, we're going to explore Hybrid Cloud as an operating model for organizations. Now the definite of cloud is expanding. Cloud is no longer an abstract set of remote services, you know, somewhere out in the clouds. No, it's an operating model that spans public cloud, on-premises infrastructure, and it's also moving to edge locations. This trend is happening at massive scale. While at the same time, preserving granular control of resources. It's an entirely new game where IT managers must think differently to deal with this complexity. And the environment is constantly changing. The growth and diversity of applications continues. And now, we're living in a world where the workforce is remote. Hybrid work is now a permanent state and will be the dominant model. In fact, a recent survey of CIOs by Enterprise Technology Research, ETR, indicates that organizations expect 36% of their workers will be operating in a hybrid mode. Splitting time between remote work and in office environments. This puts added pressure on the application infrastructure required to support these workers. The underlying technology must be more dynamic and adaptable to accommodate constant change. So the challenge for IT managers is ensuring that modern applications can be run with a cloud-like experience that spans on-prem, public cloud, and edge locations. This is the future of IT. Now today, we have three segments where we're going to dig into these issues and trends surrounding Hybrid Cloud. First up, is DD Dasgupta, who will set the stage and share with us how Cisco is approaching this challenge. Next, we're going to hear from Manish Agarwal and Darren Williams, who will help us unpack HyperFlex which is Cisco's hyperconverged infrastructure offering. And finally, our third segment will drill into Unified Compute. More than a decade ago, Cisco pioneered the concept of bringing together compute with networking in a single offering. Cisco frankly, changed the legacy server market with UCS, Unified Compute System. The X-Series is Cisco's next generation architecture for the coming decade and we'll explore how it fits into the world of Hybrid Cloud, and its role in simplifying the complexity that we just discussed. So, thanks for being here. Let's go. (upbeat music playing) Okay, let's start things off. DD Dasgupta is back on theCUBE to talk about how we're going to simplify Hybrid Cloud complexity. DD welcome, good to see you again. >> Hey Dave, thanks for having me. Good to see you again. >> Yeah, our pleasure. Look, let's start with big picture. Talk about the trends you're seeing from your customers. >> Well, I think first off, every customer these days is a public cloud customer. They do have their on-premise data centers, but, every customer is looking to move workloads, new services, cloud native services from the public cloud. I think that's one of the big things that we're seeing. While that is happening, we're also seeing a pretty dramatic evolution of the application landscape itself. You've got, you know, bare metal applications, you always have virtualized applications, and then most modern applications are containerized, and, you know, managed by Kubernetes. So I think we're seeing a big change in, in the application landscape as well. And, probably, you know, triggered by the first two things that I mentioned, the execution venue of the applications, and then the applications themselves, it's triggering a change in the IT organizations in the development organizations and sort of not only how they work within their organizations, but how they work across all of these different organizations. So I think those are some of the big things that, that I hear about when I talk to customers. >> Well, so it's interesting. I often say Cisco kind of changed the game in server and compute when it developed the original UCS. And you remember there were organizational considerations back then bringing together the server team and the networking team and of course the storage team as well. And now you mentioned Kubernetes, that is a total game changer with regard to whole the application development process. So you have to think about a new strategy in that regard. So how have you evolved your strategy? What is your strategy to help customers simplify, accelerate their hybrid cloud journey in that context? >> No, I think you're right Dave, back to the origins of UCS and we, you know, why did a networking company build a server? Well, we just enabled with the best networking technologies so, would do compute better. And now, doing something similar on the software, actually the managing software for our hyperconvergence, for our, you know, Rack server, for our blade servers. And, you know, we've been on this journey for about four years. The software is called Intersight, and, you know, we started out with Intersight being just the element manager, the management software for Cisco's compute and hyperconverged devices. But then we've evolved it over the last few years because we believe that a customer shouldn't have to manage a separate piece of software, would do manage the hardware, the underlying hardware. And then a separate tool to connect it to a public cloud. And then a third tool to do optimization, workload optimization or performance optimization, or cost optimization. A fourth tool to now manage, you know, Kubernetes and like, not just in one cluster, one cloud, but multi-cluster, multi-cloud. They should not have to have a fifth tool that does, goes into observability anyway. I can go on and on, but you get the idea. We wanted to bring everything onto that same platform that manage their infrastructure. But it's also the platform that enables the simplicity of hybrid cloud operations, automation. It's the same platform on which you can use to manage the, the Kubernetes infrastructure, Kubernetes clusters, I mean, whether it's on-prem or in a cloud. So, overall that's the strategy. Bring it to a single platform, and a platform is a loaded word we'll get into that a little bit, you know, in this conversation, but, that's the overall strategy, simplify. >> Well, you know, you brought platform. I like to say platform beats products, but you know, there was a day, and you could still point to some examples today in the IT industry where, hey, another tool we can monetize that. And another one to solve a different problem, we can monetize that. And so, tell me more about how Intersight came about. You obviously sat back, you saw what your customers were going through, you said, "We can do better." So tell us the story there. >> Yeah, absolutely. So, look, it started with, you know, three or four guys in getting in a room and saying, "Look, we've had this, you know, management software, UCS manager, UCS director." And these are just the Cisco's management, you know, for our, softwares for our own platforms. And every company has their own flavor. We said, we took on this bold goal of like, we're not, when we rewrite this or we improve on this, we're not going to just write another piece of software. We're going to create a cloud service. Or we're going to create a SaaS offering. Because the same, the infrastructure built by us whether it's on networking or compute, or the cyber cloud software, how do our customers use it? Well, they use it to write and run their applications, their SaaS services, every customer, every customer, every company today is a software company. They live and die by how their applications work or don't. And so, we were like, "We want to eat our own dog food here," right? We want to deliver this as a SaaS offering. And so that's how it started, we've being on this journey for about four years, tens of thousands of customers. But it was a pretty big, bold ambition 'cause you know, the big change with SaaS as you're familiar Dave is, the job of now managing this piece of software, is not on the customer, it's on the vendor, right? This can never go down. We have a release every Thursday, new capabilities, and we've learned so much along the way, whether it's to announce scalability, reliability, working with, our own company's security organizations on what can or cannot be in a SaaS service. So again, it's been a wonderful journey, but, I wanted to point out, we are in some ways eating our own dog food 'cause we built a SaaS application that helps other companies deliver their SaaS applications. >> So Cisco, I look at Cisco's business model and I compare, of course compare it to other companies in the infrastructure business and, you're obviously a very profitable company, you're a large company, you're growing faster than most of the traditional competitors. And, so that means that you have more to invest. You, can afford things, like to you know, stock buybacks, and you can invest in R&D you don't have to make those hard trade offs that a lot of your competitors have to make, so-- >> You got to have a talk with my boss on the whole investment. >> Yeah, right. You'd never enough, right? Never enough. But in speaking of R&D and innovations that you're intro introducing, I'm specifically interested in, how are you dealing with innovations to help simplify hybrid cloud, the operations there, improve flexibility, and things around Cloud Native initiatives as well? >> Absolutely, absolutely. Well, look, I think, one of the fundamentals where we're kind of philosophically different from a lot of options that I see in the industry is, we don't need to build everything ourselves, we don't. I just need to create a damn good platform with really good platform services, whether it's, you know, around, searchability, whether it's around logging, whether it's around, you know, access control, multi-tenants. I need to create a really good platform, and make it open. I do not need to go on a shopping spree to buy 17 and 1/2 companies and then figure out how to stich it all together. 'Cause it's almost impossible. And if it's impossible for us as a vendor, it's three times more difficult for the customer who then has to consume it. So that was the philosophical difference and how we went about building Intersight. We've created a hardened platform that's always on, okay? And then you, then the magic starts happening. Then you get partners, whether it is, you know, infrastructure partners, like, you know, some of our storage partners like NetApp or PR, or you know, others, who want their conversion infrastructures also to be managed, or their other SaaS offerings and software vendors who have now become partners. Like we did not write Terraform, you know, but we partnered with Hashi and now, you know, Terraform service's available on the Intersight platform. We did not write all the algorithms for workload optimization between a public cloud and on-prem. We partner with a company called Turbonomic and so that's now an offering on the Intersight platform. So that's where we're philosophically different, in sort of, you know, how we have gone about this. And, it actually dovetails well into, some of the new things that I want to talk about today that we're announcing on the Intersight platform where we're actually announcing the ability to attach and be able to manage Kubernetes clusters which are not on-prem. They're actually on AWS, on Azure, soon coming on GC, on GKE as well. So it really doesn't matter. We're not telling a customer if you're comfortable building your applications and running Kubernetes clusters on, you know, in AWS or Azure, stay there. But in terms of monitoring, managing it, you can use Intersight, and since you're using it on-prem you can use that same piece of software to manage Kubernetes clusters in a public cloud. Or even manage DMS in a EC2 instance. So. >> Yeah so, the fact that you could, you mentioned Storage Pure, NetApp, so Intersight can manage that infrastructure. I remember the Hashi deal and I, it caught my attention. I mean, of course a lot of companies want to partner with Cisco 'cause you've got such a strong ecosystem, but I thought that was an interesting move, Turbonomic you mentioned. And now you're saying Kubernetes in the public cloud. So a lot different than it was 10 years ago. So my last question is, how do you see this hybrid cloud evolving? I mean, you had private cloud and you had public cloud, and it was kind of a tug of war there. We see these two worlds coming together. How will that evolve on for the next few years? >> Well, I think it's the evolution of the model and I, really look at Cloud, you know, 2.0 or 3.0, or depending on, you know, how you're keeping terms. But, I think one thing has become very clear again, we, we've be eating our own dog food, I mean, Intersight is a hybrid cloud SaaS application. So we've learned some of these lessons ourselves. One thing is for sure that the customers are looking for a consistent model, whether it's on the edge, on the COLO, public cloud, on-prem, no data center, it doesn't matter. They're looking for a consistent model for operations, for governance, for upgrades, for reliability. They're looking for a consistent operating model. What (indistinct) tells me I think there's going to be a rise of more custom clouds. It's still going to be hybrid, so applications will want to reside wherever it most makes most sense for them which is obviously data, 'cause you know, data is the most expensive thing. So it's going to be complicated with the data goes on the edge, will be on the edge, COLO, public cloud, doesn't matter. But, you're basically going to see more custom clouds, more industry specific clouds, you know, whether it's for finance, or transportation, or retail, industry specific, I think sovereignty is going to play a huge role, you know, today, if you look at the cloud provider there's a handful of, you know, American and Chinese companies, that leave the rest of the world out when it comes to making, you know, good digital citizens of their people and you know, whether it's data latency, data gravity, data sovereignty, I think that's going to play a huge role. Sovereignty's going to play a huge role. And the distributor cloud also called Edge, is going to be the next frontier. And so, that's where we are trying line up our strategy. And if I had to sum it up in one sentence, it's really, your cloud, your way. Every customer is on a different journey, they will have their choice of like workloads, data, you know, upgrade reliability concern. That's really what we are trying to enable for our customers. >> You know, I think I agree with you on that custom clouds. And I think what you're seeing is, you said every company is a software company. Every company is also becoming a cloud company. They're building their own abstraction layers, they're connecting their on-prem to their public cloud. They're doing that across clouds, and they're looking for companies like Cisco to do the hard work, and give me an infrastructure layer that I can build value on top of. 'Cause I'm going to take my financial services business to my cloud model, or my healthcare business. I don't want to mess around with, I'm not going to develop, you know, custom infrastructure like an Amazon does. I'm going to look to Cisco and your R&D to do that. Do you buy that? >> Absolutely. I think again, it goes back to what I was talking about with platform. You got to give the world a solid open, flexible platform. And flexible in terms of the technology, flexible in how they want to consume it. Some of our customers are fine with the SaaS, you know, software. But if I talk to, you know, my friends in the federal team, no, that does not work. And so, how they want to consume it, they want to, you know, (indistinct) you know, sovereignty we talked about. So, I think, you know, job for an infrastructure vendor like ourselves is to give the world a open platform, give them the knobs, give them the right API tool kit. But the last thing I will mention is, you know, there's still a place for innovation in hardware. And I think some of my colleagues are going to get into some of those, you know, details, whether it's on our X-Series, you know, platform or HyperFlex, but it's really, it's going to be software defined, it's a SaaS service and then, you know, give the world an open rock solid platform. >> Got to run on something All right, Thanks DD, always a pleasure to have you on the, theCUBE, great to see you. >> Thanks for having me. >> You're welcome. In a moment, I'll be back to dig into hyperconverged, and where HyperFlex fits, and how it may even help with addressing some of the supply chain challenges that we're seeing in the market today. >> It used to be all your infrastructure was managed here. But things got more complex in distributing, and now IT operations need to be managed everywhere. But what if you could manage everywhere from somewhere? One scalable place that brings together your teams, technology, and operations. Both on-prem and in the cloud. One automated place that provides full stack visibility to help you optimize performance and stay ahead of problems. One secure place where everyone can work better, faster, and seamlessly together. That's the Cisco Intersight cloud operations platform. The time saving, cost reducing, risk managing solution for your whole IT environment, now and into the future of this ever-changing world of IT. (upbeat music) >> With me now are Manish Agarwal, senior director of product management for HyperFlex at Cisco, @flash4all, number four, I love that, on Twitter. And Darren Williams, the director of business development and sales for Cisco. MrHyperFlex, @MrHyperFlex on Twitter. Thanks guys. Hey, we're going to talk about some news and HyperFlex, and what role it plays in accelerating the hybrid cloud journey. Gentlemen, welcome to theCUBE, good to see you. >> Thanks a lot Dave. >> Thanks Dave. >> All right Darren, let's start with you. So, for a hybrid cloud, you got to have on-prem connection, right? So, you got to have basically a private cloud. What are your thoughts on that? >> Yeah, we agree. You can't have a hybrid cloud without that prime element. And you've got to have a strong foundation in terms of how you set up the whole benefit of the cloud model you're building in terms of what you want to try and get back from the cloud. You need a strong foundation. Hyperconversions provides that. We see more and more customers requiring a private cloud, and they're building it with Hyperconversions, in particular HyperFlex. Now to make all that work, they need a good strong cloud operations model to be able to connect both the private and the public. And that's where we look at Intersight. We've got solution around that to be able to connect that around a SaaS offering. That looks around simplified operations, gives them optimization, and also automation to bring both private and public together in that hybrid world. >> Darren let's stay with you for a minute. When you talk to your customers, what are they thinking these days when it comes to implementing hyperconverged infrastructure in both the enterprise and at the edge, what are they trying to achieve? >> So there's many things they're trying to achieve, probably the most brutal honesty is they're trying to save money, that's probably the quickest answer. But, I think they're trying to look in terms of simplicity, how can they remove layers of components they've had before in their infrastructure? We see obviously collapsing of storage into hyperconversions and storage networking. And we've got customers that have saved 80% worth of savings by doing that collapse into a hyperconversion infrastructure away from their Three Tier infrastructure. Also about scalability, they don't know the end game. So they're looking about how they can size for what they know now, and how they can grow that with hyperconvergence very easy. It's one of the major factors and benefits of hyperconversions. They also obviously need performance and consistent performance. They don't want to compromise performance around their virtual machines when they want to run multiple workloads. They need that consistency all all way through. And then probably one of the biggest ones is that around the simplicity model is the management layer, ease of management. To make it easier for their operations, yeah, we've got customers that have told us, they've saved 50% of costs in their operations model on deploying HyperFlex, also around the time savings they make massive time savings which they can reinvest in their infrastructure and their operations teams in being able to innovate and go forward. And then I think probably one of the biggest pieces we've seen as people move away from three tier architecture is the deployment elements. And the ease of deployment gets easy with hyperconverged, especially with Edge. Edge is a major key use case for us. And, what I want, what our customers want to do is get the benefit of a data center at the edge, without A, the big investment. They don't want to compromise in performance, and they want that simplicity in both management and deployment. And, we've seen our analysts recommendations around what their readers are telling them in terms of how management deployment's key for our IT operations teams. And how much they're actually saving by deploying Edge and taking the burden away when they deploy hyperconversions. And as I said, the savings elements is the key bit, and again, not always, but obviously those are case studies around about public cloud being quite expensive at times, over time for the wrong workloads. So by bringing them back, people can make savings. And we again have customers that have made 50% savings over three years compared to their public cloud usage. So, I'd say that's the key things that customers are looking for. Yeah. >> Great, thank you for that Darren. Manish, we have some hard news, you've been working a lot on evolving the HyperFlex line. What's the big news that you've just announced? >> Yeah, thanks Dave. So there are several things that we are announcing today. The first one is a new offer called HyperFlex Express. This is, you know, Cisco Intersight led and Cisco Intersight managed eight HyperFlex configurations. That we feel are the fastest path to hybrid cloud. The second is we are expanding our server portfolio by adding support for HX on AMD Rack, UCS AMD Rack. And the third is a new capability that we are introducing, that we are calling, local containerized witness. And let me take a minute to explain what this is. This is a pretty nifty capability to optimize for Edge environments. So, you know, this leverages the, Cisco's ubiquitous presence of the networking, you know, products that we have in the environments worldwide. So the smallest HyperFlex configuration that we have is a 2-node configuration, which is primarily used in Edge environments. Think of a, you know, a backroom in a departmental store or a oil rig, or it might even be a smaller data center somewhere around the globe. For these 2-node configurations, there is always a need for a third entity that, you know, industry term for that is either a witness or an arbitrator. We had that for HyperFlex as well. And the problem that customers face is, where you host this witness. It cannot be on the cluster because the job of the witness is to, when the infrastructure is going down, it basically breaks, sort of arbitrates which node gets to survive. So it needs to be outside of the cluster. But finding infrastructure to actually host this is a problem, especially in the Edge environments where these are resource constraint environments. So what we've done is we've taken that witness, we've converted it into a container reform factor. And then qualified a very large slew of Cisco networking products that we have, right from ISR, ASR, Nexus, Catalyst, industrial routers, even a Raspberry Pi that can host this witness. Eliminating the need for you to find yet another piece of infrastructure, or doing any, you know, care and feeding of that infrastructure. You can host it on something that already exists in the environment. So those are the three things that we are announcing today. >> So I want to ask you about HyperFlex Express. You know, obviously the whole demand and supply chain is out of whack. Everybody's, you know, global supply chain issues are in the news, everybody's dealing with it. Can you expand on that a little bit more? Can HyperFlex Express help customers respond to some of these issues? >> Yeah indeed Dave. You know the primary motivation for HyperFlex Express was indeed an idea that, you know, one of the folks are on my team had, which was to build a set of HyperFlex configurations that are, you know, would have a shorter lead time. But as we were brainstorming, we were actually able to tag on multiple other things and make sure that, you know, there is in it for, something in it for our customers, for sales, as well as our partners. So for example, you know, for our customers, we've been able to dramatically simplify the configuration and the install for HyperFlex Express. These are still HyperFlex configurations and you would at the end of it, get a HyperFlex cluster. But the part to that cluster is much, much simplified. Second is that we've added in flexibility where you can now deploy these, these are data center configurations, but you can deploy these with or without fabric interconnects, meaning you can deploy with your existing top of rack. We've also, you know, added attractive price point for these, and of course, you know, these will have better lead times because we've made sure that, you know, we are using components that are, that we have clear line of sight from our supply perspective. For partner and sales, this is, represents a high velocity sales motion, a faster turnaround time, and a frictionless sales motion for our distributors. This is actually a set of disty-friendly configurations, which they would find very easy to stalk, and with a quick turnaround time, this would be very attractive for the distys as well. >> It's interesting Manish, I'm looking at some fresh survey data, more than 70% of the customers that were surveyed, this is the ETR survey again, we mentioned 'em at the top. More than 70% said they had difficulty procuring server hardware and networking was also a huge problem. So that's encouraging. What about, Manish, AMD? That's new for HyperFlex. What's that going to give customers that they couldn't get before? >> Yeah Dave, so, you know, in the short time that we've had UCS AMD Rack support, we've had several record making benchmark results that we've published. So it's a powerful platform with a lot of performance in it. And HyperFlex, you know, the differentiator that we've had from day one is that it has the industry leading storage performance. So with this, we are going to get the fastest compute, together with the fastest storage. And this, we are hoping that we'll, it'll basically unlock, you know, a, unprecedented level of performance and efficiency, but also unlock several new workloads that were previously locked out from the hyperconverged experience. >> Yeah, cool. So Darren, can you give us an idea as to how HyperFlex is doing in the field? >> Sure, absolutely. So, both me and Manish been involved right from the start even before it was called HyperFlex, and we've had a great journey. And it's very exciting to see where we are taking, where we've been with the technology. So we have over 5,000 customers worldwide, and we're currently growing faster year over year than the market. The majority of our customers are repeat buyers, which is always a good sign in terms of coming back when they've proved the technology and are comfortable with the technology. They, repeat buyer for expanded capacity, putting more workloads on. They're using different use cases on there. And from an Edge perspective, more numbers of science. So really good endorsement of the technology. We get used across all verticals, all segments, to house mission critical applications, as well as the traditional virtual server infrastructures. And we are the lifeblood of our customers around those, mission critical customers. I think one big example, and I apologize for the worldwide audience, but this resonates with the American audience is, the Super Bowl. So, the SoFi stadium that housed the Super Bowl, actually has Cisco HyperFlex running all the management services, through from the entire stadium for digital signage, 4k video distribution, and it's completely cashless. So, if that were to break during Super Bowl, that would've been a big news article. But it was run perfectly. We, in the design of the solution, we're able to collapse down nearly 200 servers into a few nodes, across a few racks, and have 120 virtual machines running the whole stadium, without missing a heartbeat. And that is mission critical for you to run Super Bowl, and not be on the front of the press afterwards for the wrong reasons, that's a win for us. So we really are, really happy with HyperFlex, where it's going, what it's doing, and some of the use cases we're getting involved in, very, very exciting. >> Hey, come on Darren, it's Super Bowl, NFL, that's international now. And-- >> Thing is, I follow NFL. >> The NFL's, it's invading London, of course, I see the, the picture, the real football over your shoulder. But, last question for Manish. Give us a little roadmap, what's the future hold for HyperFlex? >> Yeah. So, you know, as Darren said, both Darren and I have been involved with HyperFlex since the beginning. But, I think the best is yet to come. There are three main pillars for HyperFlex. One is, Intersight is central to our strategy. It provides a, you know, lot of customer benefit from a single pane of class management. But we are going to take this beyond the lifecycle management, which is for HyperFlex, which is integrated into Intersight today, and element management. We are going to take it beyond that and start delivering customer value on the dimensions of AI Ops, because Intersight really provides us a ideal platform to gather stats from all the clusters across the globe, do AI/ML and do some predictive analysis with that, and return back as, you know, customer valued, actionable insights. So that is one. The second is UCS expand the HyperFlex portfolio, go beyond UCS to third party server platforms, and newer UCS server platforms as well. But the highlight there is one that I'm really, really excited about and think that there is a lot of potential in terms of the number of customers we can help. Is HX on X-Series. X-Series is another thing that we are going to, you know, add, we're announcing a bunch of capabilities on in this particular launch. But HX on X-Series will have that by the end of this calendar year. And that should unlock with the flexibility of X-Series of hosting a multitude of workloads and the simplicity of HyperFlex. We're hoping that would bring a lot of benefits to new workloads that were locked out previously. And then the last thing is HyperFlex data platform. This is the heart of the offering today. And, you'll see the HyperFlex data platform itself it's a distributed architecture, a unique distributed architecture. Primarily where we get our, you know, record baring performance from. You'll see it can foster more scalable, more resilient, and we'll optimize it for you know, containerized workloads, meaning it'll get granular containerized, container granular management capabilities, and optimize for public cloud. So those are some things that we are, the team is busy working on, and we should see that come to fruition. I'm hoping that we'll be back at this forum in maybe before the end of the year, and talking about some of these newer capabilities. >> That's great. Thank you very much for that, okay guys, we got to leave it there. And you know, Manish was talking about the HX on X-Series that's huge, customers are going to love that and it's a great transition 'cause in a moment, I'll be back with Vikas Ratna and Jim Leach, and we're going to dig into X-Series. Some real serious engineering went into this platform, and we're going to explore what it all means. You're watching Simplifying Hybrid Cloud on theCUBE, your leader in enterprise tech coverage. >> The power is here, and here, but also here. And definitely here. Anywhere you need the full force and power of your infrastructure hyperconverged. It's like having thousands of data centers wherever you need them, powering applications anywhere they live, but manage from the cloud. So you can automate everything from here. (upbeat music) Cisco HyperFlex goes anywhere. Cisco, the bridge to possible. (upbeat music) >> Welcome back to theCUBE's special presentation, Simplifying Hybrid Cloud brought to you by Cisco. We're here with Vikas Ratna who's the director of product management for UCS at Cisco and James Leach, who is director of business development at Cisco. Gents, welcome back to theCUBE, good to see you again. >> Hey, thanks for having us. >> Okay, Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers, and as organizations as they hit the pavement for their hybrid cloud journeys, what are the most common challenges that they face? What are they telling you? How is Cisco, specifically UCS helping them deal with these problems? >> Well, you know, first I think that's a, you know, that's a great question. And you know, customer centric view is the way that we've taken, is kind of the approach we've taken from day one. Right? So I think that if you look at the challenges that we're solving for that our customers are facing, you could break them into just a few kind of broader buckets. The first would definitely be applications, right? That's the, that's where the rubber meets your proverbial road with the customer. And I would say that, you know, what we're seeing is, the challenges customers are facing within applications come from the the way that applications have evolved. So what we're seeing now is more data centric applications for example. Those require that we, you know, are able to move and process large data sets really in real time. And the other aspect of applications I think to give our customers kind of some, you know, pause some challenges, would be around the fact that they're changing so quickly. So the application that exists today or the day that they, you know, make a purchase of infrastructure to be able to support that application, that application is most likely changing so much more rapidly than the infrastructure can keep up with today. So, that creates some challenges around, you know, how do I build the infrastructure? How do I right size it without over provisioning, for example? But also, there's a need for some flexibility around life cycle and planning those purchase cycles based on the life cycle of the different hardware elements. And within the infrastructure, which I think is the second bucket of challenges, we see customers who are being forced to move away from the, like a modular or blade approach, which offers a lot of operational and consolidation benefits, and they have to move to something like a Rack server model for some applications because of these needs that these data centric applications have, and that creates a lot of you know, opportunity for siloing the infrastructure. And those silos in turn create multiple operating models within the, you know, a data center environment that, you know, again, drive a lot of complexity. So that, complexity is definitely the enemy here. And then finally, I think life cycles. We're seeing this democratization of processing if you will, right? So it's no longer just CPU focused, we have GPU, we have FPGA, we have, you know, things that are being done in storage and the fabrics that stitch them together that are all changing rapidly and have very different life cycles. So, when those life cycles don't align for a lot of our customers, they see a challenge in how they can manage this, you know, these different life cycles and still make a purchase without having to make too big of a compromise in one area or another because of the misalignment of life cycles. So, that is a, you know, kind of the other bucket. And then finally, I think management is huge, right? So management, you know, at its core is really right size for our customers and give them the most value when it meets the mark around scale and scope. You know, back in 2009, we weren't meeting that mark in the industry and UCS came about and took management outside the chassis, right? We put it at the top of the rack and that worked great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, right? So we're talking about a hybrid cloud world that has to manage across data centers, across clouds, and, you know, having to stitch things together for some of our customers poses a huge challenge. So there are tools for all of those operational pieces that touch the application, that touch the infrastructure, but they're not the same tool. They tend to be disparate tools that have to be put together. >> Right. >> So our customers, you know, don't really enjoy being in the business of, you know, building their own tools, so that creates a huge challenge. And one where I think that they really crave that full hybrid cloud stack that has that application visibility but also can reach down into the infrastructure. >> Right. You know Jim, I said in my open that you guys, Cisco sort of changed the server game with the original UCS, but the X-Series is the next generation, the generation for the next decade which is really important 'cause you touched on a lot of things, these data intensive workload, alternative processors to sort of meet those needs. The whole cloud operating model and hybrid cloud has really changed. So, how's it going with with the X-Series? You made a big splash last year, what's the reception been in the field? >> Actually, it's been great. You know, we're finding that customers can absolutely relate to our, you know, UCS X-Series story. I think that, you know, the main reason they relate to it is they helped create it, right? It was their feedback and their partnership that gave us really the, those problem areas, those areas that we could solve for the customer that actually add, you know, significant value. So, you know, since we brought UCS to market back in 2009, you know, we had this unique architectural paradigm that we created, and I think that created a product which was the fastest in Cisco history in terms of growth. What we're seeing now is X-Series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake. We're seeing all, you know, both in terms of, you know, the number of customers, but also more importantly, the number of workloads that our customers are using, and the types of workloads are growing, right? So we're growing this modular segment that exist, not just, you know, bringing customers onto a new product, but we're actually bring them into the product in the way that we had envisioned, which is one infrastructure that can run any application and do it seamlessly. So we're really excited to be growing this modular segment. I think the other piece, you know, that, you know, we judge ourselves is, you know, sort of not just within Cisco, but also within the industry. And I think right now is a, you know, a great example, you know, our competitors have taken kind of swings and misses over the past five years at this, at a, you know, kind of the new next architecture. And, we're seeing a tremendous amount of growth even faster than any of our competitors have seen when they announced something that was new to this space. So, I think that the ground up work that we did is really paying off. And I think that what we're also seeing is it's not really a leap frog game, as it may have been in the past. X-Series is out in front today, and, you know, we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers and, you know, we're pretty excited that we're seeing the results as well. So, as our competitors hit walls, I think we're, you know, we're executing on the plan that we laid out back in June when we launched X-Series to the world. And, you know, as we continue to do that, we're seeing, you know, again, tremendous uptake from our customers. >> So thank you for that Jim. So Vikas, I was just on Twitter just today actually talking about the gravitational pull, you've got the public clouds pulling CXOs one way and you know, on-prem folks pulling the other way and hybrid cloud. So, organizations are struggling with a lot of different systems and architectures and ways to do things. And I said that what they're trying to do is abstract all that complexity away and they need infrastructure to support that. And I think your stated aim is really to try to help with that confusion with the X series, right? I mean, so how so can you explain that? >> Sure. And, that's the right, the context that you built up right there Dave. If you walk into enterprise data center you'll see plethora of compute systems spread all across. Because, every application has its unique needs, and, hence you find drive node, drive-dense system, memory dense system, GPU dense system, core dense system, and variety of form factors, 1U, 2U, 4U, and, every one of them typically come with, you know, variety of adapters and cables and so forth. This creates the siloness of resources. Fabric is (indistinct), the adapter is (indistinct). The power and cooling implication. The Rack, you know, face challenges. And, above all, the multiple management plane that they come up with, which makes it very difficult for IT to have one common center policy, and enforce it all across, across the firmware and software and so forth. And then think about upgrade challenges of the siloness makes it even more complex as these go through the upgrade processes of their own. As a result, we observe quite a few of our customers, you know, really seeing an inter, slowness in that agility, and high burden in the cost of overall ownership. This is where with the X-Series powered by Intersight, we have one simple goal. We want to make sure our customers get out of that complexities. They become more agile, and drive lower TCOs. And we are delivering it by doing three things, three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on single infrastructure. An infrastructure which removes the siloness of form factor. An infrastructure which reduces the Rack footprint that is required. An infrastructure where power and cooling budgets are in the lower. Second, we want to simplify by delivering a cloud operating model, where they can and create the policy once across compute network storage and deploy it all across. And third, we want to take away the pain they have by simplifying the process of upgrade and any platform evolution that they're going to go through in the next two, three years. So that's where the focus is on just driving down the simplicity, lowering down their TCOs. >> Oh, that's key, less friction is always a good thing. Now, of course, Vikas we heard from the HyperFlex guys earlier, they had news not to be outdone. You have hard news as well. What innovations are you announcing around X-Series today? >> Absolutely. So we are following up on the exciting X-Series announcement that we made in June last year, Dave. And we are now introducing three innovation on X-Series with the goal of three things. First, expand the supported workload on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables that are needed. To that end, three new innovations are coming in. First, we are introducing the support for the GPU node using a cableless and very unique X-Fabric architecture. This is the most elegant design to add the GPUs to the compute node in the modular form factor. Thereby, our customers can now power in AI/ML workload, or any workload that need many more number of GPUs. Second, we are bringing in GPUs right onto the compute node, and thereby our customers can now fire up the accelerated VDI workload for example. And third, which is what you know, we are extremely proud about, is we are innovating again by introducing the fifth generation of our very popular unified fabric technology. With the increased bandwidth that it brings in, coupled with the local drive capacity and densities that we have on the compute node, our customers can now fire up the big data workload, the FCI workload, the SDS workload. All these workloads that have historically not lived in the modular form factor, can be run over there and benefit from the architectural benefits that we have. Second, with the announcement of fifth generation fabric, we've become the only vendor to now finally enable 100 gig end to end single port bandwidth, and there are multiple of those that are coming in there. And we are working very closely with our CI partners to deliver the benefit of these performance through our Cisco Validated Design to our CI franchise. And third, the innovations in the fifth gen fabric will again allow our customers to have fewer physical adapters made with ethernet adapter, made with power channel adapters, or made with, the other storage adapters. They've reduced it down and coupled with the reduction in the cable. So very, very excited about these three big announcements that we are making in this month's release. >> Great, a lot there, you guys have been busy, so thank you for that Vikas. So, Jim, you talked a little bit about the momentum that you have, customers are adopting, what problems are they telling you that X-Series addresses, and how do they align with where they want to go in the future? >> That's a great question. I think if you go back to, and think about some of the things that we mentioned before, in terms of the problems that we originally set out to solve, we're seeing a lot of traction. So what Vikas mentioned I think is really important, right? Those pieces that we just announced really enhance that story and really move again, to the, kind of, to the next level of taking advantage of some of these, you know, problem solving for our customers. You know, if you look at, you know, I think Vikas mentioned accelerated VDI. That's a great example. These are where customers, you know, they need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series right now. We're hitting the market in every single way, right? We have the highest compute config density that we can offer across the, you know, the very top end configurations of CPUs, and a lot of room to grow. We have the, you know, the premier cloud based management, you know, hybrid cloud suite in the industry, right? So check there. We have the flexible GPU accelerators that Vikas just talked about that we're announcing both on the system and also adding additional ones to the, through the use of the X-Fabric, which is really, really critical to this launch as well. And, you know, I think finally, the fifth generation of fabric interconnect and virtual interface card, and, intelligent fabric module go hand in hand in creating this 100 gig end to end bandwidth story, that we can move a lot of data. Again, you know, having all this performance is only as good as what we can get in and out of it, right? So giving customers the ability to manage it anywhere, to be able to get the bandwidth that they need, to be able to get the accelerators that are flexible that it fit exactly their needs, this is huge, right? This solves a lot of the problems we can tick off right away. With the infrastructure as I mentioned, X-Fabric is really critical here because it opens a lot of doors here, you know, we're talking about GPUs today, but in the future, there are other elements that we can disaggregate, like the GPUs that solve these life cycle mismanagement issues. They solve issues around the form factor limitations. It solves all these issues for like, it does for GPU we can do that with storage or memory in the future. So that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing, this is something that customers can really get value out of day one. And then finally, I think the, you know, the future readiness here, you know, we avoid saying future proof because we're kind of embracing the future here. We know that not only are the GPUs going to evolve, the CPUs are going to evolve, the drives, you know, the storage modules are going to evolve. All of these things are changing very rapidly. The fabric that stitches them together is critical, and we know that we're just on the edge of some of the development that are coming with CXL, with some of the PCI Express changes that are coming in the very near future, so we're ready to go. And the X-Fabric is exactly the vehicle that's going to be able to deliver those technologies to our customers, right? Our customers are out there saying that, you know, they want to buy into to something like X-Series that has all the operational benefits, but at the same time, they have to have the comfort in knowing that they're protected against being locked out of some technology that's coming in the future, right? We want our customers to take these disruptive technologies and not be disrupted, but use them to disrupt their competition as well. So, you know, we're really excited about the pieces today, and, I think it goes a long way towards continuing to tell the customer benefit story that X-Series brings, and, you know, again, you know, stay tuned because it's going to keep getting better as we go. >> Yeah, a lot of headroom for scale and the management piece is key there. Just have time for one more question Vikas. Give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely Dave. As we talked about, and as Jim also hinted, this is a future ready architecture. A lot of focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt very disruptive hardware technologies that are coming up, no refund replace. And, there we are looking into, enabling the customer's journey as they transition from PCI generation four, to five to six without driven replace, as they embrace CXL without driven replace. As they embrace the newer paradigm of computing through the disaggregated memory, disaggregated PCIe or NVMe based dense drives, and so forth. We are also looking forward to X-Fabric next generation, which will allow dynamic assignment of GPUs anywhere within the chassis and much more. So this is again, all about focusing on the innovation that will make the enterprise data center operations a lot more simpler, and drive down the TCO by keeping them not only covered for today, but also for future. So that's where some of the focus is on Dave. >> Okay. Thank you guys we'll leave it there, in a moment, I'll have some closing thoughts. (upbeat music) We're seeing a major evolution, perhaps even a bit of a revolution in the underlying infrastructure necessary to support hybrid work. Look, virtualizing compute and running general purpose workloads is something IT figured out a long time ago. But just when you have it nailed down in the technology business, things change, don't they? You can count on that. The cloud operating model has bled into on-premises locations. And is creating a new vision for the future, which we heard a lot about today. It's a vision that's turning into reality. And it supports much more diverse and data intensive workloads and alternative compute modes. It's one where flexibility is a watch word, enabling change, attacking complexity, and bringing a management capability that allows for a granular management of resources at massive scale. I hope you've enjoyed this special presentation. Remember, all these videos are available on demand at thecube.net. And if you want to learn more, please click on the information link. Thanks for watching Simplifying Hybrid Cloud brought to you by Cisco and theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, be well and we'll see you next time. (upbeat music)

Published Date : Mar 22 2022

SUMMARY :

and its role in simplifying the complexity Good to see you again. Talk about the trends you're of the big things that, and of course the storage team as well. UCS and we, you know, Well, you know, you brought platform. is not on the customer, like to you know, stock buybacks, on the whole investment. hybrid cloud, the operations Like we did not write Terraform, you know, Kubernetes in the public cloud. that leave the rest of the world out you know, custom infrastructure And flexible in terms of the technology, have you on the, theCUBE, some of the supply chain challenges to help you optimize performance And Darren Williams, the So, for a hybrid cloud, you in terms of what you want to in both the enterprise and at the edge, is that around the simplicity What's the big news that Eliminating the need for you to find are in the news, and of course, you know, more than 70% of the is that it has the industry is doing in the field? and not be on the front Hey, come on Darren, the real football over your shoulder. and return back as, you know, And you know, Manish was Cisco, the bridge to possible. theCUBE, good to see you again. We know that when it comes to navigating or the day that they, you know, the business of, you know, my open that you guys, can absolutely relate to our, you know, and you know, on-prem the context that you What innovations are you And third, which is what you know, the momentum that you have, the future readiness here, you know, for scale and the management a lot more simpler, and drive down the TCO brought to you by Cisco and theCUBE,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JimPERSON

0.99+

Dave VellantePERSON

0.99+

UCSORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Manish AgarwalPERSON

0.99+

2009DATE

0.99+

80%QUANTITY

0.99+

DavePERSON

0.99+

50%QUANTITY

0.99+

JuneDATE

0.99+

17QUANTITY

0.99+

36%QUANTITY

0.99+

DarrenPERSON

0.99+

James LeachPERSON

0.99+

threeQUANTITY

0.99+

100 gigQUANTITY

0.99+

Darren WilliamsPERSON

0.99+

Enterprise Technology ResearchORGANIZATION

0.99+

June last yearDATE

0.99+

AMDORGANIZATION

0.99+

FirstQUANTITY

0.99+

one sentenceQUANTITY

0.99+

TurbonomicORGANIZATION

0.99+

Super BowlEVENT

0.99+

thecube.netOTHER

0.99+

more than 70%QUANTITY

0.99+

last yearDATE

0.99+

VikasORGANIZATION

0.99+

third segmentQUANTITY

0.99+

VikasPERSON

0.99+

OneQUANTITY

0.99+

fourth toolQUANTITY

0.99+

AWSORGANIZATION

0.99+

thirdQUANTITY

0.99+

oneQUANTITY

0.99+

Vikas RatnaPERSON

0.99+

IntersightORGANIZATION

0.99+

ETRORGANIZATION

0.99+

SecondQUANTITY

0.99+

HyperFlexORGANIZATION

0.99+

mid 2000sDATE

0.99+

third toolQUANTITY

0.99+

TodayDATE

0.99+

More than 70%QUANTITY

0.99+

X-SeriesTITLE

0.99+

10 years agoDATE

0.99+

Jay Henderson, Alteryx


 

(upbeat music) >> Okay, we're kicking off the program with our first segment. Jay Henderson is the vice president of product management at Alteryx. And we're going to talk about the trends and data where we came from, how we got here, where we're going. We got some launch news. Hello, Jay, welcome to theCUBE. >> Great to be here. Really excited to share some of the things we're working on. >> Yeah, thank you. So look, you have a deep product background, product management, product marketing. You've done strategy work. You've been around software and data your entire career, and we're seeing the collision of software, data, cloud, machine intelligence. Let's start with the customer and maybe we can work back from there. So if you're an analytics or a data executive at an organization, Jay, what's your north star? Where are you trying to take your company from a data and analytics point of view? >> Yeah, I mean, look, I think all organizations are really struggling to get insights out of their data. I think one of the things that we see is you've got digital exhaust creating large volumes of data. Storage is really cheap, so it doesn't cost them much to keep it. And that results in a situation where the organization's drowning in data, but somehow still starving for insights. And so I think, you know, when I talk to customers, they're really excited to figure out how they can put analytics in the hands of every single person in their organization, and really start to democratize the analytics and you know, let the business users and the whole organization get value out of all that data they have. >> And we're going to dig into that throughout this program. And data, I like to say is plentiful. Insights, not always so much. Tell us about your launch today, Jay. And thinking about the trends that just highlighted, the direction that your customers want to go, and the problems that you're solving. What role does the cloud play, and what is what you're launching, how does that fit in? >> Yeah, we're really excited today we're launching the Alteryx analytics cloud. That's really a portfolio of cloud-based solutions that have all been built from the ground up to be cloud native, and to take advantage of things like browser based access. So that it's really easy to give anyone access including folks on a Mac. It also lets you take advantage of elastic compute, so that you can do, you know, in database processing and cloud native solutions that are going to scale to solve the most complex problems. So we've got a portfolio of solutions, things like designer cloud, which is our flagship designer product in a browser and on the cloud. We've got Alteryx machine learning which helps up-skill, regular, old analyst, with advanced machine learning capabilities. We've got auto insights, which brings business users into the fold and automatically unearths insights using AI and machine learning. And we've got our latest edition which is Trifacta, that helps data engineers do data pipelining, and really, you know, create a lot of the underlying data sets that are used in some of this downstream analytics. >> So let's dig into some of those roles, if we could a little bit. I mean, traditionally Alteryx has served the the business analysts, and that's what designer cloud is fit for, I believe. And you've explained kind of the scope. Sorry, you've expanded that scope into the to the business user with Hyper Anna. And in a moment, we're going to talk to Adam Wilson and Suresh, about Trifacta. And that recent acquisition takes you as you said into the data engineering space and IT, but in thinking about the business analyst role, what's unique about designer cloud and how does it help these individuals? >> Yeah, I mean, really I go back to some of the feedback we've had from our customers which is, you know, they oftentimes have dozens or hundreds of seats of our designer desktop product. Really as they look to take the next step, they're trying to figure out, how do I give access to that, those types of analytics to thousands of people within the organization. And designer cloud is really great for that. You've got the browser based interface. So if folks are on a Mac, they can really easily just pop open the browser and get access to all of those prep and blend capabilities to a lot of the analysis we're doing. It's a great way to scale up access to the analytics and start to put it in the hands of really anyone in the organization, not just those highly skilled power users. >> Okay, great. So now then you add in the Hyper Anna acquisition. So now you're targeting the business user, Trifacta comes into the mix, that deeper IT angle that we talked about. How does this all fit together? How should we be thinking about the new Alteryx portfolio? >> Yeah, I mean, I think it's pretty exciting. When you think about democratizing analytics and providing access to all these different groups of people, you've not been able to do it through one platform before. It's not going to be one interface that meets the needs of all these different groups within the organization, you really do need purpose built specialized capabilities for each group. And finally today with the announcement of the Alteryx analytics cloud, we brought together all of those different capabilities, all of those different interfaces into a single end to end application. So, really finally delivering on the promise of providing analytics to all. >> How much of this have you been able to share with your customers and maybe your partners? I mean, I know all this is fairly new but have you been able to get any feedback from them? What are they saying about it? >> Yeah, I mean, it's pretty amazing. We ran early access and limited availability program, that let us put a lot of this technology in the hands of over 600 customers. >> Oh, wow. >> Over the last few months. So we have gotten a lot of feedback. I tell you, it's been overwhelmingly positive. I think organizations are really excited to unlock the insights that have been hidden in all this data they've got. They're excited to be able to use analytics in every decision that they're making so that the decisions they have are more informed and produce better business outcomes. And this idea that they're going to move from, you know, dozens to hundreds or thousands of people who have access to these kinds of capabilities, I think has been a really exciting thing that is going to accelerate the transformation that these customers are on. >> That's good. Those are good numbers for a preview mode. Let's talk a little bit about vision. So if democratizing data is the ultimate goal, which frankly has been elusive for most organizations. Over time, how's your cloud going to address the challenges of putting data to work across the entire enterprise? >> Yeah, I mean, I tend to think about the future and some of the investments we're making in our products and our roadmap across four big themes. And these are really kind of enduring themes that you're going to see us making investments in over the next few years. The first is having cloud centricity. The data gravity has been moving to the cloud. We need to be able to provide access, to be able to ingest and manipulate that data, to be able to write back to it to provide cloud solutions. So, the first one is really around cloud centricity. The second is around big data fluency. Once you have all of that data you need to be able to manipulate it in a performant manner. So, having the elastic cloud infrastructure and in-database processing is so important. The third is around making AI a strategic advantage. So, you know, getting everyone involved in accessing AI and machine learning to unlock those insights, getting it out of the hands of the small group of data scientists, putting it in the hands of analysts and business users. And then the fourth thing is really providing access across the entire organization, IT and data engineers, as well as business owners and analysts. So, cloud centricity, big data fluency, AI as a strategic advantage, and personas across the organization, are really the the four big themes you're going to see us working on over the next few months and coming years. >> That's good, thank you for that. So on a related question, how do you see the data organizations evolving? I mean, traditionally you've had, you know monolithic organizations, very specialized, or I might even say hyper specialized roles. And your mission, of course, as the customer, you and your customers, they want to democratize the data. And so, it seems logical that domain leaders are going to take more responsibility for data life cycles, for data ownerships, low code becomes more important. And perhaps there's kind of challenges the historically highly centralized and really specialized roles that I just talked about. How do you see that evolving, and what role will Alteryx play? >> Yeah, I think we'll see sort of a more federated system start to emerge. Those centralized groups are going to continue to exist, but they're going to start to empower in a much more decentralized way, the people who are closer to the business problems and have better business understanding. I think that's going to let the centralized highly skilled teams work on problems that are of higher value to the organization. The kinds of problems where one or 2% lift in the model result in millions of dollars a day for the business. And then by pushing some of the analytics out closer to the edge and closer to the business, you'll be able to, you know, apply those analytics in every single decision. So I think you're going to see both the decentralized and centralized model start to work in harmony in a little bit more of a, almost a federated sort of way. And I think the exciting thing for us at Alteryx is, you know, we want to facilitate that. We want to give analytic capabilities and solutions to both groups and types of people. We want to help them collaborate better, and drive business outcomes with the analytics they're using. >> Yeah, I mean, I think my take on it, I wonder if you could comment is, to me the technology should be an operational detail. And it has been the dog that wags the tail or maybe the other way around. You mentioned digital exhaust before. I mean, essentially it's digital exhaust coming out of operational systems that then it somehow eventually end up in the hand of the domain users. And I wonder if increasingly we're going to see those domain users, those line of business experts get more access, that's your goal. And then even go beyond analytics, start to build data products that could be monitized. And that maybe it's going to take a decade to play out, but that is sort of a new era of data. Do you see it that way? >> Absolutely. We're actually making big investments in our products and capabilities to be able to create analytic applications, and to enable somebody who's an analyst or a business user to create an application on top of the data and analytics layers that they have, really to help democratize the analytics, to help pre-package some of the analytics that can drive more insights. So I think that's definitely a trend we're going to see more of. >> Yeah, and to your point, if you confederate the governance and automate that... >> Yep. Absolutely. >> Then that can happen. I mean, that's a key part of it, obviously, so... >> Yep. >> All right, Jay, we have to leave it there. Up next, we take a deep dive into the Alteryx recent acquisition of Trifacta with Adam Wilson, who led Trifacta for more than seven years, and Suresh Vittal, who is the chief product officer at Alteryx, to explain the rationale behind the acquisition, and how it's going to impact customers. Keep it right there. You're watching theCUBE, your leader in enterprise tech coverage. (upbeat music)

Published Date : Mar 1 2022

SUMMARY :

the program with our first segment. some of the things we're working on. and data your entire career, and really start to and the problems that you're solving. that are going to scale to into the to the business and start to put it Trifacta comes into the mix, that meets the needs of all these in the hands of over 600 customers. so that the decisions they cloud going to address and machine learning to are going to take more responsibility I think that's going to let And that maybe it's going to and to enable somebody who's Yeah, and to your point, Yep. Then that can happen. and how it's going to impact customers.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JayPERSON

0.99+

AlteryxORGANIZATION

0.99+

Suresh VittalPERSON

0.99+

Jay HendersonPERSON

0.99+

Adam WilsonPERSON

0.99+

TrifactaORGANIZATION

0.99+

dozensQUANTITY

0.99+

MacCOMMERCIAL_ITEM

0.99+

first segmentQUANTITY

0.99+

more than seven yearsQUANTITY

0.99+

SureshPERSON

0.99+

over 600 customersQUANTITY

0.99+

thirdQUANTITY

0.99+

firstQUANTITY

0.99+

AlteryxPERSON

0.99+

bothQUANTITY

0.99+

each groupQUANTITY

0.99+

one platformQUANTITY

0.99+

hundredsQUANTITY

0.99+

secondQUANTITY

0.98+

thousandsQUANTITY

0.98+

todayDATE

0.98+

one interfaceQUANTITY

0.98+

oneQUANTITY

0.98+

2%QUANTITY

0.97+

hundreds of seatsQUANTITY

0.96+

first oneQUANTITY

0.96+

singleQUANTITY

0.96+

fourth thingQUANTITY

0.94+

Hyper AnnaORGANIZATION

0.93+

both groupsQUANTITY

0.88+

millions of dollars a dayQUANTITY

0.84+

four big themesQUANTITY

0.81+

thousands of peopleQUANTITY

0.79+

yearsDATE

0.74+

theCUBEORGANIZATION

0.7+

lastDATE

0.68+

single decisionQUANTITY

0.66+

single personQUANTITY

0.66+

monthsDATE

0.56+

themesQUANTITY

0.55+

few monthsDATE

0.51+

Accelerating Automated Analytics in the Cloud with Alteryx


 

>>Alteryx is a company with a long history that goes all the way back to the late 1990s. Now the one consistent theme over 20 plus years has been that Ultrix has always been a data company early in the big data and Hadoop cycle. It saw the need to combine and prep different data types so that organizations could analyze data and take action Altrix and similar companies played a critical role in helping companies become data-driven. The problem was the decade of big data, brought a lot of complexities and required immense skills just to get the technology to work as advertised this in turn limited, the pace of adoption and the number of companies that could really lean in and take advantage of the cloud began to change all that and set the foundation for today's theme to Zuora of digital transformation. We hear that phrase a ton digital transformation. >>People used to think it was a buzzword, but of course we learned from the pandemic that if you're not a digital business, you're out of business and a key tenant of digital transformation is democratizing data, meaning enabling, not just hypo hyper specialized experts, but anyone business users to put data to work. Now back to Ultrix, the company has embarked on a major transformation of its own. Over the past couple of years, brought in new management, they've changed the way in which it engaged with customers with the new subscription model and it's topgraded its talent pool. 2021 was even more significant because of two acquisitions that Altrix made hyper Ana and trifecta. Why are these acquisitions important? Well, traditionally Altryx sold to business analysts that were part of the data pipeline. These were fairly technical people who had certain skills and were trained in things like writing Python code with hyper Ana Altryx has added a new persona, the business user, anyone in the business who wanted to gain insights from data and, or let's say use AI without having to be a deep technical expert. >>And then Trifacta a company started in the early days of big data by cube alum, Joe Hellerstein and his colleagues at Berkeley. They knocked down the data engineering persona, and this gives Altryx a complimentary extension into it where things like governance and security are paramount. So as we enter 2022, the post isolation economy is here and we do so with a digital foundation built on the confluence of cloud native technologies, data democratization and machine intelligence or AI, if you prefer. And Altryx is entering that new era with an expanded portfolio, new go-to market vectors, a recurring revenue business model, and a brand new outlook on how to solve customer problems and scale a company. My name is Dave Vellante with the cube and I'll be your host today. And the next hour, we're going to explore the opportunities in this new data market. And we have three segments where we dig into these trends and themes. First we'll talk to Jay Henderson, vice president of product management at Ultrix about cloud acceleration and simplifying complex data operations. Then we'll bring in Suresh Vetol who's the chief product officer at Altrix and Adam Wilson, the CEO of Trifacta, which of course is now part of Altrix. And finally, we'll hear about how Altryx is partnering with snowflake and the ecosystem and how they're integrating with data platforms like snowflake and what this means for customers. And we may have a few surprises sprinkled in as well into the conversation let's get started. >>We're kicking off the program with our first segment. Jay Henderson is the vice president of product management Altryx and we're going to talk about the trends and data, where we came from, how we got here, where we're going. We get some launch news. Well, Jay, welcome to the cube. >>Great to be here, really excited to share some of the things we're working on. >>Yeah. Thank you. So look, you have a deep product background, product management, product marketing, you've done strategy work. You've been around software and data, your entire career, and we're seeing the collision of software data cloud machine intelligence. Let's start with the customer and maybe we can work back from there. So if you're an analytics or data executive in an organization, w J what's your north star, where are you trying to take your company from a data and analytics point of view? >>Yeah, I mean, you know, look, I think all organizations are really struggling to get insights out of their data. I think one of the things that we see is you've got digital exhaust, creating large volumes of data storage is really cheap, so it doesn't cost them much to keep it. And that results in a situation where the organization's, you know, drowning in data, but somehow still starving for insights. And so I think, uh, you know, when I talk to customers, they're really excited to figure out how they can put analytics in the hands of every single person in their organization, and really start to democratize the analytics, um, and, you know, let the, the business users and the whole organization get value out of all that data they have. >>And we're going to dig into that throughout this program data, I like to say is plentiful insights, not always so much. Tell us about your launch today, Jay, and thinking about the trends that you just highlighted, the direction that your customers want to go and the problems that you're solving, what role does the cloud play in? What is what you're launching? How does that fit in? >>Yeah, we're, we're really excited today. We're launching the Altryx analytics cloud. That's really a portfolio of cloud-based solutions that have all been built from the ground up to be cloud native, um, and to take advantage of things like based access. So that it's really easy to give anyone access, including folks on a Mac. Um, it, you know, it also lets you take advantage of elastic compute so that you can do, you know, in database processing and cloud native, um, solutions that are gonna scale to solve the most complex problems. So we've got a portfolio of solutions, things like designer cloud, which is our flagship designer product in a browser and on the cloud, but we've got ultra to machine learning, which helps up-skill regular old analysts with advanced machine learning capabilities. We've got auto insights, which brings a business users into the fold and automatically unearths insights using AI and machine learning. And we've got our latest edition, which is Trifacta that helps data engineers do data pipelining and really, um, you know, create a lot of the underlying data sets that are used in some of this, uh, downstream analytics. >>Let's dig into some of those roles if we could a little bit, I mean, you've traditionally Altryx has served the business analysts and that's what designer cloud is fit for, I believe. And you've explained, you know, kind of the scope, sorry, you've expanded that scope into the, to the business user with hyper Anna. And we're in a moment we're going to talk to Adam Wilson and Suresh, uh, about Trifacta and that recent acquisition takes you, as you said, into the data engineering space in it. But in thinking about the business analyst role, what's unique about designer cloud cloud, and how does it help these individuals? >>Yeah, I mean, you know, really, I go back to some of the feedback we've had from our customers, which is, um, you know, they oftentimes have dozens or hundreds of seats of our designer desktop product, you know, really, as they look to take the next step, they're trying to figure out how do I give access to that? Those types of analytics to thousands of people within the organization and designer cloud is, is really great for that. You've got the browser-based interface. So if folks are on a Mac, they can really easily just pop, open the browser and get access to all of those, uh, prep and blend capabilities to a lot of the analysis we're doing. Um, it's a great way to scale up access to the analytics and then start to put it in the hands of really anyone in the organization, not just those highly skilled power users. >>Okay, great. So now then you add in the hyper Anna acquisition. So now you're targeting the business user Trifacta comes into the mix that deeper it angle that we talked about, how does this all fit together? How should we be thinking about the new Altryx portfolio? >>Yeah, I mean, I think it's pretty exciting. Um, you know, when you think about democratizing analytics and providing access to all these different groups of people, um, you've not been able to do it through one platform before. Um, you know, it's not going to be one interface that meets the, of all these different groups within the organization. You really do need purpose built specialized capabilities for each group. And finally, today with the announcement of the alternates analytics cloud, we brought together all of those different capabilities, all of those different interfaces into a single in the end application. So really finally delivering on the promise of providing analytics to all, >>How much of this you've been able to share with your customers and maybe your partners. I mean, I know OD is fairly new, but if you've been able to get any feedback from them, what are they saying about it? >>Uh, I mean, it's, it's pretty amazing. Um, we ran a early access, limited availability program that led us put a lot of this technology in the hands of over 600 customers, um, over the last few months. So we have gotten a lot of feedback. I tell you, um, it's been overwhelmingly positive. I think organizations are really excited to unlock the insights that have been hidden in all this data. They've got, they're excited to be able to use analytics in every decision that they're making so that the decisions they have or more informed and produce better business outcomes. Um, and, and this idea that they're going to move from, you know, dozens to hundreds or thousands of people who have access to these kinds of capabilities, I think has been a really exciting thing that is going to accelerate the transformation that these customers are on. >>Yeah, those are good. Good, good numbers for, for preview mode. Let's, let's talk a little bit about vision. So it's democratizing data is the ultimate goal, which frankly has been elusive for most organizations over time. How's your cloud going to address the challenges of putting data to work across the entire enterprise? >>Yeah, I mean, I tend to think about the future and some of the investments we're making in our products and our roadmap across four big themes, you know, in the, and these are really kind of enduring themes that you're going to see us making investments in over the next few years, the first is having cloud centricity. You know, the data gravity has been moving to the cloud. We need to be able to provide access, to be able to ingest and manipulate that data, to be able to write back to it, to provide cloud solution. So the first one is really around cloud centricity. The second is around big data fluency. Once you have all of the data, you need to be able to manipulate it in a performant manner. So having the elastic cloud infrastructure and in database processing is so important, the third is around making AI a strategic advantage. >>So, uh, you know, getting everyone involved and accessing AI and machine learning to unlock those insights, getting it out of the hands of the small group of data scientists, putting it in the hands of analysts and business users. Um, and then the fourth thing is really providing access across the entire organization. You know, it and data engineers, uh, as well as business owners and analysts. So, um, cloud centricity, big data fluency, um, AI is a strategic advantage and, uh, personas across the organization are really the four big themes you're going to see us, uh, working on over the next few months and, uh, coming coming year. >>That's good. Thank you for that. So, so on a related question, how do you see the data organizations evolving? I mean, traditionally you've had, you know, monolithic organizations, uh, very specialized or I might even say hyper specialized roles and, and your, your mission of course is the customer. You, you, you, you and your customers, they want to democratize the data. And so it seems logical that domain leaders are going to take more responsibility for data, life cycles, data ownerships, low code becomes more important. And perhaps this kind of challenges, the historically highly centralized and really specialized roles that I just talked about. How do you see that evolving and, and, and what role will Altryx play? >>Yeah. Um, you know, I think we'll see sort of a more federated systems start to emerge. Those centralized groups are going to continue to exist. Um, but they're going to start to empower, you know, in a much more de-centralized way, the people who are closer to the business problems and have better business understanding. I think that's going to let the centralized highly skilled teams work on, uh, problems that are of higher value to the organization. The kinds of problems where one or 2% lift in the model results in millions of dollars a day for the business. And then by pushing some of the analytics out to, uh, closer to the edge and closer to the business, you'll be able to apply those analytics in every single decision. So I think you're going to see, you know, both the decentralized and centralized models start to work in harmony and a little bit more about almost a federated sort of a way. And I think, you know, the exciting thing for us at Altryx is, you know, we want to facilitate that. We want to give analytic capabilities and solutions to both groups and types of people. We want to help them collaborate better, um, and drive business outcomes with the analytics they're using. >>Yeah. I mean, I think my take on another one, if you could comment is to me, the technology should be an operational detail and it has been the, the, the dog that wags the tail, or maybe the other way around, you mentioned digital exhaust before. I mean, essentially it's digital exhaust coming out of operationals systems that then somehow, eventually end up in the hand of the domain users. And I wonder if increasingly we're going to see those domain users, users, those, those line of business experts get more access. That's your goal. And then even go beyond analytics, start to build data products that could be monetized, and that maybe it's going to take a decade to play out, but that is sort of a new era of data. Do you see it that way? >>Absolutely. We're actually making big investments in our products and capabilities to be able to create analytic applications and to enable somebody who's an analyst or business user to create an application on top of the data and analytics layers that they have, um, really to help democratize the analytics, to help prepackage some of the analytics that can drive more insights. So I think that's definitely a trend we're going to see more. >>Yeah. And to your point, if you can federate the governance and automate that, then that can happen. I mean, that's a key part of it, obviously. So, all right, Jay, we have to leave it there up next. We take a deep dive into the Altryx recent acquisition of Trifacta with Adam Wilson who led Trifacta for more than seven years. It's the recipe. Tyler is the chief product officer at Altryx to explain the rationale behind the acquisition and how it's going to impact customers. Keep it right there. You're watching the cube. You're a leader in enterprise tech coverage. >>It's go time, get ready to accelerate your data analytics journey with a unified cloud native platform. That's accessible for everyone on the go from home to office and everywhere in between effortless analytics to help you go from ideas to outcomes and no time. It's your time to shine. It's Altryx analytics cloud time. >>Okay. We're here with. Who's the chief product officer at Altryx and Adam Wilson, the CEO of Trifacta. Now of course, part of Altryx just closed this quarter. Gentlemen. Welcome. >>Great to be here. >>Okay. So let me start with you. In my opening remarks, I talked about Altrix is traditional position serving business analysts and how the hyper Anna acquisition brought you deeper into the business user space. What does Trifacta bring to your portfolio? Why'd you buy the company? >>Yeah. Thank you. Thank you for the question. Um, you know, we see, uh, we see a massive opportunity of helping, um, brands, um, democratize the use of analytics across their business. Um, every knowledge worker, every individual in the company should have access to analytics. It's no longer optional, um, as they navigate their businesses with that in mind, you know, we know designer and are the products that Altrix has been selling the past decade or so do a really great job, um, addressing the business analysts, uh, with, um, hyper Rana now kind of renamed, um, Altrix auto. We even speak with the business owner and the line of business owner. Who's looking for insights that aren't real in traditional dashboards and so on. Um, but we see this opportunity of really helping the data engineering teams and it organizations, um, to also make better use of analytics. Um, and that's where the drive factor comes in for us. Um, drive factor has the best data engineering cloud in the planet. Um, they have an established track record of working across multiple cloud platforms and helping data engineers, um, do better data pipelining and work better with, uh, this massive kind of cloud transformation that's happening in every business. Um, and so fact made so much sense for us. >>Yeah. Thank you for that. I mean, you, look, you could have built it yourself would have taken, you know, who knows how long, you know, but, uh, so definitely a great time to market move, Adam. I wonder if we could dig into Trifacta some more, I mean, I remember interviewing Joe Hellerstein in the early days. You've talked about this as well, uh, on the cube coming at the problem of taking data from raw refined to an experience point of view. And Joe in the early days, talked about flipping the model and starting with data visualization, something Jeff, her was expert at. So maybe explain how we got here. We used to have this cumbersome process of ETL and you may be in some others changed that model with ELL and then T explain how Trifacta really changed the data engineering game. >>Yeah, that's exactly right. Uh, David, it's been a really interesting journey for us because I think the original hypothesis coming out of the campus research, uh, at Berkeley and Stanford that really birth Trifacta was, you know, why is it that the people who know the data best can't do the work? You know, why is this become the exclusive purview of the highly technical? And, you know, can we rethink this and make this a user experience, problem powered by machine learning that will take some of the more complicated things that people want to do with data and really help to automate those. So, so a broader set of, of users can, um, can really see for themselves and help themselves. And, and I think that, um, there was a lot of pent up frustration out there because people have been told for, you know, for a decade now to be more data-driven and then the whole time they're saying, well, then give me the data, you know, in the shape that I could use it with the right level of quality and I'm happy to be, but don't tell me to be more data-driven and then, and, and not empower me, um, to, to get in there and to actually start to work with the data in meaningful ways. >>And so, um, that was really, you know, what, you know, the origin story of the company and I think is, as we, um, saw over the course of the last 5, 6, 7 years that, um, you know, uh, real, uh, excitement to embrace this idea of, of trying to think about data engineering differently, trying to democratize the, the ETL process and to also leverage all these exciting new, uh, engines and platforms that are out there that allow for processing, you know, ever more diverse data sets, ever larger data sets and new and interesting ways. And that's where a lot of the push-down or the ELT approaches that, you know, I think it could really won the day. Um, and that, and that for us was a hallmark of the solution from the very beginning. >>Yeah, this is a huge point that you're making is, is first of all, there's a large business, it's probably about a hundred billion dollar Tam. Uh, and the, the point you're making, because we've looked, we've contextualized most of our operational systems, but the big data pipeline is hasn't gotten there. But, and maybe we could talk about that a little bit because democratizing data is Nirvana, but it's been historically very difficult. You've got a number of companies it's very fragmented and they're all trying to attack their little piece of the problem to achieve an outcome, but it's been hard. And so what's going to be different about Altryx as you bring these puzzle pieces together, how is this going to impact your customers who would like to take that one? >>Yeah, maybe, maybe I'll take a crack at it. And Adam will, um, add on, um, you know, there hasn't been a single platform for analytics, automation in the enterprise, right? People have relied on, uh, different products, um, to solve kind of, uh, smaller problems, um, across this analytics, automation, data transformation domain. Um, and, um, I think uniquely Alcon's has that opportunity. Uh, we've got 7,000 plus customers who rely on analytics for, um, data management, for analytics, for AI and ML, uh, for transformations, uh, for reporting and visualization for automated insights and so on. Um, and so by bringing drive factor, we have the opportunity to scale this even further and solve for more use cases, expand the scenarios where it's applied and so multiple personas. Um, and we just talked about the data engineers. They are really a growing stakeholder in this transformation of data and analytics. >>Yeah, good. Maybe we can stay on this for a minute cause you, you you're right. You bring it together. Now at least three personas the business analyst, the end user slash business user. And now the data engineer, which is really out of an it role in a lot of companies, and you've used this term, the data engineering cloud, what is that? How is it going to integrate in with, or support these other personas? And, and how's it going to integrate into the broader ecosystem of clouds and cloud data warehouses or any other data stores? >>Yeah, no, that's great. Uh, yeah, I think for us, we really looked at this and said, you know, we want to build an open and interactive cloud platform for data engineers, you know, to collaboratively profile pipeline, um, and prepare data for analysis. And that really meant collaborating with the analysts that were in the line of business. And so this is why a big reason why this combination is so magic because ultimately if we can get the data engineers that are creating the data products together with the analysts that are in the line of business that are driving a lot of the decision making and allow for that, what I would describe as collaborative curation of the data together, so that you're starting to see, um, uh, you know, increasing returns to scale as this, uh, as this rolls out. I just think that is an incredibly powerful combination and, and frankly, something that the market is not crack the code on yet. And so, um, I think when we, when I sat down with Suresh and with mark and the team at Ultrix, that was really part of the, the, the big idea, the big vision that was painted and got us really energized about the acquisition and about the potential of the combination. >>And you're really, you're obviously writing the cloud and the cloud native wave. Um, and, but specifically we're seeing, you know, I almost don't even want to call it a data warehouse anyway, because when you look at what's, for instance, Snowflake's doing, of course their marketing is around the data cloud, but I actually think there's real justification for that because it's not like the traditional data warehouse, right. It's, it's simplified get there fast, don't necessarily have to go through the central organization to share data. Uh, and, and, and, but it's really all about simplification, right? Isn't that really what the democratization comes down to. >>Yeah. It's simplification and collaboration. Right. I don't want to, I want to kind of just what Adam said resonates with me deeply. Um, analytics is one of those, um, massive disciplines inside an enterprise that's really had the weakest of tools. Um, and we just have interfaces to collaborate with, and I think truly this was all drinks and a superpower was helping the analysts get more out of their data, get more out of the analytics, like imagine a world where these people are collaborating and sharing insights in real time and sharing workflows and getting access to new data sources, um, understanding data models better, I think, um, uh, curating those insights. I boring Adam's phrase again. Um, I think that creates a real value inside the organization because frankly in scaling analytics and democratizing analytics and data, we're still in such early phases of this journey. >>So how should we think about designer cloud, which is from Altrix it's really been the on-prem and the server desktop offering. And of course Trifacta is with cloud cloud data warehouses. Right. Uh, how, how should we think about those two products? Yeah, >>I think, I think you should think about them. And, uh, um, as, as very complimentary right designer cloud really shares a lot of DNA and heritage with, uh, designer desktop, um, the low code tooling and that interface, uh, the really appeals to the business analysts, um, and gets a lot of the things that they do well, we've also built it with interoperability in mind, right. So if you started building your workflows in designer desktop, you want to share that with design and cloud, we want to make it super easy for you to do that. Um, and I think over time now we're only a week into, um, this Alliance with, um, with, um, Trifacta, um, I think we have to get deeper inside to think about what does the data engineer really need? What's the business analysts really need and how to design a cloud, and Trifacta really support both of those requirements, uh, while kind of continue to build on the trifecta on the amazing Trifacta cloud platform. >>You know, >>I think we're just going to say, I think that's one of the things that, um, you know, creates a lot of, uh, opportunity as we go forward, because ultimately, you know, Trifacta took a platform, uh, first mentality to everything that we built. So thinking about openness and extensibility and, um, and how over time people could build things on top of factor that are a variety of analytic tool chain, or analytic applications. And so, uh, when you think about, um, Ultrix now starting to, uh, to move some of its capabilities or to provide additional capabilities, uh, in the cloud, um, you know, Trifacta becomes a platform that can accelerate, you know, all of that work and create, uh, uh, a cohesive set of, of cloud-based services that, um, share a common platform. And that maintains independence because both companies, um, have been, uh, you know, fiercely independent, uh, and, and really giving people choice. >>Um, so making sure that whether you're, uh, you know, picking one cloud platform and other, whether you're running things on the desktop, uh, whether you're running in hybrid environments, that, um, no matter what your decision, um, you're always in a position to be able to get out your data. You're always in a position to be able to cleanse transform shape structure, that data, and ultimately to deliver, uh, the analytics that you need. And so I think in that sense, um, uh, you know, this, this again is another reason why the combination, you know, fits so well together, giving people, um, the choice. Um, and as they, as they think about their analytics strategy and their platform strategy going forward, >>Yeah. I make a chuckle, but one of the reasons I always liked Altrix is cause you kinda did the little end run on it. It can be a blocker sometimes, but that created problems, right? Because the organization said, wow, this big data stuff has taken off, but we need security. We need governance. And it's interesting because you've got, you know, ETL has been complex, whereas the visualization tools, they really, you know, really weren't great at governance and security. It took some time there. So that's not, not their heritage. You're bringing those worlds together. And I'm interested, you guys just had your sales kickoff, you know, what was their reaction like? Uh, maybe Suresh, you could start off and maybe Adam, you could bring us home. >>Um, thanks for asking about our sales kickoff. So we met for the first time and you've got a two years, right. For, as, as it is for many of us, um, in person, uh, um, which I think was a, was a real breakthrough as Qualtrics has been on its transformation journey. Uh, we added a Trifacta to, um, the, the potty such as the tour, um, and getting all of our sales teams and product organizations, um, to meet in person in one location. I thought that was very powerful for other the company. Uh, but then I tell you, um, um, the reception for Trifacta was beyond anything I could have imagined. Uh, we were working out him and I will, when he's so hot on, on the deal and the core hypotheses and so on. And then you step back and you're going to share the vision with the field organization, and it blows you away, the energy that it creates among our sellers out of partners. >>And I'm sure Madam will and his team were mocked, um, every single day, uh, with questions and opportunities to bring them in. But Adam, maybe you should share. Yeah, no, it was, uh, it was through the roof. I mean, uh, uh, the, uh, the amount of energy, the, uh, certainly how welcoming everybody was, uh, uh, you know, just, I think the story makes so much sense together. I think culturally, the company is, are very aligned. Um, and, uh, it was a real, uh, real capstone moment, uh, to be able to complete the acquisition and to, and to close and announced, you know, at the kickoff event. And, um, I think, you know, for us, when we really thought about it, you know, when we ended, the story that we told was just, you have this opportunity to really cater to what the end users care about, which is a lot about interactivity and self-service, and at the same time. >>And that's, and that's a lot of the goodness that, um, that Altryx is, has brought, you know, through, you know, you know, years and years of, of building a very vibrant community of, you know, thousands, hundreds of thousands of users. And on the other side, you know, Trifacta bringing in this data engineering focus, that's really about, uh, the governance things that you mentioned and the openness, um, that, that it cares deeply about. And all of a sudden, now you have a chance to put that together into a complete story where the data engineering cloud and analytics, automation, you know, coming together. And, um, and I just think, you know, the lights went on, um, you know, for people instantaneously and, you know, this is a story that, um, that I think the market is really hungry for. And certainly the reception we got from, uh, from the broader team at kickoff was, uh, was a great indication. >>Well, I think the story hangs together really well, you know, one of the better ones I've seen in, in this space, um, and, and you guys coming off a really, really strong quarter. So congratulations on that jets. We have to leave it there. I really appreciate your time today. Yeah. Take a look at this short video. And when we come back, we're going to dig into the ecosystem and the integration into cloud data warehouses and how leading organizations are creating modern data teams and accelerating their digital businesses. You're watching the cube you're leader in enterprise tech coverage. >>This is your data housed neatly insecurely in the snowflake data cloud. And all of it has potential the potential to solve complex business problems, deliver personalized financial offerings, protect supply chains from disruption, cut costs, forecast, grow and innovate. All you need to do is put your data in the hands of the right people and give it an opportunity. Luckily for you. That's the easy part because snowflake works with Alteryx and Alteryx turns data into breakthroughs with just a click. Your organization can automate analytics with drag and drop building blocks, easily access snowflake data with both sequel and no SQL options, share insights, powered by Alteryx data science and push processing to snowflake for lightning, fast performance, you get answers you can put to work in your teams, get repeatable processes they can share in that's exciting because not only is your data no longer sitting around in silos, it's also mobilized for the next opportunity. Turn your data into a breakthrough Alteryx and snowflake >>Okay. We're back here in the queue, focusing on the business promise of the cloud democratizing data, making it accessible and enabling everyone to get value from analytics, insights, and data. We're now moving into the eco systems segment the power of many versus the resources of one. And we're pleased to welcome. Barb Hills camp was the senior vice president partners and alliances at Ultrix and a special guest Terek do week head of technology alliances at snowflake folks. Welcome. Good to see you. >>Thank you. Thanks for having me. Good to see >>Dave. Great to see you guys. So cloud migration, it's one of the hottest topics. It's the top one of the top initiatives of senior technology leaders. We have survey data with our partner ETR it's number two behind security, and just ahead of analytics. So we're hovering around all the hot topics here. Barb, what are you seeing with respect to customer, you know, cloud migration momentum, and how does the Ultrix partner strategy fit? >>Yeah, sure. Partners are central company's strategy. They always have been. We recognize that our partners have deep customer relationships. And when you connect that with their domain expertise, they're really helping customers on their cloud and business transformation journey. We've been helping customers achieve their desired outcomes with our partner community for quite some time. And our partner base has been growing an average of 30% year over year, that partner community and strategy now addresses several kinds of partners, spanning solution providers to global SIS and technology partners, such as snowflake and together, we help our customers realize the business promise of their journey to the cloud. Snowflake provides a scalable storage system altereds provides the business user friendly front end. So for example, it departments depend on snowflake to consolidate data across systems into one data cloud with Altryx business users can easily unlock that data in snowflake solving real business outcomes. Our GSI and solution provider partners are instrumental in providing that end to end benefit of a modern analytic stack in the cloud providing platform, guidance, deployment, support, and other professional services. >>Great. Let's get a little bit more into the relationship between Altrix and S in snowflake, the partnership, maybe a little bit about the history, you know, what are the critical aspects that we should really focus on? Barb? Maybe you could start an Interra kindly way in as well. >>Yeah, so the relationship started in 2020 and all shirts made a big bag deep with snowflake co-innovating and optimizing cloud use cases together. We are supporting customers who are looking for that modern analytic stack to replace an old one or to implement their first analytic strategy. And our joint customers want to self-serve with data-driven analytics, leveraging all the benefits of the cloud, scalability, accessibility, governance, and optimizing their costs. Um, Altrix proudly achieved. Snowflake's highest elite tier in their partner program last year. And to do that, we completed a rigorous third party testing process, which also helped us make some recommended improvements to our joint stack. We wanted customers to have confidence. They would benefit from high quality and performance in their investment with us then to help customers get the most value out of the destroyed solution. We developed two great assets. One is the officer starter kit for snowflake, and we coauthored a joint best practices guide. >>The starter kit contains documentation, business workflows, and videos, helping customers to get going more easily with an altered since snowflake solution. And the best practices guide is more of a technical document, bringing together experiences and guidance on how Altryx and snowflake can be deployed together. Internally. We also built a full enablement catalog resources, right? We wanted to provide our account executives more about the value of the snowflake relationship. How do we engage and some best practices. And now we have hundreds of joint customers such as Juniper and Sainsbury who are actively using our joint solution, solving big business problems much faster. >>Cool. Kara, can you give us your perspective on the partnership? >>Yeah, definitely. Dave, so as Barb mentioned, we've got this standing very successful partnership going back years with hundreds of happy joint customers. And when I look at the beginning, Altrix has helped pioneer the concept of self-service analytics, especially with use cases that we worked on with for, for data prep for BI users like Tableau and as Altryx has evolved to now becoming from data prep to now becoming a full end to end data science platform. It's really opened up a lot more opportunities for our partnership. Altryx has invested heavily over the last two years in areas of deep integration for customers to fully be able to expand their investment, both technologies. And those investments include things like in database pushed down, right? So customers can, can leverage that elastic platform, that being the snowflake data cloud, uh, with Alteryx orchestrating the end to end machine learning workflows Alteryx also invested heavily in snow park, a feature we released last year around this concept of data programmability. So all users were regardless of their business analysts, regardless of their data, scientists can use their tools of choice in order to consume and get at data. And now with Altryx cloud, we think it's going to open up even more opportunities. It's going to be a big year for the partnership. >>Yeah. So, you know, Terike, we we've covered snowflake pretty extensively and you initially solve what I used to call the, I still call the snake swallowing the basketball problem and cloud data warehouse changed all that because you had virtually infinite resources, but so that's obviously one of the problems that you guys solved early on, but what are some of the common challenges or patterns or trends that you see with snowflake customers and where does Altryx come in? >>Sure. Dave there's there's handful, um, that I can come up with today, the big challenges or trends for us, and Altrix really helps us across all of them. Um, there are three particular ones I'm going to talk about the first one being self-service analytics. If we think about it, every organization is trying to democratize data. Every organization wants to empower all their users, business users, um, you know, the, the technology users, but the business users, right? I think every organization has realized that if everyone has access to data and everyone can do something with data, it's going to make them competitively, give them a competitive advantage with Altrix is something we share that vision of putting that power in the hands of everyday users, regardless of the skillsets. So, um, with self-service analytics, with Ultrix designer they've they started out with self-service analytics as the forefront, and we're just scratching the surface. >>I think there was an analyst, um, report that shows that less than 20% of organizations are truly getting self-service analytics to their end users. Now, with Altryx going to Ultrix cloud, we think that's going to be a huge opportunity for us. Um, and then that opens up the second challenge, which is machine learning and AI, every organization is trying to get predictive analytics into every application that they have in order to be competitive in order to be competitive. Um, and with Altryx creating this platform so they can cater to both the everyday business user, the quote unquote, citizen data scientists, and making a code friendly for data scientists to be able to get at their notebooks and all the different tools that they want to use. Um, they fully integrated in our snow park platform, which I talked about before, so that now we get an end to end solution caring to all, all lines of business. >>And then finally this concept of data marketplaces, right? We, we created snowflake from the ground up to be able to solve the data sharing problem, the big data problem, the data sharing problem. And Altryx um, if we look at mobilizing your data, getting access to third-party datasets, to enrich with your own data sets, to enrich with, um, with your suppliers and with your partners, data sets, that's what all customers are trying to do in order to get a more comprehensive 360 view, um, within their, their data applications. And so with Altryx alterations, we're working on third-party data sets and marketplaces for quite some time. Now we're working on how do we integrate what Altrix is providing with the snowflake data marketplace so that we can enrich these workflows, these great, great workflows that Altrix writing provides. Now we can add third party data into that workflow. So that opens up a ton of opportunities, Dave. So those are three I see, uh, easily that we're going to be able to solve a lot of customer challenges with. >>So thank you for that. Terrick so let's stay on cloud a little bit. I mean, Altrix is undergoing a major transformation, big focus on the cloud. How does this cloud launch impact the partnership Terike from snowflakes perspective and then Barb, maybe, please add some color. >>Yeah, sure. Dave snowflake started as a cloud data platform. We saw our founders really saw the challenges that customers are having with becoming data-driven. And the biggest challenge was the complexity of having imagine infrastructure to even be able to do it, to get applications off the ground. And so we created something to be cloud-native. We created to be a SAS managed service. So now that that Altrix is moving to the same model, right? A cloud platform, a SAS managed service, we're just, we're just removing more of the friction. So we're going to be able to start to package these end to end solutions that are SAS based that are fully managed. So customers can, can go faster and they don't have to worry about all of the underlying complexities of, of, of stitching things together. Right? So, um, so that's, what's exciting from my viewpoint >>And I'll follow up. So as you said, we're investing heavily in the cloud a year ago, we had two pre desktop products, and today we have four cloud products with cloud. We can provide our users with more flexibility. We want to make it easier for the users to leverage their snowflake data in the Alteryx platform, whether they're using our beloved on-premise solution or the new cloud products were committed to that continued investment in the cloud, enabling our joint partner solutions to meet customer requirements, wherever they store their data. And we're working with snowflake, we're doing just that. So as customers look for a modern analytic stack, they expect that data to be easily accessible, right within a fast, secure and scalable platform. And the launch of our cloud strategy is a huge leap forward in making Altrix more widely accessible to all users in all types of roles, our GSI and our solution provider partners have asked for these cloud capabilities at scale, and they're excited to better support our customers, cloud and analytic >>Are. How about you go to market strategy? How would you describe your joint go to market strategy with snowflake? >>Sure. It's simple. We've got to work backwards from our customer's challenges, right? Driving transformation to solve problems, gain efficiencies, or help them save money. So whether it's with snowflake or other GSI, other partner types, we've outlined a joint journey together from recruit solution development, activation enablement, and then strengthening our go to market strategies to optimize our results together. We launched an updated partner program and within that framework, we've created new benefits for our partners around opportunity registration, new role based enablement and training, basically extending everything we do internally for our own go-to-market teams to our partners. We're offering partner, marketing resources and funding to reach new customers together. And as a matter of fact, we recently launched a fantastic video with snowflake. I love this video that very simply describes the path to insights starting with your snowflake data. Right? We do joint customer webinars. We're working on joint hands-on labs and have a wonderful landing page with a lot of assets for our customers. Once we have an interested customer, we engage our respective account managers, collaborating through discovery questions, proof of concepts really showcasing the desired outcome. And when you combine that with our partners technology or domain expertise, it's quite powerful, >>Dark. How do you see it? You'll go to market strategy. >>Yeah. Dave we've. Um, so we initially started selling, we initially sold snowflake as technology, right? Uh, looking at positioning the diff the architectural differentiators and the scale and concurrency. And we noticed as we got up into the larger enterprise customers, we're starting to see how do they solve their business problems using the technology, as well as them coming to us and saying, look, we want to also know how do you, how do you continue to map back to the specific prescriptive business problems we're having? And so we shifted to an industry focus last year, and this is an area where Altrix has been mature for probably since their inception selling to the line of business, right? Having prescriptive use cases that are particular to an industry like financial services, like retail, like healthcare and life sciences. And so, um, Barb talked about these, these starter kits where it's prescriptive, you've got a demo and, um, a way that customers can get off the ground and running, right? >>Cause we want to be able to shrink that time to market, the time to value that customers can watch these applications. And we want to be able to, to tell them specifically how we can map back to their business initiatives. So I see a huge opportunity to align on these industry solutions. As BARR mentioned, we're already doing that where we've released a few around financial services working in healthcare and retail as well. So that is going to be a way for us to allow customers to go even faster and start to map two lines of business with Alteryx. >>Great. Thanks Derek. Bob, what can we expect if we're observing this relationship? What should we look for in the coming year? >>A lot specifically with snowflake, we'll continue to invest in the partnership. Uh, we're co innovators in this journey, including snow park extensibility efforts, which Derek will tell you more about shortly. We're also launching these great news strategic solution blueprints, and extending that at no charge to our partners with snowflake, we're already collaborating with their retail and CPG team for industry blueprints. We're working with their data marketplace team to highlight solutions, working with that data in their marketplace. More broadly, as I mentioned, we're relaunching the ultra partner program designed to really better support the unique partner types in our global ecosystem, introducing new benefits so that with every partner, achievement or investment with ultra score, providing our partners with earlier access to benefits, um, I could talk about our program for 30 minutes. I know we don't have time. The key message here Alteryx is investing in our partner community across the business, recognizing the incredible value that they bring to our customers every day. >>Tarik will give you the last word. What should we be looking for from, >>Yeah, thanks. Thanks, Dave. As BARR mentioned, Altrix has been the forefront of innovating with us. They've been integrating into, uh, making sure again, that customers get the full investment out of snowflake things like in database push down that I talked about before that extensibility is really what we're excited about. Um, the ability for Ultrix to plug into this extensibility framework that we call snow park and to be able to extend out, um, ways that the end users can consume snowflake through, through sequel, which has traditionally been the way that you consume snowflake as well as Java and Scala, not Python. So we're excited about those, those capabilities. And then we're also excited about the ability to plug into the data marketplace to provide third party data sets, right there probably day sets in, in financial services, third party, data sets and retail. So now customers can build their data applications from end to end using ultrasound snowflake when the comprehensive 360 view of their customers, of their partners, of even their employees. Right? I think it's exciting to see what we're going to be able to do together with these upcoming innovations. Great >>Barb Tara, thanks so much for coming on the program, got to leave it right there in a moment, I'll be back with some closing thoughts in a summary, don't go away. >>1200 hours of wind tunnel testing, 30 million race simulations, 2.4 second pit stops make that 2.3. The sector times out the wazoo, whites are much of this velocity's pressures, temperatures, 80,000 components generating 11.8 billion data points and one analytics platform to make sense of it all. When McLaren needs to turn complex data into insights, they turn to Altryx Qualtrics analytics, automation, >>Okay, let's summarize and wrap up the session. We can pretty much agree the data is plentiful, but organizations continue to struggle to get maximum value out of their data investments. The ROI has been elusive. There are many reasons for that complexity data, trust silos, lack of talent and the like, but the opportunity to transform data operations and drive tangible value is immense collaboration across various roles. And disciplines is part of the answer as is democratizing data. This means putting data in the hands of those domain experts that are closest to the customer and really understand where the opportunity exists and how to best address them. We heard from Jay Henderson that we have all this data exhaust and cheap storage. It allows us to keep it for a long time. It's true, but as he pointed out that doesn't solve the fundamental problem. Data is spewing out from our operational systems, but much of it lacks business context for the data teams chartered with analyzing that data. >>So we heard about the trend toward low code development and federating data access. The reason this is important is because the business lines have the context and the more responsibility they take for data, the more quickly and effectively organizations are going to be able to put data to work. We also talked about the harmonization between centralized teams and enabling decentralized data flows. I mean, after all data by its very nature is distributed. And importantly, as we heard from Adam Wilson and Suresh Vittol to support this model, you have to have strong governance and service the needs of it and engineering teams. And that's where the trifecta acquisition fits into the equation. Finally, we heard about a key partnership between Altrix and snowflake and how the migration to cloud data warehouses is evolving into a global data cloud. This enables data sharing across teams and ecosystems and vertical markets at massive scale all while maintaining the governance required to protect the organizations and individuals alike. >>This is a new and emerging business model that is very exciting and points the way to the next generation of data innovation in the coming decade. We're decentralized domain teams get more facile access to data. Self-service take more responsibility for quality value and data innovation. While at the same time, the governance security and privacy edicts of an organization are centralized in programmatically enforced throughout an enterprise and an external ecosystem. This is Dave Volante. All these videos are available on demand@theqm.net altrix.com. Thanks for watching accelerating automated analytics in the cloud made possible by Altryx. And thanks for watching the queue, your leader in enterprise tech coverage. We'll see you next time.

Published Date : Mar 1 2022

SUMMARY :

It saw the need to combine and prep different data types so that organizations anyone in the business who wanted to gain insights from data and, or let's say use AI without the post isolation economy is here and we do so with a digital We're kicking off the program with our first segment. So look, you have a deep product background, product management, product marketing, And that results in a situation where the organization's, you know, the direction that your customers want to go and the problems that you're solving, what role does the cloud and really, um, you know, create a lot of the underlying data sets that are used in some of this, into the, to the business user with hyper Anna. of our designer desktop product, you know, really, as they look to take the next step, comes into the mix that deeper it angle that we talked about, how does this all fit together? analytics and providing access to all these different groups of people, um, How much of this you've been able to share with your customers and maybe your partners. Um, and, and this idea that they're going to move from, you know, So it's democratizing data is the ultimate goal, which frankly has been elusive for most You know, the data gravity has been moving to the cloud. So, uh, you know, getting everyone involved and accessing AI and machine learning to unlock seems logical that domain leaders are going to take more responsibility for data, And I think, you know, the exciting thing for us at Altryx is, you know, we want to facilitate that. the tail, or maybe the other way around, you mentioned digital exhaust before. the data and analytics layers that they have, um, really to help democratize the We take a deep dive into the Altryx recent acquisition of Trifacta with Adam Wilson It's go time, get ready to accelerate your data analytics journey the CEO of Trifacta. serving business analysts and how the hyper Anna acquisition brought you deeper into the with that in mind, you know, we know designer and are the products And Joe in the early days, talked about flipping the model that really birth Trifacta was, you know, why is it that the people who know the data best can't And so, um, that was really, you know, what, you know, the origin story of the company but the big data pipeline is hasn't gotten there. um, you know, there hasn't been a single platform for And now the data engineer, which is really And so, um, I think when we, when I sat down with Suresh and with mark and the team and, but specifically we're seeing, you know, I almost don't even want to call it a data warehouse anyway, Um, and we just have interfaces to collaborate And of course Trifacta is with cloud cloud data warehouses. What's the business analysts really need and how to design a cloud, and Trifacta really support both in the cloud, um, you know, Trifacta becomes a platform that can You're always in a position to be able to cleanse transform shape structure, that data, and ultimately to deliver, And I'm interested, you guys just had your sales kickoff, you know, what was their reaction like? And then you step back and you're going to share the vision with the field organization, and to close and announced, you know, at the kickoff event. And certainly the reception we got from, Well, I think the story hangs together really well, you know, one of the better ones I've seen in, in this space, And all of it has potential the potential to solve complex business problems, We're now moving into the eco systems segment the power of many Good to see So cloud migration, it's one of the hottest topics. on snowflake to consolidate data across systems into one data cloud with Altryx business the partnership, maybe a little bit about the history, you know, what are the critical aspects that we should really focus Yeah, so the relationship started in 2020 and all shirts made a big bag deep with snowflake And the best practices guide is more of a technical document, bringing together experiences and guidance So customers can, can leverage that elastic platform, that being the snowflake data cloud, one of the problems that you guys solved early on, but what are some of the common challenges or patterns or trends everyone has access to data and everyone can do something with data, it's going to make them competitively, application that they have in order to be competitive in order to be competitive. to enrich with your own data sets, to enrich with, um, with your suppliers and with your partners, So thank you for that. So now that that Altrix is moving to the same model, And the launch of our cloud strategy How would you describe your joint go to market strategy the path to insights starting with your snowflake data. You'll go to market strategy. And so we shifted to an industry focus So that is going to be a way for us to allow What should we look for in the coming year? blueprints, and extending that at no charge to our partners with snowflake, we're already collaborating with Tarik will give you the last word. Um, the ability for Ultrix to plug into this extensibility framework that we call Barb Tara, thanks so much for coming on the program, got to leave it right there in a moment, I'll be back with 11.8 billion data points and one analytics platform to make sense of it all. This means putting data in the hands of those domain experts that are closest to the customer are going to be able to put data to work. While at the same time, the governance security and privacy edicts

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DerekPERSON

0.99+

Dave VellantePERSON

0.99+

Suresh VetolPERSON

0.99+

AltryxORGANIZATION

0.99+

JayPERSON

0.99+

Joe HellersteinPERSON

0.99+

DavePERSON

0.99+

Dave VolantePERSON

0.99+

AltrixORGANIZATION

0.99+

Jay HendersonPERSON

0.99+

DavidPERSON

0.99+

AdamPERSON

0.99+

BarbPERSON

0.99+

JeffPERSON

0.99+

2020DATE

0.99+

BobPERSON

0.99+

TrifactaORGANIZATION

0.99+

Suresh VittolPERSON

0.99+

TylerPERSON

0.99+

JuniperORGANIZATION

0.99+

AlteryxORGANIZATION

0.99+

UltrixORGANIZATION

0.99+

30 minutesQUANTITY

0.99+

TerikePERSON

0.99+

Adam WilsonPERSON

0.99+

JoePERSON

0.99+

SureshPERSON

0.99+

TerrickPERSON

0.99+

demand@theqm.netOTHER

0.99+

thousandsQUANTITY

0.99+

AlconORGANIZATION

0.99+

KaraPERSON

0.99+

last yearDATE

0.99+

threeQUANTITY

0.99+

QualtricsORGANIZATION

0.99+

less than 20%QUANTITY

0.99+

hundredsQUANTITY

0.99+

oneQUANTITY

0.99+

OneQUANTITY

0.99+

JavaTITLE

0.99+

more than seven yearsQUANTITY

0.99+

two acquisitionsQUANTITY

0.99+

Dave vellante Red Hat Transitions


 

>> So Alex, we're going to do, this is a different segment so I'll do a break, okay. What's that? Yeah, yeah. The 2019 SolarWinds hack represents a new threat milestone in the technology industry. The hackers, they patiently waited and evolved their intrusion over several years, literally. They lived in stealth. They tested, they retested their techniques and they use very sophisticated methods to get into email systems, networks, authentication systems, and numerous points in the software supply chain to replicate the malicious code at massive scale. Now they use techniques like they would insert malware steal data, and then they'd remove the malicious code before it was discovered. And so many other advanced approaches were used to cover their tracks. Now the really scary thing about this breach is people often think, oh, I'm good. Thankfully, I don't use SolarWinds, but it's not true. You're not safe because the domino effect of this hack has created massive concerns. We actually, to this day, we don't know the true scope of this attack and who really was impacted. And we may never know. Connecting all the dots on this breach is extremely difficult. Moreover, new threats like those exposed in the recent Log4j vulnerability, seemed to hit the news cycle weekly. And they further underscore the risk to organizations, not just large companies by the way, but small businesses, mid-size organizations and individuals. Hello, my name is Dave Vellante, and welcome to theCUBE's special look at managing risk in the digital supply chain, made possible by Red Hat. Today we're going to hear from some of the top experts that will help you better understand how to think about the exposures in the software supply chain, some of the steps we can all take to reduce our risks and how an endless game of escalation will likely play out over the next decade. Up next is our first segment hosted by Dave Nicholson of theCUBE. He's with Luke Hinds and Vincent Danen of Red Hat. They're going to talk about where the greatest threats exist. How to think about open source versus other commercial software. And discuss ways organizations can reduce their risks going forward. Let's get started. I'm going to do that again. Same one, I'll do each one twice. The 2019 SolarWinds hack represents a new threat milestone in the technology industry. The hackers, they patiently waited and evolved their intrusion over several years, literally. They lived in stealth. They tested and they retested their techniques and used very sophisticated methods to get into email systems, networks, authentication systems in numerous points in the software supply chain to replicate the malicious code at massive scale. They would use techniques like inserting malware and then they would steal data. And then they would remove the code before it was discovered. And they use many other advanced approaches to cover their tracks. The really scary thing about this breach is, people often think, oh, well, I'm good. Thankfully, I don't use SolarWinds, but it's not true you're not safe, because the domino effect of this hack it's created a massive massive concerns throughout the industry. We actually to this day, we don't know the true scope of this attack and we don't even know who was impacted. We may never know. So connecting all the dots in this breach is extremely difficult. Moreover, new threats like those exposed in the recent Log4j vulnerability, they seem to hit the news like weekly. And they further underscore the risks that organizations face, not just large companies by the way, small businesses, mid-size organizations and individuals. Hello, my name is Dave Vellante, and welcome to theCUBE's special look at managing risk in the digital supply chain, made possible by Red Hat. Today, we're going to hear from some of the top experts that will help you better understand how to think about the exposures in the software supply chain, some of the steps that we can all take to reduce our risks and how an endless game of escalation is likely going to play out over the next decade. Up next is our first segment hosted by Dave Nicholson of theCUBE. He's with Luke Hinds and Vincent Danen of Red Hat. They're going to talk about where the greatest threats exist and how to think about open source versus other commercial software. And discuss ways that organizations can reduce their risk going forward. Let's get started. When we return Andrea Hall, a specialist solution architect and project manager for security and compliance, along with Andrew Block, who is a distinguished architect, both from Red Hat will join me. You're watching theCUBE, the global leader in enterprise tech coverage. Now when we return Andrea Hall, who's a specialist solutions architect and project manager for security and compliance will join me along with Andrew Block, who's a distinguished architect. They're both from Red Hat. You're watching theCUBE, the global leader in enterprise tech coverage. So look, I wish I could say there's an end to these threats, there isn't. They will continue indefinitely. Now the adversaries they're well-funded, they're motivated and sophisticated. Your job as practitioners is to make it less profitable for hackers. At the end of the day, this is a business for them and the hackers want value it's all about ROI. That means benefit over cost. So if you can increase the denominator, it lowers their value and they'll go elsewhere to fish in a more productive place. The hard reality is bad user practices are going to trump good security every time. And that's where the vulnerability starts. So shoring up the basics, that's table stakes. Beyond that, working with strong technology partners can bring expertise to compliment your team's skills and reduce the threat against these sophisticated attacks. We hope this program was informative and will inspire you to take action. All of these videos are available on demand, check out thecube.net and theCUBE's and Red Hat's, social channels, and a variety of other places that we'll share with the community. Thanks to our guests today for Dave Nicholson and the entire CUBE team, this is Dave Vellante. Thanks for watching, and we'll see you next time. Do that again. (cough) Excuse me. So look, I wish I could say there's an end. I'll try it again. So look, I wish I could say there's an end to these threats, there isn't. They will continue indefinitely. The adversaries they're well-funded, they're motivated and they're sophisticated. Your job as practitioners is to try and make it less profitable for the hackers. At the end of the day, this is a business for them. And the hackers, what do they want? They want value. It's all about ROI for them. That means benefit over cost. If you can increase the denominator, it lowers their value and they're going to go elsewhere, and they'll fish in more productive places. The hard reality is that bad user practices will trump good security every time. And that's where the vulnerability starts. So shoring up the basics, that's table stakes. Now beyond that, working with strong technology partners can bring expertise to compliment your team's skills, and reduce the threat against these sophisticated attacks. We hope this program was informative and will inspire you to take action. All of these videos that are available on demand at thecube.net and both theCUBE's and Red Hat's social channels, and a variety of other places that we'll share with the community. Thanks to all our guests today for Dave Nicholson and the entire CUBE team. This is Dave Vellante. I appreciate you watching and we'll see you next time.

Published Date : Feb 1 2022

SUMMARY :

and how to think about open source

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Dave NicholsonPERSON

0.99+

Vincent DanenPERSON

0.99+

Andrea HallPERSON

0.99+

Luke HindsPERSON

0.99+

Red HatORGANIZATION

0.99+

Andrew BlockPERSON

0.99+

TodayDATE

0.99+

bothQUANTITY

0.99+

first segmentQUANTITY

0.99+

theCUBEORGANIZATION

0.99+

AlexPERSON

0.99+

thecube.netOTHER

0.99+

todayDATE

0.99+

CUBEORGANIZATION

0.99+

each oneQUANTITY

0.94+

next decadeDATE

0.94+

Dave vellantePERSON

0.94+

twiceQUANTITY

0.91+

SolarWindsTITLE

0.84+

Log4jTITLE

0.83+

yearsQUANTITY

0.7+

2019DATE

0.68+

SolarWindsORGANIZATION

0.68+

several yearsQUANTITY

0.61+

SolarWindsEVENT

0.57+

HatORGANIZATION

0.55+

RedTITLE

0.5+

Day 2 Wrap with Jerry Chen | AWS re:Invent 2021


 

(upbeat music) >> Welcome back, everyone, to theCUBE's live coverage, day one wrap-up. I'm John Furrier, with Dave Vellante. We have Jerry Chen, special guest who's been with us every year on theCUBE since inception. Certainly every AWS re:Invent, nine years straight. Jerry Chen, great to see you for our guest analyst's wrap up VC general partner, Greylock partners, good to see you. >> John, Dave, it's great to see you guys. Thanks for having me again. It wouldn't be re:Invent without the three of us sitting here and we missed last year, right, because of COVID. So we have to make up for lost time. >> John: We did a virtual one- >> Dave: we did virtual stuff= >> John: wasn't the same as in-person. >> Dave: Definitely not the same. >> Jerry: Not the same thing. So, it's good to see you guys again in person, and less than 6 feet apart. >> Cheers, yeah. >> And 7,000 people here, showing that the event's still relevant. >> Jerry: Yeah. >> Some people would kill for those numbers, it's a bad year for Amazon, down from 60,000. >> Jerry: Yeah. >> So, ecosystem's booming. Okay, let's get to it. Day one in the books, new CEO, new sheriff in town, his name's Adam Selipsky. Your take? >> Well, Adam's new, but he's old, right? Something, you know, like something new, something old, something blue, right? It's so, Adam was early Amazon, so he had that founding DNA. Left, you know, CEO of Tableau, acquired by Salesforce, came back few months ago. So I think it was a great move, because one, he's got the history and culture under Jassy, so he's definitely the Bezos Jassy tree of leadership, but yet he's been outside the bubble. Right? So he actually knows what it means to run a company not on the Amazon platform. So, I think Adam's a great choice to lead AWS for what we call it, like maybe act two, right? Act one, the first X years with Jassy, and maybe this is the second act under Adam. >> Yeah. And he's got- and he was very technical, hung around all the techies, James Hamilton, DeSantis, all the engineers, built that core primitives. Now, as they say, this cloud next gen's here, act two, it's about applications. >> Jerry: Yeah. >> Infrastructure as code is in place. Interesting area. Where's the growth come from? So, look, you know, the ecosystem has got to build these super clouds, or as you say, Castles on the Cloud, which you coined, but you brought this up years ago, that the moats and the value has to be in there somewhere. Do you want to revise that prediction now that you see what's coming from Selipsky? >> Okay, well, so let's refresh. Greylock.com/castles has worked out, like we did, but a lot of thought leadership and the two of you, have informed my thinking at Castles in the Cloud, how to compete against Amazon in the cloud. So you'd argue act one, the startup phase, the first, you know, X years at Amazon was from 2008 to, you know, 2021, the first X years, building the platform, digging the moats. Right? So what did you have? You have castle the platform business, economies of scale, which means decreasing marginal costs and natural network effects. So once the moat's in place and you had huge market share, what do you for act two, right? Now the moats are in place, you can start exploring the moats for I think, Adam talked about in your article, horizontal and verticals, right? Horizontal solutions up the stack, like Amazon Connect, CRM solutions, right? Horizontal apps, maybe the app layer, and verticals, industrials, financials, healthcare, et cetera. So, I think Jassy did a foundation of the castle and now we're seeing, you know, what Adam and his generation would do for act two. >> So he's, so there's almost like an act one A, because if you take the four hyperscalers, they're about, maybe do 120 billion this year, out of, I don't know, pick a number, it's many hundreds of billions, at least in infrastructure. >> Jerry: Correct. >> And those four hyperscalers growing at 35% collectively, right? So there's some growth there, but I feel like there's got to be deeper business integration, right? It's not just about IT transformation, it's about deeper- So that's maybe where this Connect like stuff comes, but are there enough of those? You know, I didn't, I haven't, I didn't hear a lot of that this morning. I heard a little bit, ML- >> Jerry: Sure. >> AI into Connect, but where's the next Connect, right? They've got to do dozens of those in order to go deeper. >> Either, Dave, dozens of those Connects or more of those premise, so the ML announcement was today. So you look at what Twilio did by buying Segment, right? Deconstruct a CRM to compete against Adam Selipsky's old acquire of Salesforce.com. They bought Segment, so Twilio now has communicates, like texting, messaging, email, but all the data come from Segment. >> Dave: With consumption-based pricing. >> With consumption-based pricing. So, right? So that's an example of kind of what the second act of cloud looks like. It may not look like full SaaS apps like Salesforce.com, but these primitives, both horizontally vertically, because again, what does Amazon have as an asset that other guys don't? Install based developers. Developers aren't going to necessarily build or consume SaaS apps, but they're going to consume things like these API's and primitives. And so you look around, what's cloud act two look like? It may not be VM's or containers. It may be API's like Stripe and Billing, Twilio messaging, right? Concepts like that. So, we'll see what the next act at cloud looks like. And they announced a bunch of stuff today, serverless for the data analytics, right? So serverless is this move towards not consuming raw compute and storage, but APIs. >> What about competition? Microsoft is nipping at the heels of AWS. >> Dave: John put them out of business earlier today. [John and Dave Laugh] >> No, I said, quote, I'll just- let me rephrase. I said, if Amazon goes unchecked- >> Jerry: Sure. >> They'll annihilate Microsoft's ecosystem. Because if you're an ISV, why wouldn't you want to run on the best platform? >> Jerry: Sure. >> Speeds and feeds matter when you have these shifts of software development. >> Jerry: You want them both. >> So, you know, I mean, you thought about the 80's, if you were at database, you wanted the best processor. So I think this Annapurna vertical integrated stacks are interesting because if my app runs better and I have a platform, prefabricated or purpose-built platform, to be there for me, I'm going to build a great SaaS app. If it runs faster and it cost less, I'm going to flop to Amazon. That's just, that's my prediction. >> So I think better changes, right? And so I think if you're Amazon, you say cheaper, better, faster, and they're investing in chips, proprietary silicon to run better, faster, their machine learning training chips, but if you're Azure or Google, you got to redefine what better is. And as a startup investor, we're always trying to do category definition, right? Like here's a category by spin. So now, if you're Azure or Google, there are things you can say that are better, and Google argued their chips, their TensorFlow, are better. Azure say our regions, our security, our enterprise readiness is better. And so all of a sudden, the criteria "what's better" changes. So from faster and cheaper to maybe better compliance, better visibility, better manageability, different colors, I don't know, right? You have to change the game , because if you play the same game on Amazon's turf, to your point, John, it- it's game over because they have economies of scale. But I think Azure and Google and other clouds, the superclouds, or subclouds are changing the game, what it means to compete. And so I think what's going on, just two more seconds, from decentralized cloud, being Web 3 and crypto, that's a whole 'nother can of worms, to Edge compute, what Cloudflare are doing with R2 and storage, they're trying to change the name of the game. >> Well, that's right. If you go frontal against Amazon, you're got to get decimated. You got to move the goalposts for better. And I think that's a good way to look at it, Dave. What does better mean? So that's the question that's on the table. What does that look like? And I think that's an unknown, that's coming. Okay, back to the start-ups. Category definition. That's an awesome term. That to me is a key thing. How do you look at what a category is on your sub- on your Castles of the Cloud, you brought up how many categories of- >> Jerry: 33 markets and a bunch of submarkets, yeah. >> Yeah. Explain that concept. >> So, we did Castle in the Clouds where my team looked at all the services offered at Azure, Google, and Amazon. We downloaded the services and recategorized them to like, 30 plus markets and a bunch of submarkets. Because, the reason why is apples to apples, you know, Amazon, Google, Azure all have databases, but they might call them different things. And so I think first things first is, let's give developers and customers kind of apples to apples comparisons. So I think those are known markets. The key in investing in the cloud, or investing in general, is you're either investing in budget replacement, replacing a known market, cheaper, better database, to your point, or a net new market, right? Which is always tricky. So I think the biggest threat to a lot of the startups and incumbents, the biggest threat by startups and incumbents, is either one, do something cheaper, better in a current market, or find a net new market that they haven't thought about yet. And if you can win that net new market before the rest, then that's unbelievable. We call it the, you know, the blue ocean strategy, >> Dave: Is that essentially what Snowflake has done, started with cheaper, better, and now they're building the data cloud? >> Jerry: I think there's- it's evolution, correct. So they said cheaper, better. And the Castle in the Cloud, we talked about, they actually built deep IP. So they went a known category, data warehouses, right? You had Teradata, Redshift, Snowflake cheaper, better, faster. And now let's say, okay, once you have the customers, let's change the name of the game and create a data cloud. And it's TBD whether or not Snowflake can win data cloud. Like we talked about Rockset, one of my investments that's actually move the goalpost saying, oh, data cloud is nice, but real time data is where it's at, and Snowflake and those guys can't play in real time. >> Dave: No, they're not in a position to play in real time data. >> Jerry: Right. >> Dave: I mean, that's right. >> So again, so that's an example of a startup moving the goalpost on what previously was a startup that moved the goalpost on an incumbent. >> Dave: And when you think about Edge, it's going to be real-time AI inferencing at the Edge, and you're right, Snowflake's not set up well at all for that. >> John: So competition wise, how do the people compete? Because this is what Databricks did the same exact thing. I have Ali on the record going back years, "Well, we love Amazon. We're only on Amazon." Now he's talking multicloud. >> So, you know, once you get there, you kind of change your tune cause you've got some scale, but then you got new potential entrants coming in, like Rockset. >> Jerry: Correct. >> So. >> Dave: But then, and if you add up the market caps of just those two companies, Databricks and Snowflake, it's much larger than the database market. So this, we're defining new markets now. >> Jerry: I think there's market cap, especially Snowflake that's in the public market, Databricks is still private, is optimism that there's a second or third act in the database space left to be unlocked. And you look at what's going on in that space, these real-time analytics or real-time apps, for sure there's optimism there. But, but to John's point, you're right, like you earn the right to play the next act, but it's tricky because startups disrupt incumbents and become incumbents, and they're also victims their own success, right? So you're- there's technical debt, there's also business model debt. So you're victims of your own business model, victims of your own success. And so what got you here may not get you to the next phase. And so I think for Amazon, that's a question. For Databricks and Snowflake, that's a question, is what got them here? Can they play to the next act? And look, Apple did it, multiple acts. >> John: Well, I mean, I think I- [Crosstalk] >> John: I think it's whether you take shortcuts or not, if you have debt, you make it a little bit of a shortcut bet. >> Jerry: Yeah. >> Okay. That's cool. But ultimately what you're getting at here is beachhead thinking. Get a beachhead- >> Jerry: Correct. >> Get in the market, and then sequence to a different position. Classic competitive strategy, 101. That's hard to do because you want to win the beachhead- >> I know. >> John: And take a little technical debt and business model debt, cheat a little bit, and then, is it not fortified yet? So beachhead to expansion is the question. >> Jerry: That's every board meeting, John and Dave, that we're in, right? It's called you need a narrow enough wedge to land. And it is like, I don't want the tip of the spear, I want the poison on the tip of a spear, right? [Dave and John Laugh] >> You want, especially in this cloud market, a super focused wedge to land. And the problem is, as a founder, as investor, you're always thinking about the global max, right? Like the ultimate platform winner, but you don't get the right to play the early- the late innings if you don't make it out of the early innings. And so narrow beachhead, sharp wedge, but you got to land in a space, a place of real estate with adjacent tan, adjacent markets, right? Like Uber, black cars, taxi's, food, whatever, right? Snowflake, data warehouse, data cloud. And so I think the key with all startups is you'll hit some ceiling of market size. Is there a second ramp? >> Dave: So it's- the art is when to scale and how fast to scale. >> Right. Picking when, how fast, in which- which best place, it was tough. And so, the best companies are always thinking about their second or third act while the first act's still going. >> John: Yeah. And leveraging cloud to refactor, I think that's the key to Snowflake, was they had the wedge with data warehouse, they saw the position, but refactored and in the cloud with services that they knew Teradata wouldn't use. >> Jerry: Correct. >> And they're in. From there, it's just competitive IP, crank, go to market. >> And then you have the other unnatural things. You have channel, you have installed base of customers, right? And then you start selling more stuff to the same channel, to the same customers. That's what Amazon's doing. All the incumbent's do that. Amazon's got, you know, 300 services now, launching more this week, so now they have channel distribution, right? Every credit card for all the developers, and they have installed base of customers. And so they will just launch new things and serve the customers. So the startups had to disrupt them somehow. >> Well, it's always great to chat with Jerry. Every year we discover and we riff and we identify, in real time, new stuff. We were talking about this whole vertical, horizontal scale and kind of castles early on, years ago. And now it's happened. You were right. Congratulations. That's a great thesis. There's real advantages to build on a cloud. You can get the- you can build a business there. >> Jerry: Right. >> John: That's your thesis. And by the way, these markets are changing. So if you're smart, you can actually compete. >> Jerry: I think you beat, and to Dave's earlier point, you have to adapt, right? And so what's the Darwin thing, it's not the strongest, but the most adaptable. So both- Amazon's adapt and the startups who are the most adaptable will win. >> Dave: Where are you, you guys might've talked about this, where do you stand on the cost of goods sold issue? >> Jerry: Oh, I think everything's true, right? I think you can save money at some scale to repatriate your cloud, but again, Wall Street rewards growth versus COGS, right? So I think you've got a choice between a dollar of growth versus a dollar reducing COGS, people choose growth right now. That may not always be the case, but at some point, if you're a company at some scale and the dollars of growth is slowing down, you definitely have to reduce the dollars in cost. And so you start optimizing cloud costs, and that could be going to Amazon, Azure, or Google, reducing COGS. >> Dave: Negotiate, yeah. >> John: Or, you have no visibility on new net new opportunities. So growth is about new opportunities. >> Correct. >> If you repatriating things, there's no growth. >> Jerry: It's not either, or- >> That's my opinion. >> Jerry: COGS or growth, right? But they're both valid, definitely, so you can do both. And so I don't think- it's what's your priorities, you can't do everything at once. So if I'm a founder or CEO or in this case investor, and I said, "Hey, Dave, and John, if you said I can either save you 25 basis points in gross margin, or I can increase another 10% top line this year", I'm going to say increase the top line, we'll deal with the gross margin later. Not that it's not important, but right now the early phase- >> Priorities. >> Jerry: It's growth. >> Yeah. All right, Jerry Chen, great to see you. Great to have you on, great CUBE alumni, great guest analyst. Thanks for breaking it down. CUBE coverage here in Las Vegas for re:Invent, back in person. Of course, it's a virtual event, we've got a hybrid event for Amazon, as well as theCUBE. I'm John Furrier, you're watching the leader in worldwide tech coverage. Thanks for watching. (Gentle music)

Published Date : Dec 1 2021

SUMMARY :

Jerry Chen, great to see you John, Dave, it's great to see you guys. So, it's good to see you showing that the event's still relevant. it's a bad year for Day one in the books, new so he's definitely the Bezos all the engineers, the Cloud, which you coined, the first, you know, X years at Amazon because if you take the four hyperscalers, there's got to be deeper those in order to go deeper. So you look at what Twilio And so you look around, what's Microsoft is nipping at the heels of AWS. [John and Dave Laugh] I said, if Amazon goes unchecked- run on the best platform? when you have these shifts So, you know, I mean, And so I think if you're Amazon, So that's the question Jerry: 33 markets and a apples to apples, you know, And the Castle in the Cloud, to play in real time data. of a startup moving the goalpost at the Edge, and you're right, I have Ali on the record going back years, but then you got new it's much larger than the database market. in the database space left to be unlocked. if you have debt, But ultimately what That's hard to do because you So beachhead to expansion is the question. It's called you need a And the problem is, as Dave: So it's- the art is when to scale And so, the best companies I think that's the key to Snowflake, IP, crank, go to market. So the startups had to You can get the- you can And by the way, these Jerry: I think you beat, And so you start optimizing cloud costs, John: Or, you have no visibility If you repatriating but right now the early phase- Great to have you on, great CUBE alumni,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

JohnPERSON

0.99+

JerryPERSON

0.99+

AdamPERSON

0.99+

Jerry ChenPERSON

0.99+

Dave VellantePERSON

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

Adam SelipskyPERSON

0.99+

GoogleORGANIZATION

0.99+

Jerry ChenPERSON

0.99+

AppleORGANIZATION

0.99+

John FurrierPERSON

0.99+

secondQUANTITY

0.99+

2008DATE

0.99+

MicrosoftORGANIZATION

0.99+

SnowflakeORGANIZATION

0.99+

John LaughPERSON

0.99+

TwilioORGANIZATION

0.99+

120 billionQUANTITY

0.99+

DatabricksORGANIZATION

0.99+

300 servicesQUANTITY

0.99+

James HamiltonPERSON

0.99+

twoQUANTITY

0.99+

10%QUANTITY

0.99+

Greg Rokita, Edmunds.com & Joel Minnick, Databricks | AWS re:Invent 2021


 

>>We'll come back to the cubes coverage of AWS reinvent 2021, the industry's most important hybrid event. Very few hybrid events, of course, in the last two years. And the cube is excited to be here. Uh, this is our ninth year covering AWS reinvent this the 10th reinvent we're here with Joel Minnick, who the vice president of product and partner marketing at smoking hot company, Databricks and Greg Rokita, who is executive director of technology at Edmonds. If you're buying a car or leasing a car, you gotta go to Edmund's. We're gonna talk about busting data silos, guys. Great to see you again. >>Welcome. Welcome. Glad to be here. >>All right. So Joel, what the heck is a lake house? This is all over the place. Everybody's talking about lake house. What is it? >>And it did well in a nutshell, a Lakehouse is the ability to have one unified platform to handle all of your traditional analytics workloads. So your BI and reporting Trisha, the lake, the workloads that you would have for your data warehouse on the same platform as the workloads that you would have for data science and machine learning. And so if you think about kind of the way that, uh, most organizations have built their infrastructure in the cloud today, what we have is generally customers will land all their data in a data lake and a data lake is fantastic because it's low cost, it's open. It's able to handle lots of different kinds of data. Um, but the challenges that data lakes have is that they don't necessarily scale very well. It's very hard to govern data in a data lake house. It's very hard to manage that data in a data lake, sorry, in a, in a data lake. >>And so what happens is that customers then move the data out of a data lake into downstream systems and what they tend to move it into our data warehouses to handle those traditional reporting kinds of workloads that they have. And they do that because data warehouses are really great at being able to have really great scale, have really great performance. The challenge though, is that data warehouses really only work for structured data. And regardless of what kind of data warehouse you adopt, all data warehouse and platforms today are built on some kind of proprietary format. So once you've put that data into the data warehouse, that's, that is kind of what you're locked into. The promise of the data lake house was to say, look, what if we could strip away all of that complexity and having to move data back and forth between all these different systems and keep the data exactly where it is today and where it is today is in the data lake. >>And then being able to apply a transaction layer on top of that. And the Databricks case, we do that through a technology and open source technology called data lake, or sorry, Delta lake. And what Delta lake allows us to do is when you need it, apply that performance, that reliability, that quality, that scale that you would expect out of a data warehouse directly on your data lake. And if I can do that, then what I'm able to do now is operate from one single source of truth that handles all of my analytics workloads, both my traditional analytics workloads and my data science and machine learning workloads, and being able to have all of those workloads on one common platform. It means that now not only do I get much, much more simple in the way that my infrastructure works and therefore able to operate at much lower costs, able to get things to production much, much faster. >>Um, but I'm also able to now to leverage open source in a much bigger way being that lake house is inherently built on an open platform. Okay. So I'm no longer locked into any kind of data format. And finally, probably one of the most, uh, lasting benefits of a lake house is that all the roles that have to take that have to touch my data for my data engineers, to my data analyst, my data scientists, they're all working on the same data, which means that collaboration that has to happen to go answer really hard problems with data. I'm now able to do much, much more easy because those silos that traditionally exist inside of my environment no longer have to be there. And so Lakehouse is that is the promise to have one single source of truth, one unified platform for all of my data. Okay, >>Great. Thank you for that very cogent description of what a lake house is now. Let's I want to hear from the customer to see, okay, this is what he just said. True. So actually, let me ask you this, Greg, because the other problem that you, you didn't mention about the data lake is that with no schema on, right, it gets messy and Databricks, I think, correct me if I'm wrong, has begun to solve that problem, right? Through series of tooling and AI. That's what Delta liked us. It's a man, like it's a managed service. Everybody thought you were going to be like the cloud era of spark and Brittany Britain, a brilliant move to create a managed service. And it's worked great. Now everybody has a managed service, but so can you paint a picture at Edmonds as to what you're doing with, maybe take us through your journey the early days of a dupe, a data lake. Oh, that sounds good. Throw it in there, paint a picture as to how you guys are using data and then tie it into what y'all just said. >>As Joel said, that they'll the, it simplifies the architecture quite a bit. Um, in a modern enterprise, you have to deal with a variety of different data sources, structured semi-structured and unstructured in the form of images and videos. And with Delta lake and built a lake, you can have one system that handles all those data sources. So what that does is that basically removes the issue of multiple systems that you have to administer. It lowers the cost, and it provides consistency. If you have multiple systems that deal with data, you always arise as the issue as to which data has to be loaded into which system. And then you have issues with consistency. Once you have issues with consistency, business users, as analysts will stop trusting your data. So that was very critical for us to unify the system of data handling in the one place. >>Additionally, you have a massive scalability. So, um, I went to the talk with from apple saying that, you know, he can process two years worth of data. Instead of just two days in an Edmonds, we have this use case of backfilling the data. So often we changed the logic and went to new. We need to reprocess massive amounts of data with the lake house. We can reprocess months worth of data in, in a matter of minutes or hours. And additionally at the data lake houses based on open, uh, open standards, like parquet that allowed us, allowed us to basically hope open source and third-party tools on top of the Delta lake house. Um, for example, a Mattson, we use a Matson for data discovery, and finally, uh, the lake house approach allows us for different skillsets of people to work on the same source data. We have analysts, we have, uh, data engineers, we have statisticians and data scientists using their own programming languages, but working on the same core of data sets without worrying about duplicating data and consistency issues between the teams. >>So what, what is, what are the primary use cases where you're using house Lakehouse Delta? >>So, um, we work, uh, we have several use cases, one of them more interesting and important use cases as vehicle pricing, you have used the Edmonds. So, you know, you go to our website and you use it to research vehicles, but it turns out that pricing and knowing whether you're getting a good or bad deal is critical for our, uh, for our business. So with the lake house, we were able to develop a data pipeline that ingests the transactions, curates the transactions, cleans them, and then feeds that curated a curated feed into the machine learning model that is also deployed on the lake house. So you have one system that handles this huge complexity. And, um, as you know, it's very hard to find unicorns that know all those technologies, but because we have flexibility of using Scala, Java, uh, Python and SQL, we have different people working on different parts of that pipeline on the same system and on the same data. So, um, having Lakehouse really enabled us to be very agile and allowed us to deploy new sources easily when we, when they arrived and fine tune the model to decrease the error rates for the price prediction. So that process is ongoing and it's, it's a very agile process that kind of takes advantage of the, of the different skill sets of different people on one system. >>Because you know, you guys democratized by car buying, well, at least the data around car buying because as a consumer now, you know, I know what they're paying and I can go in of course, but they changed their algorithms as well. I mean, the, the dealers got really smart and then they got kickbacks from the manufacturer. So you had to get smarter. So it's, it's, it's a moving target, I guess. >>Great. The pricing is actually very complex. Like I, I don't have time to explain it to you, but knowing, especially in this crazy market inflationary market where used car prices are like 38% higher year over year, and new car prices are like 10% higher and they're changing rapidly. So having very responsive pricing model is, is extremely critical. Uh, you, I don't know if you're familiar with Zillow. I mean, they almost went out of business because they mispriced their, uh, their houses. So, so if you own their stock, you probably under shorthand of it, but, you know, >>No, but it's true because I, my lease came up in the middle of the pandemic and I went to Edmonds, say, what's this car worth? It was worth like $7,000. More than that. Then the buyout costs the residual value. I said, I'm taking it, can't pass up that deal. And so you have to be flexible. You're saying the premises though, that open source technology and Delta lake and lake house enabled that flexible. >>Yes, we are able to ingest new transactions daily recalculate our model within less than an hour and deploy the new model with new pricing, you know, almost real time. So, uh, in this environment, it's very critical that you kind of keep up to date and ingest their latest transactions as they prices change and recalculate your model that predicts the future prices. >>Because the business lines inside of Edmond interact with the data teams, you mentioned data engineers, data scientists, analysts, how do the business people get access to their data? >>Originally, we only had a core team that was using Lakehouse, but because the usage was so powerful and easy, we were able to democratize it across our units. So other teams within software engineering picked it up and then analysts picked it up. And then even business users started using the dashboarding and seeing, you know, how the price has changed over time and seeing other, other metrics within the, >>What did that do for data quality? Because I feel like if I'm a business person, I might have context of the data that an analyst might not have. If they're part of a team that's servicing all these lines of business, did you find that the data quality, the collaboration affected data? >>Th the biggest thing for us was the fact that we don't have multiple systems now. So you don't have to load the data. Whenever you have to load the data from one system to another, there is always a lag. There's always a delay. There is always a problematic job that didn't do the copy correctly. And the quality is uncertain. You don't know which system tells you the truth. Now we just have one layer of data. Whether you do reports, whether you're data processing or whether you do modeling, they all read the same data. And the second thing is that with the dashboarding capabilities, people that were not very technical that before we could only use Tableau and Tableau is not the easiest thing to use as if you're not technical. Now they can use it. So anyone can see how our pricing data looks, whether you're an executive, whether you're an analyst or a casual business users, >>But Hey, so many questions, you guys are gonna have to combat. I'm gonna run out of time, but you now allow a consumer to buy a car directly. Yes. Right? So that's a new service that you launched. I presume that required new data. We give, we >>Give consumers offers. Yes. And, and that offer you >>Offered to buy my league. >>Exactly. And that offer leverages the pricing that we develop on top of the lake house. So the most important thing is accurately giving you a very good offer price, right? So if we give you a price, that's not so good. You're going to go somewhere else. If we give you price, that's too high, we're going to go bankrupt like Zillow debt, right. >>It took to enable that you're working off the same dataset. Yes. You're going to have to spin up a, did you have to inject new data? Was there a new data source that we're working on? >>Once we curate the data sources and once we clean it, we see the directly to the model. And all of those components are running on the lake house, whether you're curating the data, cleaning it or running the model. The nice thing about lake house is that machine learning is the first class citizen. If you use something like snowflake, I'm not going to slam snowflake here, but you >>Have two different use case. You have >>To, you have to load it into a different system later. You have to load it into a different system. So like good luck doing machine learning on snowflake. Right. >>Whereas, whereas Databricks, that's kind of your raison d'etre >>So what are your, your, your data engineer? I feel like I should be a salesman or something. Yeah. I'm not, I'm not saying that. Just, just because, you know, I was told to, like, I'm saying it because of that's our use case, >>Your use case. So question for each of you, what, what business results did you see when you went to kind of pre lake house, post lake house? What are the, any metrics you can share? And then I wonder, Joel, if you could share a sort of broader what you're seeing across your customer base, but Greg, what can you tell us? Well, >>Uh, before their lake house, we had two different systems. We had one for processing, which was still data breaks. And the second one for serving and we iterated over Nateeza or Redshift, but we figured that maintaining two different system and loading data from one to the other was a huge overhead administration security costs. Um, the fact that you had to consistency issues. So the fact that you can have one system, um, with, uh, centralized data, solves all those issues. You have to have one security mechanism, one administrative mechanism, and you don't have to load the data from one system to the other. You don't have to make compromises. >>It's scale is not a problem because of the cloud, >>Because you can spend clusters at will for different use cases. So your clusters are independent. You have processing clusters that are not affecting your serving clusters. So, um, in the past, if you were running a serving, say on Nateeza or Redshift, if you were doing heavy processing, your reports would be affected, but now all those clusters are separated. So >>Consumer data consumer can take that data from the producer independ >>Using its own cluster. Okay. >>Yeah. I'll give you the final word, Joel. I know it's been, I said, you guys got to come back. This is what have you seen broadly? >>Yeah. Well, I mean, I think Greg's point about scale. It's an interesting one. So if you look at cross the entire Databricks platform, the platform is launching 9 million VMs every day. Um, and we're in total processing over nine exabytes a month. So in terms of just how much data the platform is able to flow through it, uh, and still maintain a extremely high performance is, is bar none out there. And then in terms of, if you look at just kind of the macro environment of what's happening out there, you know, I think what's been most exciting to watch or what customers are experiencing traditionally or, uh, on the traditional data warehouse and kinds of workloads, because I think that's where the promise of lake house really comes into its own is saying, yes, I can run these traditional data warehousing workloads that require a high concurrency high scale, high performance directly on my data lake. >>And, uh, I think probably the two most salient data points to raise up there is, uh, just last month, Databricks announced it's set the world record for the, for the, uh, TPC D S 100 terabyte benchmark. So that is a place where Databricks at the lake house architecture, that benchmark is built to measure data warehouse performance and the lake house beat data warehouse and sat their own game in terms of overall performance. And then what's that spends from a price performance standpoint, it's customers on Databricks right now are able to enjoy that level of performance at 12 X better price performance than what cloud data warehouses provide. So not only are we jumping on this extremely high scale and performance, but we're able to do it much, much more efficiently. >>We're gonna need a whole nother section second segment to talk about benchmarking that guys. Thanks so much, really interesting session and thank you and best of luck to both join the show. Thank you for having us. Very welcome. Okay. Keep it right there. Everybody you're watching the cube, the leader in high-tech coverage at AWS reinvent 2021

Published Date : Nov 30 2021

SUMMARY :

Great to see you again. Glad to be here. This is all over the place. and reporting Trisha, the lake, the workloads that you would have for your data warehouse on And regardless of what kind of data warehouse you adopt, And what Delta lake allows us to do is when you need it, that all the roles that have to take that have to touch my data for as to how you guys are using data and then tie it into what y'all just said. And with Delta lake and built a lake, you can have one system that handles all Additionally, you have a massive scalability. So you have one system that So you had to get smarter. So, so if you own their stock, And so you have to be flexible. less than an hour and deploy the new model with new pricing, you know, you know, how the price has changed over time and seeing other, other metrics within the, lines of business, did you find that the data quality, the collaboration affected data? So you don't have to load But Hey, so many questions, you guys are gonna have to combat. So the most important thing is accurately giving you a very good offer did you have to inject new data? I'm not going to slam snowflake here, but you You have To, you have to load it into a different system later. Just, just because, you know, I was told to, And then I wonder, Joel, if you could share a sort of broader what you're seeing across your customer base, but Greg, So the fact that you can have one system, So, um, in the past, if you were running a serving, Okay. This is what have you seen broadly? So if you look at cross the entire So not only are we jumping on this extremely high scale and performance, but we're able to do it much, Thanks so much, really interesting session and thank you and best of luck to both join the show.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JoelPERSON

0.99+

GregPERSON

0.99+

Joel MinnickPERSON

0.99+

$7,000QUANTITY

0.99+

Greg RokitaPERSON

0.99+

38%QUANTITY

0.99+

two daysQUANTITY

0.99+

10%QUANTITY

0.99+

JavaTITLE

0.99+

DatabricksORGANIZATION

0.99+

two yearsQUANTITY

0.99+

one systemQUANTITY

0.99+

oneQUANTITY

0.99+

ScalaTITLE

0.99+

appleORGANIZATION

0.99+

PythonTITLE

0.99+

SQLTITLE

0.99+

ninth yearQUANTITY

0.99+

last monthDATE

0.99+

lake houseORGANIZATION

0.99+

two different systemsQUANTITY

0.99+

TableauTITLE

0.99+

2021DATE

0.99+

9 million VMsQUANTITY

0.99+

second thingQUANTITY

0.99+

less than an hourQUANTITY

0.99+

LakehouseORGANIZATION

0.98+

12 XQUANTITY

0.98+

DeltaORGANIZATION

0.98+

Delta lake houseORGANIZATION

0.98+

one layerQUANTITY

0.98+

one common platformQUANTITY

0.98+

bothQUANTITY

0.97+

AWSORGANIZATION

0.97+

ZillowORGANIZATION

0.97+

Brittany BritainPERSON

0.97+

Edmunds.comORGANIZATION

0.97+

two different systemQUANTITY

0.97+

EdmondsORGANIZATION

0.97+

over nine exabytes a monthQUANTITY

0.97+

todayDATE

0.96+

Lakehouse DeltaORGANIZATION

0.96+

Delta lakeORGANIZATION

0.95+

TrishaPERSON

0.95+

data lakeORGANIZATION

0.94+

MattsonORGANIZATION

0.92+

second segmentQUANTITY

0.92+

eachQUANTITY

0.92+

MatsonORGANIZATION

0.91+

two most salient data pointsQUANTITY

0.9+

EdmondsLOCATION

0.89+

100 terabyteQUANTITY

0.87+

one single sourceQUANTITY

0.86+

first classQUANTITY

0.85+

NateezaTITLE

0.85+

one securityQUANTITY

0.85+

RedshiftTITLE

0.84+