
Search Results for Sean Knapp:

Sean Knapp, Ascend.io | AWS re:Invent 2022 - Global Startup Program


 

>>And welcome back to theCUBE everyone. I'm John Walls, continuing our coverage here of AWS re:Invent 22. We're part of the AWS Startup Showcase, the global startup program that AWS so proudly sponsors, and with us to talk about what they're doing now in the AWS space is Sean Knapp, the CEO of Ascend.io. Sean, good to have you here with us. We appreciate it. >>Thanks for having me, John. >>Yeah, thanks for the time. First off, gotta show the t-shirt. You caught my attention: big data is a cluster. I don't think you get a lot of argument from some folks, right? But it's your job to make some sense of it, is it not? Tell us about Ascend.io. >>Sure. Ascend.io is a data automation platform. What we do is connect a lot of the disparate parts of what data teams do when they create ETL and ELT data pipelines, and we use advanced levels of automation to make it easier and faster for them to build these complex systems and have their world be a little bit less of a cluster. >>All right. So let's get into automation a little bit then, your definition of automation and how you're applying it to your business case. >>Absolutely. What we see oftentimes is, as spaces mature and evolve, the number of repetitive and repeatable tasks that actually become far less differentiating but far more taxing, if you will, to the business start to accumulate as those common patterns emerge. And as we see standardization around tech stacks, like on Amazon and on Snowflake and on Databricks, and as you see those patterns really start to formalize and standardize, it opens up the door to basically not have your team have to do all those things anymore and write code or perform the same actions that they used to always have to, and you can lean more on technology to properly automate and remove the monotony of those tasks and give your teams greater leverage. >>All right. So let's talk about the journey, say in the past 18 months, in terms of automation. What have you seen from a trend perspective, and how are you trying to address that in order to meet that need? >>Yeah, I think the last 18 months have become really exciting, as we've seen both a very exciting boom and bust cycle that are driving a lot of other macro behaviors. What we've seen over the last 18 months is far greater adoption of the standard, what we call the data planes, the architectures around Snowflake and Databricks and Amazon. And what that's created as a result is the emergence of what I would call the next problem. As you start to solve that category of how... >>That's how it always works, isn't it? >>Yeah, exactly. >>Always works that way. >>This is the wonderful thing about technology: the job security. There's always the next problem to go solve. And that's what we see is, as we go into cloud, we get that infinite scale, infinite capacity, infinite flexibility. And with these modern data platforms, we get that infinite ability to store and process data incredibly quickly with incredible ease. And so what do most organizations do? You take a ton of new bodies, like all the people who wanted to do those really cool things with data, and you're like, okay, now you can. And so you start throwing a lot more use cases at it, you start creating a lot more data products, you start doing a lot more things with data.
And this is really where that third category starts to emerge, which is you get this data mess, not mesh, but the data mess. You get a cluster, exactly, where the complexity skyrockets. And as a result, that rapid innovation that you were all looking for and promised just comes to a screeching halt, as you're just trying to swim through molasses. And as a result, this is where that new awareness around automation really starts to heighten. We did a really interesting survey at the start of this year, did it as a blind survey, an independent third party surveyed 500 chief data officers, data scientists, and data architects, and asked them a plethora of questions. But one of the questions we asked them was, do you currently, or do you intend to, invest in data automation to increase your team's productivity? And what was shocking, and I was very surprised by this, was only three and a half percent said they do today. Which is really interesting, because it really hones in on this notion that automation is beyond what a lot of us think of as tooling and enhancements. Today only three and a half percent had it, but 88.5% said they intend on making data automation investments in the next 12 months. And that stark contrast of how many people have a thing and how many people want that benefit of automation, I think, is incredibly critical as we look to 2023 and beyond. >>I mean, this seems like a no-brainer, does it not? I know it is your business, so of course you agree with me, but it seems like the more you're able to automate certain processes, the more you free up your resources and your dollars to be spent elsewhere, and your human capital to be invested elsewhere. That just seems to be a layup. I'm very surprised by that three and a half percent figure. >>I was too. I actually was expecting it to be higher, I was expecting five to 10%, as there are other tools in the marketplace, around ETL tools or orchestration tools, that some would argue fit in the automation category. And I think what the market is telling us, based on that research, is that those themselves don't qualify as automation. The market has a larger vision for automation, something that is more metadata driven, more AI-backed, that takes a greater leap and gives greater leverage for the teams than what the existing capabilities in the industry today can afford. >>Okay. So if you've got this big leap that you can make, but maybe, you know, should sights be set a little lower? Are you in danger of creating too much of an expectation or too much of a false hope? Because sometimes incremental increases are okay. >>I agree. I think you want to do a little bit of both. I think you want to have a plan for reaching for the stars, and you've gotta be really pragmatic as well. Even inside of Ascend, we actually have a core value, which is build for 10x, plan for 100x, and so know where you're going, right? But solve the problems that are right in front of you today, as you get to that next scale.
And I think the really important part for a lot of companies is how do you think about what that trajectory is and be really smart around where you choose to invest. One of the sayings that we have is, last year's innovation is next year's anchor around your neck. And that's because we're, very fortunately, in this really exciting, rapidly moving, innovative space, but the thing that was your advantage not too long ago, because everybody can move so quickly, now becomes commonplace, and a year or two later, if you don't jump on whatever that next innovation is that the industry starts to standardize on, you're now on the hook paying massive debt. You thought you had home mortgage debt, and now you're paying the worst of credit card debt, trying to pay that down and maintain your velocity. >>It's a whole different kind of FOMO, right? I'm afraid I'm gonna miss out. What am I missing out on? What's the next big thing I've been missing out on? >>Exactly. And so we encourage a lot of folks, as you think about this as it pertains to automation too: solve for some of the problems right in front of you, but really make sure that you're designing the right approach, so that as you stack on five times, ten times as many people building data products, and your volume and library of data weaving throughout your business grows, you're making those right investments. And that's one of the reasons why we do think automation is so important, and really this next generation of automation, which is a metadata- and AI-backed level of automation that can achieve and accomplish so much more than sort of the traditional norms. >>Yeah. On that, as far as next gen goes, what do you think is gonna be possible that cloud sets the stage for, that maybe not too long ago seemed really out of reach? Like, what's gonna get somebody in that 88% to work on it, that's gonna make their spend come your way? >>Ah, good question. I think it's a couple-fold. Right now we see two things happening. We see large movements going to the dominant data platforms today. And frankly, one of the biggest challenges we see people having today is just how do you get data in, which is insanity to me, because that's not even the value extraction, that is the cost center piece of it. Just get data in so you can start to do something with it. And so I think that becomes a huge hurdle, but the access to new technologies, the ability to start to unify more of your data, and in rapid fashion, I think is really important. I think as we start to invest more in this metadata-backed layer that can connect those notions of how do you ingest your data, how do you transform it, how do you orchestrate it, how do you observe it, one of the really compelling parts of this is metadata does become the new big data itself. And so to do these really advanced things, to give these data teams greater levels of automation and leverage, we actually need cloud capabilities to process large volumes of not the data, but the metadata around the data itself, to deliver on these really powerful capabilities. And so I think that's why this new world that we see of the developer platforms for modern data cloud applications actually benefits from being a cloud native application itself.
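Editor's note: to make the "metadata becomes the new big data" idea above a little more concrete, here is a minimal, hypothetical sketch of metadata-backed pipeline automation. It is not from the interview and not Ascend.io's actual API; every function and field name here is illustrative. Each task records a fingerprint of its output, and a downstream task is re-run only when an upstream fingerprint changes, which is the kind of decision a "central brain" over pipeline metadata can make automatically.

```python
import hashlib
import json
import time
from typing import Callable, Dict, List, Sequence

# Run-level metadata captured for every task: the "metadata about the data"
# that an automation layer could reason over. Purely illustrative structure.
catalog: Dict[str, dict] = {}

def fingerprint(records: List[dict]) -> str:
    """Hash row count plus the sorted key set: a crude volume/schema signal."""
    keys = sorted({k for rec in records for k in rec})
    return hashlib.sha256(json.dumps([len(records), keys]).encode()).hexdigest()

def run_task(name: str, fn: Callable[..., List[dict]], upstream: Sequence[str] = ()) -> List[dict]:
    """Re-run a task only when an upstream fingerprint changed since its last run."""
    inputs = [catalog[u]["output"] for u in upstream]
    upstream_fp = [catalog[u]["fingerprint"] for u in upstream]
    prior = catalog.get(name)
    if prior and prior["upstream_fingerprints"] == upstream_fp:
        return prior["output"]                      # nothing changed upstream: skip the work
    start = time.time()
    output = fn(*inputs)
    catalog[name] = {
        "output": output,
        "fingerprint": fingerprint(output),
        "upstream_fingerprints": upstream_fp,
        "duration_s": round(time.time() - start, 3),
    }
    return output

# Usage: a two-step pipeline; the second call to "totals" is skipped because
# the upstream fingerprint has not changed.
raw = lambda: [{"user": "a", "amount": 3}, {"user": "b", "amount": 5}]
totals = lambda rows: [{"total": sum(r["amount"] for r in rows)}]
run_task("ingest", raw)
run_task("totals", totals, upstream=["ingest"])
run_task("totals", totals, upstream=["ingest"])   # skipped: same upstream fingerprint
print({name: meta["fingerprint"][:8] for name, meta in catalog.items()})
```

In a real platform this catalog would itself be a large dataset, which is the point Knapp makes about needing cloud-scale processing for the metadata rather than just the data.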
>>So before you take off, talk about the AWS relationship, part of the Startup Showcase, part of the growth program. We've talked a lot about the cloud and what it's doing for your business, but let's just talk about, again, how integral they have been to your success, and likewise what you think maybe you bring to their table too. >>Well, we bring a lot to the table. >>Absolutely. I had no doubt about that. >>I mean, honestly, working with AWS has been truly fantastic. As a startup that's really growing and expanding your footprint, having access to the resources in AWS to drive adoption, drive best practices, drive awareness is incredibly impactful. Conversely too, the value that Ascend provides to the AWS ecosystem is tremendous leverage on onboarding and driving faster use cases, faster adoption of all the really great, cool, exciting technologies that we get to hear about. By bringing more advanced layers of automation to the existing product stack, we can make it easier for more people to build more powerful things faster and safely, which I think is what most businesses at re:Invent really are looking for. >>It's win-win. That's for sure. Sean, thanks for the time. >>Thank you, John. >>Good job on the t-shirt and keep up the good work. >>Thank you very much. I appreciate that. >>Sean Knapp, joining us here on the AWS global startup program, part of the Startup Showcase. We are of course on theCUBE, I'm John Walls. We're at the Venetian in Las Vegas, and theCUBE, as you well know, is the leader in high tech coverage.

Published Date : Nov 30 2022



Sean Knapp, Ascend.io & Jason Robinson, Steady | AWS Startup Showcase


 

(upbeat music) >> Hello and welcome to today's session, theCUBE's presentation of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics, Cloud Management Tools, featuring Ascend.io for the data and analytics track. I'm your host, John Furrier with theCUBE. Today, we're proud joined by Sean Knapp, CEO and founder of Ascend.io and Jason Robinson who's the VP of Data Science and Engineering at Steady. Guys, thanks for coming on and congratulations, Sean, for the continued success, loves our cube conversation and Jason, nice to meet you. >> Great to meet you. >> Thanks for having us. >> So, the session today is really kind of looking at automating analytics workloads, right? So, and Steady as a customer. Sean, talk about the relationship with the customer Steady. What's the main product, what's the core relationship? >> Yeah, it's a really great question. when we work with a lot of companies like Steady we're working hand in hand with their data engineering teams, to help them onboard onto the Ascend platform, build these really powerful data pipelines, fueling their analytics and other workloads, and really helping to ensure that they can be successful at getting more leverage and building faster than ever before. So we tend to partner really closely with each other's teams and really think of them even as extensions of each other's own teams. I watch in slack oftentimes and our teams just go back and forth. And it's like, as if we were all just part of the same company. >> It's a really exciting time, Jason, great to have you on as a person cutting your teeth into this kind of what I call next gen data as intellectual property. Sean and I chat on theCUBE conversation previous to this event where every company is a data company, right? And we've heard that cliche. >> Right. >> But it's true, right? It's going to, it's getting more powerful with the edge. You seeing more diverse data, faster data, small, big, large, medium, all kinds of different aspects and patterns. And it's becoming a workflow kind of intellectual property paradigm for companies, not so much. >> That's right. >> Just the tech it's the database is you can, it's the data itself, data in flight, it's moving around, it's got value. What's your take-- >> Absolutely. >> On this trend? >> Basically, Steady helps our members and we have a community of members earn more income. So we want to help them steady their financial lives. And that's all based on data, so we have a web app, you could go to the iOS Store, you could go to the Google Play Store, you can download the app. And we have a large number of members, 3 million plus, who are actively using this. And we also have a very exciting new product called income passport. And this helps 1099 and mixed wage earners verify their income, which is very important for different government benefits. And then third, we help people with emergency cash grants as well as awards. So all of that is built on a bedrock of data, so if you're using our apps, it's all data powered. So what you were mentioning earlier from pipelines that are running it real time to yeah, anything, that's a kind of a small data aggregation, we do everything from small to real-time and large. >> You guys are like a multiple sided marketplace here, you've got it, you're a FinTech app, as well as the future of work and with virtual space-- >> That's right. 
>> Happening now, this is becoming, actually encapsulates kind of the critical problems that people trying to solve right now, you've got multiple stakeholders. >> That's right. >> In the data. >> Yes, we absolutely do. So we have our members, but we also, within the company, we have product, we have strategy, we have a growth team, we have operations. So data engineering and data science also work with a data analytics organization. So at Steady we're very much a data company. And we have a data organization led by our chief data officer and we have data engineering and data science, which are my teams, but also that business insights and analytics. So a lot of what we're building on the data engineering side is powering those insights and analytics that the business stakeholders use every day to run the organization. >> Sean, I want to get your thoughts on this because we heard from Emily Freeman in the keynote about how this revolution in DevOps or for premiering her talk around how, it's not just one persona anymore, I'm a release engineer, I'm this kind of engineer, you're seeing now all engineering, all developers are developers. You have some specialty, but for the most part, the team makeups are changing. We touched on this in our cube conversation. The journey of data is not just the data people, the data folks. It's like there's, they're developers too. So the confluence of data science, data management, developing, is changing the team and cultural makeup of companies. Could you share your thoughts on this dynamic and how it impacts customers? >> Absolutely, I think the, we're finding a similar trend to what we saw a number of years ago, when we talked about how software was eating the world and every company was now becoming a software company. And as a result, we saw this proliferation and expansion of what the software roles look like and thought of a company pulled through this entire new era of DevOps. We were finding that same pattern now emerging around data as not only is every company a software company, every company is a data company and data really is that field, that oil that fuels the business and in doing so, we're finding that as Jason describes it's pervasive across the team, it is no longer just one team that is creating some insights and reports around operational analytics, or maybe a team over here doing data science or machine learning. It is expensive. And I think the really interesting challenges that start to come with this too, are so many data teams are so over capacity. We did a recent study that highlighted that 96% of data teams are at, or over capacity, only 4% had spare capacity. But as a result, the net is being cast even wider to pull in people from even broader and more adjacent domains to all participate in the data future of their organization. >> Yeah, and I think I'd love to get your guys react to this conversation with Andy Jassy, who's now the CEO of Amazon, but when he was the CEO of AWS last year, I talked with him about how the old guard and new guard are thinking around team formations. Obviously team capacity is growing and challenged when you've got the right formula. So that's one thing, right? But what if you don't have the right formula? If you're in the skills gap, problem, or team formation side of it, where you maybe there was two years ago where the mandate came down? Well, we got to build a data team even in two years, if you're not inquisitive. 
And this is what Andy and I were talking about is the thinking and the mindset of that mission and being open to discovering and understanding the changes, because if you were deciding what your team was two, three years ago, that might have changed a lot. So team capacity, Sean, to your point, if you got it right, and that's a challenge in and of itself, but what if you don't have it, right? What do you guys think about this? >> Yeah, I think that's exactly right. Basically trying to see, look and gaze into the crystal ball and see what's going to happen in a year or two years, even six months is quite difficult. And if you don't have it right, you do spend a lot of time because of the technical debt that you've amassed. And we certainly spend quite a bit of time with technical debt for things we wanted to build. So, deconvolving that, getting those ETLs to a runnable state, getting performance there, that's what we spend a bit of time on. And yeah, it's something that it's really part of the package. >> What do you guys see as the big challenge on teams? The scaling challenge okay. Formation is one thing, Sean, but like, okay, getting it right, getting it formed properly and then scaling it, what are the big things you're seeing? >> One of the, I think the overarching management themes in general, it is the highest out by the highest performing teams are those where the individual with the context and the idea is able to execute as far and as fast and as efficiently as possible, and removing a lot of those encumbrances and put it a slightly different way. If DevOps was all basically boiled down to, how do we help more people write more software faster and safely data ops would be very similarly, how do we enable more people to do more things with data faster and safely? And to do that, I think the era of these massive multi-year efforts around data are gone and hopefully in the not too distant future, even these multi-quarter efforts around data are gone and we get into a much more agile, nimble methodology where smaller initiatives and smaller efforts are possible by more diverse skillsets across the business. And really what we should be doing is leveraging technology and automation to ensure that people are able to be productive and efficient and that we can trust our data and that systems are automated. And these are problems that technology is good at. And so in many ways, how in the early days Amazon would described as getting people out of the muck of DevOps. I think we're going to do the same thing around getting people out of the muck of the data and get them really focused on the higher level aspects. >> Yeah, we're going to get into that complexity, heavy lifting side muck, and then the heavy lifting taking away from the customers. But I want to go back to real quick with Jason while we're on this topic. Jason, I was just curious, how much has your team grown in the recent year and how much could've, should've grown, what's the status and how has Ascend helped you guys? What's the dynamic there? ' Cause that's their value proposition. So, take us through that. >> Absolutely, so, since the beginning of the year data engineering has doubled. So, we're a lean team, we certainly use the agile mindset and methodologies, but we have gone from, yeah, we've essentially doubled. So a lot of that is there's just so much to do and the capacity problem is certainly there. So we also spend a lot of time figuring out exactly what the right tooling is. 
And I was mentioning the technical debt. So there's the big-O notation of whatever's involved in that technical debt. When you're building new things, you're fixing old things, and then you're trying to maintain everything, that scaling starts to hit hard. So even if we continue to double, I mean, we could easily add more data engineers. And a lot of that is, I mean, you know about the hiring cycles, there's a lot of great talent, but it's difficult to make all of those hires. So we do spend quite a bit of time thinking about exactly what tools data engineering is using day-to-day. And what I mentioned were technologies on the streaming side all the way to the small batch things. But something that starts as a small batch can grow and grow and grow and take, say, 15 hours. It's possible, I've seen it. And getting that back down, and managing that complexity while not overburdening people who probably don't want to spend all their waking hours building ETLs, maintaining ETLs, putting in monitoring, putting in alerting, that I think is quite a challenge. >> It's so funny because you mentioned 18 hours, you didn't roll your eyes, but you almost did. But people want it yesterday, they want real time, so there's a lot of demand-- >> Yes. >> On the minds of the business outcome side of it. So, I got to ask you, because this comes up a lot with technical debt, and now we're starting to see that come into the data conversation. I'm always curious, is there a different kind of technical debt with data? Because again, data is like software, but it's a little bit more elusive in the sense that it's always changing. So what kind of technical debt do you see on the data side that's different than, say, the software side? >> Absolutely, now that's a great question. So a lot of the thinking about your data, and structuring your data, and how you want to use that data going into a particular project might be different from what happens after stakeholders have new considerations and new products and new items that need to be built. So let's say you have a document store, or you have something that you thought was going to be nice and structured. How that can evolve and support those particular products, unless you take the time and go through and say, well, let's architect it perfectly so that we can handle that, means you're going to make trade-offs and choices, and essentially that debt builds up. So you start cutting corners, you start changing your normalization, you start taking those implicit schemas that then tend to build into big things, big implicit schemas. And then of course, with implicit schemas, you're going to have a lot of null values, you're going to have a lot of items to deal with. So, how do you deal with that? And then you also have the opportunity to create keys and values, and oops, do we take out those keys that were slightly misspelled? So, I could go on for hours, but basically the technical debt certainly is there with data. I see a lot of this as just a spectrum of technical debt, because it's all trade-offs that you made to build a product, and the inefficiencies start to hit you. So, the 15-hour ETL I was mentioning, basically you start with something, and you were building things for stakeholders, and essentially you have so much complex logic within that.
So for the transforms that you're doing from if you're thinking of the bronze, silver, gold, kind of a framework, going from that bronze to a silver, you may have a massive number of transformations or just a few, just to lightly dust it. But you could also go to gold with many more transformations and managing that, managing the complexity, managing what you're spending for servers day after day after day. That's another real challenge of that technical debt stuff. >> That's a great lead into my next question, for Sean, this is the disparate system complexity, technical debt and software was always kind of the belief was, oh yeah, I'll take some technical debt on and work it off once I get visibility and say, unit economics or some sort of platform or tool feature, and then you work it off as fast as possible. I was, this becomes the art and science of technical debt. Jason, what you're saying is that this can be unwieldy pretty quickly. You got state and you got a lot of different inter moving parts. This is a huge issue, Sean, this is where it's, technical debt in the data world is much different architecturally. If you don't get it right, this is a huge, huge issue. Could you aluminate why that is and what you guys are doing to help unify and change some of those conditions? >> Yeah, absolutely. When we think about technical debt and I'll keep drawing some parallels between DevOps and data ops, 'cause I think there's a tremendous number of similarities in these worlds. We used to always have the saying that "Your tech debt grows manually across microservices, "but exponentially within services." And so you want that right level of architecture and composibility if you will, of your systems where you can deploy changes, you can test, you can have high degrees of competence and the roll-outs. And I think the interesting part in the data side, as Jason highlighted, the big O-notation for tech debt in the data ecosystem, is still fairly exponential or polynomial in nature. As right now, we don't have great decomposition of the components. We have different systems. We have a streaming system, we have a databases, we have documents, doors and so on, but how the whole data pipeline data engineering part works generally tends to be pretty monolithic in nature. You take your whole data pipeline and you deploy the whole thing and you basically just cross your fingers, and hopefully it's not 15 hours, but if it is 15 hours, you go to sleep, you wake up the next morning, grab a coffee and then maybe it worked. And that iteration cycle is really slow. And so when we think about how we can improve these things, right? This is combinations of intelligent systems that do instantaneous schema detection, and validation, excuse me, it's combinations of things that do instantaneous schema detection and validation. It's things like automated lineage and dependency tracking. So you know that when you deploy code, what piece of data it affects, it's things like automated testing on individual core parts of your data pipelines to validate that you're getting the expected output that you need. So it's pulling a lot of these same DevOps style principles into the data world, which is really designed to going back to how do you help more people build more things faster and safely really rapid iterations for rapid feedback. So you know if there's breaks in the system much earlier on. >> Well, I think Sean, you're onto something really big there. 
And I think this is something that's emerging pretty quickly in the cloud scale that I called, 2.0, whatever, what version we're in, is the systems thinking mindset. 'Cause you mentioned the model that that was essentially a silo or subsystem. It was cohesive in it's own way, but now it's been monolithic. Now you have a broken down set of decomposed sets of data pieces that have to work together. So Jason, this is the big challenge that everyone, not really people are talking about, I think most these guys are, and you're using them. What are you unifying? Because this is a systems operating systems thinking, this is not like a database problem. It's a systems problem applied to data where databases are just pieces of it, what's your thoughts? >> That's absolutely right. And I would, so Sean touched on composibility of ETL and thinking about reusable components, thinking about pieces that all fit together, because as you're building something as complex as some of these ETS are, we do think about the platform itself and how that lends to the overarching output. So one thing, being able to actually see the different components of an ETL and blend those in and you as the dry principal, don't repeat yourself. So you essentially are able to take pieces that one person built, maybe John builds a couple of our connectors coming in, Sean also has a bunch of transforms and I just want this stuff out, so I can use a lot of what you guys have already built. I think that's key, because a lot of engineering and data engineering is about managing complexity. So taking that complexity and essentially getting it out fast and getting out error free, is where we're going with all of the data products we're building. >> What are some of the complexity that you guys have that you're dealing with? Can you be specific and share what these guys are doing to solve that problem for you? That's, this is a big problem everyone's having, I'm seeing that all over the place. >> Absolutely, so I could start at a couple of places. So I don't know if you guys are on the three Vs, four Vs or five Vs, but we have all of those. And if you go to that five, four or five V model, there is the veracity piece, which you have to ask yourself, is it true? Is it accurate when? So change happens throughout the pipeline, change can come from web hooks, change can come from users. You have to make sure that you're managing that complexity and what we we're building, I mentioned that we are paying down a lot of tech debt, but we're also building new products. And one pretty challenging, quite challenging ETL that we're building is something going from a document store to an analytical application. So in that document store, we talked about flexible schema. Basically, you don't really know exactly what you're going to get day to day, and you need to be able to manage that change through the whole process in a way that the ultimate business users find value. So, that's one of the key applications that we're using right now. And that's one that the team at Ascend and my team are working hand in hand going through a lot of those challenges. And it's, I also watch the slack just as Sean does, and it's a very active discussion board. So it is essentially like they're just partnering together. It's fabulous, but yeah-- >> And you're seeing kind of a value on this too, I mean, in terms of output what's the business results? >> Yes, absolutely. So essentially this is all, so yes, the fifth V value. 
So, getting to that value is essentially, there were a few pieces of the, to the value. So there's some data products that we're building within that product and their data science, data analytics based products that essentially do things with the data that help the user. There's also the question of exactly the usage and those kinds of metrics that people in ops want to understand as well as our growth team. So we have internal and external stakeholders for that. >> Jason, this is a great use case, a great customer, Sean, you guys are automating. For the folks watching, who were seeing their peer living the dream here and the data journey, as we say, things are happening. What's the message to customers that you guys want to send because you guys are really cutting your teeth into a whole another level of data engineering, data platform. That's really about the systems view and about cloud. What's the pitch, Sean? What should people know about the company? >> Absolutely, yeah, well, so one, I'd say even before the pitch, I would encourage people to not accept the status quo. And in particular, in data engineering today, the status quo is an incredibly high degree of pain and discomfort. And I think the important part of why Ascend exists and why we're so helpful for our customers, there is a much more automated future of how we build data products, how we optimize those and how we can get a larger cohort of builders into the data ecosystem. And that helps us get out of the muck as we talked about before and put really advanced technology to work for more people inside of our companies to build these data products, leveraging the latest and greatest technologies to drive increased business value faster. >> Jason, what's your assessment of these guys, as people are watching might say, hey, you know what, I'm going to contact them, I need this. How would you talk about Ascend into your peers? >> Absolutely, so I think just thinking about the whole process has been a great partnership. We started with a POC, I think Ascend likes to start with three use cases, I think we came out with four and we went through the ones that we really cared about and really wanted to bring value to the company with. So we have roadmaps for some, as we're paying down technical debt and transitioning, others we can go directly to. And I think that thinking about just like you're saying, John, that systems view of everything you're building, where that makes sense, you can actually take a lot of that complexity and encapsulate it in a way that you can essentially manage it all in that platform. So the Ascend platform has the composibility piece that we touched on. It also, not only can you compose it, but you can drill into it. And my team is super talented and is going to drill into it. So basically loves to open up each of those data flows each of the components therein and has the control there with the combination of Spark Sequel, PI Spark SQL Scala and so on. And I think that the variety of connections is also quite helpful. So thinking about the dry principle from a systems perspective is extremely useful because it's dry, you often get that in a code review, right? I think you can be a little bit more dry here. >> Yeah. >> But you can really do that in the way that you're composing your systems as well. >> That's a great, great point. One quick thing for the folks that they're watching that are trying to figure this out, and a lot of architecture is going on. 
A lot of people are looking at different solutions. What things have you learned that you could give them as a tip, to avoid maybe some scar tissue? Tips of the trade, where you can say, hey, be careful this way. What are some of the learnings? Could you give a few pointers to folks out there, if they're kicking tires on the direction? What's the wrong direction, and what does the right direction look like? >> Absolutely. Thinking it through, and I don't know how much time we have, that feels like a few days' conversation as far as ways to go wrong. But absolutely, I think that thinking through exactly where you want to be is the key. Otherwise it's kind of like when you're writing a ticket in Jira: if you don't have clear success criteria, if you don't know where you're going to go, then you'll end up somewhere building something, and it might work. But if you think through the exact destination you want to be at, that will drive a lot of the decisions as you think backwards to where you started. And also, Sean mentioned challenging the status quo. I think that you really have to be ready to challenge the status quo at every step of that journey. So if you start with some particular service that you had and it's legacy, if it's not essentially performing what you need, then it's okay to just take a step back and say, well, maybe that's not the one. So I think that thinking through the system, just like you were saying, John, and also having a visual representation of where you want to go, is critical. So hopefully that encapsulates a lot of it, but yes, the destination is key. >> Yeah, and having an engineering platform that also unifies the multiple components, and it's agile. >> That's right. >> It gets you out of the muck, and in the end, undifferentiated heavy lifting is a cloud play. >> Absolutely. >> Sean, wrap it up for us here. What's the bumper sticker for your vision? Share your founding principles of the company. >> Absolutely. For us, I started the company as a founder and recovering CTO. The last company I founded, we had nearly 60 people on our data team alone and had invested tremendous amounts of effort over the course of eight years. And one of the things that I've learned is that over time, innovation comes just as much from deciding what you're no longer going to do as from what you're going to do. And focusing heavily around how do you get out of that muck, how do you continue to climb up that technology stack, is incredibly important. And so really, we are excited to be a part of it as the industry continues to climb to higher and higher levels. We're building more and more advanced levels of automation, and what we call our data awareness, into the automated engine of the Ascend platform, that takes us across the entire data ecosystem, connecting and automating all data movement. And so we have a very exciting vision for this fabric that's emerging over time. >> Awesome. Sean, thank you so much for that insight. Jason, thanks for coming on, customer of Ascend.io. >> Thank you. >> I appreciate it, gentlemen, thank you. This is the track on automating analytic workloads. We're here at the AWS Startup Showcase with the hottest companies, here with Ascend.io. I'm John Furrier, with theCUBE, thanks for watching. (upbeat music)
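Editor's note: the bronze/silver/gold layering, the implicit-schema drift, and the "test individual parts of the pipeline" ideas discussed in this interview can be sketched in a few lines of PySpark. The following is a hedged illustration only; the column names, S3 paths, and helper function are hypothetical and do not reflect Steady's or Ascend.io's actual pipelines.

```python
# A minimal sketch of a medallion-style pipeline built from small, reusable
# transforms, with a lightweight schema check between layers.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

def bronze(path: str) -> DataFrame:
    # Raw landing zone: keep the document-store records roughly as received.
    return spark.read.json(path)

def silver(df: DataFrame) -> DataFrame:
    # Light cleanup: make the implicit schema explicit, drop unusable rows.
    return (df
            .withColumn("amount", F.col("amount").cast("double"))
            .withColumn("event_ts", F.to_timestamp("event_ts"))
            .dropna(subset=["user_id", "event_ts"]))

def gold(df: DataFrame) -> DataFrame:
    # Business-facing aggregate: one row per user per day.
    return (df.groupBy("user_id", F.to_date("event_ts").alias("day"))
              .agg(F.sum("amount").alias("daily_amount")))

def expect_columns(df: DataFrame, required: set) -> DataFrame:
    # A tiny "automated test" on one component of the pipeline, rather than
    # deploying the whole monolith and crossing your fingers.
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"schema drift detected, missing columns: {missing}")
    return df

events = expect_columns(bronze("s3://example-bucket/raw/events/"),
                        {"user_id", "event_ts", "amount"})
daily = gold(silver(events))
daily.write.mode("overwrite").parquet("s3://example-bucket/gold/daily_amounts/")
```

Because each stage is a plain function of DataFrame to DataFrame, the same transforms can be reused across pipelines in the DRY spirit Jason describes, and each one can be validated on its own before it ever runs at 15-hour scale.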

Published Date : Sep 22 2021



Sean Knapp, Ascend.io | CUBE Conversation


 

>>Mhm >>Hello and welcome to this special cube conversation. I'm john furrier here in Palo alto California, host of the cube we're here with Sean Knapp was the Ceo and founder of Ascend dot Io heavily venture backed working on some really cool challenges and solving some big problems around scale data and creating value in a very easy way and companies are struggling to continue to evolve and re factor their business now that they've been re platform with the cloud, you're seeing a lot of new things happening. So Sean great to have you on and and thanks for coming on. >>Thanks for having me john So >>one of the things I've been interesting with your company, not only do you have great pedigree in terms of investors and tech tech staff is that you guys are going after this kind of new scaling challenge um which is not your classic kind of talking points around cloud scale, you know, more servers, more more data more. It's a little bit different. Can you describe what you guys mean around this new scaling challenge? >>Absolutely. The classic sense of scaling, particularly when it comes to the data industry, whether it's big data data science, data engineering has always focused on bits and bytes, how many servers, how big your clusters are and You know, we've watched over the last 5-10 years and those kinds of scaling problems while not entirely solved for most companies are largely solved problems now and the new challenge that is emerging is not how do you store more data or how do you process more data but it's how do you create more data products, how do you drive more value from data? And the challenge that we see many companies today, really struggling to tackle is that data productivity, that data velocity challenge and that's more people problem. It is a how do you get more people able to build more products faster and safely that propelled the business forward? >>You know, that's an interesting topic, We talk about devops and how devops is evolving. Um and you're seeing SRS has become a standard position now in companies site reliability engineers at Google pioneered, which essentially the devops person, but now that you don't need to have a full devops team as you get more automation, That's a big, big part of it. I want to get into that because you're touching on some scale issues around people, the relationships to the machines and the data. It's it's an interesting conversation, but before we do that, can you just take a minute to explain uh what you guys do, what does this send? I o I know you're in Palo alto, it's where I live um and our offices here, what's a sandy all about? >>Absolutely. So what ascend really focuses on is building the software stack on top of modern day, big data infrastructure for data engineers, data scientists, data analyst to self serve and create active data pipelines that feel the rest of their business. Uh And we provide this as a service to a variety of different companies from Australia to Italy finance to IOT uh start ups to large enterprises and really hope elevate their teams, you know, as Bezos said a long time ago, out of the muck of of the underlying infrastructure, we help them do the same thing out of the muck of classic data engineering work, >>that's awesome Andy Jassy now the ceo of amazon who was the sea of avenue too many times over the years and he always has the line undifferentiated heavy lifting. 
Well, I mean data is actually differentiated and it's also heavy lifting too, but you got, you have differentiation with data but it's super important, it's really you gotta but there's a lot of it now, so there's a lot of heavy lifting, this is where people are struggling, I want to get your thoughts on this because you have an opinion on this around how teams are formed, how teams can scale because we know scales coming on the data side and there's different solutions, you've got data bricks, you've got snowflake yet red shift, there's a zillion other opportunities for companies to deploy data tooling and platforms. >>What's your hands to the >>changes in data? >>Well, I think in the data ecosystem is we're changing very, very quickly uh which makes it for a very exciting industry uh and I do think that we are in this great cycle of continuing to reinvest higher and higher up the stack if you will. Right and in many ways we want to keep elevating our teams or partners or customers or companies out of the non differentiated elements. Uh and this is one of those areas where we see tremendous innovation happening from amazon from data breaks from snowflake, who are solving many of these underlying infrastructure, storage processing and even some application layer challenges proteins. And what we find oftentimes is that teams after having adopted some of these stacks on some of these solutions, then have to start solving the problem of how do we build after, how do we build better? And how do we produce more on top of these incredibly valuable investments that we've made and they're looking for acceleration. There's they're looking for in many ways the autopilot self driving level of capabilities, intelligence to sit on top and help them actually get the most out of these underlying systems. And that's really where we need that big changes >>are self driving data, you gotta have the products first. I think you mentioned earlier a data product data being products, but there's a trend with this idea of data products. Data apps. What is the data product? Um that's a new concept. I mean it's not most, most people really can't get their arms around that because it's kind of new data data, but how how does it become product ties and and how do why is it, why is it growing so fast? >>Yeah, that's a great question. I think, you know, quickly uh talked through a lot of the evolution of the industry. Oftentimes we started with the, well let's just get the data inside of a lake and it was a very autumns up notion of what we just collected then we'll go do something with it. The very field of dreams esque approach. Right? And oftentimes they didn't come in and your data just sat there and became a swamp. Right? And the when we think about a data, product oriented model of building it is let's focus on the how do we just collect and store and process data and it's much more on the business value side of how do we create a new data set in architectural models would be how do we launch a new micro service or a new feature out to a customer? But the data product is a new refined, valuable curated live set of data that can be used by the business. Whether it's for data analysts or data scientists are all the way out to end consumers. It is very heavily oriented towards that piece because that's really where we get to deliver value for our end users or customers. 
Yeah, >>getting that data fastest key Again, I love this idea of data becoming programmable or kind of a data ops kind of vibe where you're seeing data products that can be nurtured also scaled up to with people as as this continues The next kind of logical question I have for you is okay, I get the data products now I have teams of people, how do I deploy them? How do the teams change? Because now you have low code and no code capabilities and you have some front end tools that make it easier to create new apps and, and um products where data can feed into someone discovers a cool new value metric in the company. Um they can say here boss is a new new metric that we've identified that drives our business now, they've got a product ties that in the app, they used low code, no code. Where do you guys see this going? Because you can almost see a whole, another persona of a developer emerging >>or engine. Team >>emerging. >>Absolutely. And you know, it's, I think this is one of the challenges is when we look at the data ecosystem. Uh we even ran a survey a couple of months ago across hundreds of different developers asking data scientists, data engineers, data analyst about the overall productivity of their teams. And what we found was 96% of teams are at or over capacity, meaning only 4% of teams even have the capacity to start to invest in better tools or better skill sets and most are really under the gun. And what that means is teams and companies are looking for more people with different skill sets, how and frankly how they get more leverage out of the folks where they have, so they spend less than any more than building. And so what ends up starting to happen is this introduction of low code and no conclusions to help broaden the pool of people who can contribute to this. And what we find oftentimes is there's a bit of a standoff happening between engineering teams and analyst teams and data science teams, teams where some people want low code, some people want no code, Some people just want super high code all day all all the time and what we're finding is and even actually part of one of the surveys that we ran, uh, most users very small percentage less than 10% users actually were amenable to no code solutions, But more than 70% were amenable to solutions that leaned towards lower no code but allowed them to still programs in a language of their choice, give them more leverage. So what we see end up happening is really this new era of what we describe as flex code where it doesn't have to be just low code or just no code but teams can actually plug in at different layers of the staff and different abstract layers and contribute side by side with each other all towards the creation of this data product with applicable model of flats code. >>So let's unpack flex code for a second. You don't mind to first define what you mean by flex code and then talk about the implications to to the teams because it sounds like it's it's integrated but yet decoupled at layers. So can you take me through what it is and then let's unpack a little bit >>Absolutely. You know, fuck. So it is really a methodology that of course companies like ours will will go and product ties. 
But is that the belief structure that you should be able to peel back layers and contribute to an architecture in this case a data architecture, whether it's through building in a no code interface or by writing some low code in sequel or down and actually running lower level systems and languages and it's it's become so critical and key in the data ecosystem. As what classically happened has been the well if we need to go deeper into the stack, we need to customize more of how we run this one particular data job, you end up then throwing away most of the benefits and the adoption of any of these other code and tools. End up shutting off a lot of the rest of the company from contributing. And you then have to be for example, it really advanced scholar developer who understands how to extend doctor runtime environment uh, to contribute. And the reality is you probably want a few of those folks on your team and you do want them contributing, but you still want the data analysts and the data scientists and the software engineers able to contribute at higher levels of the stack, all building that solution together. So it becomes this hybrid architecture >>and I love I met because it's really good exploration here because so what you're saying is it's not that low code and no codes inadequate. It's just that the evolution of the market is such that as people start writing more code, things kind of break down stream. You gotta pull the expert in to kind of fix the plumbing and lower levels of the stack, so to speak, the more higher end systems oriented kind of components. So that's just an evolution of the market. So you're saying flex code is the next level of innovation around product sizing that in an architecture. So you don't waste someone's time to get yanked in to solve a problem just to fix something that's working or broke at this point. So if it works, it breaks. So, you know, it's working that people are coding with no code and low code, it just breaks something else downstream, You're fixing >>that. Absolutely. And that's the um, the idea of being here is, you know, it's one of these old averages. Uh, when you're selling out to customers, we see this and I remember this head of engineering one time I told me, well, you may make 95% of my team's job easier. But if you make the last 5% impossible, it is a non starter. And so a lot of this comes down to the how do we make that 95% of the team's job far easier. But when you really have to go do that one ultra advanced customized thing, how do we make sure you still get all the benefits of Oftentimes through a low code or no code interface, but you can still go back down and really tune and optimize that one piece. >>Yeah, that's really kind of, I mean this is really an architectural decision because that's the classic. You don't want to foreclose the future options. Right? So as a developer, you need to think this is really where you have to make an architecture decision That's really requires you guys to lean into that architectural team. How do you guys do that? What those conversations look like? Is it work with a send and we got you covered or how does those conversations go? Because if someone swinging low code, no code, they might not even know that they're foreclosing that 5%. >>Yeah. 
>> And I love that, because it's a really good exploration here. So what you're saying is it's not that low code and no code are inadequate. It's just that the evolution of the market is such that as people start writing more code, things kind of break downstream. You've got to pull the expert in to fix the plumbing at the lower levels of the stack, so to speak, the more systems-oriented components. That's just an evolution of the market, and you're saying flex code is the next level of innovation around productizing that in an architecture, so you don't waste someone's time getting them yanked in to solve a problem just to fix something downstream. If people are coding with no code and low code and it breaks something else downstream, you're fixing that.

>> Absolutely. And the idea there is, you know, it's one of these old adages. When we're selling to customers we see this, and I remember a head of engineering one time told me, well, you may make 95% of my team's job easier, but if you make the last 5% impossible, it's a non-starter. So a lot of this comes down to: how do we make that 95% of the team's job far easier, but when you really have to go do that one ultra-advanced, customized thing, how do we make sure you still get all the benefits, oftentimes through a low-code or no-code interface, while you can still go back down and really tune and optimize that one piece?

>> Yeah, this is really an architectural decision, because that's the classic: you don't want to foreclose future options, right? So as a developer, this is really where you have to make an architecture decision, and that requires you to lean in with their architecture team. How do you do that? What do those conversations look like? Is it "work with Ascend and we've got you covered," or how do those conversations go? Because if someone is swinging low code or no code, they might not even know that they're foreclosing that 5%.

>> Yeah. Oftentimes, you know, they're the ones being given the hairiest, gnarliest problems to solve, and they may not even have the visibility that there is a team of 30 analysts who can go write incredible data pipelines if they're still afforded a low-code or no-code interface on top. So for us, we partner heavily with our customers and our users. We do a ton of joint architecture and design decisions, not just for their products; we actually bring them into all of our architecture, design, and roadmapping sessions as well. We do a lot of collaborative building, very much like how we treat the developer community around the company. We spend a lot of time on that.

>> That's part of your partner strategy. You're building the bridge to the future with the customer.

>> Yeah, absolutely. In fact, almost all of our communication with our customers happens in shared Slack channels. We are treated like extensions of our customers' teams, and we treat them as our internal customers as well.

>> And that's the way it should be. You're doing some great work that's really cutting edge and really setting the table for a decade of innovation with the customer, if they get it right. So I've got to ask you: with this architecture, you've got to be factoring in automation, because orchestration and automation are the principles of DevOps taken to the next level. I love this conversation. DevOps 2.0, 4.0, whatever you want to call it, it's the next level of DevOps: data automation, and you're taking it to a whole other level within your sphere. Talk about automation and how that factors in. Obviously there are benefits: an autonomous data pipeline would be cool, no coding, but I can see maintenance being an issue. How do you offload developers so that it's not only an easy button but also a maintenance button?

>> Yeah, absolutely. What we find in the evolution of most technical domains is that a shift happens at some point from an imperative developer model to a declarative developer model. We saw it in databases with the introduction of SQL, and we see it in infrastructure definition with tools like Terraform and now Kubernetes. What we do from an automation perspective for data pipelines is very similar to what Kubernetes does for containers. We introduce a declarative model and put in this incredible intelligence that tracks everything about how data moves. For us, metadata alone is a big data problem, because we track so much information, and all of it goes into this central brain that is dynamically adapting to code and data for our users. So when we look at the biggest potential to automate, it's helping alleviate maintenance and optimization burdens for users, so they get to spend more time building and less time maintaining. And that really comes down to how you build this central brain that tracks everything and develops a really deep understanding of how data moves through an organization.
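For a rough sense of the imperative-to-declarative shift Sean describes, here is a minimal sketch. The dataset names, the version fingerprints, and the plan() function are all hypothetical; they illustrate the kind of desired-versus-actual reconciliation a declarative data platform performs, in the spirit of Terraform or Kubernetes, and are not Ascend's actual engine.

```python
# Minimal sketch of a declarative pipeline model: you declare the datasets and
# how they derive from one another; a small "controller" compares declared
# state against what was last materialized and decides what must re-run.

from graphlib import TopologicalSorter

# Declared state: each dataset names its inputs and a transform fingerprint.
# Changing a fingerprint stands in for editing that transform's code or SQL.
declared = {
    "raw_orders":     {"inputs": [],                "fingerprint": "v1"},
    "daily_revenue":  {"inputs": ["raw_orders"],    "fingerprint": "v2"},  # edited
    "exec_dashboard": {"inputs": ["daily_revenue"], "fingerprint": "v1"},
}

# Recorded state: the versions the platform last materialized.
materialized = {
    "raw_orders": "v1",
    "daily_revenue": "v1",
    "exec_dashboard": "v1",
}

def plan(declared, materialized):
    """Return datasets to rebuild, in dependency order, skipping up-to-date ones."""
    order = list(TopologicalSorter(
        {name: set(spec["inputs"]) for name, spec in declared.items()}
    ).static_order())
    stale = set()
    for name in order:
        spec = declared[name]
        changed = materialized.get(name) != spec["fingerprint"]
        upstream_stale = any(dep in stale for dep in spec["inputs"])
        if changed or upstream_stale:
            stale.add(name)
    return [name for name in order if name in stale]

print(plan(declared, materialized))   # ['daily_revenue', 'exec_dashboard']
```

Because the pipeline is declared as a graph rather than scripted step by step, the platform can decide what needs to re-run when code or data changes, which is where the maintenance and optimization savings come from.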
>> That's an awesome vision. My brain is firing off: okay, so what about runtime assembly? As you orchestrate data in real time, you have to pull it all together and link and load all this data, and I can only imagine how hard that is, right? So can you share your vision? Because you mentioned Docker containers, and the benefit of containers is that they can manage stateful and stateless workloads. So as you get into this notion of stateful and stateless data, how do you assemble it all in real time? How does that work? How does that brain figure it out? What's the secret sauce?

>> Yeah, that's a really great question. For us, and this is one of the most exciting parts for our customers and our users, we help with this paradigm shift. The classic model has been: you write code, you compile it, you ship it, you push it out, and then you cross your fingers like, gosh, I really hope that works. It's a very slow iteration cycle. One of the things we've been able to do because of this intelligence layer is actually hybridize that for users. You still have pipelines, and they still run and they're still optimized, but we make it an interactive experience at the same time, very similar to how notebooks made data science such an interactive experience. We make the process of building data pipelines and doing data engineering work iterative and interactive. You're getting instantaneous feedback and evolving very quickly, so the things that used to take weeks or months due to slow iteration cycles can now be done in hours or days, because you get such fast feedback loops as you build.

>> Well, we definitely need your product. We have so much data on the media side; all these events are like little data, but it's a lot of little data, which makes it a big data problem. And I do feel like I'm jumping out of the airplane with a parachute, hoping it will open. We don't know, right? So a lot of the fear is that we don't want to crater, building data products and just praying they work. That's really what everyone's doing right now; it's kind of the state of the industry. How do you guys make it easy? That's the question, right? Because you brought up the human aspect, which I love: the human scale, scaling teams. Nobody wants another project if they're already burnt out with Covid and don't have enough resources. There's a little bit of psychology going on there in the human mind around burnout, and the relationship between humans and data now has this human interaction to it. It all comes down to the future of work, simplicity, and self-service. What are your thoughts on those?

>> Oh, I wholeheartedly agree. I think we need to continue pushing those boundaries around self-service, around developer productivity, and frankly just outright data productivity. For us, it's become a really fascinating time in the industry. I would say in 2019, much of the industry, the users and builders in it, just embraced the fact that, frankly, building data pipelines sucked, and it was a badge of honor because it was such a hard and painful thing. Yet what we're finding now, as the industry evolves, is an expectation that it should be easier.
People are challenging that conventional wisdom and expecting building data pipelines to be much easier. That's really where we come in, both with a flex code model and with high levels of automation, to keep people squarely focused on rapid building versus maintaining and tinkering deep in the stack.

>> You know, I really think you're on to something, because that scaling challenge of people and teams is a huge issue; matching it to the pace of cloud and data scale is a huge focus, and I'm glad you're focusing on it. That's the human issue. And then on the data architecture side, we've seen how to do a failed project: you require the customer to do all this undifferentiated heavy lifting, with a long time lag just to get to value, and there's no value in that. So you're on the right track. How do you talk to customers? Take a minute to share with the folks watching, whether it's an enterprise customer or a potential customer: what's in it for them? Why Ascend? Why should they work with you, and how do they engage with you?

>> Yeah, absolutely. What's in it for customers is time to value, truncated dramatically. You get projects live, and you get them live far faster than you ever thought possible. The way we engage with our customers is we partner with them. We launch them on the application, they can buy us from the marketplace, and we will even help architect their first project with them and ensure they have a full-fledged, live data product within the first four weeks. And really, I think that becomes the key thing; frankly, features and functions and so on really don't matter. Ultimately, at the end of the day, what matters is: can you get your data products live, can you deliver business value, and is your team happy as they get to go build? Do they smile more throughout the day because they're enjoying that developer experience?

>> So you're providing the services to get them going. It's the old classic expression: teach them how to fish, and then they can fish on their own. Is that right?

>> Yep, absolutely.

>> And then they go do the next thing.

>> Yeah, and then we're excited to watch, quarter after quarter, year after year, as our customers build more and more data products, and their teams grow faster than most of the other teams in their companies because they're delivering so much value. That's what's so exciting.

>> You know the cliche: every company is a data company. I know that's kind of a cliche, but it's true, right? Everyone has to have it in their core DNA, but they shouldn't have to hire hardcore data engineers for everything. They have a data team, for sure, and that team has to create a service model for practitioners inside the company.

>> Wholeheartedly agree.

>> Sean, great conversation. Great to unpack flex code; I love that approach, taking low code and no code to the next level with data. Great stuff. Ascend.io, a Palo Alto-based company, congratulations on your success.

>> Thank you so much, John.

>> Okay, this has been a Cube Conversation here in Palo Alto. I'm John Furrier, your host of theCUBE. Thanks for watching.

Published Date : Sep 7 2021

SUMMARY :

Sean Knapp, CEO of Ascend.io, joins John Furrier for a Cube Conversation about data automation. Knapp cites a survey of hundreds of data practitioners finding that 96% of teams are at or over capacity, and introduces "flex code," a model in which analysts, data scientists, and engineers all contribute to the same data architecture through no-code interfaces, low-code SQL, or fully custom code at different layers of the stack. He explains how a declarative pipeline model, analogous to what SQL did for databases and what Terraform and Kubernetes did for infrastructure, feeds a metadata-driven "central brain" that handles maintenance and optimization so teams spend more time building and less time maintaining, and how that intelligence makes pipeline development interactive and iterative, much like notebooks did for data science. He closes by describing how Ascend partners with customers, working in shared Slack channels and helping architect a first project so data products go live within the first four weeks.

SENTIMENT ANALYSIS :

ENTITIES

Entity                             Category        Confidence
Sean Knapp                         PERSON          0.99+
2019                               DATE            0.99+
96%                                QUANTITY        0.99+
Andy Jassy                         PERSON          0.99+
Palo Alto                          LOCATION        0.99+
95%                                QUANTITY        0.99+
Google                             ORGANIZATION    0.99+
amazon                             ORGANIZATION    0.99+
Sean                               PERSON          0.99+
Ascend dot Io                      ORGANIZATION    0.99+
5%                                 QUANTITY        0.99+
Australia                          LOCATION        0.99+
less than 10%                      QUANTITY        0.99+
more than 70%                      QUANTITY        0.99+
first project                      QUANTITY        0.99+
one                                QUANTITY        0.99+
john                               PERSON          0.99+
Italy                              LOCATION        0.99+
john furrier                       PERSON          0.99+
two point                          QUANTITY        0.98+
Palo alto California               LOCATION        0.98+
first four weeks                   QUANTITY        0.98+
one piece                          QUANTITY        0.97+
Ceo                                ORGANIZATION    0.97+
both                               QUANTITY        0.97+
30                                 QUANTITY        0.96+
first                              QUANTITY        0.96+
Bezos                              PERSON          0.96+
D. N. A.                           LOCATION        0.92+
hundreds of different developers   QUANTITY        0.92+
today                              DATE            0.91+
4%                                 QUANTITY        0.91+
SRS                                ORGANIZATION    0.87+
one time                           QUANTITY        0.86+
couple of months ago               DATE            0.83+
Ascend.io                          OTHER           0.77+
Palo alto                          LOCATION        0.76+
four point                         QUANTITY        0.73+
one particular                     QUANTITY        0.65+
things                             QUANTITY        0.65+
years                              DATE            0.65+
a decade                           QUANTITY        0.64+
second                             QUANTITY        0.62+
Kubernetes                         ORGANIZATION    0.59+
parts                              QUANTITY        0.55+
job                                QUANTITY        0.51+
last                               DATE            0.5+
Covid                              PERSON          0.5+
5-10                               QUANTITY        0.49+