Sean Knapp, Ascend.io | CUBE Conversation


 

>> Hello and welcome to this special CUBE Conversation. I'm John Furrier, here in Palo Alto, California, host of theCUBE. We're here with Sean Knapp, the CEO and founder of Ascend.io, a heavily venture-backed company working on some really cool challenges and solving some big problems around scale, data, and creating value in a very easy way. Companies are struggling to continue to evolve and refactor their business now that they've been re-platformed with the cloud, and you're seeing a lot of new things happening. So Sean, great to have you on, and thanks for coming on.

>> Thanks for having me, John.

>> One of the things I've found interesting about your company, which not only has a great pedigree in terms of investors and tech staff, is that you guys are going after this kind of new scaling challenge, which is not your classic kind of talking point around cloud scale, you know, more servers, more data. It's a little bit different. Can you describe what you guys mean by this new scaling challenge?

>> Absolutely. The classic sense of scaling, particularly when it comes to the data industry, whether it's big data, data science, or data engineering, has always focused on bits and bytes: how many servers you have, how big your clusters are. We've watched over the last five to ten years, and those kinds of scaling problems, while not entirely solved for most companies, are largely solved problems now. The new challenge that is emerging is not how do you store more data or how do you process more data, but how do you create more data products, how do you drive more value from data? And the challenge that we see many companies today really struggling to tackle is that data productivity, that data velocity challenge, and that's more of a people problem. It is: how do you get more people able to build more products, faster and safely, that propel the business forward?

>> You know, that's an interesting topic. We talk about DevOps and how DevOps is evolving, and you're seeing SREs become a standard position in companies now, the site reliability engineer that Google pioneered, which is essentially the DevOps person. But now you don't need to have a full DevOps team as you get more automation; that's a big, big part of it. I want to get into that, because you're touching on some scale issues around people, the relationships to the machines and the data. It's an interesting conversation. But before we do that, can you just take a minute to explain what you guys do? I know you're in Palo Alto, it's where I live and where our offices are. What's Ascend all about?

>> Absolutely. What Ascend really focuses on is building the software stack on top of modern-day big data infrastructure, for data engineers, data scientists, and data analysts to self-serve and create active data pipelines that feed the rest of their business. We provide this as a service to a variety of different companies, from Australia to Italy, finance to IoT, startups to large enterprises, and we really help elevate their teams, as Bezos said a long time ago, out of the muck of the underlying infrastructure; we help them do the same thing out of the muck of classic data engineering work.

>> That's awesome. Andy Jassy, now the CEO of Amazon, who was the CEO of AWS, has been on many times over the years, and he always has the line "undifferentiated heavy lifting."
Well, data is actually differentiated, and it's also heavy lifting too. You have differentiation with data, and it's super important, but there's a lot of it now, so there's a lot of heavy lifting, and this is where people are struggling. I want to get your thoughts on this because you have an opinion on how teams are formed and how teams can scale, because we know scale is coming on the data side and there are different solutions: you've got Databricks, you've got Snowflake, you've got Redshift, and there are a zillion other opportunities for companies to deploy data tooling and platforms. What's your take on the changes in data?

>> Well, I think the data ecosystem is changing very, very quickly, which makes for a very exciting industry, and I do think we are in this great cycle of continuing to reinvest higher and higher up the stack, if you will. In many ways we want to keep elevating our teams, our partners, our customers, and our companies out of the non-differentiated elements. This is one of those areas where we see tremendous innovation happening from Amazon, from Databricks, from Snowflake, who are solving many of these underlying infrastructure, storage, and processing challenges, and even some application-layer challenges, for teams. What we find oftentimes is that teams, after having adopted some of these stacks and solutions, then have to start solving the problem of how do we build faster, how do we build better, and how do we produce more on top of these incredibly valuable investments we've made? They're looking for acceleration. They're looking for, in many ways, the autopilot, self-driving level of capabilities and intelligence to sit on top and help them actually get the most out of these underlying systems. And that's really where we see the big changes.

>> Self-driving data, you've got to have the products first. I think you mentioned earlier data products, data becoming products, and there's a trend with this idea of data products, data apps. What is a data product? That's a new concept, and most people really can't get their arms around it because it's kind of new. How does data become productized, and why is it growing so fast?

>> Yeah, that's a great question. To quickly talk through a lot of the evolution of the industry: oftentimes we started with, well, let's just get the data inside of a lake, and it was a very bottoms-up notion of, once we've collected it, then we'll go do something with it, a very Field of Dreams-esque approach, right? And oftentimes they didn't come, and your data just sat there and became a swamp. When we think about a data-product-oriented model of building, it is less about how do we just collect and store and process data, and much more about the business value side of how do we create a new dataset. In architectural models it would be, how do we launch a new microservice or a new feature out to a customer? A data product is a new, refined, valuable, curated, live set of data that can be used by the business, whether it's by data analysts or data scientists or all the way out to end consumers. It is very heavily oriented toward that piece, because that's really where we get to deliver value for our end users and customers.
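To make the idea of a data product slightly more concrete, here is a minimal, hypothetical sketch in Python of how a team might declare one as a curated, owned, refreshable dataset rather than raw files sitting in a lake. The `DataProduct` structure, its field names, and the example team and dataset names are invented for illustration; they are not Ascend.io's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class DataProduct:
    """A curated, live dataset treated as a product rather than a byproduct."""
    name: str
    owner: str                        # team accountable for this product (hypothetical)
    sources: List[str]                # upstream datasets it is derived from
    schema: Dict[str, str]            # column -> type: the contract consumers rely on
    freshness_sla_hours: int          # how stale the data may get before it is a problem
    build: Optional[Callable[[List[dict]], List[dict]]] = None  # logic that produces it


def build_daily_revenue(orders: List[dict]) -> List[dict]:
    """Refine raw order rows into a consumer-ready daily revenue dataset."""
    totals: Dict[str, float] = {}
    for order in orders:
        totals[order["order_date"]] = totals.get(order["order_date"], 0.0) + order["amount"]
    return [{"order_date": day, "revenue": rev} for day, rev in sorted(totals.items())]


daily_revenue = DataProduct(
    name="daily_revenue",
    owner="analytics-engineering",            # hypothetical owning team
    sources=["raw.orders"],                    # hypothetical upstream dataset
    schema={"order_date": "date", "revenue": "decimal"},
    freshness_sla_hours=24,
    build=build_daily_revenue,
)
```

The point is the contract: a named owner, a schema consumers can rely on, and a freshness expectation, with the build logic attached to the product rather than buried in an ad hoc script.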
>> Yeah, getting that data fast is key. Again, I love this idea of data becoming programmable, kind of a DataOps vibe, where you're seeing data products that can be nurtured and also scaled up with people as this continues. The next logical question I have for you is: okay, I get the data products, now I have teams of people, how do I deploy them? How do the teams change? Because now you have low-code and no-code capabilities, and you have some front-end tools that make it easier to create new apps and products where data can feed in. Someone discovers a cool new value metric in the company; they can say, "Here, boss, is a new metric we've identified that drives our business," and now they've got to productize that in an app, and they used low code or no code. Where do you guys see this going? Because you can almost see a whole other persona of developer, or engineering team, emerging.

>> Absolutely. I think this is one of the challenges when we look at the data ecosystem. We even ran a survey a couple of months ago across hundreds of different developers, asking data scientists, data engineers, and data analysts about the overall productivity of their teams. What we found was that 96% of teams are at or over capacity, meaning only 4% of teams even have the capacity to start investing in better tools or better skill sets, and most are really under the gun. What that means is teams and companies are looking for more people with different skill sets, and frankly for how they get more leverage out of the folks they have, so they spend less time maintaining and more time building. So what ends up happening is this introduction of low-code and no-code solutions to help broaden the pool of people who can contribute. And what we find oftentimes is there's a bit of a standoff happening between engineering teams, analyst teams, and data science teams, where some people want low code, some people want no code, and some people just want super high code all day, all the time. What we're finding, and this was actually part of one of the surveys we ran, is that a very small percentage of users, less than 10%, were amenable to purely no-code solutions, but more than 70% were amenable to solutions that leaned toward low code or no code yet still allowed them to program in a language of their choice and gave them more leverage. So what we see happening is really this new era of what we describe as flex code, where it doesn't have to be just low code or just no code, but teams can plug in at different layers of the stack, at different abstraction layers, and contribute side by side with each other, all toward the creation of this data product, with a pluggable model of flex code.

>> So let's unpack flex code for a second. Would you mind first defining what you mean by flex code, and then talking about the implications for the teams? Because it sounds like it's integrated yet decoupled at layers. Can you take me through what it is, and then let's unpack it a little bit?

>> Absolutely. Flex code is really a methodology that, of course, companies like ours will go and productize.
But it is the belief structure that you should be able to peel back layers and contribute to an architecture, in this case a data architecture, whether that's through building in a no-code interface, or by writing some low code in SQL, or by going down and actually running lower-level systems and languages. It's become so critical and key in the data ecosystem, because what has classically happened is, well, if we need to go deeper into the stack and customize more of how we run this one particular data job, you end up throwing away most of the benefits of adopting those other code tools, and you end up shutting off a lot of the rest of the company from contributing. You then have to be, for example, a really advanced Scala developer who understands how to extend a Docker runtime environment in order to contribute. The reality is you probably want a few of those folks on your team, and you do want them contributing, but you still want the data analysts and the data scientists and the software engineers able to contribute at higher levels of the stack, all building that solution together. So it becomes this hybrid architecture.

>> And I love that, because this is a really good exploration. What you're saying is it's not that low code and no code are inadequate. It's just that the evolution of the market is such that as people start writing more code, things kind of break downstream. You've got to pull the expert in to fix the plumbing at the lower levels of the stack, so to speak, the higher-end, systems-oriented kind of components. So that's just an evolution of the market, and you're saying flex code is the next level of innovation, productizing that in an architecture, so you don't waste someone's time getting yanked in to solve a problem just to fix something downstream. People are coding with no code and low code, and it just breaks something else downstream; you're fixing that.

>> Absolutely. And the idea here is one of those old adages. When we're selling to customers we see this, and I remember a head of engineering once told me: well, you may make 95% of my team's job easier, but if you make the last 5% impossible, it's a non-starter. So a lot of this comes down to: how do we make that 95% of the team's job far easier, oftentimes through a low-code or no-code interface, but when you really have to go do that one ultra-advanced, customized thing, how do we make sure you still get all the benefits while you can still go down and really tune and optimize that one piece?
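As a rough illustration of that flex code idea, here is a hypothetical Python sketch of a single pipeline in which an analyst contributes a stage as plain SQL, a data engineer contributes a stage as ordinary Python, and a specialist drops down to a custom container image for the one advanced case. The `sql_transform`, `python_transform`, and `container_transform` helpers, and the names and image used, are invented for this example; this is a sketch of the layering, not Ascend.io's actual API.

```python
from typing import Callable, List

# Hypothetical registry of pipeline stages; each stage can be authored at a
# different abstraction level, yet they compose into one pipeline.
PIPELINE: List[dict] = []


def sql_transform(name: str, query: str, inputs: List[str]) -> None:
    """Low-code stage: an analyst contributes a SQL query."""
    PIPELINE.append({"name": name, "kind": "sql", "query": query, "inputs": inputs})


def python_transform(name: str, inputs: List[str]) -> Callable:
    """Mid-level stage: an engineer contributes a plain Python function."""
    def register(fn: Callable) -> Callable:
        PIPELINE.append({"name": name, "kind": "python", "fn": fn, "inputs": inputs})
        return fn
    return register


def container_transform(name: str, image: str, inputs: List[str]) -> None:
    """High-code stage: a specialist supplies a custom runtime image."""
    PIPELINE.append({"name": name, "kind": "container", "image": image, "inputs": inputs})


# An analyst's contribution: just SQL, nothing deeper.
sql_transform(
    name="clean_orders",
    query="SELECT order_id, amount FROM raw_orders WHERE amount > 0",
    inputs=["raw_orders"],
)

# A data engineer's contribution: regular Python, side by side with the SQL stage.
@python_transform(name="enrich_orders", inputs=["clean_orders"])
def enrich_orders(rows: list) -> list:
    return [{**row, "amount_usd": row["amount"] * 1.0} for row in rows]

# The rare advanced case: drop down to a custom image instead of rewriting everything.
container_transform(
    name="score_orders",
    image="registry.example.com/fraud-scorer:1.4",  # hypothetical image
    inputs=["enrich_orders"],
)
```

The design point is that all three contributions register into the same pipeline, so going deep on one stage does not force the rest of the team off the higher-level interfaces.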
>> Yeah, that's really an architectural decision, because that's the classic trap: you don't want to foreclose future options, right? So as a developer you need to think about this, and it really requires you guys to lean in with that architectural team. How do you do that? What do those conversations look like? Is it "work with Ascend and we've got you covered," or how do those conversations go? Because if someone is swinging low code or no code, they might not even know that they're foreclosing that 5%.

>> Yeah. Oftentimes they're the ones given the hardest, gnarliest problems to solve, and they may not even have the visibility that there is a team of 30 analysts who can go write incredible data pipelines if they're still afforded a low-code or no-code interface on top. So for us, we really partner heavily with our customers and our users. We do a ton of joint architecture and design decisions, not just for their products; we actually bring them in to all of our architecture, design, and roadmapping sessions as well. We do a lot of collaborative building, very much how we treat the developer community around the company. We spend a lot of time on that.

>> It's part of your partner strategy. You're building the bridge to the future with the customer.

>> Yeah, absolutely. In fact, almost all of our communications with our customers happen in shared Slack channels. We are treated like extensions of our customers' teams, and we treat them as our internal customers as well.

>> And that's the way it should be. You're doing some great work; it's really cutting-edge and really setting the table for a decade of innovation with the customer, if they get it right. So I've got to ask: with this architecture, you've got to be factoring in automation, because orchestration and automation are the principles of DevOps taken to the next level. I love this conversation, DevOps 2.0, 4.0, or whatever you want to call it; it's the next level of DevOps. It's data automation, and you're taking it to a whole other level within your sphere. Talk about automation and how that factors in, and obviously the benefits of automation. Autonomous data pipelines would be cool, no coding, but I can see maintenance is an issue. How do you offload developers so that it's not only an easy button but a maintenance button?

>> Yeah, absolutely. What we find in the evolution of most technical domains is a shift at some point from an imperative developer model to a declarative developer model. For example, we see this in databases with the introduction of SQL, and we see it in infrastructure definition with tools like Terraform and now Kubernetes. What we do from an automation perspective for data pipelines is very similar to what Kubernetes does for containers: we introduce a declarative model and put in this incredible intelligence that tracks everything around how data moves. For us, metadata alone is a big data problem, because we track so much information, and all of that goes into this central brain that is dynamically adapting to code and data for our users. So when we look at the biggest potential to automate, it is to help alleviate maintenance and optimization burdens for users, so they get to spend more time building and less time maintaining. And that really comes down to: how do you have this central brain that tracks everything and builds this really deep understanding of how data moves through an organization?
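To ground that imperative-versus-declarative distinction, here is a small, generic Python sketch: instead of hand-scripting when each job runs, the team declares datasets and their upstream dependencies, and a scheduler, playing the role of the "central brain" in spirit, works out from that metadata what has to rebuild and in what order. This is an assumption-laden illustration of the general pattern, not how Ascend's engine is actually implemented; the dataset names are invented.

```python
from typing import Dict, List, Set

# Declarative spec: each dataset names only its upstream dependencies.
# The "how" and "when" of execution are left to the engine.
DATASETS: Dict[str, List[str]] = {
    "raw_orders": [],
    "clean_orders": ["raw_orders"],
    "daily_revenue": ["clean_orders"],
    "revenue_dashboard": ["daily_revenue"],
}


def stale_downstream(changed: Set[str], spec: Dict[str, List[str]]) -> List[str]:
    """Given datasets whose data or code changed, return everything that must be
    refreshed, in dependency order. Assumes the spec is acyclic."""
    stale: Set[str] = set(changed)
    # Propagate staleness until no new datasets are added.
    grew = True
    while grew:
        grew = False
        for name, deps in spec.items():
            if name not in stale and any(d in stale for d in deps):
                stale.add(name)
                grew = True
    # Order the rebuilds so a dataset runs only after its stale dependencies.
    ordered: List[str] = []
    remaining = set(stale)
    while remaining:
        for name in sorted(remaining):
            if all(d not in remaining for d in spec[name]):
                ordered.append(name)
                remaining.remove(name)
                break
    return ordered


# New data arrived in raw_orders, so everything downstream refreshes,
# without anyone hand-scripting the run order.
print(stale_downstream({"raw_orders"}, DATASETS))
# ['raw_orders', 'clean_orders', 'daily_revenue', 'revenue_dashboard']
```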
>> That's an awesome vision. I've got to ask, my brain's firing off: okay, what about runtime assembly? As you orchestrate data in real time, you have to pull the assembly together and link and load all this data. I can only imagine how hard that is, right? So can you share your vision? Because you mentioned Docker containers, and the benefit of containers is that they can manage stateful and stateless workloads. As you get into this notion of stateful and stateless data, how do you assemble it all in real time? How does that work? How does that brain figure it out? What's the secret sauce?

>> Yeah, that's a really great question. For us, and this is one of the most exciting parts for our customers and our users, we help with this paradigm shift. The classic model has been: you write code, you compile it, you ship it, you push it out, and then you cross your fingers, like, gosh, I really hope that works. It's a very slow iteration cycle. One of the things we've been able to do because of this intelligence layer is actually hybridize that for users. You still have pipelines, and they still run and are still being optimized, but we make it an interactive experience at the same time, very similar to how notebooks make data science such an interactive experience. We make the process of building data pipelines and doing data engineering work iterative and interactive; you're getting instantaneous feedback and evolving very quickly. So the things that used to take weeks or months due to slow iteration cycles can now really be done in hours or days, because you get such fast feedback loops as you build.
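A brief, hedged sketch of what that tighter feedback loop can look like in practice: instead of shipping a whole pipeline and waiting for a scheduled run, a developer previews a transform against a small sample and iterates on the spot. The `preview` helper and the sample rows below are invented for this illustration.

```python
from typing import Callable, List


def preview(transform: Callable[[list], list], sample: List[dict], limit: int = 5) -> None:
    """Run a pipeline transform on a small sample and show the result right away,
    instead of deploying the full pipeline and waiting for a scheduled run."""
    for row in transform(sample)[:limit]:
        print(row)


# Iterating on a transform interactively, notebook-style.
sample_orders = [
    {"order_id": 1, "amount": 120.0, "country": "US"},
    {"order_id": 2, "amount": -5.0, "country": "US"},   # bad record we want dropped
    {"order_id": 3, "amount": 80.0, "country": "IT"},
]


def clean_orders(rows: list) -> list:
    return [r for r in rows if r["amount"] > 0]


preview(clean_orders, sample_orders)
# Tweak the logic, re-run the preview, and only promote the transform once it looks right.
```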
>> Well, we definitely need your product. We have so much data on the media side; all these events are like little data, but it's a lot of little data, and that makes it a big data problem. And I feel like I'm jumping out of the airplane with a parachute, and will it open? We don't know, right? So a lot of the fear is that we don't want to crater, building data products while just praying, right? This is really what everyone's doing right now; it's kind of the state of the industry. How do you guys make it easy? That's the question, right? Because you brought up the human aspect, which I love: the human scale, scaling teams. Nobody wants another project if they're already burnt out with COVID and don't have enough resources. There's a little bit of psychology going on in the human mind now around burnout, and the relationship of humans to data now has this human interaction. All of it is around the future of work, simplicity, and self-service. What are your thoughts on those?

>> Oh, I wholeheartedly agree. I think we need to continue pushing those boundaries around self-service and around developer, and frankly just outright data, productivity. For us, it's become a really fascinating time in the industry. I would say in 2019, much of the industry, the users and builders in it, just embraced the fact that, frankly, building data pipelines sucked, and it was a badge of honor because it was such a hard and painful thing. Yet what we're finding now, as the industry evolves, is an expectation that it should be easier. People are challenging that conventional wisdom and expecting building data pipelines to be much easier, and that's really where we come in, both with a flex code model and with high levels of automation, to keep people squarely focused on rapid building versus maintaining and tinkering too deep in the stack.

>> You know, I really think you're onto something with that scaling challenge of people and teams; it's a huge issue, and matching it at the pace of cloud and data scale is a huge, huge focus, so I'm glad you're focusing on that. That's a human issue. And then on the data architecture, we've seen how to do a failed project: you require the customer to create all this undifferentiated support and heavy lifting, and there's a time lag just to get to value, so there is no value realized in the cloud. You're on the right track. How do you talk to customers? Take a minute to share with the folks who are watching, whether they're a customer, an enterprise, or a potential customer: what's in it for them? Why Ascend, why should they work with you, and how do they engage with you guys?

>> Yeah, absolutely. What's in it for customers is time to value, truncated dramatically. You get projects live, and you get them far faster than you ever thought possible. The way that we engage with our customers is that we partner with them: we launch them on the application, they can buy us from the marketplace, and we will actually help architect their first project with them and ensure they have full-fledged, live data products within the first four weeks. That, I think, is what matters most; frankly, features and functions and so on really don't matter. Ultimately, at the end of the day, what really matters is: can you get your data products live, can you deliver business value, and is your team happy as they get to go build? Do they smile more throughout the day because they're enjoying that developer experience?

>> So you're providing the services to get them going. It's the old classic expression: teach them how to fish, and then they can fish on their own. Is that right?

>> Yep. Absolutely.

>> And then they go do the next thing.

>> Yeah, and then we're excited to watch, quarter after quarter, year after year, as our customers build more and more data products, and their teams grow faster than most of the other teams in their companies because they're delivering so much value. That's what's so exciting.

>> You know the cliche: every company is a data company. I know that's kind of a cliche, but it's true, right? Everyone has to have data in their core DNA, but they shouldn't have to hire hardcore data engineering; they have a data team, for sure, and that team has to create a service model for practitioners inside the company.

>> Wholeheartedly agree.

>> Sean, great conversation. Great to unpack flex code; I love that approach, taking low code to the next level with data. Great stuff. Ascend.io, a Palo Alto-based company, congratulations on your success.

>> Thank you so much, John.

>> Okay, this has been a CUBE Conversation here in Palo Alto. I'm John Furrier, your host of theCUBE. Thanks for watching.

Published Date: Sep 7, 2021

