

Prakash Darji, Pure Storage


 

(upbeat music)
>> Hello, and welcome to this special Cube conversation that we're launching in conjunction with Pure Accelerate. Prakash Darji is here, the general manager of Digital Experience. They actually have a business unit dedicated to this at Pure Storage. Prakash, welcome back, good to see you.
>> Yeah Dave, happy to be here.
>> So a few weeks back, you and I were talking about the shift to an as-a-service economy, which is a good lead-up to Accelerate, held today in LA, where we're releasing this video. This is the fifth in-person Accelerate. It's got a new tagline, "techfest," so you're making it fun, but still hanging on to the tech, which we love. So this morning you guys made some announcements expanding the portfolio. I'm really interested in your reaffirmed commitment to Evergreen, which is something that got this whole trend started, and the introduction of Evergreen Flex. What is that all about? What's your vision for Evergreen Flex?
>> Well, so look, this is one of the biggest moments that I think we have as a company now, because we introduced Evergreen, and that was, and probably still is, one of the largest disruptions to happen to the industry in a decade. Now, Evergreen Flex takes the power of modernizing performance and capacity of storage beyond the box, full stop. So we first started on a project many years ago to say, okay, how can we bring that modernization concept to our entire portfolio? That means if someone's got 10 boxes, how do you modernize performance and capacity across 10 boxes, or across maybe FlashBlade and FlashArray? So with Evergreen Flex, we're first starting to hyper-disaggregate performance and capacity, and the capacity can be moved to where you need it. So previously, you could have thought of a box as having this performance or capacity range or boundary, but let's think about it beyond the box. Let's think about it as a portfolio. My application needs performance or capacity for storage; what if I could bring the resources to it? So with Evergreen Flex, within the QLC family, with our FlashBlade and our FlashArray QLC products, you can actually move QLC capacity to where you need it. And with FlashArray X and XL, our TLC family, you can move capacity to where you need it within that family. Now, if you're enabling that, you have to change the business model, because the capacity needs to get billed where you use it. If you use it in a high-performance tier, you get billed at a high-performance rate. If you use it in a lower-performance tier, you get billed at a lower-performance rate. So we changed the business model to enable this technology flexibility, where customers buy the hardware and get a pay-per-use consumption model for the software and services, but this enables the technology flexibility to use your capacity wherever you need it. And we're just continuing that journey of hyper-disaggregation.
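To make that consumption model concrete, here is a minimal sketch of pay-per-use billing that follows capacity to the tier where it is actually used. The tier names, rates, and metering function are assumptions invented for illustration, not Pure's actual billing logic.

```python
from dataclasses import dataclass

# Hypothetical per-TiB monthly rates by performance tier (illustrative only).
RATES_PER_TIB = {
    "high-performance": 30.0,   # e.g., capacity deployed in FlashArray X/XL
    "lower-performance": 12.0,  # e.g., capacity deployed in a QLC system
}

@dataclass
class CapacityPlacement:
    tib_used: float
    tier: str

def monthly_software_bill(placements: list[CapacityPlacement]) -> float:
    """Bill each TiB at the rate of the tier where it is used.

    Moving a data pack from a high tier to a low tier changes what it
    costs, not whether you own it: the hardware is bought once, and the
    software/services are metered per use.
    """
    return sum(RATES_PER_TIB[p.tier] * p.tib_used for p in placements)

# Example: 100 TiB split across tiers.
fleet = [
    CapacityPlacement(tib_used=40, tier="high-performance"),
    CapacityPlacement(tib_used=60, tier="lower-performance"),
]
print(monthly_software_bill(fleet))  # 40*30 + 60*12 = 1920.0
```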
>> Okay, so you solve the problem of having to allocate specific capacity or performance to a particular workload. You can now spread that across whatever products are in the portfolio; like you said, you're disaggregating performance and capacity. So that's very cool. Maybe you could double-click on that. You obviously talked to customers about doing this. They were in pain a little bit, right? 'Cause they had this sort of stovepipe thing. So talk a little bit about the customer feedback that led you here.
>> Well, look, let's just say today you're an application developer, or you haven't written your app yet, but you know you're going to. Someone's going to ask you what kind of storage you need, how many IOPS, what kind of performance and capacity, before you've written your code. And you're going to buy something and you're going to spend that money. Now at that point, you're going to go write your application, run it on that box, and then say, okay, was I right or was I wrong? And you know what? You were guessing before you wrote the software. After you wrote the software, you can test it and decide what you need, how it's going to scale, et cetera. But if you were wrong, you already bought something. In a hyper-disaggregated world, that capacity is not a sunk cost; you can use it wherever you want. You can take capacity from somewhere else and bring it over. So in the world of application development and in the world of storage, today people think about, I've got a workload, it's SAP, it's Oracle, I've built this custom app, and I need to move it to a tier of storage, a performance class. You think about the application and you think about moving the application. And it takes time to move the application; it takes planning, it's a scheduled event. What if you didn't have to do any of that? You just move the capacity to where you need it, right?
>> Yep.
>> So the application stays where it is, and you actually have the ability to instantaneously move the capacity to where you need it for the application. And eventually, where we're going is we're looking to do the same thing across performance tiering. So right now, the biggest benefit is the agility and flexibility a customer has across their fleet. Evergreen was great for the customer with one array, but Evergreen Flex now brings that power to the entire fleet. And that's not tied to just FlashArray or FlashBlade. We've engineered a data plane in our DirectFlash fabric software to be able to take on the personality of the system it needs to go into. So when a data pack goes into a FlashBlade, that data pack is optimized for use in that scale-out architecture, with the metadata for FlashBlade. When it goes into a FlashArray C, it's optimized for that metadata structure. So our Purity software is what makes this possible, and we created a business model that allowed us to take advantage of this technology flexibility.
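The operational idea here, moving capacity to the workload rather than migrating the workload to the capacity, can be sketched as a simple fleet-rebalancing routine. The array names, numbers, and greedy donor selection below are invented for illustration; this is not Pure's placement algorithm.

```python
# Illustrative sketch: satisfy a workload's capacity demand by moving
# data packs between arrays in the same media family, rather than
# migrating the application itself.

fleet = {
    # array -> (media family, free TiB)
    "flasharray-x-01": ("TLC", 5),
    "flasharray-xl-02": ("TLC", 40),
    "flashblade-03": ("QLC", 120),
}

def find_donors(target_family: str, needed_tib: float, exclude: str):
    """Greedily pick arrays in the same family that can donate capacity."""
    moves = []
    for array, (family, free) in sorted(
        fleet.items(), key=lambda kv: -kv[1][1]  # most free space first
    ):
        if needed_tib <= 0:
            break
        if array == exclude or family != target_family:
            continue
        take = min(free, needed_tib)
        moves.append((array, take))
        needed_tib -= take
    if needed_tib > 0:
        raise RuntimeError("fleet cannot satisfy demand; expand capacity")
    return moves

# The app on flasharray-x-01 grew and needs 20 more TiB of TLC capacity.
print(find_donors("TLC", 20, exclude="flasharray-x-01"))
# [('flasharray-xl-02', 20)] -> move the data packs, not the application
```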
>> Got it. Okay, so you've got this mutually interchangeable performance and capacity across the portfolio, beautiful. And I want to come back to Purity, but help me understand how this is different from just normal Evergreen, the existing Evergreen options. You mentioned the one array, but help us understand that more fully.
>> Well, look, in addition to this, we had Evergreen Gold historically, we introduced Evergreen Flex, and we had Pure as a Service. So you had two ends of a spectrum previously. You had Evergreen Gold on one hand, which modernized the performance and capacity of a box. You had Pure as a Service, which said, don't worry about the box, tell me how many IOPS you need, and we'll run and operate and manage that service for you. I think we've spoken about that previously on theCUBE.
>> Yep.
>> Now we have this model where it's not just about the box; we have this model where we say, you know what, it's your fleet. You're going to run and operate and manage your fleet, and you can move the capacity to where you need it. So as we started thinking about this, we decided to unify our entire portfolio of software and subscription services under the Evergreen brand. Evergreen Gold we're renaming to Evergreen Forever; we've actually had seven customers just cross a decade of Evergreen updates within a box. So Evergreen Forever is about modernizing a box. Evergreen Flex is about modernizing your fleet. And Evergreen One, which is our rebrand of Pure as a Service, is about modernizing your labor: instead of you worrying about it, let us do it for you. Because if you're an application developer and you're trying to figure out, where should I put my capacity, where should I do it, you can just sign up for the IOPS you need and let us actually deliver and move the components to where you need them for performance, capacity, management, SLAs, et cetera. So for us, this is a spectrum and a continuum of where you're at in the modernization journey to software, subscription, and services.
>> Okay, got it. So why did you feel like now was the right time for the rebranding and the renaming convention? What was behind that? Take us inside the internal conversations and the chalkboard discussion.
>> Well, look, the chalkboard discussion is simple. Everything was built on the Evergreen stateless architecture within a box, right? We disaggregated the performance and capacity within the box already, 10 years ago, with Evergreen. And that's what enabled us to build Pure as a Service. That's why I say, when companies say they've built a service, it's not a service if you have to do a data migration. You need a stateless architecture that's disaggregated. You can almost think of this as the anti-hyperconverged; it's going the other way. It's hyper-disaggregated.
>> Right.
>> And that foundation is true for our whole portfolio. That was fundamental: the Evergreen architecture. And then if Gold is modernizing a box, and Flex is modernizing your fleet and your portfolio, and Pure as a Service is modernizing the labor, it is more of a continuation in the spectrum of how you ensure you get better with age, right? It's like one of those things when you think about a car. Miles driven on a car means your car's getting older, and it doesn't necessarily get better with age, right? What's interesting when you think about the human body: yeah, you get older, and some people deteriorate with age, and some people, it turns out, for a period of time, pick up some muscle mass, get a little bit older, get a little bit wiser, and get a little bit better with age for a while, because they're putting in the work to modernize, right? But where in infrastructure and hardware and technology are you at the point where it always just gets better with age? We introduced that concept 10 years ago, and we've now had proven industry success over a decade. As I mentioned, our first seven customers who've had a decade of Evergreen updates started with an FA-300 way back when, and since then performance and capacity have been getting better over time with Evergreen Forever. So this is the next 10 years of it getting better and better for the company, and not just tying it to the box, because now we've grown up, and we've got customers with large fleets. I think one of our customers just hit 900 systems, right?
>> Wow.
>> So when you have 900 systems and you're running a fleet, you need to think about, okay, how am I using these resources? And in this day and age, in that world, power becomes a big thing, because if you're using resources inefficiently and the cost of power and energy is up, you're going to be in a world of hurt. So by using Flex, where you can move the capacity to where it's needed, you're creating the most efficient operating environment, which is actually the lowest power consumption environment as well.
>> Right.
>> So we're really excited about this journey of modernizing, but that rebranding just became kind of a no-brainer to us, because it's all part of the spectrum on your journey, whether you're a single-array customer, you're a fleet customer, or you don't want to run, operate, and manage at all. You can actually just say, you know what, give me the guarantee and the SLA. So that's the spectrum that informed the rebranding.
>> Got it. Yeah, so to your point about the human body, all you've got to do is look at Tom Brady's NFL combine videos and you'll see what a transformation. Fine wine is another one. I like the term hyper-disaggregated, because that to me is consistent with what's happening with cloud and edge; we're building this hyper-distributed, or disaggregated, system. So I want to just understand a little bit: you mentioned Purity, so this software obviously is the enabler here, but what's under the covers? Is it like a virtualizer or mega load balancer, metadata manager, what's the tech behind this?
>> Yeah, so we'll do a little bit of a double-click, right? So we have this concept of drives where, in Purity, we build our own software for DirectFlash that takes the NAND, and we do the NAND management as we're building our drives in Purity software. Now, that advantage gives us the ability to say how this drive should behave. So in a FlashArray C system, it can behave as part of a FlashArray C, and it's usable capacity that you can write, because the metadata and some of the system information is in NVRAM as part of the controller, right? So you have some metadata capability there. In a FlashBlade architecture, for example, you have a distributed blade architecture, so you need parts of that capacity to operate almost like a single-level-cell chip, where you can actually have metadata operations independent of your storage operations that operate like QLC. So we actually manage the NAND in a very, very different way based on the persona of the system it's going into, to make that capacity usable. Take a competitor, go ahead, name it: Dell, which has PowerMax and Isilon; HPE, which has StoreOnce and 3PAR and Nimble; you name it. Can you really, from a technology standpoint, say your capacity can be used anywhere across all these independent systems? Everyone's thinking about the world like a system: here's this system, here's that system, here's that system, and your capacity is locked into a system. To be able to unlock that capacity from the system, you need to behave differently with the media type and the operating environment you're going into, and that's what Purity does. So we are doing that as part of our DirectFlash software, around how we manage these drives, to enable this.
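As a conceptual sketch of that persona idea, here is what software-managed NAND taking on different metadata behavior per target system could look like. The personas, fields, and percentages are invented for illustration; Purity's actual DirectFlash internals are not public in this form.

```python
from enum import Enum
from dataclasses import dataclass

class Persona(Enum):
    FLASHARRAY_C = "flasharray-c"   # metadata lives in controller NVRAM
    FLASHBLADE = "flashblade"       # scale-out, distributed metadata

@dataclass
class DataPackLayout:
    metadata_region_pct: float  # share of NAND reserved for metadata ops
    metadata_cell_mode: str     # how the metadata region's cells are driven
    data_cell_mode: str         # how the bulk-capacity region is driven

def format_for(persona: Persona) -> DataPackLayout:
    """Same physical QLC NAND, different layout per target system."""
    if persona is Persona.FLASHARRAY_C:
        # The controller's NVRAM holds metadata, so nearly all of the
        # pack can be driven as dense QLC usable capacity.
        return DataPackLayout(0.0, "none", "qlc")
    if persona is Persona.FLASHBLADE:
        # Distributed architecture: carve out a region driven in an
        # SLC-like mode so metadata ops run independently of QLC data ops.
        return DataPackLayout(5.0, "slc-like", "qlc")
    raise ValueError(persona)

print(format_for(Persona.FLASHBLADE))
```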
>> Well, it's the same thing in the cloud, Prakash, right? I mean, you've got different APIs and primitives for object, for block, for file. Now, it's all programmable infrastructure, so that makes it easier, but to your point, it's still somewhat stovepiped. So it's funny, it's good to see your commitment to Evergreen; I think you're right. You laid down the gauntlet a decade-plus ago. First everybody ignored you, then they kind of laughed at you, then they criticized you, and then they said, oh, then you guys reached escape velocity. So you had a winning hand. So I'm interested in that sort of progression over the past decade: where you're going, why this is so important to your customers, and where you're trying to get them ultimately.
>> Well, look, the thing that's most disappointing is if I bought 100 terabytes and still have to re-buy it every three or five years. That seems like a kind of ridiculous proposition, but welcome to storage, you know what I mean? That's what most people do. With Evergreen, we want to end data migrations. We want to make sure that every software update and hardware update is non-disruptive. We want to make it easy to deploy and run at scale for your fleet. And eventually we want everyone to move to Evergreen One, formerly Pure as a Service, where we can run and operate and manage, because this is all about trust. We're trying to create trust with the customer, to say, trust us to run and operate and scale for you, and worry about your business, because we make tech easy. And think about this hyper-disaggregation if you go further. If you're going further with hyper-disaggregated, you can think of performance and capacity as your Lego building blocks. Now, I have a son; if he wants to build a Lego Death Star and he doesn't have the manual, he's toast. So when you move to at-scale, and you have this hyper-disaggregated world and this unlimited freedom, you have unlimited choice. It's the problem of the cloud today: too much choice. There are hundreds of instances of this; what do I even choose?
>> Right.
>> Well, the only way to solve that problem and create simplicity when you have so much choice is to put data to work. And that's where Pure1 comes in, because we've been collecting data, and we can scan your landscape and tell you, you should move these types of resources here and move those types of resources there. In the past, it was always, you should move this application there, or you should move this application there. We're actually going to turn the entire industry on its head. Applications and data have gravity, so let's think about moving resources to where they're needed, versus saying resources are a fixed asset and moving the applications there. That's a concept that's new to the industry. We're creating that concept, we're introducing that concept, because now we have the technology to make that reality: a new, efficient way of running storage for the world. This is that big for the company.
>> Well, I mean, a lot of the failures in data analytics and data strategies are a function of trying to jam everything into a single monolithic system and hyper-centralize it. Data by its very nature is distributed. So hyper-disaggregated fits that model, and the pendulum's clearly swinging to that. Prakash, great to have you; purestorage.com I presume is where I can learn more?
>> Oh, absolutely. We're super excited, and our pent-up demand in this space I think is huge, so we're looking forward to bringing this innovation to the world.
>> All right, hey, thanks again.
Great to see you. I appreciate you coming on and explaining this new model, and good luck with it.
>> All right, thank you.
>> All right, and thanks for watching. This is David Vellante, and we appreciate you watching this Cube conversation. We'll see you next time.
(upbeat music)

Published Date: May 25, 2022



Mark Lyons, Dremio | CUBE Conversation


 

(bright upbeat music)
>> Hey everyone. Welcome to this CUBE Conversation featuring Dremio. I'm your host, Lisa Martin. And I'm excited today to be joined by Mark Lyons, the VP of product management at Dremio. Mark, thanks for joining us today.
>> Hey Lisa, thank you for having me. Looking forward to the talk.
>> Yeah. Talk to me about what's going on at Dremio. I had the chance to talk to your chief product officer, Tomer Shiran, a couple months ago, but talk to us about what's going on.
>> Yeah, I remember that, at re:Invent. It's been an exciting few months since re:Invent here at Dremio. Just in the new year we raised our Series E, and since then we ran our Subsurface event, which had over seven to eight thousand registrants and attendees. And then we announced our Dremio Cloud product as generally available, including Dremio Sonar, which is a SQL query engine, and Dremio Arctic, in public preview, which is a metastore for the lakehouse.
>> Great. And we're going to dig into both of those. I saw that over 400 million was raised in that Series E, raising the valuation of Dremio to 2 billion. So a lot of growth and momentum going on at the company, I'm sure. If we think about businesses in any industry, they've made large investments in proprietary data warehouses. Talk to me about historically what they've been able to achieve, but then what some of those bottlenecks are that they're running into.
>> Yeah, for sure. My background is actually in the data warehouse space. I spent the last eight, maybe close to 10 years, in it, and we've seen this shift go on from the traditional enterprise data warehouse to the data lake, and then the last couple of years have really been the time of the cloud data warehouse. There's been a large amount of adoption of cloud data warehouses, but fundamentally they still come with a lot of the same challenges that have always existed with the data warehouse, which is, first of all, you have to load your data into it. That data's coming from lots of different sources. In many cases, it's landing as files in a data lake repository like S3 first, and then there's a loading process, right? An ETL process. And those pipelines have to be maintained and stay operational. And typically, as the data warehouse life cycle of processing moves on, the scope of the data that consumers get to access gets smaller and smaller, the control of that data gets tighter, and the change process gets heavier. It goes from quick changes, like adding a column or adding a field to a file, to days if not weeks for businesses to modify their data pipelines, test new scenarios, offer new features in the application, or answer new questions that the business is interested in, you know, from an analytics standpoint. So typically we see the same thing even with these cloud data warehouses: the scope of the data shrinks, and the time to get answers gets longer. And when new engines come along, it's the same story. This is going on right now in the data warehouse space: a new warehouse comes along and they say, well, we're a thousand times faster than the last data warehouse. And then it's like, okay, great, but what's the process? The process is to migrate all your data to the new data warehouse, right? And that comes with all the same baggage. Again, it's a proprietary format that you load your data into. So I think people are ready for a change from that.
>> People are not only ready for a change, but as every company has to become a data company these days, access to real-time data is no longer a nice-to-have; it's absolutely essential. The ability to scale, the ability to harness the value from as much data as possible, and to do so fast, is really table stakes for any organization. How is Dremio helping customers in that situation to operationalize their data?
>> Yeah, so that's what I was so intrigued by and loved about Dremio when I joined three, four, five months back. Coming from the warehouse space, when I first saw the product, I was just like, oh my gosh, this is so much easier for folks. They can access a larger scope of their data faster, which, to your point, is table stakes for all organizations these days. They need to be able to analyze data sooner, and sooner is better. Data has a half-life, right? It decays. The value of data decays over time, so typically the most valuable data is the newest data. That all depends on the industries we're talking about, the types of data, and the use cases, but it's basically always true that newer data is more valuable, and they need to be able to analyze as much of it as possible. The story can't be, no, we have to wait weeks or months to get a new data source. Or the story can't be, you know, that data that includes seasonality we weren't able to keep in the same location, because it's too expensive to keep it in the warehouse, or whatever. So for Dremio and our customers, our story is simple: leverage the data where it is. Access data in all sorts of sources, whether it's a Postgres database or an S3 bucket, and don't move the data, don't copy the data; analyze it in place. And don't limit the scope of the data you're trying to analyze. If you have new use cases, or you have additional data sets that you want to add to those use cases, just bring them into S3 and you're off to the races, and you can easily analyze more data and give more power to the end user. So if there's a field that they want to calculate, a simple change, convert this miles field to kilometers, well, the end user should be empowered to just make a calculation on the data like that. That should not require an entire cycle through a data engineering team and a backlog and a ticket and pushing that to production and so forth, which in many cases it does at many organizations. It's a lot of effort to make new calculations on the data, or derive new fields, or add a new column. So Dremio makes the data engineer's life easier and more productive, and it also makes the data consumer's life much easier and happier; they can just do their job without worrying and waiting.
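For a flavor of what that end-user empowerment might look like, here is a small sketch of a derived field defined at read time over data left in place. The SQL and the table and field names are invented for illustration, not a specific Dremio API; the point is that the miles-to-kilometers conversion lives in the query, not in an ETL pipeline.

```python
# Sketch of a consumer-defined calculated field. The SQL below is what a
# virtual dataset adding a derived column might look like; the Python
# stand-in beneath it applies the same transformation locally so the
# example runs as-is.

VIRTUAL_DATASET_SQL = """
SELECT trip_id,
       distance_miles,
       distance_miles * 1.60934 AS distance_km  -- derived at read time
FROM   s3_lake.trips                            -- queried in place
"""

trips = [
    {"trip_id": 1, "distance_miles": 10.0},
    {"trip_id": 2, "distance_miles": 26.2},
]

# The derived field is just an expression over existing data: no copy,
# no ETL backlog, no schema migration.
for t in trips:
    t["distance_km"] = t["distance_miles"] * 1.60934
    print(t["trip_id"], round(t["distance_km"], 2))
```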
>> Not only can they do their job, but from a business, high-level perspective, the business probably has the opportunity to be far more competitive, because it's got a bigger scope of data, as you mentioned, and access to it more widely and faster, and those are only good things in terms of-
>> More use cases, more experiments, right? What I've seen a lot is that there's no shortage of ideas of what people can do with the data, and projects that might be undertaken, but no one knows exactly how valuable that will be, whether that's something that should be funded or should not be funded. So more use cases, more experiments, try more things. If it's cheap to try these data problems and see if they're valuable to the business, then that's better for the business. Ultimately the business will be more competitive. We'll be able to try more new products, we'll be able to have better operational efficiencies, lower risk, all those things.
>> Right. What about data governance? Talk to me about how the lakehouse enables that across all these disparate data volumes.
>> I think this is where things get really interesting with the lakehouse concept, relative to where we used to be with a data lake, which was a parking ground for just lots of files. And that came with a lot of challenges, when you just had a lot of files out there in a data lake, whether that was HDFS, right, a Hadoop data lake back in the day, or now a cloud object storage data lake. So historically, I feel like governance, access, authentication, and auditing were all extremely challenging with the data lake, but now, in the modern lakehouse world, all those challenges have been solved. You have everything from the front of the house, with authentication and access policies and data masking, everything that you would expect, through commits and tables and transactions and inserts and updates and deletes, and auditing of that data: being able to see who made the changes to the data, which engine, which user, and when they were made, and seeing the whole history of a table, not just a mess of files in a file store. So it's really come a long way. I feel like we're in the renaissance stage of the 2.0 data lakes, or lakehouses, as people call them. Basically, what you're seeing is a lot of functionality from the traditional warehouse, all available in the lake. And warehouses had a lot of governance built in, whether that's encryption, or column access policies and row access policies so only the right user saw the right data, or some data masking, so that, say, the social security number was masked out but the analyst knew it was a social security number. That was all there. Now that's all available on the lakehouse, and you don't need to copy data into a data warehouse just to meet those types of requirements. A huge one is also deletes, right? I feel like deletes were one of the Achilles' heels of the original data lake, when there was no governance and people were just copying data sets around, modifying data sets for whatever their analytics use case was. If someone said, hey, go delete this record, right to be forgotten, GDPR, and now you've got California's CCPA and others all coming online, if you had to delete a record or set of records from the original lake, I think that was probably impossible for many people to do with confidence, to say, I fully deleted this. Now, with the Apache Iceberg table format that stores the data in the lakehouse architecture, you actually have delete functionality, right? Which is a key capability that warehouses traditionally brought to the table.
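To make one of those governance pieces concrete, here is a minimal sketch of the column masking Mark describes: the analyst can tell the field is a social security number without seeing the digits. This is a generic illustration, not Dremio's masking syntax or implementation.

```python
import re

def mask_ssn(value: str, role: str) -> str:
    """Show the full SSN only to privileged roles; mask it otherwise.

    The masked form keeps the shape of the field, so an analyst still
    knows it's a social security number without seeing the digits.
    """
    if role == "privileged":
        return value
    if not re.fullmatch(r"\d{3}-\d{2}-\d{4}", value):
        raise ValueError("not an SSN-shaped value")
    return "***-**-" + value[-4:]

print(mask_ssn("123-45-6789", role="analyst"))     # ***-**-6789
print(mask_ssn("123-45-6789", role="privileged"))  # 123-45-6789
```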
>> That's a huge component from a compliance perspective. You mentioned GDPR, and CCPA, which is going to be CPRA in less than a year, but there are so many other data privacy regulations coming up, so the ability to delete is going to be table stakes for organizations. Something that you guys launched, and we just have a couple minutes left, but you launched, and I love the name, the Forever Free data lakehouse platform. That sounds great, Forever Free. Talk to me about what that really means; it consists of the two products you mentioned, Sonar and Arctic, but talk to me about this Forever Free data lakehouse.
>> Yeah. I feel like this is an amazing step forward in the industry. Because of the Dremio Cloud architecture, where the execution and data live in the customer's cloud account, we're able to basically say, hey, the Dremio software, the Dremio service side of this platform, is forever free for users. Now, there is a paid tier, but there's a standard tier that is truly forever free. That still comes with infrastructure bills from your cloud provider, right? So if you use AWS, you still have an S3 bill for your data sets, because we're not moving them; they're staying in your Amazon account, in your S3 bucket. You do still have to pay for the infrastructure, right: the EC2 and the compute to do the data analytics. But the actual software is free forever, and there's no one else in our space offering that. In our space, everything's a free trial: here's your $500 of credit, come try my product. What we're saying is, with our unique architectural approach, and this is what I think is preferred by customers too, we take care of all the query planning, all the engine management, all the administration of the platform, and the upgrades: a fully available, zero-downtime platform. So they get all the benefits of SaaS, as well as the benefits of maintaining control over their data. And because that data is staying in their account, and the execution of the analytics is staying in their account, we don't incur that infrastructure bill, so we can have a forever-free tier of our platform. And we've had tremendous adoption. I think we announced this at the beginning of March, the first week of March, so it's not even the end of March yet: hundreds and hundreds of signups, and many customers and users are actively on the platform now, live, querying their data.
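The cost split implied by that architecture, free software with the infrastructure bill staying in the customer's cloud account, can be summarized in a few lines. The dollar amounts and service names below are invented for illustration.

```python
# Who pays for what under the split described above (illustrative only).
monthly_costs = {
    # Stays with the customer: their cloud account runs storage + compute.
    "customer_infra": {
        "s3_storage": 230.00,   # data never leaves their bucket
        "ec2_compute": 410.00,  # engines run in their account
    },
    # Vendor side: the control plane (query planning, engine management,
    # upgrades) costs $0 on a forever-free standard tier.
    "vendor_software": {
        "standard_tier": 0.00,
    },
}

infra = sum(monthly_costs["customer_infra"].values())
software = monthly_costs["vendor_software"]["standard_tier"]
print(f"customer infra: ${infra:.2f}")    # customer infra: $640.00
print(f"software:       ${software:.2f}") # software:       $0.00
```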

Published Date: Mar 24, 2022

