Vaughn Stewart, Pure Storage & Bharath Aleti, Splunk | Pure Accelerate 2019
>> From Austin, Texas, it's theCUBE, covering Pure Storage Accelerate 2019. Brought to you by Pure Storage. >> Welcome back to theCUBE. I'm Lisa Martin; Dave Vellante is my co-host. We're at Pure Accelerate 2019 in Austin, Texas, and a couple of guests are joining us next. Please welcome Bharath Aleti, Director of Product Management for Splunk. Welcome back to theCUBE. >> Thank you. >> And guess who's back: Vaughn Stewart, VP of Technology from Pure. Vaughn, welcome back. >> Hey, thanks for having us, guys. Really excited about this topic. >> We are too. All right, Vaughn, we'll start with you, since you're so excited and your nice orange pocket square is peeking out of your jacket there. Talk about the Splunk relationship: long relationship, new offerings, joint value. What's going on? >> Great setup. So Splunk and Pure have had a long relationship around accelerating customers' analytics: the speed at which they can get their questions answered, the rate at which they can ingest data so they can add more sources, look at more data, and get faster time to action. However, I shouldn't be leading this conversation, because Splunk has released a new architecture, a significant evolution, if you will, from the traditional Splunk architecture, which was built off of DAS and a shared-nothing architecture leveraging replicas, very similar to what you'd have with, say, an HDFS workload or HCI, for those who aren't in the analytics space. They've released a new architecture that's disaggregated, based off of caching and an object store construct, called SmartStore, which Bharath is the product manager for. >> All right, tell us about that. >> So we released the SmartStore feature as part of Splunk Enterprise 7.2 about a year back, in the September timeframe. The real genesis of SmartStore goes back to a key customer problem we were looking to solve. Some of our customers were already ingesting a large volume of data, but they needed to retain that data for twice as long, more than a petabyte, and in the existing architecture that required them to scale the amount of hardware nearly linearly. What we realized is that, sooner or later, all customers are going to run into this issue: if they want to ingest more data or retain data for longer periods of time, they're going to hit this cost ceiling. The challenge is that today's distributed scale-out architecture, which evolved about 10 years back with the rise of Hadoop, colocates compute and storage. Because compute and storage are colocated, it allows us to process large volumes of data, but if you look at demand today, the demand for storage is outpacing the demand for compute. These are two directly opposite trends in the market, so if you need to provide performance at scale, there needs to be a better model than what we had. That's why we brought SmartStore to general availability last September. What SmartStore brings to the table is that it decouples compute and storage, so now you can scale storage independent of compute. If you need more storage, or if you need to retain data for longer periods of time, you can scale the storage independently, and we leverage remote object stores like FlashBlade to provide that data repository.
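For readers who want to see what the decoupling Bharath describes looks like in practice, below is a minimal sketch, written as a small Python script that emits example indexes.conf stanzas of the kind SmartStore uses. The bucket name, endpoint, and index name are placeholders, and the exact option names should be checked against the Splunk SmartStore documentation for your version.

    # Sketch: emit SmartStore-style indexes.conf stanzas with placeholder values.
    # Option names follow Splunk's documented SmartStore settings (storageType,
    # remotePath, remote.s3.*); verify them against the docs for your release.
    import configparser

    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve camelCase option names

    # A remote volume backed by an S3-compatible object store
    # (for example, a FlashBlade data VIP).
    cfg["volume:remote_store"] = {
        "storageType": "remote",
        "path": "s3://smartstore-bucket",                        # placeholder bucket
        "remote.s3.endpoint": "https://objectstore.example.com", # placeholder endpoint
    }

    # An index whose master copy of warm data lives in the remote volume;
    # only cached copies stay on the indexer's local disk.
    cfg["example_index"] = {
        "homePath": "$SPLUNK_DB/example_index/db",
        "coldPath": "$SPLUNK_DB/example_index/colddb",
        "thawedPath": "$SPLUNK_DB/example_index/thaweddb",
        "remotePath": "volume:remote_store/$_index_name",
    }

    with open("indexes.conf", "w") as f:
        cfg.write(f)

The key idea mirrors what Bharath describes: remotePath points the index at a remote object-store volume, so storage can grow independently of the indexers, and data is pulled back locally only when a search needs it.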
But most of your active data set still resides locally on the indexers. So what we did was basically break the paradigm of compute-storage colocation, with a small twist: compute and storage can be decoupled, but you bring them back together only on demand. That means only when you are running a search, and the data is actually being looked for, do we bring the data and the compute together. The other key thing we do is maintain the active data set: SmartStore has a very powerful cache manager that ensures the active data set is always local, very similar to the way your laptop keeps its active data in cache, always in memory. So, very much like that, the SmartStore cache keeps the active data set locally on the indexer, and your search performance is not impacted. >> Yes, this problem of scaling compute and storage independently. You mentioned HDFS; you saw it early on there. The hyper-converged guys have been trying to solve this problem, and some of the database guys, like Snowflake, have solved it in the cloud. But if I understand correctly, you're doing this on-prem. >> So we're doing this both on-prem as well as in the cloud. The SmartStore feature is already available on-prem, we're already using it to host all of our Splunk Cloud deployments, and it's available for customers who want to deploy Splunk on AWS as well. >> Okay, where do you guys fit in? >> So we fit in with customers anywhere from, I'd say, hundreds of terabytes on the small side up into the tens and hundreds of petabytes, and that really just shows the pervasiveness of Splunk, both through the mid-market and all the way up through the enterprise, in every industry and every vertical. Where we come in relative to SmartStore is that we were a co-developer and a launch partner, and because our object offering, FlashBlade, is a high-performance object store, we are a little bit different from the rest of the Splunk SmartStore partner ecosystem, who have invested in slower, more archive-oriented modes of S3. We have always designed for, and are betting on, a future based on high-performance, large-scale object storage, and so we believe SmartStore is a perfect example, if you will, of a modern analytics platform. When you look at the architecture with SmartStore, as Bharath shared with you, you want to satisfy the majority of your queries out of cache, because of the performance difference between reading out of a cache that is, let's say, NAND-based or NVMe-based or Optane, if you will, versus a cache miss, where you have to go read the data out of the object store; that can be a significant performance trade-off. We significantly minimize that performance drop because you're going to a very high-bandwidth FlashBlade. We've done comparison tests against SmartStore search results that other vendors have published in their white papers, and when we run the same benchmark, FlashBlade is 80 times faster.
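As an illustration of the cache behavior described above, where the active data set stays local and everything else is fetched from the object store only when a search asks for it, here is a toy Python sketch of an LRU-style bucket cache. It is an analogy for the concept, not Splunk's actual cache manager; the class and method names (BucketCache, download) are invented for illustration.

    # Toy illustration of the SmartStore cache idea: keep recently searched
    # buckets on fast local storage, fetch misses from the remote object store,
    # and evict the least recently used bucket when the local cache is full.
    from collections import OrderedDict

    class BucketCache:
        def __init__(self, capacity, remote_store):
            self.capacity = capacity      # max number of buckets held locally
            self.remote = remote_store    # object store client (e.g. S3-compatible)
            self.local = OrderedDict()    # bucket_id -> data, kept in LRU order

        def get(self, bucket_id):
            if bucket_id in self.local:             # cache hit: serve from local flash
                self.local.move_to_end(bucket_id)
                return self.local[bucket_id]
            data = self.remote.download(bucket_id)  # cache miss: pull from object store
            self.local[bucket_id] = data
            if len(self.local) > self.capacity:     # evict the coldest bucket
                self.local.popitem(last=False)
            return data

The cost of a miss is set by the object store behind the cache, which is the point Vaughn is making: the faster the object tier, the smaller the penalty when a search has to reach back into older data.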
So what you now have with that architecture is confidence that, should you find yourself in a compliance or regulatory issue, maybe GDPR, where you've got 72 hours to notify everyone who's been impacted by a breach; or a cybersecurity case, where on average a penetration isn't discovered until 206 days after the event; or legal discovery, questions around customer purchases or credit card payments. Any time you've got to go back into the history, we're going to deliver those results an order of magnitude faster than any other object store in the market today. That can translate into hours, days, even weeks, and we think that falls to our advantage. >> Almost two orders of magnitude. >> With FlashBlade, at 80 times, yes, that's what I heard. >> Do you consider what FlashBlade is doing here an accelerant of Splunk workloads in customer environments? >> Definitely, because with the SmartStore cache we allow high performance at scale for data that resides locally in the cache, and now, by using a high-performance object store like FlashBlade, customers can expect the same high performance when data is in the cache as well as when it is in the remote store. >> Splunk's an interesting animal. Um, you had a point before we change subjects? >> Well, I don't want to cut you off. >> It's okay. >> So I would say that the performance is just part of the equation. Look at the common operational activities that a Splunk team, not a storage team but a Splunk team, has to incur: patch management, whether it's the Splunk software, the operating system, the Linux or Windows that Splunk is running on, or any of the other components of that platform; data rebalancing, because data isn't equally distributed; hardware refreshes; expansion of the cluster, maybe because you need more compute or storage. Those operations, in terms of time, are anywhere from 100 to 1,000 times faster on SmartStore than on the classic model. So a deployment where, for example, it used to take you two weeks to upgrade all the nodes gets done in four hours on SmartStore. That is material in terms of your operational costs. >> So I was going to say, we've been watching Splunk for a long time. This is our 10th year of doing theCUBE, not our 10th anniversary but our 10th year, and I think this will be our ninth year of doing .conf. We've seen Splunk emerge as a very cool company, with a hip vibe to it, like Pure. Back in the day we talked about big data, a term Splunk never really used widely in its marketing. But when we started to talk about who was going to own the big data space, we thought it would be Cloudera or MapR; then we came back and said it's going to be Splunk, and that's what's happened. Splunk has become a workload, a variety of workloads, that has now permeated the organization. It started with log files and security, kind of cumbersome, but now it's everywhere. So I wonder if you could talk to the sort of explosion of Splunk and its workloads and what kind of opportunity this provides for you guys. >> A very good question. What we have seen is that Splunk has become the de facto platform for all of your unstructured data as customers realize the value of putting their data into Splunk. One of the huge differentiators of Splunk is schema-on-read, which allows you to put in all of the data without any structure and ask questions on the fly. That lets you do investigations in real time and be proactive rather than reactive.
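For anyone unfamiliar with schema-on-read, here is a rough Python analogy of the idea Bharath describes: raw events are stored as-is, and fields are extracted by the query at search time rather than fixed by a schema at ingest. The sample events, regex, and field names are invented for illustration; Splunk's actual search-time extraction is far richer.

    # Schema-on-read in miniature: store raw log lines untouched, then extract
    # fields with a pattern supplied at search time. A new question just means
    # a new pattern, with no re-ingestion or schema migration required.
    import re

    raw_events = [
        "2019-09-17 10:01:02 user=alice action=login status=success",
        "2019-09-17 10:01:05 user=bob action=login status=failure",
        "2019-09-17 10:02:11 user=alice action=purchase amount=42.50",
    ]

    def search(events, pattern, **filters):
        """Extract key=value fields on the fly and keep events matching all filters."""
        results = []
        for line in events:
            fields = dict(re.findall(pattern, line))
            if all(fields.get(k) == v for k, v in filters.items()):
                results.append(fields)
        return results

    # Ask a question nobody anticipated at ingest time:
    failed_logins = search(raw_events, r"(\w+)=([\w.]+)", action="login", status="failure")
    print(failed_logins)  # [{'user': 'bob', 'action': 'login', 'status': 'failure'}]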
Splunk is also a scalable platform that handles large data volumes, and a highly available platform, and all of that is why you're seeing increased adoption. We see the same thing across customers: they start off with one data source and one use case, and very soon they realize the power of Splunk and start to add additional use cases and ingest more and more data sources. >> But this no-schema-on-write, or what you call schema-on-read, has been so problematic for so many big data practitioners because it just became the data swamp. That didn't happen with Splunk. Was that because you had very defined use cases, obviously security being one, or were there architectural considerations as well? >> There were architectural considerations, and security and IT were the initial use cases, but also the fact that schema-on-read gives you open-ended possibilities. Because there's no structure imposed on the data, you can ask questions on the fly, and you can use that to investigate, troubleshoot, analyze, and take remedial action on what's happening. And now, with our new acquisitions, we have added capabilities to orchestrate the whole end-to-end flow with Phantom, so a lot of these acquisitions are also helping enable the market. >> So we've been talking about TAM expansion all week; we definitely hit it with Charlie pretty hard, and I think it's a really important topic. One of the things we haven't hit on is TAM expansion through partnerships and that flywheel effect. So how do you see the partnership with Splunk supporting that TAM expansion over the next 10 years? >> So analytics, particularly log analytics, have really taken off for us in the last year as we put more focus on it, and we want to double down on our investments through the end of this year and into next year, with a focus on Splunk as well as our other alliances. We think we are in a unique position because of the rollout of SmartStore. Customers are all on different timelines in terms of when they want to adopt a new architecture; it is a significant decision they have to make. So we believe the combination of FlashArray for the hot tier and FlashBlade for the cold tier is a nice way for customers on the classic Splunk architecture to modernize their platform: leverage the benefits of data reduction to drive down some of the cost, and leverage the benefits of flash to increase the rate at which they can ask questions and get answers. It's a nice stepping stone. And when customers are ready, because FlashBlade is one of the few storage platforms in the market that is scale-out, bandwidth-optimized, and supports both NFS and object, they can go through a rolling, non-disruptive upgrade to SmartStore and have, you know, investment protection. And if they can't repurpose that FlashArray, they can use Pure as-a-Service to have FlashArray as the hot tier today and hand it back when they're done with it tomorrow. >> And what about FlashArray//C for, you know, big data workloads? Is that a good fit here, or do you really need to be more performance-oriented? >> So FlashBlade is bandwidth-optimized, which is really designed for a workload like Splunk, where, when you have to do a sparse search, that find-the-needle-in-the-haystack question (were you breached, where were you breached, how were you breached), you go read as much data as possible.
You've got to get all of that data back to the search as fast as you can. FlashArray//C, by contrast, is really optimized as a tier-two form of NAND for secondary workloads, maybe a transactional database or virtual machines. >> All right, one more and then I'm going to shut up. The SignalFx acquisition was very interesting to me for a lot of reasons. One was the cloud: the SaaS portion of Splunk was late to that game, but now you're sort of making that transition. You saw Tableau, you saw Adobe rip the Band-Aid off, and it was somewhat painful, but Splunk is doing it. So I wonder, any advice that you at Splunk would have for Vaughn and Pure as they make that transition to the SaaS model? >> I think it's definitely going to be a challenging one, but it's a much-needed one in the environment we are in. The key thing is to always be customer-focused, and I'm sure you're already customer-focused, but the key thing is to make sure your service is up all the time, that you can provide that uptime, which is going to be crucial for meeting your customers' SLAs. >> That's good. That's good guidance. >> He just wanted to cover that for you, to keep you up to date. >> So you gave us some really impressive stats in terms of performance. >> They're almost too good to be true. >> Well, what's the customer feedback? Let's talk about the real world: when you're talking to customers about those numbers, what's the reaction? >> So I don't want to speak for Bharath, but I will say, in our engagements within their customer base, what we hear, particularly from customers at scale, is that the larger the environment, the more aggressively they say they will adopt SmartStore, and on a more aggressive timeline than the smaller environments. And it's because the burdens of operating and maintaining the indexer cluster are so great that they'll actually turn to the storage team and say: this is the new architecture I want, this is the new storage platform. Again, when we're talking about patch management, cluster expansion, and hardware refresh, for large installs you're talking weeks, not two or three but ten weeks, twelve weeks on end, and you can reduce that down to a couple of days. It changes your operational paradigm and your staffing, so it has a very high impact. >> And one of the messages we're hearing from customers is that they get a significant reduction in infrastructure spend; it almost drops by two-thirds. That's really significant: some of our large customers spend a ton of money on infrastructure, so dropping that by two-thirds is a significant driver to move to SmartStore, in addition to all the other benefits you get with SmartStore, the operational simplicity and the capabilities it provides. >> You also have customers who, because of SmartStore, can now actually burst on demand. So you can think of this in kind of two paradigms. Instead of trying to avoid the operational pain by pre-purchasing and pre-provisioning a large infrastructure and hoping you fill it up, they can right-size and grow in increments on demand, whether it's storage or compute. That's something that's net new with SmartStore. They can also, if a significant event occurs, fire up additional indexer nodes and search clusters that can be bare metal, VMs, or containers.
Right, and try to, you know, push the flash to its max. Once they've found the answers they need and gotten through whatever the urgent issue is, they just deprovision those assets on demand and return back down to a steady state. So it's a very flexible, you know, kind of cloud-native, agile platform. >> Awesome, guys. I wish we had more time, but thank you so much, Vaughn and Bharath, for joining Dave and me on theCUBE today and sharing all of the innovation that continues to come from this partnership. >> Great to see you. Appreciate it. >> For Dave Vellante, I'm Lisa Martin, and you're watching theCUBE.