Mai Lan Tomsen Bukovec, AWS | theCUBE on Cloud 2021
>>From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. >>We continue with theCUBE on Cloud. We're here with Mai Lan Tomsen Bukovec, who's the vice president of block and object storage at AWS, which comprises Elastic Block Store, AWS S3, and Amazon Glacier. Mai Lan, great to see you again. Thanks so much for coming on the program. >>Nice to be here. Thanks for having me, Dave. >>You're very welcome. So here we are, we're unpacking the future of cloud, and we'd love to get your perspectives on how customers should think about the future of infrastructure, things like applying machine intelligence to their data. But just to set the stage, when we look back at the history of storage in the cloud, it obviously started with S3, and then a couple of years later EBS was introduced for block storage, and those are the most well-known services in the portfolio. But there's more: there's cold storage, and new capabilities that you announced recently at re:Invent around, you know, super-duper block storage, and intelligent tiering is another example. But it looks like AWS is really starting to accelerate and pick up the pace of customer options in storage. So my first question is, how should we think about this expanding portfolio? >>Well, I think you have to go all the way back to what customers are trying to do with their data, Dave. The path to innovation is paved by data. If you don't have data, you don't have machine learning; you don't have the next generation of analytics applications that helps you chart a path forward into a world that seems to be changing every week. And so in order to have that insight, in order to have that predictive forecasting that every company needs, regardless of what industry you're in today, it all starts from data. And I think the key shift that I've seen is how customers are thinking about that data as being instantly usable, whereas in the past it might have been a backup.
Now it's part of a data lake, and if you can bring that data into a data lake, you can have not just analytics or machine learning or auditing applications; it's really, what does your application do for your business, and how can it take advantage of that vast amount of shared data in your business? >>Awesome. So thank you. So I want to make sure we're hitting on the big trends that you're seeing in the market that are kind of informing your strategy around the portfolio and what you're seeing with customers. Instant usability; you bring machine learning into the equation. I think people have really started to understand the benefits of cloud storage as a service and the pay-by-the-drink model, and obviously COVID has accelerated that; cloud migration has accelerated. Anything we're missing there? What are the other big trends that you see, if any? >>Well, Dave, you did a good job of capturing a lot of the drivers. The one thing I would say that just sits underneath all of it is the massive growth of digital data year over year. IDC says digital data is growing at a rate of 40% year over year, and that has been true for a while, and it's not going to stop. It's going to keep on growing, because the sources of that data acquisition keep on expanding, whether it's IoT devices or whether it's content created by users. That data is going to grow, and everything you're talking about depends on the ability to not just capture it and store it but, as you say, use it. >>Well, you know, we talk about data growth a lot, and sometimes it becomes a bromide. But I think the interesting thing that I've observed over the last couple of decades is that the growth is nonlinear; the curve is starting to shape exponentially. You guys always talk about that flywheel effect. It's really hard to believe; you know, people say trees don't grow to the moon, but it seems like data does. >>It does.
And what's interesting about working in the world of AWS storage, Dave, is that it's counterintuitive, but our goal with that data growth is to make it cost effective. And so, year over year, how can we make it cheaper and cheaper to have customers store more and more data so they can use it? But it's also to think about the definition of usage, and what kind of data is being tapped by businesses for their insights, and make that easier than it's ever been before. >>Let me ask you a follow-up question on that, Mai Lan. I get asked this a lot, or I hear comments a lot, that yes, AWS continuously and rigorously reduces pricing, but it's just kind of following the natural curve of Moore's law, or, you know, whatever. How do you respond to that? And there are other factors involved; obviously, labor is another cost-reducing factor. But what does the trend line say? >>Well, cost efficiency is in our DNA, Dave. We come to work every day at AWS, across all of our services, and we ask ourselves, how can we lower our costs and be able to pass that along to customers? As you say, there are many different aspects to cost. There's the cost of the storage itself; there's the cost of the data center. And that's really what we've seen impact a lot of customers that were slower or just getting started with the move to the cloud as they entered 2020, and then they found out exactly how expensive that data center was to maintain, because they had to put in safety equipment and do all the things that you have to do in a pandemic in a data center. And so sometimes that cost is a little bit hidden, or won't show up until you really don't need to have it. But the cost of managing that explosive growth of data is very real. And when we're thinking about cost, we're thinking about cost in terms of how can I lower it on a per-gigabyte-per-month basis?
But we're also building into the product itself adaptive discounts. We have a storage class in S3 that's called Intelligent-Tiering, and in Intelligent-Tiering we have built-in monitoring where, if particular objects aren't frequently accessed in a given month, a customer will automatically get a discounted price for that storage. Or a customer can, as of late last year, say that they want to automatically move storage that has been stored, for example, longer than 180 days, and save 95% by moving it into archive storage, deep archive storage. And so it's not just, you know, relentlessly going after and lowering the cost of storage; it's also building into the products these new ways where we can adaptively discount storage based on what a customer's storage is actually doing. >>Well, and I would add for our audience, the other thing AWS has done is it's really forced transparency, almost the same way that Amazon has done in retail. Now, Mai Lan, when we talked last, I mentioned that S3 was an object store, and of course that's technically correct, but your comment to me was, Dave, it's more than that. And you started to talk about SageMaker and AI and bringing in machine learning. And I wonder if you could talk a little bit about the future of how storage is going to be leveraged in the cloud that's maybe different than what we've been used to in the early days of S3, and how your customers should be thinking about infrastructure not as bespoke services but as a suite of capabilities, and maybe some of those adjacent services that you see as most leverageable for customers, and why. >>Well, to tell this story, Dave, we're going to have to go a little bit back in time, all the way back to the 1990s.
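The two cost levers described here, Intelligent-Tiering's access monitoring and an age-based move into deep archive, can be sketched as the request payloads the S3 API actually takes. This is an illustrative sketch only: the rule IDs and the 180-day/whole-bucket choices are assumptions for the example, though the dictionary shapes match what boto3's `put_bucket_intelligent_tiering_configuration` and `put_bucket_lifecycle_configuration` expect.

```python
# 1. Intelligent-Tiering configuration: S3 monitors access patterns and, for
#    objects untouched for 180 days, moves them to the Deep Archive Access tier.
intelligent_tiering_config = {
    "Id": "archive-cold-objects",  # illustrative ID
    "Status": "Enabled",
    "Tierings": [
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

# 2. Lifecycle rule: anything stored longer than 180 days transitions to the
#    S3 Glacier Deep Archive storage class, regardless of access pattern.
lifecycle_config = {
    "Rules": [
        {
            "ID": "deep-archive-after-180-days",  # illustrative ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to the whole bucket
            "Transitions": [{"Days": 180, "StorageClass": "DEEP_ARCHIVE"}],
        }
    ]
}
```

In practice these dictionaries would be passed to an S3 client along with a bucket name; the point of the sketch is that the discounting behavior is declared once and then applied automatically by the service.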
Or before then, when all you had was a set of hardware appliance vendors that sold you appliances that you put in your data center and inherently created a data silo, because those hardware appliances were hardwired to your application. And so an individual application that was dealing with auditing, as an example, wouldn't really be able to access the storage for another application, because, you know, the architecture of that legacy world is tied to a data silo. And S3 came out, launched in 2006, and introduced very low-cost storage that is an object. And I'll tell you, Dave, over the last 10-plus years we have seen all kinds of data come into S3. Whereas before it might have been backups or images and videos, now a pretty substantial data set is Parquet files and ORC files. These files are there for business analytics, for more real-time types of processing. And that has really been the trend of the future: taking these different files and putting them in a shared file layer, so any application today or in the future can tap into that data. And so this idea of the shared file layer is a major trend that has been taking off for the last, I would say, five or six years, and I expect that to not only keep on going but to really open up the type of services that you can then do on that shared file layer. And whether that's SageMaker or some of the machine learning introduced by our Connect service, it's bringing together the data as a starting point, and then the applications can evolve very rapidly on top of that. >>I want to ask your opinion about big data architectures. One of our guests, Zhamak Dehghani, an amazing data architect, has put forth this notion of a distributed global mesh, and I'm picking up on some of the comments Andy Jassy made at re:Invent: how, essentially, hey, we're bringing AWS to the edge, and we see the data center as just another edge node.
You're seeing this massive distributed system evolving. You guys have talked about that for a while, and data by its very nature is distributed, but we've had this tendency to put it into a monolithic data lake or a data warehouse, and that's sort of antithetical to that distributed nature. So how do you see that playing out? What do you see customers in the future doing in terms of their big data architectures, and what does that mean for storage? >>It comes down to the nature of the data and, again, the usage, Dave. That's where I see the biggest difference in these modern data architectures from the legacy of 20 years ago: it's the idea that the data need drives the data storage. So let's take an example of the type of data that you always want to have on the edge. We have customers today that need to have storage in the field, and whether the field is scientific research, or oftentimes it's content creation in the film industry, or it's for military operations, there's a lot of data that needs to be captured and analyzed in the field. And for us, what that means is that, you know, we have a suite of products called Snowball, and whether it's Snowball or Snowcone, take your pick, that whole portfolio of AWS services is targeted at customers that need to do work with storage at the edge. And, you know, if you think about the need for multiple applications acting on the same data set, that's when you keep it in an AWS Region. And what we've done in AWS storage is we've recognized that, depending on the need of usage, where you put your data and how you interact with it may vary. But we've built a whole set of services, like data transfer, to help make sure that we can connect data from, for example, that new Snowcone into a Region automatically. And so our goal, Dave, is to make sure that when customers are operating at the edge, or they're operating in the Region, they have the same quality of storage service, and they have easy ways to go between them.
You shouldn't have to pick; you should be able to do it all. >>So in the spirit of doing it all, there's this age-old dynamic in the tech business where you've got friction between best-of-breed and the integrated suite, and my question is around what you're optimizing for for customers. And can you have your cake and eat it too? In other words, what makes AWS storage compelling? Is it because it's a best-of-breed storage service, or is it because it's integrated with AWS? Would you ever suboptimize one in order to get an advantage for the other, or can you actually have your cake and eat it too? >>The way that we build storage is to focus on both the breadth of capabilities and the depth of capabilities. And so where we identify a particular need where we think it takes a whole new service to deliver, we'll go build that service. An example of that is our AWS SFTP service; you know, there's a lot of SFTP usage out there, and there will be for a while, because of the legacy B2B types of architectures that still live in the business world today. And so we looked at that problem, we said, how are we going to build that in the best, most focused way, and we launched a separate service for it. And so our goal is to take the individual building blocks of EBS and Glacier and S3 and make each best in class and the most comprehensive in its capabilities, and where we identify a very specific need, we'll go build a service for it. But, Dave, as an example of that idea of both depth and breadth, S3 Storage Lens is a great example. S3 Storage Lens is a new capability that we launched last year.
And what it does is it lets you look across all your Regions and all your accounts and get a summary view of all your S3 storage, whether that's buckets or, you know, the most active prefixes that you have, and be able to drill down from that. And that is built into the S3 service and available for any customer that wants to turn it on in the AWS Management Console. >>Right. And we saw just recently you made, I called it super-duper block storage, but you made some improvements really addressing the highest performance. I want to ask you, so we've all learned about and experienced the benefits of cloud over the last several years, and especially in the last 10 months during the pandemic. But one of the challenges, and it's particularly acute with I/O, is of course latency, and moving data around and accessing data remotely. It's a challenge for customers, you know, due to the speed of light, etcetera. So my question is, how is AWS thinking about all that data that still resides on premises? I think we heard at re:Invent that's still 90% of the opportunity, or the workloads; they're still on prem, they live inside a customer's data center. So how do you tap into those and help customers innovate with on-prem data, particularly from a storage angle? >>Well, we always want to provide the best-in-class solution for those low-latency workloads, and that's why we launched Block Express just late last year at re:Invent. Block Express is a new capability, in preview, on top of our io2 provisioned IOPS volume type. And what's really interesting about Block Express, Dave, is that the way we're able to deliver the performance of Block Express, which is SAN performance with cloud elasticity, is that we went all the way down to the network layer and we customized the hardware and software. And at the network layer, we built Block Express on something called SRD, which stands for scalable reliable datagrams.
And basically, what that lets us do is offload all of our EBS operations for Block Express onto the Nitro card, onto hardware. And so that type of innovation, where we're able to, you know, take advantage of modern commodity multi-tenant data center networks, where we're sending this new network protocol across a large number of network paths, that type of innovation, all the way down to the protocol level, helps us innovate in a way that's hard, in fact I would say impossible, for other SAN providers to really catch up and keep up with. And so we feel that the amount of innovation we have for delivering those low-latency workloads in AWS cloud storage is unlimited, really, because of that ability to customize software, hardware, and network protocols as we go along, without requiring upgrades from a customer. It just gets better, and the customer benefits. Now, if you want to stay in your data center, that's why we built Outposts. And for Outposts, we have EBS and we have S3 for Outposts, and our goal there is that some customers will have workloads where they want to keep them resident in the data center, and for those customers, we want to give them those AWS storage opportunities as well. >>So thank you for coming back to Block Express. So you call it a SAN in the cloud; is it essentially that you've composed a custom-built storage network? Is that right, what you just described, SRD, I think you called it? >>Yeah, SRD is used by other AWS services as well, but it is a custom network protocol that we designed to deliver the lowest-latency experience, and we're taking advantage of it with Block Express. >>Sticking with traditional data centers for a moment, I'm interested in your thoughts on the importance of the cloud pricing approach, i.e. the consumption model, the pay-by-the-drink.
Obviously, it's one of the most attractive features. And I ask that because we're seeing what Andy Jassy refers to as the old guard institute flexible pricing models. Two of the biggest storage companies, HPE with GreenLake and Dell with this thing called Apex, have announced such models for on-prem and, presumably, cross-cloud. How do you think this is going to impact your customers' leverage of AWS cloud storage? Is it something you have an opinion on? >>Yeah, I think it all comes down, again, to that usage of the storage, and this is where I think there is an inherent advantage for our cloud storage. So there might be an attempt by the old guard to lower prices or add flexibility, but at the end of the day, it comes down to what the customer actually needs to do. And if you think about gp3, which is the new EBS volume, the idea with gp3 is we're going to pass along savings to the customer by making the storage 20% cheaper than gp2, and we're going to make the product better by giving a great, reliable baseline performance. But we're also going to let customers who want to run workloads like Cassandra on EBS tune their throughput separately, for example, from their capacity. So if you're running Cassandra, sometimes you don't need to change your capacity; your storage capacity works just fine. But what happens with, for example, a Cassandra workload is that you may need more throughput. And if you're buying a hardware appliance, you just have to buy for your peak; you have to buy for the max of what you think your throughput and your storage will be. And this inherent flexibility that we have in AWS storage, being able to tune throughput separately from IOPS, separately from capacity, like you do for gp3, that is really where the future is for customers: having control over costs and control over customer experience without compromising or trading off either one. >>Awesome. Thank you for that.
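The gp3 flexibility described above, capacity, IOPS, and throughput as independent dials, shows up directly as separate parameters in EC2's CreateVolume API. Here is a minimal sketch, assuming illustrative values; the helper function and the specific numbers are inventions for the example, though `VolumeType`, `Size`, `Iops`, and `Throughput` are the real CreateVolume parameter names.

```python
def gp3_volume_request(size_gib, iops, throughput_mibps, az):
    """Build CreateVolume kwargs where each performance dimension is set separately."""
    return {
        "VolumeType": "gp3",
        "Size": size_gib,                # storage capacity in GiB
        "Iops": iops,                    # tuned independently of capacity
        "Throughput": throughput_mibps,  # MiB/s, tuned independently of IOPS
        "AvailabilityZone": az,
    }

# A Cassandra-style adjustment: capacity and IOPS stay flat, only throughput rises.
baseline = gp3_volume_request(1000, 3000, 125, "us-east-1a")
hot_path = gp3_volume_request(1000, 3000, 500, "us-east-1a")
```

The point of the sketch: unlike sizing a hardware appliance for peak, the second request changes only the one dimension the workload actually needs, and in practice the dict would be passed to an EC2 client's `create_volume` call.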
So in the time we have remaining, Mai Lan, I want to talk about the topic of diversity and social impact. And as a woman leader, a woman executive, I really want to get your perspectives on this, and I've shared with the audience previously, in one of my Breaking Analysis segments, your boxing video, which is awesome. So you've got a lot of unique, non-traditional aspects to your life, and I love it. But I want to ask you this. It's obviously, you know, certainly politically and socially correct to talk about diversity, the importance of diversity. There's data that suggests that diversity is good, both economically, not just socially, and of course it's the right thing to do. But there are those (Peter Thiel is probably the most prominent, but there are others) who say, you know what, forget that, just hire people just like you; you'll be able to go faster, ramp up more quickly, hit escape velocity. It's natural, and that's what you should do. Why is that not the right approach? Why is diversity, of course, socially responsible, but also good for business? >>For Amazon, we think about diversity as something that is essential to how we think about innovation. And so, Dave, as you know from listening to some of the announcements at re:Invent, we launch a lot of new ideas, new concepts, and new services in AWS. And just bringing that lens down to storage: S3 has been reinventing itself every year since we launched in 2006. EBS introduced the first SAN in the cloud late last year and continues to reinvent how customers think about block storage. We would not be able to look at a product in a different way and think to ourselves, not just what does the legacy system do in a data center today, but how do we want to build this new distributed system in a way that helps customers achieve not just what they're doing today, but what they want to do in five and ten years?
You can't get that innovative mindset without bringing different perspectives to the table. And so we strongly believe in hiring people who are from underrepresented groups, whether that's gender, or it's related to racial equality, or it's geographic diversity, and bringing them in to have the conversation, because those diverse viewpoints inform how we can innovate at all levels in AWS. >>Right. And so I really appreciate the perspectives on that, and, as you probably know, theCUBE has been a very big advocate of diversity, you know, generally, but women in tech specifically; we've participated a lot. And, you know, I often ask this question because, as a smaller company, I and some of my other colleagues in small business sometimes struggle with it. And so my question is, what's your advice for going beyond, you know, the good-old-boys network? I think at large companies like AWS and the big players, you've got a responsibility and you can put somebody in charge and make it, you know, their full-time job. How should smaller companies that are largely white-male dominated become more diverse? What should they do to increase that diversity? >>Well, I think the place to start is voice. A lot of what we try to do is make sure that the underrepresented voice is heard. And so, Dave, any small business owner in any industry can encourage voice for your underrepresented or unheard populations. And honestly, it is as simple as being in a meeting and looking around that table, or on your screen as it were, and asking yourself who hasn't talked, who hasn't weighed in, particularly if the debate is contentious or even animated. And you will see, particularly if you note this over time, that there may be somebody, and whether it's someone from an underrepresented group, or it's a woman who's early in career, or it's not,
it's just a member of your team who happens to be a white male, who's not being heard. And you can ask that person for their perspective. And that is a step that every one of us can and should take, which is to ask to have everyone's voice at the table, to listen, and to weigh in on it. So I think that is something everyone should do. I think if you are a member of an underrepresented group (as, for example, I'm Vietnamese American and I'm a female in tech), it's something to think about: how you can make sure that you're always taking that bold step forward. And it's one of the topics that we covered at re:Invent; we had a great discussion with a group of women CEOs, and a lot of what we talked about is being bold, taking the challenge of being bold in tough situations, and that is an important thing, I think, for anybody to keep in mind, but especially for members of underrepresented groups. Because sometimes, Dave, that bold step that you kind of think of, like, oh, I don't know if I should ask for that promotion, or I don't know if I should volunteer for that project, it's not a big ask, but it's big in your head. And so if you can internalize, as a member of some, you know, group that maybe hasn't been heard or seen as much, how you can take those bold challenges and step forward and learn, maybe fail also, because that's how you learn, then that is a way to also have people learn and develop and become leaders in whatever industry it is.
>>It's great advice, and it reminds me, I think most of us can relate to that, Mai Lan, because when we started in the industry, we may have been timid; you didn't want to necessarily speak up. And I think it's incumbent upon those in a position of power, and by the way, power might just be running a meeting agenda, to maybe call on those folks. And maybe it's not diversity of gender or race; maybe it's just the underrepresented. Maybe that's a good way to start building muscle memory. So that's unique advice that I hadn't heard before. So thank you very much for that, appreciate it. And hey, listen, thanks so much for coming on theCUBE on Cloud. We're out of time, and I really always appreciate your perspectives, and you're doing a great job, and thank you. >>Great. Thank you, Dave. Thanks for having me, and have a great day. >>All right, and keep it right there, everybody. You're watching theCUBE on Cloud. Right back.
IO TAHOE EPISODE 4 DATA GOVERNANCE V2
>>From around the globe, it's theCUBE, presenting Adaptive Data Governance, brought to you by Io-Tahoe. >>And we're back with the data automation series. In this episode, we're going to learn more about what Io-Tahoe is doing in the field of adaptive data governance, how it can help achieve business outcomes and mitigate data security risks. I'm Lisa Martin, and I'm joined by Ajay Vohora, the CEO of Io-Tahoe, and Lester Waters, the CTO of Io-Tahoe. Gentlemen, it's great to have you on the program. >>Thank you. Lisa, it's good to be back. >>Great. >>Likewise, very socially distant, of course, as we are. Lester, we're going to start with you. What's going on at Io-Tahoe? What's new? >>Well, I've been with Io-Tahoe for a little over a year, and one thing I've learned is every customer's needs are just a bit different. So we've been working on our next major release of the Io-Tahoe product, really to try to address these customer concerns, because, you know, we want to be flexible enough in order to come in and not just profile the data, and not just understand data quality and lineage, but also to address the unique needs of each and every customer that we have. And so that required a platform rewrite of our product, so that we could extend the product without building a new version of the product; we wanted to be able to have pluggable modules. We also focused a lot on performance. That's very important with the bulk of data that we deal with: we're able to pass through that data in a single pass and do the analytics that are needed, whether it's lineage, data quality, or just identifying the underlying data. And we're incorporating all that we've learned. We're tuning up our machine learning; we're analyzing on more dimensions than we've ever done before; we're able to do data quality without doing an initial regex, for example, just out of the box.
So I think it's all of these things coming together to form our next version of our product. We're really excited by it. >>So it's exciting. Ajay, from the CEO's level, what's going on? >>Wow, I think, just building on what Lester just mentioned there, we're growing pretty quickly with our partners, and today, here with Oracle, we're excited to explain how that's shaping up. There's lots of collaboration already with Oracle in government, in insurance, and in banking, and we're excited because we get to have an impact. It's really satisfying to see how we're able to help businesses transform, redefine what's possible with their data, and having Oracle there as a partner to lean in with is definitely helping. >>Excellent. We're going to dig into that a little bit later. Lester, let's go back over to you. Explain adaptive data governance; help us understand that. >>Really, adaptive data governance is about achieving business outcomes through automation. It's really also about establishing a data-driven culture and pushing what's traditionally managed in IT out to the business. And to do that, you've got to enable an environment where people can actually access and look at the information about the data, not necessarily access the underlying data itself, because we've got privacy concerns there. But they need to understand what kind of data they have, what shape it's in, and what's dependent on it upstream and downstream, so that they can make educated decisions on what they need to do to achieve those business outcomes. A lot of frameworks these days are hardwired, so you can set up a set of business rules, and that set of business rules works for a very specific database and a specific schema.
But imagine a world where you could just say, you know, the start date of a loan must always be before the end date of a loan, and have that as a generic rule, regardless of the underlying database, and have it applied even when a new database comes online. That's what adaptive data governance is about. I like to think of it as the intersection of three circles, really: the technical metadata, coming together with policies and rules, coming together with the business ontologies that are unique to that particular business. Bringing all of this together allows you to enable rapid change in your environment. So it's a mouthful, adaptive data governance, but that's what it comes down to. >>So, Ajay, help me understand this. Is this something enterprise companies are doing now, or are they not quite there yet? >>Well, you know, Lisa, I think every organization is going at its own pace. But markets are changing, and the speed at which some of the changes in the economy are happening is compelling more businesses to look at being more digital in how they serve their own customers. So what we're seeing is a number of trends here from heads of data and chief data officers: stepping back from a one-size-fits-all approach, because they've tried that before and it just hasn't worked. They've spent millions of dollars on IT programs trying to drive value from their data, and they've ended up with large teams doing manual processing around data to try and hardwire these policies to fit the context of each line of business, and that hasn't worked. So the trends that we're seeing emerge really relate to: how do I, as a chief data officer or CDO, inject more automation into a lot of these common tasks? And we've been able to see that impact. I think the news here is, if you're trying to create a knowledge graph, a data catalog, or a business glossary.
And if you're trying to do that manually, well, stop: you don't have to do that manually anymore. The best example I can give is that Lester and I both like Chinese food and Japanese food, and if you were sitting there with your chopsticks, you wouldn't eat a bowl of rice with the chopsticks one grain at a time. What you'd want to do is find a more productive way to enjoy that meal before it gets cold. And that's similar to how we're able to help organizations digest their data: get through it faster and enjoy the benefits of putting that data to work. >>And if it was me eating that food with you guys, I would not be using chopsticks; I would be using a fork and probably a spoon. So, Lester, how does Io-Tahoe go about doing this and enabling customers to achieve it? >>Let me show you a little story we have here. If you take a look at the challenges, most customers' challenges are very similar, but every customer is on a different data journey. It all starts with: what data do I have? What shape is that data in? How is it structured? What's dependent on it upstream and downstream? What insights can I derive from that data? And how can I answer all of those questions automatically? If you look at the challenges for these data professionals, they're either on a journey to the cloud, maybe they're doing a migration to Oracle, or maybe they're making some data governance changes, and it's all about enabling this. So with those challenges in mind, I'm going to take you through a story here. I want to introduce Amanda. Amanda is not unlike anyone in any large organization. She's looking around, and she just sees stacks of data: different databases, the ones she knows about, the ones she doesn't know about but should know about, various different kinds of databases. And Amanda is tasked with understanding all of this so that she can embark on her data journey program.
So Amanda goes through it, and at first she thinks: great, I've got some handy tools, I can start looking at these databases and getting an idea of what we've got. Well, as she digs into the databases, she starts to see that not everything is as clear as she might have hoped. Columns have ambiguous names like attribute1 and attribute2, or maybe date1 and date2. So Amanda is starting to struggle: even though she's got tools to visualize and look at these databases, she knows she's got a long road ahead. And with 2,000 databases in her large enterprise, yes, it's going to be a long journey. But Amanda's smart, so she pulls out her trusty spreadsheet to track all of her findings, and for what she doesn't know about, she raises a ticket or maybe tries to track down the owner to find out what the data means, and she tracks all of that information too. Clearly, this doesn't scale that well for Amanda. So maybe the organization will get ten Amandas to sort of divide and conquer that work, but even that doesn't work that well, because there are still ambiguities in the data. With Io-Tahoe, what we do is actually profile the underlying data. By looking at the underlying data, we can quickly see that attribute1 looks very much like a US Social Security number and attribute2 looks like an ICD-10 medical code. We do this by using ontologies and dictionaries and algorithms to help identify the underlying data and then tag it. Key to doing this automation is being able to normalize things across different databases, so that where there are differences in column names, I know that in fact they contain the same data. And by going through this exercise with Io-Tahoe, not only can we identify the data, but we also gain insights about the data.
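As a rough illustration of the kind of pattern-based profiling Lester describes, a profiler might score each column's values against known patterns and propose a semantic tag when the match rate is high enough. This is a simplified sketch, not Io-Tahoe's implementation; real classification would lean on much richer dictionaries and ontologies, and the two patterns here are deliberately minimal.

```python
import re

# Illustrative patterns only; a production profiler would use far richer
# dictionaries and ontologies than two regular expressions.
PATTERNS = {
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "icd10_code": re.compile(r"^[A-TV-Z][0-9][0-9AB](\.[0-9A-TV-Z]{1,4})?$"),
}

def profile_column(values):
    """Return the share of non-null values matching each known pattern."""
    non_null = [v for v in values if v]
    scores = {}
    for tag, pattern in PATTERNS.items():
        hits = sum(1 for v in non_null if pattern.match(str(v)))
        scores[tag] = hits / len(non_null) if non_null else 0.0
    return scores

def suggest_tag(values, threshold=0.9):
    """Propose a tag for the column when its best match rate clears the bar."""
    scores = profile_column(values)
    best_tag, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_tag if best_score >= threshold else None
```

Because the scorer returns a ratio rather than a yes/no answer, a column that matches a pattern, say, 97% of the time can be tagged while the remaining values are surfaced as possible data quality issues.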
So, for example, we can see that 97% of the time, that column named attribute1 that's got US Social Security numbers has something that looks like a Social Security number, but 3% of the time it doesn't quite look right: maybe there's a dash missing, maybe a digit dropped, or maybe there are even characters embedded in it. That may be indicative of a data quality issue, so we try to find those kinds of things. Going a step further, we also try to identify data quality relationships. For example, we have two columns, date1 and date2. Through observation, we can see that date1 is less than date2 99% of the time; 1% of the time it's not, which is probably indicative of a data quality issue. Going a step further still, we can build a business rule that says date1 must be less than date2, and then when the problem pops up again, we can quickly identify and remediate it. So these are the kinds of things that we can do with Io-Tahoe. And going even a step further, you could take your favorite data science solution, productionize it, and incorporate it into our next version as what we call a worker process, to do your own bespoke analytics. >>Bespoke analytics, excellent. Lester, thank you. So, Ajay, talk us through some examples of where you're putting this to use, and also, what is some of the feedback from customers? >>I think it will help bring this to life a little bit, Lisa, to talk through a case study we pulled together; I know it's available for download. In a well-known telecommunications and media company, they had a lot of the issues that Lester just spoke about: lots of teams of Amandas, super-bright data practitioners, who were looking to get more productivity out of their day and deliver a good result for their own customers, cell phone subscribers and broadband users.
So some of the examples that we can see here are how we went about auto-generating a lot of that understanding of that data within hours. Amanda had her data catalog populated automatically and a business glossary built up on it, and she could really then start to see: okay, where do I want to apply some policies to the data, to set in place some controls? Where do we want to adapt how different lines of business, maybe tax versus customer operations, have different access or permissions to that data? What we've been able to do there is build up that picture to see how data moves across the entire organization, across the estate, and monitor that over time for improvement. So we've taken it from being reactive, let's do something to fix something, to now being more proactive: we can see what's happening with our data, who's using it, who's accessing it, how it's being used, and how it's being combined. And from there, taking a proactive approach is a really smart use of the talents in that telco organization and of the folks who work there with data. >>Okay, Ajay, dig into that a little bit deeper. One of the things I was thinking about when you were talking through some of those outcomes that you're helping customers achieve is ROI. How do customers measure ROI? What are they seeing with Io-Tahoe's solution? >>Yeah, right now the big-ticket item is time to value. And in data, a lot of the upfront investment has been quite expensive to date, with a lot of the larger vendors and technologies. So what a CDO and an economic buyer really need to be certain of is: how quickly can I get that ROI? I think we've got something we can show, just pulling up a before and after, and it really comes down to hours, days, and weeks where we've been able to have that impact, and in this playbook that we pulled together, the before-and-after picture really shows.
You know, those savings that come through providing data in some actionable form within hours and days, to drive agility, but at the same time being able to enforce the controls that protect the use of that data and who has access to it. So that's the number one thing I'd have to say: it's time. And we can see that in the graphic that we've just pulled up here. >>We talk about achieving adaptive data governance, and Lester, you guys talk about automation; you talk about machine learning. How are you seeing those technologies being a facilitator of organizations adopting adaptive data governance? >>Well, as we see it, the days of pure manual effort are over. I think this is a multi-step process, but the very first step is understanding what you have and normalizing that across your data estate. So you couple this with the ontologies that are unique to your business, there are the algorithms, and you basically go across and identify and tag that data. That allows the next steps to happen. Now I can write business rules, not in terms of named columns, but in terms of the tags. Being able to automate that is a huge time saver, and the fact that we can suggest a rule, rather than waiting for a person to come along and say, oh, wow, okay, I need this rule, these are steps that decrease that time to value that Ajay talked about. And then, lastly, couple in the machine learning: even with great automation, being able to profile all of your data and get a good understanding brings you to a certain point, but there are still ambiguities in the data. So, for example, I might have two columns, date1 and date2. I may even have observed that date1 should be less than date2, but I don't really know what date1 and date2 are, other than dates.
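Writing rules against tags rather than physical column names, as Lester describes, might look something like the sketch below. The tag names, rule structure, and loan-date example are hypothetical, chosen to echo the loan start/end-date rule mentioned earlier in the segment; the point is that the rule binds to semantic tags, so it applies to any table whose columns carry those tags, whatever the columns are called.

```python
from datetime import date

# Tag map produced by profiling: physical column name -> semantic tag.
TAGS = {"date1": "loan_start_date", "date2": "loan_end_date"}

# A generic rule: any loan start date must fall before the loan end date.
RULE = ("loan_start_date", "loan_end_date", lambda start, end: start < end)

def columns_for(tag_map, tag):
    """All physical columns currently carrying a given semantic tag."""
    return [col for col, t in tag_map.items() if t == tag]

def check_rule(rows, tag_map, rule):
    """Return (row_index, col_a, col_b) for every row violating the rule."""
    tag_a, tag_b, holds = rule
    violations = []
    for col_a in columns_for(tag_map, tag_a):
        for col_b in columns_for(tag_map, tag_b):
            for i, row in enumerate(rows):
                a, b = row.get(col_a), row.get(col_b)
                if a is not None and b is not None and not holds(a, b):
                    violations.append((i, col_a, col_b))
    return violations
```

When a new database comes online, only the tag map changes; the rule itself is untouched, which is the portability Lester is pointing at.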
So this is where the machine learning comes in, and I might ask the user: can you help me identify what date1 and date2 are in this table? It turns out they're a start date and an end date for a loan. That answer gets remembered and cycled into the machine learning, so if I start to see this pattern of date1 and date2 elsewhere, I'm going to ask: is it a start date and an end date? Bringing all of these things together with all of this automation is really what's key to enabling this adaptive data governance. >>Great, thanks, Lester. And Ajay, I want to wrap things up with something that you mentioned in the beginning about what you guys are doing with Oracle. Take us out by telling us what you're doing there. How are you working together? >>Yeah, I think those of us who have worked in IT for many years have learned to trust Oracle's technology, and they're shifting now to a hybrid, on-prem-and-cloud, generation-two platform, which is exciting, and their existing customers and new customers are moving to Oracle on that journey. So Oracle came to us and said, you know, we can see how quickly you're able to help us change mindsets. Those mindsets are often locked into a way of thinking around IT operating models that may be non-agile and siloed, and customers want to break free of that and adopt a more agile, API-driven approach. A lot of the work that we're doing with Oracle now is around accelerating what customers can do with understanding their data and building digital apps, by identifying the underlying data that has value. And being able to do that in hours, days, and weeks, rather than many months, is opening up the eyes of chief data officers and CDOs to say, well, maybe we can do this whole digital transformation this year; maybe we can bring that forward and transform who we are as a company. That's driving innovation, which we're excited about.
I know Oracle is keen to drive that through as well. >>Helping businesses transform digitally is so incredibly important in this time, as we look at things changing in 2021. Ajay, Lester, thank you so much for joining me on this segment, explaining adaptive data governance, how organizations can use it and benefit from it, and achieve ROI. Thanks so much, guys. >>Thank you. >>Thanks again, Lisa. >>In a moment, we'll look at adaptive data governance in banking. This is theCUBE, your global leader in high-tech coverage. >>Innovation, impact, influence. Welcome to theCUBE. Disruptors, developers, and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE, your global leader in high-tech digital coverage. >>Our next segment here is an interesting panel; you're going to hear from three gentlemen about adaptive data governance, and we're going to talk a lot about that. Please welcome Yusuf Khan, the global director of data services for Io-Tahoe. We also have Santiago Castro, the chief data officer at the First Bank of Nigeria, and John van der Wal, Oracle's senior manager of digital transformation and industries. Gentlemen, it's great to have you joining us on this panel. >>Great to be here. >>Glad to be here. >>Same for me. >>All right, Santiago, we're going to start with you. Can you talk to the audience a little bit about the First Bank of Nigeria and its scale? This goes beyond Nigeria; talk to us about that. >>Yes. First Bank of Nigeria was created 125 years ago, one of the oldest banks in Nigeria and indeed in Africa, and because of that history it has grown everywhere in the region and beyond. I am based in London, which is kind of the headquarters, and the bank promotes trade finance, institutional banking, corporate banking, and private banking around the world, in particular in relation to Africa. We are also in Asia and in the Middle East.
>>So, Santiago, talk to me about what adaptive data governance means to you, and how does it help the First Bank of Nigeria innovate faster with the data that you have? >>Yes, I like that concept of adaptive data governance, because it's an approach that can really happen today with the new technologies; before, it was much more difficult to implement. So, just to give you a little bit of context, I used to work in consulting for 16 or 17 years before joining First Bank of Nigeria, and I saw many organizations trying to apply different types of approaches to data governance. In the beginning, the early days, it was really a hierarchical, top-down approach, where data governance was seen as implementing a set of rules, policies, and procedures, really from the top down. Now, it is important to have the backing of your C-level, of your directors, but what I saw is that on its own, that fails. You really need to have a complementary approach; you could say bottom-up. Actually, as a CDO, I am really trying to decentralize the governance. Instead of imposing a framework that some people in the business don't understand or don't care about, it really needs to come from them. What I'm trying to say is that data basically supports business objectives, and every business area needs information and data to make decisions, to actually be able to be more efficient or to create value, et cetera. Now, depending on the business questions they have to solve, they will need certain data sets, so they actually need to be able to have data quality for their own purposes. And when they understand that, they naturally become the stewards of their own data sets. And that is where my bottom-up meets my top-down.
You can guide them from the top, but they themselves need to be empowered and flexible enough to adapt to the different questions that they have, in order to be able to respond to the business needs. I cannot impose one fixed framework on everyone; I need them to adapt and to bring their own answers to their own business questions. That is adaptive data governance. And all of that is possible because, as I was saying at the very beginning, just to finish the point, we have new technologies that allow you to do this: metadata classification done in a very sophisticated way, so that you can actually run analytics over your metadata. You can understand your different data sources in order to be able to create those classifications, like nationalities, a way of classifying your customers, your products, et cetera. >>So one of the things that you just said, Santiago, kind of struck me: to enable the users to be adaptive, they probably don't want to be logging a support ticket. So how do you support that sort of self-service to meet the demand of the users, so that they can be adaptive? >>More and more, business users want autonomy, and they want to basically be able to grab the data and answer their own questions. Now, when you have that, it's great, because then you have demand, businesses asking for data, asking for insight. So how do you actually support that? I would say there is a change in culture that is happening more and more, and I would say even the current pandemic has helped a lot with that, because of course technology has been one of the biggest winners in it. Without technology, we couldn't have been working remotely, without these technologies where people can actually log in from their homes and still have data marketplaces where they self-serve their information. But even beyond that, data is a big winner.
Data, because the pandemic has shown us that crises happen, that we cannot predict everything, and that we actually face new kinds of situations outside our comfort zone, where we need to explore, we need to adapt, and we need to be flexible. How do we do that? With data. Every single company either saw its revenue going down, or, for those companies that are already very digital, its revenue going way up. The pandemic changed reality, so they needed to adapt, and for that they needed information, in order to think and innovate and try to create responses. So that type of self-service of data, that hunger for data in order to be able to understand what's happening when the landscape is changing, is something that is becoming more of a topic today, because of the pandemic and because of the new capabilities, the technologies, that allow it. And then you're able to basically help your data citizens, as I call them, people in the organization who know the business and can actually start playing with the data and answering their own questions. So these are technologies that give more accessibility to the data: some cataloging, so they can understand where to go or what to find; lineage and relationships; all of this is basically the new type of platform and tools that allow you to create what's called a data marketplace. I think these new tools are really strong, because they now allow people who are not technology or IT people to be able to play with data, because it comes in the digital world they're used to. To give an example with Io-Tahoe: you have a very interesting search functionality, where if you want to find your data, you go to that search and you actually go and look for your data. Everybody knows how to search in Google; everybody is searching the Internet. So this is part of the data culture, the digital culture: they know how to use those tools.
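The search-first experience Santiago describes could be sketched, very naively, like this; a real data marketplace would also rank results by tags, usage, and lineage, and the catalog structure here is entirely hypothetical.

```python
def search_catalog(catalog, query):
    """Naive keyword search across a catalog entry's name, tags and description."""
    terms = query.lower().split()
    results = []
    for entry in catalog:
        # Build one searchable string per catalog entry.
        haystack = " ".join(
            [entry["name"], entry.get("description", ""),
             " ".join(entry.get("tags", []))]
        ).lower()
        if all(term in haystack for term in terms):
            results.append(entry["name"])
    return results
```

The design point is that a business user types words, not SQL: the catalog metadata, not the underlying data, is what gets searched, which also sidesteps the privacy concerns raised earlier in the panel.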
Now, similarly, in that data marketplace you can, for example, see which data sources are the most used. >>And that enables the speed that we're all demanding today during these unprecedented times. John, I wanted to go to you. As we talk in the spirit of evolution, technology is changing. Talk to us a little bit about Oracle Digital: what are you guys doing there? >>Yeah, thank you. Well, Oracle Digital is a business unit at Oracle EMEA, and we focus on emerging countries, as well as on smaller enterprises and the mid-market in more developed countries. Four years ago, this started with the idea to engage digitally with our customers from central hubs across EMEA. That means engaging with video, having conference calls, having a wall, a green wall, where we stand in front and engage with our customers. No one at that time could have foreseen the situation of today, and this helps us to engage with our customers in the way we were already doing. Then, about my team: the focus of my team is to have early-stage conversations with our customers on digital transformation and innovation, and we also have a team of industry experts who engage with our customers and share expertise across EMEA, and we inspire our customers. The outcome of these conversations, for Oracle, is a deep understanding of our customers' needs, which is very important so we can help the customer; and for the customer, it means that we will help them with our technology and our resources to achieve their goals. >>It's all about outcomes, right? So, in terms of automation, what are some of the things Oracle is doing to help your clients leverage automation to improve agility, so that they can innovate faster, which in these interesting times is demanded? >>Yeah, thank you. Well, traditionally, Oracle is known for its databases, which have been innovated on year over year.
So here, the first thing to mention is the latest innovation, the Autonomous Database and Autonomous Data Warehouse. For our customers, this means a reduction in operational costs by 90%, with a multi-model converged database and machine-learning-based automation for full lifecycle management. Our database is self-driving: this means we automate database provisioning, tuning, and scaling. The database is self-securing: this means ultimate data protection and security. And it's self-repairing: it automates failure detection, failover, and repair. And then the question is: for our customers, what does this mean? It means they can focus on their business instead of maintaining their infrastructure and their operations. >>That's absolutely critical. Yusuf, I want to go over to you now. Some of the things that we've talked about are the massive progression in technology and the evolution of that. But we know that whether we're talking about data management or digital transformation, a one-size-fits-all approach doesn't work to address the challenges that the business has and that the IT folks have. As you're looking across the industry, with what Santiago told us about First Bank of Nigeria, what are some of the changes that Io-Tahoe is seeing throughout the industry? >>Well, Lisa, I think the first way I'd characterize it is to say that the traditional, top-down approach to data, where you have almost a data policeman who tells you what you can and can't do, just doesn't work anymore. It's too slow; it's too resource-intensive. Data management, data governance, digital transformation itself: it has to be collaborative, and there has to be a personalization for data users. In the environment we find ourselves in now, it has to be about enabling self-service as well. A one-size-fits-all model, when it comes to those things around data, doesn't work. As Santiago was saying, it needs to be adapted to how the data is used
and to who is using it. In order to do this, companies, enterprises, and organizations really need to know their data. They need to understand what data they hold, where it is, and what its sensitivity is. They can then, in a more agile way, apply appropriate controls and access, so that people and groups within businesses can do their jobs and can innovate. Otherwise, everything grinds to a halt, and you risk falling behind your competitors. >>Yeah, that one-size-fits-all term just doesn't apply when you're talking about being adaptive and agile. So we heard from Santiago about some of the impact that they're making at First Bank of Nigeria. Yusuf, talk to us about some of the business outcomes that you're seeing other customers achieve by leveraging automation that they could not achieve before. >>It's automatically being able to classify terabytes or even petabytes of data across different sources to find duplicates, which you can then remediate and delete. Now, with the capabilities that Io-Tahoe offers and that Oracle offers, you can do things that are not just a five-times or ten-times improvement; it actually enables you to do projects, full stop, that otherwise would fail or that you would just not be able to do. Classifying multi-terabyte and multi-petabyte estates across different sources and formats, very large volumes of data: in many scenarios, you just can't do that manually. I mean, we've worked with government departments, and the issues there are, as you'd expect, the result of fragmented data: there are a lot of different sources and a lot of different formats, and without these newer technologies to address that with automation and machine learning, the project just isn't doable. But now it is, and that can lead to a revolution in some of these businesses and organizations. >>To enable that revolution, there has got to be the right cultural mindset, and that's one of the things Santiago was talking about: folks really adapting to that.
The thing I always call it is getting comfortably uncomfortable, but that's hard for organizations to do. The technology is here to enable it; but when you're talking with customers, Yusuf, how do you help them build the trust and the confidence that the new technologies and new approaches can deliver what they need? How do you help drive that change in the culture? >>It's a really good question, because it can be quite scary. I think the first thing we'd start with is to say: look, the technology is here, with businesses like Io-Tahoe and like Oracle, and it's already arrived. What you need to be comfortable doing is experimenting, being agile around it, and trying new ways of doing things, if you don't want to get left behind. Santiago and the team at FBN are a great example of embracing it, testing it on a small scale, and then scaling up. At Io-Tahoe, we offer what we call a data health check, which can actually be done very quickly, in a matter of a few weeks. So we'll work with a customer, pick a use case, install the application, analyze the data, and drive out some quick wins. For instance, we worked over the last few weeks with a large energy supplier, and in about 20 days we were able to give them an accurate understanding of their critical data elements, help apply data protection policies, minimize copies of the data, and work out what data they needed to delete to reduce their infrastructure spend. So it's about experimenting on that small scale, being agile, and then scaling up in a very modern way. >>Great advice. Santiago, I'd like to go back to you. As we look again at that topic of culture and the need to get the mindset there to facilitate these rapid changes, I want to understand, as kind of a last question for you, how you're doing that from a digital transformation perspective. We know everything is accelerating in 2020.
So how are you building resilience into your data architecture, and also driving the cultural change that can help everyone in this shift to remote working and through all the digital challenges and changes that we're all going through? >>The new technologies allow us to discover the data anywhere, to flow and see information very quickly, to have new models of governing the data, and to give autonomy to our different data units. From that autonomy, they can then compose and innovate in their own ways. And for me, now, we're talking about resilience, because in a way, autonomy and flexibility in an organization, in a data structure, in a platform, give you resilience. The organizations and the business units that I have seen working well during the pandemic are those that, precisely because people are not physically present in the office anymore, give people their autonomy, let them actually engage on their own side and do their own jobs, and trust them. And as you give them that, they start innovating, and they start having really interesting ideas. So autonomy and flexibility, I think, are key components of the new infrastructure. The new reality has shown us that, yes, we used to be very structured, and policies and procedures are very important, but now we have learned flexibility and adaptability at the same time. Now, alongside that, another key component of resilience is speed, because people want to access the data fast and act on it fast. Things are changing so quickly nowadays that you need to be able to iterate with your information to answer your questions.
So technology that allows you to be flexible, iterating in a very fast way, will continually allow you to actually be resilient, because you are flexible, you adapt your work, and you keep answering questions as they come, without having everything set in a structure that is too rigid. We are also a partner of Oracle, and Oracle is great: they have embedded within the transactional system many algorithms that allow us to calculate as the transactions happen. What happens there is that when our customers engage with those algorithms, and likewise with Io-Tahoe, where the machine learning speeds up the automation of how you find your data, it allows you to create a new alliance with the machine. The machine is there to be, in a way, your best friend: to handle more volume of data, calculated faster, and in a way to cover more variety. I mean, we couldn't cope without being connected to these algorithms. >>That engagement is absolutely critical. Santiago, thank you for sharing that. I do want to wrap really quickly. John, one last question for you: Santiago talked about Oracle, and you've talked about it a little bit. As we look at digital resilience, talk to us a little bit, in the last minute, about the evolution of Oracle. What are you guys doing there to help your customers get the resilience that they have to have, to not just survive but thrive? >>Yeah. Oracle has a cloud offering for infrastructure, database, and platform services, and complete SaaS solutions. And as Santiago also mentioned, we are using AI across our entire portfolio, and this will help our customers to focus on their business innovation and capitalize on data by enabling new business models. Oracle has a global footprint with our cloud regions, and it's massively investing in innovating and expanding that cloud.
And by offering cloud as public cloud in our data centers, and also as private cloud with Cloud at Customer, we can meet every sovereignty and security requirement. In this way we help people to see data in new ways, discover insights, and unlock endless possibilities. And maybe one of my takeaways is this: when I speak with customers, I always tell them, you had better start collecting your data now. We enable this, and partners like Io-Tahoe help us as well. If you collect your data now, you are ready for tomorrow; you can never collect your data backwards. So that is my takeaway for today. >>You can't collect your data backwards. Excellent. Gentlemen, thank you for sharing all of your insights. Very informative conversation. In a moment, we'll address the question: do you know your data? >>Are you interested in test-driving the Io-Tahoe platform? Kick-start the benefits of data automation for your business through the Io-Tahoe Data Health Check program: a flexible, scalable sandbox environment on the cloud of your choice, with setup, service, and support provided by Io-Tahoe. Book time with a data engineer to learn more and see Io-Tahoe in action. From around the globe, it's theCUBE, presenting adaptive data governance, brought to you by Io-Tahoe. >>In this next segment, we're going to be talking to you about getting to know your data. Specifically, you're going to hear from two folks at Io-Tahoe: we've got enterprise account exec Sabita Davis here, as well as enterprise data engineer Patrick Simon. They're going to be sharing insights and tips and tricks for how you can get to know your data, and quickly. We also want to encourage you to engage with Sabita and Patrick: use the chat feature to the right to send comments, questions, or feedback, so you can participate. All right, Sabita, Patrick, take it away. >>Thanks, Lisa. Great to be here. As Lisa mentioned, guys, I'm the enterprise account executive here at Io-Tahoe. You, Pat? >>Yeah.
Hey, everyone, so great to be here. As I said, my name is Patrick Simon. I'm the enterprise data engineer here at Io-Tahoe, and we're so excited to be here and talk about this topic, as one thing we're really trying to perpetuate is that data is everyone's business. >>So, guys, Pat and I have actually had multiple discussions with clients from different organizations and with different roles. So we spoke with both your technical and your non-technical audience, and while they were interested in different aspects of our platform, we found that what they had in common was that they wanted to make data easy to understand and usable. That comes back to Pat's point of it being everybody's business, because no matter your role, we're all dependent on data. So what Pat and I wanted to do today was walk you guys through some of those client questions and pain points that we're hearing from different industries and different roles, and demo how our platform here at Io-Tahoe is used for automating data-related tasks. So with that said, are you ready for the first one, Pat? >>Yeah, let's do it. >>Great. So I'm going to put my technical hat on for this one. I'm a data practitioner. I just started my job at ABC Bank. I have, like, over 100 different data sources: data kept in data lakes, legacy data sources, even the cloud. My issue is that I don't know what those data sources hold, I don't know what data is sensitive, and I don't even understand how that data is connected. So how can Io-Tahoe help? >>Yeah, I think that's a very common experience many are facing, and definitely something I've encountered in my past. Typically, the first step is to catalog the data and then start mapping the relationships between your various data stores. Now, more often than not, this is tackled through numerous meetings and a combination of Excel and something similar to Visio, which are two great tools in their own right, but they're very difficult to maintain.
Just due to the rate at which we are creating data in the modern world, it starts to beg for a solution that can scale with your business needs, and this is where a platform like Io-Tahoe becomes so appealing. You can see here a visualization of the data relationships created by the Io-Tahoe service. Now, what is fantastic about this is that it's not only laid out in a very human and digestible format; in the same action of creating this view, the data catalog was constructed. >>So is the data catalog automatically populated? >>Correct. >>Okay, so what I'm getting with Io-Tahoe is this complete, unified, automated platform without the added cost? >>Of course. Exactly, and that's at the heart of Io-Tahoe. A great feature of that data catalog is that Io-Tahoe will also profile your data as it creates the catalog, assigning some meaning to those pesky column_1's and custom_variable_10's; they're always such a joy to deal with. Now, by leveraging this interface, we can start to answer the first part of your question and understand where the core relationships within our data exist. Personally, I'm a big fan of this view, as it really helps the eye naturally jump to the focal points that coincide with the key columns. Following that train of thought, let's examine the customer ID column, which seems to be at the center of a lot of these relationships. We can see that it's a fairly important column, as it's maintaining the relationship between at least three other tables. Now, you notice all the connectors are in this blue color; this means that they're system-defined relationships. But Io-Tahoe goes the extra mile and actually creates these orange-colored connectors as well. These are ones that our machine learning algorithms have predicted to be relationships, and you can leverage them to try and make new and powerful relationships within your data. >>So this is really cool, and I can see how this could be leveraged quickly now.
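The two views Pat describes, a catalog built as a side effect of profiling plus machine-predicted "orange" connectors, can be sketched in miniature. This is an illustrative sketch only, not Io-Tahoe's implementation; the table data, function names, and the overlap threshold are all hypothetical:

```python
# Sketch: predict table relationships by column-value overlap.
# Illustrative only; not Io-Tahoe's implementation. All names are hypothetical.

def value_overlap(col_a, col_b):
    """Jaccard overlap between two columns' distinct values."""
    a, b = set(col_a), set(col_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def predict_relationships(tables, threshold=0.5):
    """tables: {table_name: {column_name: [values]}}.
    Returns candidate join pairs, like the 'orange' predicted connectors."""
    candidates = []
    names = list(tables)
    for i, t1 in enumerate(names):
        for t2 in names[i + 1:]:
            for c1, v1 in tables[t1].items():
                for c2, v2 in tables[t2].items():
                    score = value_overlap(v1, v2)
                    if score >= threshold:
                        candidates.append((f"{t1}.{c1}", f"{t2}.{c2}", score))
    return candidates

tables = {
    "orders":    {"customer_id": [1, 2, 3, 4], "total": [10, 20, 30, 40]},
    "customers": {"customer_id": [1, 2, 3, 5], "name": ["a", "b", "c", "d"]},
}
print(predict_relationships(tables))
# [('orders.customer_id', 'customers.customer_id', 0.6)]
```

A real service would also weigh data types, name similarity, and profiled statistics; value overlap alone is just the simplest signal that two columns may be related.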
What if I add new data sources, or have multiple data sources, and need to identify what data is sensitive? Can Io-Tahoe detect that? >>Yeah, definitely. Within the Io-Tahoe platform there are already over 300 pre-defined policies, such as HIPAA, CCPA, and the like. One can choose which of these policies to run against their data, allowing for flexibility and efficiency in running the policies that affect your organization. >>Okay, so 300 is an exceptional number, I'll give you that. But what about internal policies that apply to my organization? Is there any ability for me to write custom policies? >>Yeah, that's no issue, and it's something that clients leverage fairly often. To utilize this function, one simply has to write a regex, which our team has helped many deploy. After that, the custom policy is stored for future use. To profile sensitive data, one then selects the data sources they're interested in and selects the policies that meet their particular needs. The interface will automatically tag your data according to the policies it detects, after which you can review the discoveries, confirming or rejecting the tagging. All of these insights are easily exported through the interface, so one can work them into the action items within your project management systems. And I think this lends itself to collaboration, as a team can work through the discovery simultaneously, and as each item is confirmed or rejected, they can see it near instantaneously. All this translates to a confidence that, with Io-Tahoe, you can be sure you're in compliance. >>So I'm glad you mentioned compliance, because that's extremely important to my organization. So what you're saying is, when I use the Io-Tahoe automated platform, we'd be 90% more compliant than before, versus if we were using a manual approach? >>Yeah, definitely. The collaboration and documentation that the Io-Tahoe interface lends itself to really help you build that confidence that your compliance is sound.
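The policy flow described above, pre-defined policies plus custom regex rules that tag columns for review, can be sketched roughly as follows. The policy names, patterns, and hit-rate threshold here are hypothetical illustrations, not Io-Tahoe's actual rules:

```python
# Sketch: regex-based sensitive-data tagging, in the spirit of the
# pre-defined and custom policies described above.
# Policy names and patterns are hypothetical, not Io-Tahoe's actual rules.
import re

POLICIES = {
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "EMAIL":  re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def tag_column(values, min_hit_rate=0.8):
    """Return policy tags whose pattern matches most of a column's values."""
    tags = []
    for name, pattern in POLICIES.items():
        hits = sum(1 for v in values if pattern.match(str(v)))
        if values and hits / len(values) >= min_hit_rate:
            tags.append(name)
    return tags

print(tag_column(["123-45-6789", "987-65-4321"]))  # ['US_SSN']
print(tag_column(["a@b.com", "c@d.org", "oops"]))  # [] -- only 2 of 3 match
```

The hit-rate threshold is what keeps a single stray value from tagging a whole column; the confirm-or-reject review step Pat describes would then sit on top of output like this.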
>> So we're planning a migration, and I have a set of reports I need to migrate. But what I need to know is: what data sources are those reports dependent on, and what's feeding those tables? >> Yeah, it's a fantastic question, Sabita. Identifying critical data elements and the interdependencies within the various databases can be a time-consuming but vital process in a migration initiative. Luckily, Io-Tahoe does have an answer, and again, it's presented in a very visual format. >> So what I'm looking at here is my entire data landscape. >> Yes, exactly. >> Let's say I add another data source; can I still see that unified 360 view? >> Yeah. One feature that is particularly helpful is the ability to add data sources after the data lineage discovery has finished, allowing for the flexibility and scope necessary for any data migration project. Whether you only need to select a few databases or your entire environment, this service will provide the answers you're looking for. The visual representation of the connectivity makes the identification of critical data elements a simple matter. The connections are driven both by system-defined flows and by those predicted by our algorithms, the confidence of which can actually be customized to make sure they're meeting the needs of the initiative you have in place. This also provides tabular output, in case you need it for your own internal documentation or for your action items, which we can see right here. In this interface, you can also confirm or deny the predicted pairings, allowing you to make sure that the data is as accurate as possible. Does that help with your data lineage needs? >> Definitely. So, Pat, my next big question here is: now I know a little bit about my data, but how do I know I can trust it? What I'm interested in knowing, really, is: is it in a fit state for me to use? Is it accurate? Does it conform to the right format?
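The lineage question above, which sources feed the reports being migrated, amounts to walking a dependency graph upstream. A minimal sketch of that walk, assuming a hypothetical hand-built lineage map rather than one discovered by the service:

```python
# Sketch: walking a data-lineage graph to find every upstream source a
# report depends on, as in the migration scenario above.
# The graph and names are hypothetical illustrations.
from collections import deque

# edges: downstream node -> list of upstream dependencies
LINEAGE = {
    "sales_report": ["sales_mart"],
    "sales_mart":   ["orders_db", "customers_db"],
    "customers_db": ["crm_extract"],
}

def upstream_sources(node):
    """Breadth-first walk; returns all transitive upstream dependencies."""
    seen, queue = set(), deque([node])
    while queue:
        for parent in LINEAGE.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(upstream_sources("sales_report")))
# ['crm_extract', 'customers_db', 'orders_db', 'sales_mart']
```

The same traversal, run for each report, yields the full set of databases a migration has to account for, which is essentially what the visual lineage view surfaces at a glance.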
>> Yeah, that's a great question, and I think that is a pain point felt across the board, by data practitioners and data consumers alike. Another service that Io-Tahoe provides is the ability to write custom data quality rules and understand how well the data conforms to these rules. This dashboard gives a unified view of the strength of these rules and your data's overall quality. >> Okay, so Pat, on the accuracy scores there: if my marketing team needs to run a campaign, can we depend on those accuracy scores to know which tables have quality data to use for our marketing campaign? >> Yeah, this view allows you to understand your overall accuracy, as well as dive into the minutiae to see which data elements are of the highest quality. So for that marketing campaign, if you need everything in a strong form, you'll be able to see that very quickly with these high-level numbers. But if you're only dependent on a few columns to get that information out the door, you can find that within this view. So you no longer have to rely on reports about reports; instead, you can just come to this one platform to help drive conversations between stakeholders and data practitioners. >> So I get now the value Io-Tahoe brings by automatically capturing all that technical metadata from sources. But how do we match that with the business glossary? >> Yeah, within the same data quality service that we just reviewed, one can actually add business rules detailing the definitions and the business domains that these fall into. What's more, the data quality rules we were just looking at can then be tied into these definitions. It is this service that empowers stakeholders across the business to be involved with the data life cycle and take ownership of the rules that fall within their domain. >> Okay, so those custom rules, can I apply them across data sources?
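The accuracy scores discussed here can be thought of as pass rates of custom rules over each column. A minimal sketch, with hypothetical rules and data standing in for the dashboard's inputs:

```python
# Sketch: scoring columns against custom data-quality rules to produce
# the kind of per-column accuracy scores discussed above.
# Rules and data are hypothetical illustrations.
import re

RULES = {
    "email":    lambda v: re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", v) is not None,
    "zip_code": lambda v: v.isdigit() and len(v) == 5,
}

def accuracy_scores(table):
    """table: {column: [values]} -> {column: fraction of values passing its rule}."""
    scores = {}
    for column, values in table.items():
        rule = RULES.get(column)
        if rule and values:
            scores[column] = sum(1 for v in values if rule(v)) / len(values)
    return scores

table = {
    "email":    ["a@b.com", "bad", "c@d.org", "e@f.io"],
    "zip_code": ["02139", "9021"],
}
print(accuracy_scores(table))  # {'email': 0.75, 'zip_code': 0.5}
```

High-level numbers like these answer the "can marketing trust this table?" question, while the per-column breakdown shows exactly which fields drag the score down.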
Yeah, you can bring in as many data sources as you need, so long as you can tie them to that unified definition. >> Okay, great. Thanks so much, Pat. And we just want to quickly say to everyone working in data: we understand your pain, so please feel free to reach out to us on our website, and let's get a conversation started on how Io-Tahoe can help you automate all those manual tasks to help save you time and money. Thank you. >> Thank you. >> If I could ask you one quick question: you just walked through this great banking example. How do you advise customers to get started? >> Yeah, I think the number one thing that customers can do to get started with our platform is to just run the tag discovery and build up that data catalog. It lends itself very quickly to the other needs you might have, such as these quality rules, as well as identifying those kinds of tricky columns that might exist in your data, those custom_variable_10's I mentioned before. >> Last question, Sabita: anything to add to what Pat just described as a starting place? >> No, I think Pat said that pretty well. I mean, just by automating all those manual tasks, it definitely can save your company a lot of time and money, so we encourage you to just reach out to us and let's get that conversation started. >> Excellent. So, Sabita and Pat, thank you so much. We hope you have learned a lot from these folks about how to get to know your data and make sure that it's quality data, so you can maximize the value of it. Thanks for watching.
Thanks again, Lisa, for that very insightful and useful deep dive into the world of adaptive data governance with Io-Tahoe, Oracle, and First Bank of Nigeria. This is Dave Vellante. You won't want to miss Io-Tahoe's fifth episode in the data automation series, in which we'll talk to experts from Red Hat and Happiest Minds about their best practices for managing data across hybrid cloud, intercloud, and multicloud IT environments. So mark your calendar for Wednesday, January 27th; that's Episode five. You're watching theCUBE, your global leader in digital event technology.
Vaughn Stewart, Pure Storage & Bharath Aleti, Splunk | Pure Accelerate 2019
>> from Austin, Texas. It's Theo Cube, covering pure storage. Accelerate 2019. Brought to you by pure storage. >> Welcome back to the Cube. Lisa Martin Day Volante is my co host were a pure accelerate 2019 in Austin, Texas. A couple of guests joining us. Next. Please welcome Barack elected director product management for slunk. Welcome back to the Cube. Thank you. And guess who's back. Von Stewart. V. P. A. Technology from pure Avon. Welcome back. >> Hey, thanks for having us guys really excited about this topic. >> We are too. All right, so But we'll start with you. Since you're so excited in your nice orange pocket square is peeking out of your jacket there. Talk about the Splunk, your relationship. Long relationship, new offerings, joint value. What's going on? >> Great set up. So Splunk impure have had a long relationship around accelerating customers analytics The speed at which they can get their questions answered the rate at which they could ingest data right to build just more sources. Look at more data, get faster time to take action. However, I shouldn't be leading this conversation because Split Split has released a new architecture, a significant evolution if you will from the traditional Splunk architectural was built off of Daz and a shared nothing architecture. Leveraging replicas, right? Very similar what you'd have with, like, say, in H D. F s Work it load or H c. I. For those who aren't in the analytic space, they've released the new architecture that's disaggregated based off of cashing and an object store construct called Smart Store, which Broth is the product manager for? >> All right, tell us about that. >> So we release a smart for the future as part of spunk Enterprise. $7 to about a near back back in September Timeframe. Really Genesis or Strong Smart Strong goes back to the key customer problem that we were looking to solve. 
So one of our customers, they're already ingesting a large volume of data, but the need to retain the data for twice, then one of Peter and in today's architecture, what it required was them to kind of lean nearly scale on the amount of hardware. What we realized it. Sooner or later, all customers are going to run into this issue. But if they want in just more data or reading the data for longer periods, of time, they're going to run into this cost ceiling sooner or later on. The challenge is that into this architecture, today's distributes killer dark picture that we have today, which of all, about 10 years back, with the evolution of the Duke in this particular architecture, the computer and story Jacqui located. And because computer storage acqua located, it allows us to process large volumes of data. But if you look at the demand today, we can see that the demand for storage or placing the demand for computer So these are, too to directly opposite trans that we're seeing in the market space. If you need to basically provide performance at scale, there needs to be a better model. They need a better solution than what we had right now. So that's the reason we basically brought Smart store on denounced availability last September. What's Marceau brings to the table is that a D couples computer and storage, So now you can scale storage independent of computers, so if you need more storage or if you need to read in for longer periods of time, you can just kill independent on the storage and with level age, remote object stores like Bill Flash bid to provide that data depository. But most of your active data said still decides locally on the indexers. So what we did was basically broke the paradigm off computer storage location, and we had a small twist. He said that now the computer stories can be the couple, but you bring comfort and stories closer together only on demand. 
So that means that when you were running a radio, you know, we're running a search, and whenever the data is being looked for that only when we bring the data together. The other key thing that we do is we have an active data set way ensure that the smart store has ah, very powerful cash manager that allows that ensures that the active data set is always very similar to the time when your laptop, the night when your laptop has active data sets always in the cash always on memory. So very similar to that smarts for cash allows you to have active data set always locally on the index. Start your search performance is not impact. >> Yes, this problem of scaling compute and storage independently. You mentioned H. D. F s you saw it early on there. The hyper converged guys have been trying to solve this problem. Um, some of the database guys like snowflakes have solved it in the cloud. But if I understand correctly, you're doing this on Prem. >> So we're doing this board an on Prem as well as in Cloud. So this smart so feature is already available on tramp were also already using a host all off our spun cloud deployments as well. It's available for customers who want obviously deploy spunk on AWS as well. >> Okay, where do you guys fit in? So we >> fit in with customers anywhere from on the hate say this way. But on the small side, at the hundreds of terabytes up into the tens and hundreds of petabytes side. And that's really just kind of shows the pervasiveness of Splunk both through mid market, all the way up through the through the enterprise, every industry and every vertical. So where we come in relative to smart store is we were a coat co developer, a launch partner. 
And because our object offering Flash Blade is a high performance object store, we are a little bit different than the rest of the Splunk s story partner ecosystem who have invested in slow more of an archive mode of s tree right, we have always been designed and kind of betting on the future would be based on high performance, large scale object. And so we believe smart store is is a ah, perfect example, if you will, of a modern analytics platform. When you look at the architecture with smart store as brush here with you, you want to suffice a majority of your queries out of cash because the performance difference between reading out a cash that let's say, that's NAND based or envy. Emmy based or obtain, if you will. When you fall, you have to go read a data data out of the Objects store, right. You could have a significant performance. Trade off wean mix significantly minimized that performance drop because you're going to a very high bandwith flash blade. We've done comparison test with other other smart store search results have been published in other vendors, white papers and we show Flash blade. When we run the same benchmark is 80 times faster and so what you can now have without architecture is confidence that should you find yourself in a compliance or regulatory issue, something like Maybe GDP are where you've got 72 hours to notify everyone who's been impacted by a breach. Maybe you've got a cybersecurity case where the average time to find that you've been penetrated occurs 206 days after the event. And now you gotta go dig through your old data illegal discovery, you know, questions around, you know, customer purchases, purchases or credit card payments. Any time where you've got to go back in the history, we're gonna deliver those results and order of magnitude faster than any other object store in the market today. That translates from ours. Today's days, two weeks, and we think that falls into our advantage. Almost two >> orders of magnitude. 
>> Can this be Flash Player >> at 80%? Sorry, Katie. Time 80 x. Yes, that's what I heard. >> Do you display? Consider what flashlight is doing here. An accelerant of spunk, workloads and customer environment. >> Definitely, because the forward with the smart, strong cash way allow high performance at scale for data that's recites locally in the cash. But now, by using a high performance object store like your flash played. Customers can expect the same high performing board when data is in the cash as well as invented sin. Remorseful >> sparks it. Interesting animal. Um, yeah, you have a point before we >> subjects. Well, I don't want to cut you off. It's OK. So I would say commenting on the performance is just part of the equation when you look at that, UM, common operational activities that a splitting, not a storage team. But a Splunk team has to incur right patch management, whether it's at the Splunk software, maybe the operating system, like linen store windows, that spunk is running on, or any of the other components on side on that platform. Patch Management data Re balancing cause it's unequal. Equally distributed, um, hardware refreshes expansion of the cluster. Maybe you need more computer storage. Those operations in terms of time, whether on smart store versus the classic model, are anywhere from 100 to 1000 times faster with smart store so you could have a deployment that, for example, it takes you two weeks to upgrade all the notes, and it gets done in four hours when it's on Smart store. That is material in terms of your operational costs. >> So I was gonna say, Splunk, we've been watching Splunk for a long time. There's our 10th year of doing the Cube, not our 10th anniversary of our 10th year. I think it will be our ninth year of doing dot com. And so we've seen Splunk emerged very cool company like like pure hip hip vibe to it. And back in the day, we talked about big data. Splunk never used that term, really not widely in its marketing. 
But then when we started to talk about who's gonna own the big data, that space was a cloud era was gonna be mad. We came back. We said, It's gonna be spunk and that's what's happened. Spunk has become a workload, a variety of workloads that has now permeated the organization, started with log files and security kind of kind of cumbersome. But now it's like everywhere. So I wonder if you could talk to the sort of explosion of Splunk in the workloads and what kind of opportunity this provides for you guys. >> So a very good question here, Right? So what we have seen is that spunk has become the de facto platform for all of one structure data as customers start to realize the value of putting their trying to Splunk on the watch. Your spunk is that this is like a huge differentiate of us. Monk is the read only skim on reed which allows you to basically put all of the data without any structure and ask questions on the flight that allows you to kind of do investigations in real time, be more reactive. What's being proactive? We be more proactive. Was being reactive scaleable platform the skills of large data volumes, highly available platform. All of that are the reason why you're seeing an increase that option. We see the same thing with all other customers as well. They start off with one data source with one use case and then very soon they realize the power of Splunk and they start to add additional use cases in just more and more data sources. >> But this no >> scheme on writer you call scheme on Reed has been so problematic for so many big data practitioners because it just became the state of swamp. >> That didn't >> happen with Splunk. Was that because you had very defined use cases obviously security being one or was it with their architectural considerations as well? >> They just architecture, consideration for security and 90 with the initial use cases, with the fact that the scheme on Reid basically gives open subject possibilities for you. 
Because there's no structure to the data, you can ask questions on the fly, and you can use that to investigate, to troubleshoot and analyze, and take remedial actions on what's happening. And now, with our new acquisitions, we have added additional capabilities where we can also orchestrate the whole end-to-end flow with Phantom, right? So a lot of these acquisitions are also helping enable the market. >> So we've been talking about TAM expansion all week. We definitely hit it with Charlie pretty hard. You know, I think it's a really important topic. One of the things we haven't hit on is TAM expansion through partnerships and that flywheel effect. So how do you see the partnership with Splunk, just in terms of supporting that TAM expansion over the next 10 years? >> So, uh, analytics, particularly log analytics, have really taken off for us in the last year as we put more focus on it. We want to double down on our investments as we go through the end of this year and into next year, with a focus on Splunk as well as other alliances. We think we are in a unique position because of the rollout of SmartStore, right? Customers are always at a different point in terms of when they want to adopt a new architecture, right? It is a significant decision that they have to make. And so we believe the combination of FlashArray for the hot tier and FlashBlade for the cold tier is a nice way for customers with the classic Splunk architecture to modernize their platform: leverage the benefits of data reduction to drive down some of the cost, leverage the benefits of flash to increase the rate at which they can ask questions and get answers. It's a nice stepping stone.
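As a reference aside: the hot/cold split described here is configured on the Splunk side in `indexes.conf`, where an index is pointed at a remote object-store volume. The sketch below is illustrative only; the bucket name and endpoint are placeholders, and the exact settings should be verified against Splunk's SmartStore documentation.

```ini
[volume:remote_store]
storageType = remote
path = s3://smartstore-bucket
# S3-compatible endpoint of the object store (for example, a FlashBlade data VIP)
remote.s3.endpoint = https://s3.example.internal

[main]
# Warm/hot buckets are cached locally; the authoritative copy lives in the remote volume
remotePath = volume:remote_store/$_index_name
```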
And when customers are ready, because FlashBlade is one of the few storage platforms in the market that is scale-out, bandwidth-optimized, for both NFS and object, they can go through a rolling, nondisruptive upgrade to SmartStore and have, you know, investment protection. And if they can't repurpose that FlashArray, they can use Pure as-a-Service to have the FlashArray as the hot tier today and drop it back off when they're done with it tomorrow. >> And what about //C for, you know, big workloads, like big data workloads? I mean, is that a good fit here, or do you really need to be more performance-oriented? >> So FlashBlade is high-bandwidth optimized, which really is designed for a workload like Splunk, where you have to do a sparse search, right, that find-the-needle-in-the-haystack question: were you breached? Where were you breached? How were you breached? You go read as much data as possible; you've gotta ingest all that data back to the servers as fast as you can. And with //C, QLC is really optimized as a tier-2 form of NAND for that secondary, maybe transactional, database or virtual machines. >> All right, one more and then I'm gonna shut up. The SignalFx acquisition was very interesting to me for a lot of reasons. One was the cloud: the SaaS portion of Splunk was late to that game, but now you're sort of making that transition. You saw Tableau, you saw Adobe, like, rip the Band-Aid off, and it was somewhat painful. But Splunk is at it. So I wonder, any advice that you at Splunk would have for Vaughn as Pure makes that transition to that SaaS model? >> So I think definitely, I think it's going to be a challenging one, but I think it's a much-needed one in the environment that we are in. The key thing is to always be customer-focused, and I'm sure that you're already customer-focused.
But the key thing is to make sure that your service is up all the time, and make sure that you can provide that uptime, which is going to be crucial for your customers, Lisa. >> That's good. That's good guidance. >> Just wanted to cover that for you. >> So you gave us some of those really impressive stats in terms of performance. They're almost too good to be true. What's the customer feedback? Let's talk about the real world. When you're talking to customers about those numbers, what's the reaction? >> So I don't wanna speak for Bharath, so I will say, in our engagements within their customer base, what we hear, particularly from customers at scale: the larger the environment, the more aggressive they are to say they will adopt SmartStore, right, and on a more aggressive schedule than the smaller environments. And it's because the benefits of operating and maintaining the indexer cluster are so great that they'll actually turn to the storage team and say, this is the new architecture I want, this is the new storage platform. And again, when we're talking about patch management, cluster expansion, hardware refresh, I mean, for large installs you're talking weeks, not two or three but 10, 12 weeks on end. So if you can reduce that down to a couple of days, it changes your operational paradigm, your staffing. And so it has got high impact. >> And one of the messages that we're hearing from customers is that they get a significant reduction in the infrastructure spend; it almost drops by two-thirds. That's really significant, because a lot of our large customers are spending a ton of money on infrastructure, so just dropping that by two-thirds is a significant driver to move to SmartStore. This is in addition to all the other benefits that you get with SmartStore, with the operational simplicity and the scalability that it provides. >> You also have customers, because of SmartStore.
They can now actually burst on demand. And so you can think of this in kind of two paradigms, right? Instead of having to try to avoid some of the operational pain, right, pre-purchase and pre-provision a large infrastructure and hope you fill it up, they can do it more right-sized and kind of grow in increments on demand, whether it's storage or compute. That's something that's net new with SmartStore. Um, they can also, if they have a significant event occur, fire up additional indexer nodes and search clusters, which can be bare metal, VMs, or containers, right, to, you know, push the FlashBlade to its max. Once they've found the answers that they need and gotten through whatever the urgent issue is, they just deprovision those assets on demand and return back down to a steady state. So it's a very flexible, you know, kind of cloud-native, agile platform. >> Awesome, guys. I wish we had more time, but thank you so much, Vaughn and Bharath, for joining David and me on the Cube today and sharing all of the innovation that continues to come from this partnership. >> Great to see you. Appreciate it. >> For Dave Volante, I'm Lisa Martin, and you're watching the Cube.
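The schema-on-read model discussed in this segment (ingest raw events with no declared structure, impose structure only at query time) can be sketched in a few lines of Python. This is an illustrative toy, not Splunk code; the key=value extraction regex and the sample events are invented for the example.

```python
import re

# Raw events are ingested as-is: no schema is declared at write time.
raw_events = [
    "2019-10-23 10:01:12 action=login user=alice src=10.0.0.5",
    "2019-10-23 10:01:15 action=login user=bob src=10.0.0.9 status=failed",
    "2019-10-23 10:02:01 action=purchase user=alice amount=42",
]

def search(events, **criteria):
    """Schema-on-read: extract key=value fields from each raw event at
    query time, then filter on whatever fields the question needs."""
    results = []
    for line in events:
        fields = dict(re.findall(r"(\w+)=(\S+)", line))
        if all(fields.get(k) == v for k, v in criteria.items()):
            results.append(fields)
    return results

# Ask a question that was never anticipated at ingest time.
failed_logins = search(raw_events, action="login", status="failed")
print(failed_logins)
# [{'action': 'login', 'user': 'bob', 'src': '10.0.0.9', 'status': 'failed'}]
```

Because no structure is committed at write time, a new investigation (say, all purchases by a user) needs no reindexing: it is just another set of criteria applied at read time.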
Liran Zvibel, WekaIO | CUBEConversations, June 2019
>> From our studios in the heart of Silicon Valley, Palo Alto, California, it's a Cube Conversation. >> Hi, and welcome to the Cube studios for a Cube Conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. What are we talking about today? One of the key indicators of success in a digital business is how fast you can translate your data into new value streams. That means sharing it better, accelerating the rate at which you're running those models, making it dramatically easier to administer large volumes of data at scale with a lot of different uses. That's a significant challenge; it's going to require a rethinking of how we manage many of those data assets and how we utilize them. Now, to have that conversation, we're here with Liran Zvibel, who is the CEO of WekaIO. Liran, welcome back to the Cube. >> Thank you very much for having me. >> So before we get to the kind of big problem, give us an update: what's going on at WekaIO these days? >> So very recently we announced our Series C financing for the company, another $31.7 million. We've actually had a very unorthodox way of raising this round. Instead of going to a traditional VC-led round, we actually went to our business partners and joined forces with them into building a stronger Weka for customers. We started with NVIDIA, which has seen a lot of success going with us to their customers, because we enable NVIDIA to deploy more GPUs, so their customers can either solve bigger problems or solve their problems faster. The second pillar of the data center is networking, so we've had Mellanox investing in the company, because they are the leader of fast networking. So between NVIDIA, Mellanox, and WekaIO, you have very strong pillars.
Around compute, network, and storage: performance is crucial, but it's not the only thing customers care about. Customers need extremely fast access to their data, but they're also accumulating and keeping and storing a tremendous amount of it. So we've actually had the whole hard drive industry investing in us, with Seagate and Western Digital both investing in the company. And finally, one of our very successful go-to-market partners, Hewlett Packard Enterprise, invested in us through their Pathfinder program. So we're seeing tremendous backing from the industry, supporting our vision of enabling next-generation performance to applications and the ability to scale to any workload. >> Congratulations. And it's good money, but it's also smart money that has a lot of operational elements. Just to repeat: it's Mellanox, NVIDIA, HPE, Seagate, and Western Digital. So it's an interesting group, but it's a group that will absolutely sustain and further your drive to try to solve some of these key data-oriented problems. But let's talk about what some of those key data-oriented problems are. I said up front that one of the challenges for any business that generates a lot of its value out of digital assets is how fast, how easily, and with what kind of fidelity it can reuse and process and move those data assets. How is the industry attending to that? How's that working in the industry today, and where do you think we're going? >> So that's spot on. So businesses today, through different kinds of workloads, need to access tremendous amounts of data extremely quickly, and the question of how they compare to their cohort is actually based on how quickly and how well they can go through the data and process it. And that's what we're solving for our customers. And we're now looking into several applications where speed and performance, on the one hand, have to go hand in hand with extreme scale.
So we see great success in machine learning, where NVIDIA is with us. We're going after life sciences, where the genomic models, the cryo-electron microscopy, the computational chemistry are all now accelerated for the pharma companies, because for the researchers to actually get to a conclusion, they have to sift through a lot of data. We are working extremely well in financial analytics, either for the banks, for the hedge funds, or for the quantitative trading companies, because we allow them to go through data much, much quicker. Actually, only last week I had the chance to visit a customer where we were able to change the amount of time it takes them to go through one analytic cycle from almost two hours to four minutes. >> This is in financial analytics? >> Exactly. And I think last time I was here I was telling you about one of the autonomous-driving companies using us, taking, uh, the time to run a single epoch from two weeks to four hours. So we see a consistent one to two orders of magnitude speedup in wall-clock time. So we're not just showing we're faster on a benchmark; we're showing our customers that by leveraging our technology, they get results significantly faster. We're also successful in engineering around chip design, software builds, fluid dynamics. We've announced Mellanox as an EDA customer, a chip-design customer, so they're not only a partner: they have brought our technology in house, and they're leveraging us for their next chips. And recently we've also discovered that we are a great help for running NoSQL databases in the cloud: running, ah, Spark or Cassandra over WekaIO is more than twice as fast as running over the standard EBS elastic block services. >> All right, so let's talk about this, because you're solving problems that really only recently have been within range of the technology, but we still see some struggling. The way I described it is that storage for a long time was focused on persisting data: transactions executed.
Make sure you persisted it. Now it's moved to these life sciences, machine learning, genomics: those types of applications, the file workloads we're talking about. How can I share data? How can I deploy and use data faster? But the history of the storage industry is still predicated on designs that were mainly focused on persistence; you think about block storage and filers and whatnot. How is WekaIO advancing that technology space, you know, reorganizing or rethinking storage for the types of performance and scale that some of these use cases require? >> This is actually a great question. We actually started the company, we had a long legacy at IBM, we now have folks from, uh, NetApp, uh, kind of folks from the EMC, and we see what happens: the current storage portfolios for the large players are very big and very convoluted. And we decided, when we were starting the company, that we're solving it. So our aim is to solve all the issues storage has had for the last four decades. So if you look at what customers use today: if they need the utmost performance, they go to direct-attached. This is what Fusion-io was, or Violin Memory; today these are NVMe devices. The downside is that the data cannot be shared; it cannot even be backed up. If a server goes away, you're done. Then, if customers had to have some way of managing the data, they bought block SAN, and then they deployed a volume to a server and ran a local file system over that. It wasn't as performant as the DAS, but at least you could back it up; you could manage it somewhat. What has happened over the last 15 years: customers realized Moore's law has ended, so up-scaling stopped working, and people have to go out-scaling. And now it means that they have to share data to solve their problems. >> More parallelism.
More computers have to share data to actually being able to solve the problem, and for a while customers were able to use the traditional filers like Aneta. For this, kill a pilot like an eyes alone or the traditional parlor file system like the GP affair spectrum scale or luster, but these were significantly slower than sand and block or direct attached. Also, they could never scale matter data. You were limited about how many files that can put in a single, uh, directory, and you were limited by hot spots into that meta data. And to solve that, some customers moved to an object storage. It was a lot harder to work with. Performance was unimpressive. You had to rewrite our application, but at least he could scale what were doing at work a Iot. We're reconfiguring the storage market. We're creating a storage solution that's actually not part of any of these for categories that the industry has, uh, become used to. So we are fasted and direct attached, they say is some people hear it that their mind blows off were faster, the direct attached, whereas resilient and durable as San, we provide the semantics off shirt file, so it's perfect your ability and where as Kayla Bill for capacity and matter data as an object storage >> so performance and scale, plus administrative control and simplicity exactly alright. So because that's kind of what you just went through is those four things now now is we think about this. So the solution needs to be borrow from the best of these, but in a way that allows to be applied to work clothes that feature very, very large amounts of data but typically organized as smaller files requiring an enormous amount of parallelism on a lot of change. Because that's a big part of their hot spot with metadata is that you're constantly re shuffling things. 
So going forward, how does the WekaIO solution generally hit that hot spot? And specifically, how are you going to apply these partnerships that you just put together and the investment to actually come to market even faster and more successfully? >> All right, so these are actually two questions. On the technology: what we have is the only one that parallelizes IO in a perfect way, and also metadata in a perfect way, and sustains that parallelism, um, by load balancing. So, as we talked about, for the hot spots some customers have, or, since we also run natively in the cloud, you may get a noisy neighbor: if you aren't employing constant load balancing alongside the extreme parallelism, you're going to be bound by a bottleneck. And we're the only solution that actually couples the ability to break each operation into a lot of small ones with making sure we distribute the work to the resources that are available. Doing that allows us to provide the tremendous performance at tremendous scale. So that answers the technology question. >> Without breaking, or without introducing unbelievable complexity in the administration. >> It actually makes everything simpler. Because, looking for example at our autonomous-driving example: um, the reason they were able to go from two weeks to four hours is that before us, they had to copy data from their object storage to a filer. But the filer wasn't fast enough, so they also had to copy the data from the filer to a local file system. And these copies are what added so much complexity to the workflow and made it so slow, because when you copy, you don't compute. >> And you lose fidelity along the way, right? OK, so how are this money and these partnerships going to translate into accelerated innovation?
>> So we are leveraging some of the funds for more engineering, coming up with more features, supporting more enterprise applications. We're going to leverage some of the funds for doing marketing, and we're actually spending on marketing programs with these five good partners: with NVIDIA, with Mellanox, with Seagate, with Western Digital, and with Hewlett Packard Enterprise. But we're also deploying joint sales motions. So we're now plugged into NVIDIA, and plugged into Mellanox, and plugged into Western Digital, and into Hewlett Packard Enterprise, so we can leverage their internal resources, now that they have realized, through their business units and the investment arms, that we make sense, and we can actually go and serve their customers more effectively and better. >> Well, WekaIO has introduced a unique and new technology that makes perfect sense. But it is unique, and it's relatively new, and sometimes enterprises might go, "well, that's a little bit too immature for me; but if the problem that it solves is that valuable, we'll bite the bullet." But even more importantly, a partnership lineup like this has got to be ameliorating some of the concerns that you're hearing from the marketplace. >> Definitely. So when NVIDIA tells the customers, "hey, we have tested it in our labs," or when Hewlett Packard Enterprise tells the customer, "not only have we tested it in our lab, but the support is going to come out of Pointnext," these customers now have the ability to keep buying from their trusted partners but get the intellectual property of a newer company with better, uh, intellectual property abilities. Another great benefit that comes to us: we are a 100% channel-led company. We are not doing direct sales, and working with these partners, we actually have their channel plans open to us, so we can go together and we can implement go-to-market strategies together with their partners that already know how to work with them.
And we're just enabling and answering the technical questions, talking about the roadmap, talking about how to deploy. But the whole ecosystem keeps running in the efficient way it already runs, so we don't have to go and reinvent the wheel on how we interact with these partners. Obviously, we also interact with them directly. >> You get to focus on solving the problem. >> Exactly. >> Great. All right, so once again, thanks for joining us for another Cube Conversation. Liran Zvibel of WekaIO, it's been great talking to you again on the Cube. >> Thank you very much. I always enjoy coming over here. >> And I'm Peter Burris. Until next time.
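The idea Liran describes in this segment, breaking each large IO into many small pieces and continuously steering them to whichever resources are currently least busy, can be illustrated with a deliberately simplified Python sketch. This is a toy model of the concept, not WekaIO's implementation; the node count and chunk size are invented for the example.

```python
import heapq

def write_parallel(data: bytes, num_nodes: int = 4, chunk_size: int = 4):
    """Split one logical write into small chunks and assign each chunk
    to the currently least-loaded node, tracked in a min-heap of
    (bytes_assigned, node_id) pairs."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    load = [(0, n) for n in range(num_nodes)]  # (bytes assigned so far, node id)
    heapq.heapify(load)
    placement = []
    for chunk in chunks:
        assigned, node = heapq.heappop(load)   # least-loaded node wins this chunk
        placement.append((node, chunk))
        heapq.heappush(load, (assigned + len(chunk), node))
    return placement

# 16 bytes split into 4 chunks spread evenly, one per node,
# since all nodes start equally loaded.
print(write_parallel(b"0123456789abcdef", num_nodes=4, chunk_size=4))
```

A slow or busy node simply accumulates load in the heap and stops winning chunks, which is the load-balancing behavior the transcript attributes to the real system.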
Matt Kixmoeller, Pure Storage | CUBEcoversation, April 2019
>> Welcome to this special Cube Conversation. We're here in Mountain View, California, at the Pure Storage headquarters here on Castro Street, one of the many buildings they have here as they continue to grow as a public company. Our next guest is Kix, Vice President of Strategy and employee number six at Pure. Great to see you. Thanks for spending the time. >> Thanks for having me. >> So cloud is the big wave that's coming; the future is here now. People are really impacted by it operationally, coming to the reality that they've got to actually use the cloud for benefits, for many, many multiple benefits. But you guys have major bones in storage: FlashArray is continuing to take territory. So as you guys do that, what's the cloud play? How do customers who are using Pure, and we've heard some good testimonials, a lot of happy customers, we've seen great performance: easy to get in, reliability, performance, they're there on the storage side on premise, right? OK. Now operations says, "hey, I build faster; cloud is certainly a path there, certainly a good one." Your thoughts on strategy for the cloud? >> Absolutely. So look, we're about ten years into the journey here at Pure, and a lot of what we did in the first ten years was help bring flash onto the scene, um, and, you know, with a vision when we started the company of the all-flash data center. And I'd like to, first of all, remind people that, look, we ain't there yet. If you look at the analyst numbers, about a third of the storage sold this year will be flash, two-thirds disk. So we still have a long way to go on the all-flash data center, and a lot of work to do there. But of course, increasingly, customers are wanting to move workloads to the cloud. And I think the last couple of years have almost seen a pendulum swing a little bit more back to reality. You know, when I met with CEOs two, three years ago, you often heard "we're going all cloud, we're going cloud first," and, you know, now they're a few years into it.
And they've realized that the cloud is a very powerful weapon in their arsenal for agility, for flexibility, but it's not necessarily cheaper. And so I think the swing back is to really believing that hybrid is the model of the day. And I think what people have realized in that journey is that the cloud really works best when you build an app for the cloud natively. But what if you have a bunch of on-prem apps that are in a traditional architecture? How do I get them to the cloud? And so one of the things we've really focused on is how we can help customers take their mission-critical applications and move them seamlessly to the cloud without re-architecture. Because for most customers, that's really where it's going to start. I mean, they can build some new stuff in the cloud, but the bulk of their business: if they want to move substantial portions to the cloud, they've got to figure out how to move what they've got. And we think we really add value in that. >> And the economics of the cloud are undeniable; people who were born in the cloud will testify to that. Certainly, as you guys have been successful on premise, with the cloud, how do you make those economics, as well as the operations, seamless? This seems to be the number one goal. Talk about how important that is and how hard it is, because it sounds easy just to say it, but it's actually really difficult to have seamless operations on prem and in the cloud, because, you know, Amazon, Google, Microsoft, they've all got compute and storage in the cloud, and you've got storage on premise. This equation is a really important one to figure out. What's the importance, how hard is it, and what are some of the things you guys are doing to solve that? >> Yeah. So I heard two things in that question: one around cost and one around operations.
You know, the first thing, I think, that has been nice to see over the last couple of years is people realizing that both the cloud and on-prem are cost-effective in different ways. And I think a little bit about the way that I think about owning a car. Owning a car is relatively cost-effective for me, and there are times when taking an Uber is relatively cost-effective. They're each cheap when you look at only one metric, though: look at what I pay per mile, and it's way more expensive to take an Uber; look at acquisition cost, and it's way more expensive to own a car, right? And so I think both of them provide value in my life, in the way that hybrid does today. But once you start to use both, then the operational part of your question comes in: how do I think about these two different worlds? And I think we believe that storage is actually one of the areas where these two worlds are totally different, and so there are a couple of things we've done to bridge the two together. First off, on the cost side, one of the things we realized was that people that are going to run large amounts of on-prem infrastructure increasingly want to do it in the cloud model. And so we introduced a new pricing model that we call ES2, the Evergreen Storage Service, which essentially allows you to subscribe to our storage, even in your own data center. And so you can have an opex experience in the cloud, you can have an opex experience on prem, and when you buy ES2, those licenses are transferable, so you can start on prem, move your storage to the cloud with Pure, and go back and forth: tons of flexibility. From the operational point of view, I think we're trying to get to the same experience as well, such that you have a single storage experience, from a manageability and automation point of view, across both. And I think that last word, automation, is key, because if you look at people who are really invested in cloud, it's all about automation.
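The car-versus-Uber framing above is really a break-even calculation, and the same arithmetic applies to buying storage outright versus subscribing to it. The figures below are invented purely to show the shape of the comparison; they are not Pure's pricing.

```python
def cheaper_option(capex: float, owner_monthly_opex: float,
                   subscription_monthly: float, months: int) -> str:
    """Total cost over a horizon: buy (big acquisition cost, small running
    cost) versus subscribe (no acquisition cost, higher monthly cost)."""
    buy = capex + owner_monthly_opex * months
    rent = subscription_monthly * months
    return f"buy={buy:,.0f} rent={rent:,.0f} -> " + ("buy" if buy < rent else "rent")

# Hypothetical figures: $500k array plus $2k/month to run it,
# versus an $18k/month subscription.
print(cheaper_option(500_000, 2_000, 18_000, months=12))  # short horizon favors renting
print(cheaper_option(500_000, 2_000, 18_000, months=60))  # long horizon favors buying
```

The point of the analogy is that neither answer is universally right: the metric (per-month cost versus acquisition cost) and the time horizon decide it.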
And one of the nice things that I think has made Pure so successful in on-prem cloud-style environments is the combination of simplicity and automation. You can't automate what isn't simple to begin with, so we started with simplicity. But as we've added rich APIs, we're really seeing them become the dominant way that people administer our storage. And as we've gone to the cloud, because it's the same software on both sides, literally the same integrations and the same API calls work transparently in both places. >> That's a great point. We've been reporting on SiliconANGLE and theCUBE for years that automation is key: you take manual tasks and automate them, and the value shifts. And you guys are in the storage business; you know this data is very valuable. You mentioned the car analogy; just take Uber. Uber is an app with web services on the back end. So when you start thinking about cloud, you hear APIs, you hear microservices. As more and more applications need the data, they're going to need it in real time, in some cases near real time, and they're going to need it at the right time. So the role of data becomes important, which makes storage more important. You automate the storage, take away the mundane tasks, and now the value shifts to making sure data is being presented properly. This is the renaissance of application development we're seeing right now. How do you guys attack that market? How do you enable it? How do you satisfy it? Because this is where the APIs can be connectors, and this is where the data can be valuable, whether it's an analytics app or an app like Uber that's just slinging APIs together for a service that then goes public. >> Yeah, I think the mindset around data is one of the biggest differences between the old world and the new world. If you think about the old world of applications,
you had monolithic databases that privately owned their own data stores, and the whole name of the game was delivering that as reliably as possible: locking it down, making it super reliable. If you look at the web-scale application, the idea is that an application is broken up into lots of little microservices, and those microservices somehow have to work together on data. So what does that mean at the data level? It's not the monolithic database anymore; it's got to be an open, shared environment. As a result, if you look at the web, in Amazon's case for example, the vast majority of applications are written on S3 object storage, which is inherently shared. And so I think one of the bigger, more interesting challenges right now is how you get data constructs to actually go both ways. If you want to take an on-prem app that's built around a database, you've got to figure out a way to move it to the cloud and run it reliably. On the flip side of the coin, if you want to build on web-scale tools and then be hybrid and run some of those things on-prem, well, you need an object store on-prem, and most people don't have that. And so this whole drive for compatibility, to make hybrid a reality, is forcing people on both sides of the wire to understand the other architecture and make sure they're compatible both ways. >> And throw more complexity into that equation: there are skills gaps. People know cloud-native, but on-prem is a different skill set. You guys have had announcements come out, so I want to ask you about your product announcements and your acquisitions. Going back over the past six months, what are the most notable product announcements and acquisitions you guys have done, and what do they mean for Pure and your customers? >> Yeah, absolutely.
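The shared-data model described above, where many small services coordinate through one shared object store instead of each privately owning a database, can be sketched with a toy in-memory store. The class and method names here are illustrative only, not the real S3 API:

```python
import json

# A toy in-memory object store: microservices coordinate through one
# shared bucket of objects instead of each privately owning a database.

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, body: bytes) -> None:
        self._objects[key] = body

    def get(self, key: str) -> bytes:
        return self._objects[key]

shared = ObjectStore()

def ingest_service(store: ObjectStore) -> None:
    # One service writes an object into the shared store...
    store.put("orders/1001.json", b'{"item": "disk", "qty": 2}')

def analytics_service(store: ObjectStore) -> int:
    # ...and a completely separate service reads it back.
    return json.loads(store.get("orders/1001.json"))["qty"]

ingest_service(shared)
print(analytics_service(shared))  # 2
```

Neither service knows about the other; the shared store is the only contract between them, which is the property the interview attributes to object storage in web-scale applications.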
So I'll just kind of walk through it. The first thing we announced was our new set of cloud data services, and this was in essence bringing our core software, which runs in our Purity operating environment, right into the cloud. We call that Cloud Block Store. And again, this is a lot of what I've been talking about: how you can take a tier-one block storage application on-prem and seamlessly move it to the cloud. Along that same timeline we also introduced something called PSO, the Pure Service Orchestrator. This is a tool set we built specifically for the containers world, for Kubernetes, so that in a container environment our storage can be completely automated. It's been really fun watching customers use it and seeing how different storage is in a container environment. We look at our call-home data in Pure1, and in our traditional on-prem environment the average array sees about one administrative task per day: make a volume, delete something, whatever. In a container environment, it's tens of thousands. It's just a much more fluid environment, and there's no way a storage admin is going to do something ten thousand times a day by hand. >> And that's where automation comes in. But what does that mean, this containerization? The clients are using containers to be more flexible, and they're deploying more. What's the insight of this container trend? >> You know, I think ultimately it's just a far more fluid environment. It's totally automated, and it's built on a world of shared data. So you need a shared, reliable data service that can power these containers. And then, back to your original question about product expansion: the next thing we announced, last year, was the acquisition of a company called StorReduce, which we've subsequently brought out as a product we call Object Engine.
And this is all about a new type of data moving into the cloud, which is backup data, and facilitating that backup process. In the past, people moved from tape backup to disk-based backup, and we saw two new inflection points here. Number one is the opportunity to use flash on-prem, so people have really fast recoveries, because in most environments now disk-based recoveries just aren't fast enough. Number two is using low-cost object storage in the cloud for retention. The combination of flash on-prem and object storage in the cloud can completely replace both disk and tape in the backup process. >> OK, on the competition: you guys came in with the vision of the all-flash data center. You now have cloud software that runs on Amazon and others; in other words, no hardware, just the block store. Great solution. How has the competition fallen behind while you guys catapulted into the lead and took share from other vendors? In public, some pundits and CEOs of tech companies predicted that Pure would never make it to escape velocity, but you achieved it, and now you're going to the next level. What is the important ability you have, and what's the inability of the competition? >> So, you know, I like to joke with folks that when we started the company, flash was almost an excuse; we just tried to build a better storage company. We went out and I talked to many, many customers, and I found in general they didn't just dislike their storage products, they didn't like the companies that sold them. And so we tried to look at that overall experience. We innovated around flash, of course: we used consumer flash and brought the price down with deduplication so people could actually afford to use it. But we also just looked at that ownership experience.
And when I talk to folks in the industry, I think now we might even be better known for our Evergreen approach than for flash. It's been neat to watch customers who are now two or three cycles into refreshing: they've seen a dramatic difference in a storage experience that you can essentially subscribe to and that keeps improving over many generations of technology churn, as opposed to that cycle of replacing arrays. >> Share a story of a customer that's been through those refresh cycles, from their first experience to what they experience now. What are some of those experiences like? Share some insight. >> Yeah, so one of the first customers that really turned us on to this at scale was a large telco provider. They were interesting: they run hundreds of tier-one arrays from competitors, and they do a three-year cycle. But as they really looked at the cost of that three-year cycle, they realized there were only eighteen months of usable life in those three years, because it took them nine months to get the data onto the array, and then, when they knew the end was coming, nine months to get the data off. Per array, it was costing them a million dollars in data migration costs alone, and you've wasted half the life of the array. Add that up over hundreds of arrays in your environment and you can quickly do the math. >> The total cost of ownership just gets out of control, right? >> And so as we brought in Evergreen, there was an immediate ROI. On the cost equation it was at parity with flash and disk anyway, but when you add in all those operational savings, the case is compelling. And as we started with Evergreen, we realized it was much more of a subscription model, where people subscribe to a service with us, we update and refresh the hardware over time, and it just keeps getting better.
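The telco math above works out as follows. The cycle lengths and per-array migration cost are the figures quoted in the story; the fleet size is an assumption standing in for "hundreds of arrays":

```python
# Worked version of the telco refresh-cycle math from the story: on a
# three-year cycle, migration windows eat half the array's usable life,
# and migration cost compounds across the fleet.

cycle_months = 36
migrate_on_months = 9       # loading data onto the new array
migrate_off_months = 9      # draining data before retirement

usable_months = cycle_months - migrate_on_months - migrate_off_months
usable_fraction = usable_months / cycle_months

migration_cost_per_array = 1_000_000  # dollars per refresh, as quoted
fleet_size = 300                      # "hundreds of arrays" (assumed)

print(usable_months)                          # 18
print(usable_fraction)                        # 0.5
print(migration_cost_per_array * fleet_size)  # 300000000
```

Half the array's life lost and hundreds of millions in migration spend per cycle is the cost a subscription-style refresh, with no forklift migration, is meant to eliminate.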
>> Sounds a lot like the cloud, right? And so really your strategy is to bring a common set of tools in there and render them as that kind of service. That's been key. >> Yeah, and another thing we did from day one was decide we're never going to build a piece of on-prem management software. Our management experience from day one was Pure1, which is our SaaS-based management platform. It started out as a call-home application, but now it's a very full-featured SaaS management experience. And that's also served us well as we go to the cloud, because when you want to manage on-prem and cloud together, you're able to do it from the cloud itself. >> Tell us about the application environment. You mentioned hybrid and multi-cloud earlier. There's a lot of pressure on IT to drive top-line revenue, not just cost reduction; the benefits you mentioned certainly get their attention, but changing the organization's value proposition to its customers is about the experience, whether app-driven or some other tech. This is now an imperative, and it's happening very fast; people call it modernization, a renaissance, all these things. How are you guys helping that piece of the puzzle? >> Yeah, I think ultimately, as most customers really get into the mindset that technology is their differentiation, speed of delivery and their developers become key. And so the modern CIO is much less a cost-cutting CIO today and much more an empowering CIO, one who can actually build the tools and bring them to bear so the organization runs faster. A lot of that is about unlocking consumption, and it's been fun to see some of the lessons of the cloud in terms of instant consumption, agility, and growth actually come to the mindset of how people think about on-prem as well.
And so a lot of what we've done is try to arm people on-prem with those same capabilities, so that they can easily deliver storage as a service to their own customers: folks can consume via API without having to call somebody to ask for storage, so things take seconds, not weeks of procurement. And now, as we bridge those models between on-prem and cloud, it becomes a single place where you can have that same experience to request storage wherever it may be in the organization. >> Infrastructure as code is really just pushing code, not from localhost or the machine, but to cloud or on-prem, and having it trickle all the way through. This is one of the focuses we're hearing in cloud-native conversations, with words like containers. We talked briefly about Kubernetes, which is really hot right now; service meshes, microservices, stateful data, stateless data. These are really hyped-up areas, but there's a lot of traction forcing people to take a look. How do you guys speak to the customer who says, hey Kix, we love all the Pure stuff, we're on our third generation as a customer, but I've got this looming trend I've got to understand and either operationalize or not: Kubernetes, service mesh, these kinds of cloud-native tools? How do you talk to that customer? What's the pitch, what's the value proposition? >> Yeah, I mean, your new Kubernetes environment is the last place you should consider legacy storage. All joking aside, we've been really positively impressed by how fast adoption has taken off around containers in general and Kubernetes in particular. It started out as a developer thing, and we first saw it in our own environment.
When we started to build our second product, FlashBlade, four or five years ago, the engineering team started with containers from day one. It was like: that's interesting. And so we started to >> see they're useful; you have containers and Kubernetes orchestration right away. >> So, you know, we just started to see that grow. We also started to see it more within analytics and AI. As we got into the AI arena and our broader push around going after big data and analytics, those tool chains in particular were very well set up to take advantage of containers, because they're much more modern; it's much more about fluidly creating a data pipeline. So it started in these key use cases, but I think it's at a point right now where every enterprise is considering it, and there's certainly an opportunity in the development environment. And despite all of that, the folks who tend to use these containers don't think about storage. If they go to the cloud and start to build applications, they're not thinking many layers down in the organization about what the storage that supports them looks like. And so a storage team's job, or an infrastructure team's job, is to provide that same experience to your container-centric consumers: they should just be able to orchestrate and build, and then storage should just happen underneath. >> Totally agree, and I think that's the success milestone: if you can have that conversation, you know you're winning. What they do care about, and we're hearing more of what you mentioned earlier, is the data pipeline. They care about data because applications will need it. Whether it's a retail app or whatever, it might need access to multiple data sources, not some siloed data warehouse with high latency; they need data in the app at the right moment. This has been a key discussion: real time.
I mean, this is the data problem, and it's been a hard problem. How do you guys look at that solution opportunity for your customers? >> I think one of the insights we had was that fundamentally folks needed infrastructure that can run not just one tool or another, but a whole bunch of them. You look at people building a data pipeline: they're stitching together six, eight, ten tools that exist today and another twenty that don't exist yet. That flexibility is key. A lot of the original thought in that space was to pick the right storage for this piece and the right storage for that piece, but as we introduced our FlashBlade product, we really positioned it as a data hub for these modern applications. Each of them requires something a little different, but the flexibility and scale of FlashBlade was able to provide everything those applications needed. We're now seeing another opportunity in that space with DAS and the traditional architecture. As we came out with NVMe over Fabrics within our FlashArray product line, we see this as a way to really take web-scale architecture on-prem. If you look within Google and Amazon and whatnot, they're not using hyperconverged; they're not using DAS disk inside the same chassis that happens to run applications. They have dedicated infrastructure for storage that's simply designed, with dedicated servers, connected with fast networking on demand. And so we're basically trying to bring that same architecture to the on-prem environment with NVMe over Fabrics, because NVMe over Fabrics can make shared storage feel like local disk. >> But this is the shift that's really going on here. This is a complete re-architecture of computing and storage resources. >> Absolutely, and I think the thing that's driving it is that need for consolidation. In the early days, I might have said, okay, I'm going to deploy,
I don't know, two hundred nodes of Hadoop, and I'll just design a server for Hadoop with the right amount of disk in it and put it over in those racks, and that will look like this; then I'll design something else for something else. Right now, people are looking to define a rack they can stamp out over and over and over again, and that rack needs to be flexible enough to deliver the right amount of storage to every application on demand, over and over. >> You know, one trend I want to get your reaction to is serverless, because it kind of points to that value proposition. Functions have been very popular; it's still early days for what functions are, but it's a telltale sign of where this is going, to your point around rethinking on-prem: not a radical, wholesale business-model change, but more of an operating change in how it's deployed and how it works with the cloud, because those two things working together make serverless very interesting. >> Yeah, absolutely. I mean, it's just a further form of abstraction from the underlying hardware. If you think about functions on demand, that's absolutely something that just needs a big, shared pool of storage and doesn't have persistent bindings to anything; it's built to get the storage it needs, do its task, write what it needs to, and get out of the way. Right? >> Well, VP of Strategy is a big role, and you guys have done a good job, so congratulations on being employee number six at Pure. How has the journey been? You've gone public, you're still growing, and you've been around for about ten years now; you're not really a small little startup anymore, and you're getting into bigger accounts. How has that journey been for you? >> It's been an amazing ride; that's why I'm still here, coming in every day excited to come to work. I think the thing that we're proudest of is that it still feels like a small company.
It still feels like we have as much aggression and as much excitement to go after the market every day as we always have; the energy is very, very strong. But on the flip side, it's now fun that we get to solve customer problems at a scale that we probably couldn't have imagined in the early days. And I would also say that right now it really feels like there's a next chapter opening up. The first chapter was delivering on all-flash, and we're not even done with that yet. But as we bring our software to the cloud and really port it natively, optimized for each of the clouds, it opens up our engineers to be creative in different ways. >> A generational shift is happening, and we're seeing it again: application modernization, hybrid multi-cloud, just some key pillars, but there's so much more opportunity to go after. I want your thoughts. You've had the luxury of working under two CEOs who are very senior veterans, Scott Dietzen and Charlie. What's it like working with both of them, and what's it like with Charlie now? What's the big mandate, what's the hill you guys are trying to climb? Share some of the vision under Charlie. >> Well, I'd say the thing that binds both Scott and Charlie together in DNA is that they're fundamentally both innovators. If you look at Pure, we're never going to be the low-cost leader, and we're not going to be the company that sells you everything, so we have to be the company that's most innovative in the spaces we play in. That's job number one at Pure, after reliability, so let's say it's job number two, but it's key. Both of our CEOs have shared that common DNA: they're fundamentally product innovators. And I think the fun thing about working for Charlie is that he's really thoughtful about how you run a company at very large scale:
how you manage the customer relationship so you never sacrifice that experience, which has been great for Pure, but ultimately also how you unlock people to run faster in a big organization. >> Charlie worked with John Chambers back in the day at Cisco, and Chambers said one of the key things about a CEO is picking the right wave at the right time. What is that wave for Pure? What are you guys riding that takes advantage of the work still to be done in the data center on the storage side? What's the big wave? >> So, you know, look, the first wave was flash. That was a great wave to be on, and by the way, it's not over. But we really see an enormous opportunity where the cloud infrastructure mentality comes on-prem, and we think that's going to finally be the thing that gets people out of the mindset of doing things the old way. You can fundamentally take the lessons learned over here and apply them to the other side of the hybrid cloud. Everybody talks about hybrid cloud, and all the thought process is about what happens in the cloud half of the hybrid; well, the on-prem half of the hybrid is just as important, and getting that to be truly cloud-like is a key focus of ours. >> And then again, microservices only help accelerate that, and you want modern storage, to your point, to make it work. Absolutely. Kix, thanks for spending the time and sharing the insights; I really appreciate it. It's a Cube Conversation here at Pure Storage headquarters, in the arcade room. We'll get the insights and share the data with you. I'm John Furrier; thanks for watching this Cube Conversation.