
Scott Buckles, IBM | Actifio Data Driven 2020


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Actifio Data Driven 2020, brought to you by Actifio.

>> Stu: Welcome back. I'm Stuart Miniman, and this is theCUBE's coverage of Actifio Data Driven 2020. We wish everybody could join us in Boston, but instead we're doing it online this year, of course, and we're really excited. We're going to be digging into the value of data and how DataOps and data scientists are leveraging data. Joining me on the program is Scott Buckles, the North American Business Executive for Database, Data Science, and DataOps with IBM. Scott, welcome to theCUBE.

>> Scott: Thanks, Stuart, thanks for having me. Great to see you.

>> Stu: Let's start with the Actifio-IBM partnership. Anyone that knows Actifio knows that the IBM partnership is really the oldest one they've had, whether it's hardware or software, and those joint solutions go together. So tell us about the partnership here in 2020.

>> Scott: Sure. It's been a fabulous partnership. In the DataOps world, where we are looking to help all of our customers gain efficiency and effectiveness in their data pipelines and get value out of their data, Actifio really complements a lot of the solutions we have very well. Everybody, from the top all the way through the engineering team, is a great team to work with. We're very, very fortunate to have them.

>> Stu: Are there any specific examples, or anonymized examples, that you can share about joint (indistinct)?

>> Scott: I'm going to stay safe and go with the anonymized side. We've had a lot of great wins, several significantly large wins, with clients that have been struggling with their data pipelines. And when I say data pipeline, I mean everything from understanding their data, to developing models, to doing the testing on those models (we can get into this in a minute), where those folks have really needed a solution and Actifio has stepped in and provided it. We've done that at several of the largest banks in the world, including one from a very recent merger down in the Southeast, where we were able to bring in the Actifio solution and address the customer's needs around how they were testing and how they were moving through the testing cycle. It was a very iterative, very sequential process, and they just weren't doing it fast enough; Actifio stepped in and helped us deliver it in a much more effective, much more efficient way. Especially when you go into a bank, or rather two banks that are merging, there is a lot of work to convert systems into one another and converge data. Not an easy task. That was one of the best wins we've had in recent months, and again, going back to the partnership, it was an awesome opportunity to work with them.

>> Stu: Well, Scott, as I teed up at the beginning of the conversation, you've got data science and DataOps. Help us understand how this isn't just a storage solution when you're talking about VDP. How does DataOps fit into this? Talk a little bit about some of the constituents inside your customers that are engaging with the solution.

>> Scott: Yeah. So we call it DataOps, and DataOps is both a methodology and a set of capabilities. The methodology is really about taking the best of how we transformed application development with DevOps and Agile development and applying it to data. Going back 20 years, everything was a waterfall approach, everything was very slow, and you had to wait a long time to find out whether the application you had developed succeeded or failed, and whether it was even the right application. With the advent of DevOps, continuous delivery, and Agile development methodologies, DataOps converges all of that and applies it to our data pipelines. When we look at the opportunity ahead of us, the world is exploding with data, and it's not just structured data anymore, it's unstructured data too. The question is how we take advantage of all the data we have so that we can make an impact on the business. Too often it's still a very slow process: data scientists and business analysts struggle to get the data into the right form so they can create a model, and then they have to go through a long process of figuring out whether that model they've built in Python or R is effective. DataOps is all about driving more speed and efficiency into that process and doing it in a much more effective manner, and we've had a lot of good success with it. So it's part methodology, which is really cool, applied to specific use cases in the data science world, and it's also part of how we build our solutions within IBM, so that we align with that methodology and take advantage of it, with AI and machine learning capabilities built in to deliver the speed our customers require. Because data science is great and AI is great, but you still have to have good data underneath, and you have to deliver it at speed.

>> Stu: Yeah, Scott, that's definitely a theme I heard loud and clear at IBM Think this year, where we did a lot of interviews with theCUBE: helping with the tools, helping with the processes, and, as you said, helping customers move fast. A big piece of IBM's strategy there is the Cloud Paks. My understanding is you've got an update with regards to VDP and Cloud Pak, so tell us about the new release here at the show.

>> Scott: Yeah. In our (indistinct) release that's coming up, we will be able to launch VDP directly from Cloud Pak, so that you can take advantage of the Actifio capabilities, which we call Virtual Data Pipeline, straight from within Cloud Pak. It's a native integration, and it's the first of many things to come in tying those two capabilities and those two solutions more closely together. We're excited about it, and we're looking forward to getting it into our customers' hands.

>> Stu: All right, and that's Cloud Pak for Data, if I have that correct, right?

>> Scott: That's Cloud Pak for Data, correct. Sorry, yes, I should have been more clear.

>> Stu: No, it's all right. We've been watching the different solutions that IBM is building out with the Cloud Paks, and of course data, as we said, is so important. Bring us inside the customers a little bit, if you could. What are the use cases, the problems you're helping your customers solve with these solutions?

>> Scott: Sure. There are three primary use cases. One is about accelerating the development process: taking data from its raw form, which may or may not be usable (in a lot of cases it's not), and getting it to a business-ready state, so that your data scientists, your business, and your data models can take advantage of it. That one is about speed. The second is about reducing storage costs. As data has grown exponentially, so have storage costs. We've been in the test data management world for a number of years now, and our ability to help customers reduce that storage footprint, which also ties back to the acceleration piece, is a big part of it. The third is about mitigating risk. With the data security challenges we've seen, customers are continuously looking for ways to reduce their exposure to somebody accessing and manipulating production data, especially sensitive data. By virtualizing that data, we almost fully mitigate the risk of somebody, either unintentionally or intentionally, altering that data and exposing a client.

>> Stu: Scott, I know IBM is speaking at the Data Driven event, and I read through some of the pieces being presented. It looks like it's really about accelerating customer outcomes and helping them be more productive. What are some of the key measurements, the KPIs, your customers use when they successfully deploy the solution?

>> Scott: When it comes to speed, it's really about how much we reduce the time of the project. Are we able to have a material impact on the amount of time it takes clients to get through a testing cycle? Are we taking them from months to days, or from weeks to hours? The second piece is storage cost: you're not necessarily going to reduce storage costs outright, but are you reducing the rate at which they grow? And the third piece is minimizing vulnerabilities: when you go through an internal or external audit of your data, understanding the number of exposures and showing a material reduction in those vulnerabilities.

>> Stu: Scott, last question for you. You talk about making data scientists more efficient and the like. What are you seeing organizationally? Have teams come together? Who owns the enablement to leverage some of the more modern technologies out there?

>> Scott: That's a great question, and it varies. The organizations we see having the most impact are the ones most open to bringing their data science as close to the business as possible: the ones integrating their data organizations, whether that's a CDO organization or wherever the data scientists sit, even if there is no CDO, and folding them into the business so they're an integral part of it rather than a standalone organization. The ones that weave them into the fabric of the business are the ones that get the most benefit and that we've seen have the most success thus far.

>> Stu: Well, Scott, absolutely. We know how important data is, and getting full value out of those data scientists is a critical initiative for customers. Thanks so much for joining us. Great to get the updates.

>> Scott: Oh, thank you for having me. Greatly appreciated.

>> Stu: Stay tuned for more coverage from Actifio Data Driven 2020. I'm Stuart Miniman, and thank you for watching theCUBE. (upbeat music)
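
To ground the copy-data idea the interview keeps returning to, here is a minimal toy sketch in Python of the copy-on-write pattern behind a virtual data pipeline: test environments share one read-only golden copy, each records only its own changes, and sensitive fields are masked at ingest. All class and field names here are illustrative assumptions, not Actifio's or IBM's actual API.

```python
# Toy illustration of copy data management: instead of duplicating a
# production dataset for every test environment, each "virtual clone"
# records only its own changes (copy-on-write) on top of a shared,
# read-only golden copy, and sensitive fields are masked at ingest.
# Names are illustrative assumptions, not Actifio's or IBM's real API.

import copy
import hashlib


def mask(value: str) -> str:
    """Replace a sensitive value with a stable, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]


class GoldenCopy:
    """A read-only, business-ready snapshot of production data."""

    def __init__(self, rows, sensitive_fields):
        # Mask sensitive columns once, up front, so no test
        # environment ever sees the real values.
        self.rows = {
            row["id"]: {
                k: (mask(v) if k in sensitive_fields else v)
                for k, v in row.items()
            }
            for row in rows
        }


class VirtualClone:
    """A writable view over the golden copy; stores only deltas."""

    def __init__(self, golden: GoldenCopy):
        self.golden = golden
        self.deltas = {}  # row id -> locally modified row

    def read(self, row_id):
        return self.deltas.get(row_id, self.golden.rows[row_id])

    def write(self, row_id, **changes):
        row = copy.deepcopy(self.read(row_id))
        row.update(changes)
        self.deltas[row_id] = row  # the golden copy is never touched


production = [
    {"id": 1, "name": "Alice", "ssn": "123-45-6789", "balance": 100},
    {"id": 2, "name": "Bob",   "ssn": "987-65-4321", "balance": 250},
]

golden = GoldenCopy(production, sensitive_fields={"ssn"})

# Two test teams get independent "copies" at near-zero storage cost.
team_a, team_b = VirtualClone(golden), VirtualClone(golden)
team_a.write(1, balance=0)

print(team_a.read(1)["balance"])  # 0    -- team A's change
print(team_b.read(1)["balance"])  # 100  -- team B is unaffected
print(team_b.read(1)["ssn"])      # masked token, not the real SSN
```

Running the sketch shows that team A's write never reaches team B's view or the golden copy, which is the property that lets many test environments share one physical copy, while the masking step speaks to the risk-mitigation use case Scott describes.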

Published Date: Sep 16, 2020



Actifio Analysis | Actifio Data Driven 2020


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Actifio Data Driven 2020, brought to you by Actifio.

>> Stu: Hi, and welcome to theCUBE's coverage of Actifio Data Driven 2020. I'm Stu Miniman. My co-host for this event is Dave Vellante, but joining me to help kick off this discussion is David Floyer, the co-founder and chief technology officer of Wikibon, of course the research arm of SiliconANGLE Media, which also includes theCUBE. David, great to see you, thanks so much for joining us.

>> David: Great to see you, Stu.

>> Stu: All right, so we've got a really nice lineup. Last year Dave and I were in Boston with the Actifio team, and they had a really good lineup: analysts, thought leaders, and of course lots of users, and we love to talk to those users. You and I are quite familiar with Actifio, really the company that created copy data management as a category and a solution. So why don't we start there, David. What's the importance of copy data management here in 2020, many years after Actifio created it?

>> David: Well, this year has really amplified the importance of copy data management: being able to manage across different locations and different clouds, manage the copies, and manage the reuse of data in different places. COVID has really emphasized the importance, for example, of putting backup onto a cloud, because on many occasions it's not going to be possible to get into your own data center, or you're sharing a data center. So automation and the use of clouds, multiple clouds, has become of supreme importance since COVID started, and that's how it's going to be from here on in. That's not going to change.

>> Stu: Yeah, David, absolutely. We've said for many years that when you adopt cloud, you still need to think about data protection and security. Those aren't just covered because you have lovely object storage, or because it spreads things out among different cloud regions. And even this year, you brought up COVID, we've been having so many conversations with companies that are accelerating, or whose new groups are diving in, and we need to make sure they take the proper precautions, so disaster recovery and backup are so important. Maybe flesh that out a little bit for us if you would. We've been looking at how people should be building hybrid and multi-cloud architectures, and of course data is the critical component we look at there. What should people be looking at?

>> David: Well, absolutely, if you're going to have a multi-cloud strategy, there are several things that are really important. You have to be able to operate across each cloud natively, in the cloud; it's not good enough to just be an appendage, if you like. Equally important, you have to make sure you're taking advantage of the characteristics of the cloud, in particular object storage. Backup has always gone to object storage, but object storage itself is not that fantastic if you're trying to recover something from a lot of different objects, unless you put an architecture around it, unless you make it such that you can take all the workloads and address them in the cloud itself. And what's very interesting is that there are two fundamental philosophies of moving to the cloud. One is that you migrate everything: you convert all of your databases to a database that operates in the cloud you're going to. The other says that type of lift and shift is not good enough; what you want is to keep using the same databases and the same applications you're using today, avoid the enormous cost of moving everything, and then operate on those databases using cloud principles and the cloud object store, with the same level of performance.

>> Stu: Yeah, absolutely, David. I'm looking forward to it; Dave's got Ash, the CEO of Actifio, on today, and tomorrow I'll be talking to David Chang, the co-founder, about the product, to really understand how Actifio is building an architecture that meets what you were just describing. And David, these are things I've heard you talk about for many years. Migrations, obviously, are something anybody in IT dreads. I used to say in the storage world that "upgrade" came with the four-letter word "migration," because you had to do one to get the other, and databases are of critical importance. One of the other discussions I'll have is with IBM, which has had a long partnership with Actifio but is also getting involved with data usage. So maybe expound on that a little. In the early days, I looked at copy data management as a financial savings: hey, we've got way too many copies out there, how can we use them better rather than just buying lots of big capacity from the storage vendors, whether hard disk or flash? So how are we actually unlocking the value of data in today's world?

>> David: There are two aspects to that. One is that you want the original data wherever possible; you want to be able to access that data as quickly as possible. If you have, for example, a system of record, you want to be able to bring it right up to one day before, not wait a week for it. Copy management is essential to accessing that data, the same data for everybody, and to knowing from a compliance point of view that you have the right data. That's the first stage. Then, from a development point of view, you want the flexibility of using real-time data whenever you can. You want to be able to access any data you want from anywhere, know that it's the correct data, and move your business processes from asynchronous to as synchronous as you can. You can only do that with automation, through real-time data management.

>> Stu: Yeah, absolutely, David, and it's even more pertinent right now as work from home becomes work from anywhere. It's not just, hey, I can get into the data warehouse and know I have a low-latency connection when I'm sitting on the corporate network. Developers are dispersed, and people need to be able to access the data. Talk a little bit about the data pipeline, a discussion we've been hearing at the CDO events we've gone to. How do Actifio and the industry as a whole streamline that data pipeline you started talking about?

>> David: Yeah, that's absolutely essential. You have to have processes and procedures that identify the data and where it's going to go, and have essentially a managed data plane that takes it from where it is to where it needs to go, sharing the metadata across that fabric. Those are the ways you build a consistent data pipeline, where people know the provenance of the data. The fewer copies you have, the closer you get to a single copy of record, a single version of the truth, the less complicated the systems become; and even more, the systems between the systems, the human interaction required to manage that data, go down. It makes development so much easier. So a data pipeline is absolutely essential; it's part of that data plane and part of the overall architecture that has to be there. We've lived in silos for so long, and getting out of silos is not easy at all; you've got to have the right tools to do it.

>> Stu: Yeah. The keynote speaker Actifio has for the event is Gene Kim, somebody we've had on theCUBE a few times, and we're excited to have him back at this event. David, I read his first fiction book, I should say, since he's also written many non-fiction books. The Phoenix Project was really the go-to book for understanding DevOps; I've recommended it to so many friends and people in the industry. His new one, The Unicorn Project, is about software development, and I made sure to read it ahead of this event (I didn't get to earlier this year because there was just no travel). The lesson that called out to me was that moving faster with modern tools and breaking through silos was all well and good, but the real turning point for the company in the book was enabling the use of data, and, as you said, in real time: not looking historically, but being able to react fast. Without giving away the secrets of the book, a retail organization that could trial things could update inventory in real time, and everybody in the company got access to that: the product people, the marketing people, the field people, all accessing a single source of truth fed throughout the organization. It invigorated the company's ability to react and move fast, which really is the clarion call for business today. So, David, any final word from you? We've been beating the drum for years that data is critically important.

>> David: Take that specific example: if you can take all of that data and start updating pricing according to it, you've suddenly made repricing a dynamic event, one that responds to the customer and their characteristics, good or bad, and to the availability and pipeline of products. If you understand all of that, your ability to increase revenue by repricing more quickly, automatically, becomes amazingly effective.

>> Stu: Yeah, absolutely. I remember back in the early days of Hadoop, it was about making an ad better to increase click rates, but the promise of unlocking data today is to really understand and customize for the environment. Some of it is maximizing profitability: certain clients will pay for premium products, while others need the value option, but when you understand the data, you understand the customer and the portfolio of solutions they need. Data can be that key enabler. All right, David Floyer, thank you so much for helping us kick off our coverage. I want to tell everybody to make sure to tune in for the rest of it, with Dave Vellante and myself going through the interviews, live with Actifio as well as on demand on thecube.net. For David Floyer and Dave Vellante, I'm Stu Miniman. Thank you for joining us for Actifio Data Driven, and thank you for watching theCUBE. [Music]
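
David's caveat that object storage "is not that fantastic if you're trying to just recover something from a lot of different objects unless you put an architecture around that" boils down to making individual items addressable at restore time. Below is a minimal sketch of one such architecture against an S3-compatible store via boto3: each backup writes a manifest mapping logical items to object keys and checksums, so a restore fetches exactly one object instead of scanning them all. The bucket name and key layout are assumptions for illustration, not how Actifio implements this.

```python
# Minimal sketch of "an architecture around object storage" for backup:
# write each item as its own object plus a small manifest for the set,
# so a single item can be restored and verified without scanning the
# bucket. Bucket/key names are illustrative; chunking, retries, and
# error handling are omitted for brevity.

import hashlib
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical bucket


def backup(items: dict[str, bytes], backup_id: str) -> None:
    """Write each item as its own object, then a manifest for the set."""
    manifest = {}
    for name, data in items.items():
        key = f"backups/{backup_id}/{name}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=data)
        manifest[name] = {
            "key": key,
            "sha256": hashlib.sha256(data).hexdigest(),
        }
    # The manifest is what makes the backup addressable on restore.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"backups/{backup_id}/manifest.json",
        Body=json.dumps(manifest).encode(),
    )


def restore_one(backup_id: str, name: str) -> bytes:
    """Recover a single item by consulting the manifest first."""
    manifest_obj = s3.get_object(
        Bucket=BUCKET, Key=f"backups/{backup_id}/manifest.json"
    )
    manifest = json.loads(manifest_obj["Body"].read())
    entry = manifest[name]
    data = s3.get_object(Bucket=BUCKET, Key=entry["key"])["Body"].read()
    if hashlib.sha256(data).hexdigest() != entry["sha256"]:
        raise ValueError(f"checksum mismatch restoring {name}")
    return data
```

The checksum in the manifest also gives a cheap integrity check on restore, which matters for the compliance point David raises about knowing you have the right data.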

Published Date: Sep 15, 2020

